Note: This page contains sample records for the topic "quantified results show from". While these samples are representative, they are neither comprehensive nor the most current set. We encourage you to perform a real-time search to obtain the most current and comprehensive results.
Last update: November 12, 2013.

Btu accounting: Showing results  

Microsoft Academic Search

In the preceding article in this series last month, the author showed how to calculate the energy consumed to make a pound of product. To realize a payoff, however, the results must be presented in graphs or tables that clearly display what has happened. They must call attention to plant performance and ultimately lead to more efficient use of energy.
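As a rough illustration of the calculation the article series describes, the sketch below totals fuel and purchased electricity on a common Btu basis and divides by production. The function name, input units, and figures are hypothetical; 3,412 Btu per kWh is the standard conversion for electricity.

```python
def btu_per_pound(fuel_btu, electricity_kwh, pounds_produced):
    """Specific energy use: total plant energy per pound of product.

    Purchased electricity is converted at roughly 3,412 Btu per kWh so
    that fuel and electricity can be summed on a single Btu basis.
    """
    total_btu = fuel_btu + electricity_kwh * 3412
    return total_btu / pounds_produced

# Hypothetical month: 1.2 million Btu of fuel, 150 kWh of electricity,
# and 800 lb of product.
print(f"{btu_per_pound(1_200_000, 150, 800):,.0f} Btu/lb")  # 2,140 Btu/lb
```

Plotting such monthly values over time is the kind of graph or table the article argues is needed to make plant performance visible.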



Quantified security is a weak hypothesis: a critical survey of results and assumptions  

Microsoft Academic Search

This paper critically surveys previous work on quantitative representation and analysis of security. Such quantified security has been presented as a general approach to precisely assess and control security. We classify a significant part of the work between 1981 and 2008 with respect to security perspective, target of quantification, underlying assumptions and type of validation. The result shows how the

Vilhelm Verendel



Emerging Trends in Contextual Learning Show Positive Results for Students.  

ERIC Educational Resources Information Center

This issue focuses on contextual learning (CL), in which students master rigorous academic content in real-world or work-based learning experiences. "Emerging Trends in CL Show Positive Results for Students" discusses CL as an important strategy for improving student achievement. It describes: how CL raises the bar for all students, challenging…

WorkAmerica, 2001



Breast vibro-acoustography: initial results show promise.  


ABSTRACT: INTRODUCTION: Vibro-acoustography (VA) is a recently developed imaging modality that is sensitive to the dynamic characteristics of tissue. It detects low-frequency harmonic vibrations in tissue that are induced by the radiation force of ultrasound. Here, we have investigated applications of VA for in vivo breast imaging. METHODS: A recently developed combined mammography-VA system for in vivo breast imaging was tested on female volunteers, aged 25 years or older, with suspected breast lesions on their clinical examination. After mammography, a set of VA scans was acquired by the experimental device. In a masked assessment, VA images were evaluated independently by 3 reviewers who identified mass lesions and calcifications. The diagnostic accuracy of this imaging method was determined by comparing the reviewers' responses with clinical data. RESULTS: We collected images from 57 participants: 7 were used for training and 48 for evaluation of diagnostic accuracy (images from 2 participants were excluded because of unexpected imaging artifacts). In total, 16 malignant and 32 benign lesions were examined. Specificity for diagnostic accuracy was 94% or higher for all 3 reviewers, but sensitivity varied (69% to 100%). All reviewers were able to detect 97% of masses, but sensitivity for detection of calcification was lower (≤ 72% for all reviewers). CONCLUSIONS: VA can be used to detect various breast abnormalities, including calcifications and benign and malignant masses, with relatively high specificity. VA technology may lead to a new clinical tool for breast imaging applications. PMID:23021305
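The diagnostic-accuracy figures quoted above reduce to the standard sensitivity and specificity formulas. The sketch below shows the arithmetic with hypothetical per-reviewer counts chosen only to be consistent with the reported 16 malignant and 32 benign lesions.

```python
def sensitivity(true_pos, false_neg):
    """Fraction of malignant lesions the reviewer correctly flagged."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of benign lesions the reviewer correctly cleared."""
    return true_neg / (true_neg + false_pos)

# Hypothetical reviewer: flags 11 of the 16 malignant lesions and
# clears 30 of the 32 benign ones.
print(f"sensitivity = {sensitivity(11, 5):.0%}")  # 69%
print(f"specificity = {specificity(30, 2):.0%}")  # 94%
```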

Alizad, Azra; Whaley, Dana H; Urban, Matthew W; Carter, Rickey E; Kinnick, Randall R; Greenleaf, James F; Fatemi, Mostafa



Quantifying Fine Root Carbon Inputs To Soil: Results From Combining Radiocarbon And Traditional Methodologies.  

NASA Astrophysics Data System (ADS)

Estimates of high belowground net primary productivity (50% or more) in forest ecosystems are often based on assumptions that almost all fine roots (< 2 mm in diameter) live and die within one year. Recent radiocarbon (14C) measurements of fine root cellulose in three eastern temperate forests of the United States show that at least a portion of fine roots are living for more than 8 years (Gaudinski et al. 2001) and that fine root lifespans likely vary as a function of both diameter and position on the root branch system. New data from investigations under way in several different temperate forests further support the idea of large variations in root lifespans with radiocarbon-derived ages ranging from approximately one year to several years. In forests where both mini-rhizotron and 14C lifespan estimates have been made, the two techniques agree well when the 14C sampling is made on the same types of roots viewed by mini-rhizotron cameras (i.e. first and second order roots; the most distal and newest roots on the root branching system), and the 14C signature of new root growth is known. We have quantified the signature of new tree roots by taking advantage of locally-elevated 14C at Oak Ridge Tennessee, which shows that carbon making up new roots was photosynthesized approximately 1.5 years prior to new root growth. Position on the root branching system shows a correlation with age, with ages up to 7 years for 4th order roots of red maple. The method by which roots are sampled also affects the 14C-estimated age, with total fine root population, sampled via soil cores, showing longer lifespans relative to roots sampled by position on the root branch system (when similar diameter classes are compared). Overall, the implication of our studies is that assumptions of turnover times of 1 year result in underestimates of the true lifespan of a large portion of fine root biomass in temperate forests. 
This suggests that future calculations of belowground net primary productivity should take variation in fine root lifespan into account. Reference: Gaudinski JB, Trumbore SE, Davidson EA, Cook A, Richter D (2001) The age of fine-root carbon in three forests of the eastern United States measured by radiocarbon, Oecologia 129:420-429.

Gaudinski, J. B.; Trumbore, S. E.; Dawson, T.; Torn, M.; Pregitzer, K.; Joslin, J. D.



The Paris MEGAPOLI campaign to better quantify organic aerosol formation in a large agglomeration: first results  

NASA Astrophysics Data System (ADS)

Within the FP7 MEGAPOLI project, two intensive field campaigns have been conducted in the Greater Paris region during July 2009 and January/February 2010. The major aim was to quantify sources of primary and secondary aerosol, and the interaction with gaseous precursors, within a large agglomeration and in its plume. Greater Paris has been chosen for such a campaign because it is a major and dense pollution source (more than 10 million inhabitants), surrounded by rural areas and relatively flat terrain. A particular focus is put on organic carbon, for which secondary formation, but also primary emissions, are still not well quantified. Detailed aerosol and gaseous precursor measurements have been conducted at an urban and two sub-urban sites, from five mobile platforms and from the French ATR-42 research aircraft (for plume characterisation). State-of-the-art instrumentation has allowed determination of aerosol chemical composition, either with very high frequency (several minutes to half an hour) or with large chemical detail (several dozen organic compounds from filter samples). In addition, the size distribution and the optical, hygroscopic, and mixing properties have been determined in order to relate the aerosol chemical composition to its potential radiative and climate impact in the urban region and its plume. Gas-phase measurements have focussed especially on detailed VOC measurements in order to relate SOA build-up to gaseous precursor species abundance. A network of backscatter lidars at urban and rural sites and on a mobile platform gives access to the aerosol vertical distribution in the region and to variations of the boundary layer height at the urban/rural interface. Meteorological parameters and especially wind profile measurements allow interpretation of transport processes in the region.
In this paper, the campaign set-up and objectives, the meteorological and general pollution conditions observed during the field experiments, and a first overview of the measurement results will be given. Selected first results obtained during the campaign will be highlighted. For instance, from airborne primary pollutant measurements it appeared that the pollution plume was still well defined at more than one hundred kilometres downwind from the agglomeration. This will give a "safe" framework for evaluating secondary organic aerosol build-up in the plume. Significant new particle formation events were observed in the area during the whole month of the campaign. These events were assisted by the relatively low particulate matter concentration levels and resulting low surface area during most of July 2009. Preliminary attribution of organic aerosol (OA) from urban and peri-urban aerosol mass spectrometer (AMS) measurements during the summer campaign shows a large fraction of oxidised organic aerosol (OOA), comprising both chemically processed (oxidized) primary organic aerosol and classical secondary organic aerosol (from aromatic and biogenic VOC precursors), and a smaller fraction of unoxidised organic aerosol (HOA) of primary origin. Another aspect is the water solubility of OA, available from PILS-TOC measurements. At the urban LHVP site, about half of the OA is water soluble, probably corresponding to classical secondary organic aerosol, and the other half is water insoluble, probably corresponding to primary and chemically processed primary OA. First attempts at source attribution of primary OA will also be presented. Finally, the comprehensive data set obtained during the campaign will be used for a first evaluation of regional chemistry-transport model simulations.

Beekmann, Matthias; Baltensperger, Urs; Sciare, Jean; Gros, Valérie; Borbon, Agnes; Baklanov, Alexander; Lawrence, Mark; Pandis, Spyros



Quantifying the Variability in Damage of Structures as a Result of Geohazards  

NASA Astrophysics Data System (ADS)

Uncertainty is ever present in catastrophe modelling and has recently become a popular topic of discussion in insurance media. Each element of a catastrophe model has associated uncertainties whether they be aleatory, epistemic or other. One method of quantifying the uncertainty specific to each peril is to estimate the variation in damage for a given intensity of peril. For example, the proportion of total cost to repair a structure resulting from an earthquake in the regions of the affected area with peak ground acceleration of 0.65g may range from 10% to 100%. This variation in damage for a given intensity needs to be quantified by catastrophe models. Using insurance claims data, we investigate how damage varies for a given peril (e.g. earthquake, tropical cyclone, inland flood) as a function of peril intensity. Probability distributions (including those with a fat tail, i.e. with large probability of high damage) are fitted to the claims data to test a number of perils specific hypotheses, for example that a very large earthquake will cause less variation in losses than a mid-sized earthquake. We also compare the relationship between damage variability and peril intensity for a number of different geohazards. For example, we compare the uncertainty bands for large earthquakes with large hurricanes in an attempt to assess whether loss estimates are more uncertain for hurricanes say, compared to earthquakes. The results of this study represent advances in the appreciation of uncertainty in catastrophe models and of how losses to a notional portfolio and notional event could vary according to the empirical probability distributions found.
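The fitting step described above can be sketched as fitting a fat-tailed candidate distribution to damage ratios at a fixed hazard intensity and summarising the spread with quantiles. The stdlib sketch below uses synthetic lognormal data in place of real insurance claims; all names and figures are hypothetical.

```python
import math
import random
import statistics

random.seed(1)

# Hypothetical damage ratios (repair cost / replacement value) for claims
# at a single hazard intensity, e.g. PGA of about 0.65 g.
damage = [min(1.0, random.lognormvariate(math.log(0.3), 0.6))
          for _ in range(500)]

# Maximum-likelihood fit of a lognormal (a fat-tailed candidate):
# fit a normal distribution to the log-transformed damage ratios.
logs = [math.log(d) for d in damage]
mu, sigma = statistics.fmean(logs), statistics.pstdev(logs)

# Summarise the variability in damage at this intensity via deciles.
deciles = statistics.quantiles(damage, n=10)
print(f"fitted median = {math.exp(mu):.2f}, log-sd = {sigma:.2f}")
print(f"10th-90th percentile damage ratio: {deciles[0]:.2f} to {deciles[-1]:.2f}")
```

Comparing such fitted spreads across perils (earthquake vs. hurricane) is one way to express the empirical uncertainty bands the abstract mentions.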

Latchman, S.; Simic, M.



Quantifying the effects of interventions for movement disorders resulting from cerebral palsy.  


The purpose of this article is to review a variety of tests and measures that are useful in documenting and quantifying the outcomes of intervention for persons with cerebral palsy. The topics included are (1) a discussion of the need for meaningful outcome measures, (2) a model of the disabling process for classifying measures used to assess several different dimensions of disablement, and (3) a review of selected tests and measures categorized according to the model described. PMID:8959463

Campbell, S K



Quantifying viruses and bacteria in wastewater—Results, interpretation methods, and quality control  

USGS Publications Warehouse

Membrane bioreactors (MBR), used for wastewater treatment in Ohio and elsewhere in the United States, have pore sizes small enough to theoretically reduce concentrations of protozoa and bacteria, but not viruses. Sampling for viruses in wastewater is seldom done and not required. Instead, the bacterial indicators Escherichia coli (E. coli) and fecal coliforms are the required microbial measures of effluents for wastewater-discharge permits. Information is needed on the effectiveness of MBRs in removing human enteric viruses from wastewaters, particularly as compared to conventional wastewater treatment before and after disinfection. A total of 73 regular and 28 quality-control (QC) samples were collected at three MBR and two conventional wastewater plants in Ohio during 23 regular and 3 QC sampling trips in 2008-10. Samples were collected at various stages in the treatment processes and analyzed for bacterial indicators E. coli, fecal coliforms, and enterococci by membrane filtration; somatic and F-specific coliphage by the single agar layer (SAL) method; adenovirus, enterovirus, norovirus GI and GII, rotavirus, and hepatitis A virus by molecular methods; and viruses by cell culture. While addressing the main objective of the study-comparing removal of viruses and bacterial indicators in MBR and conventional plants-it was realized that work was needed to identify data analysis and quantification methods for interpreting enteric virus and QC data. Therefore, methods for quantifying viruses, qualifying results, and applying QC data to interpretations are described in this report. During each regular sampling trip, samples were collected (1) before conventional or MBR treatment (post-preliminary), (2) after secondary or MBR treatment (post-secondary or post-MBR), (3) after tertiary treatment (one conventional plant only), and (4) after disinfection (post-disinfection). 
Glass-wool fiber filtration was used to concentrate enteric viruses from large volumes, and small volume grab samples were collected for direct-plating analyses for bacterial indicators and coliphage. After filtration, the viruses were eluted from the filter and further concentrated. The final concentrated sample volume (FCSV) was used for enteric virus analysis by use of two methods-cell culture and a molecular method, polymerase chain reaction (PCR). Quantitative PCR (qPCR) for DNA viruses and quantitative reverse-transcriptase PCR (qRT-PCR) for RNA viruses were used in this study. To support data interpretations, the assay limit of detection (ALOD) was set for each virus assay and used to determine sample reporting limits (SRLs). For qPCR and qRT-PCR the ALOD was an estimated value because it was not established according to established method detection limit procedures. The SRLs were different for each sample because effective sample volumes (the volume of the original sample that was actually used in each analysis) were different for each sample. Effective sample volumes were much less than the original sample volumes because of reductions from processing steps and (or) from when dilutions were made to minimize the effects from PCR-inhibiting substances. Codes were used to further qualify the virus data and indicate the level of uncertainty associated with each measurement. Quality-control samples were used to support data interpretations. Field and laboratory blanks for bacteria, coliphage, and enteric viruses were all below detection, indicating that it was unlikely that samples were contaminated from equipment or processing procedures. The absolute value log differences (AVLDs) between concurrent replicate pairs were calculated to identify the variability associated with each measurement. 
For bacterial indicators and coliphage, the AVLD results indicated that concentrations <10 colony-forming units or plaque-forming units per 100 mL can differ between replicates by as much as 1 log, whereas higher concentrations can differ by as much as 0.3 log. The AVLD results for viruses indicated that differences between replicates can be as great as 1.2 log g
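The absolute value log difference (AVLD) used above for replicate agreement is simply the gap between the two replicate concentrations on a log10 scale. A minimal sketch, with hypothetical replicate pairs:

```python
import math

def avld(conc_a, conc_b):
    """Absolute value log difference between a concurrent replicate pair,
    |log10(a) - log10(b)|, for concentrations in e.g. CFU or PFU per 100 mL."""
    return abs(math.log10(conc_a) - math.log10(conc_b))

# Hypothetical pairs: the report found that low concentrations
# (<10 per 100 mL) may differ by up to about 1 log between replicates,
# while higher concentrations agreed within about 0.3 log.
print(f"low pair:  {avld(4, 9):.2f} log")
print(f"high pair: {avld(850, 1400):.2f} log")
```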

Francy, Donna S.; Stelzer, Erin A.; Bushon, Rebecca N.; Brady, Amie M. G.; Mailot, Brian E.; Spencer, Susan K.; Borchardt, Mark A.; Elber, Ashley G.; Riddell, Kimberly R.; Gellner, Terry M.



Soils-Based Rapid Assessment for Quantifying Changes in Salt Marsh Condition as a Result of Hydrologic Alteration  

Microsoft Academic Search

Wetland condition can be severely altered as a result of a change in hydrology. In this study, we examined several soil-based methods to quantify and assess changes in salt marsh condition as a result of tidal restriction. Soil properties were compared between two tidally restricted and two (paired) unrestricted salt marshes. Organic horizon morphology provided a qualitative metric of marsh

Timothy M. Twohig; Mark H. Stolt


Gun Shows and Gun Violence: Fatally Flawed Study Yields Misleading Results  

PubMed Central

A widely publicized but unpublished study of the relationship between gun shows and gun violence is being cited in debates about the regulation of gun shows and gun commerce. We believe the study is fatally flawed. A working paper entitled “The Effect of Gun Shows on Gun-Related Deaths: Evidence from California and Texas” outlined this study, which found no association between gun shows and gun-related deaths. We believe the study reflects a limited understanding of gun shows and gun markets and is not statistically powered to detect even an implausibly large effect of gun shows on gun violence. In addition, the research contains serious ascertainment and classification errors, produces results that are sensitive to minor specification changes in key variables and in some cases have no face validity, and is contradicted by 1 of its own authors’ prior research. The study should not be used as evidence in formulating gun policy.

Hemenway, David; Webster, Daniel; Pierce, Glenn; Braga, Anthony A.



Quantifying Fine Root Carbon Inputs To Soil: Results From Combining Radiocarbon And Traditional Methodologies  

Microsoft Academic Search

Estimates of high belowground net primary productivity (50% or more) in forest ecosystems are often based on assumptions that almost all fine roots (< 2 mm in diameter) live and die within one year. Recent radiocarbon (14C) measurements of fine root cellulose in three eastern temperate forests of the United States show that at least a portion of fine roots

J. B. Gaudinski; S. E. Trumbore; T. Dawson; M. Torn; K. Pregitzer; J. D. Joslin



Testing Delays Resulting in Increased Identification Accuracy in Line-Ups and Show-Ups.  

ERIC Educational Resources Information Center

Investigated time delays (immediate, two-three days, one week) between viewing a staged theft and attempting an eyewitness identification. Compared lineups to one-person showups in a laboratory analogue involving 412 subjects. Results show that across all time delays, participants maintained a higher identification accuracy with the showup…

Dekle, Dawn J.



Mitochondrial DNA transmitted from sperm in the blue mussel Mytilus galloprovincialis showing doubly uniparental inheritance of mitochondria, quantified by real-time PCR.  


Doubly uniparental inheritance (DUI) of mitochondrial DNA transmission to progeny has been reported in the mussel, Mytilus. In DUI, males have both paternally (M type) and maternally (F type) transmitted mitochondrial DNA (mtDNA), but females have only the F type. To estimate how much M type mtDNA enters the egg with sperm in the DUI system, ratios of M type to F type mtDNA were measured before and after fertilization. M type mtDNA content in eggs increased markedly after fertilization. Similar patterns in M type content changes after fertilization were observed in crosses using the same males. To compare mtDNA quantities, we subsequently measured the ratios of mtDNA to the 28S ribosomal RNA gene (an endogenous control sequence) in sperm or unfertilized eggs using a real-time polymerase chain reaction (PCR) assay. F type content in unfertilized eggs was greater than the M type in sperm by about 1000-fold on average. M type content in spermatozoa was greater than in unfertilized egg, but their distribution overlapped. These results may explain the post-fertilization changes in zygotic M type content. We previously demonstrated that paternal and maternal M type mtDNAs are transmitted to offspring, and hypothesized that the paternal M type contributed to M type transmission to the next generation more than the maternal type did. These quantitative data on M and F type mtDNA in sperm and eggs provide further support for that hypothesis. PMID:20608851
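The M-to-F ratios above come from real-time PCR against the 28S rRNA reference gene. A common way to turn cycle-threshold (Ct) values into such relative quantities is the efficiency-based E^-ΔCt model sketched below; the abstract does not state which quantification model was used, and all Ct values here are hypothetical.

```python
def relative_quantity(ct_target, ct_reference, efficiency=2.0):
    """Target abundance relative to a reference gene under the standard
    E ** -(Ct_target - Ct_reference) model; efficiency=2.0 assumes a
    perfect doubling of product each PCR cycle."""
    return efficiency ** -(ct_target - ct_reference)

# Hypothetical Ct values against the 28S rRNA reference (Ct = 20):
f_in_egg = relative_quantity(ct_target=18.0, ct_reference=20.0)    # 4x reference
m_in_sperm = relative_quantity(ct_target=28.0, ct_reference=20.0)  # 1/256 of reference
print(f"F-in-egg / M-in-sperm = {f_in_egg / m_in_sperm:.0f}")      # 1024, ~1000-fold
```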

Sano, Natsumi; Obata, Mayu; Komaru, Akira



NIH trial shows promising results in treating a lymphoma in young people

Patients with a type of cancer known as primary mediastinal B-cell lymphoma who received infusions of chemotherapy, but who did not have radiation therapy to an area of the thorax known as the mediastinum, had excellent outcomes, according to clinical trial results.


Quantifying saltmarsh vegetation and its effect on wave height dissipation: Results from a UK East coast saltmarsh  

NASA Astrophysics Data System (ADS)

The degree to which incident wind waves are attenuated over intertidal surfaces is critical to the development of coastal wetlands, which are, amongst other processes, affected by the delivery, erosion, and/or resuspension of sediment due to wave action. Knowledge on wave attenuation over saltmarsh surfaces is also essential for accurate assessments of their natural sea-defence value to be made and incorporated into sea defence and management schemes. The aim of this paper is to evaluate the use of a digital photographic method for the quantification of marsh vegetation density and then to investigate the relative roles played by hydrodynamic controls and vegetation density/type in causing the attenuation of incident waves over a macro-tidal saltmarsh. Results show that a significant statistical relationship exists between the density of vegetation measured in side-on photographs and the dry biomass of the photographed vegetation determined through direct harvesting. The potential of the digital photographic method for the spatial and temporal comparison of marsh surface vegetation biomass, density, and canopy structure is highlighted and the method was applied to assess spatial and seasonal differences in vegetation density and their effect on wave attenuation at three locations on a macro-tidal saltmarsh on the Dengie Peninsula, Essex, UK. In this environmental setting, vegetation density/type did not have a significant direct effect on wave attenuation but modified the process of wave transformation under different hydrodynamic conditions. At the two locations, characterised by a relatively tall canopy (15-26 cm) with biomass values of 430-500 g m-2, dominated by Spartina spp. (>70% of total dry biomass), relative incident wave height (wave height/water depth) is identified as a statistically significant dominant positive control on wave attenuation up to a threshold value of 0.55, beyond which wave attenuation showed no significant further increase. 
At the third location, characterised by only slightly less biomass (398 g m-2) but a shorter (6 cm) canopy of the annual Salicornia spp., no significant relationship existed between wave attenuation and relative wave height. Seasonally (between September and December), significant temporal increase/decrease in vegetation density occurred in one of the Spartina canopies and in the Salicornia canopy, respectively, and led to an expected (but not statistically significant) increase/decrease in wave attenuation. The wider implications of these findings in the context of form-process interactions on saltmarshes and their effect on marsh evolution are also discussed.
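The two quantities driving the analysis above are standard: the fractional wave-height reduction across the marsh, and the relative wave height (wave height over water depth) with its reported threshold of about 0.55. A minimal sketch with hypothetical wave measurements:

```python
def wave_attenuation(h_incident, h_leeward):
    """Fractional reduction in wave height across the marsh transect."""
    return (h_incident - h_leeward) / h_incident

def relative_wave_height(h_incident, water_depth):
    """Wave height / water depth; the study found this to be a dominant
    positive control on attenuation up to a threshold of about 0.55."""
    return h_incident / water_depth

# Hypothetical record: a 0.30 m incident wave in 0.60 m of water,
# reduced to 0.18 m at the landward sensor.
print(f"relative wave height = {relative_wave_height(0.30, 0.60):.2f}")  # 0.50
print(f"attenuation = {wave_attenuation(0.30, 0.18):.0%}")               # 40%
```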

Möller, I.



Natural Language, Sortal Reducibility and Generalized Quantifiers  

Microsoft Academic Search

Recent work in natural language semantics leads to some new observations on generalized quantifiers. In §1 we show that English quantifiers of type <1,1> are booleanly generated by their generalized universal and generalized existential members. These two classes also constitute the sortally reducible members of this type. Section 2 presents our main result--the Generalized Prefix Theorem (GPT). This theorem

Edward L. Keenan



Quantifying the effect of crops surface albedo variability on GHG budgets in a life cycle assessment approach : methodology and results.  

NASA Astrophysics Data System (ADS)

We tested a new method to estimate the radiative forcing of several crops at the annual and rotation scales, using local measurement data from two ICOS experimental sites. We used jointly 1) the radiative forcing caused by greenhouse gas (GHG) net emissions, calculated by using a Life Cycle Analysis (LCA) approach and in situ measurements (Ceschia et al. 2010), and 2) the radiative forcing caused by rapid changes in surface albedo typical of those ecosystems and resulting from management and crop phenology. The carbon and GHG budgets (GHGB) of 2 crop sites with contrasted management located in South West France (Auradé and Lamasquère sites) were estimated over a complete rotation by combining a classical LCA approach with on-site flux measurements. At both sites, carbon inputs (organic fertilisation and seeds), carbon exports (harvest) and net ecosystem production (NEP), measured with the eddy covariance technique, were calculated. The variability of the different terms and their relative contributions to the net ecosystem carbon budget (NECB) were analysed for all site-years, and the effect of management on NECB was assessed. To account for GHG fluxes that were not directly measured on site, we estimated the emissions caused by field operations (EFO) for each site using emission factors from the literature. The EFO were added to the NECB to calculate the total GHGB for a range of cropping systems and management regimes. N2O emissions were calculated following the IPCC (2007) guidelines, and CH4 emissions were assumed to be negligible compared to other contributions to the net GHGB. Additionally, albedo was calculated continuously using the short-wave incident and reflected radiation measurements in the field (0.3-3 µm) from CNR1 sensors. Mean annual differences in albedo and deduced radiative forcing from a reference value were then compared for all site-years. 
Mean annual differences in radiative forcing were then converted into g C equivalent m-2 in order to add this effect to the GHG budget (Muñoz et al. 2010). Increasing the length of the vegetative period is considered one of the main levers for improving the NECB of crop ecosystems. Therefore, we also tested the effect of adding intermediate crops or maintaining voluntary crop re-growth on both the NECB and the radiative forcing caused by the changes in mean annual surface albedo. We showed that the NEP was improved and, as a consequence, so were the NECB and GHGB. Intermediate crops also increased the mean annual surface albedo and therefore caused a negative radiative forcing (cooling effect) expressed in g C equivalent m-2 (sink). The use of an intermediate crop could in some cases switch the crop from a positive NEP (source) to a negative one (sink), and the change in radiative forcing (up to -110 g C-eq m-2 yr-1) could overwhelm the NEP term.

Ferlicoq, Morgan; Ceschia, Eric; Brut, Aurore; Tallec, Tiphaine



[Postoperative mental blocking in a continuous reaction task. With supplementary results showing the influence of age (author's transl)].  


By means of a four colour device for measuring continuous reaction sequences the mental blockings of brain damaged patients in comparison with patients suffering from skin disease were determined. Differences due to age were also investigated. The analysis of frequency distribution of reaction times (dissection method according to Daeves and Beckel) yielded the following results: a) Brain damaged patients show a higher percentage of blockings (23%) than patients suffering from skin-disease (10%). "Normal" reaction times as well as "blockings" are not prolonged significantly. b) Older patients show prolonged normal reaction times and prolonged blockings without increase in the percentage of blockings. c) Patients with left hemisphere lesions show longer normal reaction times than those who undergo right hemisphere operations. The results are discussed with regard to their significance for theory and practice (road accidents). PMID:4959

Bäumler, G; Gerlach, J; Kaether, G



Quantifying chain reptation in entangled polymer melts: Topological and dynamical mapping of atomistic simulation results onto the tube model  

NASA Astrophysics Data System (ADS)

The topological state of entangled polymers has been analyzed recently in terms of primitive paths which allowed obtaining reliable predictions of the static (statistical) properties of the underlying entanglement network for a number of polymer melts. Through a systematic methodology that first maps atomistic molecular dynamics (MD) trajectories onto time trajectories of primitive chains and then documents primitive chain motion in terms of a curvilinear diffusion in a tubelike region around the coarse-grained chain contour, we are extending these static approaches here even further by computing the most fundamental function of the reptation theory, namely, the probability ψ(s,t) that a segment s of the primitive chain remains inside the initial tube after time t, accounting directly for contour length fluctuations and constraint release. The effective diameter of the tube is independently evaluated by observing tube constraints either on atomistic displacements or on the displacement of primitive chain segments orthogonal to the initial primitive path. Having computed the tube diameter, the tube itself around each primitive path is constructed by visiting each entanglement strand along the primitive path one after the other and approximating it by the space of a small cylinder having the same axis as the entanglement strand itself and a diameter equal to the estimated effective tube diameter. Reptation of the primitive chain longitudinally inside the effective constraining tube as well as local transverse fluctuations of the chain driven mainly from constraint release and regeneration mechanisms are evident in the simulation results; the latter causes parts of the chains to venture outside their average tube surface for certain periods of time. The computed ψ(s,t) curves account directly for both of these phenomena, as well as for contour length fluctuations, since all of them are automatically captured in the atomistic simulations. 
Linear viscoelastic properties such as the zero shear rate viscosity and the spectra of storage and loss moduli obtained on the basis of the obtained ψ(s,t) curves for three different polymer melts (polyethylene, cis-1,4-polybutadiene, and trans-1,4-polybutadiene) are consistent with experimental rheological data and in qualitative agreement with the double reptation and dual constraint models. The new methodology is general and can be routinely applied to analyze primitive path dynamics and chain reptation in atomistic trajectories (accumulated through long MD simulations) of other model polymers or polymeric systems (e.g., bidisperse, branched, grafted, etc.); it is thus believed to be particularly useful in the future in evaluating proposed tube models and developing more accurate theories for entangled systems.

Stephanou, Pavlos S.; Baig, Chunggi; Tsolou, Georgia; Mavrantzas, Vlasis G.; Kröger, Martin



QUANTIFYING FOREST ABOVEGROUND CARBON POOLS AND FLUXES USING MULTI-TEMPORAL LIDAR: A report on field monitoring, remote sensing MMV, GIS integration, and modeling results for a forestry field validation test to quantify aboveground tree biomass and carbon  

Microsoft Academic Search

Sound policy recommendations relating to the role of forest management in mitigating atmospheric carbon dioxide (CO2) depend upon establishing accurate methodologies for quantifying forest carbon pools for large tracts of land that can be dynamically updated over time. Light Detection and Ranging (LiDAR) remote sensing is a promising technology for achieving accurate estimates of aboveground biomass and thereby carbon pools;

Lee Spangler; Lee A. Vierling; Eva K. Strand; Andrew T. Hudak; Jan U. H. Eitel; Sebastian Martinuzzi



Quantifying Quantumness  

NASA Astrophysics Data System (ADS)

We introduce and study a measure of "quantumness" of a quantum state based on its Hilbert-Schmidt distance from the set of classical states. "Classical states" were defined earlier as states for which a positive P-function exists, i.e. they are mixtures of coherent states [1]. We study invariance properties of the measure, upper bounds, and its relation to entanglement measures. We evaluate the quantumness of a number of physically interesting states and show that for any physical system in thermal equilibrium there is a finite critical temperature above which quantumness vanishes. We then use the measure for identifying the "most quantum" states. Such states are expected to be potentially most useful for quantum information theoretical applications. We find these states explicitly for low-dimensional spin systems, and show that they possess beautiful, highly symmetric Majorana representations. [1] Classicality of spin states, Olivier Giraud, Petr Braun, and Daniel Braun, Phys. Rev. A 78, 042112 (2008)
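In symbols, the measure described above can be sketched (notation ours) as the squared Hilbert-Schmidt distance from the convex set of classical states:

```latex
Q(\rho) \;=\; \min_{\sigma\in\mathcal{C}}\; \lVert \rho-\sigma \rVert_{\mathrm{HS}}^{2},
\qquad
\lVert A \rVert_{\mathrm{HS}}^{2} \;=\; \operatorname{Tr}\!\left(A^{\dagger}A\right)
```

so that Q(ρ) vanishes exactly when ρ admits a positive P-function, i.e. is a mixture of coherent states.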

Braun, Daniel; Giraud, Olivier; Braun, Peter A.



Children of Low Socioeconomic Status Show Accelerated Linear Growth in Early Childhood; Results from the Generation R Study  

PubMed Central

Objectives People of low socioeconomic status are shorter than those of high socioeconomic status. The first two years of life being critical for height development, we hypothesized that a low socioeconomic status is associated with a slower linear growth in early childhood. We studied maternal educational level (high, mid-high, mid-low, and low) as a measure of socioeconomic status and its association with repeatedly measured height in children aged 0–2 years, and also examined to what extent known determinants of postnatal growth contribute to this association. Methods This study was based on data from 2972 mothers with a Dutch ethnicity, and their children participating in The Generation R Study, a population-based cohort study in Rotterdam, the Netherlands (participation rate 61%). All children were born between April 2002 and January 2006. Height was measured at 2 months (mid-90% range 1.0–3.9), 6 months (mid-90% range 5.6–11.4), 14 months (mid-90% range 13.7–17.9) and 25 months of age (mid-90% range 23.6–29.6). Results At 2 months, children in the lowest educational subgroup were shorter than those in the highest (difference: −0.87 cm; 95% CI: −1.16, −0.58). Between 1 and 18 months, they grew faster than their counterparts. By 14 months, children in the lowest educational subgroup were taller than those in the highest (difference at 14 months: 0.40 cm; 95% CI: 0.08, 0.72). Adjustment for other determinants of postnatal growth did not explain the taller height. On the contrary, the differences became even larger (difference at 14 months: 0.61 cm; 95% CI: 0.26, 0.95; and at 25 months: 1.00 cm; 95% CI: 0.57, 1.43). Conclusions Compared with children of high socioeconomic status, those of low socioeconomic status show an accelerated linear growth until the 18th month of life, leading to an overcompensation of their initial height deficit. The long-term consequences of these findings remain unclear and require further study.

Silva, Lindsay M.; van Rossem, Lenie; Jansen, Pauline W.; Hokken-Koelega, Anita C. S.; Moll, Henriette A.; Hofman, Albert; Mackenbach, Johan P.; Jaddoe, Vincent W. V.; Raat, Hein



"The Show"  

ERIC Educational Resources Information Center

For the past 16 years, the blue-collar city of Huntington, West Virginia, has rolled out the red carpet to welcome young wrestlers and their families as old friends. They have come to town chasing the same dream for a spot in what many of them call "The Show". For three days, under the lights of an arena packed with 5,000 fans, the state's best…

Gehring, John



Quantifying machine flexibility  

Microsoft Academic Search

There are several studies aiming to quantify various aspects of flexibility in manufacturing systems, such as routing flexibility, product mix flexibility, volume flexibility, etc. However, there is still a need to develop more generic measures that can be used to quantify the flexibility of systems in order to enable decision-makers to reach better decisions in selecting between different system configurations. In this

Adil Baykaso?lu



Universally Quantified Interval Constraints  

Microsoft Academic Search

Non-linear real constraint systems with universally and/or existentially quantified variables often need to be solved in such contexts as control design or sensor planning. To date, these systems are mostly handled by computing a quantifier-free equivalent form by means of Cylindrical Algebraic Decomposition (CAD). However, CAD restricts its input to be conjunctions and disjunctions of polynomial constraints with rational

Frédéric Benhamou; Frédéric Goualard



Development of a MALDI-TOF-MS method to identify and quantify butyrylcholinesterase inhibition resulting from exposure to organophosphate and carbamate pesticides.  


A novel, proteomics based method was developed for the detection, quantification, and categorization of serum butyrylcholinesterase (BChE) inhibitors, including organophosphates (OPs) and carbamates (CBs). This method was based on the MALDI-TOF-MS analysis of the trypsin generated BChE active site peptide (191-SVTLFGESAGAASVSLHLLSPR-212) previously modified by reaction with an OP or CB. The ionization efficiency of OP modified active site peptides by MALDI was greatly improved by adding diammonium citrate to the MALDI matrix, which made the quantification of OP exposure feasible. Excellent linearity (r2 > 0.98) between the normalized abundance ratios (NARs) and OP concentrations or logarithm of carbaryl concentration was obtained. The accuracy of the developed assay was evaluated by comparison of IC50 and IC100 values from the assay with those determined by the Ellman method. Results from this method were comparable with those from the Ellman method. The advantage of the assay was that both the origin and the extent of pesticide exposure can be determined in one analysis. Our MALDI method can provide critical evidence for the pesticide exposure at low BChE inhibition levels even down to 3%, not available with the Ellman method. PMID:17223355

Sun, Jinchun; Lynn, Bert C



A high-density wireless underground sensor network (WUSN) to quantify hydro-ecological interactions for a UK floodplain; project background and initial results  

NASA Astrophysics Data System (ADS)

Floodplain meadows support some of the most diverse vegetation in the UK, and also perform key ecosystem services, such as flood storage and sediment retention. However, the UK now has less than 1500 ha of this unique habitat remaining. In order to conserve and better exploit the services provided by this grassland, an improved understanding of its functioning is essential. Vegetation functioning and species composition are known to be tightly correlated to the hydrological regime, and related temperature and nutrient regime, but the mechanisms controlling these relationships are not well established. The FUSE* project aims to investigate the spatiotemporal variability in vegetation functioning (e.g. photosynthesis and transpiration) and plant community composition in a floodplain meadow near Oxford, UK (Yarnton Mead), and their relationship to key soil physical variables (soil temperature and moisture content), soil nutrient levels and the water- and energy-balance. A distributed high density Wireless Underground Sensor Network (WUSN) is in the process of being established on Yarnton Mead. The majority, or ideally all, of the sensing and transmitting components will be installed below-ground because Yarnton Mead is a SSSI (Site of Special Scientific Interest, due to its unique plant community) and because occasionally sheep or cattle are grazing on it, and that could damage the nodes. This prerequisite has implications for the maximum spacing between UG nodes and their communications technologies; in terms of signal strength, path losses and requirements for battery life. The success of underground wireless communication is highly dependent on the soil type and water content. This floodplain environment is particularly challenging in this context because the soil contains a large amount of clay near the surface and is therefore less favourable to EM wave propagation than sandy soils. 
Furthermore, due to high relative saturation levels (as a result of high groundwater levels and occasional overland flooding) considerable path losses are expected. Finally, the long-term below-ground installation of the nodes means that batteries cannot be replaced easily, therefore energy conservation schemes are required to be deployed on the nodes. We present a brief overview of the project and initial findings of the approach we have adopted to address these wireless communication issues. This involves tests covering a range of transmission frequencies, antennae types, and node placements. (*FUSE: Floodplain Underground SEnsors, funded by the UK Natural Environment Research Council, NE/I007288/1, start date 1-3-2011)

Verhoef, A.; Choudhary, B.; Morris, P. J.; McCann, J.



Simple instruments used in monitoring ionospheric perturbations and some observational results showing the ionospheric responses to the perturbations mainly from the lower atmosphere  

NASA Astrophysics Data System (ADS)

Ionospheric disturbances such as SIDs and acoustic gravity waves on different scales are well-known and commonly discussed topics. Some simple ground equipment was designed and used for continuously monitoring the effects of these disturbances, especially SWF and SFD. Besides SIDs, the instruments also clearly reflect acoustic gravity waves on different scales and Spread-F, and these data are an important supplement to traditional ionosonde records. This is significant for understanding the physical essentials of ionospheric disturbances and for applications in SID warning. In this paper, the design of the instruments is given and results are discussed in detail. Some case studies are introduced as examples, showing very clearly not only the immediate effects of solar flares, but also ionospheric responses to large-scale gravity waves from the lower atmosphere, such as typhoons, great earthquakes, and volcanic eruptions. In particular, results showed that acoustic gravity waves play a significant role in seeding ionospheric Spread-F. These examples give evidence that lower-atmospheric activities strongly influence the ionosphere.

Xiao, Zuo; Hao, Yongqiang; Zhang, Donghe; Xiao, Sai-Guan; Huang, Weiquan


Quantifying ice sheet flow characteristics  

NASA Astrophysics Data System (ADS)

Advances have been made in describing ice sheet motion, but in situ rheology (characteristics that affect the flow) of the ice has been hard to measure in the field. Gillet-Chaulet et al. show that they can measure ice rheology and strain rates in situ using a phase-sensitive radar. They used the technique on the Greenland ice sheet to quantify the rheology there. The researchers were able to achieve sufficient resolution to measure a flow phenomenon known as the Raymond effect, in which the ice sheet shows horizontal variations of the vertical strain rate pattern, sometimes creating anticlines in radar-detected stratigraphic layers that are known as Raymond arches. This effect is due to a highly viscous plug of nearly stagnant ice under an ice ridge. The study is, the researchers believe, the first direct confirmation of the Raymond effect. Their results suggest that laboratory ice studies do not capture the full range of ice flow that exists in nature, so additional field studies are needed. (Geophysical Research Letters, doi:10.1029/2011GL049843, 2011)

Balcerak, Ernie



Field measurements along the 2010 Ms 7.1 Yushu earthquake rupture shows strike-slip and dip-slip activities, resulting in mountains uplift  

NASA Astrophysics Data System (ADS)

The Yushu Ms 7.1 earthquake occurred in the Qinghai Province, China, on April 14th, 2010. Understanding its mechanism is critical to studying the local stress field and earthquake mechanisms, so we conducted a careful field investigation immediately after the main shock. Morphological field research shows that the earthquake was triggered by the Ganzi-Yushu fault, trending NW-SE and dipping NE. It spreads at the base of the range-front, along which huge triangular facets (up to 600 m) are distributed, attesting to the important vertical component of this fault. Geomorphic features (such as troughs, rivers, fences, and alluvial fans) exhibit sinistral offsets that vary from tens of meters to hundreds of meters. Due to both strike-slip and dip-slip displacements, this fault seems to be a transtensional fault. Thorough observation and measurements were made along the rupture zone, which is about 49 km long and consists of 3 discontinuous left-stepping rupture segments (19 km, 22 km, and about 8 km, respectively, from west to east). We observed a maximum sinistral offset of 2.3 m along the central segment and a maximum vertical offset of 0.6 m along the western segment. These offsets, as well as push-up structures, co-seismic pull-aparts and left-stepping en-echelon tension fissures, show strike-slip and dip-slip components. The angle between the Principal Displacement Zone (PDZ) and the en-echelon tension fissures reflects the surface rupture kinematics: larger than 45 degrees in transpression, less than 45 degrees in transtension, and equal to 45 degrees in simple shear. For instance, along the Changu Temple segment, we measured 125 rupture directions and found that the mean PDZ strike is ca. 295 degrees NW while the fissures' strike ranges from 278 to 300 degrees NW. The angle is therefore less than 45 degrees, revealing the transtensional regime. 
In the Guoyangyansongduo segment, we measured 287 rupture directions and found that the PDZ strikes ca. 300 degrees NW while fissures strike 265-290 degrees NW, again indicating an angle of less than 45 degrees and a transtensional regime. Lastly, in the Longbao Lake segment, 30 rupture direction measurements show that the PDZ strikes 290 degrees NW and fissures strike 270 degrees NW, also showing a transtensional regime. The uplift of the mountain range therefore results from the transtensional regime of the fault, whose long-term activity resembles that expressed in the Yushu earthquake.
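The 45-degree rule used above is simple enough to state as code. A minimal sketch (the function name and input conventions are ours, not the authors'):

```python
def rupture_regime(pdz_strike_deg, fissure_strike_deg):
    """Classify surface-rupture kinematics from the acute angle between
    the Principal Displacement Zone (PDZ) strike and the strike of the
    en-echelon tension fissures, per the 45-degree criterion above."""
    angle = abs(pdz_strike_deg - fissure_strike_deg) % 180
    if angle > 90:
        angle = 180 - angle  # keep the acute angle between the two strikes
    if angle > 45:
        return "transpression"
    if angle < 45:
        return "transtension"
    return "simple shear"

# Changu Temple segment: mean PDZ strike ~295 deg, mean fissure strike ~289 deg
print(rupture_regime(295, 289))  # transtension
```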

Fuyao, W.; Li, H.; Pan, J.; Xu, Z.; Li, N.; Guo, R.; Zhang, W.



Quantifying anoxia in lakes  

Microsoft Academic Search

The anoxic factor (AF, days per year or per season) can be used to quantify anoxia in stratified lakes. AF is calculated from oxygen profiles measured in the stratified season and the lake surface area (A0), as AF represents the number of days that a sediment area, equal to the whole-lake surface area, is overlain by anoxic water. Average AF for
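The definition above lends itself to a one-line computation. A minimal sketch, assuming the summation form AF = Σ(tᵢ·aᵢ)/A0 implied by the definition (variable names ours):

```python
def anoxic_factor(anoxic_areas, durations_days, lake_area):
    """Anoxic factor in days: each anoxic sediment area (same units as
    lake_area) weighted by how long it stayed anoxic, normalized by the
    whole-lake surface area. Area units cancel, leaving days."""
    return sum(a * t for a, t in zip(anoxic_areas, durations_days)) / lake_area

# Hypothetical season on a 100 ha lake: 40 ha anoxic for 30 days,
# then 20 ha anoxic for another 50 days
print(anoxic_factor([40.0, 20.0], [30.0, 50.0], 100.0))  # 22.0 days
```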

Gertrud K. Nürnberg



Quantifying the Uruguay Round  

Microsoft Academic Search

The effects of the Uruguay Round are quantified using a numerical general equilibrium model which incorporates increasing returns to scale, twenty-four regions, twenty-two commodities, and steady state growth effects. The authors conclude that the aggregate welfare gains from the Round are in the order of $96 billion per year in the short run, but could be as high as $171

Thomas F. Rutherford; David G. Tarr



Field measurements along the 2010 Ms 7.1 Yushu earthquake rupture shows strike-slip and dip-slip activities, resulting in mountains uplift  

Microsoft Academic Search

The Yushu Ms 7.1 earthquake occurred in the Qinghai Province, China, on April 14th, 2010. Understanding its mechanism is critical to studying the local stress field and the mechanism of earthquake, therefore we conducted careful field investigation immediately after the main shock. Morphological field research shows that the earthquake was triggered by the Ganzi-Yushu fault, trending NW-SE and dipping NE.

W. Fuyao; H. Li; J. Pan; Z. Xu; N. Li; R. Guo; W. Zhang



Genetic susceptibility to childhood acute lymphoblastic leukemia shows protection in Malay boys: Results from the Malaysia-Singapore ALL Study Group  

Microsoft Academic Search

To study genetic epidemiology of childhood acute lymphoblastic leukemia (ALL) in the Chinese and Malays, we investigated 10 polymorphisms encoding carcinogen- or folate-metabolism and transport. Sex-adjusted analysis showed NQO1 609CT significantly protects against ALL, whilst MTHFR 677CT confers marginal protection. Interestingly, we observed that NQO1 609CT and MTHFR 1298 C-allele have greater genetic impact in boys than in girls. The

Allen Eng-Juh Yeoh; Yi Lu; Jason Yong-Sheng Chan; Yiong Huak Chan; Hany Ariffin; Shirley Kow-Yin Kham; Thuan Chong Quah



Quantifying PV power Output Variability  

SciTech Connect

This paper presents a novel approach to rigorously quantify power Output Variability from a fleet of photovoltaic (PV) systems, ranging from a single central station to a set of distributed PV systems. The approach demonstrates that the relative power Output Variability for a fleet of identical PV systems (same size, orientation, and spacing) can be quantified by identifying the number of PV systems and their Dispersion Factor. The Dispersion Factor is a new variable that captures the relationship between PV Fleet configuration, Cloud Transit Speed, and the Time Interval over which variability is evaluated. Results indicate that Relative Output Variability: (1) equals the inverse of the square root of the number of systems for fully dispersed PV systems; and (2) could be further minimized for optimally-spaced PV systems. (author)
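The first result, that relative output variability for fully dispersed systems equals the inverse square root of the number of systems, can be checked with a short Monte-Carlo sketch (our illustration of the statistical effect, not the authors' method):

```python
import random
import statistics

def fleet_variability(n_systems, n_steps=20000, seed=1):
    """Std dev of the fleet-average output change when each of n_systems
    experiences an independent unit-variance change per time interval
    (the fully dispersed case)."""
    rng = random.Random(seed)
    steps = [sum(rng.gauss(0.0, 1.0) for _ in range(n_systems)) / n_systems
             for _ in range(n_steps)]
    return statistics.stdev(steps)

# Variability of one system vs. a fully dispersed 16-system fleet:
ratio = fleet_variability(1) / fleet_variability(16)
print(ratio)  # close to sqrt(16) = 4
```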

Hoff, Thomas E. [Clean Power Research, Napa, CA (United States); Perez, Richard [ASRC, The University at Albany, Albany, NY (United States)



Thematic contribution to overgeneralization in memory for quantified discourse  

Microsoft Academic Search

Examined whether overgeneralization of the quantified relations in a story reflects reasonable inferences from the story's theme. 90 college students were asked to read narratives with similar story structures but with different thematic conflicts. Two-choice recognition results show that overgeneralization

Russell Revlin; Bruce Bromage; Michael Van Ness



Genetic susceptibility to childhood acute lymphoblastic leukemia shows protection in Malay boys: results from the Malaysia-Singapore ALL Study Group.  


To study genetic epidemiology of childhood acute lymphoblastic leukemia (ALL) in the Chinese and Malays, we investigated 10 polymorphisms encoding carcinogen- or folate-metabolism and transport. Sex-adjusted analysis showed NQO1 609CT significantly protects against ALL, whilst MTHFR 677CT confers marginal protection. Interestingly, we observed that NQO1 609CT and MTHFR 1298 C-allele have greater genetic impact in boys than in girls. The combination of SLC19A1 80GA heterozygosity and 3'-TYMS -6bp/-6bp homozygous deletion is associated with reduced ALL risk in Malay boys. Our study has suggested the importance of gender and race in modulating ALL susceptibility via the folate metabolic pathway. PMID:19651439

Yeoh, Allen Eng-Juh; Lu, Yi; Chan, Jason Yong-Sheng; Chan, Yiong Huak; Ariffin, Hany; Kham, Shirley Kow-Yin; Quah, Thuan Chong



Visualizing and quantifying the suppressive effects of glucocorticoids on the tadpole immune system in vivo  

NSDL National Science Digital Library

This article presents a laboratory module developed to show students how glucocorticoid receptor activity can be pharmacologically modulated in Xenopus laevis tadpoles and the resulting effects on thymus gland size visualized and quantified in vivo.

Alexander Schreiber (St. Lawrence University)



Normalized wavelet packets quantifiers for condition monitoring  

NASA Astrophysics Data System (ADS)

Normalized wavelet packets quantifiers are proposed and studied as a new tool for condition monitoring. The new quantifiers construct a complete quantitative time-frequency analysis: the Wavelet packets relative energy measures the normalized energy of the wavelet packets node; the Total wavelet packets entropy measures how the normalized energies of the wavelet packets nodes are distributed in the frequency domain; the Wavelet packets node entropy describes the uncertainty of the normalized coefficients of the wavelet packets node. Unlike the feature extraction methods directly using the amplitude of wavelet coefficients, the new quantifiers are derived from probability distributions and are more robust in diagnostic applications. By applying these quantifiers to Acoustic Emission signals from faulty bearings of rotating machines, our study shows that both localized defects and advanced contamination faults can be successfully detected and diagnosed if the appropriate quantifier is chosen. The Bayesian classifier is used to quantitatively analyse and evaluate the performance of the proposed quantifiers. We also show that reducing the Daubechies wavelet order or the length of the segment will deteriorate the performance of the quantifiers. A two-dimensional diagnostic scheme can also help to improve the diagnostic performance but the improvements are only significant when using lower wavelet orders.
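All three quantifiers rest on the same normalization step: node energies are turned into a probability distribution before any entropy is taken. A minimal sketch of the first two (the node energies themselves would come from a wavelet packet transform, which we treat as given here):

```python
from math import log

def relative_energies(node_energies):
    """Wavelet packets relative energy: each node's energy as a fraction
    of the total, i.e. a probability distribution over frequency bands."""
    total = sum(node_energies)
    return [e / total for e in node_energies]

def total_wp_entropy(node_energies):
    """Total wavelet packets entropy: Shannon entropy of the normalized
    node energies, measuring how spread the energy is in frequency."""
    return sum(-p * log(p) for p in relative_energies(node_energies) if p > 0)

# Hypothetical energies for four wavelet packet nodes:
print(total_wp_entropy([1.0, 1.0, 1.0, 1.0]))  # log(4): energy fully spread
print(total_wp_entropy([4.0, 0.0, 0.0, 0.0]))  # 0.0: energy in a single band
```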

Feng, Yanhui; Schlindwein, Fernando S.



On quantifying insect movements  

SciTech Connect

We elaborate on methods described by Turchin, Odendaal, and Rausher for quantifying insect movement pathways. We note the need to scale measurement resolution to the study insects and the questions being asked, and we discuss the use of surveying instrumentation for recording sequential positions of individuals on pathways. We itemize several measures that may be used to characterize movement pathways and illustrate these by comparisons among several Eleodes beetles occurring in shortgrass steppe. The fractal dimension of pathways may provide insights not available from absolute measures of pathway configuration. Finally, we describe a renormalization procedure that may be used to remove sequential interdependence among locations of moving individuals while preserving the basic attributes of the pathway.

Wiens, J.A.; Crist, T.O. (Colorado State Univ., Fort Collins (United States)); Milne, B.T. (Univ. of New Mexico, Albuquerque (United States))



Quantifier Comprehension in Corticobasal Degeneration  

ERIC Educational Resources Information Center

In this study, we investigated patients with focal neurodegenerative diseases to examine a formal linguistic distinction between classes of generalized quantifiers, like "some X" and "less than half of X." Our model of quantifier comprehension proposes that number knowledge is required to understand both first-order and higher-order quantifiers.…

McMillan, Corey T.; Clark, Robin; Moore, Peachie; Grossman, Murray



Quantifying the Nonclassicality of Operations  

NASA Astrophysics Data System (ADS)

Deep insight can be gained into the nature of nonclassical correlations by studying the quantum operations that create them. Motivated by this we propose a measure of nonclassicality of a quantum operation utilizing the relative entropy to quantify its commutativity with the completely dephasing operation. We show that our measure of nonclassicality is a sum of two independent contributions, the generating power—its ability to produce nonclassical states out of classical ones, and the distinguishing power—its usefulness to a classical observer for distinguishing between classical and nonclassical states. Each of these effects can be exploited individually in quantum protocols. We further show that our measure leads to an interpretation of quantum discord as the difference in superdense coding capacities between a quantum state and the best classical state when both are produced at a source that makes a classical error during transmission.

Meznaric, Sebastian; Clark, Stephen R.; Datta, Animesh



Quantifying T lymphocyte turnover.  


Peripheral T cell populations are maintained by production of naive T cells in the thymus, clonal expansion of activated cells, cellular self-renewal (or homeostatic proliferation), and density dependent cell life spans. A variety of experimental techniques have been employed to quantify the relative contributions of these processes. In modern studies lymphocytes are typically labeled with 5-bromo-2'-deoxyuridine (BrdU), deuterium, or the fluorescent dye carboxy-fluorescein diacetate succinimidyl ester (CFSE), their division history has been studied by monitoring telomere shortening and the dilution of T cell receptor excision circles (TRECs) or the dye CFSE, and clonal expansion has been documented by recording changes in the population densities of antigen specific cells. Proper interpretation of such data in terms of the underlying rates of T cell production, division, and death has proven to be notoriously difficult and involves mathematical modeling. We review the various models that have been developed for each of these techniques, discuss which models seem most appropriate for what type of data, reveal open problems that require better models, and pinpoint how the assumptions underlying a mathematical model may influence the interpretation of data. Elaborating various successful cases where modeling has delivered new insights in T cell population dynamics, this review provides quantitative estimates of several processes involved in the maintenance of naive and memory, CD4(+) and CD8(+) T cell pools in mice and men. PMID:23313150
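As a flavor of the kind of model being reviewed, here is a deliberately minimal pool-dynamics sketch (a generic source/proliferation/loss ODE of our own construction, not any specific published fit):

```python
def simulate_pool(sigma, p, d, n0=0.0, dt=0.01, t_end=200.0):
    """Euler integration of dN/dt = sigma + (p - d) * N, where sigma is
    thymic production, p homeostatic proliferation, and d loss. When
    d > p the pool settles at the steady state sigma / (d - p)."""
    n, t = n0, 0.0
    while t < t_end:
        n += dt * (sigma + (p - d) * n)
        t += dt
    return n

# 10 cells/day source, 1%/day division, 6%/day loss: steady state 10/0.05 = 200
print(simulate_pool(sigma=10.0, p=0.01, d=0.06))  # approaches 200.0
```

Even this toy model illustrates the identifiability problem discussed in the review: very different (sigma, p, d) combinations can produce nearly identical population sizes, which is why labeling data and explicit modeling are needed to separate production, division, and death.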

De Boer, Rob J; Perelson, Alan S



A methodology for quantifying seated lumbar curvatures.  


To understand the role seating plays in the support of posture and spinal articulation, it is necessary to study the interface between a human and the seat. However, a method to quantify lumbar curvature in commercially available unmodified seats does not currently exist. This work sought to determine if the lumbar curvature for normal ranges of seated posture could be documented by using body landmarks located on the anterior portion of the body. The development of such a methodology will allow researchers to evaluate spinal articulation of a seated subject while in standard, commercially available seats and chairs. Anterior measurements of boney landmarks were used to quantify the relative positions of the ribcage and pelvis while simultaneous posterior measurements were made of lumbar curvature. The relationship between the anterior and the posterior measures was compared. The predictive capacity of this approach was evaluated by determining linear and second-order regressions for each of the four postures across all subjects and conducting a leave-one-out cross validation. The relationships between the anterior and posterior measures were approximated by linear and second-order polynomial regressions (r² = 0.829 and 0.935, respectively) across all postures. The quantitative analysis showed that openness had a significant relationship with lumbar curvature, and a first-order regression was superior to a second-order regression. Average standard errors in the prediction were 5.9° for the maximum kyphotic posture, 9.9° for the comfortable posture, 12.8° for the straight and tall, and 22.2° for the maximum lordotic posture. These results show predictions of lumbar curvature are possible in seated postures by using a motion capture system and anterior measures. 
This method of lumbar curvature prediction shows potential for use in the assessment of seated spinal curvatures and the corresponding design of seating to accommodate those curvatures; however, additional inputs will be necessary to better predict the postures as lordosis is increased. PMID:22168743
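The regression-plus-cross-validation procedure described above can be sketched in a few lines (first-order case only; the data values are hypothetical, invented for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def loo_errors(xs, ys):
    """Leave-one-out cross-validation: refit without each point, then
    record the prediction error at the held-out point."""
    errs = []
    for i in range(len(xs)):
        xt, yt = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a, b = fit_line(xt, yt)
        errs.append(ys[i] - (a + b * xs[i]))
    return errs

# Hypothetical data: anterior "openness" measure vs. lumbar curvature (deg)
openness = [10.0, 20.0, 30.0, 40.0, 50.0]
curvature = [-25.0, -12.0, 1.0, 15.0, 27.0]
print([round(e, 1) for e in loo_errors(openness, curvature)])
```

The spread of the held-out errors plays the role of the posture-specific standard errors reported above.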

Leitkam, Samuel T; Bush, Tamara Reid; Li, Mingfei



Quantifiers and approximation  

Microsoft Academic Search

We investigate the relationship between logical expressibility of NP optimization problems and their approximation properties. The first such attempt was made by Papadimitriou and Yannakakis, who defined the class of NPO problems MAX NP. We show that many important optimization problems do not belong to MAX NP and that in fact there are problems in P which are not in MAX

Alessandro Panconesi; Desh Ranjan



Quantifiable fluorescent glycan microarrays  

Microsoft Academic Search

A glycan microarray was developed by using 2,6-diaminopyridine (DAP) as a fluorescent linker and printing of the glycan-DAP conjugates (GDAPs) on epoxy-activated glass slides. Importantly, all coupled GDAPs showed a detectable level of concentration-dependent GDAP fluorescence under blue laser excitation (495 nm) that can be used for both grid location and on-slide quantification. A glycan array including a large number of

Xuezheng Song; Baoyun Xia; Yi Lasanajak; David F. Smith; Richard D. Cummings



Quantifying network heterogeneity  

NASA Astrophysics Data System (ADS)

Although degree distributions give some insight into how heterogeneous a network is, they fail to give a unique quantitative characterization of network heterogeneity. This is particularly the case when several different distributions fit the same network, when the number of data points is very scarce due to network size, or when we have to compare two networks with completely different degree distributions. Here we propose a unique characterization of network heterogeneity based on the difference of functions of node degrees for all pairs of linked nodes. We show that this heterogeneity index can be expressed as a quadratic form of the Laplacian matrix of the network, which allows a spectral representation of network heterogeneity. We give bounds for this index, which is equal to zero for any regular network and equal to one only for star graphs. Using it we study random networks, showing that those generated by the Erdös-Rényi algorithm have zero heterogeneity, and those generated by the preferential attachment method of Barabási and Albert display only 11% of the heterogeneity of a star graph. We finally study 52 real-world networks and find that they display a large variety of heterogeneities. We also show that a classification system based on degree distributions does not reflect the heterogeneity properties of real-world networks.
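A sketch of an index with the properties stated above: zero for any regular network and one only for the star graph. We use the degree-based form ρ = Σ over edges of (1/√kᵢ − 1/√kⱼ)², normalized by n − 2√(n−1); this matches the stated bounds, though the published normalization should be checked against the paper itself.

```python
from math import sqrt

def heterogeneity_index(edges, n):
    """Degree-based heterogeneity: sum over linked node pairs of the
    squared difference of 1/sqrt(degree), normalized so that a regular
    graph scores 0 and the n-node star graph scores 1."""
    deg = {}
    for i, j in edges:
        deg[i] = deg.get(i, 0) + 1
        deg[j] = deg.get(j, 0) + 1
    rho = sum((1 / sqrt(deg[i]) - 1 / sqrt(deg[j])) ** 2 for i, j in edges)
    return rho / (n - 2 * sqrt(n - 1))

star = [(0, k) for k in range(1, 6)]          # star on 6 nodes
cycle = [(k, (k + 1) % 6) for k in range(6)]  # 6-cycle, 2-regular
print(round(heterogeneity_index(star, 6), 6))   # 1.0
print(round(heterogeneity_index(cycle, 6), 6))  # 0.0
```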

Estrada, Ernesto



Quantifying network heterogeneity.  


Although degree distributions give some insight into how heterogeneous a network is, they fail to give a unique quantitative characterization of network heterogeneity. This is particularly the case when several different distributions fit the same network, when the number of data points is very scarce due to network size, or when we have to compare two networks with completely different degree distributions. Here we propose a unique characterization of network heterogeneity based on the difference of functions of node degrees for all pairs of linked nodes. We show that this heterogeneity index can be expressed as a quadratic form of the Laplacian matrix of the network, which allows a spectral representation of network heterogeneity. We give bounds for this index, which is equal to zero for any regular network and equal to one only for star graphs. Using it we study random networks, showing that those generated by the Erdös-Rényi algorithm have zero heterogeneity, and those generated by the preferential attachment method of Barabási and Albert display only 11% of the heterogeneity of a star graph. We finally study 52 real-world networks and find that they display a large variety of heterogeneities. We also show that a classification system based on degree distributions does not reflect the heterogeneity properties of real-world networks. PMID:21230700

Estrada, Ernesto



Quantifying air pollution removal by green roofs in Chicago  

Microsoft Academic Search

The level of air pollution removal by green roofs in Chicago was quantified using a dry deposition model. The results showed that a total of 1,675 kg of air pollutants was removed by 19.8 ha of green roofs in one year, with O3 accounting for 52% of the total, NO2 (27%), PM10 (14%), and SO2 (7%). The highest level of air pollution
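A dry deposition estimate of this kind typically multiplies a pollutant-specific deposition velocity by ambient concentration, roof area, and time. The sketch below shows the arithmetic only; the deposition velocity and concentration are placeholder values, not figures from the study:

```python
def removal_kg(vd_m_per_s, conc_ug_m3, area_m2, seconds):
    # flux (ug/m^2/s) = deposition velocity * concentration;
    # integrate over area and time, then convert micrograms to kg
    return vd_m_per_s * conc_ug_m3 * area_m2 * seconds * 1e-9

# placeholder example: 19.8 ha of green roof over one year
area = 19.8 * 1e4            # hectares -> m^2
year = 365 * 24 * 3600       # seconds in a year
o3 = removal_kg(0.002, 60.0, area, year)  # hypothetical Vd and O3 level
```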

Jun Yang; Qian Yu; Peng Gong



Tableaux for Quantified Hybrid Logic  

Microsoft Academic Search

We present a (sound and complete) tableau calculus for Quantified Hybrid Logic (QHL). QHL is an extension of orthodox quantified modal logic: as well as the usual Box and Diamond modalities it contains names for (and variables over) states, operators @_s for asserting that a formula holds at a named state, and a binder ↓ that binds a variable to

P. Blackburn; M. J. Marx





Quantifying PV power output variability  

Microsoft Academic Search

This paper presents a novel approach to rigorously quantify power output variability from a fleet of photovoltaic (PV) systems, ranging from a single central station to a set of distributed PV systems. The approach demonstrates that the relative power output variability for a fleet of identical PV systems (same size, orientation, and spacing) can be quantified by identifying the number
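One standard way to express fleet smoothing (stated here as an assumption, not necessarily the paper's exact formulation) is the dispersion factor for N identical systems whose output changes have pairwise correlation rho:

```python
def relative_variability(n, rho=0.0):
    # Standard deviation of the fleet-average output change relative
    # to that of a single system: sqrt((1 + (n - 1) * rho) / n).
    # rho = 0 gives 1/sqrt(n) smoothing; rho = 1 gives no smoothing.
    return ((1 + (n - 1) * rho) / n) ** 0.5
```

For example, 100 uncorrelated systems reduce relative variability to one tenth of a single system's, while perfectly correlated systems gain nothing.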

Thomas E. Hoff; Richard Perez



Automata and quantifier hierarchies  

Microsoft Academic Search

The paper discusses results on ω-languages in a recursion-theoretic framework which is adapted to the treatment of formal languages. We consider variants of the arithmetical hierarchy which are not based on the recursive sets but on sets defined in terms of finite automata. In particular, it is shown how the theorems of Büchi and McNaughton on regular ω-languages can

Wolfgang Thomas; RWTH Aachen; Lehrstuhl für Informatik



Quantifying nonisothermal subsurface soil water evaporation  

NASA Astrophysics Data System (ADS)

Accurate quantification of energy and mass transfer during soil water evaporation is critical for improving understanding of the hydrologic cycle and for many environmental, agricultural, and engineering applications. Drying of soil under radiation boundary conditions results in formation of a dry surface layer (DSL), which is accompanied by a shift in the position of the latent heat sink from the surface to the subsurface. Detailed investigation of evaporative dynamics within this active near-surface zone has mostly been limited to modeling, with few measurements available to test models. Soil column studies were conducted to quantify nonisothermal subsurface evaporation profiles using a sensible heat balance (SHB) approach. Eleven-needle heat pulse probes were used to measure soil temperature and thermal property distributions at the millimeter scale in the near-surface soil. Depth-integrated SHB evaporation rates were compared with mass balance evaporation estimates under controlled laboratory conditions. The results show that the SHB method effectively measured total subsurface evaporation rates with only 0.01-0.03 mm h^-1 difference from mass balance estimates. The SHB approach also quantified millimeter-scale nonisothermal subsurface evaporation profiles over a drying event, which has not been previously possible. Thickness of the DSL was also examined using measured soil thermal conductivity distributions near the drying surface. Estimates of the DSL thickness were consistent with observed evaporation profile distributions from SHB. Estimated thickness of the DSL was further used to compute diffusive vapor flux. The diffusive vapor flux also closely matched both mass balance evaporation rates and subsurface evaporation rates estimated from SHB.
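The core of a sensible heat balance can be sketched as follows: the residual of measured sensible heat fluxes is attributed to latent heat consumed by subsurface evaporation. The sign convention, the single-layer simplification, and all numeric values below are illustrative assumptions, not the study's instrumented method:

```python
LV = 2.45e6  # latent heat of vaporization of water, J/kg (approx., ~20 C)

def shb_evaporation_mm_per_h(h_in_w_m2, h_out_w_m2, d_storage_w_m2):
    # Residual of the sensible heat balance for a soil layer:
    # heat in minus heat out minus change in stored heat.
    latent = h_in_w_m2 - h_out_w_m2 - d_storage_w_m2  # W/m^2
    kg_per_m2_s = latent / LV
    return kg_per_m2_s * 3600.0  # 1 kg/m^2 of water == 1 mm depth

rate = shb_evaporation_mm_per_h(120.0, 40.0, 12.0)  # placeholder fluxes
```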

Deol, Pukhraj; Heitman, Josh; Amoozegar, Aziz; Ren, Tusheng; Horton, Robert



Quantifying Dictyostelium discoideum Aggregation  

NASA Astrophysics Data System (ADS)

Upon nutrient deprivation, the social amoebae Dictyostelium discoideum enter a developmental program causing them to aggregate into multicellular organisms. During this process cells sense and secrete chemical signals, often moving in a head-to-tail fashion called a `stream' as they assemble into larger entities. We measure Dictyostelium speed, shape, and directionality, both inside and outside of streams, and develop methods to distinguish group dynamics from behavior of individual cells. We observe an overall increase in speed during aggregation and a decrease in speed fluctuations once a cell joins a stream. Initial results indicate that when cells are in close proximity the trailing cells migrate specifically toward the backs of leading cells.

McCann, Colin; Kriebel, Paul; Parent, Carole; Losert, Wolfgang



Methods to quantify tank losses are improved  

Microsoft Academic Search

A major revision to the American Petroleum Institute's Publication 2519 provides a tool to accurately quantify evaporative losses and the resultant atmospheric emissions from petroleum stocks stored in internal floating-roof tanks (IFRTs). As a result of significant improvements in the loss calculation procedures included in the revised publication, entitled "Evaporation Loss from Internal Floating Roof Tanks," more accurate stock inventory

K. M. Hanzevack; B. D. Anderson; R. L. Russell



Quantifying Northern Hemisphere freshwater ice  

NASA Astrophysics Data System (ADS)

The areal extent and volume of peak freshwater (river and lake) ice are quantified across the Northern Hemisphere for the period 1957-2002. Quantification is conducted using a degree-day ice growth model and ice growth coefficients defined for 14 ice-specific hydroclimatic regions. The model is driven by ERA-40 gridded daily air temperature data, and the Global Lakes and Wetlands Database is employed to spatially define rivers and lakes. Results indicate that the total area covered by freshwater ice, at peak thickness, north of the January 0°C isotherm (excluding the Greenland ice sheet) is 1.7 × 10^6 km^2 and the total freshwater ice volume is 1.6 × 10^3 km^3. This area is approximately equal to that of the Greenland ice sheet and the volume to snow on land (Northern Hemisphere). Such values now permit a more complete quantification of the cryosphere (evaluations already having been completed for other components, such as snow, glaciers, and sea ice) and provide a reference data set for assessing future climate-related changes.
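Degree-day ice growth models are commonly Stefan-type relations, with thickness proportional to the square root of accumulated freezing degree-days. The sketch below shows that form; the growth coefficient is a placeholder, not one of the study's 14 regional values:

```python
from math import sqrt

def ice_thickness_cm(daily_mean_temps_c, alpha=2.0):
    # Accumulate freezing degree-days (FDD) below 0 C, then apply a
    # Stefan-type growth law: h = alpha * sqrt(FDD).
    fdd = sum(max(0.0, -t) for t in daily_mean_temps_c)
    return alpha * sqrt(fdd)

# placeholder winter: 90 days at -10 C gives FDD = 900
h = ice_thickness_cm([-10.0] * 90)
```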

Brooks, Rheannon N.; Prowse, Terry D.; O'Connell, Ian J.



Processing Bare Quantifiers in Discourse  

PubMed Central

During reading or listening, language comprehenders construct a mental representation of the objects and events mentioned. This model is augmented and modified incrementally as the discourse unfolds. In this paper we focus on the interpretation of bare quantifiers, that is, expressions such as ‘two’, to investigate the processes underlying the construction and modification of the discourse model. Bare quantifiers are temporarily ambiguous when sentences are processed incrementally. For instance, in ‘Three ships were in the port. Two…’, ‘two’ can either refer to a subset of the set just mentioned (e.g., ‘two of the three ships’), a different set of the entities mentioned (e.g., ‘two other ships’), or a set of different entities (e.g., ‘two people’). Data from previous studies, and a current completion study, suggest that the subset interpretation is preferred over the establishment of a different set. The current study aimed to investigate ERP correlates of quantifier interpretation and their timing. Quantifiers that unambiguously signaled the establishment of a new referent elicited a late positive component (900-1500 ms), which we interpret as a Late Positive Complex, related to the difficulty involved in context updating. An additional 500-700 ms positivity was elicited only in a subset of readers, suggesting that there are individual differences in quantifier interpretation and the timing thereof.

Kaan, Edith; Dallas, Andrea C.; Barkley, Christopher M.



Processing bare quantifiers in discourse.  


During reading or listening, language comprehenders construct a mental representation of the objects and events mentioned. This model is augmented and modified incrementally as the discourse unfolds. In this paper we focus on the interpretation of bare quantifiers, that is, expressions such as 'two', to investigate the processes underlying the construction and modification of the discourse model. Bare quantifiers are temporarily ambiguous when sentences are processed incrementally. For instance, in 'Three ships were in the port. Two...', 'two' can either refer to a subset of the set just mentioned (e.g., 'two of the three ships'), a different set of the entities mentioned (e.g., 'two other ships'), or a set of different entities (e.g., 'two people'). Data from previous studies, and a current completion study, suggest that the subset interpretation is preferred over the establishment of a different set. The current study aimed to investigate ERP correlates of quantifier interpretation and their timing. Quantifiers that unambiguously signaled the establishment of a new referent elicited a late positive component (900-1500 ms), which we interpret as a Late Positive Complex, related to the difficulty involved in context updating. An additional 500-700 ms positivity was elicited only in a subset of readers, suggesting that there are individual differences in quantifier interpretation and the timing thereof. PMID:17070788

Kaan, Edith; Dallas, Andrea C; Barkley, Christopher M



Investigations of information quantifiers for the Tavis-Cummings model  

NASA Astrophysics Data System (ADS)

In this article, a system of two two-level atoms interacting with a single-mode quantized electromagnetic field in a lossless resonant cavity via a multi-photon transition is considered. The quantum Fisher information, negativity, classical Fisher information, and reduced von Neumann entropy for the two atoms are investigated. We found that the number of photon transitions plays an important role in the dynamics of different information quantifiers in the cases of two symmetric and two asymmetric atoms. Our results show that there is a close relationship between the different quantifiers. Also, the quantum and classical Fisher information can be useful for studying the properties of quantum states which are important in quantum optics and information.

Obada, A.-S. F.; Abdel-Khalek, S.; Berrada, K.; Shaheen, M. E.



Quantifying quantum correlations in fermionic systems using witness operators  

NASA Astrophysics Data System (ADS)

We present a method to quantify quantum correlations in arbitrary systems of indistinguishable fermions using witness operators. The method associates the problem of finding the optimal entanglement witness of a state with a class of problems known as semidefinite programs, which can be solved efficiently with arbitrary accuracy. Based on these optimal witnesses, we introduce a measure of quantum correlations which has an interpretation analogous to the Generalized Robustness of entanglement. We also extend the notion of quantum discord to the case of indistinguishable fermions, and propose a geometric quantifier, which is compared to our entanglement measure. Our numerical results show a remarkable equivalence between the proposed Generalized Robustness and the Schliemann concurrence, which are equal for pure states. For mixed states, the Schliemann concurrence presents itself as an upper bound for the Generalized Robustness. The quantum discord is also found to be an upper bound for the entanglement.

Iemini, Fernando; Maciel, Thiago O.; Debarba, Tiago; Vianna, Reinaldo O.



Quantifying and Assessing Learning Objectives  

Microsoft Academic Search

A number of studies have been conducted which use the Bloom taxonomy to improve teaching and learning. However, to our knowledge, neither the Bloom taxonomy nor any other established learning taxonomy has been used as a basis to develop a quantifiable tool that will enable teachers to analyse the cognitive process embedded in the objectives and assessment of a subject,

Julian D Gribble; Lois Meyer; Anna Jones


Resolution and Quantified Epistemic Logics  

Microsoft Academic Search

Quantified modal logics have emerged as useful tools in computer science for reasoning about knowledge and belief of agents and systems. An important class of these logics have a possible-world semantics from Kripke. Surprisingly, there has been relatively little work on proof theoretic methods that could be used in automatic deduction systems, although decision procedures for the propositional case have

Kurt Konolige



Quantifying cognitive decrements caused by cranial radiotherapy.  


Aside from survival, cognitive impairment stemming from the clinical management of cancer is a major factor dictating therapeutic outcome. For many patients afflicted with CNS and non-CNS malignancies, radiotherapy and chemotherapy offer the best options for disease control. These treatments, however, come at a cost, and nearly all cancer survivors (~11 million in the US alone as of 2006) incur some risk for developing cognitive dysfunction, with the most severe cases found in patients subjected to cranial radiotherapy (~200,000/yr) for the control of primary and metastatic brain tumors. Particularly problematic are pediatric cases, whose long-term survival, plagued by marked cognitive decrements, carries significant socioeconomic burdens. To date, there are still no satisfactory solutions to this significant clinical problem. We have addressed this serious health concern using transplanted stem cells to combat radiation-induced cognitive decline in athymic rats subjected to cranial irradiation. Details of the stereotaxic irradiation and the in vitro culturing and transplantation of human neural stem cells (hNSCs) can be found in our companion paper (Acharya et al., JoVE reference). Following irradiation and transplantation surgery, rats are then assessed for changes in cognition, grafted cell survival, and expression of differentiation-specific markers 1 and 4 months after irradiation. To critically evaluate the success or failure of any potential intervention designed to ameliorate radiation-induced cognitive sequelae, a rigorous series of quantitative cognitive tasks must be performed. To accomplish this, we subject our animals to a suite of cognitive testing paradigms, including novel place recognition, water maze, elevated plus maze, and fear conditioning, in order to quantify hippocampal and non-hippocampal learning and memory.
We have demonstrated the utility of these tests for quantifying specific types of cognitive decrements in irradiated animals, and used them to show that animals engrafted with hNSCs exhibit significant improvements in cognitive function. The cognitive benefits derived from engrafted human stem cells suggest that similar strategies may one day provide much needed clinical recourse to cancer survivors suffering from impaired cognition. Accordingly, we have provided written and visual documentation of the critical steps used in our cognitive testing paradigms to facilitate the translation of our promising results into the clinic. PMID:22042060

Christie, Lori-Ann; Acharya, Munjal M; Limoli, Charles L



On monotonicity of type ⟨1,1⟩ fuzzy quantifiers determined by fuzzy measures  

Microsoft Academic Search

In this contribution, we study a very important semantic property of generalized quantifiers called monotonicity for fuzzy quantifiers of type ⟨1,1⟩ defined using fuzzy measures and Sugeno-type fuzzy integrals. We show that fuzzy integrals can ensure, under some natural conditions, the monotonicity of fuzzy quantifiers. Finally, we propose the concept of concave fuzzy quantifiers and

Michal Holcapek; Antonin Dvorak



Quantifier processing can be dissociated from numerical processing: Evidence from semantic dementia patients.  


Quantifiers such as frequency adverbs (e.g., "always", "never") and quantity pronouns (e.g., "many", "none") convey quantity information. Whether quantifiers are processed as numbers or as general semantics has been a matter of much debate. Some neuropsychological and fMRI studies have found that the processing of quantifiers depends on the numerical magnitude comprehension system, but others have found that quantifier processing is associated with semantic representation. The selective impairment of language in semantic dementia patients provides a way to examine the above controversy. We administered a series of neuropsychological tests (i.e., language processing, numerical processing and semantic distance judgment) to two patients with different levels of severity in semantic dementia (mild vs. severe). The results showed that the two patients had intact numerical knowledge, but impairments in semantic processing. Moreover, the patient with severe/late semantic dementia showed more impairment in quantifier and semantic processing than the patient with mild/early semantic dementia. We concluded that quantifier processing is associated with general semantic processing, not with numerical processing. PMID:23867350

Cheng, Dazhi; Zhou, Aihong; Yu, Xing; Chen, Chuansheng; Jia, Jianping; Zhou, Xinlin



Television Quiz Show Simulation  

ERIC Educational Resources Information Center

This article explores the simulation of four television quiz shows for students in China studying English as a foreign language (EFL). It discusses the adaptation and implementation of television quiz shows and how the students reacted to them.

Hill, Jonnie Lynn



The Great Cometary Show  

NASA Astrophysics Data System (ADS)

The ESO Very Large Telescope Interferometer, which allows astronomers to scrutinise objects with a precision equivalent to that of a 130-m telescope, is proving itself an unequalled success every day. One of the latest instruments installed, AMBER, has led to a flurry of scientific results, an anthology of which is being published this week as special features in the research journal Astronomy & Astrophysics. [ESO PR Photo 06a/07: The AMBER Instrument] "With its unique capabilities, the VLT Interferometer (VLTI) has created itself a niche in which it provides answers to many astronomical questions, from the shape of stars, to discs around stars, to the surroundings of the supermassive black holes in active galaxies," says Jorge Melnick (ESO), the VLT Project Scientist. The VLTI has already led to 55 scientific papers and is in fact producing more than half of the interferometric results worldwide. "With the capability of AMBER to combine up to three of the 8.2-m VLT Unit Telescopes, we can really achieve what nobody else can do," added Fabien Malbet, from the LAOG (France) and the AMBER Project Scientist. Eleven articles will appear this week in Astronomy & Astrophysics' special AMBER section. Three of them describe the unique instrument, while the other eight reveal completely new results about the early and late stages in the life of stars. [ESO PR Photo 06b/07: The Inner Winds of Eta Carinae] The first results presented in this issue cover various fields of stellar and circumstellar physics. Two papers deal with very young solar-like stars, offering new information about the geometry of the surrounding discs and associated outflowing winds. Other articles are devoted to the study of hot active stars of particular interest: Alpha Arae, Kappa Canis Majoris, and CPD -57°2874. They provide new, precise information about their rotating gas envelopes. An important new result concerns the enigmatic object Eta Carinae.
Using AMBER with its high spatial and spectral resolution, it was possible to zoom into the very heart of this very massive star. In this innermost region, the observations are dominated by the extremely dense stellar wind that totally obscures the underlying central star. The AMBER observations show that this dense stellar wind is not spherically symmetric, but exhibits a clearly elongated structure. Overall, the AMBER observations confirm that the extremely high mass loss of Eta Carinae's massive central star is non-spherical and much stronger along the poles than in the equatorial plane. This is in agreement with theoretical models that predict such enhanced polar mass loss for rapidly rotating stars. [ESO PR Photo 06c/07: RS Ophiuchi in Outburst] Several papers from this special feature focus on the later stages in a star's life. One looks at the binary system Gamma 2 Velorum, which contains the closest known example of a Wolf-Rayet star. A single AMBER observation allowed the astronomers to separate the spectra of the two components, offering new insights into the modelling of Wolf-Rayet stars, and also made it possible to measure the separation between the two stars. This led to a new determination of the distance of the system, showing that previous estimates were incorrect. The observations also revealed information on the region where the winds from the two stars collide. The famous binary system RS Ophiuchi, an example of a recurrent nova, was observed just 5 days after it was discovered to be in outburst on 12 February 2006, an event that had been expected for 21 years. AMBER was able to detect the extension of the expanding nova emission. These observations show a complex geometry and kinematics, far from the simple interpretation of a spherical fireball in extension.
AMBER has detected a high-velocity jet probably perpendicular to the orbital plane of the binary system, and allowed a precise and careful study of the wind and the shockwave coming from the nova. The stream of results from the VLTI and AMBER



Quantifying reliability uncertainty : a proof of concept.  

SciTech Connect

This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
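As a hedged illustration of the Bayesian side for a single go/no-go component (a textbook simplification, not the paper's full mixed series/parallel method), a uniform Beta(1,1) prior on reliability updates to a Beta(1+s, 1+f) posterior after s successes and f failures:

```python
def posterior_mean_reliability(successes, failures):
    # Beta(1+s, 1+f) posterior from a uniform prior on reliability;
    # the posterior mean is the Laplace rule-of-succession estimate.
    return (successes + 1) / (successes + failures + 2)

# Zero failures in only ten tests still leaves the estimate well
# below 1, reflecting the sensitivity the abstract notes for
# components with no observed failures in few tests.
r = posterior_mean_reliability(10, 0)
```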

Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna (Los Alamos National Laboratory, Los Alamos, NM); Lorio, John F.; Fatherley, Quinn (Los Alamos National Laboratory, Los Alamos, NM); Anderson-Cook, Christine (Los Alamos National Laboratory, Los Alamos, NM); Wilson, Alyson G. (Los Alamos National Laboratory, Los Alamos, NM); Zurn, Rena M.



A Holographic Road Show.  

ERIC Educational Resources Information Center

Describes the viewing sessions and the holograms of a holographic road show. The traveling exhibits, believed to stimulate interest in physics, include a wide variety of holograms and demonstrate several physical principles.

Kirkpatrick, Larry D.; Rugheimer, Mac



The Diane Rehm Show  

NSDL National Science Digital Library

The Diane Rehm Show has its origins in a mid-day program at WAMU in Washington, D.C. Diane Rehm came on to host the program in 1979, and in 1984 it was renamed "The Diane Rehm Show". Over the past several decades, Rehm has played host to hundreds of guests, including Archbishop Desmond Tutu, Julie Andrews, and President Bill Clinton. This website contains an archive of her past programs, and visitors can use the interactive calendar to look through past shows. Those visitors looking for specific topics can use the "Topics" list on the left-hand side of the page, or also take advantage of the search engine. The show has a number of social networking links, including a Facebook page and a Twitter feed.


Tracking and Quantifying Objects and Non-Cohesive Substances  

ERIC Educational Resources Information Center

The present study tested infants' ability to assess and compare quantities of a food substance. Contrary to previous findings, the results suggest that by 10 months of age infants can quantify non-cohesive substances, and that this ability is different in important ways from their ability to quantify discrete objects: (1) In contrast to even much…

van Marle, Kristy; Wynn, Karen





Quantifying magnetite magnetofossil contributions to sedimentary magnetizations  

NASA Astrophysics Data System (ADS)

Under suitable conditions, magnetofossils (the inorganic remains of magnetotactic bacteria) can contribute to the natural remanent magnetization (NRM) of sediments. In recent years, magnetofossils have been shown to be preserved commonly in marine sediments, which makes it essential to quantify their importance in palaeomagnetic recording. In this study, we examine a deep-sea sediment core from offshore of northwestern Western Australia. The magnetic mineral assemblage is dominated by continental detritus and magnetite magnetofossils. By separating magnetofossil and detrital components based on their different demagnetization characteristics, it is possible to quantify their respective contributions to the sedimentary NRM throughout the Brunhes chron. In the studied core, the contribution of magnetofossils to the NRM is controlled by large-scale climate changes, with their relative importance increasing during glacial periods when detrital inputs were low. Our results demonstrate that magnetite magnetofossils can dominate sedimentary NRMs in settings where they are preserved in significant abundances.

Heslop, David; Roberts, Andrew P.; Chang, Liao; Davies, Maureen; Abrajevitch, Alexandra; De Deckker, Patrick



Quantifying Stimulus Discriminability: A Comparison of Information Theory and Ideal Observer Analysis  

Microsoft Academic Search

Performance in sensory discrimination tasks is commonly quantified using either information theory or ideal observer analysis. These two quantitative frameworks are often assumed to be equivalent. For example, higher mutual information is said to correspond to improved performance of an ideal observer in a stimulus estimation task. To the contrary, drawing on and extending previous results, we show that five

Eric E. Thomson; William B. Kristan Jr.



Quantifying fiber formation in meat analogs under high moisture extrusion using image processing  

Microsoft Academic Search

High-moisture extrusion using twin-screw extruders shows great promise for producing meat analog products with vegetable proteins. The resulting products have well-defined fiber formations and resemble real meat in both visual appearance and taste sensation. Developing reliable non-destructive techniques to quantify the textural properties of extrudates is important for quality control in the manufacturing process. In this study, we developed

J. Ranasinghesagara; F. Hsieh; G. Yao



Do Elephants Show Empathy?  

Microsoft Academic Search

Elephants show a rich social organization and display a number of unusual traits. In this paper, we analyse reports collected over a thirty-five year period, describing behaviour that has the potential to reveal signs of empathic understanding. These include coalition formation, the offering of protection and comfort to others, retrieving and 'babysitting' calves, aiding individuals that would otherwise have difficulty

Lucy A. Bates; Phyllis C. Lee; Norah Njiraini; Joyce H. Poole; Katito Sayialel; Soila Sayialel; Cynthia J. Moss; Richard W. Byrne



What Do Maps Show?  

ERIC Educational Resources Information Center

This curriculum packet, appropriate for grades 4-8, features a teaching poster which shows different types of maps (different views of Salt Lake City, Utah), as well as three reproducible maps and reproducible activity sheets which complement the maps. The poster provides teacher background, including step-by-step lesson plans for four geography…

Geological Survey (Dept. of Interior), Reston, VA.


Shakespearean Slide Shows.  

ERIC Educational Resources Information Center

Presents a condensed method for involving students in the kind of theatrical problem-solving that transforms a script to a play. Describes how to incorporate a "human slide show" into the class. Notes that students must read plays not just to understand events, but to make artistic choices about how to stage the action so that an audience…

Flynn, Rosalind M.



Stage a Water Show  

ERIC Educational Resources Information Center

In the author's book titled "The Incredible Water Show," the characters from "Miss Alaineus: A Vocabulary Disaster" used an ocean of information to stage an inventive performance about the water cycle. In this article, the author relates how she turned the story into hands-on science teaching for real-life fifth-grade students. The author also…

Frasier, Debra



ISU Demonstration Road Show  

NSDL National Science Digital Library

The Idaho State University Department of Physics conducts science demonstration shows at SE Idaho schools. Four different presentations are currently available; "Forces and Motion", "States of Matter", "Electricity and Magnetism", and "Sound and Waves". Student activities and descriptions of the demonstrated material are also provided.

Shropshire, Steven



Quantifying mixing using equilibrium reactions  

SciTech Connect

A method of quantifying equilibrium reactions in a microchannel using a fluorometric reaction of Fluo-4 and Ca^2+ ions is presented. Under the proper conditions, equilibrium reactions can be used to quantify fluid mixing without the challenges associated with constituent mixing measures such as limited imaging spatial resolution and viewing angle coupled with three-dimensional structure. Quantitative measurements of CaCl and calcium-indicating fluorescent dye Fluo-4 mixing are measured in Y-shaped microchannels. Reactant and product concentration distributions are modeled using Green's function solutions and a numerical solution to the advection-diffusion equation. Equilibrium reactions provide for an unambiguous, quantitative measure of mixing when the reactant concentrations are greater than 100 times their dissociation constant and the diffusivities are equal. At lower concentrations and for dissimilar diffusivities, the area averaged fluorescence signal reaches a maximum before the species have interdiffused, suggesting that reactant concentrations and diffusivities must be carefully selected to provide unambiguous, quantitative mixing measures. Fluorometric equilibrium reactions work over a wide range of pH and background concentrations such that they can be used for a wide variety of fluid mixing measures including industrial or microscale flows.
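For context, the classic Green's function solution for pure interdiffusion of two streams meeting at y = 0 in a Y-channel has a complementary-error-function profile. This sketch and its parameter values are illustrative assumptions, not the paper's model:

```python
from math import erfc, sqrt

def concentration(y_m, t_s, c0=1.0, d=5e-10):
    # 1-D interdiffusion of a species initially confined to y < 0:
    # c(y, t) = (c0 / 2) * erfc(y / (2 * sqrt(D * t)))
    # d is a placeholder small-molecule diffusivity in m^2/s.
    return 0.5 * c0 * erfc(y_m / (2.0 * sqrt(d * t_s)))

# At the original interface the concentration is always half the feed value.
mid = concentration(0.0, 1.0)
```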

Wheat, Philip M. [Department of Mechanical and Aerospace Engineering, Arizona State University, Tempe, Arizona 85287-6106 (United States); Posner, Jonathan D. [Department of Mechanical and Aerospace Engineering, Arizona State University, Tempe, Arizona 85287-6106 (United States); Department of Chemical Engineering, Arizona State University, Tempe, Arizona 85287-6106 (United States)



Viewing television talk shows  

Microsoft Academic Search

We examined how motivation, audience activity, and attitudes influenced the likelihood of watching societal-issue and relational topics on television talk programs. Path analysis supported differences in ritualized and instrumental motives for watching talk shows. Information and exciting-entertainment motivation predicted greater realism of, affinity with, involvement with, and intent to watch talk television. Pass-time motivation predicted reduced affinity with and intent

Alan M. Rubin; Mary M. Step



The Truman Show  

Microsoft Academic Search

The Truman Show is hardly a film you would automatically speak about as a game. At first glance, it is tempting to interpret the story of Truman Burbank — his perpetual subjection to the artificial (televisual) world of Seahaven and its gargantuan reality TV project, his eventual escape from the “OmniCam Ecosphere” building and the paternalistic surveillance of director Christof

Rolf F. Nohr


Quantifying pulsed laser induced damage to graphene  

SciTech Connect

As an emerging optical material, graphene's ultrafast dynamics are often probed using pulsed lasers yet the region in which optical damage takes place is largely uncharted. Here, femtosecond laser pulses induced localized damage in single-layer graphene on sapphire. Raman spatial mapping, SEM, and AFM microscopy quantified the damage. The resulting size of the damaged area has a linear correlation with the optical fluence. These results demonstrate local modification of sp{sup 2}-carbon bonding structures with optical pulse fluences as low as 14 mJ/cm{sup 2}, an order-of-magnitude lower than measured and theoretical ablation thresholds.

Currie, Marc; Caldwell, Joshua D.; Bezares, Francisco J.; Robinson, Jeremy; Anderson, Travis; Chun, Hayden; Tadjer, Marko [Optical Sciences Division and Electronics Science and Technology Division, Naval Research Laboratory, Washington DC 20375 (United States)



Slide Show Template  

Center for Biologics Evaluation and Research (CBER)

Text version excerpt: basic protocol approach, including a search strategy for identifying TEE using the Medical Dictionary for Regulatory Activities (MedDRA), and event counts after IVIG infusion.


Use of the Concept of Equivalent Biologically Effective Dose (BED) to Quantify the Contribution of Hyperthermia to Local Tumor Control in Radiohyperthermia Cervical Cancer Trials, and Comparison With Radiochemotherapy Results  

SciTech Connect

Purpose: To express the magnitude of contribution of hyperthermia to local tumor control in radiohyperthermia (RT/HT) cervical cancer trials, in terms of the radiation-equivalent biologically effective dose (BED) and to explore the potential of the combined modalities in the treatment of this neoplasm. Materials and Methods: Local control rates of both arms of each study (RT vs. RT+HT) reported from randomized controlled trials (RCT) on concurrent RT/HT for cervical cancer were reviewed. By comparing the two tumor control probabilities (TCPs) from each study, we calculated the HT-related log cell-kill and then expressed it in terms of the number of 2 Gy fraction equivalents, for a range of tumor volumes and radiosensitivities. We have compared the contribution of each modality and made some exploratory calculations on the TCPs that might be expected from a combined trimodality treatment (RT+CT+HT). Results: The HT-equivalent number of 2-Gy fractions ranges from 0.6 to 4.8 depending on radiosensitivity. Opportunities for clinically detectable improvement by the addition of HT are only available in tumors with an alpha value in the approximate range of 0.22-0.28 Gy{sup -1}. A combined treatment (RT+CT+HT) is not expected to improve prognosis in radioresistant tumors. Conclusion: The most significant improvements in TCP, which may result from the combination of RT/CT/HT for locally advanced cervical carcinomas, are likely to be limited only to those patients with tumors of relatively low-intermediate radiosensitivity.
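For reference, the radiation-equivalent BED used above is conventionally derived from the linear-quadratic model; a standard formulation (the paper's exact expressions may differ) for n fractions of dose d, together with the equivalent dose in 2-Gy fractions, is:

```latex
% Biologically effective dose for n fractions of dose d (Gy),
% with tissue-specific alpha/beta ratio (Gy):
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right)
% Equivalent total dose delivered in 2-Gy fractions:
\mathrm{EQD}_{2} = \frac{\mathrm{BED}}{1 + \frac{2}{\alpha/\beta}}
```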

Plataniotis, George A. [Department of Oncology, Aberdeen Royal Infirmary, Aberdeen (United Kingdom)], E-mail:; Dale, Roger G. [Imperial College Healthcare NHS Trust, London (United Kingdom)



Children's knowledge of hierarchical phrase structure: quantifier floating in Japanese.  


The interpretation of floating quantifiers in Japanese requires knowledge of hierarchical phrase structure. However, the input to children is insufficient or even misleading, as our analysis indicates. This presents an intriguing question on learnability: do children interpret floating quantifiers based on a structure-dependent rule which is not obvious in the input or do they employ a sentence comprehension strategy based on the available input? Two experiments examined four- to six-year-old Japanese-speaking children for their interpretations of floating quantifiers in SOV and OSV sentences. The results revealed that no child employed a comprehension strategy in terms of the linear ordering of constituents, and most five- and six-year-olds correctly interpreted floating quantifiers when word-order difficulty was reduced. These facts indicate that children's interpretation of floating quantifiers is structurally dependent on hierarchical phrase structure, suggesting that this knowledge is a part of children's grammar despite the insufficient input available to them. PMID:22850618

Suzuki, Takaaki; Yoshinaga, Naoko



NPR: The Picture Show  

NSDL National Science Digital Library

National Public Radio's "The Picture Show" photo blog is a great way to avoid culling through the thousands of less interesting and engaging photographs on the web. With a dedicated team of professionals, this blog brings together different posts that profile various sets of photographs that cover 19th century war in Afghanistan, visual memories of WWII, unpublished photographs of JFK's presidential campaign, and abandoned buildings on the islands in Boston Harbor. Visitors can search through previous posts, use social media features to share the photo features with friends, and also sign up to receive new materials via their RSS feed. There's quite a nice mix of material here, and visitors can also comment on the photos and recommend the collection to friends and others.


Egg: the Arts Show  

NSDL National Science Digital Library

"Egg is a new TV show about people making art across America" from PBS. This accompanying Website presents excerpts from sixteen episodes of the series, with three more "hatching soon," such as Close to Home, profiling three photographers: Jeanine Pohlhaus, whose pictures document her father's struggle with mental illness; Gregory Crewdson's photos of Lee, Massachusetts; and Joseph Rodriguez's photos of Hispanics in New York City. Excerpts include video clips, gallery listings where the artists' work can be seen, and short interviews with artists. Some episodes also offer "peeps," glimpses of material not shown on TV, such as the Space episode's peep, Shooting Stars, that provides directions for astrophotography, taking photographs of star trails. Other sections of the site are airdates, for local listings; see and do usa, where vacationers can search for art events at their destinations; and egg on the arts, a discussion forum.


American History Picture Show  

NSDL National Science Digital Library

In class we read Katie's Picture Show, a book about a girl who discovers art first-hand one day at an art museum in London. She realizes she can climb into the paintings, explore her surroundings, and even solve problems for the subjects of the paintings. As part of our unit on American history, we are going to use art to further learn about some of the important events we have been discussing. Each of these works of art depicts an important event in American History. When you click on a picture, you will be able to see the name of the event as well as the artist who created it. You will be using all three pictures for this assignment.Use the websites ...

Bennion, Ms.



Quantifying entanglement with witness operators  

SciTech Connect

We present a unifying approach to the quantification of entanglement based on entanglement witnesses, which includes several already established entanglement measures such as the negativity, the concurrence, and the robustness of entanglement. We then introduce an infinite family of new entanglement quantifiers, having as its limits the best separable approximation measure and the generalized robustness. Gaussian states, states with symmetry, states constrained to super-selection rules, and states composed of indistinguishable particles are studied under the view of the witnessed entanglement. We derive new bounds to the fidelity of teleportation d{sub min}, for the distillable entanglement E{sub D} and for the entanglement of formation. A particular measure, the PPT-generalized robustness, stands out due to its easy calculability and provides sharper bounds to d{sub min} and E{sub D} than the negativity in most of the states. We illustrate our approach studying thermodynamical properties of entanglement in the Heisenberg XXX and dimerized models.
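One of the established measures named above, the negativity, is straightforward to compute directly: it is the absolute sum of the negative eigenvalues of the partially transposed density matrix. A minimal sketch on generic two-qubit examples (not the witness-based construction of the paper):

```python
import numpy as np

def negativity(rho, dims=(2, 2)):
    """Entanglement negativity: absolute sum of the negative eigenvalues
    of the partial transpose of rho over the second subsystem."""
    da, db = dims
    r = rho.reshape(da, db, da, db)
    # Partial transpose on subsystem B: swap the two B indices.
    rho_pt = r.transpose(0, 3, 2, 1).reshape(da * db, da * db)
    eigs = np.linalg.eigvalsh(rho_pt)
    return float(-eigs[eigs < 0].sum())

# Maximally entangled Bell state |Phi+> = (|00> + |11>)/sqrt(2) ...
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_bell = np.outer(phi, phi)
# ... versus a separable product state |00><00|
rho_product = np.diag([1.0, 0.0, 0.0, 0.0])

print(negativity(rho_bell))     # ~0.5 for a Bell state
print(negativity(rho_product))  # 0.0 for a separable state
```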

Brandao, Fernando G.S.L. [Grupo de Informacao Quantica, Departamento de Fisica, Universidade Federal de Minas Gerais, Caixa Postal 702, Belo Horizonte, 30.123-970, MG (Brazil)



SPACE: an algorithm to predict and quantify alternatively spliced isoforms using microarrays  

PubMed Central

Exon and exon+junction microarrays are promising tools for studying alternative splicing. Current analytical tools applied to these arrays lack two relevant features: the ability to predict unknown spliced forms and the ability to quantify the concentration of known and unknown isoforms. SPACE is an algorithm that has been developed to (1) estimate the number of different transcripts expressed under several conditions, (2) predict the precursor mRNA splicing structure and (3) quantify the transcript concentrations including unknown forms. The results presented here show its robustness and accuracy for real and simulated data.

Anton, Miguel A; Gorostiaga, Dorleta; Guruceaga, Elizabeth; Segura, Victor; Carmona-Saez, Pedro; Pascual-Montano, Alberto; Pio, Ruben; Montuenga, Luis M; Rubio, Angel



A potential show stopper  

Microsoft Academic Search

The effects of thermal self-focusing instability on radio-wave power transmission in the SPS program are examined. It is found that, without lowering the power per beam, orbital radius, or operating wavelength of solar power satellites, the thermal self-focusing, fluctuation-amplifying instability will be excited in the F-region during at least some periods of the sunspot cycle. This instability could result in

J. E. Drummond



Critique of “Quantifying the effects of promoting smokeless ...  

Center for Biologics Evaluation and Research (CBER)

Text version excerpt (page 1): Critique of “Quantifying the effects of promoting smokeless tobacco as a harm reduction strategy in the USA” by ...


Quantifying astrophysical uncertainties on dark matter direct detection results  

NASA Astrophysics Data System (ADS)

We attempt to estimate the uncertainty in the constraints on the spin independent dark matter-nucleon cross section due to our lack of knowledge of the dark matter phase space in the galaxy. We fit the density of dark matter before investigating the possible solutions of the Jeans equation compatible with those fits in order to understand what velocity dispersions we might expect at the solar radius. We take into account the possibility of non-Maxwellian velocity distributions and the possible presence of a dark disk. Combining all these effects, we still find that the uncertainty in the interpretation of direct detection experiments for high (>100 GeV) mass dark matter candidates is less than an order of magnitude in cross section.

Fairbairn, Malcolm; Douce, Tom; Swift, Jace



Uniform quantifier elimination and constraint query processing  

Microsoft Academic Search

In this paper we introduce a variant of the quantifier elimination problem for the first order theory of real closed fields. Instead of considering a single quantified formula, we consider a uniform sequence of such formulas and eliminate quantifiers to obtain another uniform sequence. Our immediate motivation comes from a problem in the theory of constraint databases with real polynomial constraints. Using the uniform quantifier elimination algorithm,

Saugata Basu



Using `LIRA' To Quantify Diffuse Structure Around X-ray and Gamma-Ray Pulsars  

Microsoft Academic Search

In this poster, we exploit several capabilities of a Low-count Image Restoration and Analysis (LIRA) package, to quantify details of faint “scruffy” emission, consistent with PWN around X-ray and gamma-ray pulsars. Our preliminary results show evidence for irregular structure on scales of 1''-10'' or less (i.e. <500 pc), rather than larger smooth loops. Additionally, we can show this to be

Alanna Connors; Nathan M. Stein; David van Dyk; Aneta Siemiginowska; Vinay Kashyap; Mallory Roberts



Evaluation of two methods for quantifying passeriform lice  

PubMed Central

Two methods commonly used to quantify ectoparasites on live birds are visual examination and dust-ruffling. Visual examination provides an estimate of ectoparasite abundance based on an observer’s timed inspection of various body regions on a bird. Dust-ruffling involves application of insecticidal powder to feathers that are then ruffled to dislodge ectoparasites onto a collection surface where they can then be counted. Despite the common use of these methods in the field, the proportion of actual ectoparasites they account for has only been tested with Rock Pigeons (Columba livia), a relatively large-bodied species (238–302 g) with dense plumage. We tested the accuracy of the two methods using European Starlings (Sturnus vulgaris; ~75 g). We first quantified the number of lice (Brueelia nebulosa) on starlings using visual examination, followed immediately by dust-ruffling. Birds were then euthanized and the proportion of lice accounted for by each method was compared to the total number of lice on each bird as determined with a body-washing method. Visual examination and dust-ruffling each accounted for a relatively small proportion of total lice (14% and 16%, respectively), but both were still significant predictors of abundance. The number of lice observed by visual examination accounted for 68% of the variation in total abundance. Similarly, the number of lice recovered by dust-ruffling accounted for 72% of the variation in total abundance. Our results show that both methods can be used to reliably quantify the abundance of lice on European Starlings and other similar-sized passerines.
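The variance-explained figures quoted above (68% and 72%) are coefficients of determination from regressing total abundance on each method's count. A minimal sketch of that calculation, using hypothetical counts rather than the study's data:

```python
def r_squared(x, y):
    """Coefficient of determination for a simple least-squares line y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical counts: lice seen by visual exam vs. total recovered by washing.
visual = [3, 5, 2, 8, 12, 7, 4, 10]
total = [20, 33, 15, 55, 80, 46, 28, 66]
print(round(r_squared(visual, total), 3))
```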

Koop, Jennifer A. H.; Clayton, Dale H.



Physiological Relevance of Quantifying Segmental Contraction Synchrony  

PubMed Central

Background Most current indices of synchrony quantify left ventricular (LV) contraction pattern in terms of a single, global (integrated) measure. We report the development and physiological relevance of a novel method to quantify LV segmental contraction synchrony. Methods LV pressure-volume and echocardiographic data were collected in seven anesthetized, opened-chest dogs under several pacing modes: right atrial (RA) (control), right ventricular (RV) (dyssynchrony), and additional LV pacing at either apex (CRTa) or free wall (CRTf). Cross-correlation-based integrated (CCSIint) and segmental (CCSIseg) measures of synchrony were calculated from speckle-tracking derived radial strain, along with a commonly used index (maximum time delay). LV contractility was quantified using either Ees (ESPVR slope) or ESPVRarea (defined in the manuscript). Results RV pacing decreased CCSIint at LV base (0.95 ± 0.02 [RA] vs 0.64 ± 0.14 [RV]; P < 0.05) and only CRTa improved it (0.93 ± 0.03; P < 0.05 vs RV). The CCSIseg analysis identified anteroseptal and septal segments as being responsible for the low CCSIint during RV pacing and inferior segment for poor resynchronization with CRTf. Changes in ESPVRarea, and not in Ees, indicated depressed LV contractility with RV pacing, an observation consistent with significantly decreased global LV performance (stroke work [SW]: 252 ± 23 [RA] vs 151 ± 24 [RV] mJ; P < 0.05). Only CRTa improved SW and contractility (SW: 240 ± 19 mJ; ESPVRarea: 545 ± 175 mmHg•mL; both P < 0.01 vs RV). Only changes in CCSIseg and global LV contractility were strongly correlated (R2 = 0.698, P = 0.005). Conclusion CCSIseg provided insights into the changes in LV integrated contraction pattern and a better link to global LV contractility changes.




Quantifying Hyporheic Exchange at Sagehen Creek, California  

NASA Astrophysics Data System (ADS)

Quantifying heat and water exchange at the hyporheic zone is an important basis for understanding stream ecology and biogeochemistry. Unfortunately, at many streams the nature and extent of hyporheic exchange between surface water and shallow groundwater is unknown. In this study we use both field observations and model simulations to better understand the hyporheic exchange at Sagehen Creek, an alpine stream located in the Central Sierra Nevada, California. Three transects, each containing shallow piezometers and a stream gage, were established at Sagehen Creek, each fitted with pressure and temperature loggers to continuously monitor the head and temperature of both surface streamflow and near surface groundwater during the late summer and fall of 2011. These observations are then used with a model to simulate hyporheic water and heat exchange direction and magnitude. The results of this study allow us to quantify how the hyporheic exchange differs between snowmelt dominated high flow periods and groundwater dominated low flow periods. The 2011 water year in the Sierra Nevada was unusual in that there was higher than average snowfall, late snowfall, and late snowmelt, resulting in streams such as Sagehen Creek having a later peak discharge date than average. Therefore, trends observed this year as to how groundwater drains or contributes to streamflow may provide a basis for examining how abnormally late snowmelt influences stream water quality, nutrient dynamics, and water sourcing and availability.

Heslop, J.; Boyle, D. P.



An algorithm for quantifying dependence in multivariate data sets  

NASA Astrophysics Data System (ADS)

We describe an algorithm to quantify dependence in a multivariate data set. The algorithm is able to identify any linear and non-linear dependence in the data set by performing a hypothesis test for two variables being independent. As a result we obtain a reliable measure of dependence. In high energy physics understanding dependencies is especially important in multidimensional maximum likelihood analyses. We therefore describe the problem of a multidimensional maximum likelihood analysis applied on a multivariate data set with variables that are dependent on each other. We review common procedures used in high energy physics and show that general dependence is not the same as linear correlation and discuss their limitations in practical application. Finally we present the tool CAT, which is able to perform all reviewed methods in a fully automatic mode and creates an analysis report document with numeric results and visual review.
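Distance correlation is one standard statistic with the properties described in this abstract: it vanishes only under independence and detects non-linear dependence that Pearson correlation misses. The abstract does not name the statistic it uses, so this is an illustrative sketch, paired with a permutation test of the independence hypothesis:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation: ~0 for independent samples, positive
    for any (also non-linear) dependence, unlike Pearson's r."""
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[:, None]
    def centered(z):
        d = np.abs(z - z.T)                      # pairwise distance matrix
        return d - d.mean(axis=0) - d.mean(axis=1, keepdims=True) + d.mean()
    A, B = centered(x), centered(y)
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    if dcov2 <= 0 or dvar_x * dvar_y == 0:
        return 0.0
    return float(np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y)))

def independence_pvalue(x, y, n_perm=500, seed=0):
    """Permutation test of H0: x and y are independent."""
    rng = np.random.default_rng(seed)
    observed = distance_correlation(x, y)
    exceed = sum(
        distance_correlation(x, rng.permutation(y)) >= observed
        for _ in range(n_perm)
    )
    return (exceed + 1) / (n_perm + 1)

# y = x^2 has zero linear correlation on a symmetric x, but strong dependence.
x = np.linspace(-1, 1, 60)
y = x ** 2
print(distance_correlation(x, y), independence_pvalue(x, y, n_perm=200))
```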

Feindt, M.; Prim, M.



Quantifying the fluvial autogenic processes: Tank Experiments  

NASA Astrophysics Data System (ADS)

The evolution of deltaic shorelines has long been explained by allogenic changes in the environment such as changes in tectonics, base level, and sediment supply. Recently, the importance of autogenic cyclicity has been recognized in concert with allogenic forcing. Decoupling autogenic variability from allogenic signatures is essential in order to understand depositional systems and the stratigraphic record; however, autogenic behavior in sedimentary environments is not understood well enough to separate it from allogenic factors. Data drawn from model experiments that isolate the autogenic variability from allogenic forcing are the key to understanding and predicting autogenic responses in fluvial and deltaic systems. Here, three experiments using a constant water discharge (Qw) with a varying sediment flux (Qs) are conducted to examine the autogenic variability in a fluviodeltaic system. The experimental basin has dimensions of 1 m x 1 m, and a sediment/water mixture was delivered into it. The sediment mixture contained 50% fine sand (0.1 mm) and 50% coarse sand (2 mm) by volume. The delta was built over a flat, non-erodible surface into a standing body of water with a constant base level and no subsidence. The autogenic responses of the fluvial and deltaic systems were captured by time-lapse images and the shoreline position was mapped to quantify the autogenic processes. The autogenic response to varying sediment supply while maintaining constant water supply includes changes in 1) the slope of the fluvial-surface, 2) the frequency of autogenic storage and release events, and 3) shoreline roughness. Interestingly, the data shows a non-linear relationship between the frequency of autogenic cyclicity and the ratio of sediment supply to water discharge.
The successive increase in the sediment supply and thus the increase in the ratio of Qs to Qw caused the slope of the fluvial surface to increase, and the frequency of autogenic sediment storage and release events to increase, but in a non-linear manner. This non-linear increase results from the autogenic frequency not increasing by a factor of 2 when the sediment flux increases by a factor of 2. Since the experimental data suggests that the frequency of autogenic variability is also related to the slope of the fluvial-surface, an increase in the fluvial slope would force the fluvial system to experience larger autogenic processes over a longer period of time. These three experiments are part of a larger matrix of nine total flume experiments, which explore variations in sediment supply, water discharge, and Qs/Qw to better understand fluvial autogenic processes.

Powell, E. J.; Kim, W.; Muto, T.



An approach to quantifying the efficiency of a Bayesian filter  

NASA Astrophysics Data System (ADS)

Data assimilation is the Bayesian conditioning of uncertain model simulations on observations to reduce uncertainty about model states. In practice, it is common to make simplifying assumptions about the prior and posterior state distributions, and to employ approximations of the likelihood function, which can reduce the efficiency of the filter. We propose metrics that quantify how much of the uncertainty in a Bayesian posterior state distribution is due to (i) the observation operator, (ii) observation error, and (iii) approximations of Bayes' Law. Our approach uses discrete Shannon entropy to quantify uncertainty, and we define the utility of an observation (for reducing uncertainty about a model state) as the ratio of the mutual information between the state and observation to the entropy of the state prior. These metrics make it possible to analyze the efficiency of a proposed observation system and data assimilation strategy, and provide a way to examine the propagation of information through the dynamic system model. We demonstrate the procedure on the problem of estimating profile soil moisture from observations at the surface (top 5 cm). The results show that when synthetic observations of 5 cm soil moisture are assimilated into a three-layer model of soil hydrology, the ensemble Kalman filter does not use all of the information available in observations.
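The utility metric described above, the ratio of mutual information between state and observation to the entropy of the state prior, can be sketched for discrete samples (toy data; not the soil-moisture system of the study):

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy (bits) of a discrete sample."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), from paired discrete samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def observation_utility(states, observations):
    """Mutual information over prior entropy: 1.0 means the observation
    removes all uncertainty about the state, 0.0 means none."""
    h = entropy(states)
    return mutual_information(states, observations) / h if h else 0.0

# Toy example: a noiseless observation of a binary state is fully informative.
states = [0, 1, 0, 1, 1, 0, 1, 0]
print(observation_utility(states, states))             # identical -> 1.0
print(observation_utility(states, [0] * len(states)))  # constant  -> 0.0
```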

Nearing, Grey S.; Gupta, Hoshin V.; Crow, Wade T.; Gong, Wei



Quantifying lateral femoral condyle ellipticalness in chimpanzees, gorillas, and humans.  


Articular surfaces of limb bones provide information for understanding animal locomotion because their size and shape are a reflection of habitual postures and movements. Here we present a novel method for quantifying the ellipticalness (i.e., departure from perfectly circular) of the lateral femoral condyle (LFC), applying this technique to hominid femora. Three-dimensional surface models were created for 49 Homo sapiens, 34 Pan troglodytes and 25 Gorilla gorilla femora. Software was developed that fit separate cylinders to each of the femoral condyles. These cylinders were constrained to have a single axis, but could have different radii. The cylinder fit to the LFC was allowed to assume an elliptical cross-section, while the cylinder fit to the medial condyle was constrained to remain circular. The shape of the elliptical cylinder (ratio of the major and minor axes of the ellipse) was recorded, and the orientation of the elliptical cylinder quantified as angles between the major axis of the ellipse and the anatomical and mechanical axes of the femur. Species were compared using analysis of variance and post hoc multiple comparisons tests. Confirming qualitative descriptions, human LFCs are more elliptical than those of chimpanzees and gorillas. Human femora exhibit a narrow range for the angle between the major axis of the elliptical cylinder and femoral axes. Conversely, the chimpanzee sample is bimodal for these angles, exhibiting two ellipse orientations, while Gorilla shows no preferred angle. Our results suggest that like modern human femora, chimpanzee femoral condyles have preferentially used regions. PMID:23042636

Sylvester, Adam D; Pfisterer, Theresa



Quantifying dielectrophoretic nanoparticle response to amplitude modulated input signal  

NASA Astrophysics Data System (ADS)

A new experimental system and theoretical model have been developed to systematically quantify and analyse the movement of nanoparticles subjected to continuously pulsed, or amplitude modulated, dielectrophoretic (DEP) input signal. Modulation DEP-induced concentration fluctuations of fluorescently labelled 0.5 µm and 1.0 µm diameter latex nanospheres, localized near castellated electrode edges, were quantified using real-time fluorescence microscope dielectrophoretic spectroscopy. Experimental measurements show that the fluorescence fluctuations decrease as the modulation frequency increases—in agreement with model predictions. The modulation frequency was varied from 25 × 10-3 to 25 Hz and the duty-cycle ratios ranged from zero to unity. Two new parameters for characterizing DEP nanoparticle transport are defined: the modulation frequency bandwidth and the optimal duty-cycle ratio. The ‘on/off’ modulation bandwidth, for micrometre scale movement, was measured to be 0.6 Hz and 1.0 Hz for 1.0 µm and 0.5 µm diameter nanospheres, respectively. At these cut-off frequencies very little movement of the nanospheres could be microscopically observed. Optimal fluorescence fluctuations, for modulation frequencies ranging from 0.25 to 1.0 Hz, occurred for duty-cycle ratio values ranging from 0.3 to 0.7—agreeing with theory. The results are useful for automated DEP investigations and associated technologies.
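An amplitude-modulated (on/off duty-cycled) input signal of the kind described can be sketched numerically; the parameter values below are illustrative only (real DEP carriers run far above 100 Hz), and this is not the authors' apparatus:

```python
import numpy as np

def modulated_signal(t, carrier_hz, mod_hz, duty):
    """Sine carrier gated on/off by a rectangular envelope:
    'duty' is the fraction of each modulation period the field is on."""
    envelope = (t * mod_hz) % 1.0 < duty
    return np.sin(2 * np.pi * carrier_hz * t) * envelope

fs = 10_000.0                    # sampling rate (Hz); illustrative value
t = np.arange(0, 4.0, 1 / fs)    # 4 s of signal
sig = modulated_signal(t, carrier_hz=100.0, mod_hz=0.5, duty=0.3)

# The measured 'on' fraction recovers the duty-cycle ratio.
on_fraction = np.mean(np.abs(sig) > 0)
print(round(on_fraction, 3))
```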

Bakewell, D. J.; Chichenkov, A.



A Generalizable Methodology for Quantifying User Satisfaction  

NASA Astrophysics Data System (ADS)

Quantifying user satisfaction is essential, because the results can help service providers deliver better services. In this work, we propose a generalizable methodology, based on survival analysis, to quantify user satisfaction in terms of session times, i.e., the length of time users stay with an application. Unlike subjective human surveys, our methodology is based solely on passive measurement, which is more cost-efficient and better able to capture subconscious reactions. Furthermore, by using session times, rather than a specific performance indicator, such as the level of distortion of voice signals, the effects of other factors like loudness and sidetone can also be captured by the developed models. Like survival analysis, our methodology is characterized by low complexity and a simple model-developing process. The feasibility of our methodology is demonstrated through case studies of ShenZhou Online, a commercial MMORPG in Taiwan, and the most prevalent VoIP application in the world, namely Skype. Through the model development process, we can also identify the most significant performance factors and their impacts on user satisfaction and discuss how they can be exploited to improve user experience and optimize resource allocation.
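Survival analysis of session times, as proposed above, typically starts from a Kaplan-Meier estimate in which sessions still running at measurement time are censored. A self-contained sketch with hypothetical session data (the paper's models are more elaborate):

```python
def kaplan_meier(durations, events):
    """Kaplan-Meier survival estimate for session times.
    durations: session lengths; events: True if the session ended,
    False if it was censored (still running when measurement stopped).
    Returns a list of (time, S(t)) steps at each event time."""
    data = sorted(zip(durations, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for d, e in data[i:] if d == t and e)
        ties = sum(1 for d, _ in data[i:] if d == t)
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= ties
        i += ties
    return curve

# Hypothetical session times in minutes; False marks a censored session.
times = [5, 8, 8, 12, 20, 30]
events = [True, True, False, True, True, False]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```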

Huang, Te-Yuan; Chen, Kuan-Ta; Huang, Polly; Lei, Chin-Laung


Computed tomography to quantify tooth abrasion  

NASA Astrophysics Data System (ADS)

Cone-beam computed tomography, also termed digital volume tomography, has become a standard technique in dentistry, allowing for fast 3D jaw imaging including denture at moderate spatial resolution. More detailed X-ray images of restricted volumes for post-mortem studies in dental anthropology are obtained by means of micro computed tomography. The present study evaluates the impact of the pipe smoking wear on teeth morphology comparing the abraded tooth with its contra-lateral counterpart. A set of 60 teeth, loose or anchored in the jaw, from 12 dentitions have been analyzed. After the two contra-lateral teeth were scanned, one dataset has been mirrored before the two datasets were registered using affine and rigid registration algorithms. Rigid registration provides three translational and three rotational parameters to maximize the overlap of two rigid bodies. For the affine registration, three scaling factors are incorporated. Within the present investigation, affine and rigid registrations yield comparable values. The restriction to the six parameters of the rigid registration is not a limitation. The differences in size and shape between the tooth and its contra-lateral counterpart generally exhibit only a few percent in the non-abraded volume, validating that the contralateral tooth is a reasonable approximation to quantify, for example, the volume loss as the result of long-term clay pipe smoking. Therefore, this approach allows quantifying the impact of the pipe abrasion on the internal tooth morphology including root canal, dentin, and enamel volumes.
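The rigid registration step described above (three translations, three rotations) can be illustrated with the Kabsch algorithm, which solves the least-squares rigid alignment when point correspondences are known. The study's registration of mirrored tooth surfaces would not have known correspondences, so treat this only as a sketch of the underlying computation:

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid registration (Kabsch): find R, t minimizing
    ||(P @ R.T + t) - Q||, with P, Q as (n, 3) arrays of matched points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Toy check: rotate and translate a point cloud, then recover the motion.
rng = np.random.default_rng(1)
P = rng.normal(size=(30, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([2.0, -1.0, 0.5])
R, t = rigid_register(P, Q)
rmsd = np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))
print(rmsd)
```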

Kofmehl, Lukas; Schulz, Georg; Deyhle, Hans; Filippi, Andreas; Hotz, Gerhard; Berndt-Dagassan, Dorothea; Kramis, Simon; Beckmann, Felix; Müller, Bert



Quantifying the synchronizability of externally driven oscillators  

NASA Astrophysics Data System (ADS)

This paper is focused on the problem of complete synchronization in arrays of externally driven identical or slightly different oscillators. These oscillators are coupled by common driving which makes an occurrence of generalized synchronization between a driving signal and response oscillators possible. Therefore, the phenomenon of generalized synchronization is also analyzed here. The research is concentrated on the cases of an irregular (chaotic or stochastic) driving signal acting on continuous-time (Duffing systems) and discrete-time (Henon maps) response oscillators. As a tool for quantifying the robustness of the synchronized state, response (conditional) Lyapunov exponents are applied. The most significant result presented in this paper is a novel method of estimation of the largest response Lyapunov exponent. This approach is based on the complete synchronization of two twin response subsystems via additional master-slave coupling between them. Examples of the method application and its comparison with the classical algorithm for calculation of Lyapunov exponents are widely demonstrated. Finally, the idea of effective response Lyapunov exponents, which allows us to quantify the synchronizability in case of slightly different response oscillators, is introduced.
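The paper's method estimates response Lyapunov exponents from twin response subsystems; the closely related classical two-trajectory (Benettin) scheme, sketched here on a single Henon map rather than a driven response system, conveys the core idea of repeatedly renormalizing a perturbed twin:

```python
import math

def henon(x, y, a=1.4, b=0.3):
    """One step of the Henon map."""
    return 1.0 - a * x * x + y, b * x

def largest_lyapunov(n_iter=20_000, d0=1e-8):
    """Benettin-style estimate: track a reference orbit and a perturbed twin,
    renormalizing their separation back to d0 after every step."""
    x, y = 0.1, 0.1
    for _ in range(1000):                  # discard transient
        x, y = henon(x, y)
    xp, yp = x + d0, y                     # perturbed twin trajectory
    acc = 0.0
    for _ in range(n_iter):
        x, y = henon(x, y)
        xp, yp = henon(xp, yp)
        d = math.hypot(xp - x, yp - y)
        acc += math.log(d / d0)
        xp = x + (xp - x) * d0 / d         # renormalize along current offset
        yp = y + (yp - y) * d0 / d
    return acc / n_iter

print(round(largest_lyapunov(), 3))        # ~0.42 for the classic parameters
```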

Stefański, Andrzej
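The classical baseline against which the paper's twin-subsystem method is compared — estimating the largest response (conditional) Lyapunov exponent from two identically driven response copies — can be sketched as follows. The drive and response maps, parameters, and function name below are illustrative assumptions, not the systems studied in the paper:

```python
import math

def conditional_lyapunov(a=3.8, eps=0.05, n=20000, transient=1000):
    """Two-trajectory estimate of the largest response (conditional)
    Lyapunov exponent for a driven logistic-type map
        y' = a*y*(1 - y) + eps*u,   drive: u' = 4*u*(1 - u).
    Two identical response copies receive the same drive; their
    separation is renormalized every step, and the exponent is the
    mean logarithmic growth rate of that separation."""
    d0 = 1e-8
    u, y1 = 0.123, 0.456
    y2 = y1 + d0
    total, steps = 0.0, 0
    for i in range(n + transient):
        u = 4.0 * u * (1.0 - u)               # common chaotic drive
        y1 = a * y1 * (1.0 - y1) + eps * u
        y2 = a * y2 * (1.0 - y2) + eps * u
        d = y2 - y1
        if d == 0.0:                          # measure-zero tangency guard
            d = d0
        if i >= transient:
            total += math.log(abs(d) / d0)
            steps += 1
        y2 = y1 + (d0 if d > 0 else -d0)      # renormalize separation
    return total / steps
```

A positive exponent means the common drive cannot completely synchronize even identical response copies; synchronizability requires the largest conditional exponent to be negative.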



Quantifying asymmetry of quantum states using entanglement  

NASA Astrophysics Data System (ADS)

For open systems, symmetric dynamics do not always lead to conservation laws. We show that, for a dynamic symmetry associated with a compact Lie group, one can derive new selection rules from entanglement theory. These selection rules apply to both closed and open systems as well as reversible and irreversible time evolutions. Our approach is based on an embedding of the system's Hilbert space into a tensor product of two Hilbert spaces allowing for the symmetric dynamics to be simulated with local operations. The entanglement of the embedded states determines which transformations are forbidden because of the symmetry. In fact, every bipartite entanglement monotone can be used to quantify the asymmetry of the initial states. Moreover, where the dynamics is reversible, each of these monotones becomes a new conserved quantity.

Toloui, Borzu



Quantifying recrystallization by electron backscatter diffraction.  


The use of high-resolution electron backscatter diffraction in the scanning electron microscope to quantify the volume fraction of recrystallization and the recrystallization kinetics is discussed. Monitoring the changes of high-angle grain boundary (HAGB) content during annealing is shown to be a reliable method of determining the volume fraction of recrystallization during discontinuous recrystallization, where a large increase in the percentage of high-angle boundaries occurs during annealing. The results are shown to be consistent with the standard methods of studying recrystallization, such as quantitative metallography and hardness testing. Application of the method to a highly deformed material has shown that it can be used to identify the transition from discontinuous to continuous recrystallization during which there is no significant change in the percentage of HAGB during annealing. PMID:15009691

Jazaeri, H; Humphreys, F J



Quantifying immersion in virtual reality  

Microsoft Academic Search

Virtual Reality (VR) has generated much excitement but little formal proof that it is useful. Because VR interfaces are difficult and expensive to build, the computer graphics community needs to be able to predict which applications will benefit from VR. In this paper, we show that users with a VR interface complete a search task faster than users with

Randy F. Pausch; Dennis Proffitt; George H. Williams



The S locus-linked Primula homeotic mutant sepaloid shows characteristics of a B-function mutant but does not result from mutation in a B-function gene.  


Floral homeotic and flower development mutants of Primula, including double, Hose in Hose, Jack in the Green and Split Perianth, have been cultivated since the late 1500s as ornamental plants but until recently have attracted limited scientific attention. Here we describe the characterization of a new mutant phenotype, sepaloid, that produces flowers comprising only sepals and carpels. The sepaloid mutation is recessive, and is linked to the S locus that controls floral heteromorphy. The phenotype shows developmental variability, with flowers containing three whorls of sepals surrounding fertile carpels, two whorls of sepals with a diminished third whorl of sepals surrounding a fourth whorl of carpels, or three whorls of sepals surrounding abnormal carpels. In some respects, these phenotypes resemble the Arabidopsis and Antirrhinum homeotic B-function mutants apetala3/deficiens (ap3/def) and pistillata/globosa (pi/glo). We have isolated the Primula vulgaris B-function genes PvDEFICIENS (PvDEF) and PvGLOBOSA (PvGLO), expression of both of which is affected in the sepaloid mutant. PvGLO, like sepaloid, is linked to the S locus, whereas PvDEF is not. However, our analyses reveal that sepaloid and PvGLO represent different genes. We conclude that SEPALOID is an S-linked independent regulator of floral organ identity genes including PvDEF and PvGLO. PMID:18564384

Li, Jinhong; Webster, Margaret; Dudas, Brigitta; Cook, Holly; Manfield, Iain; Davies, Brendan; Gilmartin, Philip M



Verifying Mixed Real-Integer Quantifier Elimination  

Microsoft Academic Search

We present a formally verified quantifier elimination procedure for the first-order theory of linear mixed real-integer arithmetic in higher-order logic, based on work by Weispfenning. To this end we provide two verified quantifier elimination procedures: one for Presburger arithmetic and one for linear real arithmetic.

Amine Chaieb



Quantifying temporal ventriloquism in audiovisual synchrony perception.  


The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from rhythm perception. In this method, participants had to align the temporal position of a target in a rhythmic sequence of four markers. In the first experiment, target and markers consisted of a visual flash or an auditory noise burst, and all four combinations of target and marker modalities were tested. In the same-modality conditions, no temporal biases and a high precision of the adjusted temporal position of the target were observed. In the different-modality conditions, we found a systematic temporal bias of 25-30 ms. In the second part of the first experiment and in a second experiment, we tested conditions in which audiovisual markers with different stimulus onset asynchronies (SOAs) between the two components and a visual target were used to quantify temporal ventriloquism. The adjusted target positions varied by up to about 50 ms and depended in a systematic way on the SOA and its proximity to the point of subjective synchrony. These data allowed us to test different quantitative models. The most satisfying model, based on work by Maij, Brenner, and Smeets (Journal of Neurophysiology 102, 490-495, 2009), linked temporal ventriloquism and the percept of synchrony and was capable of adequately describing the results from the present study, as well as those of some earlier experiments. PMID:23868564

Kuling, Irene A; Kohlrausch, Armin; Juola, James F



Quantifying chaos for ecological stoichiometry  

NASA Astrophysics Data System (ADS)

The theory of ecological stoichiometry considers ecological interactions among species with different chemical compositions. Both experimental and theoretical investigations have shown the importance of species composition in the outcome of the population dynamics. A recent study of a theoretical three-species food chain model considering stoichiometry [B. Deng and I. Loladze, Chaos 17, 033108 (2007)] shows that coexistence between two consumers preying on the same prey is possible via chaos. In this work we study the topological and dynamical measures of the chaotic attractors found in such a model under ecologically relevant parameters. By using the theory of symbolic dynamics, we first compute the topological entropy associated with unimodal Poincaré return maps obtained by Deng and Loladze from a dimension reduction. With this measure we numerically prove chaotic competitive coexistence, which is characterized by positive topological entropy and positive Lyapunov exponents, achieved when the first predator reduces its maximum growth rate, as happens at increasing ?1. However, for higher values of ?1 the dynamics become stable again due to an asymmetric bubble-like bifurcation scenario. We also show that a decrease in the efficiency of the predator sensitive to prey's quality (increasing parameter ?) stabilizes the dynamics. Finally, we estimate the fractal dimension of the chaotic attractors for the stoichiometric ecological model.

Duarte, Jorge; Januário, Cristina; Martins, Nuno; Sardanyés, Josep
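The topological-entropy computation described above can be illustrated on a generic unimodal map. The sketch below estimates the entropy from the growth of distinct symbolic words along an orbit of a logistic map; the map, parameters, and word length are placeholder assumptions, not the Poincaré return maps of the stoichiometric model:

```python
import math

def topological_entropy(a=3.99, word_len=10, n_iter=200000, x0=0.3):
    """Estimate the topological entropy of a unimodal map
    x' = a*x*(1 - x) via symbolic dynamics: encode the orbit as
    0/1 (left/right of the critical point 1/2) and take the growth
    rate of the number of distinct admissible words,
        h ~ ln(N_n) / n   for word length n."""
    x = x0
    bits = []
    for _ in range(n_iter):
        x = a * x * (1.0 - x)
        bits.append('0' if x < 0.5 else '1')
    seq = ''.join(bits)
    words = {seq[i:i + word_len] for i in range(len(seq) - word_len)}
    return math.log(len(words)) / word_len
```

For the fully chaotic logistic map (a = 4) the exact value is ln 2 ≈ 0.693; a positive estimate signals chaos, which is how positive topological entropy certifies chaotic coexistence in the abstract above.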



Quantifying diet for nutrigenomic studies.  


The field of nutrigenomics shows tremendous promise for improved understanding of the effects of dietary intake on health. The knowledge that metabolic pathways may be altered in individuals with genetic variants in the presence of certain dietary exposures offers great potential for personalized nutrition advice. However, although considerable resources have gone into improving technology for measurement of the genome and biological systems, dietary intake assessment remains inadequate. Each of the methods currently used has limitations that may be exaggerated in the context of gene × nutrient interaction in large multiethnic studies. Because of the specificity of most gene × nutrient interactions, valid data are needed for nutrient intakes at the individual level. Most statistical adjustment efforts are designed to improve estimates of nutrient intake distributions in populations and are unlikely to solve this problem. An improved method of direct measurement of individual usual dietary intake that is unbiased across populations is urgently needed. PMID:23642200

Tucker, Katherine L; Smith, Caren E; Lai, Chao-Qiang; Ordovas, Jose M



National Orange Show Photovoltaic Demonstration  

SciTech Connect

The National Orange Show Photovoltaic Demonstration created a 400 kW photovoltaic self-generation plant at the National Orange Show Events Center (NOS). The NOS owns a 120-acre state fairground where it operates an events center and produces an annual citrus fair known as the Orange Show. The NOS governing board wanted to employ cost-saving programs for annual energy expenses. It is hoped the photovoltaic program will result in overall savings for the NOS, help reduce the State's demand for electrical power, improve quality of life within the affected grid area, and increase the energy efficiency of buildings at the venue. In addition, the potential to reduce operational expenses would have a tremendous effect on the ability of the NOS to serve its community.

Dan Jimenez (NOS); Sheri Raborn, CPA (National Orange Show); Tom Baker (California Construction Authority)



Quantifying Inductive Bias: AI Learning Algorithms and Valiant's Learning Framework  

Microsoft Academic Search

We show that the notion of inductive bias in concept learning can be quantified in a way that directly relates to learning performance in the framework recently introduced by Valiant. Our measure of bias is based on the growth function introduced by Vapnik and Chervonenkis, and on the Vapnik-Chervonenkis dimension. We measure some common language biases, including restriction to conjunctive

David Haussler



Common ecology quantifies human insurgency.  


Many collective human activities, including violence, have been shown to exhibit universal patterns. The size distributions of casualties both in whole wars from 1816 to 1980 and terrorist attacks have separately been shown to follow approximate power-law distributions. However, the possibility of universal patterns ranging across wars in the size distribution or timing of within-conflict events has barely been explored. Here we show that the sizes and timing of violent events within different insurgent conflicts exhibit remarkable similarities. We propose a unified model of human insurgency that reproduces these commonalities, and explains conflict-specific variations quantitatively in terms of underlying rules of engagement. Our model treats each insurgent population as an ecology of dynamically evolving, self-organized groups following common decision-making processes. Our model is consistent with several recent hypotheses about modern insurgency, is robust to many generalizations, and establishes a quantitative connection between human insurgency, global terrorism and ecology. Its similarity to financial market models provides a surprising link between violent and non-violent forms of human behaviour. PMID:20016600

Bohorquez, Juan Camilo; Gourley, Sean; Dixon, Alexander R; Spagat, Michael; Johnson, Neil F



A sensitive method to quantify human long DNA in stool: relevance to colorectal cancer screening.  


Human long DNA in stool may reflect nonapoptotic exfoliation and has been used as a colorectal cancer (CRC) marker. Targeting human-specific Alu repeats represents a logical but untested approach. A real-time Alu PCR assay was developed for quantifying long human DNA in stool and evaluated in this study. The accuracy and reproducibility of this assay and the stability of long DNA during room-temperature fecal storage were assessed using selected patient stools and stools spiked with human DNA. Thereafter, long DNA levels were determined in blinded fashion from 18 CRC patients and 20 colonoscopically normal controls. Reproducibility of real-time Alu PCR for quantifying fecal long DNA was high (r2 = 0.99; P < 0.01). Long DNA levels in nonbuffered stools stored at room temperature fell a median of 75% by 1 day and 81% by 3 days. An EDTA buffer preserved DNA integrity during such storage. Human long DNA was quantifiable in all stools but was significantly higher in stools from CRC patients than from normal controls (P < 0.05). At a specificity of 100%, the sensitivity of long DNA for CRC was 44%. Results indicate that real-time Alu PCR is a simple method to sensitively quantify long human DNA in stool. This study shows that not all CRCs are associated with increased fecal levels of long DNA. Long DNA degrades with fecal storage, and measures to stabilize this analyte must be considered for optimal use of this marker. PMID:16775168

Zou, Hongzhi; Harrington, Jonathan J; Klatt, Kristie K; Ahlquist, David A



Quantifying Tsunami Impact on Structures  

NASA Astrophysics Data System (ADS)

Tsunami impact is usually assessed through inundation simulations and maps which provide estimates of coastal flooding zones based on "credible worst case" scenarios. Earlier maps relied on one-dimensional computations, but two-dimensional computations are now employed routinely. In some cases, the maps do not represent flooding from any particular scenario event, but present an inundation line that reflects the worst inundation at this particular location among a range of scenario events. Current practice in tsunami resistant design relies on estimates of tsunami impact forces derived from empirical relationships that have been borrowed from riverine flooding calculations, which involve only inundation elevations. We examine this practice critically. Recent computational advances allow for calculation of additional parameters from scenario events such as the detailed distributions of tsunami currents and fluid accelerations, and this suggests that alternative and more comprehensive expressions for calculating tsunami impact and tsunami forces should be examined. We do so, using model output for multiple inundation simulations of Seaside, Oregon, as part of a pilot project to develop probabilistic tsunami hazard assessment methodologies for incorporation into FEMA Flood Insurance Rate Maps. We consider three different methods, compare the results with existing methodology for estimating forces and impact, and discuss the implications of these methodologies for probabilistic tsunami hazard assessment.

Yalciner, A. C.; Kanoglu, U.; Titov, V.; Gonzalez, F.; Synolakis, C. E.



Quantifying the Erlenmeyer flask deformity  

PubMed Central

Objective Erlenmeyer flask deformity is a common radiological finding in patients with Gaucher's disease; however, no definition of this deformity exists and the reported prevalence of the deformity varies widely. To devise an easily applied definition of this deformity, we investigated a cohort of knee radiographs in which there was consensus between three experienced radiologists as to the presence or absence of Erlenmeyer flask morphology. Methods Using the presence or absence of Erlenmeyer flask morphology as a benchmark, we measured the diameter of the femur at the level of the physeal scar and serially at defined intervals along the metadiaphysis. Results A measured ratio in excess of 0.57 between the diameter of the femoral shaft 4 cm from the physis and the diameter of the physeal baseline itself on a frontal radiograph of the knee predicted the Erlenmeyer flask deformity with 95.6% sensitivity and 100% specificity in our series of 43 independently diagnosed adults with Gaucher's disease. Application of this method to the distal femur detected the Erlenmeyer flask deformity reproducibly and was simple to carry out. Conclusion Unlike diagnostic assignments based on subjective review, our simple procedure for identifying the modelling deformity is based on robust quantitative measurement: it should facilitate comparative studies between different groups of patients, and may allow more rigorous exploration of the pathogenesis of the complex osseous manifestations of Gaucher's disease to be undertaken.

Carter, A; Rajan, P S; Deegan, P; Cox, T M; Bearcroft, P
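The decision rule reported in the abstract reduces to a single ratio test. A minimal sketch, with the 0.57 threshold taken from the abstract; the function name and the example measurements are illustrative:

```python
def erlenmeyer_flask_positive(shaft_diameter_4cm, physeal_baseline,
                              threshold=0.57):
    """Predict Erlenmeyer flask deformity on a frontal knee
    radiograph: positive when the femoral shaft diameter measured
    4 cm from the physis exceeds 0.57 of the physeal baseline
    diameter (threshold from the abstract; units cancel)."""
    return shaft_diameter_4cm / physeal_baseline > threshold

# normally modelled femur: shaft tapers to ~45% of the baseline
print(erlenmeyer_flask_positive(36.0, 80.0))   # False
# failure of modelling: shaft remains ~65% of the baseline
print(erlenmeyer_flask_positive(52.0, 80.0))   # True
```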



Quantifying error distributions in crowding.  


When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogenous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance. PMID:23525133

Hanus, Deborah; Vul, Edward



Quantifying capital goods for waste incineration.  


Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000-240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used amounting to 19,000-26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000-5000 MW h. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The material and energy used for the construction corresponded to the emission of 7-14 kg CO2 per tonne of waste combusted throughout the lifetime of the incineration plant. The assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2-3% with respect to kg CO2 per tonne of waste combusted. PMID:23561797

Brogaard, L K; Riber, C; Christensen, T H
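The headline figure of 7-14 kg CO2 per tonne follows from allocating the one-off construction burden over the plant's lifetime throughput. A minimal sketch of that arithmetic; the plant size, lifetime, and embodied-CO2 total below are illustrative assumptions chosen to be consistent with the ranges in the abstract:

```python
def construction_co2_per_tonne(total_construction_co2_t,
                               annual_capacity_t, lifetime_years):
    """Allocate the one-off construction burden (capital goods)
    over all waste combusted during the plant's lifetime, and
    convert tonnes of CO2 to kg CO2 per tonne of waste."""
    lifetime_throughput_t = annual_capacity_t * lifetime_years
    return total_construction_co2_t * 1000.0 / lifetime_throughput_t

# illustrative plant: 200,000 t/yr for 20 years, 40,000 t CO2 embodied
print(construction_co2_per_tonne(40_000, 200_000, 20))  # 10.0 kg CO2/t
```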



Polymer microlenses for quantifying cell sheet mechanics  

PubMed Central

Mechanical interactions between individual cells and their substrate have been studied extensively over the past decade; however, understanding how these interactions change as cells interact with neighboring cells in the development of a cell sheet, or early stage tissue, is less developed. We use a recently developed experimental technique for quantifying the mechanics of confluent cell sheets. Living cells are cultured on a thin film of polystyrene [PS], which is attached to a patterned substrate of crosslinked poly(dimethyl siloxane) [PDMS] microwells. As cells attach to the substrate and begin to form a sheet, they apply sufficient contractile force to buckle the PS film over individual microwells to form a microlens array. The curvature for each microlens is measured by confocal microscopy and can be related to the strain and stress applied by the cell sheet using simple mechanical analysis for the buckling of thin films. We demonstrate that this technique can provide insight into the important materials properties and length scales that govern cell sheet responses, especially the role of stiffness of the substrate. We show that intercellular forces can lead to significantly different behaviors than the ones observed for individual cells, where focal adhesion is the relevant parameter.

Miquelard-Garnier, Guillaume; Zimberlin, Jessica A.; Sikora, Christian B.; Wadsworth, Patricia



Quantifying Saturation-Dependent Anisotropy in Soil Hydraulic Conductivity  

NASA Astrophysics Data System (ADS)

The anisotropy in unsaturated hydraulic conductivity is saturation-dependent. Accurate characterization of soil anisotropy is very important in simulating flow and contaminant transport (e.g., of radioactive nuclides at Hanford). A recently proposed tensorial connectivity-tortuosity (TCT) concept describes the hydraulic conductivity tensor of unsaturated anisotropic soils as the product of a scalar variable, the symmetric connectivity-tortuosity tensor, and the hydraulic conductivity tensor at saturation. In this study, the TCT model is used to quantify soil anisotropy in unsaturated hydraulic conductivity. The results show that the anisotropy coefficient, A, is independent of soil water retention properties. At a given saturation, A can be characterized by the ratio of the saturated hydraulic conductivities and the difference in the tortuosity-connectivity coefficients in orthogonal directions. The model was tested using directional measurements of unsaturated hydraulic conductivity of undisturbed soil cores. Results show that the TCT model can describe different types of soil anisotropy previously ignored in other models. The TCT model can describe either monotonic increases or decreases in A with saturation and allows the principal direction of hydraulic conductivity to rotate when saturation varies. The Pacific Northwest National Laboratory is operated for the U.S. Department of Energy by Battelle under Contract DE-AC06-76RL01830.

Zhang, Z. F.; Ward, A. L.; Gee, G. W.; White, M. D.; Keller, J. M.
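The abstract states that A depends only on the ratio of the saturated conductivities and the difference of the directional connectivity-tortuosity coefficients. Under a power-law reading of the TCT model (K_i(Se) = Ks_i * Se**L_i, which is an assumption here, since the abstract does not give the functional form), that statement becomes a one-line formula; the parameter values below are also illustrative:

```python
def tct_anisotropy(Se, Ks_h, Ks_v, L_h, L_v):
    """Saturation-dependent anisotropy coefficient under a power-law
    reading of the TCT model: with directional conductivities
    K_i(Se) = Ks_i * Se**L_i, the ratio is
        A(Se) = (Ks_h / Ks_v) * Se**(L_h - L_v),
    i.e. the saturated-conductivity ratio times a factor set by the
    difference of the connectivity-tortuosity coefficients."""
    return (Ks_h / Ks_v) * Se ** (L_h - L_v)

# with L_h < L_v the anisotropy grows monotonically as the soil drains
for Se in (1.0, 0.5, 0.1):
    print(Se, tct_anisotropy(Se, Ks_h=2.0e-5, Ks_v=1.0e-5, L_h=1.0, L_v=2.0))
```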



Quantifying drug-protein binding in vivo.  

SciTech Connect

Accelerator mass spectrometry (AMS) provides precise quantitation of isotope labeled compounds that are bound to biological macromolecules such as DNA or proteins. The sensitivity is high enough to allow for sub-pharmacological (''micro-'') dosing to determine macromolecular targets without inducing toxicities or altering the system under study, whether it is healthy or diseased. We demonstrated an application of AMS in quantifying the physiologic effects of one dosed chemical compound upon the binding level of another compound in vivo at sub-toxic doses [4].We are using tissues left from this study to develop protocols for quantifying specific binding to isolated and identified proteins. We also developed a new technique to quantify nanogram to milligram amounts of isolated protein at precisions that are comparable to those for quantifying the bound compound by AMS.

Buchholz, B; Bench, G; Keating III, G; Palmblad, M; Vogel, J; Grant, P G; Hillegonds, D



View of hospital district, showing cannon in foreground, showing building ...  

Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

View of hospital district, showing cannon in foreground, showing building H1 at left, showing building H72 in background; camera facing north. - Mare Island Naval Shipyard, East of Nave Drive, Vallejo, Solano County, CA


Development of a peptide mapping procedure to identify and quantify methionine oxidation in recombinant human alpha1-antitrypsin.  


A peptide mapping procedure was developed to identify and quantify methionine oxidation in recombinant human alpha1-antitrypsin. Due to the protein's complex structural biochemistry, chromatographic analysis of methionine-containing digest peptides was a significant challenge. However, by using a combination of mass spectrometry, protein engineering, and high-temperature reversed-phase liquid chromatography, we were able to identify methionine residues that are susceptible to oxidation by hydrogen peroxide and to quantify their reactivity. Our results show that five of the protein's 10 methionine residues are susceptible to oxidation at neutral pH, four of which are localized to the active site region. PMID:11822379

Griffiths, Steven W; Cooney, Charles L



Quantifying Annual Aboveground Net Primary Production in the Intermountain West  

Technology Transfer Automated Retrieval System (TEKTRAN)

As part of a larger project, methods were developed to quantify current year growth on grasses, forbs, and shrubs. Annual aboveground net primary production (ANPP) data are needed for this project to calibrate results from computer simulation models and remote-sensing data. Measuring annual ANPP of ...


Quantifying Hepatic Shear Modulus In Vivo Using Acoustic Radiation Force  

Microsoft Academic Search

The speed at which shear waves propagate in tissue can be used to quantify the shear modulus of the tissue. As many groups have shown, shear waves can be generated within tissues using focused, impulsive, acoustic radiation force excitations, and the resulting displacement response can be ultrasonically tracked through time. The goals of the work herein are twofold: (i) to

M. L. Palmeri; M. H. Wang; J. J. Dahl; K. D. Frinkley; K. R. Nightingale



Fat Stigmatization in Television Shows and Movies: A Content Analysis  

Microsoft Academic Search

Objective: To examine the phenomenon of fat stigmatization messages presented in television shows and movies, a content analysis was used to quantify and categorize fat-specific commentary and humor.Research Methods and Procedures: Fat stigmatization vignettes were identified using a targeted sampling procedure, and 135 scenes were excised from movies and television shows. The material was coded by trained raters. Reliability indices

Susan M. Himes; J. Kevin Thompson



Quantifying long-range correlations in complex networks beyond nearest neighbors  

NASA Astrophysics Data System (ADS)

We propose a fluctuation analysis to quantify spatial correlations in complex networks. The approach considers the sequences of degrees along shortest paths in the networks and quantifies the fluctuations in analogy to time series. In this work, the Barabasi-Albert (BA) model, the Cayley tree at the percolation transition, a fractal network model, and examples of real-world networks are studied. While the fluctuation functions for the BA model show exponential decay, in the case of the Cayley tree and the fractal network model the fluctuation functions display a power law behavior. The fractal network model comprises long-range anticorrelations. The results suggest that the fluctuation exponent provides complementary information to the fractal dimension.

Rybski, D.; Rozenfeld, H. D.; Kropp, J. P.
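A simple variant of the proposed fluctuation analysis can be sketched with the standard library alone: grow a BA-style graph, collect degree sequences along shortest paths, and measure their spread at each distance. The specific fluctuation function below (standard deviation of the mean path degree) is an illustrative choice and may differ from the one used in the paper:

```python
import random
from collections import deque, defaultdict
from statistics import pstdev

def ba_graph(n=300, m=3, seed=1):
    """Grow a Barabasi-Albert-style graph using the repeated-nodes
    trick for preferential attachment (stdlib only)."""
    rng = random.Random(seed)
    adj = defaultdict(set)
    targets = list(range(m))        # first new node attaches to 0..m-1
    repeated = []                   # nodes repeated once per edge endpoint
    for new in range(m, n):
        for t in set(targets):
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)
        repeated.extend([new] * m)
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj

def degree_fluctuations(adj, n_sources=30, seed=2):
    """Treat the degree sequence along each shortest path as a short
    'time series': for every node at BFS distance d from a source,
    record the mean degree along the path, then report the spread of
    those means per distance as a fluctuation function F(d)."""
    rng = random.Random(seed)
    samples = defaultdict(list)
    for src in rng.sample(list(adj), n_sources):
        parent = {src: None}        # BFS tree gives one shortest path each
        queue = deque([src])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in parent:
                    parent[w] = v
                    queue.append(w)
        for v in parent:
            path, u = [], v
            while u is not None:    # walk the path back to the source
                path.append(u)
                u = parent[u]
            if len(path) > 1:
                samples[len(path) - 1].append(
                    sum(len(adj[u]) for u in path) / len(path))
    return {d: pstdev(vals) for d, vals in samples.items() if len(vals) > 1}
```

Plotting F(d) against d and fitting its decay (exponential versus power law) is then the analogue of the exponent comparison reported in the abstract.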



Shakespeare and other English Renaissance authors as characterized by Information Theory complexity quantifiers  

NASA Astrophysics Data System (ADS)

We introduce novel Information Theory quantifiers in a computational linguistic study that involves a large corpus of English Renaissance literature. The 185 texts studied (136 plays and 49 poems in total), with first editions that range from 1580 to 1640, form a representative set of its period. Our data set includes 30 texts unquestionably attributed to Shakespeare; in addition we also included A Lover’s Complaint, a poem which generally appears in Shakespeare collected editions but whose authorship is currently in dispute. Our statistical complexity quantifiers combine the power of Jensen-Shannon’s divergence with the entropy variations as computed from a probability distribution function of the observed word use frequencies. Our results show, among other things, that for a given entropy poems display higher complexity than plays, that Shakespeare’s work falls into two distinct clusters in entropy, and that his work is remarkable for its homogeneity and for its closeness to overall means.

Rosso, Osvaldo A.; Craig, Hugh; Moscato, Pablo
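A quantifier in this spirit — combining normalized Shannon entropy with the Jensen-Shannon divergence to the uniform distribution, as in the statistical complexity measures associated with Rosso and colleagues — can be sketched as follows; the exact construction used in the paper may differ, and the input text is a toy example:

```python
import math
from collections import Counter

def shannon(p):
    """Shannon entropy (natural log) of a probability vector."""
    return -sum(x * math.log(x) for x in p if x > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two aligned distributions."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return shannon(m) - (shannon(p) + shannon(q)) / 2

def statistical_complexity(text):
    """Entropy-disequilibrium complexity of a word-frequency
    distribution: C = H_norm * Q, where H_norm is Shannon entropy
    normalized by log(N) and Q is the Jensen-Shannon divergence to
    the uniform distribution, normalized by its maximum (attained
    at a point mass)."""
    counts = Counter(text.lower().split())
    n = sum(counts.values())
    p = [c / n for c in counts.values()]
    N = len(p)
    if N < 2:
        return 0.0
    h_norm = shannon(p) / math.log(N)
    uniform = [1.0 / N] * N
    q_max = js_divergence([1.0] + [0.0] * (N - 1), uniform)
    return h_norm * js_divergence(p, uniform) / q_max
```

Comparing C at fixed entropy across texts is what separates the genres in the abstract: two texts with the same entropy can still differ in how far their word-use distributions sit from uniformity.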



New approaches to quantifying aortic stenosis severity.  


Previously, aortic valve stenosis (AS) etiology was usually congenital or due to rheumatic disease. However, the most frequent cause is now degenerative AS, which is often part of a continuum including increased rigidity of the aorta due to atherosclerosis and left ventricular dysfunction due to coronary artery disease. This article highlights newer approaches to quantify AS taking into account the inter-relation between the different components (valvular, vascular, and ventricular) affecting clinical outcome in these patients. Emphasis is given to a more comprehensive evaluation of AS severity going beyond classical measurements and including indices such as 1) the energy loss index to quantify the valvular obstruction net of pressure recovery; 2) systemic arterial compliance to quantify vascular load; and 3) valvular-arterial impedance to assess the global (valvular + vascular) increase in afterload. Routine use of these indices, easily measured by Doppler echocardiography, should improve clinical management of AS patients. PMID:18417008

Dumesnil, Jean G; Pibarot, Philippe; Akins, Cary
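The three indices named in the abstract can be sketched with the formulas commonly used in the echocardiography literature (e.g., by Pibarot and colleagues); these definitions and the example values below are assumptions supplied here for illustration, not taken from the abstract:

```python
def energy_loss_index(eoa_cm2, aa_cm2, bsa_m2):
    """Energy loss index: valvular obstruction net of pressure
    recovery, ELI = [EOA*AA / (AA - EOA)] / BSA (cm2/m2), with EOA
    the effective orifice area and AA the aortic cross-section."""
    return (eoa_cm2 * aa_cm2 / (aa_cm2 - eoa_cm2)) / bsa_m2

def systemic_arterial_compliance(svi_ml_m2, pulse_pressure_mmhg):
    """Vascular load: SAC = stroke volume index / pulse pressure."""
    return svi_ml_m2 / pulse_pressure_mmhg

def valvulo_arterial_impedance(sap_mmhg, mean_gradient_mmhg, svi_ml_m2):
    """Global (valvular + vascular) afterload:
    Zva = (systolic arterial pressure + mean gradient) / SVI."""
    return (sap_mmhg + mean_gradient_mmhg) / svi_ml_m2

# illustrative values for a patient with significant AS
print(round(energy_loss_index(0.9, 3.5, 1.8), 2))          # cm2/m2
print(round(valvulo_arterial_impedance(130, 40, 40), 2))   # mmHg/ml/m2
```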



Quantifying mixed-state quantum entanglement by optimal entanglement witnesses  

NASA Astrophysics Data System (ADS)

We develop an approach of quantifying entanglement in mixed quantum states by the optimal entanglement witness operator. We identify the convex set of mixed states for which a single witness provides the exact value of an entanglement measure and show that the convexity, properties, and symmetries of entanglement or of a target state considerably fix the form of the optimal witness. This greatly reduces the difficulty in computing and experimentally determining entanglement measures. As an example, we show how to experimentally quantify bound entanglement in four-qubit noisy Smolin states and three-qubit Greenberger-Horne-Zeilinger entanglement under white noise. For general measures and states, we provide a numerical method to efficiently optimize the witness.

Lee, S.-S. B.; Sim, H.-S.



Quantifying the Ripple: Word-of-Mouth and Advertising Effectiveness  

Microsoft Academic Search

In this article the authors demonstrate how a customer lifetime value approach can provide a better assessment of advertising effectiveness that takes into account postpurchase behaviors such as word-of-mouth. Although for many advertisers word-of-mouth is viewed as an alternative to advertising, the authors show that it is possible to quantify the way in which word-of-mouth often complements and extends the




Asia: Showing the Changing Seasons  

NSDL National Science Digital Library

SeaWiFS false color data showing seasonal change in the oceans and on land for Asia. The data is seasonally averaged, and shows the sequence: fall, winter, spring, summer, fall, winter, spring (for the Northern Hemisphere).

Allen, Jesse; Newcombe, Marte; Feldman, Gene



Quantified Histopathology of the Keratoconic Cornea  

PubMed Central

Purpose The present study systematically investigated and quantified histopathological changes in a series of keratoconic (Kc) corneas utilizing a physiologically formulated fixative so as not to further distort the already distorted diseased corneas. Methods Twelve surgically removed Kc corneal buttons were immediately preserved and processed for light and transmission electron microscopy using an established corneal protocol. Measurements were taken from the central cone and peripheral regions of the host button. The sample size examined ranged in length from 390–2608 µm centrally and 439–2242 µm peripherally. Results The average corneal thickness was 437 µm centrally and 559 µm peripherally. Epithelial thickness varied centrally from 14–92 µm and peripherally from 30–91 µm. A marked thickening of the epithelial basement membrane was noted in 58% of corneas. Centrally, the anterior limiting lamina (ALL) was thinned or lost over 60% of the area examined, while the peripheral cornea was also affected, but to a lesser extent. Histopathologically, the posterior cornea remained undisturbed by the disease. Anteriorly in the stroma, an increased number of cells and tissue debris were encountered, and some of these cells were clearly not keratocytes. Conclusions It is concluded that Kc pathology, at least initially, has a distinct anterior focus involving the epithelium, ALL and anterior stroma. The epithelium had lost its cellular uniformity and was compromised by the loss of or damage to the ALL. The activity of the hitherto unreported recruited stromal cells may be to break down and remove the ALL and anterior stromal lamellae, leading to the overall thinning that accompanies this disease.

Mathew, Jessica H.; Goosey, John D.; Bergmanson, Jan P. G.




Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS LINCOLN BOULEVARD, BIG LOST RIVER, AND NAVAL REACTORS FACILITY. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-101-2. DATED OCTOBER 12, 1965. INEL INDEX CODE NUMBER: 075 0101 851 151969. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID


Quantifying microsleep to help assess subjective sleepiness  

Microsoft Academic Search

Background: The qualitative presence of microsleep during the multiple sleep latency test (MSLT) has been shown to correlate with an increased incidence of subjective complaints of sleepiness, tiredness, accidents/near accidents, and gap driving. However, there are no data on how to quantify microsleep and effectively incorporate it as a diagnostic tool in the measurement of sleepiness. The purpose of this study

Allen J. Blaivas; Rajeshri Patel; David Hom; Kenneth Antigua; Hormoz Ashtyani



Quantifying Learning Outcomes: A Gentle Approach.  

ERIC Educational Resources Information Center

In fall 1993, Coffeyville Community College (CCC) in Kansas announced the implementation of a new program known as Quantifying Learning Outcomes (QLO), one component of the college's Student Outcomes Assessment Plan. QLO calls for a clear articulation of CCC's goals regarding instructional support materials within the next 6 to 12 months; the…

Lind, Donald J.


Quantifying the Reuse of Learning Objects  

ERIC Educational Resources Information Center

This paper reports the findings of one case study from a larger project, which aims to quantify the claimed efficiencies of reusing learning objects to develop e-learning resources. The case study describes how an online inquiry project "Diabetes: A waste of energy" was developed by searching for, evaluating, modifying and then integrating as…

Elliott, Kristine; Sweeney, Kevin



Partial Cylindrical Algebraic Decomposition for Quantifier Elimination  

Microsoft Academic Search

The Cylindrical Algebraic Decomposition method (CAD) decomposes $R^r$ into regions over which given polynomials have constant signs. An important application of CAD is quantifier elimination in elementary algebra and geometry. In this paper we present a method which intermingles CAD construction with truth evaluation so that parts of the CAD are constructed only as needed to further truth evaluation and

George E. Collins; Jay H. Hong



Smart velocity ranging quantifiable optical microangiography  

Microsoft Academic Search

We introduce a new type of Optical Microangiography (OMAG), called Quantifiable Optical Microangiography (QOMAG), which is capable of performing quantitative flow imaging with smart velocity ranging. In order to extract multi-range velocities, two three-dimensional data sets need to be acquired over the same imaging area. One data set performs dense scanning in the B-scan direction, and Doppler analysis was done

Zhongwei Zhi; Ruikang K. Wang



Quantifying a design process based on experiments  

Microsoft Academic Search

The authors identify important factors that quantify the software design process, by which they mean the phase of software development that spans specification and coding. Through two experiments using student subjects, they identify the quality of the specification and programmer effort as important factors

Hideo Kudo; Yuji Sugiyama; M. Fujii; K. Torii



Quantifying the Advantage of Looking Forward  

NASA Astrophysics Data System (ADS)

We introduce a future orientation index to quantify the degree to which Internet users worldwide seek more information about years in the future than years in the past. We analyse Google logs and find a striking correlation between the country's GDP and the predisposition of its inhabitants to look forward.
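The abstract's index can be illustrated with a minimal sketch; the ratio form, function name, and volume numbers below are illustrative assumptions, not the authors' actual computation.

```python
# Minimal sketch of a "future orientation index" (FOI): how much more users
# search for the coming year than the previous one. Volumes are invented.

def future_orientation_index(volumes: dict, current_year: int) -> float:
    """Ratio of search volume for (current_year + 1) to (current_year - 1)."""
    return volumes[current_year + 1] / volumes[current_year - 1]

# Invented yearly query volumes for queries mentioning each year:
volumes = {2011: 120_000, 2012: 500_000, 2013: 90_000}
foi = future_orientation_index(volumes, 2012)  # 90_000 / 120_000 = 0.75
```

A value above 1 would indicate a forward-looking population; the paper's point is that such an index correlates with GDP across countries.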

Preis, Tobias; Moat, Helen Susannah; Stanley, H. Eugene; Bishop, Steven R.



A Quantifiable Alternative to Double Data Entry  

Microsoft Academic Search

Recent articles in this journal have questioned the effectiveness of double data entry to enhance the quality of clinical trials data entered into the computer from case report forms. Although double data entry is widely used, no definitive agreement has been reached as to how to model and quantify the time/cost involved to perform double data entry and the exact

Dennis W King; Roderick Lashley



Quantifying the Thermal Fatigue of CPV Modules  

NASA Astrophysics Data System (ADS)

A method is presented to quantify thermal fatigue in the CPV die-attach from meteorological data. A comparative study between cities demonstrates a significant difference in the accumulated damage. These differences are most sensitive to the number of larger (ΔT) thermal cycles experienced for a location. High-frequency data (<1/min) may be required to most accurately employ this method.
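The abstract does not give the damage model, but a common way to turn counted ΔT cycles into accumulated die-attach damage is a Coffin-Manson life law combined with Miner's rule; the sketch below assumes that form, with an invented coefficient and exponent.

```python
# Hedged sketch: accumulate die-attach thermal-fatigue damage from counted
# temperature cycles via a Coffin-Manson life law and Miner's rule.
# The coefficient a and exponent m are illustrative, not values from the study.

def cycles_to_failure(delta_t: float, a: float = 1e7, m: float = 2.0) -> float:
    """Coffin-Manson style life estimate: N_f = a * dT^(-m)."""
    return a * delta_t ** (-m)

def accumulated_damage(cycle_counts: dict) -> float:
    """Miner's rule: damage = sum(n_i / N_f(dT_i)); failure is predicted at 1.0."""
    return sum(n / cycles_to_failure(dt) for dt, n in cycle_counts.items())

# One year of cycles binned by temperature swing dT in kelvin (invented counts):
damage = accumulated_damage({5.0: 3000, 20.0: 400, 40.0: 25})
```

With m = 2, each 40 K swing costs 64 times the damage of a 5 K swing, which is why counts of large-ΔT cycles dominate site-to-site differences, as the abstract notes.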

Bosco, Nick; Kurtz, Sarah



Quantifying Item Dependency by Fisher's Z.  

ERIC Educational Resources Information Center

Three aspects of the usual approach to assessing local item dependency, Yen's "Q" (H. Huynh, H. Michaels, and S. Ferrara, 1995), deserve further investigation. First, Pearson correlation coefficients do not distribute normally when the coefficients are large, and thus cannot quantify the dependency well. Second, the accuracy of item response…
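Fisher's Z itself is a one-line transform that maps Pearson correlations onto an approximately normal scale; a minimal sketch, with illustrative values:

```python
import math

# Fisher's Z-transform: z = atanh(r). Large correlations, which are skewed on
# the r scale, are stretched onto an approximately normal z scale so they can
# be compared and averaged. Example values are illustrative.

def fisher_z(r: float) -> float:
    """Equivalently 0.5 * log((1 + r) / (1 - r))."""
    return math.atanh(r)

z_small = fisher_z(0.1)  # ~0.100: nearly unchanged near zero
z_large = fisher_z(0.9)  # ~1.472: stretched far out, unlike the bounded r scale
```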

Shen, Linjun


Risk Quantified Structural Design and Evaluation.  

National Technical Information Service (NTIS)

The objective of this program was to investigate the risk-quantified design methods for the design and certification of the structure for future military aircraft. The current Department of Defense (DoD) process for developing and supporting aircraft stru...

E. J. Tuegel



Interbank Exposures: Quantifying the Risk of Contagion  

Microsoft Academic Search

This paper examines the degree to which the failure of one bank would cause the subsequent collapse of other banks. Using unique data on interbank payment flows, the magnitude of bilateral federal funds exposures is quantified. These exposures are used to simulate the impact of various failure scenarios, and the risk of contagion is found to be economically small.




Interbank exposures: quantifying the risk of contagion  

Microsoft Academic Search

This paper examines the likelihood that failure of one bank would cause the subsequent collapse of a large number of other banks. Using unique data on interbank payment flows, the magnitude of bilateral federal funds exposures is quantified. These exposures are used to simulate the impact of various failure scenarios, and the risk of contagion is found to be economically

Craig H Furfine



Quantifying the Thermal Fatigue of CPV Modules  

SciTech Connect

A method is presented to quantify thermal fatigue in the CPV die-attach from meteorological data. A comparative study between cities demonstrates a significant difference in the accumulated damage. These differences are most sensitive to the number of larger (ΔT) thermal cycles experienced for a location. High-frequency data (<1/min) may be required to most accurately employ this method.

Bosco, N.; Kurtz, S.



Cross-Linguistic Relations between Quantifiers and Numerals in Language Acquisition: Evidence from Japanese  

ERIC Educational Resources Information Center

A study of 104 Japanese-speaking 2- to 5-year-olds tested the relation between numeral and quantifier acquisition. A first study assessed Japanese children's comprehension of quantifiers, numerals, and classifiers. Relative to English-speaking counterparts, Japanese children were delayed in numeral comprehension at 2 years of age but showed no…

Barner, David; Libenson, Amanda; Cheung, Pierina; Takasaki, Mayu



Quantifying air pollution removal by green roofs in Chicago  

NASA Astrophysics Data System (ADS)

The level of air pollution removal by green roofs in Chicago was quantified using a dry deposition model. The result showed that a total of 1675 kg of air pollutants was removed by 19.8 ha of green roofs in one year, with O3 accounting for 52% of the total, NO2 (27%), PM10 (14%), and SO2 (7%). The highest level of air pollution removal occurred in May and the lowest in February. The annual removal per hectare of green roof was 85 kg ha⁻¹ yr⁻¹. The amount of pollutants removed would increase to 2046.89 metric tons if all rooftops in Chicago were covered with intensive green roofs. Although costly, the installation of green roofs could be justified in the long run if the environmental benefits were considered. Green roofs can be used to supplement the use of urban trees in air pollution control, especially in situations where land and public funds are not readily available.
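The per-hectare figure follows directly from the totals reported in the abstract; a quick arithmetic check:

```python
# Arithmetic check of the abstract's per-hectare removal rate and pollutant split.

total_removed_kg = 1675.0   # one year, all green roofs studied
area_ha = 19.8
per_hectare = total_removed_kg / area_ha   # ~84.6, i.e. the 85 kg/ha/yr reported

shares = {"O3": 0.52, "NO2": 0.27, "PM10": 0.14, "SO2": 0.07}  # from the abstract
by_pollutant_kg = {p: total_removed_kg * s for p, s in shares.items()}
```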

Yang, Jun; Yu, Qian; Gong, Peng


Image Restoration for Quantifying TFT-LCD Defect Levels  

NASA Astrophysics Data System (ADS)

Though machine vision systems for automatically detecting visual defects, called mura, have been developed for thin flat transistor liquid crystal display (TFT-LCD) panels, they have not yet reached a level of reliability which can replace human inspectors. To establish an objective criterion for identifying real defects, some index functions for quantifying defect levels based on human perception have been recently researched. However, while these functions have been verified in the laboratory, further consideration is needed in order to apply them to real systems in the field. To begin with, we should correct the distortion occurring through the capturing of panels. Distortion can cause the defect level in the observed image to differ from that in the panel. There are several known methods to restore the observed image in general vision systems. However, TFT-LCD panel images have a unique background degradation composed of background non-uniformity and vignetting effect which cannot easily be restored through traditional methods. Therefore, in this paper we present a new method to correct background degradation of TFT-LCD panel images using principal component analysis (PCA). Experimental results show that our method properly restores the given observed images and the transformed shape of muras closely approaches the original undistorted shape.
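One plausible reading of the PCA step is to learn the smooth background (non-uniformity plus vignetting) from defect-free reference images and subtract its reconstruction from an observed image; the sketch below uses synthetic data and numpy's SVD, and is not the authors' exact pipeline.

```python
import numpy as np

# Hedged sketch of PCA-based background correction for panel images:
# fit the background subspace from defect-free references, reconstruct the
# background of an observed image, and subtract it so the defect stands out.

rng = np.random.default_rng(0)
h, w = 32, 32
yy, xx = np.mgrid[0:h, 0:w]
vignette = 1.0 - 0.5 * ((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / (h * w)

# Reference images: background plus small noise, flattened to row vectors.
refs = np.stack([(vignette + rng.normal(0, 0.01, (h, w))).ravel()
                 for _ in range(20)])
mean = refs.mean(axis=0)
_, _, vt = np.linalg.svd(refs - mean, full_matrices=False)
components = vt[:3]                      # leading principal components

# Observed image: same background plus a localized "mura" defect.
obs = vignette.copy()
obs[10:14, 10:14] += 0.2
x = obs.ravel() - mean
background = mean + components.T @ (components @ x)   # project onto background subspace
restored = (obs.ravel() - background).reshape(h, w)   # defect remains, background removed
```

The defect projects only weakly onto the background subspace, so after subtraction the restored image is near zero everywhere except the mura region.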

Choi, Kyu Nam; Park, No Kap; Yoo, Suk In


Quantifying cognitive state from EEG using phase synchrony.  


Phase synchrony is a powerful amplitude-independent measure that quantifies linear and nonlinear dynamics between non-stationary signals. It has been widely used in a variety of disciplines including neural science and cognitive psychology. Current time-varying phase estimation uses either the Hilbert transform or the complex wavelet transform of the signals. This paper exploits the concept of phase synchrony as a means to discriminate face processing from the processing of a simple control stimulus. Dependencies between channel locations were assessed for two separate conditions elicited by distinct pictures (representing a human face and a Gabor patch), both flickering at a rate of 17.5 Hz. Statistical analysis is performed using the Kolmogorov-Smirnov test. Moreover, the phase synchrony measure used is compared with a measure of association that has been previously applied in the same context: the generalized measure of association (GMA). Results show that although phase synchrony works well in revealing regions of high synchronization, and therefore achieves an acceptable level of discriminability, this comes at the expense of sacrificing time resolution. PMID:24111059
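Hilbert-transform phase synchrony is commonly summarized as a phase-locking value (PLV); the sketch below demonstrates the idea on synthetic signals at the 17.5 Hz stimulus rate. The function name and test signals are illustrative, not the paper's data.

```python
import numpy as np
from scipy.signal import hilbert

# Hedged sketch of Hilbert-transform phase synchrony between two channels,
# summarized as the phase-locking value PLV = |mean(exp(i * (phi_x - phi_y)))|.

def plv(x: np.ndarray, y: np.ndarray) -> float:
    """Phase-locking value in [0, 1]; 1 means a constant phase relationship."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.exp(1j * dphi).mean()))

fs, f = 500.0, 17.5                      # sampling rate and stimulus rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * f * t)
y = np.sin(2 * np.pi * f * t + 0.8)      # constant phase lag
rng = np.random.default_rng(1)
noise = rng.standard_normal(t.size)      # unrelated signal

locked = plv(x, y)        # near 1: phases locked despite the lag
unlocked = plv(x, noise)  # near 0: random phase relationship
```

Because PLV averages over the whole window, localizing *when* synchrony occurs requires shorter windows, which is the time-resolution trade-off the abstract mentions.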

Wan, Lu; Fadlallah, Bilal H; Keil, Andreas; Principe, Jose C



Quantifying and specifying the solar influence on terrestrial surface temperature  

NASA Astrophysics Data System (ADS)

This investigation is a follow-up of a paper in which we showed that both major magnetic components of the solar dynamo, viz. the toroidal and the poloidal ones, are correlated with average terrestrial surface temperatures. Here, we quantify, improve and specify that result and search for its causes. We studied seven recent temperature files. They were smoothed in order to eliminate the Schwabe-type (11-year) variations. While the total temperature gradient over the period of investigation (1610-1970) is 0.087 °C/century, a gradient of 0.077 °C/century is correlated with the equatorial (toroidal) magnetic field component. Half of it is explained by the increase of the Total Solar Irradiance over the period of investigation, while the other half is due to feedback by evaporated water vapour. A yet unexplained gradient of -0.040 °C/century is correlated with the polar (poloidal) magnetic field. The residual temperature increase over that period, not correlated with solar variability, is 0.051 °C/century. It is ascribed to climatological forcings and internal modes of variation. We used these results to study present terrestrial surface warming. By subtracting the above-mentioned components from the observed temperatures we found a residual excess of 0.31 °C in 1999, this being the triangularly weighted residual over the period 1990-2008. We show that solar forcing of the ground temperature associated with significant feedback is a regularly occurring feature, by describing some well-observed events during the Holocene.

de Jager, C.; Duhau, S.; van Geel, B.



Quantifying the Cost of Providing Intrusion Tolerance in Group Communication Systems  

Microsoft Academic Search

Group communication systems that provide consistent group membership and reliable, ordered multicast properties in the presence of faults resulting from malicious intrusions have not been analyzed extensively to quantify the cost of tolerating these intrusions. This paper attempts to quantify this cost by presenting results from an experimental evaluation of three new intrusion-tolerant microprotocols that have

Harigovind V. Ramasamy; Prashant Pandey; James Lyons; Michel Cukier; William H. Sanders



Entropy generation method to quantify thermal comfort.  


The present paper presents a thermodynamic approach to assess the quality of human-thermal environment interaction and quantify thermal comfort. The approach involves development of an entropy generation term by applying the second law of thermodynamics to the combined human-environment system. The entropy generation term combines both human thermal physiological responses and thermal environmental variables to provide an objective measure of thermal comfort. The original concepts and definitions form the basis for establishing the mathematical relationship between thermal comfort and the entropy generation term. Through a logical and deterministic approach, an Objective Thermal Comfort Index (OTCI) is defined and established as a function of entropy generation. In order to verify the entropy-based thermal comfort model, human thermal physiological responses due to changes in ambient conditions are simulated using a well-established and validated human thermal model developed at the Institute of Environmental Research of Kansas State University (KSU). The finite-element-based KSU human thermal computer model is utilized as a "Computational Environmental Chamber" to conduct a series of simulations examining human thermal responses to different environmental conditions. The outputs from the simulation, which include human thermal responses, together with input data consisting of environmental conditions, are fed into the thermal comfort model. Continuous monitoring of thermal comfort in comfortable and extreme environmental conditions is demonstrated. The Objective Thermal Comfort values obtained from the entropy-based model are validated against regression-based Predicted Mean Vote (PMV) values. The PMV values are generated by using the corresponding air temperatures and vapor pressures from the computer simulation in the regression equation.
The preliminary results indicate that the OTCI and PMV values correlate well under ideal conditions. However, an experimental study is needed in the future to fully establish the validity of the OTCI formula and the model. One practical application of this index is that it could be integrated into thermal control systems to develop human-centered environmental control systems for potential use in aircraft, mass transit vehicles, intelligent building systems, and space vehicles. PMID:12182196
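The core second-law bookkeeping behind such an index can be sketched for the simplest case, heat Q flowing from the body to a cooler environment. The full OTCI combines many more physiological and environmental terms, so this is only the underlying idea, not the authors' model.

```python
# Hedged sketch of second-law entropy generation for steady heat flow Q from
# the body (T_body) to the environment (T_env), both in kelvin.

def entropy_generation(q_watts: float, t_body_k: float, t_env_k: float) -> float:
    """S_gen = Q/T_env - Q/T_body (W/K); zero only when the temperatures match."""
    return q_watts / t_env_k - q_watts / t_body_k

s_comfortable = entropy_generation(100.0, 307.0, 296.0)  # ~23 C room
s_cold = entropy_generation(100.0, 307.0, 278.0)         # ~5 C: more irreversibility
```

Larger temperature differences generate more entropy per watt of metabolic heat, which is the intuition for mapping entropy generation to thermal discomfort.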

Boregowda, S C; Tiwari, S N; Chaturvedi, S K



DOE: Quantifying the Value of Hydropower in the Electric Grid  

SciTech Connect

The report summarizes research to Quantify the Value of Hydropower in the Electric Grid. This 3-year DOE study focused on defining the value of hydropower assets in a changing electric grid. Methods are described for valuation and planning of pumped storage and conventional hydropower. The project team conducted plant case studies, electric system modeling, market analysis, cost data gathering, and evaluations of operating strategies and constraints. Five other reports detailing these research results are available at the project website. With increasing deployment of wind and solar renewable generation, many owners, operators, and developers of hydropower have recognized the opportunity to provide more flexibility and ancillary services to the electric grid. To quantify the value of these services, this study focused on the Western Electricity Coordinating Council region. A security-constrained unit commitment and economic dispatch model was used to quantify the role of hydropower for several future energy scenarios up to 2020. This hourly production simulation considered transmission requirements to deliver energy, including future expansion plans. Both energy and ancillary service values were considered. Addressing specifically the quantification of pumped storage value, no single value stream dominated predicted plant contributions in the various energy futures. Modeling confirmed that service value depends greatly on location and on competition with other available grid support resources. In this summary, ten different value streams related to hydropower are described. These fell into three categories: operational improvements, new technologies, and electricity market opportunities. Of these ten, the study was able to quantify a monetary value for six by applying both present-day and future scenarios for operating the electric grid.
This study confirmed that hydropower resources across the United States contribute significantly to operation of the grid in terms of energy, capacity, and ancillary services. Many potential improvements to existing hydropower plants were found to be cost-effective. Pumped storage is the most likely form of large new hydro asset expansion in the U.S.; however, justifying investments in new pumped storage plants remains very challenging with current electricity market economics. Even over a wide range of possible energy futures up to 2020, no energy future was found to bring quantifiable revenues sufficient to cover the estimated costs of plant construction. Value streams not quantified in this study may provide a different cost-benefit balance and an economic tipping point for hydro. Future studies are essential in the quest to quantify the full potential value. Additional research should consider the value of services provided by advanced storage hydropower and pumped storage at smaller time steps for the integration of variable renewable resources, and should include all possible value streams, such as capacity value and portfolio benefits, i.e., reducing cycling on traditional generation.




Quantifying Clinical Data Quality Using Relative Gold Standards  

PubMed Central

As the use of detailed clinical data expands for strategic planning, clinical quality measures, and research, the quality of the data contained in source systems, such as electronic medical records, becomes more critical. Methods to quantify and monitor clinical data quality in large operational databases involve a set of predefined data quality queries that attempt to detect data anomalies such as missing or unrealistic values based on meta-knowledge about a data domain. However, descriptive data elements, such as patient race, cannot be assessed using these methods. We present a novel approach leveraging existing intra-institutional databases with differing data quality for the same data element to quantify data quality for descriptive data. Using the concept of a relative gold standard, we show how this method can be used to assess data quality in enterprise clinical databases.
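The relative-gold-standard idea can be sketched as a concordance score between two intra-institutional databases sharing patient IDs, treating the higher-quality source as the standard for a descriptive field. The field names and records below are invented for illustration.

```python
# Hedged sketch: score an operational database against a higher-quality
# "relative gold standard" source for a descriptive field such as patient race.

def concordance(gold: dict, other: dict) -> float:
    """Fraction of shared patient IDs whose field value agrees with the gold source."""
    shared = gold.keys() & other.keys()
    if not shared:
        return 0.0
    agree = sum(1 for pid in shared if gold[pid] == other[pid])
    return agree / len(shared)

registry = {"p1": "White", "p2": "Black", "p3": "Asian", "p4": "White"}   # curated
emr      = {"p1": "White", "p2": "Unknown", "p3": "Asian", "p5": "Black"} # operational
quality = concordance(registry, emr)   # 2 of 3 shared IDs agree
```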

Kahn, Michael G.; Eliason, Brian B.; Bathurst, Janet



Quantifying the relevance of cyclones for precipitation extremes  

NASA Astrophysics Data System (ADS)

Precipitation extremes and associated floods may have a huge impact on society. It is thus important to better understand the mechanisms causing these events, also with regard to their variations in a changing climate. Here the importance of a particular category of weather systems, namely cyclones, for the occurrence of regional-scale precipitation extremes is quantified globally, based on the ERA-Interim reanalysis dataset for the period 1989-2009. Such an event-based climatological approach complements previous case studies, which established the physical relationship between cyclones and heavy precipitation. Cyclones are identified from ERA-Interim sea level pressure fields as features with a finite size, determined by the outermost closed pressure contour comprising one or several pressure minima. At each grid point, the 99th percentile of six-hourly accumulated precipitation is calculated, and all dates with six-hourly precipitation larger than this percentile are identified as extreme events. A comparison with the satellite observation-based CMORPH dataset for the years 2003 to 2009 shows that ERA-Interim properly captures the timing of the extreme events in the major parts of the extratropics. A cyclone is assumed to induce a precipitation extreme if both occur simultaneously at the same grid point. The percentage of extreme precipitation events coinciding with a cyclone is then quantified at every grid point. This percentage strongly exceeds the climatological cyclone frequency in many regions. It reaches maxima of more than 80%, e.g., in the main North Atlantic, North Pacific and Southern Ocean storm track areas. Other regional hot spots of cyclone-induced precipitation extremes are found in areas with very low climatological cyclone frequencies, in particular around the Mediterranean Sea, in eastern China, over the Philippines and the southeastern United States.
Our results suggest that in these hot spot regions changes of precipitation extremes with global warming are specifically sensitive to variations in the dynamical forcing, e.g., related to shifts of the storm tracks. Finally, properties of cyclones causing extreme precipitation are investigated. In the exit regions of the Northern Hemisphere storm tracks, these cyclones are on average slightly more intense than low-pressure systems not associated with extreme precipitation events, but no differences with respect to minimum core pressure are found in most other parts of the midlatitudes. The fundamental linkage between cyclones and precipitation extremes may thus provide guidance to forecasters involved in flood prediction, but it is unlikely that forecasting rules based on simple cyclone properties can be established.
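The grid-point statistic can be sketched on synthetic data: flag six-hourly totals above their 99th percentile as extreme, then compute the share of extreme events coinciding with a cyclone mask. The distributions and the cyclone frequency below are invented.

```python
import numpy as np

# Hedged sketch of the event-based statistic at a single grid point.

rng = np.random.default_rng(2)
n = 10_000
precip = rng.gamma(shape=0.5, scale=2.0, size=n)   # six-hourly precipitation totals
cyclone_present = rng.random(n) < 0.2              # climatological cyclone frequency 20%
precip[cyclone_present] *= 3.0                     # cyclone times bring heavier rain

threshold = np.percentile(precip, 99)              # 99th percentile at this grid point
extreme = precip > threshold                       # extreme-event dates
share = cyclone_present[extreme].mean()            # fraction of extremes under a cyclone
```

Because cyclones skew the heavy tail, the share of extremes coinciding with cyclones far exceeds the 20% climatological cyclone frequency, mirroring the paper's finding.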

Pfahl, S.; Wernli, H.



Experimental verification of bridge seismic damage states quantified by calibrating analytical models with empirical field data  

NASA Astrophysics Data System (ADS)

Bridges are one of the most vulnerable components of a highway transportation network system subjected to earthquake ground motions. Prediction of resilience and sustainability of bridge performance in a probabilistic manner provides valuable information for pre-event system upgrading and post-event functional recovery of the network. The current study integrates bridge seismic damageability information obtained through empirical, analytical and experimental procedures and quantifies threshold limits of bridge damage states consistent with the physical damage description given in HAZUS. Experimental data from a large-scale shaking table test are utilized for this purpose. This experiment was conducted at the University of Nevada, Reno, where a research team from the University of California, Irvine, participated. Observed experimental damage data are processed to identify and quantify bridge damage states in terms of rotational ductility at bridge column ends. In parallel, a mechanistic model for fragility curves is developed in such a way that the model can be calibrated against empirical fragility curves that have been constructed from damage data obtained during the 1994 Northridge earthquake. This calibration quantifies threshold values of bridge damage states and makes the analytical study consistent with damage data observed in past earthquakes. The mechanistic model is transportable and applicable to most types and sizes of bridges. Finally, calibrated damage state definitions are compared with that obtained using experimental findings. Comparison shows excellent consistency among results from analytical, empirical and experimental observations.
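The paper's calibrated mechanistic model is not reproduced here, but fragility curves of the kind used with HAZUS damage states are conventionally lognormal; a minimal sketch with illustrative median and dispersion parameters:

```python
import math

# Hedged sketch of a lognormal fragility curve:
# P(damage state >= ds | intensity x) = Phi(ln(x / median) / beta),
# where Phi is the standard normal CDF. Parameters are illustrative.

def fragility(x: float, median: float, beta: float) -> float:
    """Probability of reaching a damage state at intensity measure x."""
    z = math.log(x / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p_at_median = fragility(x=0.4, median=0.4, beta=0.6)  # exactly 0.5 at the median
```

Calibrating damage-state thresholds, as in the study, amounts to fixing the median (and dispersion) of each such curve against empirical and experimental damage data.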

Banerjee, Swagata; Shinozuka, Masanobu



A fluorescence-based microplate assay to quantify DOM-induced catabolic activity.  


This note describes a novel method to quickly quantify the dissolved organic matter (DOM)-induced catabolic activity from low-volume samples. The concept is based on the catabolic response profiles (CRP) assay and is described as an inverse CRP, in which the reactivity of a complex and diverse mixture of organic compounds towards single strains of bacteria is quantified. A strain of Pseudomonas fluorescens was grown and then transferred to an organic carbon-free mineral salt medium. 90 microL of a fluorogenic redox indicator was added to 90 microL of the bacterial suspension in a well on a 96-well microplate. The DOM sample (90 microL) was added to the well and the fluorescence emitted by the reduced indicator was read over the period of incubation. Only 0.8 mL of the DOM sample, including controls and replicates, was required to quantify the activity of each sample. Results are presented for a surface soil DOM sample and they were compared to glucose samples of various concentrations. The detection limit was reached for samples containing as little as 55 microM of glucose (0.3 mg C L(-1)). The assay showed that only 9% of the total carbon of the soil surface DOM sample was readily biodegradable. PMID:16270197

Dudal, Y; Holgado, R; Knoth, K; Debroux, M



A new method for quantifying the needling component of acupuncture treatments  

PubMed Central

Objectives: The highly variable nature of acupuncture needling creates challenges to systematic research. The goal of this study was to test the feasibility of quantifying acupuncture needle manipulation using motion and force measurements. It was hypothesised that distinct needling styles and techniques would produce different needle motion and force patterns that could be quantified and differentiated from each other. Methods: A new needling sensor tool (Acusensor) was used to record needling in real time as performed by six New England School of Acupuncture staff from the ‘Chinese acupuncture’ (style 1) and ‘Japanese acupuncture’ (style 2) programmes (three from each). Each faculty expert needled 12 points (6 bilateral locations) in 12 healthy human subjects using tonification (technique 1) and dispersal (technique 2). Parameters calculated from the raw needling data were displacement amplitude, displacement frequency, rotation amplitude, rotation frequency, force amplitude and torque amplitude. Results: Data analysis revealed significant differences in the amplitude of displacement and rotation between needling performed by staff from two different acupuncture styles. Significant overall differences in the frequency of displacement between techniques 1 and 2 that were not dependent on the style of acupuncture being performed were also found. The relationships between displacement and rotation frequencies, as well as between displacement and force amplitudes, showed considerable variability across individual acupuncturists and subjects. Conclusions: Needling motion and force parameters can be quantified in a treatment-like setting. Needling data can subsequently be analysed, providing an objective method for characterising needling in basic and clinical acupuncture research.

Davis, Robert T; Churchill, David L; Badger, Gary J; Dunn, Julie; Langevin, Helene M



Digital Optical Method to quantify the visual opacity of fugitive plumes  

NASA Astrophysics Data System (ADS)

Fugitive emissions of particulate matter (PM) raise public concerns due to their adverse impacts on human health and atmospheric visibility. Although the United States Environmental Protection Agency (USEPA) has not developed a standard method for quantifying the opacities of fugitive plumes, select states have developed human vision-based opacity methods for such applications. A digital photographic method, Digital Optical Method for fugitive plumes (DOMfugitive), is described herein for quantifying the opacities of fugitive plume emissions. Field campaigns were completed to evaluate this method by driving vehicles on unpaved roads to generate dust plumes. DOMfugitive was validated by performing simultaneous measurements using a co-located laser transmissometer. For 84% of the measurements, the individual absolute opacity difference values between the two methods were ≤15%. The average absolute opacity difference for all the measurements was 8.5%. The paired t-test showed no significant difference between the two methods at the 99% confidence level. Comparisons of wavelength-dependent opacities with grayscale opacities indicated that DOMfugitive was not sensitive to the wavelength in the visible spectrum evaluated during these field campaigns. These results encourage the development of a USEPA standard method for quantifying the opacities of fugitive PM plumes using digital photography, as an alternative to human vision-based approaches.
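The abstract does not spell out DOMfugitive's formula; one standard contrast-based estimate, valid for a plume darker than its background, treats opacity as one minus a transmittance inferred from pixel values. The sketch below is illustrative, not the method's actual formulation.

```python
# Hedged sketch of a contrast-based plume opacity estimate from pixel values:
# opacity = 1 - transmittance, with transmittance taken as the ratio of plume
# radiance to unobscured background radiance (dark-offset corrected).

def opacity(i_plume: float, i_background: float, i_dark: float = 0.0) -> float:
    """Opacity in [0, 1] for a plume darker than its background."""
    transmittance = (i_plume - i_dark) / (i_background - i_dark)
    return 1.0 - transmittance

op = opacity(i_plume=140.0, i_background=200.0)  # 30% opacity for these pixel values
```

A transmissometer measures transmittance along the same line of sight directly, which is why it serves as the validation reference in the field campaigns.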

Du, Ke; Shi, Peng; Rood, Mark J.; Wang, Kai; Wang, Yang; Varma, Ravi M.



Quantifying Coastal Change Patterns Using LIDAR  

NASA Astrophysics Data System (ADS)

Shorelines undergo continuous change, primarily in response to the action of waves. New technologies including LIDAR surveys are just beginning to reveal surprising shoreline behaviors over a range of space and time scales (e.g. List and Farris, 1999; Tebbens et al., 2002). This early stage of coastal physical science calls for further documentation and analysis of the range of phenomena involved. Wavelet analysis of the changes along the North Carolina Outer Banks, USA, over a single annual interval (Tebbens et al., 2002) quantifies statistics including: 1) the amount of shoreline change as a function of alongshore length scale; 2) the distribution of the alongshore lengths of contiguous zones of erosion and accretion; and 3) the distribution of the magnitudes of erosion and accretion occurring during a time interval. The statistics of the patterns of shoreline change varied among the different coastline segments measured. Because these shoreline segments have different orientations, they are affected by different effective wave climates. Analyses over other time intervals test whether the statistics and the variations from one coastline segment to another are robust. The work also tests a hypothesis and potential model for the main cause of these observed shoreline behaviors. The statistics describing the patterns of shoreline change vary as a function of regional wave climate, suggesting the hypothesis that these changes are driven chiefly by gradients in alongshore transport associated with subtle deviations from a smooth shoreline. Recent work has shown that when waves approach shore from deep water at relative angles greater than approximately 45°, shoreline perturbations grow, causing alongshore-heterogeneous shoreline changes on any scale at which perturbations exist (Ashton et al., 2001). Waves approaching from deep-water angles closer to shore-normal tend to smooth out the shoreline.
The patterns of alongshore change over some extended time period will result at least partly from the interactions between these roughening and smoothing influences, which will depend on the regional wave climate, including the relative proportions of high and low wave-approach angles. A model treating alongshore transport (Ashton et al., 2001; Ashton et al., 2003a; Ashton et al., 2003b) predicts the observed trend with shoreline orientation (regional wave climate) in one of the statistics in the previous analysis (Tebbens et al, 2002).
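The scale-by-scale statistics described above can be illustrated with a minimal pure-Python sketch. This is not the analysis of Tebbens et al. (2002); the Haar decomposition, the function name, and the synthetic shoreline-change profile are all invented for illustration of how change can be partitioned by alongshore length scale:

```python
# Minimal sketch: variance of shoreline change by alongshore length scale,
# using a simple Haar wavelet decomposition (illustrative only).

def haar_detail_variance(signal):
    """Return {scale: variance of Haar detail coefficients} for a 1-D signal.

    Each dyadic scale corresponds to an alongshore length scale; large
    variance at a scale means strong shoreline change at that scale.
    """
    variances = {}
    approx = list(signal)
    scale = 2
    while len(approx) >= 2:
        details = [(approx[i] - approx[i + 1]) / 2.0
                   for i in range(0, len(approx) - 1, 2)]
        approx = [(approx[i] + approx[i + 1]) / 2.0
                  for i in range(0, len(approx) - 1, 2)]
        mean = sum(details) / len(details)
        variances[scale] = sum((d - mean) ** 2 for d in details) / len(details)
        scale *= 2
    return variances

# Synthetic shoreline-change profile (m of erosion/accretion per cell):
# small-scale noise plus one broad erosion/accretion couplet.
profile = [1.0, -1.0, 1.0, -1.0, 4.0, 4.0, -4.0, -4.0]
v = haar_detail_variance(profile)
# The broad couplet dominates: variance at scale 4 exceeds that at scale 2.
```

A real analysis would use a proper wavelet library and measured shoreline positions, but the partitioning of change into contributions by length scale follows the same pattern.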

Tebbens, S. F.; Murray, A.; Ashton, A. D.



Choosing appropriate techniques for quantifying groundwater recharge  

Microsoft Academic Search

Various techniques are available to quantify recharge; however, choosing appropriate techniques is often difficult. Important considerations in choosing a technique include space/time scales, range, and reliability of recharge estimates based on different techniques; other factors may limit the application of particular techniques. The goal of the recharge study is important because it may dictate the required space/time scales of

Bridget R. Scanlon; Richard W. Healy; Peter G. Cook



Quantifying reliability uncertainty : a proof of concept  

Microsoft Academic Search

This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative

Kathleen V. Diegert; Michael A. Dvorack; James T. Ringland; Michael Joseph Mundt; Aparna Huzurbazar; John F. Lorio; Quinn Fatherley; Christine Anderson-Cook; Alyson G. Wilson; Rena M. Zurn



Quantifying the sensitivity of simulated climate change to model configuration  

Microsoft Academic Search

This study used “factor separation” to quantify the sensitivity of simulated present and future surface temperatures and precipitation to alternative regional climate model physics components. The method enables a quantitative isolation of the effects of using each physical component as well as the combined effect of two or more components. Simulation results are presented from eight versions of the Mesoscale
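Factor separation in the two-factor case (following Stein and Alpert, 1993, which the abstract does not spell out) decomposes four simulations into two pure contributions and an interaction term. A minimal sketch, with invented temperature values:

```python
# Sketch of two-factor "factor separation" (Stein and Alpert, 1993).
# The four simulated surface temperatures below are invented for illustration.

def factor_separation_2(f0, f1, f2, f12):
    """Decompose simulations into pure and interaction contributions.

    f0  : control run (neither component)
    f1  : run with component 1 only
    f2  : run with component 2 only
    f12 : run with both components
    """
    pure1 = f1 - f0
    pure2 = f2 - f0
    interaction = f12 - f1 - f2 + f0
    return pure1, pure2, interaction

# Example: surface temperature (K) from four hypothetical model versions.
p1, p2, inter = factor_separation_2(f0=288.0, f1=289.0, f2=288.5, f12=290.0)
# The three contributions sum exactly to f12 - f0.
```

With more components the same bookkeeping extends to triple and higher-order interaction terms, which is how the combined effect of two or more physics components is isolated.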

Barry H. Lynn; Richard Healy; Leonard M. Druyan



An In Vivo Method to Quantify Lymphangiogenesis in Zebrafish  

PubMed Central

Background Lymphangiogenesis is a highly regulated process involved in the pathogenesis of disease. Current in vivo models to assess lymphangiogenesis are largely unphysiologic. The zebrafish is a powerful model system for studying development, due to its rapid growth and transparency during early stages of life. Identification of a network of trunk lymphatic capillaries in zebrafish provides an opportunity to quantify lymphatic growth in vivo. Methods and Results Late-phase microangiography was used to detect trunk lymphatic capillaries in zebrafish 2- and 3-days post-fertilization. Using this approach, real-time changes in lymphatic capillary development were measured in response to modulators of lymphangiogenesis. Recombinant human vascular endothelial growth factor (VEGF)-C added directly to the zebrafish aqueous environment as well as human endothelial and mouse melanoma cell transplantation resulted in increased lymphatic capillary growth, while morpholino-based knockdown of vegfc and chemical inhibitors of lymphangiogenesis added to the aqueous environment resulted in decreased lymphatic capillary growth. Conclusion Lymphatic capillaries in embryonic and larval zebrafish can be quantified using late-phase microangiography. Human activators and small molecule inhibitors of lymphangiogenesis, as well as transplanted human endothelial and mouse melanoma cells, alter lymphatic capillary development in zebrafish. The ability to rapidly quantify changes in lymphatic growth under physiologic conditions will allow for broad screening of lymphangiogenesis modulators, as well as help define cellular roles and elucidate pathways of lymphatic development.

Hoffman, Scott J.; Psaltis, Peter J.; Clark, Karl J.; Spoon, Daniel B.; Chue, Colin D.; Ekker, Stephen C.; Simari, Robert D.



The Arizona Sun Corridor: Quantifying climatic implications of megapolitan development  

NASA Astrophysics Data System (ADS)

The local and regional-scale hydro-climatic impacts of land use and land cover change (LULCC) that result from urbanization require attention in light of future urban growth projections and related concerns for environmental sustainability. This is an especially serious issue over the southwestern U.S., where mounting pressure on the area's natural desert environment and increasingly limited resources (e.g. water) exists, and is likely to worsen, due to unrelenting sprawl and associated urbanization. While previous modeling results have shown the degree to which the built environment has contributed to the region's warming summertime climate, we use projections of future landscape change over the rapidly urbanizing Arizona Sun Corridor - an anticipated stretch of urban expanse that includes current metro Phoenix and Tucson - as surface boundary conditions to conduct high-resolution (order of 1-km) numerical simulations, over the seasonal timescale, to quantify the climatic effect of this relentlessly growing and increasingly vulnerable region. We use the latest version of the WRF modeling system to take advantage of several new capabilities, including a newly implemented nesting method used to refine the vertical mesh, and a comprehensive multi-story urban canopy scheme. We quantify the impact of the projected (circa 2050) Sun Corridor megapolitan area on further development of the urban heat island (UHI), assess changes in the surface energy budget, with important implications for near-surface temperature and stability, and discuss modeled impacts on regional rainfall. Lastly, simulated effects are compared with projected warming due to increasing greenhouse gases (the GCMs from which these results are obtained currently do not take into account effects of urbanizing regions), and we quantify the degree to which LULCC over the Arizona Sun Corridor will exacerbate regional anthropogenic climate change. 
A number of potential mitigation strategies are discussed (including effects of renewable energy), the simulated impact on anthropogenic heat production is quantified, and the degree to which future warming may be offset is estimated.

Georgescu, M.; Moustaoui, M.; Mahalov, A.



The OOPSLA trivia show (TOOTS)  

Microsoft Academic Search

OOPSLA has a longstanding tradition of being a forum for discussing the cutting edge of technology in a fun and participatory environment. The type of events sponsored by OOPSLA sometimes border on the unconventional. This event represents an atypical panel that conforms to the concept of a game show that is focused on questions and answers related to OOPSLA themes.

Jeff Gray; Douglas C. Schmidt



Using multilevel models to quantify heterogeneity in resource selection  

USGS Publications Warehouse

Models of resource selection are being used increasingly to predict or model the effects of management actions rather than simply quantifying habitat selection. Multilevel, or hierarchical, models are an increasingly popular method to analyze animal resource selection because they impose a relatively weak stochastic constraint to model heterogeneity in habitat use and also account for unequal sample sizes among individuals. However, few studies have used multilevel models to model coefficients as a function of predictors that may influence habitat use at different scales or quantify differences in resource selection among groups. We used an example with white-tailed deer (Odocoileus virginianus) to illustrate how to model resource use as a function of distance to road that varies among deer by road density at the home range scale. We found that deer avoidance of roads decreased as road density increased. Also, we used multilevel models with sika deer (Cervus nippon) and white-tailed deer to examine whether resource selection differed between species. We failed to detect differences in resource use between these two species and showed how information-theoretic and graphical measures can be used to assess how resource use may have differed. Multilevel models can improve our understanding of how resource selection varies among individuals and provide an objective, quantifiable approach to assess differences or changes in resource selection. © The Wildlife Society, 2011.

Wagner, T.; Diefenbach, D. R.; Christensen, S. A.; Norton, A. S.



Quantifying dithiothreitol displacement of functional ligands from gold nanoparticles.  


Dithiothreitol (DTT)-based displacement is widely utilized for separating ligands from their gold nanoparticle (AuNP) conjugates, a critical step for differentiating and quantifying surface-bound functional ligands and therefore the effective surface density of these species on nanoparticle-based therapeutics and other functional constructs. The underlying assumption is that DTT is smaller and much more reactive toward gold compared with most ligands of interest, and as a result will reactively displace the ligands from surface sites thereby enabling their quantification. In this study, we use complementary dimensional and spectroscopic methods to characterize the efficiency of DTT displacement. Thiolated methoxypolyethylene glycol (SH-PEG) and bovine serum albumin (BSA) were chosen as representative ligands. Results clearly show that (1) DTT does not completely displace bound SH-PEG or BSA from AuNPs, and (2) the displacement efficiency is dependent on the binding affinity between the ligands and the AuNP surface. Additionally, the displacement efficiency for conjugated SH-PEG is moderately dependent on the molecular mass (yielding efficiencies ranging from 60 to 80% measured by ATR-FTIR and ~90% by ES-DMA), indicating that the displacement efficiency for SH-PEG is predominantly determined by the S-Au bond. BSA is particularly difficult to displace with DTT (i.e., the displacement efficiency is nearly zero) when it is in the so-called normal form. The displacement efficiency for BSA improves to 80% when it undergoes a conformational change to the expanded form through a process of pH change or treatment with a surfactant. An analysis of the three-component system (SH-PEG + BSA + AuNP) indicates that the presence of SH-PEG decreases the displacement efficiency for BSA, whereas the displacement efficiency for SH-PEG is less impacted by the presence of BSA. PMID:23104310

Tsai, De-Hao; Shelton, Melanie P; DelRio, Frank W; Elzey, Sherrie; Guha, Suvajyoti; Zachariah, Michael R; Hackley, Vincent A



Quantifying thermal modifications on laser welded skin tissue  

NASA Astrophysics Data System (ADS)

Laser tissue welding is a potential medical treatment method, especially for closing cuts made during surgery. The photothermal effects of laser on tissue should be quantified in order to determine optimal dosimetry parameters. Polarized light and phase contrast techniques reveal information about the extent of thermal change that occurs in tissue during laser welding. Changes in collagen structure can be detected in skin tissue samples stained with hematoxylin and eosin. In this study, three different near-infrared laser wavelengths (809 nm, 980 nm and 1070 nm) were compared for skin welding efficiency. Cuts 1 cm long were treated by spot-by-spot laser application on Wistar rats' dorsal skin, in vivo. In all laser applications, 0.5 W of optical power was delivered to the tissue for 5 s continuously, resulting in 79.61 J/cm2 energy density (15.92 W/cm2 power density) for each spot. The 1st, 4th, 7th, 14th, and 21st days of the recovery period were designated as control days, and the skin samples needed for histology were removed on these particular days. The stained samples were examined under a light microscope. Images were taken with a CCD camera and examined with imaging software. The 809 nm laser was found to be capable of creating strong full-thickness closure, but thermal damage was evident. The thermal damage from 980 nm laser welding was found to be more tolerable. The results showed that 1070 nm laser welding produced noticeably stronger bonds with minimal scar formation.

Tabakoglu, Hasim Ö.; Gülsoy, Murat



Quantifying the sources of error in measurements of urine activity  

SciTech Connect

Accurate scintigraphic measurements of radioactivity in the bladder and voided urine specimens can be limited by scatter, attenuation, and variations in the volume of urine that a given dose is distributed in. The purpose of this study was to quantify some of the errors that these problems can introduce. Transmission scans and 41 conjugate images of the bladder were sequentially acquired on a dual-headed camera over 24 hours in 6 subjects after the intravenous administration of 100-150 MBq (2.7-3.6 mCi) of a novel I-123 labeled benzamide. Renal excretion fractions were calculated by measuring the counts in conjugate images of 41 sequentially voided urine samples. A correction for scatter was estimated by comparing the count rates in images that were acquired with the photopeak centered on 159 keV and images that were made simultaneously with the photopeak centered on 126 keV. The decay and attenuation corrected, geometric mean activities were compared to images of the net dose injected. Checks of the results were performed by measuring the total volume of each voided urine specimen and determining the activity in a 20 ml aliquot of it with a dose calibrator. Modeling verified the experimental results, which showed that 34% of the counts were attenuated when the bladder had been expanded to a volume of 300 ml. Corrections for attenuation that were based solely on the transmission scans were limited by the volume of non-radioactive urine in the bladder before the activity was administered. The attenuation of activity in images of the voided urine samples was dependent on the geometry of the specimen container. The images of urine in standard, 300 ml laboratory specimen cups had 39±5% fewer counts than images of the same samples laid out in 3 liter bedpans. Scatter through the carbon fiber table substantially increased the number of counts in the images by an average of 14%.
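The conjugate-view geometric-mean counting described above can be sketched in a few lines. This is a simplified illustration, not the study's processing code: only the decay correction is shown (attenuation and scatter corrections are omitted), the counts are invented, and the half-life constant is the standard physical value for I-123:

```python
import math

# Sketch of conjugate-view counting: geometric mean of anterior/posterior
# counts, decay-corrected back to the injection time. Illustrative only.

I123_HALF_LIFE_H = 13.22  # physical half-life of I-123, in hours

def decay_corrected_geometric_mean(anterior, posterior, hours_post_injection,
                                   half_life_h=I123_HALF_LIFE_H):
    """Geometric-mean counts, corrected for decay since injection."""
    geometric_mean = math.sqrt(anterior * posterior)
    decay_factor = 2.0 ** (hours_post_injection / half_life_h)
    return geometric_mean * decay_factor

# Invented counts; exactly one half-life has elapsed, so the geometric
# mean (12000) is doubled by the decay correction.
counts = decay_corrected_geometric_mean(anterior=9000.0, posterior=16000.0,
                                        hours_post_injection=13.22)
```

The geometric mean is used because, for a source between the two detector heads, the depth-dependent attenuation factors of the anterior and posterior views approximately cancel; the residual attenuation and scatter terms are exactly the errors the study quantifies.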

Mozley, P.D.; Kim, H.J.; McElgin, W. [ORISE, Oak Ridge, TN (United States)] [and others]



Use of cine magnetic resonance angiography in quantifying aneurysm pulsatility associated with endoleak  

Microsoft Academic Search

ObjectivePersistent aneurysm perfusion or endoleak is associated with pulsatility of abdominal aortic aneurysm (AAA) after endovascular repair. However, the resultant pulsatile change in aneurysm diameter may be difficult to quantify, and therefore its significance is unknown. In this study cine magnetic resonance angiography (MRA) was used to quantify aneurysm wall motion during the cardiac cycle and to correlate it with

Peter L. Faries; Gautam Agarwal; Robert Lookstein; Joshua W. Bernheim; Neal S. Cayne; Hadley Cadot; Jeffery Goldman; K. Craig Kent; Larry H. Hollier; Michael L. Marin



Reading the traveling exhibition show:  

Microsoft Academic Search

This dissertation utilizes the motif of the traveling exhibition show in order to analyze how the Massachusetts Magazine (1789–96) participated in the cultural discussion regarding the construction of the American woman in the new nation. Although others have focused on the role of women in America (i.e., “Republican Motherhood”), I assert that whatever situation a woman found herself in—single, married,

Beverly Jean Reed



Do Trade Shows Pay Off?  

Microsoft Academic Search

Trade show expenditures are the second largest item in the business marketing communications budget after advertising, and they account for nearly one-fifth of the total budget for U.S. firms and approximately one-fourth of the budget for European firms (Jacobson 1990; Schafer 1987). The level of these expenditures, including direct costs and allocation of exhibitor staff time, though excluding planning

Srinath Gopalakrishna; Gary L. Lilien; Jerome D. Williams; Ian K. Sequeira


Quantifying variances in comparative RNA secondary structure prediction  

PubMed Central

Background With the advancement of next-generation sequencing and transcriptomics technologies, regulatory effects involving RNA, in particular RNA structural changes are being detected. These results often rely on RNA secondary structure predictions. However, current approaches to RNA secondary structure modelling produce predictions with a high variance in predictive accuracy, and we have little quantifiable knowledge about the reasons for these variances. Results In this paper we explore a number of factors which can contribute to poor RNA secondary structure prediction quality. We establish a quantified relationship between alignment quality and loss of accuracy. Furthermore, we define two new measures to quantify uncertainty in alignment-based structure predictions. One of the measures improves on the “reliability score” reported by PPfold, and considers alignment uncertainty as well as base-pair probabilities. The other measure considers the information entropy for SCFGs over a space of input alignments. Conclusions Our predictive accuracy improves on the PPfold reliability score. We can successfully characterize many of the underlying reasons for and variances in poor prediction. However, there is still variability unaccounted for, which we therefore suggest comes from the RNA secondary structure predictive model itself.



Quantifying Error in the CMORPH Satellite Precipitation Estimates  

NASA Astrophysics Data System (ADS)

As part of the collaboration between the China Meteorological Administration (CMA) National Meteorological Information Centre (NMIC) and the NOAA Climate Prediction Center (CPC), a new system is being developed to construct an hourly precipitation analysis on a 0.25° lat/lon grid over China by merging information derived from gauge observations and CMORPH satellite precipitation estimates. Foundational to the development of the gauge-satellite merging algorithm is the definition of the systematic and random error inherent in the CMORPH satellite precipitation estimates. In this study, we quantify the CMORPH error structures through comparisons against a gauge-based analysis of hourly precipitation derived from station reports from a dense network over China. First, the systematic error (bias) of the CMORPH satellite estimates is examined with co-located hourly gauge precipitation analysis over 0.25° lat/lon grid boxes with at least one reporting station. The CMORPH bias exhibits regional variations, with over-estimates over eastern China, and seasonal changes, with over-/under-estimates during warm/cold seasons. The CMORPH bias is also range-dependent: in general, the CMORPH tends to over-/under-estimate weak/strong rainfall. The bias, when expressed as a ratio between the gauge observations and the CMORPH satellite estimates, increases with rainfall intensity but tends to saturate at a certain level for high rainfall. Based on the above results, a prototype algorithm is developed to remove the CMORPH bias by matching the PDF of the original CMORPH estimates against that of the gauge analysis, using data pairs co-located over grid boxes with at least one reporting gauge over a 30-day period ending at the target date. The spatial domain for collecting the co-located data pairs is expanded so that at least 5000 pairs of data are available to ensure statistical reliability. 
The bias-corrected CMORPH is then compared against the gauge data to quantify the remaining random error. The results showed that the random error in the bias-corrected CMORPH is proportional to the smoothness of the target precipitation fields, expressed as the standard deviation of the CMORPH fields, and to the size of the spatial domain over which the data pairs to construct the PDF functions are collected. An empirical equation is then defined to compute the random error in the bias-corrected CMORPH from the CMORPH spatial standard deviation and the size of the data collection domain. An algorithm is being developed to combine the gauge analysis with the bias-corrected CMORPH through the optimal interpolation (OI) technique using the error statistics defined in this study. In this process, the bias-corrected CMORPH will be used as the first guess, while the gauge data will be utilized as observations to modify the first guess over regions with gauge network coverage. Detailed results will be reported at the conference.
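The PDF-matching step described above amounts to mapping each satellite value to the gauge value at the same empirical quantile. A minimal sketch with invented rain rates (the operational algorithm matches PDFs built from a moving 30-day window over an adaptive spatial domain, which is not reproduced here):

```python
import bisect

# Sketch of PDF (quantile) matching for bias correction: replace a satellite
# value with the gauge value at the same empirical quantile, using co-located
# data pairs. Illustrative only.

def quantile_match(value, satellite_sample, gauge_sample):
    """Map `value` through the satellite CDF onto the gauge CDF."""
    sat = sorted(satellite_sample)
    gauge = sorted(gauge_sample)
    # Position of `value` in the satellite empirical distribution.
    i = bisect.bisect_left(sat, value)
    i = min(i, len(gauge) - 1)
    return gauge[i]

# Co-located pairs (mm/h, invented): the satellite over-estimates weak rain
# and under-estimates strong rain relative to the gauges.
sat_obs = [0.0, 1.0, 2.0, 4.0, 8.0]
gauge_obs = [0.0, 0.5, 1.0, 3.0, 9.0]

weak_corrected = quantile_match(2.0, sat_obs, gauge_obs)    # pulled down
strong_corrected = quantile_match(8.0, sat_obs, gauge_obs)  # pulled up
```

With these samples a weak satellite estimate of 2.0 is reduced and a strong estimate of 8.0 is increased, mirroring the intensity-dependent bias behavior reported in the abstract.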

Xu, B.; Yoo, S.; Xie, P.



Rocks and Minerals Slide Show  

NSDL National Science Digital Library

This interactive slide show of common rocks and minerals allows students to choose from two sets of minerals and click on a thumbnail to see a larger photograph with a full description of the mineral including color, streak, hardness, cleavage/fracture, and chemical composition. Also included are its use and where it is found. The rocks are divided into igneous, sedimentary, and metamorphic and can be accessed in the same manner. They are described on the basis of crystal size and mineral composition as well as use.


Quantifying ecological thresholds in a complex world  

NASA Astrophysics Data System (ADS)

Ecological thresholds are abrupt changes of ecological state. Most empirical methods detect ecological thresholds in time or across geographic space. Although useful, these approaches do not quantify the direct drivers of a threshold response. Causal understanding of thresholds detected empirically requires their investigation in a domain containing the direct drivers (often referred to as state space). Currently, no method quantifies thresholds with respect to more than one driver in state space. Here, we present an approach designed to better accommodate complexity; the approach quantifies thresholds in state space with more than one driver. We present two indices of shape attributes measured from 3-D response surfaces: threshold strength (T) and diagonality (D). We use 48 simulated response surfaces of different shapes to test the efficacy of the indices. T is sensitive to the steepness of the transition from one state to the next, with various forms of abrupt, centralized thresholds yielding the highest values among the simulated surfaces. D represents the orientation of the response surface in state space, or the simultaneous influence of more than one predictor in eliciting the response. Strongly diagonal surfaces have the most diagonal surface area, demonstrated by sharply undulating diagonal surfaces. Given that the success of T and D requires a regression method that accurately captures any shape of complex data structure as a response surface, we also test the accuracy of regression methods known to be tractable with complex data. We test Classification and Regression Trees (CART), Random Forest, and Non-Parametric Multiplicative Regression (NPMR) for binary and continuous responses. We use the 48 simulated response surfaces to test the methods, and we find that prediction accuracy depends on both the T and D of the simulated data for each method. 
We choose the most accurate method among those we test for capturing any shape of response surface from real data, NPMR. Finally, we use NPMR to build response surfaces from which we quantify T and D for real ecological data sets. We demonstrate how measurement of threshold strength and diagonality from multi-factor response surfaces can advance our understanding of thresholds using several examples: tree mortality from bark beetles, woody plant vulnerability curves, and species probability of occurrence with respect to climate.
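The abstract does not give the formulas for T and D, so the following is only a rough illustrative analogue, not the authors' definitions: on a gridded two-driver response surface, take the steepest local change as a stand-in for threshold strength, and the degree to which that change requires both drivers simultaneously as a stand-in for diagonality. All names and the example surface are invented:

```python
# Rough, invented analogue of two surface-shape indices (NOT the paper's
# definitions of T and D), computed on a gridded 2-driver response surface.

def surface_indices(z):
    """z: 2-D list, response values on a driver1 x driver2 grid."""
    best_mag, best_diag = 0.0, 0.0
    for i in range(len(z) - 1):
        for j in range(len(z[0]) - 1):
            gx = z[i + 1][j] - z[i][j]  # change along driver 1
            gy = z[i][j + 1] - z[i][j]  # change along driver 2
            mag = (gx ** 2 + gy ** 2) ** 0.5
            if mag > best_mag:
                best_mag = mag
                # 1.0 when the change involves both drivers equally
                # (diagonal response), 0.0 when only one driver matters.
                best_diag = (2 * abs(gx) * abs(gy)) / (gx ** 2 + gy ** 2)
    return best_mag, best_diag

# Abrupt state change that occurs only when both drivers are high:
surface = [[0, 0, 0],
           [0, 0, 1],
           [0, 1, 1]]
strength, diagonality = surface_indices(surface)
```

The point of the sketch is the general shape of the computation: both indices are read off the geometry of the fitted response surface, which is why the accuracy of the surface-fitting regression (CART, Random Forest, NPMR) matters.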

Lintz, H. E.; McCune, B.; Gray, A. N.; McCulloh, K. A.



Quantifying Systematic Effects on Galaxy Clustering  

NASA Astrophysics Data System (ADS)

We present techniques for quantifying the effects of observational systematics on galaxy clustering measurements from large photometric surveys. These techniques can leverage both pixelized and point-based systematics and can be quickly calculated for large data volumes, as a function of observational tiling and Galactic coordinates. The actual measurements are performed via a correlation function, either in pixel space or real space. As a demonstration, we present a measurement of the systematic effects of seeing, extinction, and stellar density on the SDSS DR7 photometric galaxy clustering signal.
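A pixel-space systematics check of this general kind can be sketched as a cross-correlation between the pixelized galaxy overdensity and a pixelized systematics map; a significant correlation flags contamination. The values below are invented, and a plain Pearson coefficient stands in for the full angular correlation functions the paper uses:

```python
# Minimal sketch: cross-correlate pixelized galaxy overdensity with a
# pixelized systematic (e.g. stellar density). Illustrative only.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Per-pixel galaxy overdensity and stellar density (hypothetical values,
# constructed so the two maps track each other).
galaxy_overdensity = [0.1, -0.2, 0.3, 0.0, -0.1, 0.4]
stellar_density = [1.2, 0.8, 1.5, 1.0, 0.9, 1.6]
r = pearson(galaxy_overdensity, stellar_density)
# A strong correlation here would flag stellar density as a contaminant.
```

In practice such a statistic is evaluated per systematic (seeing, extinction, stellar density) and per angular scale, which is what the correlation-function formulation provides.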

Wang, Y.; Brunner, R. J.



An index for quantifying flocking behavior.  


One of the classic research topics in adaptive behavior is the collective displacement of groups of organisms such as flocks of birds, schools of fish, herds of mammals, and crowds of people. However, most agent-based simulations of group behavior do not provide a quantitative index for determining the point at which the flock emerges. An index of the aggregation of moving individuals in a flock was developed, and an example is provided of how it can be used to quantify the degree to which a group of moving individuals actually forms a flock. PMID:18229552
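The abstract does not define the index, so the sketch below uses a standard stand-in rather than the paper's measure: the polarization of the group, i.e. the length of the mean heading unit vector, which is 1.0 when all agents move in the same direction and near 0 for random headings:

```python
import math

# Illustrative stand-in for a flocking index (NOT the paper's index):
# group polarization, the length of the mean heading unit vector.

def polarization(headings_rad):
    """1.0 = perfectly aligned headings (flock-like); ~0 = random headings."""
    n = len(headings_rad)
    cx = sum(math.cos(h) for h in headings_rad) / n
    cy = sum(math.sin(h) for h in headings_rad) / n
    return math.hypot(cx, cy)

aligned = polarization([0.1, 0.0, -0.1, 0.05])  # nearly parallel headings
scattered = polarization([0.0, math.pi / 2, math.pi, 3 * math.pi / 2])
```

Tracking such an index over simulation time gives exactly the kind of quantitative emergence criterion the abstract argues is missing from most agent-based flocking models: the flock can be said to emerge when the index crosses and stays above a chosen level.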

Quera, Vicenç; Herrando, Salvador; Beltran, Francesc S; Salas, Laura; Miñano, Meritxell



Quantifying nanoscale order in amorphous materials via fluctuation electron microscopy  

NASA Astrophysics Data System (ADS)

Fluctuation electron microscopy (FEM) has been used to study the nanoscale order in various amorphous materials. The method is explicitly sensitive to 3- and 4-body atomic correlation functions in amorphous materials; this is sufficient to establish the existence of structural order on the nanoscale, even when the radial distribution function extracted from diffraction data appears entirely amorphous. The variable resolution form of the technique can reveal the characteristic decay length over which topological order persists in amorphous materials. By changing the resolution, a characteristic length is obtained without the need for a priori knowledge of the structure. However, it remains a formidable challenge to invert the FEM data into a quantitative description of the structure that is free from error due to experimental noise and quantitative in both size and volume fraction. Here, we quantify the FEM method by (i) forward simulating the FEM data from a family of high quality atomistic a-Si models, (ii) reexamining the statistical origins of contributions to the variance due to artifacts, and (iii) comparing the measured experimental data with model simulations. From simulations at a fixed resolution, we show that the variance V(k) is a complex function of the size and volume fraction of the ordered regions present in the amorphous matrix. However, the ratio of the variance peaks as a function of diffraction vector k affords the size of the ordered regions, and the magnitude of the variance affords a quantitative measure of the volume fraction. From comparison of the measured characteristic length with model simulations, we are able to estimate the size and volume fraction of ordered regions. The use of the STEM mode of FEM offers significant advantages in identifying artifacts in the variances. Artifacts, caused by non-idealities in the sample unrelated to nanoscale order, can easily dominate the measured variance, producing erroneous results. 
We show that reexamination and correction of the contributions of artifacts to variance is necessary to obtain an accurate and quantitative description of the structure of amorphous materials. Using variable resolution FEM we are able to extract a characteristic length of ordered regions in two different amorphous silicon samples. Having eliminated the noise contribution to the variance, we show here the first demonstration of a consistent characteristic length at all values of k. The experimental results presented here are the first to be consistent with both FEM theory and simulations.
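The abstract does not state the formula for V(k); in the FEM literature it is conventionally the normalized variance of the dark-field image intensity, V(k) = ⟨I²(k)⟩/⟨I(k)⟩² − 1, which is zero for a uniform (speckle-free) image. A minimal sketch with invented intensities:

```python
# Normalized intensity variance as conventionally used in FEM:
# V(k) = <I^2(k)> / <I(k)>^2 - 1, computed over image positions at fixed k.
# Intensity values below are invented for illustration.

def normalized_variance(intensities):
    """V for a set of dark-field intensities at one diffraction vector k."""
    n = len(intensities)
    mean = sum(intensities) / n
    mean_sq = sum(i * i for i in intensities) / n
    return mean_sq / mean ** 2 - 1.0

uniform = normalized_variance([2.0, 2.0, 2.0, 2.0])    # no speckle
speckled = normalized_variance([0.5, 3.5, 0.5, 3.5])   # strong speckle
```

Repeating this over a range of k gives the V(k) curve whose peak positions and magnitudes, per the abstract, carry the size and volume-fraction information about the ordered regions.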

Bogle, Stephanie Nicole


Identifying and quantifying urban recharge: a review  

NASA Astrophysics Data System (ADS)

The sources of and pathways for groundwater recharge in urban areas are more numerous and complex than in rural environments. Buildings, roads, and other surface infrastructure combine with man-made drainage networks to change the pathways for precipitation. Some direct recharge is lost, but additional recharge can occur from storm drainage systems. Large amounts of water are imported into most cities for supply, distributed through underground pipes, and collected again in sewers or septic tanks. The leaks from these pipe networks often provide substantial recharge. Sources of recharge in urban areas are identified through piezometry, chemical signatures, and water balances. All three approaches have problems. Recharge is quantified either by individual components (direct recharge, water-mains leakage, septic tanks, etc.) or holistically. Working with individual components requires large amounts of data, much of which is uncertain and is likely to lead to large uncertainties in the final result. Recommended holistic approaches include the use of groundwater modelling and solute balances, where various types of data are integrated. Urban recharge remains an under-researched topic, with few high-quality case studies reported in the literature.

Lerner, David



The daily show: Journalism's jester  

Microsoft Academic Search

The social meaning of television news has been under transformation since the successes of cable news in the final years of the previous century. In their attempts to preserve viewership and to remain relevant, traditional broadcast news outlets increasingly emulate the conventions of cable news. Instead of retaining audiences, the result has been declining news content and a continued loss

Mark R McCarthy



"Show me" bioethics and politics.  


Missouri, the "Show Me State," has become the epicenter of several important national public policy debates, including abortion rights, the right to choose and refuse medical treatment, and, most recently, early stem cell research. In this environment, the Center for Practical Bioethics (formerly, Midwest Bioethics Center) emerged and grew. The Center's role in these "cultural wars" is not to advocate for a particular position but to provide well researched and objective information, perspective, and advocacy for the ethical justification of policy positions; and to serve as a neutral convener and provider of a public forum for discussion. In this article, the Center's work on early stem cell research is a case study through which to argue that not only the Center, but also the field of bioethics has a critical role in the politics of public health policy. PMID:17926217

Christopher, Myra J



Quantifying Mineralization Utilizing Bone Mineral Density Distribution in the Mandible  

PubMed Central

Background Microcomputed Tomography (μCT) is an efficient method for quantifying the density and mineralization of mandibular microarchitecture. Conventional radiomorphometrics such as Bone and Tissue Mineral Density are useful in determining the average, overall mineral content of a scanned specimen; however, solely relying on these metrics has limitations. Utilizing Bone Mineral Density Distribution (BMDD), the complex array of mineralization densities within a bone sample can be portrayed. This information is particularly useful as a computational feature reflective of the rate of bone turnover. Here we demonstrate the utility of BMDD analyses in the rat mandible and generate a platform for further exploration of mandibular pathology and treatment. Methods Male Sprague Dawley rats (n=8) underwent μCT and histogram data were generated from a selected volume of interest. A standard curve was derived for each animal and reference criteria were defined. An average histogram was produced for the group and descriptive analyses including the means and standard deviations are reported for each of the normative metrics. Results Mpeak (3444 Hounsfield Units, SD = 138) and Mwidth (2221 Hounsfield Units, SD = 628) are two metrics demonstrating reproducible parameters of BMDD with minimal variance. A total of eight valuable metrics quantifying biologically significant events concerning mineralization are reported. Conclusion Here we quantify the vast wealth of information depicted in the complete spectrum of mineralization established by the BMDD analysis. We demonstrate its potential in delivering mineralization data that encompasses and enhances conventional reporting of radiomorphometrics. Moreover, we explore its role and translational potential in craniofacial experimentation.
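
The two headline metrics can be read directly off a BMDD histogram: Mpeak is the most frequent mineralization density, and Mwidth is the spread around it. A minimal sketch in Python, using a toy Gaussian histogram; the bin range, counts, and FWHM-style width definition here are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

# Toy BMDD histogram: bin centers in Hounsfield Units, Gaussian-shaped counts
bin_centers = np.arange(2000, 5000, 50)
counts = np.exp(-0.5 * ((bin_centers - 3444) / 600.0) ** 2)

peak_idx = counts.argmax()
m_peak = bin_centers[peak_idx]          # Mpeak: most frequent mineralization density

half = counts[peak_idx] / 2.0
above = bin_centers[counts >= half]
m_width = above[-1] - above[0]          # Mwidth: full width at half maximum
```

With these toy inputs, Mpeak lands on the histogram bin nearest the Gaussian center and Mwidth recovers the distribution's FWHM to within one bin.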

Donneys, Alexis; Nelson, Noah S.; Deshpande, Sagar S.; Boguslawski, Matthew J.; Tchanque-Fossuo, Catherine N.; Farberg, Aaron S.; Buchman, Steven R.



Applicability of digital photogrammetry technique to quantify rock fracture surfaces  

NASA Astrophysics Data System (ADS)

Several automatic recording mechanical profilographs have been used to perform fracture roughness measurements. Previous studies indicated that, for accurate quantification of roughness, measurements should be obtained at a much higher resolution than is possible with mechanical profilographs. With laser profilometers, roughness can be measured at very small step spacings to a high degree of precision. A laser profilometer, however, is limited to laboratory measurements, and only small-scale roughness is represented. Waviness, or large-scale roughness, can be captured in situ using a digital photogrammetry technique. The applicability of digital photogrammetry to roughness estimation of fracture surfaces is addressed in this study. Surface measurements were performed on three rock fracture specimens using both the laser profilometer and the digital photogrammetry technique. Conventional roughness parameters, such as Z2, SDSL, SDH and Rp, as well as fractal roughness parameters, were estimated from the roughness data obtained with each method. The results showed considerable discrepancy between the conventional roughness parameters estimated from the laser profilometer and those from digital photogrammetry. In contrast, the fractal roughness parameters estimated by the two methods were close to each other. Notably, the fractal roughness parameters obtained from digital photogrammetry were lower than those based on the laser profilometer, even though a high degree of correlation exists between them; accurate estimation of fracture roughness therefore requires that values obtained from digital photogrammetry be corrected. The research conducted in this study shows that the digital photogrammetry technique has a strong capability to quantify the roughness of rock fractures accurately.
Acknowledgements. This work was supported by the 2011 Energy Efficiency and Resources Program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant.
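
Among the conventional roughness parameters mentioned, Z2 (the root-mean-square first derivative of the profile) is straightforward to compute from digitized profile data. A minimal sketch, assuming an equally spaced profile; the sinusoidal test profile is purely illustrative:

```python
import numpy as np

def z2_roughness(z, dx):
    """Z2: root-mean-square first derivative of an equally spaced profile."""
    slopes = np.diff(z) / dx
    return float(np.sqrt(np.mean(slopes ** 2)))

# Illustrative sinusoidal profile: 0.1 m long, 0.1 mm sampling step
x = np.linspace(0.0, 0.1, 1001)
z = 0.001 * np.sin(2 * np.pi * 50 * x)   # 1 mm amplitude, 20 mm wavelength
z2 = z2_roughness(z, x[1] - x[0])
```

Because Z2 depends on the sampling interval dx, comparing values across instruments (laser profilometer vs. photogrammetry) only makes sense at matched step spacings, which is one reason the conventional parameters disagreed between methods.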

Seo, H. K.; Noh, Y. H.; Um, J. G.; Choi, Y. S.; Park, M. H.



Quantifying occupant energy behavior using pattern analysis techniques  

SciTech Connect

Occupant energy behavior is widely agreed upon to have a major influence over the amount of energy used in buildings. Few attempts have been made to quantify this energy behavior, even though vast amounts of end-use data containing useful information lie fallow. This paper describes analysis techniques developed to extract behavioral information from collected residential end-use data. Analysis of the averages, standard deviations and frequency distributions of hourly data can yield important behavioral information. Pattern analysis can be used to group similar daily energy patterns together for a particular end-use or set of end-uses. Resulting pattern groups can then be examined statistically using multinomial logit modeling to find their likelihood of occurrence for a given set of daily conditions. These techniques were tested successfully using end-use data for families living in four heavily instrumented residences. Energy behaviors were analyzed for individual families during each heating season of the study. These behaviors (indoor temperature, ventilation load, water heating, large appliance energy, and miscellaneous outlet energy) capture how occupants directly control the residence. The pattern analysis and multinomial logit model were able to match the occupant behavior correctly 40 to 70% of the time. The steadier behaviors of indoor temperature and ventilation were matched most successfully. Simple changes to capture more detail during pattern analysis can increase accuracy for the more variable behavior patterns. The methods developed here show promise for extracting meaningful and useful information about occupant energy behavior from the stores of existing end-use data.
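
The grouping step described above, clustering similar daily energy patterns, can be sketched with a minimal k-means pass over 24-hour load profiles. This is an illustrative reconstruction, not the authors' exact pattern-analysis code; the synthetic "morning" and "evening" behaviors and the initialization choice are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24)

# Two synthetic daily behaviors: a morning-peaking and an evening-peaking load
morning = np.exp(-0.5 * ((hours - 7) / 2.0) ** 2)
evening = np.exp(-0.5 * ((hours - 19) / 2.0) ** 2)
days = np.vstack([
    morning + 0.05 * rng.standard_normal((50, 24)),
    evening + 0.05 * rng.standard_normal((50, 24)),
])

# Minimal k-means (k = 2): group similar daily patterns together.
# One initial center is taken from each end of the data set for stability.
centers = np.vstack([days[0], days[-1]])
for _ in range(10):
    dists = ((days[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)
    centers = np.array([days[labels == k].mean(axis=0) for k in range(2)])
```

The resulting group labels would then feed the multinomial logit step, which models the probability of each pattern group as a function of daily conditions.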

Emery, A. [Univ. of Washington, Seattle, WA (United States). Dept. of Mechanical Engineering; Gartland, L. [Lawrence Berkeley National Lab., CA (United States). Energy and Environment Div.



Quantifying Dirac hydrogenic effects via complexity measures  

NASA Astrophysics Data System (ADS)

The primary dynamical Dirac relativistic effects can only be seen in hydrogenic systems without the complications introduced by electron-electron interactions in many-electron systems. They are known to be the contraction towards the origin of the electronic charge in hydrogenic systems and the nodal disappearance (because of the raising of all the nonrelativistic minima) in the electron density of the excited states of these systems. In addition we point out the (largely ignored) gradient reduction of the charge density near and far from the nucleus. In this work we quantify these effects by means of single (Fisher information) and composite [Fisher-Shannon and López-Ruiz, Mancini, and Calbet (LMC)] information-theoretic measures. While the Fisher information measures the gradient content of the density, the (dimensionless) composite information-theoretic quantities grasp twofold facets of the electronic distribution: The Fisher-Shannon complexity measures the combined balance of the gradient content and the total extent of the electronic charge, and the LMC complexity quantifies the disequilibrium jointly with the spreading of the density in the configuration space. In contrast to other complexity notions (e.g., computational and algorithmic complexities), these two quantities describe intrinsic properties of the system because they do not depend on the context but are functionals of the electron density. Moreover, they are closely related to the intuitive notion of complexity because they are minimum for the two extreme (or least complex) distributions of perfect order and maximum disorder.
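
The LMC complexity mentioned above is the product of a Shannon-entropy term and a disequilibrium term, so it vanishes for both extreme distributions: the uniform density has zero disequilibrium and the perfectly ordered density has zero entropy. A small discrete-density sketch; the densities here are toy examples, not hydrogenic ones:

```python
import numpy as np

def shannon(p):
    """Shannon entropy of a discrete probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def lmc_complexity(p):
    """LMC complexity: entropy times disequilibrium (distance from uniformity)."""
    n = len(p)
    disequilibrium = float(np.sum((p - 1.0 / n) ** 2))
    return shannon(p) * disequilibrium

n = 64
uniform = np.full(n, 1.0 / n)        # maximum disorder: disequilibrium = 0
delta = np.zeros(n)
delta[0] = 1.0                       # perfect order: entropy = 0
peaked = np.exp(-np.arange(n) / 5.0)
peaked /= peaked.sum()               # an intermediate distribution
```

Only the intermediate distribution carries nonzero complexity, matching the abstract's remark that the measure is minimal for perfect order and maximum disorder.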

Bouvrie, P. A.; López-Rosa, S.; Dehesa, J. S.



New measurements quantify atmospheric greenhouse effect  

NASA Astrophysics Data System (ADS)

In spite of a large body of existing measurements of incoming short-wave solar radiation and outgoing long-wave terrestrial radiation at the surface of the Earth and, more recently, in the upper atmosphere, there are few observations documenting how radiation profiles change through the atmosphere—information that is necessary to fully quantify the greenhouse effect of Earth's atmosphere. Through the use of existing technology but employing improvements in observational techniques it may now be possible not only to quantify but also to understand how different components of the atmosphere (e.g., concentration of gases, cloud cover, moisture, and aerosols) contribute to the greenhouse effect. Using weather balloons equipped with radiosondes, Philipona et al. continuously measured radiation fluxes from the surface of Earth up to altitudes of 35 kilometers in the upper stratosphere. Combining data from flights conducted during both day and night with continuous 24-hour measurements made at the surface of the Earth, the researchers created radiation profiles of all four components necessary to fully capture the radiation budget of Earth, namely, the upward and downward short-wave and long-wave radiation as a function of altitude.

Bhattacharya, Atreyee



Control in bioreactors showing gradients  

Microsoft Academic Search

In large-scale bioreactors gradients often occur as a result of non-ideal mixing. This phenomenon complicates design and control of large-scale bioreactors. Gradients in the oxygen concentration can be modeled with a two-compartment model of the liquid phase. Application of this model had been suggested for the control of the dissolved oxygen concentration with a batch gluconic acid fermentation process as

S. R. Weijers; G. Honderd; K. Ch. A. M. Luyben



Casimir experiments showing saturation effects  

SciTech Connect

We address several different Casimir experiments where theory and experiment disagree. First out is the classical Casimir force measurement between two metal half spaces; here both in the form of the torsion pendulum experiment by Lamoreaux and in the form of the Casimir pressure measurement between a gold sphere and a gold plate as performed by Decca et al.; theory predicts a large negative thermal correction, absent in the high precision experiments. The third experiment is the measurement of the Casimir force between a metal plate and a laser irradiated semiconductor membrane as performed by Chen et al.; the change in force with laser intensity is larger than predicted by theory. The fourth experiment is the measurement of the Casimir force between an atom and a wall in the form of the measurement by Obrecht et al. of the change in oscillation frequency of a ⁸⁷Rb Bose-Einstein condensate trapped to a fused silica wall; the change is smaller than predicted by theory. We show that saturation effects can explain the discrepancies between theory and experiment observed in all these cases.

Sernelius, Bo E. [Division of Theory and Modeling, Department of Physics, Chemistry and Biology, Linkoeping University, SE-581 83 Linkoeping (Sweden)



Instrument Development for SHOW project  

NASA Astrophysics Data System (ADS)

Water is a critically important constituent throughout the stratosphere and mesosphere. The SHOW project will develop a new instrument to measure water vapour from 15 km to 85 km height on a global scale, using the unique capabilities provided by Spatial Heterodyne Spectroscopy (SHS). This work builds on Canadian expertise in fabricating solid Michelson interferometers to fill a significant niche in our current capability. In the SHS setup, the FTS mirrors are replaced by diffraction gratings at the Littrow configuration; wavelength-dependent Fizeau fringes are recorded by a 320 × 256 InGaAs near-infrared camera without any scanning elements. The high-resolution spectral information along one detector dimension can be obtained from Fourier analysis, and the other dimension provides the spatial information. For limb viewing, a field-widened SHS with a half-angle of 6 degrees for water observations at 1364 nm is desired; the resolution is 0.02 nm within a full bandwidth of 2 nm, and the resolving power is about 68,000. The laboratory work for the instrument development and the designing, building, and testing of the pre-prototype are presented.
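
The quoted resolving power follows directly from the stated wavelength and spectral resolution, R = λ/Δλ; a quick arithmetic check:

```python
# Check of the quoted figures: resolving power R = wavelength / resolution
wavelength_nm = 1364.0   # water observation wavelength
resolution_nm = 0.02     # spectral resolution within the 2 nm bandwidth
R = wavelength_nm / resolution_nm   # 68200, i.e. "about 68,000" as stated
```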

Lin, Y. L.; Shepherd, M. G.; Shepherd, G. G.; Solheim, B. H.; Brown, S.


Quantifying uncertainties in U.S. wildland fire emissions across space and time scales  

NASA Astrophysics Data System (ADS)

Smoke from wildland fire is a growing concern as air quality regulations tighten and public acceptance declines. Wildland fire emissions inventories are not only important for understanding smoke impacts on air quality but also in quantifying sources of greenhouse gas emissions. Wildland fire emissions can be calculated using a number of models and methods. We show an overview of results from the Smoke and Emissions Model Intercomparison Project (SEMIP) describing uncertainties in calculations of U.S. wildland fire emissions across space and time scales, from single fires to annual national totals. Differences between emissions calculated by different models and systems, and between satellite algorithms and ground-based systems, are shown. The relative importance of uncertainties in fire size and available fuel data, consumption modeling techniques, and emissions factors is compared and quantified and can be applied to various use cases that include air quality impact modeling and greenhouse gas accounting. The results of this work show where additional information and updated models can most improve wildland fire emission inventories.
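
A generic wildland-fire emissions calculation multiplies burned area, fuel loading, combustion completeness, and an emission factor; propagating uncertainty through that chain with Monte Carlo sampling is one way to compare the relative importance of each term. All distributions and parameter values below are illustrative assumptions, not SEMIP inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo draws

# Illustrative input distributions (assumed values, not SEMIP data)
area_ha = rng.normal(1000, 200, n).clip(min=0)        # burned area, hectares
fuel_Mg_ha = rng.normal(20, 5, n).clip(min=0)         # fuel loading, Mg/ha
combustion = rng.uniform(0.3, 0.9, n)                 # fraction of fuel consumed
ef_g_kg = rng.normal(12.5, 3.0, n).clip(min=0)        # PM2.5 emission factor, g/kg

# emissions (Mg PM2.5) = area x fuel x completeness x EF, with unit conversion
emissions_Mg = area_ha * fuel_Mg_ha * combustion * ef_g_kg / 1000.0
lo, hi = np.percentile(emissions_Mg, [5, 95])
```

Holding three terms fixed while sampling the fourth would isolate each term's contribution to the spread, which is the kind of relative-importance comparison the abstract describes.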

Larkin, N. K.; Strand, T. T.; Raffuse, S. M.; Drury, S.




SciTech Connect

We introduce a new technique to quantify highly structured spectra for which the definition of continua or spectral features in the observed flux spectra is difficult. The method employs wavelet transformations to decompose the observed spectra into different scales. A procedure is formulated to define the strength of spectral features so that the measured spectral indices are independent of the flux levels and are insensitive to the definition of continuum and also to reddening. This technique is applied to Type Ia supernovae (SNe) spectra, where correlations are revealed between luminosity and spectral features. The current technique may allow for luminosity corrections based on spectral features in the use of Type Ia SNe as a cosmological probe.
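
A flux-level-independent index can be built by decomposing the spectrum with a wavelet transform and normalizing the detail (feature) coefficients at one scale by the smooth (continuum-like) coefficients, so that a constant rescaling of the flux cancels. A pure-NumPy Haar sketch; the authors' actual wavelet and index definitions may differ:

```python
import numpy as np

def haar_decompose(x, levels):
    """Multilevel orthonormal Haar transform; len(x) must be divisible by 2**levels."""
    a = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        details.append(d)
    return a, details

def spectral_index(flux, levels=3):
    """Feature strength at one scale, normalized so flux rescaling cancels out."""
    a, details = haar_decompose(flux, levels)
    return float(np.linalg.norm(details[-1]) / np.linalg.norm(a))

rng = np.random.default_rng(1)
wl = np.linspace(0.0, 20.0 * np.pi, 1024)
spectrum = 10.0 + np.sin(wl) + 0.1 * rng.standard_normal(1024)
```

Multiplying the flux by any constant changes numerator and denominator identically, so the index is unchanged, which is the flux-level independence the abstract claims.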

Wagers, A.; Wang, L. [Department of Physics, Texas A and M, College Station, TX 77843 (United States); Asztalos, S., E-mail: [X-ray Instrumentation Associates, LLC, Hayward, CA 94551 (United States)



Quantifying the Reversible Association of Thermosensitive Nanoparticles  

NASA Astrophysics Data System (ADS)

Under many conditions, biomolecules and nanoparticles associate by means of attractive bonds, due to hydrophobic attraction. Extracting the microscopic association or dissociation rates from experimental data is complicated by the dissociation events and by the sensitivity of the binding force to temperature (T). Here we introduce a theoretical model that combined with light-scattering experiments allows us to quantify these rates and the reversible binding energy as a function of T. We apply this method to the reversible aggregation of thermoresponsive polystyrene/poly(N-isopropylacrylamide) core-shell nanoparticles, as a model system for biomolecules. We find that the binding energy changes sharply with T, and relate this remarkable switchable behavior to the hydrophobic-hydrophilic transition of the thermosensitive nanoparticles.

Zaccone, Alessio; Crassous, Jerome J.; Béri, Benjamin; Ballauff, Matthias



World Health Organization: Quantifying environmental health impacts  

NSDL National Science Digital Library

The World Health Organization works in a number of public health areas, and their work on quantifying environmental health impacts has been receiving praise from many quarters. This site provides materials on their work in this area and visitors with a penchant for international health relief efforts and policy analysis will find the site invaluable. Along the left-hand side of the site, visitors will find topical sections that include "Methods", "Assessment at national level", "Global estimates", and "Publications". In the "Methods" area, visitors will learn about how the World Health Organization's methodology for studying environmental health impacts has been developed and they can also read a detailed report on the subject. The "Global Estimates" area is worth a look as well, and users can look at their complete report, "Preventing Disease Through Healthy Environments: Towards An Estimate Of the Global Burden Of Disease".


Quantifying the complexities of Saccharomyces cerevisiae's ecosystem engineering via fermentation.  


The theory of niche construction suggests that organisms may engineer environments via their activities. Despite the potential of this phenomenon being realized by Darwin, the capability of niche construction to generally unite ecological and evolutionary biology has never been empirically quantified. Here I quantify the fitness effects of Saccharomyces cerevisiae's ecosystem engineering in a natural ferment in order to understand the interaction between ecological and evolutionary processes. I show that S. cerevisiae eventually dominates in fruit niches, where it is naturally initially rare, by modifying the environment through fermentation (the Crabtree effect) in ways which extend beyond just considering ethanol production. These data show that an additional cause of S. cerevisiae's competitive advantage over the other yeasts in the community is the production of heat via fermentation. Even though fermentation is less energetically efficient than respiration, it seems that this trait has been selected for because its net effect provides roughly a 7% fitness advantage over the other members of the community. These data provide an elegant example of niche construction because this trait clearly modifies the environment and therefore the selection pressures to which S. cerevisiae, and other organisms that access the fruit resource, including humans, are exposed. PMID:18724717

Goddard, Matthew R



Quantifying the 'Un-quantifiable': Valuing the Intangible Impacts of Hosting the Summer Olympic Games  

Microsoft Academic Search

The Summer Olympic Games, it is argued, is a significant cultural good as well as a popular sporting spectacle; it provides the impetus for environmental and economic improvements in the host city, gives a boost to national pride and so on. This paper examines whether there is an economic basis for such claims. To date, attempts to quantify the benefits

Giles Atkinson; Susana Mourato


Quantifying the statistical complexity of low-frequency fluctuations in semiconductor lasers with optical feedback  

SciTech Connect

Low-frequency fluctuations (LFFs) represent a dynamical instability that occurs in semiconductor lasers when they are operated near the lasing threshold and subject to moderate optical feedback. LFFs consist of sudden power dropouts followed by gradual, stepwise recoveries. We analyze experimental time series of intensity dropouts and quantify the complexity of the underlying dynamics employing two tools from information theory, namely, Shannon's entropy and the Martin, Plastino, and Rosso statistical complexity measure. These measures are computed using a method based on ordinal patterns, by which the relative length and ordering of consecutive interdropout intervals (i.e., the time intervals between consecutive intensity dropouts) are analyzed, disregarding the precise timing of the dropouts and the absolute durations of the interdropout intervals. We show that this methodology is suitable for quantifying subtle characteristics of the LFFs, and in particular the transition to fully developed chaos that takes place when the laser's pump current is increased. Our method shows that the statistical complexity of the laser does not increase continuously with the pump current, but levels off before reaching the coherence collapse regime. This behavior coincides with that of the first- and second-order correlations of the interdropout intervals, suggesting that these correlations, and not the chaotic behavior, are what determine the level of complexity of the laser's dynamics. These results hold for two different dynamical regimes, namely, sustained LFFs and coexistence between LFFs and steady-state emission.
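
The ordinal-pattern method referenced here maps each window of consecutive interdropout intervals to its rank ordering and computes Shannon entropy over the pattern frequencies; timing and absolute durations drop out by construction. A minimal Bandt-Pompe sketch with order-3 patterns; the test series are synthetic, not laser data:

```python
import math
from itertools import permutations
import numpy as np

def ordinal_pattern_probs(x, order=3):
    """Relative frequencies of Bandt-Pompe ordinal patterns in a series."""
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - order + 1):
        counts[tuple(np.argsort(x[i:i + order]))] += 1
    p = np.array(list(counts.values()), dtype=float)
    return p / p.sum()

def permutation_entropy(x, order=3):
    """Shannon entropy of the ordinal-pattern distribution, normalized to [0, 1]."""
    p = ordinal_pattern_probs(x, order)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)) / math.log(math.factorial(order)))

rng = np.random.default_rng(3)
random_intervals = rng.random(5000)              # uncorrelated dropout intervals
trend_intervals = np.arange(5000, dtype=float)   # perfectly ordered intervals
```

Uncorrelated intervals yield entropy near 1, a monotonic sequence yields 0; the statistical complexity measure then multiplies this entropy by a disequilibrium term over the same pattern distribution.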

Tiana-Alsina, J.; Torrent, M. C.; Masoller, C.; Garcia-Ojalvo, J. [Departament de Fisica i Enginyeria Nuclear, Universitat Politecnica de Catalunya, Campus de Terrassa, Edif. GAIA, Rambla de Sant Nebridi s/n, Terrassa E-08222 Barcelona (Spain); Rosso, O. A. [Departamento de Fisica, Instituto de Ciencias Exatas, Universidade Federal de Minas Gerais, Av. Antonio Carlos, 6627 Campus Pampulha, C.P. 702, 30123-970 Belo Horizonte, MG (Brazil); Chaos and Biology Group, Instituto de Calculo, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, 1428 Ciudad Universitaria, Buenos Aires (Argentina)



Quantifying nonverbal communicative behavior in face-to-face human dialogues  

NASA Astrophysics Data System (ADS)

The referred study is based on the assumption that understanding how humans use nonverbal behavior in dialogues can be very useful in the design of more natural-looking animated talking heads. The goal of the study is twofold: (1) to explore how people use specific facial expressions and head movements to serve important dialogue functions, and (2) to show evidence that it is possible to measure and quantify the entity of these movements with the Qualisys MacReflex motion tracking system. Naturally elicited dialogues between humans have been analyzed, with attention focused on those nonverbal behaviors that serve the very relevant functions of regulating the conversational flux (i.e., turn taking) and producing information about the state of communication (i.e., feedback). The results show that eyebrow raising, head nods, and head shakes are typical signals involved during the exchange of speaking turns, as well as in the production and elicitation of feedback. These movements can be easily measured and quantified, and this measure can be implemented in animated talking heads.

Skhiri, Mustapha; Cerrato, Loredana



Cross-linguistic relations between quantifiers and numerals in language acquisition: Evidence from Japanese  

Microsoft Academic Search

A study of 104 Japanese-speaking 2- to 5-year-olds tested the relation between numeral and quantifier acquisition. A first study assessed Japanese children’s comprehension of quantifiers, numerals, and classifiers. Relative to English-speaking counterparts, Japanese children were delayed in numeral comprehension at 2 years of age but showed no difference at 3 and 4 years of age. Also, Japanese 2-year-olds had better

David Barner; Amanda Libenson; Pierina Cheung; Mayu Takasaki



FDA Broadcast Shows Food Industry Personnel and ...  

Center for Food Safety and Applied Nutrition (CFSAN)

FDA Broadcast Shows Food Industry Personnel and Consumers How Proper Health and Hygiene Helps to Prevent Foodborne Illness Outbreaks.


Quantifying Hierarchy Stimuli in Systematic Desensitization Via GSR: A Preliminary Investigation  

ERIC Educational Resources Information Center

The aim of the method for quantifying hierarchy stimuli by Galvanic Skin Resistance recordings is to improve the results of systematic desensitization by attenuating the subjective influences in hierarchy construction which are common in traditional procedures. (Author/CS)

Barabasz, Arreed F.



Radiative transfer modeling for quantifying lunar surface minerals, particle size, and submicroscopic metallic Fe  

Microsoft Academic Search

Accuracy of Hapke's model is improved significantly by applying Newton's method. Results for quantifying lunar soil particle size (PS), submicroscopic metallic Fe (SMFe), and minerals are better than previous ones. Spectral effects of surface roughness and temperature are incorporated.

Shuai Li; Lin Li



Mapping the Galactic Halo. VIII. Quantifying Substructure  

NASA Astrophysics Data System (ADS)

We have measured the amount of kinematic substructure in the Galactic halo using the final data set from the Spaghetti project, a pencil-beam high-latitude sky survey. Our sample contains 101 photometrically selected and spectroscopically confirmed giants with accurate distance, radial velocity, and metallicity information. We have developed a new clustering estimator: the "4distance" measure, which when applied to our data set leads to the identification of one group and seven pairs of clumped stars. The group, with six members, can confidently be matched to tidal debris of the Sagittarius dwarf galaxy. Two pairs match the properties of known Virgo structures. Using models of the disruption of Sagittarius in Galactic potentials with different degrees of dark halo flattening, we show that this favors a spherical or prolate halo shape, as demonstrated by Newberg et al. using the Sloan Digital Sky Survey data. One additional pair can be linked to older Sagittarius debris. We find that 20% of the stars in the Spaghetti data set are in substructures. From comparison with random data sets, we derive a very conservative lower limit of 10% to the amount of substructure in the halo. However, comparison to numerical simulations shows that our results are also consistent with a halo entirely built up from disrupted satellites, provided that the dominating features are relatively broad due to early merging or relatively heavy progenitor satellites.

Starkenburg, Else; Helmi, Amina; Morrison, Heather L.; Harding, Paul; van Woerden, Hugo; Mateo, Mario; Olszewski, Edward W.; Sivarani, Thirupathi; Norris, John E.; Freeman, Kenneth C.; Shectman, Stephen A.; Dohm-Palmer, R. C.; Frey, Lucy; Oravetz, Dan



Quantifying human health risks from virginiamycin used in chickens.  


The streptogramin antimicrobial combination Quinupristin-Dalfopristin (QD) has been used in the United States since late 1999 to treat patients with vancomycin-resistant Enterococcus faecium (VREF) infections. Another streptogramin, virginiamycin (VM), is used as a growth promoter and therapeutic agent in farm animals in the United States and other countries. Many chickens test positive for QD-resistant E. faecium, raising concern that VM use in chickens might compromise QD effectiveness against VREF infections by promoting development of QD-resistant strains that can be transferred to human patients. Despite the potential importance of this threat to human health, quantifying the risk via traditional farm-to-fork modeling has proved extremely difficult. Enough key data (mainly on microbial loads at each stage) are lacking so that such modeling amounts to little more than choosing a set of assumptions to determine the answer. Yet, regulators cannot keep waiting for more data. Patients prescribed QD are typically severely ill, immunocompromised people for whom other treatment options have not readily been available. Thus, there is a pressing need for sound risk assessment methods to inform risk management decisions for VM/QD using currently available data. This article takes a new approach to the QD-VM risk modeling challenge. Recognizing that the usual farm-to-fork ("forward chaining") approach commonly used in antimicrobial risk assessment for food animals is unlikely to produce reliable results soon enough to be useful, we instead draw on ideas from traditional fault tree analysis ("backward chaining") to reverse the farm-to-fork process and start with readily available human data on VREF case loads and QD resistance rates. 
Combining these data with recent genogroup frequency data for humans, chickens, and other sources (Willems et al., 2000, 2001) allows us to quantify potential human health risks from VM in chickens in both the United States and Australia, two countries where regulatory action for VM is being considered. We present a risk simulation model, thoroughly grounded in data, that incorporates recent nosocomial transmission and genetic typing data. The model is used to estimate human QD treatment failures over the next five years with and without continued VM use in chickens. The quantitative estimates and probability distributions were implemented in a Monte Carlo simulation model for a five-year horizon beginning in the first quarter of 2002. In Australia, a Q1-2002 ban of virginiamycin would likely reduce average attributable treatment failures by 0.35 × 10^-3 cases, expected mortalities by 5.8 × 10^-5 deaths, and life years lost by 1.3 × 10^-3 for the entire population over five years. In the United States, where the number of cases of VRE is much higher, a Q1-2002 ban on VM is predicted to reduce average attributable treatment failures by 1.8 cases in the entire population over five years; expected mortalities by 0.29 cases; and life years lost by 6.3 over a five-year period. The model shows that the theoretical statistical human health benefits of a VM ban range from zero to less than one statistical life saved in both Australia and the United States over the next five years and are rapidly decreasing. Sensitivity analyses indicate that this conclusion is robust to key data gaps and uncertainties, e.g., about the extent of resistance transfer from chickens to people. PMID:15028017
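
The backward-chaining structure, starting from human case loads and resistance rates rather than farm-to-fork microbial loads, lends itself to a simple Monte Carlo sketch. Every distribution and parameter below is a made-up placeholder to show the shape of the calculation, not a value from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000  # Monte Carlo draws

# Illustrative placeholder distributions (not the paper's fitted inputs)
vref_cases_per_yr = rng.poisson(2000, n)            # QD-treated VREF cases per year
qd_resistance_rate = rng.beta(2, 98, n)             # ~2% QD resistance among isolates
chicken_attrib_frac = rng.uniform(0.0, 0.2, n)      # fraction attributable to VM use

# Attributable QD treatment failures over a five-year horizon
failures_5yr = 5 * vref_cases_per_yr * qd_resistance_rate * chicken_attrib_frac
```

Summarizing the resulting distribution (mean, upper percentiles) is what produces point estimates like the fractional-case reductions quoted above; the sensitivity analysis varies one input distribution at a time.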

Cox, Louis A; Popken, Douglas A



Using nitrate to quantify quick flow in a karst aquifer  

USGS Publications Warehouse

In karst aquifers, contaminated recharge can degrade spring water quality, but quantifying the rapid recharge (quick flow) component of spring flow is challenging because of its temporal variability. Here, we investigate the use of nitrate in a two-endmember mixing model to quantify quick flow in Barton Springs, Austin, Texas. Historical nitrate data from recharging creeks and Barton Springs were evaluated to determine a representative nitrate concentration for the aquifer water endmember (1.5 mg/L) and the quick flow endmember (0.17 mg/L for nonstormflow conditions and 0.25 mg/L for stormflow conditions). Under nonstormflow conditions for 1990 to 2005, model results indicated that quick flow contributed from 0% to 55% of spring flow. The nitrate-based two-endmember model was applied to the response of Barton Springs to a storm and results compared to those produced using the same model with δ18O and specific conductance (SC) as tracers. Additionally, the mixing model was modified to allow endmember quick flow values to vary over time. Of the three tracers, nitrate appears to be the most advantageous because it is conservative and because the difference between the concentrations in the two endmembers is large relative to their variance. The δ18O-based model was very sensitive to variability within the quick flow endmember, and SC was not conservative over the timescale of the storm response. We conclude that a nitrate-based two-endmember mixing model might provide a useful approach for quantifying the temporally variable quick flow component of spring flow in some karst systems. © 2008 National Ground Water Association.
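
The two-endmember mixing model itself is one line of algebra: given a measured spring concentration and the two endmember concentrations, the quick-flow fraction follows from mass balance. A sketch using the nonstormflow endmember values quoted above:

```python
def quick_flow_fraction(c_spring, c_aquifer=1.5, c_quick=0.17):
    """Fraction of spring flow that is quick flow, from nitrate mass balance:
    c_spring = f * c_quick + (1 - f) * c_aquifer, solved for f.
    Defaults are the paper's nonstormflow endmember concentrations (mg/L)."""
    return (c_aquifer - c_spring) / (c_aquifer - c_quick)
```

For an illustrative spring nitrate concentration of 1.0 mg/L this gives roughly 0.38, i.e. about 38% quick flow; the fraction moves to 0 as the spring water approaches the aquifer endmember and to 1 as it approaches the quick-flow endmember.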

Mahler, B. J.; Garner, B. D.



Using nitrate to quantify quick flow in a karst aquifer.  


In karst aquifers, contaminated recharge can degrade spring water quality, but quantifying the rapid recharge (quick flow) component of spring flow is challenging because of its temporal variability. Here, we investigate the use of nitrate in a two-endmember mixing model to quantify quick flow in Barton Springs, Austin, Texas. Historical nitrate data from recharging creeks and Barton Springs were evaluated to determine a representative nitrate concentration for the aquifer water endmember (1.5 mg/L) and the quick flow endmember (0.17 mg/L for nonstormflow conditions and 0.25 mg/L for stormflow conditions). Under nonstormflow conditions for 1990 to 2005, model results indicated that quick flow contributed from 0% to 55% of spring flow. The nitrate-based two-endmember model was applied to the response of Barton Springs to a storm and results compared to those produced using the same model with δ18O and specific conductance (SC) as tracers. Additionally, the mixing model was modified to allow endmember quick flow values to vary over time. Of the three tracers, nitrate appears to be the most advantageous because it is conservative and because the difference between the concentrations in the two endmembers is large relative to their variance. The δ18O-based model was very sensitive to variability within the quick flow endmember, and SC was not conservative over the timescale of the storm response. We conclude that a nitrate-based two-endmember mixing model might provide a useful approach for quantifying the temporally variable quick flow component of spring flow in some karst systems. PMID:18800970

Mahler, Barbara J; Garner, Bradley D



Quantifying spin mixing conductance in F/Pt (F =Ni, Fe, and Ni81Fe19) bilayer film  

NASA Astrophysics Data System (ADS)

The spin-mixing conductances in F/Pt (F = Ni, Fe, and Ni81Fe19) bilayer films were quantified from the peak-to-peak linewidth of ferromagnetic resonance (FMR) spectra, based on the spin-pumping model. When the Pt layer is attached to the F layer, we observed an enhancement of the FMR linewidth due to spin pumping. The experimental results show that the spin-mixing conductances in F/Pt (F = Ni, Fe, and Ni81Fe19) bilayer films are of the same order of magnitude, indicating that the spin-injection efficiency of the spin pumping is almost identical in these films.
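The standard spin-pumping analysis can be sketched as follows: extract the Gilbert damping from the peak-to-peak FMR linewidth, then convert the damping enhancement on adding Pt into an effective spin-mixing conductance. This is the textbook procedure, not necessarily the exact one used in the paper, and every numerical value below (linewidths, frequency, magnetization, thickness, g-factor) is hypothetical.

```python
import numpy as np

# Textbook spin-pumping relations (illustrative, with made-up numbers):
#   alpha = (sqrt(3)/2) * gamma * dH_pp / omega        (Lorentzian lineshape)
#   g_mix = 4*pi*Ms*d * (alpha_F/Pt - alpha_F) / (g * mu_B)
MU_B = 9.274e-24   # Bohr magneton (J/T)
GAMMA = 1.76e11    # electron gyromagnetic ratio (rad s^-1 T^-1)

def damping_from_linewidth(dH_pp, freq_hz):
    """Gilbert damping from the peak-to-peak FMR linewidth (T)."""
    omega = 2 * np.pi * freq_hz
    return (np.sqrt(3) / 2) * GAMMA * dH_pp / omega

def spin_mixing_conductance(alpha_fn, alpha_f, ms, d, g_factor=2.1):
    """Effective spin-mixing conductance (m^-2) from the damping enhancement."""
    return 4 * np.pi * ms * d * (alpha_fn - alpha_f) / (g_factor * MU_B)

# Hypothetical Ni81Fe19(10 nm)/Pt example at 9.4 GHz:
a_f = damping_from_linewidth(2.0e-3, 9.4e9)   # bare ferromagnet
a_fn = damping_from_linewidth(2.9e-3, 9.4e9)  # with Pt attached
g_mix = spin_mixing_conductance(a_fn, a_f, ms=8.0e5, d=10e-9)
print(f"{g_mix:.2e} m^-2")
```

With these illustrative inputs the result lands around 10^19 m^-2, the order of magnitude typically reported for permalloy/Pt interfaces.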

Yoshino, T.; Ando, K.; Harii, K.; Nakayama, H.; Kajiwara, Y.; Saitoh, E.



Quantifying altered long-term potentiation in the CA1 hippocampus.  


Long-term potentiation (LTP) of synaptic transmission is a widely accepted model of learning and memory. In vitro brain slice techniques were used to investigate the effects of cortical spreading depression (CSD) and picrotoxin, an antagonist of the gamma-aminobutyric acid A (GABA(A)) receptor, on the tetanus-induced long-term potentiation of field excitatory postsynaptic potentials. Cortical spreading depression is involved in glutamate desensitization; GABA(A) antagonists, on the other hand, can increase postsynaptic excitability. This study shows that picrotoxin effectively induced long-term potentiation, with 142.25 ± 4.18% of baseline in the picrotoxin group (n = 8) versus 134.36 ± 2.35% of baseline in the control group (n = 10). In the group in which picrotoxin was applied after CSD, we obtained the smallest LTP magnitude (120.15 ± 3.73% of baseline, n = 8). These results suggest that picrotoxin increases hippocampal activity and LTP, whereas CSD reduces LTP magnitude. In addition, the results suggest that the decay rate of post-tetanic potentiation is directly related to LTP. Moreover, the data were fitted by nonlinear least squares, allowing LTP to be quantified. The nonlinear character of LTP influenced the fitting, improving the accuracy of the parameters and their compatibility with the combinations of stimuli that produce LTP. PMID:22934805
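One simple way to quantify potentiation decay by least squares, in the spirit of the abstract, is to model the fEPSP slope as an exponential relaxation toward baseline and fit it. The data, model form, and parameters below are synthetic illustrations; the paper's actual fitting procedure may differ.

```python
import numpy as np

# Illustrative least-squares quantification of potentiation decay:
# model the fEPSP slope (% of baseline) as y(t) = baseline + A*exp(-t/tau),
# then recover A and tau by log-linear least squares on (y - baseline).
t = np.arange(0.0, 30.0, 1.0)            # minutes after tetanus (synthetic)
y = 100.0 + 42.0 * np.exp(-t / 5.0)      # synthetic noiseless recording

slope, intercept = np.polyfit(t, np.log(y - 100.0), 1)
tau = -1.0 / slope        # decay time constant (min)
amp = np.exp(intercept)   # initial potentiation above baseline (%)
print(round(tau, 3), round(amp, 3))  # → 5.0 42.0
```

With noisy data one would fit the exponential directly (e.g., nonlinear least squares) rather than in log space, since log-transforming reweights the errors.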

Saleewong, T; Srikiatkhachorn, A; Maneepark, M; Chonwerayuth, A; Bongsebandhu-phubhakdi, S



Cascading "Triclick" functionalization of poly(caprolactone) thin films quantified via a quartz crystal microbalance.  


A series of mono- and multifunctionalized degradable polyesters bearing various "clickable" groups, including ketone, alkyne, azide, and methyl acrylate (MA) are reported. Using this approach, we demonstrate a cascade approach to immobilize and quantitate three separate bioactive groups onto poly(caprolactone) (PCL) thin films. The materials are based on tunable copolymer compositions of ε-caprolactone and 2-oxepane-1,5-dione. A quartz crystal microbalance (QCM) was used to quantify the rate and extent of surface conjugation between RGD peptide and polymer thin films using "click" chemistry methods. The results show that alkyne-functionalized polymers have the highest conversion efficiency, followed by MA and azide polymers, while polymer films possessing keto groups are less amenable to surface functionalization. The successful conjugation was further confirmed by static contact angle measurements, with a smaller contact angle correlating directly with lower levels of surface peptide conjugation. QCM results quantify the sequential immobilization of peptides on the PCL thin films and indicate that Michael addition must occur first, followed by azide-alkyne Huisgen cycloadditions. PMID:23795681

Lin, Fei; Zheng, Jukuan; Yu, Jiayi; Zhou, Jinjun; Becker, Matthew L



Smart velocity ranging quantifiable optical microangiography  

NASA Astrophysics Data System (ADS)

We introduce a new type of Optical Microangiography (OMAG), called Quantifiable Optical Microangiography (QOMAG), that is capable of quantitative flow imaging with smart velocity ranging. To extract multi-range velocities, two three-dimensional data sets are acquired over the same imaging area. One data set uses dense scanning in the B-scan direction, with Doppler analysis performed on subsequent A-scans, while the other uses dense scanning in the C-scan direction, with Doppler analysis performed on consecutive B-scans. Because the velocity ranging is determined by the time interval between consecutive measurements of the spectral fringes, a longer time interval gives higher sensitivity to slow velocities. By simultaneously acquiring data sets with different time intervals, we can perform smart velocity-ranging quantification of blood flow characterized by different velocity values. The feasibility of QOMAG for variable blood flow imaging is demonstrated by in vivo studies of cerebral blood flow in a mouse model. QOMAG provides detailed, multi-range blood flow maps within the intracranial dura mater and cortex of the mouse brain.
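The reason two scan directions give two velocity ranges can be made concrete with the standard phase-resolved Doppler relation: axial velocity maps to the phase shift between repeated measurements as v = λΔφ/(4πnT), so the inter-measurement interval T sets the largest unambiguous (Δφ = ±π) velocity. The wavelength, refractive index, and scan rates below are hypothetical, not the system parameters of this paper.

```python
import numpy as np

# Standard phase-resolved Doppler OCT relation (illustrative numbers):
# v = lam * dphi / (4 * pi * n * T); at dphi = pi the velocity range saturates.
def max_velocity(lam, n, T):
    """Largest unambiguous axial velocity for measurement interval T (m/s)."""
    return lam * np.pi / (4 * np.pi * n * T)

lam, n = 1310e-9, 1.35        # hypothetical centre wavelength (m), tissue index
T_ascan = 1 / 47_000          # A-scan interval: fast-flow range
T_bscan = 1 / 200             # B-scan interval: slow-flow range
v_fast = max_velocity(lam, n, T_ascan)
v_slow = max_velocity(lam, n, T_bscan)
print(v_fast / v_slow)        # ratio of the two velocity ranges
```

The ratio of the two ranges is just the ratio of the two time intervals, which is why combining A-scan-based and B-scan-based Doppler analysis spans widely different flow speeds.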

Zhi, Zhongwei; Wang, Ruikang K.



How to quantify energy landscapes of solids  

NASA Astrophysics Data System (ADS)

We explore whether the topology of energy landscapes in chemical systems obeys any rules and what these rules are. To answer this and related questions we use several tools: (i) Reduced energy surface and its density of states, (ii) descriptor of structure called fingerprint function, which can be represented as a one-dimensional function or a vector in abstract multidimensional space, (iii) definition of a "distance" between two structures enabling quantification of energy landscapes, (iv) definition of a degree of order of a structure, and (v) definitions of the quasi-entropy quantifying structural diversity. Our approach can be used for rationalizing large databases of crystal structures and for tuning computational algorithms for structure prediction. It enables quantitative and intuitive representations of energy landscapes and reappraisal of some of the traditional chemical notions and rules. Our analysis confirms the expectations that low-energy minima are clustered in compact regions of configuration space ("funnels") and that chemical systems tend to have very few funnels, sometimes only one. This analysis can be applied to the physical properties of solids, opening new ways of discovering structure-property relations. We quantitatively demonstrate that crystals tend to adopt one of the few simplest structures consistent with their chemistry, providing a thermodynamic justification of Pauling's fifth rule.
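A structure "distance" built from fingerprint vectors can be sketched with a cosine-based measure, one common choice in this literature: two structures with parallel fingerprints are identical (distance 0), orthogonal ones maximally different. The fingerprint vectors below are made-up toy vectors, not real structures.

```python
import numpy as np

# Cosine-based distance between fingerprint vectors (toy illustration):
# 0 for identical fingerprints, approaching 1 for maximally different ones.
def fingerprint_distance(f1, f2):
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    cos = f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2))
    return 0.5 * (1.0 - cos)

a = [0.2, 0.9, 0.1, 0.4]   # hypothetical fingerprint of structure A
b = [0.2, 0.9, 0.1, 0.4]   # identical fingerprint
c = [0.8, 0.1, 0.7, 0.0]   # a quite different fingerprint
print(fingerprint_distance(a, b), fingerprint_distance(a, c))
```

With such a metric in hand, concepts in the abstract like clustering of low-energy minima into "funnels" become statements about distances between points in fingerprint space.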

Oganov, Artem R.; Valle, Mario



Quantifying the Magnetic Advantage in Magnetotaxis  

PubMed Central

Magnetotactic bacteria are characterized by the production of magnetosomes, nanoscale particles of lipid bilayer encapsulated magnetite, that act to orient the bacteria in magnetic fields. These magnetosomes allow magneto-aerotaxis, which is the motion of the bacteria along a magnetic field and toward preferred concentrations of oxygen. Magneto-aerotaxis has been shown to direct the motion of these bacteria downward toward sediments and microaerobic environments favorable for growth. Herein, we compare the magneto-aerotaxis of wild-type, magnetic Magnetospirillum magneticum AMB-1 with a nonmagnetic mutant we have engineered. Using an applied magnetic field and an advancing oxygen gradient, we have quantified the magnetic advantage in magneto-aerotaxis as a more rapid migration to preferred oxygen levels. Magnetic, wild-type cells swimming in an applied magnetic field more quickly migrate away from the advancing oxygen than either wild-type cells in a zero field or the nonmagnetic cells in any field. We find that the responses of the magnetic and mutant strains are well described by a relatively simple analytical model, an analysis of which indicates that the key benefit of magnetotaxis is an enhancement of a bacterium's ability to detect oxygen, not an increase in its average speed moving away from high oxygen concentrations.

Smith, M. J.; Sheehan, P. E.; Perry, L. L.; O'Connor, K.; Csonka, L. N.; Applegate, B. M.; Whitman, L. J.



Quantifying the transmission potential of pandemic influenza  

NASA Astrophysics Data System (ADS)

This article reviews quantitative methods to estimate the basic reproduction number of pandemic influenza, a key threshold quantity to help determine the intensity of interventions required to control the disease. Although it is difficult to assess the transmission potential of a probable future pandemic, historical epidemiologic data is readily available from previous pandemics, and as a reference quantity for future pandemic planning, mathematical and statistical analyses of historical data are crucial. In particular, because many historical records tend to document only the temporal distribution of cases or deaths (i.e. epidemic curve), our review focuses on methods to maximize the utility of time-evolution data and to clarify the detailed mechanisms of the spread of influenza. First, we highlight structured epidemic models and their parameter estimation method which can quantify the detailed disease dynamics including those we cannot observe directly. Duration-structured epidemic systems are subsequently presented, offering firm understanding of the definition of the basic and effective reproduction numbers. When the initial growth phase of an epidemic is investigated, the distribution of the generation time is key statistical information to appropriately estimate the transmission potential using the intrinsic growth rate. Applications of stochastic processes are also highlighted to estimate the transmission potential using similar data. Critically important characteristics of influenza data are subsequently summarized, followed by our conclusions to suggest potential future methodological improvements.
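The link between the intrinsic growth rate and the reproduction number mentioned above is usually expressed through the generation-time distribution g(t) via the Euler-Lotka relation (as popularized by Wallinga and Lipsitch): R = 1 / Σ g(t)·e^(−rt). The growth rate and the discretised generation-time distribution below are hypothetical values for illustration.

```python
import numpy as np

# Euler-Lotka estimate of the reproduction number from the intrinsic
# growth rate r and a discretised generation-time distribution g(t).
def reproduction_number(r, t, g):
    g = np.asarray(g, float) / np.sum(g)   # normalise the distribution
    return 1.0 / np.sum(g * np.exp(-r * np.asarray(t, float)))

t = [1, 2, 3, 4, 5, 6]                       # days since infection
g = [0.05, 0.20, 0.35, 0.25, 0.10, 0.05]     # hypothetical generation-time weights
R = reproduction_number(0.15, t, g)          # r = 0.15/day (illustrative)
print(round(R, 2))
```

Note the sanity check built into the formula: at zero growth (r = 0) it returns R = 1 exactly, the epidemic threshold.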

Chowell, Gerardo; Nishiura, Hiroshi



Quantifying instantaneous performance in alpine ski racing.  


Alpine ski racing is a popular sport in many countries and a lot of research has gone into optimising athlete performance. Two factors influence athlete performance in a ski race: speed and the chosen path between the gates. However, to date there is no objective, quantitative method to determine instantaneous skiing performance that takes both of these factors into account. The purpose of this short communication was to define a variable quantifying instantaneous skiing performance and to study how this variable depended on the skiers' speed and on their chosen path. Instantaneous skiing performance was defined as time loss per elevation difference dt/dz, which depends on the skier's speed v(z), and the distance travelled per elevation difference ds/dz. Using kinematic data collected in an earlier study, it was evaluated how these variables can be used to assess the individual performance of six ski racers in two slalom turns. The performance analysis conducted in this study might be a useful tool not only for athletes and coaches preparing for competition, but also for sports scientists investigating skiing techniques or engineers developing and testing skiing equipment. PMID:22620279
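The performance variable defined above follows directly from dt = ds/v: time loss per elevation difference is dt/dz = (ds/dz)/v(z). A minimal numerical sketch, using a short synthetic trajectory rather than real kinematic data:

```python
import numpy as np

# Instantaneous skiing performance dt/dz = (ds/dz) / v(z), evaluated on a
# synthetic trajectory sampled at 1 m elevation intervals (illustrative data).
z = np.array([0.0, -1.0, -2.0, -3.0, -4.0])   # elevation (m, decreasing)
s = np.array([0.0,  2.5,  5.2,  8.1, 11.3])   # distance travelled (m)
v = np.array([12.0, 12.4, 12.9, 13.1, 13.0])  # speed (m/s)

ds_dz = np.gradient(s, z)       # distance travelled per metre of elevation drop
dt_dz = np.abs(ds_dz) / v       # time spent per metre of elevation drop (s/m)
print(np.round(dt_dz, 3))
```

Smaller dt/dz means better instantaneous performance; a skier can lower it either by carrying more speed or by choosing a more direct path (smaller ds/dz).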

Federolf, Peter Andreas



A framework for quantifying net benefits of alternative prognostic models‡  

PubMed Central

New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd.

Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G



A framework for quantifying net benefits of alternative prognostic models.  


New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. PMID:21905066
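The net-benefit idea can be illustrated with the classical decision-analytic form (Vickers and Elkin): at treatment threshold p_t, NB = TP/n − (FP/n)·p_t/(1−p_t). The paper above extends such reasoning to life years and multi-study data; this sketch only shows the basic form, with synthetic risks and outcomes.

```python
import numpy as np

# Classical decision-curve net benefit at a treatment threshold p_t;
# risks and outcomes below are synthetic illustration data.
def net_benefit(risk, event, p_t):
    risk, event = np.asarray(risk, float), np.asarray(event, bool)
    treat = risk >= p_t                # treat everyone at or above threshold
    n = len(risk)
    tp = np.sum(treat & event) / n     # true-positive rate (per person)
    fp = np.sum(treat & ~event) / n    # false-positive rate (per person)
    return tp - fp * p_t / (1 - p_t)

risk  = np.array([0.05, 0.10, 0.25, 0.40, 0.70, 0.90])   # model-predicted risks
event = np.array([False, False, True, False, True, True])
print(round(net_benefit(risk, event, 0.20), 3))
```

Comparing this quantity between two prognostic models at a clinically agreed threshold is the simplest version of the model comparison the abstract describes.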

Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G



Dispersal range analysis: quantifying individual variation in dispersal behaviour.  


A complete understanding of animal dispersal requires knowledge not only of its consequences at population and community levels, but also of the behavioural decisions made by dispersing individuals. Recent theoretical work has emphasised the importance of this dispersal process, particularly the phase in which individuals search the landscape for breeding opportunities. However, empirical advances are currently hampered by a lack of tools for quantifying these dispersal search tactics. Here, we review existing methods for quantifying movement that are appropriate for the dispersal search process, describe several new techniques that we developed for characterising movement and behaviour through an individual's dispersal range, and illustrate their use with data from Australasian treecreepers (Climacteridae). We also describe how the quantitative parameters we discuss are calculated in a freely available computer software package that we designed. Specifically, we present methods for calculating the area searched during dispersal, search rate, thoroughness, intensity, philopatry of search, timing of exploration, and surreptitiousness. When we applied this approach to the study of dispersal in treecreepers, we found that search area, philopatry and timing of exploration showed the greatest individual variation. Furthermore, search area, search rate, thoroughness and philopatry of search were all correlated, suggesting they may be useful parameters for further research on the causes and consequences of different dispersal search tactics. Finally, we make recommendations for modifying radiotracking protocols to facilitate more accurate assessment of individual variation in the dispersal process, and suggest future directions for this type of empirical work at the interface of population and behavioural ecology. PMID:15378345

Doerr, Erik D; Doerr, Veronica A J



Using multiscale norms to quantify mixing and transport  

NASA Astrophysics Data System (ADS)

Mixing is relevant to many areas of science and engineering, including the pharmaceutical and food industries, oceanography, atmospheric sciences and civil engineering. In all these situations one goal is to quantify and often then to improve the degree of homogenization of a substance being stirred, referred to as a passive scalar or tracer. A classical measure of mixing is the variance of the concentration of the scalar, which is the L2 norm of a mean-zero concentration field. Recently, other norms have been used to quantify mixing, in particular the mix-norm as well as negative Sobolev norms. These norms have the advantage that unlike variance they decay even in the absence of diffusion, and their decay corresponds to the flow being mixing in the sense of ergodic theory. General Sobolev norms weigh scalar gradients differently, and are known as multiscale norms for mixing. We review the applications of such norms to mixing and transport, and show how they can be used to optimize the stirring and mixing of a decaying passive scalar. We then review recent work on the less-studied case of a continuously replenished scalar field—the source-sink problem. In that case the flows that optimally reduce the norms are associated with transport rather than mixing: they push sources onto sinks, and vice versa.
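A minimal sketch of a negative Sobolev mixing norm on a periodic interval: for a mean-zero scalar with Fourier coefficients c_k, the H^{-1} norm weights each mode by 1/|k|, so a finer-scale (better mixed) field has a smaller norm even at fixed variance. The grid and test fields below are illustrative.

```python
import numpy as np

# H^{-1} norm of a mean-zero periodic scalar via the FFT: each Fourier
# mode contributes |c_k|^2 / k^2, so fine scales are down-weighted.
def hm1_norm(c, L=2 * np.pi):
    n = len(c)
    ck = np.fft.rfft(c) / n                        # Fourier coefficients
    k = np.fft.rfftfreq(n, d=L / n) * 2 * np.pi    # wavenumbers (k=0 excluded below)
    return np.sqrt(2.0 * np.sum(np.abs(ck[1:]) ** 2 / k[1:] ** 2))

x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
coarse, fine = np.sin(x), np.sin(8 * x)   # same variance, different scales
print(hm1_norm(coarse) / hm1_norm(fine))
```

Unlike the variance (which is identical for the two fields), the H^{-1} norm of sin(8x) is eight times smaller than that of sin(x), which is exactly why such norms decay under stirring even without diffusion.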

Thiffeault, Jean-Luc



Quantifying uncertainty in LCA-modelling of waste management systems  

SciTech Connect

Highlights: • Uncertainty in LCA-modelling of waste management is significant. • Model, scenario and parameter uncertainties contribute. • A sequential procedure for quantifying uncertainty is proposed. • Application of the procedure is illustrated by a case study. - Abstract: Uncertainty analysis in LCA studies has been subject to major progress over the last years. In the context of waste management, various methods have been implemented, but a systematic method for uncertainty analysis of waste-LCA studies is lacking. The objective of this paper is (1) to present the sources of uncertainty specifically inherent to waste-LCA studies, (2) to select and apply several methods for uncertainty analysis and (3) to develop a general framework for quantitative uncertainty assessment of LCA of waste management systems. The suggested method is a sequence of four steps combining the selected methods: (Step 1) a sensitivity analysis evaluating the sensitivities of the results with respect to the input uncertainties, (Step 2) an uncertainty propagation providing appropriate tools for representing uncertainties and calculating the overall uncertainty of the model results, (Step 3) an uncertainty contribution analysis quantifying the contribution of each parameter uncertainty to the final uncertainty and (Step 4), as a new approach, a combined sensitivity analysis providing a visualisation of the shift in the ranking of different options due to variations of selected key parameters. This tiered approach optimises the resources available to LCA practitioners by only propagating the most influential uncertainties.
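Steps 2 and 3 above (uncertainty propagation, then contribution analysis) can be sketched for a toy LCA-style model. The impact model, parameter names, and distributions below are invented solely to illustrate the mechanics.

```python
import numpy as np

# Toy uncertainty propagation (Step 2) and contribution analysis (Step 3)
# for an invented waste-treatment impact model; all numbers are hypothetical.
rng = np.random.default_rng(1)
N = 50_000

params = {
    "energy_recovery": rng.normal(0.60, 0.05, N),  # efficiency (-)
    "ch4_emission":    rng.normal(2.0, 0.40, N),   # kg CO2-eq / t waste
    "transport":       rng.normal(5.0, 0.50, N),   # kg CO2-eq / t waste
}

def impact(p):
    # Invented linear impact model: emissions minus an energy-recovery credit.
    return p["ch4_emission"] + p["transport"] - 8.0 * p["energy_recovery"]

total = impact(params)   # Step 2: propagated output distribution

# Step 3: variance contribution of each parameter, others frozen at their mean.
contrib = {}
for name in params:
    frozen = {k: (v if k == name else np.full(N, v.mean()))
              for k, v in params.items()}
    contrib[name] = np.var(impact(frozen)) / np.var(total)
print({k: round(v, 2) for k, v in contrib.items()})
```

For a linear model the contributions sum to roughly one; the largest contributor is the natural first target for better data, which is the resource-prioritisation point the abstract makes.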

Clavreul, Julie [Department of Environmental Engineering, Technical University of Denmark, Miljoevej, Building 113, DK-2800 Kongens Lyngby (Denmark)]; Guyonnet, Dominique [BRGM, ENAG BRGM-School, BP 6009, 3 Avenue C. Guillemin, 45060 Orleans Cedex (France)]; Christensen, Thomas H. [Department of Environmental Engineering, Technical University of Denmark, Miljoevej, Building 113, DK-2800 Kongens Lyngby (Denmark)]



The quantified process approach: an emerging methodology to neuropsychological assessment.  


An important development in the field of neuropsychological assessment is the quantification of the process by which individuals solve common neuropsychological tasks. The present article outlines the history leading to this development, the Quantified Process Approach, and suggests that this line of applied research bridges the gap between the clinical and statistical approaches to neuropsychological assessment. It is argued that the enterprise of quantifying the process approach proceeds via three major methodologies: (1) the "Satellite" Testing Paradigm: an approach by which new tasks are developed to complement existing tests so as to clarify a given test performance; (2) the Composition Paradigm: an approach by which data on a given test that have been largely overlooked are compiled and subsequently analyzed, resulting in new indices that are believed to reflect underlying constructs accounting for test performance; and (3) the Decomposition Paradigm: an approach which investigates the relationship between test items of a given measure according to underlying facets, resulting in the development of new subscores. The article illustrates each of the above paradigms, offers a critique of this new field according to prevailing professional standards for psychological measures, and provides suggestions for future research. PMID:10916196

Poreh, A M



Quantifying the length-scale dependence of surf zone advection  

NASA Astrophysics Data System (ADS)

We investigate the momentum balance in the surf zone, in a setting which is weakly varying in the alongshore direction. Our focus is on the role of nonlinear advective terms. Using numerical experiments, we find that advection tends to counteract alongshore variations in momentum flux, resulting in more uniform kinematics. Additionally, advection causes a shifting of the kinematic response in the direction of flow. These effects are strongest at short alongshore length scales, and/or strong alongshore-mean velocity. The length-scale dependence is investigated using spectral analysis, where the effect of advective terms is treated as a transfer function applied to the solution to the linear (advection-free) equations of motion. The transfer function is then shown to be governed by a nondimensional parameter which quantifies the relative scales of advection and bottom stress, analogous to a Reynolds Number. Hence, this parameter can be used to quantify the length scales at which advective terms, and the resulting effects described above, are important. We also introduce an approximate functional form for the transfer function, which is valid asymptotically within a restricted range of length scales.

Wilson, Greg W.; Özkan-Haller, H. Tuba; Holman, Robert A.



Quantifying Synapses: an Immunocytochemistry-based Assay to Quantify Synapse Number  

PubMed Central

One of the most important goals in neuroscience is to understand the molecular cues that instruct early stages of synapse formation. As such it has become imperative to develop objective approaches to quantify changes in synaptic connectivity. Starting from sample fixation, this protocol details how to quantify synapse number both in dissociated neuronal culture and in brain sections using immunocytochemistry. Using compartment-specific antibodies, we label presynaptic terminals as well as sites of postsynaptic specialization. We define synapses as points of colocalization between the signals generated by these markers. The number of these colocalizations is quantified using a plug-in, Puncta Analyzer (written by Bary Wark, available upon request), under the ImageJ analysis software platform. The synapse assay described in this protocol can be applied to any neural tissue or culture preparation for which you have selective pre- and postsynaptic markers. This synapse assay is a valuable tool that can be widely utilized in the study of synaptic development.
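The core colocalization count can be sketched independently of ImageJ: threshold the pre- and postsynaptic channels into binary masks, intersect them, and count the connected puncta. The tiny arrays below are synthetic stand-ins for immunostained images, and this is only an illustration of the principle, not a reimplementation of Puncta Analyzer.

```python
import numpy as np
from scipy import ndimage

# Synthetic pre-/postsynaptic masks; a "synapse" is a connected component
# of the pixelwise AND of the two channels.
pre = np.zeros((8, 8), dtype=bool)
post = np.zeros((8, 8), dtype=bool)
pre[1:3, 1:3] = True    # presynaptic punctum A
pre[5:7, 5:7] = True    # presynaptic punctum B
post[2:4, 2:4] = True   # postsynaptic punctum overlapping A only

coloc = pre & post                         # colocalized pixels
labels, n_synapses = ndimage.label(coloc)  # count connected puncta
print(n_synapses)  # → 1
```

In practice one would threshold fluorescence intensities (rather than start from boolean masks) and apply a minimum punctum size to reject noise, but the counting step is the same.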

Ippolito, Dominic M.; Eroglu, Cagla



Visualizing and Quantifying Blob Characteristics on NSTX  

NASA Astrophysics Data System (ADS)

Understanding the radial motion of blob-filaments in the tokamak edge plasma is important since this motion can affect the width of the heat and particle scrape-off layer (SOL) [1]. High resolution (64x80), high speed (400,000 frames/sec) edge turbulence movies taken of the NSTX outer midplane separatrix region have recently been analyzed for blob motion. Regions of high light emission from gas puff imaging within a 25x30 cm cross-section were used to track blob-filaments in the plasma edge and into the SOL. Software tools have been developed for visualizing blob movement and automatically generating statistics of blob speed, shape, amplitude, size, and orientation; thousands of blobs have been analyzed for dozens of shots. The blob tracking algorithm and resulting database entries are explained in detail. Visualization tools also show how poloidal and radial motion change as blobs move through the scrape-off-layer (SOL), e.g. suggesting the influence of sheared flow. Relationships between blob size and velocity are shown for various types of plasmas and compared with simplified theories of blob motion. This work was supported by DOE Contract DE-AC02-09-CH11466. [1] J.R. Myra et al, Phys. Plasmas 18, 012305 (2011)

Davis, William; Zweben, Stewart; Myra, James; D'Ippolito, Daniel; Ko, Matthew



Choosing appropriate techniques for quantifying groundwater recharge  

NASA Astrophysics Data System (ADS)

Various techniques are available to quantify recharge; however, choosing appropriate techniques is often difficult. Important considerations in choosing a technique include space/time scales, range, and reliability of recharge estimates based on different techniques; other factors may limit the application of particular techniques. The goal of the recharge study is important because it may dictate the required space/time scales of the recharge estimates. Typical study goals include water-resource evaluation, which requires information on recharge over large spatial scales and on decadal time scales; and evaluation of aquifer vulnerability to contamination, which requires detailed information on spatial variability and preferential flow. The range of recharge rates that can be estimated using different approaches should be matched to expected recharge rates at a site. The reliability of recharge estimates using different techniques is variable. Techniques based on surface-water and unsaturated-zone data provide estimates of potential recharge, whereas those based on groundwater data generally provide estimates of actual recharge. Uncertainties in each approach to estimating recharge underscore the need for application of multiple techniques to increase reliability of recharge estimates.

Scanlon, Bridget; Healy, Richard; Cook, Peter



Quantifying scale relationships in snow distributions  

NASA Astrophysics Data System (ADS)

Spatial distributions of snow in mountain environments represent the time integration of accumulation and ablation processes, and are strongly and dynamically linked to mountain hydrologic, ecologic, and climatic systems. Accurate measurement and modeling of the spatial distribution and variability of the seasonal mountain snowpack at different scales are imperative for water supply and hydropower decision-making, for investigations of land-atmosphere interaction or biogeochemical cycling, and for accurate simulation of earth system processes and feedbacks. Assessment and prediction of snow distributions in complex terrain are heavily dependent on scale effects, as the pattern and magnitude of variability in snow distributions depends on the scale of observation. Measurement and model scales are usually different from process scales, and thereby introduce a scale bias to the estimate or prediction. To quantify this bias, or to properly design measurement schemes and model applications, the process scale must be known or estimated. Airborne Light Detection And Ranging (lidar) products provide high-resolution, broad-extent altimetry data for terrain and snowpack mapping, and allow an application of variogram fractal analysis techniques to characterize snow depth scaling properties over lag distances from 1 to 1000 meters. Snow depth patterns as measured by lidar at three Colorado mountain sites exhibit fractal (power law) scaling patterns over two distinct scale ranges, separated by a distinct break at the 15-40 m lag distance, depending on the site. Each fractal range represents a range of separation distances over which snow depth processes remain consistent. The scale break between fractal regions is a characteristic scale at which snow depth process relationships change fundamentally. 
Similar scale break distances in vegetation and topography datasets suggest that the snow depth scale break represents a change in wind redistribution processes from wind/vegetation interactions at small lags to wind/terrain interactions at larger lags. These snow depth scale characteristics are internally consistent, directly describe the scales of action of snow accumulation, redistribution, and ablation processes, and inform scale considerations for measurement and modeling. Snow process models are designed to represent processes acting over specific scale ranges. However, since the incorporated processes vary with scale, the model performance cannot be scale-independent. Thus, distributed snow models must represent the appropriate process interactions at each scale in order to produce reasonable simulations of snow depth or snow water equivalent (SWE) variability. By comparing fractal dimensions and scale break lengths of modeled snow depth patterns to those derived from lidar observations, the model process representations can be evaluated and subsequently refined. Snow depth simulations from the SnowModel seasonal snow process model exhibit fractal patterns, and a scale break can be produced by including a sub-model that simulates fine-scale wind drifting patterns. The fractal dimensions provide important spatial scaling information that can inform refinement of process representations. This collection of work provides a new application of methods developed in other geophysical fields for quantifying scale and variability relationships.
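The variogram-based fractal analysis described above can be sketched in a few lines. This is a minimal illustration, not the paper's processing chain: it assumes a regularly spaced 1-D depth profile, and it uses the relation D = 2 - H between a profile's fractal dimension and the log-log variogram slope 2H. The random-walk input is synthetic.

```python
import numpy as np

def empirical_variogram(z, max_lag):
    """Semivariance of a regularly spaced 1-D profile at integer lags."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
    return lags, gamma

def fractal_dimension(lags, gamma):
    """Fit log(gamma) = 2H * log(lag) + c; D = 2 - H for a 1-D profile."""
    slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
    return 2.0 - slope / 2.0

# Synthetic example: a random-walk "depth profile" (H ~ 0.5, so D ~ 1.5).
rng = np.random.default_rng(0)
z = np.cumsum(rng.standard_normal(5000))
lags, gamma = empirical_variogram(z, 100)
D = fractal_dimension(lags, gamma)
```

A scale break such as the 15-40 m one reported above would appear as two distinct linear segments in the log-log variogram, with the fit applied separately to each lag range.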

Deems, Jeffrey S.


Quantifying Collective Attention from Tweet Stream  

PubMed Central

Online social media are increasingly facilitating our social interactions, thereby making available a massive “digital fossil” of human behavior. Discovering and quantifying distinct patterns using these data is important for studying social behavior, although the rapid time-variant nature and large volumes of these data make this task difficult and challenging. In this study, we focused on the emergence of “collective attention” on Twitter, a popular social networking service. We propose a simple method for detecting and measuring the collective attention evoked by various types of events. This method exploits the fact that tweeting activity exhibits a burst-like increase and an irregular oscillation when a particular real-world event occurs; otherwise, it follows regular circadian rhythms. The difference between regular and irregular states in the tweet stream was measured using the Jensen-Shannon divergence, which corresponds to the intensity of collective attention. We then associated irregular incidents with their corresponding events that attracted the attention and elicited responses from large numbers of people, based on the popularity and the enhancement of key terms in posted messages or “tweets.” Next, we demonstrate the effectiveness of this method using a large dataset that contained approximately 490 million Japanese tweets by over 400,000 users, in which we identified 60 cases of collective attention, including one related to the Tohoku-oki earthquake. “Retweet” networks were also investigated to understand collective attention in terms of social interactions. This simple method provides a retrospective summary of collective attention, thereby contributing to the fundamental understanding of social behavior in the digital era.
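The core measurement step, comparing a regular and an irregular tweet-rate distribution with the Jensen-Shannon divergence, can be sketched as follows; the hourly counts and the event spike are invented for illustration.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2, so 0 <= JSD <= 1) between
    two discrete distributions given as count vectors."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hourly tweet counts: a regular circadian profile vs. the same profile
# with a burst-like spike after a hypothetical real-world event.
baseline = [5, 3, 2, 2, 3, 6, 12, 20, 25, 28, 30, 29,
            28, 27, 26, 25, 26, 28, 32, 35, 33, 25, 15, 8]
burst = list(baseline)
burst[14] += 300

intensity = js_divergence(baseline, burst)  # larger = stronger collective attention
```

A value near zero indicates the stream is following its usual rhythm; a large value flags a candidate collective-attention event for inspection.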

Sasahara, Kazutoshi; Hirata, Yoshito; Toyoda, Masashi; Kitsuregawa, Masaru; Aihara, Kazuyuki





A new model for quantifying climate episodes  

NASA Astrophysics Data System (ADS)

When long records of climate (precipitation, temperature, stream runoff, etc.) are available, either from instrumental observations or from proxy records, the objective evaluation and comparison of climatic episodes becomes necessary. Such episodes can be quantified in terms of duration (the number of time intervals, e.g. years, the process remains continuously above or below a reference level) and magnitude (the sum of all series values for a given duration). The joint distribution of duration and magnitude is represented here by a stochastic model called BEG, for bivariate distribution with exponential and geometric marginals. The model is based on the theory of random sums, and its mathematical derivation confirms and extends previous empirical findings. Probability statements that can be obtained from the model are illustrated by applying it to a 2300-year dendroclimatic reconstruction of water-year precipitation for the eastern Sierra Nevada-western Great Basin. Using the Dust Bowl drought period as an example, the chance of a longer or greater drought is 8%. Conditional probabilities are much higher, i.e. a drought of that magnitude has a 62% chance of lasting for 11 years or longer, and a drought that lasts 11 years has a 46% chance of having an equal or greater magnitude. In addition, because of the bivariate model, we can estimate a 6% chance of witnessing a drought that is both longer and greater. Additional examples of model application are also provided. This type of information provides a way to place any climatic episode in a temporal perspective, and such numerical statements help with reaching science-based management and policy decisions.
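The kind of probability statement quoted above can be reproduced by Monte Carlo under the BEG construction (geometric duration, magnitude a random sum of exponentials). The parameter values below are illustrative assumptions, not the fitted Sierra Nevada values.

```python
import numpy as np

# BEG sketch: duration N ~ Geometric(p); magnitude M = sum of N iid
# Exponential(mu) deficits, drawn here as Gamma(N, mu) because a sum of
# N exponentials is Gamma-distributed.  p, mu, n0, m0 are invented.
rng = np.random.default_rng(1)
p, mu, n_sim = 0.3, 1.0, 200_000
N = rng.geometric(p, n_sim)               # drought durations (years)
M = rng.gamma(shape=N, scale=mu)          # drought magnitudes

n0, m0 = 11, 15.0
p_joint = np.mean((N >= n0) & (M >= m0))  # longer AND greater
p_cond = np.mean(N[M >= m0] >= n0)        # longer, given at least as great
```

The closed-form BEG distribution gives these probabilities analytically; the simulation only illustrates how duration and magnitude are coupled through the random sum.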

Biondi, Franco; Kozubowski, Tomasz J.; Panorska, Anna K.



Constraining Habitable Environments on Mars by Quantifying Available Geochemical Energy  

NASA Astrophysics Data System (ADS)

The search for life on Mars includes the availability of liquid water, access to biogenic elements and an energy source. In the past, when water was more abundant on Mars, a source of energy may have been the limiting factor for potential life. Energy, either from photosynthesis or chemosynthesis, is required in order to drive metabolism. Potential martian organisms most likely took advantage of chemosynthetic reactions at and below the surface. Terrestrial chemolithoautotrophs, for example, thrive off of chemical disequilibrium that exists in many environments and use inorganic redox (reduction-oxidation) reactions to drive metabolism and create cellular biomass. The chemical disequilibrium of six different martian environments was modeled in this study and analyzed, incorporating a range of water and rock compositions, water:rock mass ratios, atmospheric fugacities, pH, and temperatures. All of these models can be applied to specific sites on Mars including environments similar to Meridiani Planum and Gusev Crater. Both a mass transfer geochemical model of groundwater-basalt interaction and a mixing model of groundwater-hydrothermal fluid interaction were used to estimate hypothetical martian fluid compositions that result from mixing over the entire reaction path. By determining the overall Gibbs free energy yields for redox reactions in the H-O-C-S-Fe-Mn system, the amount of geochemical energy that was available for potential chemolithoautotrophic microorganisms was quantified and the amount of biomass that could have been sustained was estimated. The quantity of biomass that can be formed and supported within a system depends on energy availability, thus sites that have higher levels and fluxes of energy have greater potential to support life. Results show that iron- and sulfur-oxidation reactions would have been the most favorable redox reactions in aqueous systems where groundwater and rock interacted at or near the surface. 
These types of reactions could have supported between 0.05 and 1.0 grams (dry weight) of biomass per mole of iron or sulfur. The hydrothermal environments would have had numerous redox reactions in the H-O-C-S-Fe-Mn system that could have provided sufficient metabolic energy for potential microorganisms. Methanotrophy, for example, provides the greatest amount of energy at ~760 kJ per mole of methane, which is equivalent to 0.6 grams (dry weight) of biomass. Additional results show that varying the amount of CO2 in the martian atmosphere or adjusting the water:rock ratios has little effect on the resulting Gibbs free energies. The martian values that are reported for available free energy in this study are similar to values that have been calculated for terrestrial systems in hydrothermal settings in which life is known to be abundant. In summary, the models indicate that martian aqueous environments were likely to have been habitable at a wide range of conditions when liquid water was more abundant and would have been able to supply a large amount of energy for potential organisms.
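The energy-to-biomass bookkeeping can be illustrated with a short calculation using ΔG = ΔG° + RT ln Q. Everything below is an assumption for illustration except the ratio quoted above (~0.6 g dry biomass per ~760 kJ from methanotrophy); the standard-state energy and activity quotient are invented.

```python
R = 8.314  # gas constant, J/(mol K)

def gibbs_energy(dG0_j_per_mol, temp_k, ln_Q):
    """Overall Gibbs free energy of reaction: dG = dG0 + R*T*ln(Q)."""
    return dG0_j_per_mol + R * temp_k * ln_Q

# Illustrative methanotrophy-like reaction (invented dG0 and ln Q):
dG = gibbs_energy(-800e3, 298.15, 5.0)   # J per mole CH4
energy_kj = -dG / 1000.0                 # usable energy, kJ/mol

# Scale by the ratio quoted in the abstract: ~0.6 g dry weight per ~760 kJ.
biomass_g = energy_kj * (0.6 / 760.0)
```

Only reactions with dG < 0 yield energy; a positive dG under the modeled fluid composition would rule the metabolism out for that environment.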

Tierney, L. L.; Jakosky, B. M.



Quantifying compositional impacts of ambient aerosol on cloud droplet formation  

NASA Astrophysics Data System (ADS)

It has been historically assumed that most of the uncertainty associated with the aerosol indirect effect on climate can be attributed to the unpredictability of updrafts. In Chapter 1, we analyze the sensitivity of cloud droplet number density to realistic variations in aerosol chemical properties and to variable updraft velocities using a 1-dimensional cloud parcel model in three important environmental cases (continental, polluted and remote marine). The results suggest that aerosol chemical variability may be as important to the aerosol indirect effect as the effect of unresolved cloud dynamics, especially in polluted environments. We next used a continuous flow streamwise thermal gradient Cloud Condensation Nuclei counter (CCNc) to study the water-uptake properties of the ambient aerosol, by exposing an aerosol sample to a controlled water vapor supersaturation and counting the resulting number of droplets. In Chapter 2, we modeled and experimentally characterized the heat transfer properties and droplet growth within the CCNc. Chapter 3 describes results from the MIRAGE field campaign, in which the CCNc and a Hygroscopicity Tandem Differential Mobility Analyzer (HTDMA) were deployed at a ground-based site during March, 2006. Size-resolved CCN activation spectra and growth factor distributions of the ambient aerosol in Mexico City were obtained, and an analytical technique was developed to quantify a probability distribution of solute volume fractions for the CCN in addition to the aerosol mixing-state. The CCN were shown to be much less CCN active than ammonium sulfate, with water uptake properties more consistent with low molecular weight organic compounds. The pollution outflow from Mexico City was shown to have CCN with an even lower fraction of soluble material. "Chemical Closure" was attained for the CCN, by comparing the inferred solute volume fraction with that from direct chemical measurements. 
A clear diurnal pattern was observed for the CCN solute volume fraction, showing that measurable aging of the aerosol population occurs during the day, on the timescale of a few hours. The mixing state of the aerosol, also showing a consistent diurnal pattern, clearly correlates with a chemical tracer for local combustion sources. Chapter 4 describes results from the GoMACCS field study, in which the CCNc was subsequently deployed on an airborne field campaign in Houston, Texas during August-September, 2006. GoMACCS tested our ability to predict CCN for highly polluted conditions with limited chemical information. Assuming the particles were composed purely of ammonium sulfate, CCN closure was obtained with a 10% overprediction bias on average for CCN concentrations ranging from less than 100 cm-3 to over 10,000 cm-3, but with on average 50% variability. Assuming measured concentrations of organics to be internally mixed and insoluble tended to reduce the overprediction bias for less polluted conditions, but led to underprediction bias in the most polluted conditions. A likely explanation is that the high organic concentrations in the polluted environments depress the surface tension of the droplets, thereby enabling activation at lower soluble fractions.

Lance, Sara


Silicon nanowire detectors showing phototransistive gain  

NASA Astrophysics Data System (ADS)

Nanowire photodetectors are shown to function as phototransistors with high sensitivity. Due to small lateral dimensions, a nanowire detector can have low dark current while showing large phototransistive gain. Planar and vertical silicon nanowire photodetectors fabricated in a top-down approach using an etching process show a phototransistive gain above 35 000 at low light intensities. Simulations show that incident light can be waveguided into vertical nanowires resulting in up to 40 times greater external quantum efficiency above their physical fill factor. Vertical silicon nanowire phototransistors formed by etching are attractive for low light level detection and for integration with silicon electronics.

Zhang, Arthur; You, Sifang; Soci, Cesare; Liu, Yisi; Wang, Deli; Lo, Yu-Hwa



Quantifying the threat of extinction from Muller's ratchet in the diploid Amazon molly (Poecilia formosa)  

PubMed Central

Background The Amazon molly (Poecilia formosa) is a small unisexual fish that has been suspected of being threatened by extinction from the stochastic accumulation of slightly deleterious mutations that is caused by Muller's ratchet in non-recombining populations. However, no detailed quantification of the extent of this threat is available. Results Here we quantify genomic decay in this fish by using a simple model of Muller's ratchet with the most realistic parameter combinations available, employing the evolution@home global computing system. We also describe simple extensions of the standard model of Muller's ratchet that allow us to deal with selfing diploids, triploids and mitotic recombination. We show that Muller's ratchet creates a threat of extinction for the Amazon molly for many biologically realistic parameter combinations. In most cases, extinction is expected to occur within a time frame that is less than previous estimates of the age of the species, leading to a genomic decay paradox. Conclusion How then does the Amazon molly survive? Several biological processes could individually or in combination solve this genomic decay paradox, including paternal leakage of undamaged DNA from sexual sister species, compensatory mutations and many others. More research is needed to quantify the contribution of these potential solutions towards the survival of the Amazon molly and other (ancient) asexual species.
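A minimal Wright-Fisher version of Muller's ratchet, far simpler than the evolution@home models used in the work above, still shows the characteristic one-way loss of the least-loaded mutation class. Population size, genomic mutation rate and selection coefficient below are arbitrary illustration values, not the paper's parameter combinations.

```python
import numpy as np

def ratchet_clicks(pop_size=500, u=0.5, s=0.02, generations=2000, seed=0):
    """Minimal Muller's ratchet sketch: asexual population, Poisson new
    mutations (mean u per genome per generation), multiplicative selection
    with fitness (1-s)**k.  Returns the least-loaded class over time."""
    rng = np.random.default_rng(seed)
    k = np.zeros(pop_size, dtype=int)   # deleterious mutations per individual
    best = []
    for _ in range(generations):
        w = (1.0 - s) ** k                                   # fitness
        parents = rng.choice(pop_size, pop_size, p=w / w.sum())
        k = k[parents] + rng.poisson(u, pop_size)            # inherit + mutate
        best.append(int(k.min()))
    return best

best = ratchet_clicks()
```

With these parameters the equilibrium size of the mutation-free class, N·exp(-u/s), is far below one individual, so the ratchet clicks rapidly; without back mutation or recombination the minimum load can never decrease.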



Comparative study of two chromatographic methods for quantifying 2,4,6-trichloroanisole in wines.  


Here we present the validation and the comparative study of two chromatographic methods for quantifying 2,4,6-trichloroanisole (TCA) in wines (red, rosé and white wines). The first method involves headspace solid-phase microextraction and gas chromatography with electron-capture detection (ECD). The evaluation of the performance parameters shows a limit of detection of 0.3 ng l(-1), a limit of quantification of 1.0 ng l(-1), recoveries around 100% and repeatability of 10%. The second one involves headspace solid-phase microextraction and gas chromatography with mass spectrometric detection. The performance parameters of this second method are a limit of detection of 0.2 ng l(-1), a limit of quantification of 0.8 ng l(-1) and repeatability of 10.1%. From the comparative study we can state that both methods provide similar results; the differences between them are the better sensitivity of the GC-ECD method and the much shorter chromatogram run time of the GC-MS method. The two methods are able to quantify TCA below the sensory threshold in red, rosé and white wines using just a calibration graph, so they could be a very good tool for quality control in wineries. PMID:17109869
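The final quantification step, reading a TCA concentration off a calibration graph, is ordinary inverse linear regression. The spiked levels and detector responses below are invented, not the paper's validation data.

```python
import numpy as np

# Calibration-graph sketch: spiked TCA levels (ng/L, hypothetical) vs.
# instrument response (arbitrary units, e.g. a GC-ECD peak area).
levels = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
response = np.array([0.9, 2.1, 5.2, 9.8, 20.3])

slope, intercept = np.polyfit(levels, response, 1)

def quantify(peak_area):
    """Invert the calibration line to estimate concentration in ng/L."""
    return (peak_area - intercept) / slope

c = quantify(4.0)  # estimated concentration for an unknown sample
```

In practice the unknown's response must fall inside the calibrated range, and results below the limit of quantification (1.0 and 0.8 ng/L for the two methods above) would be reported as such rather than as numbers.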

Riu, M; Mestres, M; Busto, O; Guasch, J



A synthetic phased array surface acoustic wave sensor for quantifying bolt tension.  


In this paper, we report our findings on implementing a synthetic phased array surface acoustic wave sensor to quantify bolt tension. Maintaining proper bolt tension is important in many fields, such as ensuring the safe operation of civil infrastructure. Significant advantages of this relatively simple methodology are its ability to assess bolt tension without any contact with the bolt (enabling measurement at inaccessible locations), its ability to measure multiple bolts at a time, and the fact that it requires neither data collection during installation nor any calibration. We performed detailed experiments on a custom-built flexible bench-top experimental setup consisting of a 1018 steel plate of 12.7 mm (½ in) thickness, a 6.4 mm (¼ in) grade 8 bolt and a stainless steel washer with 19 mm (¾ in) external diameter. Our results indicate that this method is not only capable of clearly distinguishing properly bolted joints from loosened joints but is also capable of quantifying how loose the bolt actually is. We also conducted a detailed signal-to-noise ratio (SNR) analysis and showed that the SNR value over the entire bolt tension range was sufficient for image reconstruction. PMID:23112711
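The SNR analysis mentioned above reduces to comparing RMS amplitudes of a signal window and a noise window; a minimal helper (with made-up amplitudes) might look like this.

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels from RMS amplitudes."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(signal) / rms(noise))

# e.g. an echo with 10x the RMS of the noise floor gives 20 dB
snr = snr_db(np.full(1000, 1.0), np.full(1000, 0.1))
```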

Martinez, Jairo; Sisman, Alper; Onen, Onursal; Velasquez, Dean; Guldiken, Rasim



Quantifying Mountain Block Recharge by Means of Catchment-Scale Storage-Discharge Relationships  

NASA Astrophysics Data System (ADS)

Despite the hydrologic significance of mountainous catchments in providing freshwater resources, especially in semi-arid regions, little is known about key hydrological processes in these systems, such as mountain block recharge (MBR). We developed an empirical approach based on the storage sensitivity function introduced by Kirchner (2009) to derive storage-discharge relationships from stream flow analysis. We investigated the sensitivity of MBR estimates to uncertainty in the derivation of the catchment storage-discharge relations. We implemented this technique in a semi-arid mountainous catchment in south-east Arizona, USA (the Marshall Gulch catchment in the Santa Catalina Mountains near Tucson) with two distinct rainy seasons, winter frontal storms and summer monsoon, separated by prolonged dry periods. Developing the storage-discharge relation from baseflow data in the dry period allowed us to quantify the change in fractured-bedrock storage caused by MBR. The contribution of fractured bedrock to stream flow was confirmed using stable isotope data. Our results show that 1) incorporating scalable time steps to correct for stream flow measurement errors improves the model fit; 2) the quantile method is more suitable for stream flow data binning; 3) the choice of the regression model is more critical when the storage-discharge function is used to predict changes in bedrock storage beyond the maximum observed flow in the catchment and 4) application of daily versus hourly flow did not affect the storage-discharge relationship. This methodology allowed quantifying MBR using stream flow recession analysis from within the mountain system.
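The storage-sensitivity idea can be sketched with a synthetic recession. In rainless periods dS/dt = -Q, so the sensitivity g(Q) = dQ/dS can be estimated from streamflow as -(dQ/dt)/Q; the recession curve below is invented (a quadratic-reservoir shape), not Marshall Gulch data.

```python
import numpy as np

# Synthetic baseflow recession (mm/day) over 100 rainless days.
t = np.arange(0.0, 100.0, 1.0)
Q = 10.0 / (1.0 + 0.05 * t) ** 2

# Storage sensitivity estimate from the recession: g(Q) = -(dQ/dt)/Q.
g = -np.gradient(Q, t) / Q

# Characterize g(Q) as a power law: ln g = b*ln Q + a.
b, a = np.polyfit(np.log(Q), np.log(g), 1)
```

Binning the g(Q) estimates by flow quantiles before regression, as the abstract recommends, reduces the influence of measurement noise at low flows; integrating dQ/g(Q) between two flow levels then gives the corresponding storage change.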

Ajami, H.; Troch, P. A.; Maddock, T.; Meixner, T.; Eastoe, C. J.



Quantifying Relative Diver Effects in Underwater Visual Censuses  

PubMed Central

Diver-based Underwater Visual Censuses (UVCs), particularly transect-based surveys, are key tools in the study of coral reef fish ecology. These techniques, however, have inherent problems that make it difficult to collect accurate numerical data. One of these problems is the diver effect (defined as the reaction of fish to a diver). Although widely recognised, its effects have yet to be quantified and the extent of taxonomic variation remains to be determined. We therefore examined relative diver effects on a reef fish assemblage on the Great Barrier Reef. Using common UVC methods, the recorded abundances of seven reef fish groups were significantly affected by the ongoing presence of SCUBA divers. Overall, the diver effect resulted in a 52% decrease in the mean number of individuals recorded, with declines of up to 70% in individual families. Although the diver effect appears to be a significant problem, UVCs remain a useful approach for quantifying spatial and temporal variation in relative fish abundances, especially if using methods that minimise the exposure of fishes to divers. Fixed distance transects using tapes or lines deployed by a second diver (or GPS-calibrated timed swims) would appear to maximise fish counts and minimise diver effects.

Dickens, Luke C.; Goatley, Christopher H. R.; Tanner, Jennifer K.; Bellwood, David R.



Quantifying serum antibody in bird fanciers' hypersensitivity pneumonitis  

PubMed Central

Background Detecting serum antibody against inhaled antigens is an important diagnostic adjunct for hypersensitivity pneumonitis (HP). We sought to validate a quantitative fluorimetric assay testing serum from bird fanciers. Methods Antibody activity was assessed in bird fanciers and control subjects using various avian antigens and serological methods, and the titer was compared with symptoms of HP. Results IgG antibody against pigeon serum antigens, quantified by fluorimetry, provided a good discriminator of disease. Levels below 10 mg/L were insignificant, and increasing titers were associated with disease. The assay was unaffected by total IgG, autoantibodies and antibody to dietary hen's egg antigens. Antigens from pigeon serum seem sufficient to recognize immune sensitivity to most common pet avian species. Decreasing antibody titers confirmed antigen avoidance. Conclusion Increasing antibody titer reflected the likelihood of HP, and decreasing titers confirmed antigen avoidance. Quantifying antibody was rapid and the increased sensitivity will improve the rate of false-negative reporting and obviate the need for invasive diagnostic procedures. Automated fluorimetry provides a method for the international standardization of HP serology thereby improving quality control and improving its suitability as a diagnostic adjunct.

McSharry, Charles; Dye, George M; Ismail, Tengku; Anderson, Kenneth; Spiers, Elizabeth M; Boyd, Gavin



Quantifying the functional rehabilitation of injured football players  

PubMed Central

Objective To determine whether quantified, auditable records of functional rehabilitation can be generated using subjective assessments of players' performance in fitness tests routinely used in professional football. Method Ten sequential test elements grouped into three phases (fitness, ball and match skills, match pace football) were used to monitor players' functional recovery from injury. Physiotherapists subjectively assessed players' performance in each test element using a six point subjective rating scale. Satisfactory performance in each element of the assessment programme added 10% to the injured player's recovery score. Daily recovery scores for injured players were recorded against the time spent in functional rehabilitation. Results Rehabilitation data for 118 injuries sustained by 55 players over two seasons were recorded. The average time in functional rehabilitation depended on the time spent in pre-functional rehabilitation and the nature and location of injury. Benchmark functional rehabilitation curves (y = m ln(x) + c) were developed for thigh (n = 15) and lower leg (n = 8) muscle strains and knee (n = 7) and ankle (n = 9) ligament sprains (R² = 0.95–0.98). Conclusions A structured, quantified rehabilitation programme based on routine fitness and skills exercises and a graded subjective assessment of performance provides an auditable record of a player's functional recovery from a range of lower limb injuries and a transparent exit point from rehabilitation. The proposed method provides a permanent record of the functional rehabilitation of players' injuries and evidence based data to support management's return to play decisions.
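Fitting a benchmark curve of the form y = m ln(x) + c is a linear least-squares problem in ln(x). The day/score pairs below are invented for illustration, not the study's rehabilitation data.

```python
import numpy as np

# Hypothetical daily recovery scores: x = days in functional rehabilitation,
# y = % recovery (each completed test element adds 10%).
days = np.array([1, 2, 3, 5, 7, 10, 14])
score = np.array([10, 32, 45, 60, 72, 85, 100])

# Fit y = m*ln(x) + c by ordinary least squares on ln(x).
m, c = np.polyfit(np.log(days), score, 1)

fit = m * np.log(days) + c
r2 = 1 - np.sum((score - fit) ** 2) / np.sum((score - score.mean()) ** 2)
```

An individual player's daily scores can then be plotted against the benchmark curve for his injury type to flag unusually slow or fast recoveries.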

Fuller, C W; Walker, J



Methods for quantifying uncertainty in fast reactor analyses.  

SciTech Connect

Liquid-metal-cooled fast reactors in the form of sodium-cooled fast reactors have been successfully built and tested in the U.S. and throughout the world. However, no fast reactor has operated in the U.S. for nearly fourteen years. More importantly, the U.S. has not constructed a fast reactor in nearly 30 years. In addition to reestablishing the necessary industrial infrastructure, the development, testing, and licensing of a new, advanced fast reactor concept will likely require a significant base technology program that will rely more heavily on modeling and simulation than has been done in the past. The ability to quantify uncertainty in modeling and simulations will be an important part of any experimental program and can provide added confidence that established design limits and safety margins are appropriate. In addition, there is an increasing demand from the nuclear industry for best-estimate analysis methods to provide confidence bounds along with their results. The ability to quantify uncertainty will be an important component of modeling that is used to support design, testing, and experimental programs. Three avenues of uncertainty quantification (UQ) investigation are proposed. Two relatively new approaches are described which can be directly coupled to simulation codes currently being developed under the Advanced Simulation and Modeling program within the Reactor Campaign. A third approach, based on robust Monte Carlo methods, can be used in conjunction with existing reactor analysis codes as a means of verification and validation of the more detailed approaches.

Fanning, T. H.; Fischer, P. F.



Quantifying The Information Content In The GPS Slant Path Delays  

NASA Astrophysics Data System (ADS)

We quantify the amount of information on atmospheric water vapor that can be extracted from the GPS line of sight measurement, by comparing these measurements to those made with a collocated pointed water vapor radiometer (WVR). We distinguish between the zero order, first order, and higher order variability of the water vapor distribution, as evident from the GPS line of sight observations. We show that while the GPS observations are capable of capturing the zero and first order (gradient) distribution of water vapor with high accuracy as compared to the WVR-based observations, the higher order information is not captured well and is mostly buried in noise. We conclude that most observations of slant path delays to date are merely representations of the delay gradients. This fact significantly limits the capability of standard GPS receivers to support atmospheric tomography.
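The zero- and first-order (gradient) decomposition can be sketched as a small least-squares problem: each slant delay is modeled as mapping(el) × (zenith delay + cot(el) × (gN cos az + gE sin az)). The geometry, the simple 1/sin(el) mapping function, and the delay values below are all synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
el = np.deg2rad(rng.uniform(15, 85, 50))   # satellite elevation angles
az = rng.uniform(0, 2 * np.pi, 50)         # satellite azimuths
mf = 1.0 / np.sin(el)                      # simple wet mapping function

# Synthetic "truth" (meters): zenith delay plus a north/east gradient.
ztd_true, gN_true, gE_true = 0.15, 0.002, -0.001
cot = 1.0 / np.tan(el)
slant = mf * (ztd_true + cot * (gN_true * np.cos(az) + gE_true * np.sin(az)))

# Least-squares recovery of the zero- and first-order terms.
A = np.column_stack([mf, mf * cot * np.cos(az), mf * cot * np.sin(az)])
ztd, gN, gE = np.linalg.lstsq(A, slant, rcond=None)[0]
```

The residuals slant - A @ [ztd, gN, gE] are the higher-order signal that, per the abstract, is mostly buried in noise for real receivers.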

Bar-Sever, Y.; Emardson, R.


Quantifying spin Hall angles from spin pumping : experiments and theory.  

SciTech Connect

Spin Hall effects intermix spin and charge currents even in nonmagnetic materials and, therefore, ultimately may allow the use of spin transport without the need for ferromagnets. We show how spin Hall effects can be quantified by integrating Ni80Fe20|normal metal (N) bilayers into a coplanar waveguide. A dc spin current in N can be generated by spin pumping in a controllable way by ferromagnetic resonance. The transverse dc voltage detected along the Ni80Fe20|N bilayer has contributions from both the anisotropic magnetoresistance and the spin Hall effect, which can be distinguished by their symmetries. We developed a theory that accounts for both. In this way, we determine the spin Hall angle quantitatively for Pt, Au, and Mo. This approach can readily be adapted to any conducting material with even very small spin Hall angles.

Mosendz, O.; Pearson, J. E.; Fradin, F. Y.; Bauer, G. E. W.; Bader, S. D.; Hoffmann, A.; Delft Univ. of Technology



Quantifying the Magnitude of Anomalous Solar Absorption  

SciTech Connect

The data set from ARESE II, sponsored by the Atmospheric Radiation Measurement Program, provides a unique opportunity to understand solar absorption in the atmosphere because of the combination of three sets of broadband solar radiometers mounted on the Twin Otter aircraft and the ground based instruments at the ARM Southern Great Plains facility. In this study, we analyze the measurements taken on two clear sky days and three cloudy days and model the solar radiative transfer in each case with two different models. On the two clear days, the calculated and measured column absorptions agree to better than 10 Wm-2, which is about 10% of the total column absorption. Because both the model fluxes and the individual radiometer measurements are accurate to no better than 10 Wm-2, we conclude that the models and measurements are essentially in agreement. For the three cloudy days, the model calculations agree very well with each other and on two of the three days agree with the measurements to 20 Wm-2 or less out of a total column absorption of more than 200 Wm-2, which is again agreement at better than 10%. On the third day, the model and measurements agree to either 8% or 14% depending on which value of surface albedo is used. Differences exceeding 10% represent a significant absorption difference between model and observations. In addition to the uncertainty in absorption due to surface albedo, we show that including aerosol with an optical depth similar to that found on clear days can reduce the difference between model and measurement by 5% or more. Thus, we conclude that the ARESE II results are incompatible with previous studies reporting extreme anomalous absorption and can be modeled with our current understanding of radiative transfer.

Ackerman, Thomas P.; Flynn, Donna M.; Marchand, Roger T.



Quantifying Recent Changes in Earth's Radiation Budget  

NASA Astrophysics Data System (ADS)

The radiative energy balance between the solar or shortwave (SW) radiation absorbed by Earth and the thermal infrared or longwave (LW) radiation emitted back to space is fundamental to climate. An increase in the net radiative flux into the system (e.g., due to external forcing) is primarily stored as heat in the ocean, and can resurface at a later time to affect weather and climate on a global scale. The associated changes in the components of the Earth-atmosphere such as clouds, the surface and the atmosphere further alter the radiative balance, leading to further changes in weather and climate. Observations from instruments aboard Aqua and other satellites clearly show large interannual and decadal variability in the Earth's radiation budget associated with the major modes of climate variability (e.g., ENSO, NAO, etc.). We present results from CERES regarding variations in the net radiation imbalance of the planet during the past decade, comparing them with independent estimates of ocean heating rates derived from in-situ observations of ocean heat content. We combine these two data sets to calculate that during the past decade Earth has been accumulating energy at the rate 0.54±0.43 Wm-2, suggesting that while Earth's surface has not warmed significantly during the 2000s, energy is continuing to accumulate in the sub-surface ocean. Our observations do not support previous claims of "missing energy" in the system. We exploit data from other instruments such as MODIS, AIRS, CALIPSO and CloudSat to examine how clouds and atmospheric temperature/humidity vary both at regional and global scales during ENSO events. Finally, we present a revised representation of the global mean Earth radiation budget derived from gridded monthly mean TOA and surface radiative fluxes (EBAF-TOA and EBAF-SFC) that are based on a radiative assimilation analysis of observations from Aqua, Terra, geostationary satellites, CALIPSO and CloudSat.

Loeb, N. G.; Kato, S.; Lyman, J. M.; Johnson, G. C.; Doelling, D.; Wong, T.; Allan, R.; Soden, B. J.; Stephens, G. L.



Aortic function quantified: the heart's essential cushion.  


Arterial compliance is mainly determined by the elasticity of proximal large-conduit arteries, of which the aorta is the largest contributor. Compliance forms an important part of the cardiac load and plays a role in organ (especially coronary) perfusion. To follow local changes in aortic compliance, as in aging, noninvasive determination of the compliance distribution would be of great value. Our goal is to determine regional aortic compliance noninvasively in humans. In seven healthy individuals, aortic blood flow and the systolic-diastolic area change (ΔA) were measured with MRI at six locations. Simultaneously, brachial pulse pressure (ΔP) was measured with a standard cuff. With a transfer function we derived ΔP at the same aortic locations as the MRI measurements. Regional aortic compliance was calculated with two approaches: the pulse pressure method, and local area compliance (ΔA/ΔP) times segment length, called the area compliance method. For comparison, pulse wave velocity (PWV) was determined from local flows at two locations, and compliance was derived from PWV. Both approaches show that compliance is largest in the proximal aorta and decreases toward the distal aorta. Similar results were found with PWV-derived compliance. Of total arterial compliance, the ascending aorta to distal arch (segments 1-3) contributes 40% (of which 15% is in the head and arms), the descending aorta (segments 4 and 5) 25%, and the "hip, pelvic and leg arteries" 20%. The pulse pressure method includes the compliance of side branches and therefore yields larger values than the area compliance method. Regional aortic compliance can thus be obtained noninvasively, allowing changes in local compliance with age and cardiovascular disease to be followed. PMID:22936729
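As a rough numerical sketch of the two quantities involved, the area compliance method (ΔA/ΔP times segment length) and a PWV-derived compliance via the standard Bramwell-Hill relation can be written as follows. The unit conversions, the blood density of 1060 kg/m³, and the example numbers are textbook assumptions, not values taken from this study.

```python
def area_compliance(delta_A_cm2, delta_P_mmHg, segment_length_cm):
    """Area compliance method: regional compliance = (dA/dP) * segment length.

    Returns cm^3/mmHg for inputs in cm^2, mmHg, and cm.
    """
    return (delta_A_cm2 / delta_P_mmHg) * segment_length_cm

def compliance_from_pwv(area_cm2, pwv_m_s, length_cm, rho=1060.0):
    """Bramwell-Hill relation: local area compliance C_A = A / (rho * PWV^2).

    Multiplied by segment length and converted to cm^3/mmHg for comparison
    with the area compliance method. rho is blood density in kg/m^3.
    """
    area_m2 = area_cm2 * 1e-4
    length_m = length_cm * 1e-2
    c_per_m = area_m2 / (rho * pwv_m_s**2)   # m^2/Pa (compliance per metre)
    c_si = c_per_m * length_m                 # m^3/Pa for the whole segment
    return c_si * 133.322 * 1e6               # -> cm^3/mmHg

# Illustrative numbers: a 10 cm aortic segment, 5 cm^2 lumen, PWV of 5 m/s
c_area = area_compliance(0.5, 40.0, 10.0)      # ~0.125 cm^3/mmHg
c_pwv = compliance_from_pwv(5.0, 5.0, 10.0)    # ~0.25 cm^3/mmHg
```

Both routes give values of a few tenths of a cm³/mmHg per proximal segment, consistent in order of magnitude with total arterial compliance on the order of 1-2 cm³/mmHg.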

Saouti, Nabil; Marcus, J Tim; Vonk Noordegraaf, Anton; Westerhof, Nico



Molecular Marker Approach on Characterizing and Quantifying Charcoal in Environmental Media  

NASA Astrophysics Data System (ADS)

Black carbon (BC) is widely distributed in natural environments including soils, sediments, freshwater, seawater and the atmosphere. It is produced mostly from the incomplete combustion of fossil fuels and vegetation. In recent years, increasing attention has been given to BC due to its potential influence on many biogeochemical processes. In the environment, BC exists as a continuum ranging from partly charred plant materials and charcoal residues to highly condensed soot and graphite particles. The heterogeneous nature of black carbon means that BC is always operationally defined, highlighting the need for standard methods that support data comparisons. Unlike soot and graphite, which can be quantified with well-established methods, charcoal is difficult to quantify directly in geologic media due to its chemical and physical heterogeneity. Most of the available charcoal quantification methods detect unknown fractions of the BC continuum. To specifically identify and quantify charcoal in soils and sediments, we adopted and validated an innovative molecular marker approach that quantifies levoglucosan, a pyrogenic derivative of cellulose, as a proxy for charcoal. Levoglucosan is source-specific, stable, and detectable at low concentrations by gas chromatography-mass spectrometry (GC-MS). In the present study, two plant species, honey mesquite and cordgrass, were selected as the raw materials to synthesize charcoals. The lab-synthesized charcoals were made under controlled conditions to eliminate the high heterogeneity often found in natural charcoals. The effects of two major combustion factors, temperature and duration, on the yield of levoglucosan were characterized in the lab-synthesized charcoals. Our results showed that significant levoglucosan production in the two types of charcoal was restricted to relatively low combustion temperatures (150-350 °C). The combustion duration did not cause significant differences in the yield of levoglucosan in the two charcoals. Interestingly, the low-temperature charcoals are undetectable by the acid dichromate oxidation method, a popular soot/charcoal analytical approach. Our study demonstrates that levoglucosan can serve as a proxy for low-temperature charcoals that are undetectable by other BC methods. Moreover, it highlights the limitations of the common BC quantification methods for characterizing the entire BC continuum.

Kuo, L.; Herbert, B. E.; Louchouarn, P.



Quantifying distributed damage in composites via the thermoelastic effect  

SciTech Connect

A new approach toward quantifying transverse matrix cracking in composite laminates using the thermoelastic effect is developed. The thermoelastic effect refers to the small temperature changes that are generated in components under dynamic loading. Two models are derived, and the theoretical predictions are experimentally verified for three types of laminates. Both models include damage-induced changes in the lamina stress state, lamina coefficients of thermal expansion, conduction effects, and epoxy thickness. The first model relates changes in the laminate TSA signal to changes in longitudinal laminate stiffness and Poisson's ratio. This model is based on gross simplifying assumptions and can be used on any composite laminate layup undergoing transverse matrix cracking. The second model relates TSA signal changes to longitudinal laminate stiffness, Poisson's ratio, and microcrack density for (0_p/90_q)_s and (90_q/0_p)_s cross-ply laminates. Both models yield virtually identical results for the cross-ply laminates considered. A sensitivity analysis is performed on both models to quantify the effects of reasonable property variations on the normalized stiffness vs. normalized TSA signal results for the three laminates under consideration. The results for the cross-ply laminates are very insensitive, while the (±45)_5s results are particularly sensitive to epoxy thickness and longitudinal lamina coefficient of thermal expansion. Experiments are conducted on (0_3/90_3)_s and (90_3/0_3)_s Gl/Ep laminates and (±45)_5s Gr/Ep laminates to confirm the theoretical developments of the thesis. There is very good correlation between the theoretical predictions and experimental results for the Gl/Ep laminates.

Mahoney, B.J.



Olaparib shows promise in multiple tumor types.  


A phase II study of the PARP inhibitor olaparib (AstraZeneca) for cancer patients with inherited BRCA1 and BRCA2 gene mutations confirmed earlier results showing clinical benefit for advanced breast and ovarian cancers, and demonstrated evidence of effectiveness against pancreatic and prostate cancers. PMID:23847380



Quantifying uncertainty in earthquake rupture models  

NASA Astrophysics Data System (ADS)

Using dynamic and kinematic models, we analyze the ability of GPS and strong-motion data to recover the rupture history of earthquakes. By analyzing the near-source ground-motion generated by earthquake ruptures through barriers and asperities, we determine that both the prestress and yield stress of a frictional inhomogeneity can be recovered. In addition, we find that models with constraints on rupture velocity have less ground motion than constraint-free, spontaneous dynamic models with equivalent stress drops. This suggests that kinematic models with such constraints overestimate the actual stress heterogeneity of earthquakes. We use GPS data from the well-recorded 2004 Mw6.0 Parkfield Earthquake to further probe uncertainties in kinematic models. We find that the inversion for this data set is poorly resolved at depth and near the edges of the fault. In such an underdetermined inversion, it is possible to obtain spurious structure in poorly resolved areas. We demonstrate that a nonuniform grid with grid spacing matching the local resolution length on the fault outperforms small uniform grids, which generate spurious structure in poorly resolved regions, and large uniform grids, which lose recoverable information in well-resolved areas of the fault. The nonuniform grid correctly averages out large-scale structure in poorly resolved areas while recovering small-scale structure near the surface. In addition to probing model uncertainties in earthquake source models, we also examine the effect of model uncertainty in Probabilistic Seismic Hazard Analysis (PSHA). While methods for incorporating parameter uncertainty of a particular model in PSHA are well-understood, methods for incorporating model uncertainty are more difficult to implement due to the high degree of dependence between different earthquake-recurrence models. 
We show that the method used by the 2002 Working Group on California Earthquake Probabilities (WGCEP-2002) to combine the probability distributions given by multiple earthquake recurrence models has several adverse effects on their result. In particular, WGCEP-2002 uses a linear combination of the models which ignores model dependence and leads to large uncertainty in the final hazard estimate. In addition to analyzing current statistical problems, we present alternative methods for rigorously incorporating model uncertainty into PSHA.

Page, Morgan T.


Quantifying circular-linear associations: hippocampal phase precession.  


When a rat crosses the place field of a hippocampal pyramidal cell, this cell typically fires a series of spikes. Spike phases, measured with respect to theta oscillations of the local field potential, on average decrease as a function of the spatial distance traveled. This relation between the phase and position of spikes might be a neural basis for encoding and is called phase precession. The degree of association between the circular phase variable and the linear spatial variable is commonly quantified, however, through a linear-linear correlation coefficient, where the circular variable is converted to a linear one by restricting the phase to an arbitrarily chosen range, which may bias the estimated correlation. Here we introduce a new measure to quantify circular-linear associations. This measure leads to a robust estimate of the slope and phase offset of the regression line, and it provides a correlation coefficient for circular-linear data that is a natural analog of Pearson's product-moment correlation coefficient for linear-linear data. Using surrogate data, we show that the new method outperforms the standard linear-linear approach with respect to estimates of the regression line and the correlation, and that the new method is less dependent on noise and sample size. We confirm these findings in a large data set of experimental recordings from hippocampal place cells and theta oscillations, and we discuss remaining problems that are relevant for the analysis and interpretation of phase precession. In summary, we provide a new method for the quantification of circular-linear associations. PMID:22487609
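A minimal sketch of a circular-linear fit in this spirit: pick the slope that maximizes the mean resultant length of the phase residuals, then compute a circular-circular correlation between the measured phases and the fitted phases. This is an illustrative reimplementation based on the abstract's description (grid search, slope range, and noise model are assumptions), not the authors' published code.

```python
import numpy as np

def circ_lin_fit(x, phi, slope_range=(-2.0, 2.0), n_grid=4001):
    """Fit phi ~ 2*pi*a*x + phi0 (mod 2*pi).

    Returns the slope a (cycles per unit of x), the phase offset phi0,
    and a circular-circular correlation rho between data and fit.
    """
    a_grid = np.linspace(*slope_range, n_grid)
    # Mean resultant length of the phase residuals for each candidate slope
    resid = phi[None, :] - 2 * np.pi * a_grid[:, None] * x[None, :]
    R = np.abs(np.exp(1j * resid).mean(axis=1))
    a = a_grid[np.argmax(R)]
    phi0 = np.angle(np.exp(1j * (phi - 2 * np.pi * a * x)).mean())
    # Circular-circular correlation between phi and the fitted phase
    theta = (2 * np.pi * np.abs(a) * x) % (2 * np.pi)
    pb = np.angle(np.exp(1j * phi).mean())
    tb = np.angle(np.exp(1j * theta).mean())
    num = np.sum(np.sin(phi - pb) * np.sin(theta - tb))
    den = np.sqrt(np.sum(np.sin(phi - pb) ** 2) * np.sum(np.sin(theta - tb) ** 2))
    return a, phi0, num / den

# Synthetic phase-precession data with a known negative slope
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 300)
phi = (2 * np.pi * (-0.5) * x + 1.0 + rng.normal(0.0, 0.3, 300)) % (2 * np.pi)
a_hat, phi0_hat, rho = circ_lin_fit(x, phi)
```

Because the residual is evaluated on the circle, no arbitrary phase range has to be chosen, which is exactly the bias the abstract attributes to the linear-linear shortcut.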

Kempter, Richard; Leibold, Christian; Buzsáki, György; Diba, Kamran; Schmidt, Robert



Nerve strain correlates with structural changes quantified by fourier analysis.  


Introduction: Nerve deformation affects physiological function. Bands of Fontana are an optical manifestation of axonal undulations and may provide a structural indicator of nerve strain. Methods: We developed an automated Fourier-based image processing method to quantify the periodicity of bands of Fontana, both in bright-field images and in axonal undulations in immunolabeled longitudinal sections. Results: We found a strong linear relationship between applied strain and the frequency of bands of Fontana in rat sciatic nerves (-0.0056 μm⁻¹ %⁻¹, r² = 0.829; P…

Love, James M; Chuang, Ting-Hsien; Lieber, Richard L; Shah, Sameer B



Identifying and quantifying interactions in a laboratory swarm  

NASA Astrophysics Data System (ADS)

Emergent collective behavior, such as in flocks of birds or swarms of bees, is exhibited throughout the animal kingdom. Many models have been developed to describe swarming and flocking behavior using systems of self-propelled particles obeying simple rules or interacting via various potentials. However, due to experimental difficulties and constraints, little empirical data exists for characterizing the exact form of the biological interactions. We study laboratory swarms of flying Chironomus riparius midges, using stereoimaging and particle tracking techniques to record three-dimensional trajectories for all the individuals in the swarm. We describe methods to identify and quantify interactions by examining these trajectories, and report results on interaction magnitude, frequency, and mutuality.

Puckett, James G.; Kelley, Douglas H.; Ouellette, Nicholas T.



Quantifying Qualitative Data Using Cognitive Maps  

ERIC Educational Resources Information Center

The aim of the article is to show how substantial qualitative material consisting of graphic cognitive maps can be analysed by using digital CmapTools, Excel and SPSS. Evidence is provided of how qualitative and quantitative methods can be combined in educational research by transforming qualitative data into quantitative data to facilitate…

Scherp, Hans-Ake



Quantifying qualitative data using cognitive maps  

Microsoft Academic Search

The aim of the article is to show how substantial qualitative material consisting of graphic cognitive maps can be analysed by using digital CmapTools, Excel and SPSS. Evidence is provided of how qualitative and quantitative methods can be combined in educational research by transforming qualitative data into quantitative data to facilitate discoveries of patterns in the data. Examples are drawn

Hans-Åke Scherp



Quantifying Local Radiation-Induced Lung Damage From Computed Tomography  

SciTech Connect

Purpose: Optimal implementation of new radiotherapy techniques requires accurate predictive models for normal tissue complications. Since clinically used dose distributions are nonuniform, local tissue damage needs to be measured and related to local tissue dose. In lung, radiation-induced damage results in density changes that have been measured by computed tomography (CT) imaging noninvasively, but not yet on a localized scale. Therefore, the aim of the present study was to develop a method for quantification of local radiation-induced lung tissue damage using CT. Methods and Materials: CT images of the thorax were made 8 and 26 weeks after irradiation of 100%, 75%, 50%, and 25% lung volume of rats. Local lung tissue structure (S_L) was quantified from the local mean and local standard deviation of the CT density in Hounsfield units in 1-mm³ subvolumes. The relation of changes in S_L (ΔS_L) to histologic changes and breathing rate was investigated. Feasibility for clinical application was tested by applying the method to CT images of a patient with non-small-cell lung carcinoma and investigating the local dose-effect relationship of ΔS_L. Results: In rats, a clear dose-response relationship of ΔS_L was observed at different time points after radiation. Furthermore, ΔS_L correlated strongly with histologic endpoints (infiltrates and inflammatory cells) and breathing rate. In the patient, progressive local dose-dependent increases in ΔS_L were observed. Conclusion: We developed a method to quantify local radiation-induced tissue damage in the lung using CT. This method can be used in the development of more accurate predictive models for normal tissue complications.
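The abstract defines S_L from the local mean and local standard deviation of CT density in small subvolumes, without giving the exact combination of the two. A minimal sketch of the underlying step, computing both maps over non-overlapping cubic blocks, follows; the block size and the synthetic Hounsfield-unit values are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def block_stats(ct, block=4):
    """Local mean and SD of CT density over non-overlapping block^3 subvolumes.

    ct is a 3-D array of Hounsfield units; trailing voxels that do not fill
    a complete block are discarded. Returns (mean_map, sd_map).
    """
    nz, ny, nx = (s - s % block for s in ct.shape)
    v = ct[:nz, :ny, :nx].reshape(
        nz // block, block, ny // block, block, nx // block, block)
    # Gather each block's voxels into the last axis, then reduce over it
    v = v.transpose(0, 2, 4, 1, 3, 5).reshape(
        nz // block, ny // block, nx // block, -1)
    return v.mean(axis=-1), v.std(axis=-1)

# Toy volume: a "healthy" half (~-800 HU, homogeneous) and a "damaged"
# half (denser and more heterogeneous), mimicking radiation-induced change
rng = np.random.default_rng(1)
lung = rng.normal(-800.0, 20.0, size=(16, 16, 16))
lung[8:, :, :] = rng.normal(-500.0, 80.0, size=(8, 16, 16))
mean_map, sd_map = block_stats(lung, block=4)
```

Damaged regions then show up as subvolumes with both a higher local mean density and a larger local standard deviation, which is the kind of local signal ΔS_L is built from.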

Ghobadi, Ghazaleh; Hogeweg, Laurens E. [Department of Radiation Oncology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands); Faber, Hette [Department of Radiation Oncology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands); Department of Cell Biology, Section of Radiation and Stress Cell Biology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands); Tukker, Wim G.J. [Department of Radiology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands); Schippers, Jacobus M. [Department of Radiation Oncology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands); Accelerator Department, Paul Scherrer Institut, Villigen (Switzerland); Brandenburg, Sytze [Kernfysisch Versneller Instituut, Groningen (Netherlands); Langendijk, Johannes A. [Department of Radiation Oncology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands); Coppes, Robert P. [Department of Radiation Oncology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands); Department of Cell Biology, Section of Radiation and Stress Cell Biology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands); Luijk, Peter van, E-mail: p.van.luijk@rt.umcg.n [Department of Radiation Oncology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands)



Aquatic Treadmill Walking: Quantifying Drag Force and Energy Expenditure.  


CONTEXT: Quantification of the magnitude of fluid resistance provided by water jets (currents), and of its effect on energy expenditure during aquatic treadmill walking, is lacking in the scientific literature. OBJECTIVE: To quantify the effect of water jet intensity on jet velocity, drag force, and oxygen uptake during aquatic treadmill walking. DESIGN: Descriptive and repeated measures. SETTING: Athletic training facility. PARTICIPANTS, INTERVENTION, AND MEASURE: Water jet velocities were measured using an electromagnetic flow meter at nine different jet intensities (0-80% of maximum). Drag forces on three healthy subjects, who displayed a range of frontal areas (600, 880, and 1250 cm2), were measured at each jet intensity with a force transducer and line attached to the subject, who was suspended in the water. Five healthy participants (age = 37.2 ± 11.3 yr; weight = 611 ± 96 N) subsequently walked (~1.03 m/s or 2.3 mph) on an aquatic treadmill at the nine jet intensities while expired gases were collected to estimate oxygen uptake (VO2). RESULTS: Over the range of jet intensities, water jet velocities and drag forces were between 0-1.2 m/s and 0-47 N, respectively. VO2 increased nonlinearly, with values ranging from 11.4 ± 1.0 to 22.2 ± 3.8 ml/kg/min across 0-80% of jet maximum. CONCLUSIONS: This study presented methodology for quantifying water jet flow velocities and drag forces in an aquatic treadmill environment and examined how different jet intensities influenced VO2 during walking. Quantification of these variables provides a fundamental understanding of aquatic jet use and its effect on VO2. In practice, the results indicate that VO2 may be substantially increased on an aquatic treadmill while maintaining a relatively slow walking speed. PMID:22715134
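The drag forces here were measured directly with a force transducer; for intuition, a standard quadratic drag model F = ½ρ·Cd·A·v² reproduces the reported magnitudes. The drag coefficient and water density below are assumed values for illustration, not quantities reported by the study.

```python
def drag_force(jet_velocity_m_s, frontal_area_cm2, cd=0.7, rho=1000.0):
    """Quadratic drag model F = 0.5 * rho * Cd * A * v^2, in newtons.

    cd (drag coefficient) and rho (water density, kg/m^3) are assumed
    illustrative values; frontal area is given in cm^2 as in the study.
    """
    area_m2 = frontal_area_cm2 * 1e-4
    return 0.5 * rho * cd * area_m2 * jet_velocity_m_s**2

# Mid-size frontal area (880 cm^2) at the fastest reported jet (1.2 m/s)
f_max = drag_force(1.2, 880.0)   # ~44 N, comparable to the reported 0-47 N
f_off = drag_force(0.0, 880.0)   # 0 N with the jets off
```

That a plausible Cd lands the model near the measured 47 N maximum is only a consistency check; the quadratic velocity dependence is also why VO2 rises nonlinearly with jet intensity.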

Bressel, Eadric; Smith, Gerald; Miller, Andrew; Dolny, Dennis



How to Quantify Engineered Tissue Structure and Mechanical ...  

Center for Biologics Evaluation and Research (CBER)



Quantifying the Ease of Scientific Discovery  

PubMed Central

It has long been known that scientific output proceeds on an exponential increase, or more properly, a logistic growth curve. The interplay between effort and discovery is clear, and the nature of the functional form has been thought to be due to many changes in the scientific process over time. Here I show a quantitative method for examining the ease of scientific progress, another necessary component in understanding scientific discovery. Using examples from three different scientific disciplines – mammalian species, chemical elements, and minor planets – I find the ease of discovery to conform to an exponential decay. In addition, I show how the pace of scientific discovery can be best understood as the outcome of both scientific output and ease of discovery. A quantitative study of the ease of scientific discovery in the aggregate, such as done here, has the potential to provide a great deal of insight into both the nature of future discoveries and the technical processes behind discoveries in science.
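The claim that ease of discovery conforms to an exponential decay can be checked with a simple log-linear least-squares fit. The sketch below uses synthetic stand-in data, not the paper's mammal, element, or minor-planet series.

```python
import numpy as np

def fit_exponential_decay(t, ease):
    """Least-squares fit of ease(t) = A * exp(-lam * t).

    Taking logs turns the model into a straight line, so a degree-1
    polynomial fit recovers A and the decay constant lam.
    """
    slope, intercept = np.polyfit(t, np.log(ease), 1)
    return np.exp(intercept), -slope

# Synthetic "ease of discovery" series with known parameters A=3, lam=0.08
t = np.arange(0.0, 50.0, 5.0)
ease = 3.0 * np.exp(-0.08 * t)
A, lam = fit_exponential_decay(t, ease)
```

Pairing such a fitted ease curve with an output (effort) curve is the step the paper describes for understanding the overall pace of discovery.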

Arbesman, Samuel



Use of short half-life cosmogenic isotopes to quantify sediment mixing and transport in karst conduits  

NASA Astrophysics Data System (ADS)

Particulate inorganic carbon (PIC) transport and flux in karst aquifers is poorly understood. Methods to quantify PIC flux are needed in order to account for total inorganic carbon removal (chemical plus mechanical) from karst settings. Quantifying PIC flux will allow more accurate calculations of landscape denudation and global carbon sink processes. The study concentrates on the critical processes of the suspended sediment component of mass flux - surface soil/stored sediment mixing, transport rates and distance, and sediment storage times. The primary objective of the study is to describe transport and mixing with the resolution of single storm-flow events. To quantify the transport processes, short half-life cosmogenic isotopes are utilized. The isotopes 7Be (t1/2 = 53d) and 210Pb (t1/2 = 22y) are the primary isotopes measured, and other potential isotopes such as 137Cs and 241Am are investigated. The study location is at Mammoth Cave National Park within the Logsdon River watershed. The Logsdon River conduit is continuously traversable underground for two kilometers. Background levels and input concentrations of isotopes are determined from soil samples taken at random locations in the catchment area, and suspended sediment collected from the primary sinking stream during a storm event. Suspended sediment was also collected from the downstream end of the conduit during the storm event. After the storm flow receded, fine sediment samples were taken from the cave stream at regular intervals to determine transport distances and mixing ratios along the conduit. Samples were analyzed with a Canberra Industries gamma ray spectrometer, counted for 24 hours to increase detection of low radionuclide activities. The measured activity levels of radionuclides in the samples were adjusted for decay from time of sampling using standard decay curves. 
The results of the study show that surface sediment mixing, transport, and storage in karst conduits are dynamic but potentially quantifiable processes at the storm-event scale.
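The decay adjustment described above, correcting a measured activity back to the time of sampling with standard decay curves, amounts to multiplying by 2^(Δt/t½). A minimal sketch using the half-lives quoted in the abstract (7Be: 53 d; 210Pb: 22 y):

```python
# Half-lives in days, as given in the abstract
HALF_LIFE_DAYS = {"Be-7": 53.0, "Pb-210": 22.0 * 365.25}

def decay_correct(measured_activity, isotope, days_since_sampling):
    """Back-correct a measured activity to sampling time: A0 = A * 2^(t/t_half)."""
    return measured_activity * 2.0 ** (
        days_since_sampling / HALF_LIFE_DAYS[isotope])

# A 7Be activity counted one half-life (53 d) after sampling doubles
a0 = decay_correct(10.0, "Be-7", 53.0)
```

For the short-lived 7Be this correction is substantial within weeks, which is why prompt counting matters for resolving single storm-flow events, while 210Pb barely changes on that timescale.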

Paylor, R.




Microsoft Academic Search

By combining the laws of classical quantification theory with the modal propositional logic K in the most direct manner, one produces the simplest Quantified Modal Logic. The models of this simple QML relativize predication to possible worlds and interpret the quantifiers as ranging over a single, fixed domain. But simple QML has many controversial features, not the least of which

Bernard Linsky; Edward N. Zalta



Identifying and Quantifying Landscape Patterns in Space and Time  

Microsoft Academic Search

In landscape ecology, approaches to identify and quantify landscape patterns are well developed for discrete landscape representations. Discretisation is often seen as a form of generalisation and simplification. Landscape patterns however are shaped by complex dynamic processes acting at various spatial and temporal scales. Thus, standard landscape metrics that quantify static, discrete overall landscape pattern or individual patch properties may

Janine Bolliger; Helene H. Wagner; Monica G. Turner


Quantifier elimination for real closed fields by cylindrical algebraic decomposition  

Microsoft Academic Search

Tarski in 1948, [18] published a quantifier elimination method for the elementary theory of real closed fields (which he had discovered in 1930). As noted by Tarski, any quantifier elimination method for this theory also provides a decision method, which enables one to decide whether any sentence of the theory is true or false. Since many important and difficult mathematical

George E. Collins



Land cover change and remote sensing: Examples of quantifying spatiotemporal dynamics in tropical forests  

SciTech Connect

Research on human impacts or natural processes that operate over broad geographic areas must explicitly address issues of scale and spatial heterogeneity. While the tropical forests of Southeast Asia and Mexico have been occupied and used to meet human needs for thousands of years, traditional forest management systems are currently being transformed by rapid and far-reaching demographic, political, economic, and environmental changes. The dynamics of population growth, migration into the remaining frontiers, and responses to national and international market forces result in a demand for land to produce food and fiber. These results illustrate some of the mechanisms that drive current land use changes, especially in the tropical forest frontiers. By linking the outcome of individual land use decisions and measures of landscape fragmentation and change, the aggregated results show the hierarchy of temporal and spatial events that in summation result in global changes to the most complex and sensitive biome -- tropical forests. By quantifying the spatial and temporal patterns of tropical forest change, researchers can assist policy makers by showing how landscape systems in these tropical forests are controlled by physical, biological, social, and economic parameters.

Krummel, J.R.; Su, Haiping [Argonne National Lab., IL (United States); Fox, J. [East-West Center, Honolulu, HI (United States); Yarnasan, S.; Ekasingh, M. [Chiang Mai Univ. (Thailand)




EPA Science Inventory

A significant limitation in defining remediation needs at contaminated sites often results from an insufficient understanding of the transport processes that control contaminant migration. The objectives of this research were to help resolve this dilemma by providing an improved ...


Using `LIRA' To Quantify Diffuse Structure Around X-ray and Gamma-Ray Pulsars  

NASA Astrophysics Data System (ADS)

In this poster, we exploit several capabilities of the Low-count Image Restoration and Analysis (LIRA) package to quantify details of faint "scruffy" emission, consistent with PWN around X-ray and gamma-ray pulsars. Our preliminary results show evidence for irregular structure on scales of 1''-10'' or less (i.e. <500 pc), rather than larger smooth loops. Additionally, we can show this structure to be visible across several energy bands. LIRA grew out of work by the California-Boston Astro-Statistics Collaboration (CBASC) on analyzing high-resolution, high-energy Poisson images from X-ray and gamma-ray telescopes (see Stein et al., these proceedings; also Esch et al. 2004; and Connors and van Dyk in SCMAIV). LIRA fits a "Null" or background model shape, times a scale factor, plus a flexible Multi-Scale (MS) model, folded through an instrument response (PSF, exposure). Embedding this in a fully Poisson probability structure allows us to map out uncertainties in our image analysis and reconstruction via many MCMC samples. Specifically, for quantifying irregular nebular structure, we exploit the Multi-Scale model's smoothing parameters at each length scale as "summary statistics" (i.e. low-dimensional summaries of the probability space). When distributions of these summary statistics from analysis of simulated "Null" data sets are compared with those from the actual Chandra data, we can set quantitative limits on structures at different length scales. Since one can do this for very low counts, one is able to analyze and compare structure in several energy slices. This work is supported by NSF and AISR funds.
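The core of the comparison described above, distributions of a summary statistic from "Null" simulations versus the value from real data, reduces to a tail probability. A generic sketch of that step (the Gaussian null draws and threshold are placeholders, not LIRA's actual statistics):

```python
import numpy as np

def null_tail_probability(null_stats, observed):
    """Fraction of null-model simulations whose statistic >= the observed value.

    A small value means the observed structure is unlikely under the
    background-only model, the logic behind comparing summary-statistic
    distributions from simulated "Null" data with the real data.
    """
    null_stats = np.asarray(null_stats, dtype=float)
    return (null_stats >= observed).mean()

# Placeholder null distribution: 1000 simulated summary statistics
rng = np.random.default_rng(42)
null_stats = rng.normal(0.0, 1.0, 1000)
p = null_tail_probability(null_stats, 3.0)   # small if 3.0 is extreme under the null
```

Repeating this per length scale and per energy band is what lets one set quantitative limits on structure at each scale even in the very-low-count regime.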

Connors, Alanna; Stein, Nathan M.; van Dyk, David; Siemiginowska, Aneta; Kashyap, Vinay; Roberts, Mallory



Quantifying the clinical relevance of a laboratory observer performance paradigm  

PubMed Central

Objective Laboratory observer performance measurements, receiver operating characteristic (ROC) and free-response ROC (FROC), differ from actual clinical interpretations in several respects, which could compromise their clinical relevance. The objective of this study was to develop a method for quantifying the clinical relevance of a laboratory paradigm and apply it to compare the ROC and FROC paradigms in a nodule detection task. Methods The original prospective interpretations of 80 digital chest radiographs were classified by the truth panel as correct (C=1) or incorrect (C=0), depending on correlation with additional imaging, and the average of C was interpreted as the clinical figure of merit. FROC data were acquired for 21 radiologists and ROC data were inferred using the highest ratings. The areas under the ROC and alternative FROC curves were used as laboratory figures of merit. Bootstrap analysis was conducted to estimate conventional agreement measures between laboratory and clinical figures of merit. Also computed was a pseudovalue-based image-level correctness measure of the laboratory interpretations, whose association with C, measured by the area (rAUC) under an appropriately defined relevance ROC curve, serves as a measure of the clinical relevance of a laboratory paradigm. Results Low correlations (e.g. ρ = 0.244) and near-chance-level rAUC values (e.g. 0.598), attributable to differences between the clinical and laboratory paradigms, were observed. The absolute width of the confidence interval was 0.38 for the interparadigm differences of the conventional measures and 0.14 for the difference of the rAUCs. Conclusion The rAUC measure was consistent with the traditional measures but was more sensitive to the differences in clinical relevance. A new relevance ROC method for quantifying the clinical relevance of a laboratory paradigm is proposed.
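An rAUC of the kind described, the area under a ROC curve built from a continuous laboratory correctness score against the binary clinical label C, can be computed with the standard Mann-Whitney rank formulation. This is a generic AUC sketch, not the paper's pseudovalue machinery; the example scores are invented.

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels are 1 (clinically correct, C=1) or 0 (C=0); each pair where the
    positive outscores the negative counts 1, with ties given half credit.
    An AUC of 0.5 is chance level, as with the near-chance rAUCs reported.
    """
    pos = [s for s, c in zip(scores, labels) if c == 1]
    neg = [s for s, c in zip(scores, labels) if c == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented example: laboratory scores that perfectly separate C=1 from C=0
perfect = auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])   # 1.0
chance = auc([0.5, 0.5], [1, 0])                     # 0.5 (all tied)
```

On this scale the paper's rAUC of 0.598 sits just above the chance value of 0.5, which is why it reads as weak clinical relevance.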

Chakraborty, D P; Haygood, T M; Ryan, J; Marom, E M; Evanoff, M; Mcentee, M F; Brennan, P C



Quantifying transport properties by exchange matrix method  

NASA Astrophysics Data System (ADS)

The exchange matrix method is described for studying transport properties in chaotic geophysical flows. This study is important for applications to the transport of pollutants (such as petroleum patches) in tidal flows. To construct this special exchange matrix (first suggested by Spencer & Wiley), we use an approximation of such flows developed by Zimmerman, who adopted the idea of chaotic advection first put forward by Aref. For a quantitative estimation of the transport properties we then explore the coarse-grained density description introduced by Gibbs and Welander. Such coarse-grained representations over an investigation area show a "residence place" for the pollutant material at any instant. The orbit expansion method, which exploits the assumption that the contributions of tidal and residual currents are of different orders (the tidal being much stronger), does not give answers in many real situations. The exchange matrix, by contrast, can show the transport of patches or particles from any place in the area under consideration to an arbitrary location in the tidal sea at any later time.
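Operationally, an exchange matrix acts on a coarse-grained density as a row-stochastic transition matrix: entry P[i, j] is the fraction of material in cell i that ends up in cell j after one tidal period. The sketch below illustrates that iteration on a toy 3-cell domain; the matrix values are invented for illustration, not derived from any tidal flow model.

```python
import numpy as np

def evolve_density(density, exchange_matrix, n_tides):
    """Advance a coarse-grained density through n tidal periods.

    rho_{k+1} = rho_k @ P, where row i of P gives the fractions of cell i's
    material redistributed to each cell over one period (rows sum to 1,
    so total mass is conserved).
    """
    rho = np.asarray(density, dtype=float)
    P = np.asarray(exchange_matrix, dtype=float)
    assert np.allclose(P.sum(axis=1), 1.0), "rows of an exchange matrix sum to 1"
    for _ in range(n_tides):
        rho = rho @ P
    return rho

# Toy domain: a pollutant patch starts entirely in cell 0
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
rho = evolve_density([1.0, 0.0, 0.0], P, 50)
```

Iterating the matrix shows exactly what the text claims: where material from any starting cell resides after an arbitrary number of tidal periods (here the density relaxes toward the chain's stationary distribution).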

Krasnopolskaya, Tatyana; Meleshko, Vyacheslav



GFP-tagged E. coli shows bacterial distribution in mouse organs: pathogen tracking using fluorescence signal  

PubMed Central

Purpose In vaccine efficacy evaluation, visualizing pathogens in the whole organism at each time point would reduce the number of animals consumed and provide in vivo information against a consistent background in the same organism. Materials and Methods Using the IVIS Spectrum whole live-animal imaging system, fluorescence intensity was optimized and shown to scale with the concentration of the GFP-expressing Escherichia coli MC1061 strain (E. coli-GFP) injected into BALB/c mice. Results The local distribution of disseminated E. coli-GFP was traced in each organ by fluorescence. Detached organs showed more distinct fluorescent signals, with the intestine showing the strongest signal. Conclusion This in vivo imaging method using a GFP-tagged pathogen strain suggests that quantifying infecting pathogens by fluorescence intensity in whole animals can provide information about their localization and distribution after infection.

Park, Pil-Gu; Cho, Min-Hee; Rhie, Gi-eun; Jeong, Haeseul; Youn, Hyewon




Microsoft Academic Search

As with Hurricane Andrew in 1992 and the Northridge Earthquake in 1994, the terrorist attack on the World Trade Center on September 11, 2001, has resulted in major advances in the quantification and management of a class of catastrophe insurance risks. The control of accumulations of exposure in urban areas is a basic principle of insurance portfolio management, but

Gordon Woo



Quantifying absolute carbon isotope ratios by AMS  

NASA Astrophysics Data System (ADS)

Our AMS produced a ratio of instrument transmissions for 14C and 13C equal to 13/14 to 0.22 ± 0.06% accuracy at a specific charge-changing energy of 235 keV and giving absolute isotope ratios with that mass correction. The ratio for 13C and 12C transmissions was 12/13 at 215 keV. A differential equation model of energetic ion and atom transport through the gas collision cell was constructed with functions representing experimental cross-sections for all collision effects. This model showed cation yield dependencies that were linear with energy for the monoenergetic isotopic ions at the energies that we measured, explaining the inverse mass dependence. The model predicted multiple lower energies that have this dependence and, more importantly, two low energies at which cation yields were equal for pairs of isotopes, presaging potential absolute isotope ratio measurements at low MS energies.

Vogel, John S.; Giacomo, Jason A.; Dueker, Stephen R.



Quantifying systematic uncertainties in supernova cosmology  

SciTech Connect

Observations of Type Ia supernovae used to map the expansion history of the Universe suffer from systematic uncertainties that need to be propagated into the estimates of cosmological parameters. We propose an iterative Monte Carlo simulation and cosmology fitting technique (SMOCK) to investigate the impact of sources of error upon fits of the dark energy equation of state. This approach is especially useful to track the impact of non-Gaussian, correlated effects, e.g. reddening correction errors, brightness evolution of the supernovae, K-corrections, gravitational lensing, etc. While the tool is primarily aimed at studies and optimization of future instruments, we use the Gold data-set in Riess et al (2007 Astrophys. J. 659 98) to show examples of potential systematic uncertainties that could exceed the quoted statistical uncertainties.
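The iterative Monte Carlo idea can be illustrated with a toy stand-in for SMOCK (all numbers and the one-parameter "cosmology" below are invented for illustration, not taken from the paper): each realization draws a systematic offset, perturbs the mock data, and refits; the spread of the fitted values estimates the systematic uncertainty.

```python
# Toy Monte Carlo propagation in the spirit of the approach described above
# (not the actual SMOCK tool): draw a systematic offset per realization,
# perturb mock "distance moduli", refit one parameter, and read off the
# scatter of the fits as the propagated systematic uncertainty.
import random
import statistics

random.seed(0)
z = [0.1 * (i + 1) for i in range(10)]      # mock redshifts
true_mu = [5.0 * x + 40.0 for x in z]       # toy distance moduli, slope fixed

def fit_offset(mu):
    """Least-squares fit of the constant b in mu = 5*z + b."""
    return statistics.mean(m - 5.0 * x for m, x in zip(mu, z))

fits = []
for _ in range(2000):
    syst = random.gauss(0.0, 0.02)          # assumed 0.02 mag systematic
    mu = [m + syst for m in true_mu]
    fits.append(fit_offset(mu))

syst_error = statistics.stdev(fits)         # recovers the injected ~0.02
```

In the real problem the "offset" would be a correlated, possibly non-Gaussian effect (reddening, evolution, lensing), which is exactly what makes the Monte Carlo route preferable to analytic propagation.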

Nordin, Jakob; Goobar, Ariel; Joensson, Jakob, E-mail:, E-mail:, E-mail: [Department of Physics, Stockholm University, Albanova University Center, S-106 91 Stockholm (Sweden)



Quantifying non-Gaussianity for quantum information  

SciTech Connect

We address the quantification of non-Gaussianity (nG) of states and operations in continuous-variable systems and its use in quantum information. We start by illustrating in detail the properties and the relationships of two recently proposed measures of nG based on the Hilbert-Schmidt distance and the quantum relative entropy (QRE) between the state under examination and a reference Gaussian state. We then evaluate the non-Gaussianities of several families of non-Gaussian quantum states and show that the two measures have the same basic properties and also share the same qualitative behavior in most of the examples taken into account. However, we also show that they introduce a different relation of order; that is, they are not strictly monotone to each other. We exploit the nG measures for states in order to introduce a measure of nG for quantum operations, to assess Gaussification and de-Gaussification protocols, and to investigate in detail the role played by nG in entanglement-distillation protocols. Besides, we exploit the QRE-based nG measure to provide different insight on the extremality of Gaussian states for some entropic quantities such as conditional entropy, mutual information, and the Holevo bound. We also deal with parameter estimation and present a theorem connecting the QRE nG to the quantum Fisher information. Finally, since evaluation of the QRE nG measure requires the knowledge of the full density matrix, we derive some experimentally friendly lower bounds to nG for some classes of states and by considering the possibility of performing on the states only certain efficient or inefficient measurements.
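One easily checked consequence of the QRE-based measure described above (our own worked example, not taken from the paper): for the single-photon Fock state, the reference Gaussian state with matched moments is a thermal state with unit mean photon number, so the non-Gaussianity reduces to that thermal state's von Neumann entropy, 2 ln 2.

```python
# Worked example for the QRE non-Gaussianity delta(rho) = S(rho || tau),
# where tau is the Gaussian state with the same first and second moments
# as rho.  Because ln(tau) is quadratic in the mode operators, the relative
# entropy reduces to S(tau) - S(rho).  For the pure Fock state |1>,
# S(rho) = 0 and tau is thermal with mean photon number nbar = 1.
import math

def thermal_entropy(nbar):
    """von Neumann entropy of a single-mode thermal state."""
    return (nbar + 1) * math.log(nbar + 1) - nbar * math.log(nbar)

def qre_nonG_fock1():
    """QRE non-Gaussianity of the Fock state |1>: S(tau) with S(rho) = 0."""
    return thermal_entropy(1.0)

print(qre_nonG_fock1())   # 2 ln 2
```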

Genoni, Marco G.; Paris, Matteo G. A. [Consorzio Nazionale Interuniversitario per le Scienze Fisiche della Materia, UdR Milano, I-20133 Milano (Italy); Dipartimento di Fisica, Universita degli Studi di Milano, I-20133 Milano (Italy)



Quantifying protein–protein interactions in high throughput using protein domain microarrays  

Microsoft Academic Search

Protein microarrays provide an efficient way to identify and quantify protein–protein interactions in high throughput. One drawback of this technique is that proteins show a broad range of physicochemical properties and are often difficult to produce recombinantly. To circumvent these problems, we have focused on families of protein interaction domains. Here we provide protocols for constructing microarrays of protein interaction

Alexis Kaushansky; John E Allen; Andrew Gordus; Michael A Stiffler; Ethan S Karp; Bryan H Chang; Gavin MacBeath



In vivo approaches and rationale for quantifying kinetics and imaging brain lipid metabolic pathways  

Microsoft Academic Search

Developing a kinetic strategy to examine rates of lipid metabolic pathways can help to elucidate the roles that lipids play in tissue function and structure in health and disease. This review summarizes such a strategy, and shows how it has been applied to quantify different kinetic aspects of brain lipid metabolism in animals and humans. Methods involve injecting intravenously a

Stanley I. Rapoport



Radon as a Natural Partitioning Tracer for Locating and Quantifying DNAPL Saturation in the Subsurface  

NASA Astrophysics Data System (ADS)

The inability to locate and quantify dense nonaqueous phase liquid (DNAPL) saturation in the subsurface presents obstacles to site characterization and remediation. The objective of this study is to evaluate the use of naturally occurring radon as an in-situ, partitioning tracer to locate and quantify DNAPL saturation. In the saturated zone, radon emanating from aquifer solids occurs as a dissolved gas and, due to its non-polarity, partitions into DNAPL. Partitioning between the DNAPL and aqueous phases results in retarded radon transport during groundwater flow. The radon retardation factor can be determined using single-well 'push-pull' tracer tests, enabling the calculation of the DNAPL saturation. Radon can also be used as a 'static' partitioning tracer, whereby grab samples of radon from monitoring wells in contaminated and non-contaminated portions of an aquifer are collected and compared to calculate the DNAPL saturation and to monitor saturation changes as remediation proceeds. The utility of these methods was investigated in the laboratory using a physical aquifer model (PAM). Static and push-pull tests were performed before and after contamination of a portion of the PAM sediment pack with trichloroethene (TCE). The PAM was then remediated using alcohol cosolvent and tap water flushes, and static and push-pull tests were performed to assess the efficacy of remediation. Numerical simulations were used to estimate the retardation factor for radon in the push-pull tests. Radon partitioning was observed in static and push-pull tests conducted after TCE contamination. Calculated TCE saturations ranged up to 1.4 % (static test) and 14.1 % (push-pull test), based on the numerical modeling approach used to analyze the results. Post-remediation tests showed decreases in TCE saturations. The results show that radon is sensitive to changes in DNAPL (e.g., TCE) saturation in space and time. 
Recent advances in numerical modeling of radon in push-pull tests have shown the influence of TCE saturation distribution and initial radon concentrations on radon breakthrough curves and calculated TCE saturations. These advances have led to more accurate predictions of the TCE saturation in the PAM. The push-pull method was applied at a field site at Dover Air Force Base, Delaware. The site consists of an aquifer 'test cell' 27 ft long and 18 ft wide surrounded by steel pilings to a clay confining unit 40 ft below grade. Push-pull tests were performed before and after contamination of the test cell with perchloroethene (PCE). Push-pull tests performed before contamination showed no evidence of radon retardation, while tests performed after contamination showed evidence of retardation and suggested the presence of PCE.
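The saturation calculation from a measured retardation factor can be sketched with the standard partitioning-tracer relation (a simplification of the paper's numerical model; the partition coefficient K = 50 is an assumed, literature-typical value for radon partitioning into TCE, not a number from the abstract):

```python
# Standard partitioning-tracer relation (our sketch, not the paper's full
# numerical model): the radon retardation factor R relates to the DNAPL
# saturation S through R = 1 + K * S / (1 - S), where K is the radon
# DNAPL-water partition coefficient (K ~ 50 for TCE is an assumed value).

def retardation_from_saturation(S, K=50.0):
    """Retardation factor for a given DNAPL saturation S (0 <= S < 1)."""
    return 1.0 + K * S / (1.0 - S)

def saturation_from_retardation(R, K=50.0):
    """Invert R = 1 + K*S/(1-S) for the DNAPL saturation S."""
    return (R - 1.0) / (R - 1.0 + K)

# A measured retardation of ~8.6 corresponds to roughly 13% TCE saturation:
S = saturation_from_retardation(8.6)
```

No retardation (R = 1) maps to zero saturation, matching the clean pre-contamination tests described above.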

Davis, B. M.; Istok, J.; Semprini, L.



Quantifying the trustworthiness of social media content  

Microsoft Academic Search

The growing popularity of social media in recent years has resulted in the creation of an enormous amount of user-generated content. A significant portion of this information is useful and has proven to be a great source of knowledge. However, since much of this information has been contributed by strangers with little or no apparent reputation to speak of, there

Sai T. Moturu; Huan Liu



Quantifying covalency and metallicity in correlated compounds undergoing metal-insulator transitions  

NASA Astrophysics Data System (ADS)

The tunability of bonding character in transition-metal compounds controls phase transitions and their fascinating properties such as high-temperature superconductivity, colossal magnetoresistance, spin-charge ordering, etc. However, separating out and quantifying the roles of covalency and metallicity derived from the same set of transition-metal d and ligand p electrons remains a fundamental challenge. In this study, we use bulk-sensitive photoelectron spectroscopy and configuration-interaction calculations for quantifying the covalency and metallicity in correlated compounds. The method is applied to study the first-order temperature- (T-) dependent metal-insulator transitions (MITs) in the cubic pyrochlore ruthenates Tl2Ru2O7 and Hg2Ru2O7. Core-level spectroscopy shows drastic T-dependent modifications which are well explained by including ligand-screening and metallic-screening channels. The core-level metallic-origin features get quenched upon gap formation in valence band spectra, while ionic and covalent components remain intact across the MIT. The results establish temperature-driven Mott-Hubbard MITs in three-dimensional ruthenates and reveal three energy scales: (a) 4d electronic changes occur on the largest (~eV) energy scale, (b) the band-gap energies/charge gaps (Eg ~ 160-200 meV) are intermediate, and (c) the lowest-energy scale corresponds to the transition temperature TMIT (~10 meV), which is also the spin gap energy of Tl2Ru2O7 and the magnetic-ordering temperature of Hg2Ru2O7. The method is general for doping- and T-induced transitions and is valid for V2O3, CrN, La1-xSrxMnO3, La2-xSrxCuO4, etc. The obtained transition-metal-ligand (d-p) bonding energies (V ~ 45-90 kcal/mol) are consistent with thermochemical data, and with energies of typical heteronuclear covalent bonds such as C-H, C-O, C-N, etc. 
In contrast, the metallic-screening energies of correlated compounds form a weaker class (V* ~ 10-40 kcal/mol) but are still stronger than van der Waals and hydrogen bonding. The results identify and quantify the roles of covalency and metallicity in 3d and 4d correlated compounds undergoing metal-insulator transitions.

Chainani, Ashish; Yamamoto, Ayako; Matsunami, Masaharu; Eguchi, Ritsuko; Taguchi, Munetaka; Takata, Yasutaka; Takagi, Hidenori; Shin, Shik; Nishino, Yoshinori; Yabashi, Makina; Tamasaku, Kenji; Ishikawa, Tetsuya



Stretching DNA to quantify nonspecific protein binding.  


Nonspecific binding of regulatory proteins to DNA can be an important mechanism for target search and storage. This seems to be the case for the lambda repressor protein (CI), which maintains lysogeny after infection of E. coli. CI binds specifically at two distant regions along the viral genome and induces the formation of a repressive DNA loop. However, single-molecule imaging as well as thermodynamic and kinetic measurements of CI-mediated looping show that CI also binds to DNA nonspecifically and that this mode of binding may play an important role in maintaining lysogeny. This paper presents a robust phenomenological approach using a recently developed method based on the partition function, which allows calculation of the number of proteins bound nonspecifically to DNA from measurements of the DNA extension as a function of applied force. This approach was used to analyze several cycles of extension and relaxation of lambda DNA performed at several CI concentrations, to measure the dissociation constant for nonspecific binding of CI (~100 nM), and to obtain a measurement of the induced DNA compaction (~10%) by CI. PMID:23005450
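A minimal sketch of how occupancy and extension relate, assuming independent nonspecific sites (a Langmuir simplification of the paper's partition-function method; only the ~100 nM dissociation constant and the ~10% maximal compaction come from the abstract, everything else is illustrative):

```python
# Simplified stand-in for the partition-function analysis: treat nonspecific
# sites on lambda DNA as independent, so occupancy follows a Langmuir
# isotherm with the measured Kd ~ 100 nM, and total compaction saturates
# at ~10% when all sites are filled.  This is an illustrative sketch only.

KD_NM = 100.0               # dissociation constant from the abstract, in nM

def occupancy(conc_nM, kd_nM=KD_NM):
    """Fraction of nonspecific sites occupied at protein concentration conc."""
    return conc_nM / (conc_nM + kd_nM)

def relative_extension(conc_nM, max_compaction=0.10):
    """DNA extension relative to bare DNA, saturating at ~10% compaction."""
    return 1.0 - max_compaction * occupancy(conc_nM)

# At c = Kd, half the sites are filled and the compaction is ~5%:
x = relative_extension(100.0)
```

Inverting such a relation is how a measured extension change can be read back as a number of bound proteins.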

Goyal, Sachin; Fountain, Chandler; Dunlap, David; Family, Fereydoon; Finzi, Laura



Quantifying the effects of melittin on liposomes.  


Melittin, the soluble peptide of bee venom, has been demonstrated to induce lysis of phospholipid liposomes. We have investigated the dependence of the lytic activity of melittin on lipid composition. The lysis of liposomes, measured by following their mass and dimensions when immobilised on a solid substrate, was close to zero when the negatively charged lipids phosphatidyl glycerol or phosphatidyl serine were used as the phospholipid component of the liposome. Whilst there was significant binding of melittin to the liposomes, there was little net change in their diameter, and melittin binding was reversed upon salt injection. For the zwitterionic phosphatidyl choline the lytic ability of melittin is dependent on the degree of acyl chain unsaturation, with melittin able to induce lysis of liposomes in the liquid crystalline state, whilst those in the gel state showed strong resistance to lysis. By directly measuring the dimensions and mass changes of liposomes on exposure to melittin using Dual Polarisation Interferometry, rather than following the fluorescence of entrapped dyes, we attained further information about the initial stages of melittin binding to liposomes. PMID:17092481

Popplewell, J F; Swann, M J; Freeman, N J; McDonnell, C; Ford, R C



Quantified EEG in different G situations  

NASA Astrophysics Data System (ADS)

The electrical activity of the brain (EEG) has been recorded during parabolic flights in trained astronauts as well as untrained volunteers. Fast Fourier analysis of the EEG activity evidenced more asymmetry between the two brain hemispheres in the subjects who suffered from motion sickness than in the others. However, such an FFT classification does not lead to a discrimination between deterministic and stochastic events. Therefore, a first attempt was made to calculate the dimensionality of "chaotic attractors" in the EEG patterns as a function of the different g-epochs of one parabola. Very preliminary results are given here.

de Metz, K.; Quadens, O.; De Graeve, M.


Quantifying the natural history of breast cancer  

PubMed Central

Background: Natural history models of breast cancer progression provide an opportunity to evaluate and identify optimal screening scenarios. This paper describes a detailed Markov model characterising breast cancer tumour progression. Methods: Breast cancer is modelled by a 13-state continuous-time Markov model. The model differentiates between indolent and aggressive ductal carcinomas in situ tumours, and aggressive tumours of different sizes. We compared such aggressive cancers, that is, those which are non-indolent, with those which are non-growing or regressing. Model input parameters and structure were informed by the 1978–1984 Ostergotland county breast screening randomised controlled trial. Overlaid on the natural history model is the effect of screening on diagnosis. Parameters were estimated using Bayesian methods. Markov chain Monte Carlo integration was used to sample the resulting posterior distribution. Results: The breast cancer incidence rate in the Ostergotland population was 21 (95% CI: 17–25) per 10 000 woman-years. Accounting for length-biased sampling, an estimated 91% (95% CI: 85–97%) of breast cancers were aggressive. Larger tumours, 21–50 mm, had an average sojourn of 6 years (95% CI: 3–16 years), whereas aggressive ductal carcinomas in situ took around half a month (95% CI: 0–1 month) to progress to the invasive ≤10 mm state. Conclusion: These tumour progression rate estimates may facilitate future work analysing cost-effectiveness and quality-adjusted life years for various screening strategies.
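The sojourn times quoted above follow from the basic continuous-time Markov chain fact that the mean sojourn in a state is the reciprocal of its exit rate. A toy three-state progression chain makes this concrete (this is not the paper's 13-state model; the rates are illustrative, chosen only to echo the half-month and 6-year sojourns in the abstract):

```python
# Toy CTMC sketch of tumour progression (a simplified stand-in for the
# paper's 13-state model).  Mean sojourn in a state = 1 / (exit rate), so a
# 6-year sojourn corresponds to an exit rate of 1/6 per year.  In a linear
# progression chain, expected time to leave the last state is just the sum
# of the individual mean sojourns.

rates = {                         # exit rate (per year) of each state
    "DCIS": 24.0,                 # ~half-month sojourn, as in the abstract
    "small_invasive": 0.5,        # illustrative 2-year sojourn
    "large_invasive": 1.0 / 6.0,  # ~6-year sojourn, as in the abstract
}
order = ["DCIS", "small_invasive", "large_invasive"]  # linear progression

def mean_time_to_exit(start):
    """Expected years from `start` until leaving the last modelled state."""
    i = order.index(start)
    return sum(1.0 / rates[s] for s in order[i:])

t = mean_time_to_exit("DCIS")     # 1/24 + 2 + 6 years
```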

Tan, K H X; Simonella, L; Wee, H L; Roellin, A; Lim, Y-W; Lim, W-Y; Chia, K S; Hartman, M; Cook, A R



Quantifying in vivo MR spectra with circles  

PubMed Central

Accurate and robust quantification of in vivo magnetic resonance spectroscopy (MRS) data is essential to its application in research and medicine. The performance of existing analysis methods is problematic for in vivo studies where low signal-to-noise ratio, overlapping peaks and intense artefacts are endemic. Here, a new frequency-domain technique for MRS data analysis is introduced wherein the circular trajectories which result when spectral peaks are projected onto the complex plane are fitted with active circle models. The use of active contour strategies naturally allows incorporation of prior knowledge as constraint energy terms. The problem of phasing spectra is eliminated, and baseline artefacts are dealt with using active contours (snakes). The stability and accuracy of the new technique, CFIT, is compared with a standard time-domain fitting tool, using simulated 31P data with varying amounts of noise and 98 real human chest and heart 31P MRS data sets. The real data were also analyzed by our standard frequency-domain absorption-mode technique. On the real data, CFIT demonstrated the fewest fitting failures of all methods and an accuracy similar to the latter method, with both these techniques outperforming the time-domain approach. Contrasting results from simulations argue that performance relative to Cramér-Rao bounds may not be a suitable indicator of fitting performance with typical in vivo data such as these. We conclude that CFIT is a stable, accurate alternative to the best existing methods of fitting in vivo data.

Gabr, Refaat E.; Ouwerkerk, Ronald; Bottomley, Paul A.




Application of Tsallis Entropy to EEG: Quantifying the Presence of Burst Suppression After Asphyxial Cardiac Arrest in Rats  

PubMed Central

Burst suppression (BS) activity in EEG is clinically accepted as a marker of brain dysfunction or injury. Experimental studies in a rodent model of brain injury following asphyxial cardiac arrest (CA) show evidence of BS soon after resuscitation, appearing as a transitional recovery pattern between isoelectricity and continuous EEG. The EEG trends in such experiments suggest varying levels of uncertainty or randomness in the signals. To quantify the EEG data, Shannon entropy and Tsallis entropy (TsEn) are examined. More specifically, an entropy-based measure named TsEn area (TsEnA) is proposed to reveal the presence and the extent of development of BS following brain injury. The methodology of TsEnA and the selection of its parameter are elucidated in detail. To test the validity of this measure, 15 rats were subjected to 7 or 9 min of asphyxial CA. EEG recordings immediately after resuscitation from CA were investigated and characterized by TsEnA. The results show that TsEnA correlates well with the outcome assessed by evaluating the rodents after the experiments using a well-established neurological deficit score (Pearson correlation = 0.86, p ≤ 0.01). This research shows that TsEnA reliably quantifies the complex dynamics in BS EEG, and may be useful as an experimental or clinical tool for objective estimation of the gravity of brain damage after CA.
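The Tsallis entropy underlying TsEnA has a simple closed form for a discrete distribution; a minimal sketch follows (the TsEnA area measure itself, and the choice of the index q, are specific to the paper and not reproduced here):

```python
# Tsallis entropy of a discrete distribution p with entropic index q:
#   S_q(p) = (1 - sum_i p_i^q) / (q - 1),
# which recovers the Shannon entropy in the limit q -> 1.
import math

def tsallis_entropy(p, q):
    """Tsallis entropy; falls back to Shannon entropy when q is ~1."""
    if abs(q - 1.0) < 1e-12:
        return -sum(x * math.log(x) for x in p if x > 0)
    return (1.0 - sum(x ** q for x in p)) / (q - 1.0)

uniform = [0.25] * 4
s2 = tsallis_entropy(uniform, 2.0)   # (1 - 4 * 0.25**2) / 1 = 0.75
```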

Zhang, Dandan; Jia, Xiaofeng; Ding, Haiyan; Ye, Datian; Thakor, Nitish V.



Quantifying defects in zeolites and zeolite membranes  

NASA Astrophysics Data System (ADS)

Zeolites are crystalline aluminosilicates that are frequently used as catalysts to transform chemical feedstocks into more useful materials in a size- or shape-selective fashion; they are one of the earliest forms of nanotechnology. Zeolites can also be used, especially in the form of zeolite membranes (layers of zeolite on a support), to separate mixtures based on the size of the molecules. Recent advances have also created the possibility of using zeolites as alkaline catalysts, in addition to their traditional applications as acid catalysts and catalytic supports. Transport and catalysis in zeolites are greatly affected by physical and chemical defects. Such defects can be undesirable (in the case of zeolite membranes) or desirable (in the case of nitrogen-doped alkaline zeolites). Studying zeolites at the relevant length scales requires indirect experimental methods such as vapor adsorption or atomic-scale modeling such as electronic structure calculations. This dissertation explores both experimental and theoretical characterization of zeolites and zeolite membranes. Physical defects, important in membrane permeation, are studied using physical adsorption experiments and models of membrane transport. The results indicate that zeolite membranes can be modeled, for the purposes of adsorption, as a zeolite powder on top of a support (a "supported powder," so to speak). Mesoporosity that might be expected based on permeation and confocal microscopy measurements is not observed. Chemical defects (substitutions of nitrogen for oxygen) are studied using quantum mechanical models that predict spectroscopic properties. These models provide a method for simulating the 29Si NMR spectra of nitrogen-defected zeolites. They also demonstrate that nitrogen substitutes into the zeolite framework (not just on the surface) under the proper reaction conditions. 
The results of these studies will be valuable to experimentalists and theorists alike in our efforts to understand the versatile and complicated materials that are zeolites.

Hammond, Karl Daniel


Gait stability and variability measures show effects of impaired cognition and dual tasking in frail people  

PubMed Central

Background Falls in frail elderly are a common problem with a rising incidence. Gait and postural instability are major risk factors for falling, particularly in geriatric patients. As walking requires attention, cognitive impairments are likely to contribute to an increased fall risk. An objective quantification of gait and balance ability is required to identify persons with a high tendency to fall. Recent studies have shown that stride variability is increased in elderly and under dual task condition and might be more sensitive to detect fall risk than walking speed. In the present study we complemented stride related measures with measures that quantify trunk movement patterns as indicators of dynamic balance ability during walking. The aim of the study was to quantify the effect of impaired cognition and dual tasking on gait variability and stability in geriatric patients. Methods Thirteen elderly with dementia (mean age: 82.6 ± 4.3 years) and thirteen without dementia (79.4 ± 5.55) recruited from a geriatric day clinic, walked at self-selected speed with and without performing a verbal dual task. The Mini Mental State Examination and the Seven Minute Screen were administered. Trunk accelerations were measured with an accelerometer. In addition to walking speed, mean, and variability of stride times, gait stability was quantified using stochastic dynamical measures, namely regularity (sample entropy, long range correlations) and local stability exponents of trunk accelerations. Results Dual tasking significantly (p < 0.05) decreased walking speed, while stride time variability increased, and stability and regularity of lateral trunk accelerations decreased. Cognitively impaired elderly showed significantly (p < 0.05) more changes in gait variability than cognitive intact elderly. Differences in dynamic parameters between groups were more discerned under dual task conditions. Conclusions The observed trunk adaptations were a consistent instability factor. 
These results support the concept that changes in cognitive functions contribute to changes in the variability and stability of the gait pattern. Walking under dual task conditions and quantifying gait using dynamical parameters can improve the detection of walking disorders, and might help to identify those elderly who are able to adapt their walking and those who are not, and who are thus at greater risk of falling.



Quantifying the Waddington landscape and biological paths for development and differentiation  

PubMed Central

We developed a theoretical framework to prove the existence of, and to quantify, the Waddington landscape as well as chreodes, the biological paths for development and differentiation. Cells can occupy many states, with the higher-probability states giving the different cell types. Different cell types correspond to different basins of attraction of the probability landscape. We study how cells develop from undifferentiated to differentiated states from a landscape perspective. We quantified the Waddington landscape through construction of the underlying probability landscape for cell development. We show that the developmental process proceeds by moving from the undifferentiated basin of attraction to the differentiated ones. The barrier height of a basin of attraction correlates with the escape time and so determines the stability of a cell type. We show that the developmental process can be quantitatively described and uncovered by the biological paths on the quantified Waddington landscape from undifferentiated to differentiated cells. We found that the dynamics of the developmental process are controlled by a combination of the gradient and curl forces on the landscape. The biological paths often do not follow the steepest descent path on the landscape. The landscape framework also quantifies the possibility of the reverse differentiation process, such as cell reprogramming from differentiated cells back to the original stem cell. We show that the biological path of reverse differentiation is irreversible and different from that of the differentiation process. We found that the developmental process described by the underlying landscape and the associated biological paths is relatively stable and robust against the influence of environmental perturbations.

Wang, Jin; Zhang, Kun; Xu, Li; Wang, Erkang



Quantifying uncertainty in chemical systems modeling.  

SciTech Connect

This study compares two techniques for uncertainty quantification in chemistry computations, one based on sensitivity analysis and error propagation, and the other on stochastic analysis using polynomial chaos techniques. The two constructions are studied in the context of H2-O2 ignition under supercritical-water conditions. They are compared in terms of their prediction of uncertainty in species concentrations and the sensitivity of selected species concentrations to given parameters. The formulation is extended to one-dimensional reacting-flow simulations. The computations are used to study sensitivities to both reaction rate pre-exponentials and enthalpies, and to examine how this information must be evaluated in light of known, inherent parametric uncertainties in simulation parameters. The results indicate that polynomial chaos methods provide similar first-order information to conventional sensitivity analysis, while preserving higher-order information that is needed for accurate uncertainty quantification and for assigning confidence intervals on sensitivity coefficients. These higher-order effects can be significant, as the analysis reveals substantial uncertainties in the sensitivity coefficients themselves.
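The conventional sensitivity-analysis/error-propagation side of the comparison can be sketched on a generic Arrhenius rate constant (the parameter values below are made up for illustration; this is not the paper's H2-O2 mechanism):

```python
# First-order error propagation for k(A, E) = A * exp(-E / (R*T)):
#   var(k) ~= (dk/dA)^2 var(A) + (dk/dE)^2 var(E).
# This is the "conventional sensitivity analysis" route the study compares
# against polynomial chaos; all parameter values here are illustrative.
import math

R = 8.314                            # gas constant, J/(mol K)
T = 900.0                            # temperature, K
A, sigma_A = 1.0e10, 1.0e9           # pre-exponential and its uncertainty
E, sigma_E = 1.2e5, 5.0e3            # activation energy (J/mol) and uncertainty

k = A * math.exp(-E / (R * T))       # nominal rate constant
dk_dA = math.exp(-E / (R * T))       # sensitivity to the pre-exponential
dk_dE = -k / (R * T)                 # sensitivity to the activation energy

sigma_k = math.sqrt((dk_dA * sigma_A) ** 2 + (dk_dE * sigma_E) ** 2)
rel = sigma_k / k                    # relative uncertainty in k
```

Polynomial chaos reproduces this first-order information but additionally captures the higher-order terms that first-order propagation discards.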

Reagan, Matthew T.; Knio, Omar M. (The Johns Hopkins University, Baltimore, MD); Najm, Habib N.; Ghanem, Roger Georges (The Johns Hopkins University, Baltimore, MD); Pebay, Philippe Pierre



Quantifying the limits of convective parameterizations  

NASA Astrophysics Data System (ADS)

Quasi-equilibrium (QE) closure is an approximation that is expected to apply to a large ensemble of clouds under slowly changing weather conditions. It breaks down under rapidly changing conditions or when the domain size is too small to provide an adequate sample of the cloud field. We explore fluctuations about an equilibrium state as simulated by a three-dimensional cloud-resolving model. An ensemble of simulations is used to determine how the response to prescribed periodic large-scale forcing changes with the period of the forcing and the size of the averaging domain. The vertical profile of the forcing is loosely based on GATE data. Results are compared with those from constant forcing simulations. In the constant forcing simulations, the noise-to-signal ratio is nearly independent of forcing magnitude. With time-varying forcing, a considerable range of responses is found. As expected, the more slowly the forcing varies, the better the response is approximated by QE. Errors become large when the period of the forcing is less than 30 h, suggesting that the diurnal cycle cannot be accurately simulated with a QE closure. Nondeterministic variability becomes more significant with smaller domain sizes. For the cases studied, a domain width of at least 180 km is needed to obtain an adequate sample of the cloud population.

Jones, Todd R.; Randall, David A.



Quantifying object salience by equating distractor effects.  


It is commonly believed that objects viewed in certain contexts may be more or less salient. Measurements of salience have usually relied on asking observers "How much does this object stand out against the background?". In this study, we measured the salience of objects by assessing the distraction they produce for subjects searching for a different, pre-specified target. Distraction was measured through response times, but changes in response times were not assumed to be a linear measure of distracting potency. The analysis rested on measuring the effects of varying disparities (in size, luminance, or both) between a target object, a key distractor, and other background items. Our results indicate: (1) object salience defined by luminance or size disparity is determined by the ratio between its defining feature value and the corresponding feature value of background items; this finding is consistent with Weber's law for discrimination thresholds. (2) If we define salience as the logarithm of a feature value ratio, then salience increases approximately as fast due to increase in area as due to increase in luminance. (3) The sum of salience arising from object-background disparity in both size and luminance is larger than their vector sum (orthogonal vectors), but smaller than their scalar sum. PMID:15797780
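Finding (2) above can be stated compactly in code. This is a toy illustration of the log feature-ratio definition of salience (all feature values invented), not the authors' analysis:

```python
import math

def salience(feature_value, background_value):
    """Salience as the magnitude of the log feature ratio (Weber-like)."""
    return abs(math.log(feature_value / background_value))

s_size = salience(4.0, 2.0)    # object has twice the background area
s_lum = salience(80.0, 40.0)   # object has twice the background luminance

# Finding (3): combined salience lies between the vector sum and scalar sum.
combined_vector = math.hypot(s_size, s_lum)
combined_scalar = s_size + s_lum
```

By this definition, doubling area and doubling luminance yield equal salience increments, matching finding (2).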

Huang, Liqiang; Pashler, Harold



Use of tracers to quantify subsurface flow through a mining pit.  


Three independent tracer experiments were conducted to quantify the through-flow of water from Herman Pit, an abandoned mercury (Hg) mine pit adjacent to Clear Lake, California, USA. The tracers used were Rhodamine-WT, sulfur hexafluoride, and a mixture of sulfur hexafluoride and neon-22. The tracers were injected into Herman Pit, a generally well-mixed water body of approximately 81,000 m2, and the concentrations were monitored in the mine pit, observation wells, and the lake for 2-3 months following each injection. The results for all three experiments showed that the tracer arrived at certain observation wells within days of injection. Comparing all the well data showed a highly heterogeneous response, with a small number of wells showing this near-instantaneous response and others taking months before the tracer was detectable. Tracer was also found in the lake on four occasions over a one-month period, too few to infer any pattern but sufficient to confirm the connection of the two water bodies. Using a simple mass balance model it was possible to determine the effective loss rate through advection for each of the tracers and with this to estimate the through-flow rate. The through-flow rate for all three experiments was approximately 630 L/s, at least 1-2 orders of magnitude larger than previous estimates, all of which had been based on geochemical inferences or other indirect measures of the pit through-flow. PMID:19475918
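The simple mass-balance model mentioned above can be sketched as follows for a well-mixed pit; the pit volume (the mean depth is an invented figure) and the tracer concentrations are hypothetical, so the resulting rate will not match the reported 630 L/s:

```python
import math

# Assumed pit volume: 81,000 m^2 surface area x a hypothetical 5 m mean depth
V = 81000 * 5.0  # m^3

def through_flow(c0, c1, dt_s, volume=V):
    """Well-mixed mass balance dC/dt = -(Q/V) C, so Q = V * ln(c0/c1) / dt."""
    return volume * math.log(c0 / c1) / dt_s

# Hypothetical tracer decay: concentration halves over 9 days
Q = through_flow(100.0, 50.0, 9 * 86400)  # m^3/s
```

The advective loss rate k = ln(c0/c1)/dt is estimated from the tracer decay; multiplying by the pit volume converts it to a through-flow rate.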

Schladow, S Geoffrey; Clark, Jordan F



Quantifying food losses and the potential for reduction in Switzerland.  


A key element in making our food systems more efficient is the reduction of food losses across the entire food value chain. Nevertheless, food losses are often neglected. This paper quantifies food losses in Switzerland at the various stages of the food value chain (agricultural production, postharvest handling and trade, processing, food service industry, retail, and households), identifies hotspots and analyses the reasons for losses. Twenty-two food categories are modelled separately in a mass and energy flow analysis, based on data from 31 companies within the food value chain, and from public institutions, associations, and from the literature. The energy balance shows that 48% of the total calories produced (edible crop yields at harvest time and animal products, including slaughter waste) is lost across the whole food value chain. Half of these losses would be avoidable given appropriate mitigation measures. Most avoidable food losses occur at the household, processing, and agricultural production stage of the food value chain. Households are responsible for almost half of the total avoidable losses (in terms of calorific content). PMID:23270687

Beretta, Claudio; Stoessel, Franziska; Baier, Urs; Hellweg, Stefanie



Quantifying Correlations Between Allosteric Sites in Thermodynamic Ensembles  

PubMed Central

Allostery describes altered protein function at one site due to a perturbation at another site. One mechanism of allostery involves correlated motions, which can occur even in the absence of substantial conformational change. We present a novel method, “MutInf”, to identify statistically significant correlated motions from equilibrium molecular dynamics simulations. Our approach analyzes both backbone and sidechain motions using internal coordinates to account for the gear-like twists that can take place even in the absence of the large conformational changes typical of traditional allosteric proteins. We quantify correlated motions using a mutual information metric, which we extend to incorporate data from multiple short simulations and to filter out correlations that are not statistically significant. Applying our approach to uncover mechanisms of cooperative small molecule binding in human interleukin-2, we identify clusters of correlated residues from 50 ns of molecular dynamics simulations. Interestingly, two of the clusters with the strongest correlations highlight known cooperative small-molecule binding sites and show substantial correlations between these sites. These cooperative binding sites on interleukin-2 are correlated not only through the hydrophobic core of the protein but also through a dynamic polar network of hydrogen bonding and electrostatic interactions. Since this approach identifies correlated conformations in an unbiased, statistically robust manner, it should be a useful tool for finding novel or “orphan” allosteric sites in proteins of biological and therapeutic importance.
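A bare-bones version of the mutual-information idea behind MutInf can be sketched with a histogram estimator; the real method works on internal-coordinate distributions, combines multiple simulations, and filters for statistical significance, none of which is reproduced here, and the angle series below are synthetic:

```python
import numpy as np

def mutual_information(x, y, bins=24):
    """Histogram estimate of mutual information (in nats) between two series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
a = rng.uniform(-np.pi, np.pi, 5000)      # a synthetic dihedral-angle series
b = a + rng.normal(0, 0.3, 5000)          # correlated motion
c = rng.uniform(-np.pi, np.pi, 5000)      # independent motion
mi_corr, mi_indep = mutual_information(a, b), mutual_information(a, c)
```

Correlated angle series yield a large MI while independent series yield a value near the estimator's small-sample bias, which is why the published method's significance filtering matters.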

McClendon, Christopher L.; Friedland, Gregory; Mobley, David L.; Amirkhani, Homeira; Jacobson, Matthew P.



Quantifying the Spatial Dimension of Dengue Virus Epidemic Spread within a Tropical Urban Environment  

PubMed Central

Background Dengue infection spread in naive populations occurs in an explosive and widespread fashion primarily due to the absence of population herd immunity, the population dynamics and dispersal of Ae. aegypti, and the movement of individuals within the urban space. Knowledge of the relative contribution of such factors to the spatial dimension of dengue virus spread has been limited. In the present study we analyzed the spatio-temporal pattern of a large dengue virus-2 (DENV-2) outbreak that affected the Australian city of Cairns (north Queensland) in 2003, quantified the relationship between dengue transmission and distance to the epidemic's index case (IC), evaluated the effects of indoor residual spraying (IRS) on the odds of dengue infection, and generated recommendations for city-wide dengue surveillance and control. Methods and Findings We retrospectively analyzed data from 383 DENV-2 confirmed cases and 1,163 IRS applications performed during the 25-week epidemic period. Spatial (local k-function, angular wavelets) and space-time (Knox test) analyses quantified the intensity and directionality of clustering of dengue cases, whereas a semi-parametric Bayesian space-time regression assessed the impact of IRS and spatial autocorrelation on the odds of weekly dengue infection. About 63% of the cases clustered up to 800 m around the IC's house. Most cases were distributed in the NW-SE axis as a consequence of the spatial arrangement of blocks within the city and, possibly, the prevailing winds. Space-time analysis showed that DENV-2 infection spread rapidly, generating 18 clusters (comprising 65% of all cases), and that these clusters varied in extent as a function of their distance to the IC's residence. IRS applications had a significant protective effect against the further occurrence of dengue cases, but only when they reached coverage of 60% or more of the neighboring premises of a house. 
Conclusion By applying sound statistical analysis to a very detailed dataset from one of the largest outbreaks that affected the city of Cairns in recent times, we not only described the spread of dengue virus with high detail but also quantified the spatio-temporal dimension of dengue virus transmission within this complex urban environment. In areas susceptible to non-periodic dengue epidemics, effective disease prevention and control would depend on the prompt response to introduced cases. We foresee that some of the results and recommendations derived from our study may also be applicable to other areas currently affected or potentially subject to dengue epidemics.
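Of the space-time methods named above, the Knox test is the simplest to sketch: count case pairs that are close in both space and time and compare the count against a permutation null. The thresholds (800 m, 14 days) echo the spatial scale discussed in the abstract, but the coordinates, onset times, and permutation count below are invented:

```python
import numpy as np

def knox_statistic(xy, t, ds=800.0, dt=14.0):
    """Number of case pairs within ds metres AND dt days of each other."""
    close_s = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1) <= ds
    close_t = np.abs(t[:, None] - t[None, :]) <= dt
    iu = np.triu_indices(len(t), 1)        # each unordered pair counted once
    return int(np.sum(close_s[iu] & close_t[iu]))

rng = np.random.default_rng(2)
xy = rng.uniform(0, 5000, (100, 2))        # synthetic case locations (m)
t = rng.uniform(0, 175, 100)               # synthetic onset days (25 weeks)
obs = knox_statistic(xy, t)

# Permutation null: shuffle onset times relative to locations
null = [knox_statistic(xy, rng.permutation(t)) for _ in range(200)]
p = (1 + sum(v >= obs for v in null)) / (1 + len(null))
```

A small p would indicate more space-time pair clustering than expected if case locations and onset times were unrelated.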

Vazquez-Prokopec, Gonzalo M.; Kitron, Uriel; Montgomery, Brian; Horne, Peter; Ritchie, Scott A.



A numerical analysis on the applicability of the water level fluctuation method for quantifying groundwater recharge  

NASA Astrophysics Data System (ADS)

The water table fluctuation (WTF) method is a conventional method for quantifying groundwater recharge by multiplying the specific yield by the water level rise. Based on the van Genuchten model, an analytical relationship between groundwater recharge and the water level rise is derived. The equation is used to analyze the effects of the depth to water level and the soil properties on the recharge estimate obtained with the WTF method. The results show that the WTF method is reliable when applied to aquifers of fluvial sand provided the water table is below 1 m depth. However, if it is applied to silt loam with a water table depth ranging from 4 to 10 m, the recharge is overestimated by 30-80%, and the error increases drastically as the water table becomes shallower. A 2-D unconfined flow model with a time series of the recharge rate is developed. It is used to elucidate the errors of the WTF method, which is implicitly based on a tank model in which horizontal flow in the saturated zone is ignored. Simulations show that the recharge estimated by the WTF method is underestimated for observation wells near the discharge boundary. This is because the hydraulic stress resulting from the recharge dissipates rapidly through horizontal flow near the discharge boundary. Simulations also reveal that the recharge is increasingly underestimated as the hydraulic conductivity and the recharge duration increase and as the specific yield decreases.
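The baseline WTF estimate that the analysis above starts from is a one-line calculation; the specific yield and water level rise used here are invented for illustration:

```python
def wtf_recharge(specific_yield, water_level_rise_m):
    """Water table fluctuation method: recharge R = Sy * dh (m of water)."""
    return specific_yield * water_level_rise_m

R = wtf_recharge(0.12, 0.35)  # e.g. Sy = 0.12 and a 0.35 m water level rise
```

The paper's point is that this product systematically over- or underestimates the true recharge depending on soil type, water table depth, and proximity to a discharge boundary.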

Koo, M.; Lee, D.



Quantifying Spatial Variability of Selected Soil Trace Elements and Their Scaling Relationships Using Multifractal Techniques  

PubMed Central

Multifractal techniques were utilized to quantify the spatial variability of selected soil trace elements and their scaling relationships in a 10.24-ha agricultural field in northeast China. 1024 soil samples were collected from the field and available Fe, Mn, Cu and Zn were measured in each sample. Descriptive results showed that Mn deficiencies were widespread throughout the field while Fe and Zn deficiencies tended to occur in patches. By estimating single multifractal spectra, we found that available Fe, Cu and Zn in the study soils exhibited high spatial variability and the existence of anomalies ([α(q)max − α(q)min] ≥ 0.54), whereas available Mn had a relatively uniform distribution ([α(q)max − α(q)min] ≤ 0.10). The joint multifractal spectra revealed that the strong positive relationships (r ≥ 0.86, P < 0.001) among available Fe, Cu and Zn were all valid across a wider range of scales and over the full range of data values, whereas available Mn was weakly related to available Fe and Zn (r ≤ 0.18, P < 0.01) but not related to available Cu (r = −0.03, P = 0.40). These results show that the variability and singularities of selected soil trace elements as well as their scaling relationships can be characterized by single and joint multifractal parameters. The findings presented in this study could be extended to predict selected soil trace elements at larger regional scales with the aid of geographic information systems.

Zhang, Fasheng; Yin, Guanghua; Wang, Zhenying; McLaughlin, Neil; Geng, Xiaoyuan; Liu, Zuoxin



Capturing and quantifying the exocytotic event.  


Although exocytosis is now known to be the universal method by which proteins are released from eukaryotic cells, we know surprisingly little of the mechanism by which exocytosis occurs. One reason for this is that it has proved difficult to capture a sufficient number of these evanescent events to permit their study. The difficulty with which exocytoses can be visualized with standard preparative techniques varies among tissues, but the problem is particularly apparent in the mammalian nervous system. Tannic acid has recently been introduced as an agent by which exocytosed granule cores can be captured and visualized electron-microscopically. Application of tannic acid to the magnocellular neurosecretory system reveals exocytoses from all parts of the terminal arborization within the neural lobe, and also from the dendrites within the hypothalamus. Quantification of the exocytoses in unstimulated tissue and in tissue stimulated by a variety of exogenous and endogenous mechanisms indicates: (a) that exocytosis occurs equally from each unit of membrane of the perivascular nerve endings, and of the axonal swellings that were previously thought to be sites of granule storage, rather than release; (b) that, in the nerve endings, a greater proportion of the stored granules are exocytosed, and thus the endings are specialized for release not by any particular property of their membrane, but by a high surface membrane:volume ratio. Together, the data cast doubt on the hypothesis that exocytosis occurs only at some functionally specialized sites at certain loci in the membrane. Rather, the data favour the hypothesis that magnocellular granules can fuse with any part of the membrane, depending on constraints imposed by the cytoskeleton, and a local increase in cytosolic free calcium level. 
When applied to hypothalamic central nervous tissue, tannic acid reveals that exocytosis of dense-cored synaptic vesicles occurs preferentially, but not exclusively, at the membrane apposed to the postsynaptic element. However, about half of all exocytoses from synaptic boutons occur at bouton membrane unrelated to the synaptic cleft. In all tissues studied, tannic acid reveals a heterogeneity among secretory cells in the extent of exocytosis that occurs in response to stimulation, and permits an analysis of the degree to which secretion is polarized in any one direction. These results question long-held assumptions concerning the site at which neurones release transmitters and modulators. Tannic acid seems likely to prove a potent tool in the investigation of both the mechanism of exocytosis and the ways in which different types of cells adapt the process to their physiological roles. PMID:3062123

Morris, J F; Pow, D V



Quantifying Regional Measurement Requirements for ASCENDS  

NASA Astrophysics Data System (ADS)

Quantification of greenhouse gas fluxes at regional and local scales is required by the Kyoto protocol and potential follow-up agreements, and their accompanying implementation mechanisms (e.g., cap-and-trade schemes and treaty verification protocols). Dedicated satellite observations, such as those provided by the Greenhouse gases Observing Satellite (GOSAT), the upcoming Orbiting Carbon Observatory (OCO-2), and future active missions, particularly Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and Advanced Space Carbon and Climate Observation of Planet Earth (A-SCOPE), are poised to play a central role in this endeavor. In order to prepare for the ASCENDS mission, we are applying the Stochastic Time-Inverted Lagrangian Transport (STILT) model driven by meteorological fields from a customized version of the Weather Research and Forecasting (WRF) model to generate surface influence functions for ASCENDS observations. These "footprints" (or adjoint) express the sensitivity of observations to surface fluxes in the upwind source regions and thus enable the computation of a posteriori flux error reductions resulting from the inclusion of satellite observations (taking into account the vertical sensitivity and error characteristics of the latter). The overarching objective of this project is the specification of the measurement requirements for the ASCENDS mission, with a focus on policy-relevant regional scales. Several features make WRF-STILT an attractive tool for regional analysis of satellite observations: 1) WRF meteorology is available at higher resolution than for global models and is thus more realistic, 2) The Lagrangian approach minimizes numerical diffusion present in Eulerian models, 3) The WRF-STILT coupling has been specifically designed to achieve good mass conservation characteristics, and 4) The receptor-oriented approach offers a relatively straightforward way to compute the adjoint of the transport model. 
These aspects allow the model to compute surface influences for satellite observations at high spatiotemporal resolution and to generate realistic flux error and flux estimates at policy-relevant scales. The main drawbacks of the Lagrangian approach to satellite simulations are inefficiency and storage requirements, but these obstacles can be overcome by taking advantage of modern computing resources (the current runs are being performed on the NASA Pleiades supercomputer). We gratefully acknowledge funding by the NASA Atmospheric CO2 Observations from Space Program (grant NNX10AT87G).

Mountain, M. E.; Eluszkiewicz, J.; Nehrkorn, T.; Hegarty, J. D.; Aschbrenner, R.; Henderson, J.; Zaccheo, S.



Isotopes in Urban Cheatgrass Quantify Atmospheric Pollution  

NASA Astrophysics Data System (ADS)

This study presents evidence that the nitrogen and carbon stable isotope values of vegetation can be used as integrators of ephemeral atmospheric pollution signals. Leaves and stems of Bromus tectorum and soil samples were collected in the urban Salt Lake Valley and in the rural Skull Valley of Utah. These samples were used to develop a map of the spatial distribution of δ13C and δ15N values of leaves and stems of Bromus tectorum and soils around each valley. The spatial distribution of δ15N values of leaves and stems of Bromus tectorum and associated soils were significantly correlated. The average δ15N value for Salt Lake Valley Bromus tectorum leaves and stems was 2.37‰ while the average value for Skull Valley Bromus tectorum leaves and stems was 4.76‰. It is possible that the higher concentration of atmospheric nitrogen pollutants measured in the Salt Lake Valley provided the δ15N-depleted nitrogen source for uptake by plants and deposition on soils, though the δ15N value of source nitrogen was not measured directly. The presence of a seasonal difference in δ15N values of leaves and stems of Bromus tectorum sampled in Salt Lake Valley but not in Skull Valley further supports this idea. Leaves and stems of Bromus tectorum sampled in the Salt Lake Valley in April 2003 had a statistically more positive average δ15N value of 2.4‰ than samples collected in August 2003, which had an average δ15N value of 0.90‰. The carbon isotope values of leaves and stems of Bromus tectorum and air samples collected in Salt Lake Valley were more negative than values measured in Skull Valley samples (Salt Lake δ13Cplant = -28.50‰ and δ13Cair = -9.32‰; Skull Valley δ13Cplant = -27.58‰ and δ13Cair = -8.52‰). This supports the idea that differences in stable isotope values of source air are correlated with differences in stable isotope values of exposed vegetation. 
Overall, the results of this study suggest that the carbon and nitrogen stable isotope values measured in vegetation are useful indicators of differences in atmospheric pollutant concentration in urban and rural areas.
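The source-attribution reasoning above is often formalized as a two-end-member isotope mixing model. The sketch below uses the reported urban leaf-and-stem average with invented end-member values (the abstract notes the source nitrogen isotope value was not measured directly), so the fraction is purely illustrative:

```python
def source_fraction(delta_sample, delta_a, delta_b):
    """Two-end-member mixing: fraction of source A in a sample's isotope value."""
    return (delta_sample - delta_b) / (delta_a - delta_b)

# Hypothetical end members: depleted urban pollution nitrogen (0.0 permil)
# vs a rural background (4.76 permil, the Skull Valley plant average).
f_urban = source_fraction(2.37, 0.0, 4.76)
```

Under these assumed end members, roughly half of the urban plant nitrogen would derive from the depleted (pollution) source.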

Kammerdiener, S. A.; Ehleringer, J. R.



Quantifying Mixing using Magnetic Resonance Imaging  

PubMed Central

Mixing is a unit operation that combines two or more components into a homogeneous mixture. This work involves mixing two viscous liquid streams using an in-line static mixer. The mixer is a split-and-recombine design that employs shear and extensional flow to increase the interfacial contact between the components. A prototype split-and-recombine (SAR) mixer was constructed by aligning a series of thin laser-cut poly(methyl methacrylate) (PMMA) plates held in place in a PVC pipe. Mixing in this device is illustrated in the photograph in Fig. 1. Red dye was added to a portion of the test fluid and used as the minor component being mixed into the major (undyed) component. At the inlet of the mixer, the injected layer of tracer fluid is split into two layers as it flows through the mixing section. With each subsequent mixing section, the number of horizontal layers is doubled. Ultimately, the single stream of dye is uniformly dispersed throughout the cross section of the device. Using a non-Newtonian test fluid of 0.2% Carbopol and a doped tracer fluid of similar composition, mixing in the unit is visualized using magnetic resonance imaging (MRI). MRI is a very powerful experimental probe of molecular chemical and physical environment as well as sample structure on length scales from microns to centimeters. This sensitivity has resulted in broad application of these techniques to characterize physical, chemical and/or biological properties of materials ranging from humans to foods to porous media [1, 2]. The equipment and conditions used here are suitable for imaging liquids containing substantial amounts of NMR-mobile 1H such as ordinary water and organic liquids including oils. Traditionally, MRI has utilized superconducting magnets, which are not suitable for industrial environments and are not portable within a laboratory (Fig. 2). 
Recent advances in magnet technology have permitted the construction of large volume industrially compatible magnets suitable for imaging process flows. Here, MRI provides spatially resolved component concentrations at different axial locations during the mixing process. This work documents real-time mixing of highly viscous fluids via distributive mixing with an application to personal care products.
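The layer-doubling behaviour described above (each mixing section doubles the number of horizontal layers) implies exponential growth in interfacial contact, which a few lines make concrete; the channel width is an invented figure, not a dimension from the prototype:

```python
def layers_after(n_sections, initial_layers=2):
    """A split-and-recombine section doubles the horizontal layer count."""
    return initial_layers * 2 ** n_sections

def striation_thickness(n_sections, channel_mm=10.0):
    """Mean layer (striation) thickness for a hypothetical 10 mm channel."""
    return channel_mm / layers_after(n_sections)
```

After only a handful of sections the striations become thin enough for diffusion to finish homogenizing the mixture.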

Tozzi, Emilio J.; McCarthy, Kathryn L.; Bacca, Lori A.; Hartt, William H.; McCarthy, Michael J.



Quantifying Phycocyanin Concentration in Cyanobacterial Algal Blooms from Remote Sensing Reflectance-A Quasi Analytical Approach  

NASA Astrophysics Data System (ADS)

Cyanobacterial harmful algal blooms (CHAB) are notorious for depleting dissolved oxygen levels, producing various toxins, threatening aquatic life, and altering the food-web dynamics and the overall ecosystem functioning in inland lakes, estuaries, and coastal waters. Many of these toxins can damage cells and tissues and even cause mortality of living organisms. Frequent monitoring of water quality at a synoptic scale has been made possible by remote sensing techniques. In this research, we present a novel technique to monitor CHAB using remote sensing reflectance products. We have modified a multi-band quasi-analytical algorithm that determines phytoplankton absorption coefficients from above-surface remote sensing reflectance measurements using an inversion method. In situ hyperspectral remote sensing reflectance data were collected from several highly turbid and productive aquaculture ponds. A novel technique was developed to further decompose the phytoplankton absorption coefficients at 620 nm and obtain the phycocyanin absorption coefficient at the same wavelength. An empirical relationship was established between phycocyanin absorption coefficients at 620 nm and measured phycocyanin concentrations. Model calibration showed a strong relationship between phycocyanin absorption coefficients and phycocyanin pigment concentration (r2=0.94). Validation of the model on a separate dataset produced a root mean squared error of 167 mg m-3 (phycocyanin range: 26-1012 mg m-3). Results demonstrate that the new approach will be suitable for quantifying phycocyanin concentration in cyanobacteria-dominated turbid productive waters. The band architecture of the model matches the band configuration of the Medium Resolution Imaging Spectrometer (MERIS), indicating that MERIS reflectance products can be used to quantify phycocyanin in cyanobacterial harmful algal blooms in optically complex waters.
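The final calibration step described above, an empirical relationship between the decomposed phycocyanin absorption coefficient and measured pigment concentration, amounts to a regression plus error statistics. The numbers below are synthetic stand-ins, not the study's data:

```python
import numpy as np

# Synthetic calibration set: a_pc(620) in m^-1 vs phycocyanin in mg m^-3
a_pc = np.array([0.5, 1.2, 2.0, 3.1, 4.4, 5.8])
pc = np.array([40.0, 110.0, 190.0, 300.0, 430.0, 560.0])

slope, intercept = np.polyfit(a_pc, pc, 1)   # empirical linear calibration
pred = slope * a_pc + intercept
rmse = float(np.sqrt(np.mean((pred - pc) ** 2)))
r2 = float(1 - np.sum((pc - pred) ** 2) / np.sum((pc - pc.mean()) ** 2))
```

The study reports the analogous statistics (r2 = 0.94 in calibration, RMSE = 167 mg m-3 in validation) for its real dataset.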

Mishra, S.; Mishra, D. R.; Tucker, C.



Time and frequency domain methods for quantifying common modulation of motor unit firing patterns  

PubMed Central

Background In investigations of the human motor system, two approaches are generally employed toward the identification of common modulating drives from motor unit recordings. One is a frequency domain method and uses the coherence function to determine the degree of linear correlation between each frequency component of the signals. The other is a time domain method that has been developed to determine the strength of low frequency common modulations between motor unit spike trains, often referred to in the literature as 'common drive'. Methods The relationships between these methods are systematically explored using both mathematical and experimental procedures. A mathematical derivation is presented that shows the theoretical relationship between both time and frequency domain techniques. Multiple recordings from concurrent activities of pairs of motor units are studied and linear regressions are performed between time and frequency domain estimates (for different time domain window sizes) to assess their equivalence. Results Analytically, it may be demonstrated that under the theoretical condition of a narrowband point frequency, the two relations are equivalent. However practical situations deviate from this ideal condition. The correlation between the two techniques varies with time domain moving average window length and for window lengths of 200 ms, 400 ms and 800 ms, the r2 regression statistics (p < 0.05) are 0.56, 0.81 and 0.80 respectively. Conclusions Although theoretically equivalent and experimentally well correlated there are a number of minor discrepancies between the two techniques that are explored. The time domain technique is preferred for short data segments and is better able to quantify the strength of a broad band drive into a single index. The frequency domain measures are more encompassing, providing a complete description of all oscillatory inputs and are better suited to quantifying narrow ranges of descending input into a single index. 
In general, the physiological question at hand should dictate which technique is best suited.
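The two families of measures compared above can be sketched on surrogate signals sharing a 1 Hz common drive: coherence resolves the correlation by frequency, while the time-domain approach correlates moving-averaged (here 400 ms) activities. The signals, sampling rate, and noise levels below are invented, and smoothed continuous signals stand in for motor unit spike trains:

```python
import numpy as np
from scipy.signal import coherence

fs = 100.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(1)
drive = np.sin(2 * np.pi * 1.0 * t)              # shared low-frequency drive
mu1 = drive + rng.normal(0, 1.0, t.size)         # surrogate firing-rate signals
mu2 = drive + rng.normal(0, 1.0, t.size)

# Frequency domain: coherence spectrum between the two signals
f, Cxy = coherence(mu1, mu2, fs=fs, nperseg=512)

# Time domain: correlate 400 ms moving averages ("common drive" index)
win = int(0.4 * fs)
kernel = np.ones(win) / win
s1 = np.convolve(mu1, kernel, mode="valid")
s2 = np.convolve(mu2, kernel, mode="valid")
common_drive = float(np.corrcoef(s1, s2)[0, 1])
```

Both measures detect the shared drive; the coherence spectrum additionally localizes it at 1 Hz, matching the paper's point that the frequency-domain view is the more complete description.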

Myers, Lance J; Erim, Zeynep; Lowery, Madeleine M



Quantifying metal ions binding onto dissolved organic matter using log-transformed absorbance spectra.  


This study introduces the concept of consistent examination of changes of log-transformed absorbance spectra of dissolved organic matter (DOM) at incrementally increasing concentrations of heavy metal cations such as copper, cadmium, and aluminum at environmentally relevant concentrations. The approach is designed to highlight contributions of low-intensity absorbance features that appear to be especially sensitive to DOM reactions. In accord with this approach, log-transformed absorbance spectra of fractions of DOM from the Suwannee River were acquired at varying pHs and concentrations of copper, cadmium, and aluminum. These log-transformed spectra were processed using the differential approach and used to examine the nature of the observed changes of DOM absorbance and correlate them with the extent of Me-DOM complexation. Two alternative parameters, namely the change of the spectral slope in the range of wavelengths 325-375 nm (DSlope325-375) and the differential logarithm of DOM absorbance at 350 nm (DLnA350), were introduced to quantify Cu(II), Cd(II), and Al(III) binding onto DOM. DLnA350 and DSlope325-375 datasets were compared with the amount of DOM-bound Cu(II), Cd(II), and Al(III) estimated based on NICA-Donnan model calculations. This examination showed that the DLnA350 and DSlope325-375 values acquired at various pH values, metal ion concentrations, and DOM types were strongly and unambiguously correlated with the concentration of DOM-bound metal ions. The experimental results and their interpretation indicate that the introduced DSlope325-375 and DLnA350 parameters are predictive and can be used to quantify in situ metal ion interactions with DOM. The presented approach can be used to gain more information about DOM-metal interactions and for further optimization of existing formal models of metal-DOM complexation. PMID:23490103
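The two parameters introduced above reduce to simple operations on log-transformed spectra: a difference of log absorbance at 350 nm, and a difference of fitted log-spectral slopes over 325-375 nm. The exponential spectra below are synthetic (DOM absorbance is roughly exponential in this range), not measured data, and the metal-induced shift is invented:

```python
import numpy as np

wl = np.arange(300, 401)                       # wavelength grid, nm
a_ref = np.exp(-0.015 * (wl - 300))            # DOM alone (synthetic)
a_me = 1.02 * np.exp(-0.017 * (wl - 300))      # DOM + metal (synthetic shift)
ln_ref, ln_me = np.log(a_ref), np.log(a_me)

# DLnA350: change of the log absorbance at 350 nm upon metal addition
dlna350 = float(ln_me[wl == 350][0] - ln_ref[wl == 350][0])

# DSlope325-375: change of the fitted spectral slope over 325-375 nm
band = (wl >= 325) & (wl <= 375)
slope_ref = np.polyfit(wl[band], ln_ref[band], 1)[0]
slope_me = np.polyfit(wl[band], ln_me[band], 1)[0]
dslope = float(slope_me - slope_ref)
```

In the study, such differential quantities (rather than raw absorbances) are what correlate with the NICA-Donnan estimates of bound metal.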

Yan, Mingquan; Wang, Dongsheng; Korshin, Gregory V; Benedetti, Marc F



Quantifying the kinetic stability of hyperstable proteins via time-dependent SDS trapping.  


Globular proteins are usually in equilibrium with unfolded conformations, whereas kinetically stable proteins (KSPs) are conformationally trapped by their high unfolding transition state energy. Kinetic stability (KS) could allow proteins to maintain their activity under harsh conditions, increase a protein's half-life, or protect against misfolding-aggregation. Here we show the development of a simple method for quantifying a protein's KS that involves incubating a protein in SDS at high temperature as a function of time, running the unheated samples on SDS-PAGE, and quantifying the bands to determine the time-dependent loss of a protein's SDS resistance. Six diverse proteins, including two monomers, two dimers, and two tetramers, were studied by this method, and the kinetics of the loss of SDS resistance correlated linearly with their unfolding rate determined by circular dichroism. These results imply that the mechanism by which SDS denatures proteins involves conformational trapping, with a trapping rate that is determined and limited by the rate of protein unfolding. We applied the SDS trapping of proteins (S-TraP) method to superoxide dismutase (SOD) and transthyretin (TTR), which are highly kinetically stable proteins whose native unfolding rates are difficult to measure by conventional spectroscopic methods. S-TraP experiments between 75 and 90 °C combined with Eyring plot analysis yielded an unfolding half-life of 70 ± 37 and 18 ± 6 days at 37 °C for SOD and TTR, respectively. The S-TraP method shown here is extremely accessible, sample-efficient, cost-effective, compatible with impure or complex samples, and will be useful for exploring the biological and pathological roles of kinetic stability. PMID:22106876
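The Eyring-plot extrapolation mentioned above fits ln(k/T) against 1/T at the high temperatures where unfolding is measurable, then evaluates the fitted line at 37 °C. The rate constants below are invented placeholders, so the resulting half-life is illustrative only and does not reproduce the reported values:

```python
import math

# Hypothetical unfolding rates (s^-1) from SDS trapping at 75-90 C
data = {348.15: 1.0e-4, 353.15: 3.0e-4, 358.15: 9.0e-4, 363.15: 2.6e-3}

# Eyring: ln(k/T) = ln(kB/h) + dS/R - dH/(R*T), i.e. linear in 1/T
xs = [1.0 / T for T in data]
ys = [math.log(k / T) for T, k in data.items()]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
    (x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

T37 = 310.15
k37 = T37 * math.exp(intercept + slope / T37)   # extrapolated rate at 37 C
half_life_days = math.log(2) / k37 / 86400
```

A week-scale measurement window at high temperature can thus resolve half-lives of months to years at physiological temperature, which is the practical appeal of the approach.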

Xia, Ke; Zhang, Songjie; Bathrick, Brendan; Liu, Shuangqi; Garcia, Yeidaliz; Colón, Wilfredo



Wireless accelerometer iPod application for quantifying gait characteristics.  


The capability to quantify gait characteristics through a wireless accelerometer iPod application in an effectively autonomous environment may alleviate the progressive strain on highly specialized medical resources. The iPod possesses the attributes required for robust gait quantification, such as a three-dimensional accelerometer, data storage, flexible software, and the capacity for wireless transmission of gait data through email. Building on these integrated components, a wireless accelerometer iPod application for quantifying gait characteristics was tested and evaluated in an essentially autonomous environment. The quantified gait acceleration waveforms were wirelessly transmitted by email for postprocessing. The gait experiment was conducted at a location remote from the site where the postprocessing was performed. The wireless accelerometer iPod application for quantifying gait characteristics demonstrated sufficient accuracy and consistency. PMID:22256173

LeMoyne, Robert; Mastroianni, Timothy; Grundfest, Warren
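
The kind of postprocessing such an application enables can be sketched simply: compute the acceleration-magnitude waveform from the three axes and estimate stride times by threshold crossing. The synthetic signal, sampling rate, and threshold below are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def stride_times(ax, ay, az, fs, threshold=1.2):
    """Estimate stride times (s) from a 3-axis accelerometer trace by
    detecting upward crossings of an acceleration-magnitude threshold."""
    mag = np.sqrt(np.asarray(ax) ** 2 + np.asarray(ay) ** 2 + np.asarray(az) ** 2)
    above = mag > threshold
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1  # rising edges
    return np.diff(crossings) / fs

# synthetic 1 Hz gait oscillation riding on gravity, sampled at 100 Hz
fs = 100
t = np.arange(0, 5, 1 / fs)
az = 1.0 + 0.5 * np.sin(2 * np.pi * 1.0 * t)
print(stride_times(np.zeros_like(t), np.zeros_like(t), az, fs))
```

For the synthetic 1 Hz signal the estimated stride times should cluster at 1.0 s; real gait data would need filtering before thresholding.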



Understanding and quantifying urban forest structure, functions, and ...  


Unfortunately, there is relatively little data about the structure, health, functions, ... This article was written and prepared by U.S. Government employees on official ... Understanding and quantifying urban forest structure, functions, and value.


Quantifying the Complex Hydrologic Response of a Desert Ephemeral Wash.  

National Technical Information Service (NTIS)

The objective of this research is to increase current understanding of drylands hydrology by quantifying the hydrologic response of two geomorphic surfaces in an ephemeral wash to seasonal precipitation inputs. Specifically, the aim is to understand how w...

J. A. Ramirez S. Howe



Quantifying the Reinforcement Value of Verbal Items on College Students  

ERIC Educational Resources Information Center

The purpose of the current investigation was to quantify the quality of various verbal items on a college population that was voluntarily involved in remedial training in the basic academic skills. (Author)

Hafner, James L.



Describing and Quantifying Asthma Comorbidity: A Population Study  

PubMed Central

Background Asthma comorbidity has been correlated with poor asthma control, increased health services use, and decreased quality of life. Managing it improves these outcomes. Little is known about the amount of different types of comorbidity associated with asthma and how they vary by age. Methodology/Principal Findings The authors conducted a population study using health administrative data on all individuals living in Ontario, Canada (population 12 million). Types of asthma comorbidity were quantified by comparing physician health care claims between individuals with and without asthma in each of 14 major disease categories; results were adjusted for demographic factors and other comorbidity and stratified by age. Compared to those without asthma, individuals with asthma had higher rates of comorbidity in most major disease categories. Most notably, they had about fifty percent or more physician health care claims for respiratory disease (other than asthma) in all age groups; psychiatric disorders in individuals age four and under and age 18 to 44; perinatal disorders in individuals 17 years and under, and metabolic and immunity, and hematologic disorders in children four years and under. Conclusion/Significance Asthma appears to be associated with significant rates of various types of comorbidity that vary according to age. These results can be used to develop strategies to recognize and address asthma comorbidity to improve the overall health of individuals with asthma.

Gershon, Andrea S.; Guan, Jun; Wang, Chengning; Victor, J. Charles; To, Teresa




SciTech Connect

Depending on the invasive nature of waste management activities, excessive concentrations of mists, vapors, gases, dusts or fumes may be present, creating hazards to the employee from either inhalation into the lungs or absorption through the skin. To address these hazards, similar exposure groups and an exposure profile result consisting of: (1) a hazard index (concentration); (2) an exposure rating (monitoring results or exposure probabilities); and (3) a frequency rating (hours of potential exposure per week) are used to assign an exposure risk rating (ERR). The ERR determines whether the potential hazards pose significant risks to employees, linking potential exposure and breathing zone (BZ) monitoring requirements. Three case studies consisting of: (1) a hazard-task approach; (2) a hazard-job classification-task approach; and (3) a hazard approach demonstrate how to conduct exposure assessments using this methodology. Environment, safety and health professionals can then categorize levels of risk and evaluate the need for BZ monitoring, thereby quantifying employee exposure levels accurately.

Thompson, Aaron L.; Hylko, James M.
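
The three-rating scheme can be illustrated with a toy calculation. The multiplicative combination, the small integer scales, and the cutoffs below are illustrative assumptions, not the authors' actual scoring tables.

```python
def exposure_risk_rating(hazard_index, exposure_rating, frequency_rating):
    """Combine a hazard index, an exposure rating, and a frequency rating
    into an exposure risk rating (ERR). Scales and cutoffs are illustrative."""
    err = hazard_index * exposure_rating * frequency_rating
    if err >= 27:
        category = "high: breathing-zone monitoring required"
    elif err >= 9:
        category = "moderate: periodic monitoring"
    else:
        category = "low: no routine monitoring"
    return err, category

# e.g. high hazard (4), confirmed exposure (3), 20+ hours/week (3)
print(exposure_risk_rating(4, 3, 3))
```

The point of any such combination is the same as in the abstract: the ERR, not any single rating, decides whether BZ monitoring is warranted.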



Decomposable graphs and definitions with no quantifier alternation  

Microsoft Academic Search

Let D(G) be the minimum quantifier depth of a first-order sentence φ that defines a graph G up to isomorphism. Let D0(G) be the version of D(G) where we do not allow quantifier alternations in φ. Define q0(n) to be the minimum of D0(G) over all graphs G of order n. We prove that for all n we have

Oleg Pikhurko; Joel Spencer; Oleg Verbitsky



A diagnostic approach to quantifying the stress sensitivity of permeability  

Microsoft Academic Search

The sensitivity of core intergranular permeability to applied hydrostatic stress has been quantified through third-order polynomial fits to data from three loosely-to-moderately-consolidated sandstone formations from the same geological system. The derivatives of the fitted polynomials have allowed diverse permeability vs. stress behaviours to be identified and grouped as “stress facies”. This grouping is quantified using a composite stress-sensitivity parameter based

Paul F. Worthington
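
The fitting step can be sketched in a few lines: fit a third-order polynomial to permeability-stress data and differentiate it to obtain the local stress sensitivity. The data below are hypothetical, not from the cited formations.

```python
import numpy as np

# hypothetical core permeability (mD) vs applied hydrostatic stress (MPa)
stress = np.array([5, 10, 20, 30, 40, 50], float)
perm = np.array([120, 105, 88, 78, 72, 69], float)

coeffs = np.polyfit(stress, perm, 3)       # third-order polynomial fit
dk_ds = np.polyder(coeffs)                 # derivative: stress sensitivity
sensitivity = np.polyval(dk_ds, stress)    # dk/dsigma at each measured stress
print(sensitivity)
```

Grouping cores into "stress facies" then amounts to classifying the shapes of these derivative curves rather than the raw permeability values.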



New methods to quantify the cracking performance of cementitious systems made with internal curing  

NASA Astrophysics Data System (ADS)

The use of high performance concretes that utilize low water-cement ratios has been promoted for use in infrastructure based on their potential to increase durability and service life because they are stronger and less porous. Unfortunately, these benefits are not always realized due to the susceptibility of high performance concrete to undergo early age cracking caused by shrinkage. This problem is widespread and affects federal, state, and local budgets that must repair or replace infrastructure deteriorated by cracking. As a result, methods to reduce or eliminate early age shrinkage cracking have been investigated. Internal curing is one such method in which a prewetted lightweight sand is incorporated into the concrete mixture to provide internal water as the concrete cures. This action can significantly reduce or eliminate shrinkage and in some cases causes a beneficial early age expansion. Standard laboratory tests have been developed to quantify the shrinkage cracking potential of concrete. Unfortunately, many of these tests may not be appropriate for use with internally cured mixtures and only provide limited amounts of information. Most standard tests are not designed to capture the expansive behavior of internally cured mixtures. This thesis describes the design and implementation of two new testing devices that overcome the limitations of current standards. The first device discussed in this thesis is called the dual ring. The dual ring is a testing device that quantifies the early age restrained shrinkage performance of cementitious mixtures. The design of the dual ring is based on the current ASTM C 1581-04 standard test which utilizes one steel ring to restrain a cementitious specimen. The dual ring overcomes two important limitations of the standard test. First, the standard single ring test cannot restrain the expansion that takes place at early ages, which is not representative of field conditions. 
The dual ring incorporates a second restraining ring which is located outside of the sample to provide restraint against expansion. Second, the standard ring test is a passive test that only relies on the autogenous and drying shrinkage of the mixture to induce cracking. The dual ring test can be an active test because it has the ability to vary the temperature of the specimen in order to induce thermal stress and produce cracking. This ability enables the study of the restrained cracking capacity as the mixture ages in order to quantify crack sensitive periods of time. Measurements made with the dual ring quantify the benefits from using larger amounts of internal curing. Mixtures that resupplied internal curing water to match that of chemical shrinkage could sustain three times the magnitude of thermal change before cracking. The second device discussed in this thesis is a large scale slab testing device. This device tests the cracking potential of 15' long by 4" thick by 24" wide slab specimens in an environmentally controlled chamber. The current standard testing devices can be considered small scale and encounter problems when linking their results to the field due to size effects. Therefore, the large scale slab testing device was developed in order to calibrate the results of smaller scale tests to real world field conditions such as a pavement or bridge deck. Measurements made with the large scale testing device showed that the cracking propensity of the internally cured mixtures was reduced and that a significant benefit could be realized.

Schlitter, John L.


Analysis of heart rate variability signal during meditation using deterministic-chaotic quantifiers.  


This study investigated the level of chaos and the existence of fractal patterns in the heart rate variability (HRV) signal prior to meditation and during meditation using two quantifiers adapted from non-linear dynamics and deterministic chaos theory: (1) component central tendency measures (CCTMs) and (2) Higuchi fractal dimension (HFD). CCTM quantifies degree of variability/chaos in the specified quadrant of the second-order difference plot for HRV time series, while HFD quantifies dimensional complexity of the HRV series. Both the quantifiers yielded excellent results in discriminating the different psychophysiological states. The study found (1) significantly higher first quadrant CCTM values and (2) significantly lower HFD values during meditation state compared to pre-meditation state. Both of these can be attributed to the respiratory-modulated oscillations shifting to the lower frequency region by parasympathetic tone during meditation. It is thought that these quantifiers are most promising in providing new insight into the evolution of complexity of underlying dynamics in different physiological states. PMID:24044586

Kamath, Chandrakar
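
Of the two quantifiers, the Higuchi fractal dimension has a compact standard formulation. The sketch below follows the usual algorithm (not the authors' code): curve lengths L(k) are computed at coarser and coarser lags k, and the slope of log L(k) vs log(1/k) estimates the dimension, which is near 1 for a smooth signal and near 2 for white noise.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D time series."""
    x = np.asarray(x, float)
    N = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                       # k offset subsequences
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            d = np.abs(np.diff(x[idx])).sum()    # coarse curve length at lag k
            norm = (N - 1) / ((len(idx) - 1) * k)  # Higuchi normalization
            lengths.append(d * norm / k)
        lk.append(np.mean(lengths))
    # slope of log L(k) vs log(1/k) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lk), 1)
    return slope
```

A quick sanity check on synthetic data: a sine wave should score near 1, Gaussian white noise near 2, matching the direction of the HRV result (lower HFD for the more regular meditation-state signal).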



Clathrin triskelia show evidence of molecular flexibility.  


The clathrin triskelion, which is a three-legged pinwheel-shaped heteropolymer, is a major component in the protein coats of certain post-Golgi and endocytic vesicles. At low pH, or at physiological pH in the presence of assembly proteins, triskelia will self-assemble to form a closed clathrin cage, or "basket". Recent static light scattering and dynamic light scattering studies of triskelia in solution showed that an individual triskelion has an intrinsic pucker similar to, but differing from, that inferred from a high resolution cryoEM structure of a triskelion in a clathrin basket. We extend the earlier solution studies by performing small-angle neutron scattering (SANS) experiments on isolated triskelia, allowing us to examine a higher q range than that probed by static light scattering. Results of the SANS measurements are consistent with the light scattering measurements, but show a shoulder in the scattering function at intermediate q values (0.016 Å⁻¹), just beyond the Guinier regime. This feature can be accounted for by Brownian dynamics simulations based on flexible bead-spring models of a triskelion, which generate time-averaged scattering functions. Calculated scattering profiles are in good agreement with the experimental SANS profiles when the persistence length of the assumed semiflexible triskelion is close to that previously estimated from the analysis of electron micrographs. PMID:18502808

Ferguson, Matthew L; Prasad, Kondury; Boukari, Hacene; Sackett, Dan L; Krueger, Susan; Lafer, Eileen M; Nossal, Ralph



Hospital survey shows improvements in patient experience.  


Hospitals are improving the inpatient care experience. A government survey that measures patients' experiences with a range of issues from staff responsiveness to hospital cleanliness (the Hospital Consumer Assessment of Healthcare Providers and Systems survey) is showing modest but meaningful gains. Using data from the surveys reported in March 2008 and March 2009, we present the first comprehensive national assessment of changes in patients' experiences with inpatient care since public reporting of the results began. We found improvements in all measures of patient experience, except doctors' communication. These improvements were fairly uniform across hospitals. The largest increases were in measures related to staff responsiveness and the discharge information that patients received. PMID:21041749

Elliott, Marc N; Lehrman, William G; Goldstein, Elizabeth H; Giordano, Laura A; Beckett, Megan K; Cohea, Christopher W; Cleary, Paul D



A method for quantifying blood flow distribution among the alveoli of the lung.  


This article describes a method for quantifying blood flow distribution among lung alveoli. Our method is based on analysis of trapping patterns of small diameter (4 μm) fluorescent latex particles infused into lung capillaries. Trapping patterns are imaged using confocal microscopy, and the images are analyzed statistically using SAS subroutines. The resulting plots provide a quantifiable way of assessing interalveolar perfusion distribution in a way that has not previously been possible. Methods for using this technique are described, and the SAS routines are included. This technique can be an important tool for learning how this critical vascular bed performs in health and disease. PMID:24052359

Conhaim, Robert L; Heisey, Dennis M; Leverson, Glen E



Quantifying Pollutant Emissions from Office Equipment: Phase I Report  

SciTech Connect

Although office equipment has been a focal point for governmental efforts to promote energy efficiency through programs such as Energy Star, little is known about the relationship between office equipment use and indoor air quality. This report provides results of the first phase (Phase I) of a study in which the primary objective is to measure emissions of organic pollutants and particulate matter from a selected set of office equipment typically used in residential and office environments. The specific aims of the overall research effort are: (1) use screening-level measurements to identify and quantify the concentrations of air pollutants of interest emitted by major categories of distributed office equipment in a controlled environment; (2) quantify the emissions of air pollutants from generally representative, individual machines within each of the major categories in a controlled chamber environment using well-defined protocols; (3) characterize the effects of ageing and use on emissions for individual machines spanning several categories; (4) evaluate the importance of operational factors that can be manipulated to reduce pollutant emissions from office machines; and (5) explore the potential relationship between energy consumption and pollutant emissions for machines performing equivalent tasks. The study includes desktop computers (CPU units), computer monitors, and three categories of desktop printing devices. The printer categories are: (1) printers and multipurpose devices using color inkjet technology; (2) low- to medium-output printers and multipurpose devices employing monochrome or color laser technology; and (3) high-output monochrome and color laser printers. The literature review and screening-level experiments in Phase I were designed to identify substances of toxicological significance for more detailed study. 
In addition, these screening level measurements indicate the potential relative importance of different categories of office equipment with respect to human exposures. The more detailed studies of the next phase of research (Phase II) are meant to characterize changes in emissions with time and may identify factors that can be modified to reduce emissions. These measurements may identify 'win-win' situations in which low energy consumption machines have lower pollutant emissions. This information will be used to compare machines to determine if some are substantially better than their peers with respect to their emissions of pollutants.

Maddalena, R.L.; Destaillats, H.; Hodgson, A.T.; McKone, T.E.; Perino, C.



Quantifying uncertainty, variability and likelihood for ordinary differential equation models  

PubMed Central

Background In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches, especially when studying regions with low probability. Conclusions While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate performance and accuracy of the approach and its limitations.
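
The core idea can be sketched for a one-dimensional ODE: along a characteristic, the Liouville equation gives d(log ρ)/dt = −div f(x), so a single extra state variable carries the density along the trajectory. A minimal hand-rolled Runge-Kutta example with an assumed linear vector field:

```python
import numpy as np

# Example ODE dx/dt = f(x) = -a*x. Along a characteristic the density obeys
# d(log rho)/dt = -div f = a, so we integrate the augmented state (x, log rho).
a = 0.5

def f_aug(t, y):
    x, logrho = y
    return np.array([-a * x, a])

def rk4(y, t_end, steps=1000):
    """Classical 4th-order Runge-Kutta integration of the augmented ODE."""
    h = t_end / steps
    t = 0.0
    for _ in range(steps):
        k1 = f_aug(t, y)
        k2 = f_aug(t + h / 2, y + h / 2 * k1)
        k3 = f_aug(t + h / 2, y + h / 2 * k2)
        k4 = f_aug(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

x_end, logrho_end = rk4(np.array([1.0, 0.0]), 2.0)
# analytic solution for comparison: x(2) = exp(-1), log rho(2) = a*2 = 1
print(x_end, logrho_end)
```

For nonlinear systems the only change is replacing the constant `a` in the density equation with the trace of the Jacobian evaluated along the trajectory.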



Quantifying selection in high-throughput Immunoglobulin sequencing data sets  

PubMed Central

High-throughput immunoglobulin sequencing promises new insights into the somatic hypermutation and antigen-driven selection processes that underlie B-cell affinity maturation and adaptive immunity. The ability to estimate positive and negative selection from these sequence data has broad applications not only for understanding the immune response to pathogens, but is also critical to determining the role of somatic hypermutation in autoimmunity and B-cell cancers. Here, we develop a statistical framework for Bayesian estimation of Antigen-driven SELectIoN (BASELINe) based on the analysis of somatic mutation patterns. Our approach represents a fundamental advance over previous methods by shifting the problem from one of simply detecting selection to one of quantifying selection. Along with providing a more intuitive means to assess and visualize selection, our approach allows, for the first time, comparative analysis between groups of sequences derived from different germline V(D)J segments. Application of this approach to next-generation sequencing data demonstrates different selection pressures for memory cells of different isotypes. This framework can easily be adapted to analyze other types of DNA mutation patterns resulting from a mutator that displays hot/cold-spots, substitution preference or other intrinsic biases.

Yaari, Gur; Uduman, Mohamed; Kleinstein, Steven H.
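
The intuition behind quantifying (rather than merely detecting) selection can be shown with a toy statistic: compare the observed fraction of replacement mutations to the fraction expected under no selection. This log-odds sketch is an illustration only, not the BASELINe Bayesian framework.

```python
from math import log

def selection_log_odds(obs_repl, obs_silent, expected_repl_frac):
    """Toy log-odds of the observed replacement-mutation fraction relative
    to the fraction expected under no selection. Positive values suggest
    positive selection, negative values suggest negative selection."""
    f_obs = obs_repl / (obs_repl + obs_silent)
    return (log(f_obs / (1 - f_obs))
            - log(expected_repl_frac / (1 - expected_repl_frac)))

# hypothetical counts: 90 replacement vs 10 silent mutations,
# with 75% replacement expected from the germline mutability model
print(selection_log_odds(90, 10, 0.75))
```

BASELINe replaces this point estimate with a full posterior over selection strength, which is what permits the comparisons across V(D)J segments described above.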



Quantifying lung shunting during planning for radio-embolization  

NASA Astrophysics Data System (ADS)

A method is proposed for accurate quantification of lung uptake during shunt studies for liver cancer patients undergoing radio-embolization. The current standard for analysis of [99mTc]-MAA shunt studies is subjective and highly variable. The technique proposed in this work involves a small additional peripheral intravenous injection of macroaggregated albumin (MAA) and two additional static acquisitions (before and after injection) to quantify the absolute activity in the lungs as a result of arterio-venous shunting. Such quantification also allows for estimates of absorbed dose to lung tissue at the time of treatment based on MIRD formalism. The method was used on six radio-embolization patients attending the department for lung shunt analysis. Quantitative values for each were compared to a previously validated technique using fully quantitative SPECT/CT imaging, treated as the gold standard. The average difference in absolute lung-shunted activity between the proposed technique and the previously validated technique was 2%, with a range of (1-8)%. The proposed method is simple and fast, allowing for accurate quantification of lung shunting and estimates of absorbed dose to lung tissue at treatment, and may one day allow planning and therapy to be completed in a single interventional procedure.

Willowson, Kathy; Bailey, Dale L.; Baldock, Clive
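
The arithmetic behind such estimates can be sketched as follows. The simplified count ratio (ignoring attenuation and background corrections) and the ≈49.7 Gy·kg/GBq coefficient for fully absorbed 90Y are generic MIRD-style assumptions, not details taken from this paper.

```python
def shunt_fraction(lung_counts, liver_counts):
    """Lung shunt fraction from planar counts (simplified: no attenuation
    or background correction)."""
    return lung_counts / (lung_counts + liver_counts)

def lung_dose_gy(activity_gbq, lsf, lung_mass_kg=1.0):
    """MIRD-style absorbed-dose estimate for 90Y assuming complete local
    absorption: ~49.7 Gy per GBq absorbed per kg of tissue (assumed value)."""
    return 49.7 * activity_gbq * lsf / lung_mass_kg

lsf = shunt_fraction(5.0e4, 9.5e5)       # hypothetical counts
print(lsf, lung_dose_gy(1.6, lsf))       # shunt fraction and lung dose (Gy)
```

The point of the paper's absolute quantification is that `lsf` is derived from calibrated activities rather than raw planar counts, removing the subjectivity of region-of-interest drawing.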



Quantifying seismic survey reverberation off the Alaskan North Slope.  


Shallow-water airgun survey activities off the North Slope of Alaska generate impulsive sounds that are the focus of much regulatory attention. Reverberation from repetitive airgun shots, however, can also increase background noise levels, which can decrease the detection range of nearby passive acoustic monitoring (PAM) systems. Typical acoustic metrics for impulsive signals provide no quantitative information about reverberation or its relative effect on the ambient acoustic environment. Here, two conservative metrics are defined for quantifying reverberation: a minimum level metric measures reverberation levels that exist between airgun pulse arrivals, while a reverberation metric estimates the relative magnitude of reverberation vs expected ambient levels in the hypothetical absence of airgun activity, using satellite-measured wind data. The metrics are applied to acoustic data measured by autonomous recorders in the Alaskan Beaufort Sea in 2008 and demonstrate how seismic surveys can increase the background noise over natural ambient levels by 30-45 dB within 1 km of the activity, by 10-25 dB within 15 km of the activity, and by a few dB at 128 km range. These results suggest that shallow-water reverberation would reduce the performance of nearby PAM systems when monitoring for marine mammals within a few kilometers of shallow-water seismic surveys. PMID:22087932

Guerra, Melania; Thode, Aaron M; Blackwell, Susanna B; Michael Macrander, A
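
The minimum-level metric can be sketched as the smallest short-window RMS level across a record, which captures the reverberation floor between airgun pulse arrivals. The window length and the synthetic signal below are assumptions for illustration.

```python
import numpy as np

def minimum_level_db(p, fs, win_s=1.0):
    """Minimum-level metric: the smallest short-window RMS level (dB re the
    pressure unit of p) across the record."""
    n = int(win_s * fs)
    nwin = len(p) // n
    rms = np.sqrt(np.mean(p[: nwin * n].reshape(nwin, n) ** 2, axis=1))
    return 20 * np.log10(rms.min())

# synthetic record: two loud 'pulse' windows around one quiet window
fs = 1000
t = np.arange(0, 1, 1 / fs)
tone = np.sin(2 * np.pi * 100 * t)
p = np.concatenate([10 * tone, 1 * tone, 10 * tone])
print(minimum_level_db(p, fs))
```

The companion reverberation metric in the paper then compares this floor against the ambient level predicted from wind data for the hypothetical no-airgun case.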



Cardiovascular regulation during sleep quantified by symbolic coupling traces  

NASA Astrophysics Data System (ADS)

Sleep is a complex regulated process with short periods of wakefulness and different sleep stages. These sleep stages modulate autonomous functions such as blood pressure and heart rate. The method of symbolic coupling traces (SCT) is used to analyze and quantify time-delayed coupling of these measurements during different sleep stages. The symbolic coupling traces, defined as the symmetric and diametric traces of the bivariate word distribution matrix, allow the quantification of time-delayed coupling. In this paper, the method is applied to heart rate and systolic blood pressure time series during different sleep stages for healthy controls as well as for normotensive and hypertensive patients with sleep apneas. Using the SCT, significantly different cardiovascular mechanisms can be revealed, not only between deep sleep and the other sleep stages but also between healthy subjects and patients. The SCT method is applied to model systems, compared with established methods such as cross correlation, mutual information, and cross recurrence analysis, and demonstrates its advantages, especially for nonstationary physiological data. As a result, SCT proves to be more specific in detecting delays of directional interactions than standard coupling analysis methods and yields additional information which cannot be measured by standard parameters of heart rate and blood pressure variability. The proposed method may help to indicate the pathological changes in cardiovascular regulation and also the effects of continuous positive airway pressure therapy on the cardiovascular system.

Suhrbier, A.; Riedl, M.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.



Quantifying potential error in painting breast excision specimens.  


Aim. When excision margins are close or involved following breast conserving surgery, many surgeons will attempt to reexcise the corresponding cavity margin. Margins are ascribed to breast specimens such that six faces are identifiable to the pathologist, a process that may be prone to error at several stages. Methods. An experimental model was designed according to stated criteria in order to answer the research question. Computer software was used to measure the surface areas of experimental surfaces to compare human-painted surfaces with experimental controls. Results. The variability of the hand-painted surfaces was considerable. Thirty percent of hand-painted surfaces were 20% larger or smaller than controls. The mean area of the last surface painted was significantly larger than controls (mean 58996 pixels versus 50096 pixels, CI 1477-16324, P = 0.014). By chance, each of the six volunteers chose to paint the deep surface last. Conclusion. This study is the first to attempt to quantify the extent of human error in marking imaginary boundaries on a breast excision model and suggests that humans do not make these judgements well, raising questions about the safety of targeting single margins at reexcision. PMID:23762569

Fysh, Thomas; Boddy, Alex; Godden, Amy



Quantifying Potential Error in Painting Breast Excision Specimens  

PubMed Central

Aim. When excision margins are close or involved following breast conserving surgery, many surgeons will attempt to reexcise the corresponding cavity margin. Margins are ascribed to breast specimens such that six faces are identifiable to the pathologist, a process that may be prone to error at several stages. Methods. An experimental model was designed according to stated criteria in order to answer the research question. Computer software was used to measure the surface areas of experimental surfaces to compare human-painted surfaces with experimental controls. Results. The variability of the hand-painted surfaces was considerable. Thirty percent of hand-painted surfaces were 20% larger or smaller than controls. The mean area of the last surface painted was significantly larger than controls (mean 58996 pixels versus 50096 pixels, CI 1477–16324, P = 0.014). By chance, each of the six volunteers chose to paint the deep surface last. Conclusion. This study is the first to attempt to quantify the extent of human error in marking imaginary boundaries on a breast excision model and suggests that humans do not make these judgements well, raising questions about the safety of targeting single margins at reexcision.

Godden, Amy



Quantifying Water Flow And Contaminant Flux In Boreholes  

NASA Astrophysics Data System (ADS)

A new method has been developed for measuring both contaminant and groundwater flux in aquifers. The method uses a sorptive permeable medium that is placed in either a borehole or a monitoring well to intercept contaminated groundwater and release resident tracers. The material is packed in a permeable sock and inserted into the borehole for a specified period of time. The sock is then removed and subsampled to analyze for tracers and contaminants. By quantifying the fraction of resident tracer lost and the mass of contaminant sorbed, contaminant flux and groundwater flow can be calculated. This approach requires knowledge of the tracer and contaminant partitioning or sorption characteristics with the medium and an estimate of the medium-aquifer permeability contrast. The method has been laboratory tested using both liquid hydrocarbon and activated carbon as the sorptive media. This method has also been field tested at the Borden research site in Canada. The field tests were conducted in a TCE/PCE plume that resulted from a controlled release of a DNAPL mixture. The contaminant flux is compared with estimates based on multilevel samplers located 1 m downgradient of the fully screened wells used for the borehole flux tests.

Annable, M. D.; Hatfield, K.; Cho, J.; Rao, S.; Parker, B.; Cherry, J.



Quantifying the abnormal hemodynamics of sickle cell anemia  

NASA Astrophysics Data System (ADS)

Sickle red blood cells (SS-RBC) exhibit heterogeneous morphologies and abnormal hemodynamics in deoxygenated states. A multi-scale model for SS-RBC is developed based on the Dissipative Particle Dynamics (DPD) method. Different cell morphologies (sickle, granular, elongated shapes) typically observed in deoxygenated states are constructed and quantified by the Asphericity and Elliptical shape factors. The hemodynamics of SS-RBC suspensions is studied in both shear and pipe flow systems. The flow resistance obtained from both systems exhibits a larger value than the healthy blood flow due to the abnormal cell properties. Moreover, SS-RBCs exhibit abnormal adhesive interactions with both the vessel endothelium cells and the leukocytes. The effect of the abnormal adhesive interactions on the hemodynamics of sickle blood is investigated using the current model. It is found that both the SS-RBC-endothelium and the SS-RBC-leukocyte interactions can potentially trigger the vicious "sickling and entrapment" cycles, resulting in vaso-occlusion phenomena widely observed in micro-circulation experiments.

Lei, Huan; Karniadakis, George
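
The asphericity shape factor used to quantify cell morphologies has a standard gyration-tensor definition, sketched below. The point clouds are toy stand-ins for DPD bead positions, not the paper's cell models.

```python
import numpy as np

def asphericity(points):
    """Asphericity shape factor from the gyration tensor eigenvalues:
    0 for a spherically symmetric point cloud, 1 for a perfectly linear one."""
    r = np.asarray(points, float)
    r = r - r.mean(axis=0)
    S = r.T @ r / len(r)                   # gyration tensor
    l1, l2, l3 = np.linalg.eigvalsh(S)     # its eigenvalues
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
    return num / (2.0 * (l1 + l2 + l3) ** 2)

# an elongated cell-like cloud scores near 1, a symmetric one near 0
print(asphericity([[i, 0.1 * i, 0] for i in range(20)]))
```

Combined with an elliptical shape factor, this lets the simulation classify sickle vs granular vs elongated morphologies by numbers rather than by eye.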



Variation in continuous reaction norms: quantifying directions of biological interest.  


Thermal performance curves are an example of continuous reaction norm curves of common shape. Three modes of variation in these curves (vertical shift, horizontal shift, and generalist-specialist trade-offs) are of special interest to evolutionary biologists. Since two of these modes are nonlinear, traditional methods such as principal components analysis fail to decompose the variation into biological modes and to quantify the variation associated with each mode. Here we present the results of a new method, template mode of variation (TMV), that decomposes the variation into predetermined modes of variation for a particular set of thermal performance curves. We illustrate the method using data on thermal sensitivity of growth rate in Pieris rapae caterpillars. The TMV model explains 67% of the variation in thermal performance curves among families; generalist-specialist trade-offs account for 38% of the total between-family variation. The TMV method implemented here is applicable to both differences in mean and patterns of variation, and it can be used with either phenotypic or quantitative genetic data for thermal performance curves or other continuous reaction norms that have a template shape with a single maximum. The TMV approach may also apply to growth trajectories, age-specific life-history traits, and other function-valued traits. PMID:16032579

Izem, Rima; Kingsolver, Joel G



Graphical methods for quantifying macromolecules through bright field imaging.  


Bright field imaging of biological samples stained with antibodies and/or special stains provides a rapid protocol for visualizing various macromolecules. However, this method of sample staining and imaging is rarely employed for direct quantitative analysis due to variations in sample fixations, ambiguities introduced by color composition and the limited dynamic range of imaging instruments. We demonstrate that, through the decomposition of color signals, staining can be scored on a cell-by-cell basis. We have applied our method to fibroblasts grown from histologically normal breast tissue biopsies obtained from two distinct populations. Initially, nuclear regions are segmented through conversion of color images into gray scale, and detection of dark elliptic features. Subsequently, the strength of staining is quantified by a color decomposition model that is optimized by a graph cut algorithm. In rare cases where nuclear signal is significantly altered as a result of sample preparation, nuclear segmentation can be validated and corrected. Finally, segmented stained patterns are associated with each nuclear region following region-based tessellation. Compared to classical non-negative matrix factorization, proposed method: (i) improves color decomposition, (ii) has a better noise immunity, (iii) is more invariant to initial conditions and (iv) has a superior computing performance. PMID:18703588

Chang, Hang; DeFilippis, Rosa Anna; Tlsty, Thea D; Parvin, Bahram



Quantifying the Benefits of Combining Offshore Wind and Wave Energy  

NASA Astrophysics Data System (ADS)

For many locations, the offshore wind resource and the wave energy resource are collocated, which suggests a natural synergy if both technologies are combined into one offshore marine renewable energy plant. Initial meteorological assessments of the western coast of the United States suggest only a weak hourly correlation between wind and wave power levels, a consequence of the large ocean-basin wave dynamics and storm systems of the North Pacific. This finding indicates that combining the two power sources could reduce the variability in electric power output from a combined wind and wave offshore plant. A combined plant is modeled with offshore wind turbines and Pelamis wave energy converters, using wind and wave data from meteorological buoys operated by the US National Data Buoy Center off the coasts of California, Oregon, and Washington. This study will present results quantifying the benefits of combining wind and wave energy for the electrical power system, facilitating increased renewable energy penetration and supporting reductions in greenhouse gas emissions and in the air and water pollution associated with conventional fossil fuel power plants.
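The variability-reduction argument can be sketched numerically: averaging two weakly correlated generation time series yields a smoother combined output than either source alone. The capacity factors and coupling strength below are illustrative assumptions, not values derived from the buoy data.

```python
import random
import statistics

random.seed(0)
n = 10000

# Hypothetical hourly capacity factors; the weak coupling (0.2) is an assumption.
wind = [random.gauss(0.40, 0.15) for _ in range(n)]
wave = [0.2 * w + random.gauss(0.32, 0.12) for w in wind]
combined = [(a + b) / 2.0 for a, b in zip(wind, wave)]

# Averaging two weakly correlated sources reduces the standard deviation
# of output relative to either source alone; combined is smallest.
print(statistics.stdev(wind), statistics.stdev(wave), statistics.stdev(combined))
```

With perfect correlation the averaging benefit would vanish, which is why the weak observed wind-wave correlation matters for the combined-plant case.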

Stoutenburg, E.; Jacobson, M. Z.




Technology Transfer Automated Retrieval System (TEKTRAN)

The vision of this Special Issue is Quantifying and Modeling Agricultural Management Effects on Soil Properties and Processes - Essential for Developing Best Management Practices for the Environment and Production in the 21st Century. The issue presents the state-of-the-science research results and ...


What do recent advances in quantifying climate and carbon cycle uncertainties mean for climate policy?  

Microsoft Academic Search

Global policy targets for greenhouse gas emissions reductions are being negotiated. The amount of emitted carbon dioxide remaining in the atmosphere is controlled by carbon cycle processes in the ocean and on land. These processes are themselves affected by climate. The resulting 'climate-carbon cycle feedback' has recently been quantified, but the policy implications have not. Using a scheme to emulate

Joanna I. House; Chris Huntingford; Wolfgang Knorr; Sarah E. Cornell; Peter M. Cox; Glen R. Harris; Chris D. Jones; Jason A. Lowe; I. Colin Prentice



Image Processing to quantify the Trajectory of a Visualized Air Jet  

Microsoft Academic Search

In a ventilated space, the incoming air jet and the resulting airflow pattern play key roles in the removal or supply of heat, moisture, and harmful gases from or to living organisms (man, animal and plant). In this research, an image processing method was developed to visualize and quantify the two-dimensional trajectory and the deflection angle of an air jet

A. Van Brecht; K. Janssens; D. Berckmans; E. Vranken



Uncertainties in a carbon footprint model for detergents; quantifying the confidence in a comparative result  

Microsoft Academic Search

Background, aim, and scope: A new trend driven by climate change concerns is the interest in labeling consumer products with a carbon footprint (CF) number. Here, we present a study that examines the uncertainty in the estimated CFs of a liquid and a compact powder detergent and how the uncertainty varies with the type of comparison one wishes to make. Materials

Arjan de Koning; Diederik Schowanek; Joost Dewaele; Annie Weisbrod; Jeroen Guinée



Complexity Results for Quantified Boolean Formulae Based on Complete Propositional Languages  

Microsoft Academic Search

Several propositional fragments have been considered so far as target languages for knowledge compilation and used for improving computational tasks from major AI areas (like inference, diagnosis and planning); among them are the ordered binary decision diagrams, prime implicates, prime implicants,

Sylvie Coste-marquis; Daniel Le Berre; Florian Letombe; Pierre Marquis



Analyzing quantum simulators efficiently: Scalable state tomography and quantifying entanglement with routine measurements  

NASA Astrophysics Data System (ADS)

Conventional full state tomography reaches its limit already for a few qubits and hence novel methods for the verification and benchmarking of quantum devices are called for. We show how the complete reconstruction of density matrices is possible even if one relies only on local information about the state. This results in an experimental effort that is linear in the number of qubits and efficient post-processing -- in stark contrast to the exponential scaling of standard tomography. Whenever full tomography is not needed but instead less information required, one would expect that even fewer measurements suffice. Taking entanglement content of solid state samples and bosons in lattices as an example, we show how it may be quantified unconditionally using already routinely performed measurements only.
Scalable reconstruction of density matrices, T. Baumgratz, D. Gross, M. Cramer, and M.B. Plenio, arXiv:1207.0358.
Efficient quantum state tomography, M. Cramer, M.B. Plenio, S.T. Flammia, R. Somma, D. Gross, S.D. Bartlett, O. Landon-Cardinal, D. Poulin, and Y.-K. Liu, Nat. Commun. 1, 149 (2010).
Measuring entanglement in condensed matter systems, M. Cramer, M.B. Plenio, and H. Wunderlich, Phys. Rev. Lett. 106, 020401 (2011).

Cramer, Marcus; Baumgratz, Tillmann; Marty, Oliver; Gross, David; Plenio, Martin



Quantifying nanoscale order in amorphous materials: simulating fluctuation electron microscopy of amorphous silicon  

NASA Astrophysics Data System (ADS)

Fluctuation electron microscopy (FEM) is explicitly sensitive to 3- and 4-body atomic correlation functions in amorphous materials; this is sufficient to establish the existence of structural order on the nanoscale, even when the radial distribution function extracted from diffraction data appears entirely amorphous. However, it remains a formidable challenge to invert the FEM data into a quantitative model of the structure. Here, we quantify the FEM method for a-Si by forward simulating the FEM data from a family of high quality atomistic models. Using a modified WWW method, we construct computational models that contain 10-40 vol% of topologically crystalline grains, 1-3 nm in diameter, in an amorphous matrix and calculate the FEM signal, which consists of the statistical variance V(k) of the dark-field image as a function of scattering vector k. We show that V(k) is a complex function of the size and volume fraction of the ordered regions present in the amorphous matrix. However, the ratio of the variance peaks as a function of k affords the size of the ordered regions, and the magnitude of the variance affords a semi-quantitative measure of the volume fraction. We have also compared models that contain various amounts of strain in the ordered regions. This analysis shows that the amount of strain in realistic models is sufficient to mute variance peaks at high k. We conclude with a comparison between the model results and experimental data.
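The statistical variance used in FEM is conventionally the normalized variance of dark-field image intensities at each scattering vector, V(k) = <I^2(k)>/<I(k)>^2 - 1. A minimal sketch with made-up intensity lists:

```python
def fem_variance(intensities):
    """Normalized variance V = <I^2>/<I>^2 - 1 of dark-field intensities
    measured at one scattering vector k."""
    n = len(intensities)
    mean = sum(intensities) / n
    mean_sq = sum(i * i for i in intensities) / n
    return mean_sq / mean ** 2 - 1.0

# A spatially uniform image gives V = 0; heterogeneity raises V.
print(fem_variance([1.0, 1.0, 1.0, 1.0]))   # 0.0 (uniform)
print(fem_variance([0.5, 1.5, 0.5, 1.5]))   # 0.25 (heterogeneous)
```

Repeating this over a range of k values traces out the V(k) curve whose peak ratios and magnitudes the abstract interprets in terms of grain size and volume fraction.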

Bogle, Stephanie N.; Voyles, Paul M.; Khare, Sanjay V.; Abelson, John R.



Flat Globe: Showing the Changing Seasons  

NSDL National Science Digital Library

SeaWiFS false color data showing seasonal change in the oceans and on land for the entire globe. The data is seasonally averaged, and shows the sequence: fall, winter, spring, summer, fall, winter, spring (for the Northern Hemisphere).

Allen, Jesse; Newcombe, Marte; Feldman, Gene



The Franklin Institute's Traveling Science Shows  

NSDL National Science Digital Library

The Franklin Institute's team of science educators is available for shows on a variety of science topics. Traveling Science Shows are aligned with National Science Education Standards and focus on Physics, Biology, and Chemistry.

Shows, Traveling S.



Spectral imaging-based methods for quantifying autophagy and apoptosis.  


Spectral imaging systems are capable of detecting and quantifying subtle differences in light quality. In this study we coupled spectral imaging with fluorescence and white light microscopy to develop new methods for quantifying autophagy and apoptosis. For autophagy, we employed multispectral imaging to examine spectral changes in the fluorescence of LC3-GFP, a chimeric protein commonly used to track autophagosome formation. We found that punctate autophagosome-associated LC3-GFP exhibited a spectral profile that was distinctly different from diffuse cytosolic LC3-GFP. We then exploited this shift in spectral quality to quantify the amount of autophagosome-associated signal in single cells. Hydroxychloroquine (CQ), an anti-malarial agent that increases autophagosomal number, significantly increased the punctate LC3-GFP spectral signature, providing proof-of-principle for this approach. For studying apoptosis, we employed the Prism and Reflector Imaging Spectroscopy System (PARISS) hyperspectral imaging system to identify a spectral signature for active caspase-8 immunostaining in ex vivo tumor samples. This system was then used to rapidly quantify apoptosis induced by lexatumumab, an agonistic TRAIL-R2/DR5 antibody, in histological sections from a preclinical mouse model. We further found that the PARISS could accurately distinguish apoptotic tumor regions in hematoxylin and eosin-stained sections, which allowed us to quantify death receptor-mediated apoptosis in the absence of an apoptotic marker. These spectral imaging systems provide unbiased, quantitative and fast means for studying autophagy and apoptosis and complement the existing methods in their respective fields. PMID:21757995

Dolloff, Nathan G; Ma, Xiahong; Dicker, David T; Humphreys, Robin C; Li, Lin Z; El-Deiry, Wafik S



Silicon nanowire detectors showing phototransistive gain  

Microsoft Academic Search

Nanowire photodetectors are shown to function as phototransistors with high sensitivity. Due to small lateral dimensions, a nanowire detector can have low dark current while showing large phototransistive gain. Planar and vertical silicon nanowire photodetectors fabricated in a top-down approach using an etching process show a phototransistive gain above 35 000 at low light intensities. Simulations show that incident light

Arthur Zhang; Cesare Soci; Yisi Liu; Deli Wang; Yu-Hwa Lo



Watching The Daily Show in Kenya  

Microsoft Academic Search

Global distribution of a popular American television programme – Jon Stewart's Daily Show – offers a rare opportunity to examine transnational contingencies of meaning in political satire. Drawing on focus group discussions in Kenya, this analysis shows how some East Africans appropriated and reinterpreted – indeed unexpectedly subverted – The Daily Show's political content, deriving from it insights that Stewart

Angelique Haugerud; Dillon Mahoney; Meghan Ference




SciTech Connect

Several transiting super-Earths are expected to be discovered in the coming few years. While tools to model the interior structure of transiting planets exist, inferences about the composition are fraught with ambiguities. We present a framework to quantify how much we can robustly infer about super-Earth and Neptune-size exoplanet interiors from radius and mass measurements. We introduce quaternary diagrams to illustrate the range of possible interior compositions for planets with four layers (iron core, silicate mantles, water layers, and H/He envelopes). We apply our model to CoRoT-7b, GJ 436b, and HAT-P-11b. Interpretation of planets with H/He envelopes is limited by the model uncertainty in the interior temperature, while for CoRoT-7b observational uncertainties dominate. We further find that our planet interior model sharpens the observational constraints on CoRoT-7b's mass and radius, assuming the planet does not contain significant amounts of water or gas. We show that the strength of the limits that can be placed on a super-Earth's composition depends on the planet's density; for similar observational uncertainties, high-density super-Mercuries allow the tightest composition constraints. Finally, we describe how techniques from Bayesian statistics can be used to take into account in a formal way the combined contributions of both theoretical and observational uncertainties to ambiguities in a planet's interior composition. On the whole, with only a mass and radius measurement an exact interior composition cannot be inferred for an exoplanet because the problem is highly underconstrained. Detailed quantitative ranges of plausible compositions, however, can be found.

Rogers, L. A.; Seager, S. [Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)



Quantified trends in the history of verbal behavior research.  


The history of scientific research about verbal behavior research, especially that based on Verbal Behavior (Skinner, 1957), can be assessed on the basis of a frequency and celeration analysis of the published and presented literature. In order to discover these quantified trends, a comprehensive bibliographical database was developed. Based on several literature searches, the bibliographic database included papers pertaining to verbal behavior that were published in the Journal of the Experimental Analysis of Behavior, the Journal of Applied Behavior Analysis, Behaviorism, The Behavior Analyst, and The Analysis of Verbal Behavior. A nonbehavioral journal, the Journal of Verbal Learning and Verbal Behavior was assessed as a nonexample comparison. The bibliographic database also included a listing of verbal behavior papers presented at the meetings of the Association for Behavior Analysis. Papers were added to the database if they (a) were about verbal behavior, (b) referenced B.F. Skinner's (1957) book Verbal Behavior, or (c) did both. Because the references indicated the year of publication or presentation, a count per year of them was measured. These yearly frequencies were plotted on Standard Celeration Charts. Once plotted, various celeration trends in the literature became visible, not the least of which was the greater quantity of verbal behavior research than is generally acknowledged. The data clearly show an acceleration of research across the past decade. The data also question the notion that a "paucity" of research based on Verbal Behavior currently exists. Explanations of the acceleration of verbal behavior research are suggested, and plausible reasons are offered as to why a relative lack of verbal behavior research extended through the mid 1960s to the latter 1970s. PMID:22477630

Eshleman, J W






Quantifying the effects of material properties on analog models of critical taper wedges  

NASA Astrophysics Data System (ADS)

Analogue models are inherently handmade and reflect their creator's shaping character. For example, sieving style in combination with grain geometry and distribution have been claimed to influence bulk material properties and the outcome of analogue experiments. Few studies exist that quantify these effects, and here we aim at investigating the impact of bulk properties of granular materials on the structural development of convergent brittle wedges in analogue experiments. In a systematic fashion, natural sands as well as glass beads of different grain size and size distribution were sieved by different persons from different heights, and the resulting bulk density was measured. A series of analogue experiments in both the push and pull setups was performed. The differences in the outcome of experiments were analyzed based on sidewall pictures and 3D laser scanning of the surface. A new high-resolution approach to measuring surface slope automatically is introduced and applied to the evaluation of images and profiles. This procedure is compared to manual methods of determining surface slope. The effect of sidewall friction was quantified by measuring lateral changes in surface slope. The resulting dataset is used to identify the main differences between pushed and pulled wedge experiments in the light of critical taper theory. The bulk density of granular material was found to be highly dependent on sieve height. Sieve heights of less than 50 cm produced a bulk density that was up to 10% less than the maximum bulk density, an effect equally shown for different people sieving the material. Glass beads were found to produce a more regular structure of in-sequence thrusts in both space and time than sands, while displaying less variability. Surface slope was found to be highly transient for pushed wedge experiments, whereas it attained a stable value in pulled experiments.
Pushed wedges are inferred to develop into a supercritical state because they exceed the theoretical critical surface slope by 5-15°. Since bulk density affects shear strength, different sieving styles could potentially alter the results of analogue models and must be taken into consideration when filling in material. Results from this study also show that only wedges in the pull setup are accurately described by critical taper theory.
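Automated surface-slope measurement from a scanned elevation profile can be illustrated with a least-squares line fit; the fitting choice here is an assumption for illustration, and the paper's exact high-resolution procedure may differ.

```python
import math

def surface_slope_deg(x, z):
    """Least-squares slope of an elevation profile z(x), in degrees."""
    n = len(x)
    mx = sum(x) / n
    mz = sum(z) / n
    beta = (sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
            / sum((xi - mx) ** 2 for xi in x))
    return math.degrees(math.atan(beta))

# A wedge rising 1 cm per 10 cm of horizontal distance tapers at ~5.7 degrees.
print(surface_slope_deg([0, 10, 20, 30], [0, 1, 2, 3]))
```

Applying such a fit along many across-strike profiles is one way to quantify the lateral slope changes used to assess sidewall friction.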

Hofmann, F.; Rosenau, M.; Schreurs, G.; Friedrich, A. M.



Quantifying urban street configuration for improvements in air pollution models  

NASA Astrophysics Data System (ADS)

In many built-up urban areas, tall buildings along narrow streets obstruct the free flow of air, resulting in higher pollution levels. Input data to account for street configuration in models are difficult to obtain for large numbers of streets. We describe an approach to calculate indicators of this "urban canyon effect" using 3-dimensional building data and evaluated whether these indicators improved spatially resolved land use regression (LUR) models. Concentrations of NO2 and NOx were available from 132 sites in the Netherlands. We calculated four indicators for canyon effects at each site: (1) the maximum aspect ratio (building height/width of the street) between buildings on opposite sides of the street, (2) the mean building angle, which is the angle between the horizontal street level and the line of sight to the top of surrounding buildings, (3) the median building angle, and (4) the "SkyView Factor" (SVF), a measure of the total fraction of visible sky. Basic LUR models were computed for both pollutants using common predictors such as household density, land-use and nearby traffic intensity. We added each of the four canyon indicators to the basic LUR models and evaluated whether they improved the model. The calculated aspect ratio agreed well (R2 = 0.49) with aspect ratios calculated from field observations. Explained variance (R2) of the basic LUR models without canyon indicators was 80% for NO2 and 76% for NOx, and increased to 82% and 78% respectively if SVF was included. Despite this small increase in R2, contrasts in SVF (10th-90th percentile) resulted in substantial concentration differences of 5.56 µg m-3 in NO2 and 10.9 µg m-3 in NOx. We demonstrated a GIS-based approach to quantify the obstruction of free air flow by buildings, applicable for large numbers of streets. Canyon indicators could be valuable to consider in air pollution models, especially in areas with low- and high-rise canyons.
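Two of the four indicators are simple geometric quantities; a sketch with hypothetical building dimensions in metres (the SVF computation from full 3D building data is more involved and is not reproduced here):

```python
import math

def aspect_ratio(building_height, street_width):
    """Canyon aspect ratio: building height divided by street width."""
    return building_height / street_width

def building_angle_deg(building_height, distance_to_facade):
    """Elevation angle from street level to the roofline of a facing building."""
    return math.degrees(math.atan2(building_height, distance_to_facade))

print(aspect_ratio(20.0, 10.0))        # 2.0: a fairly deep canyon
print(building_angle_deg(10.0, 10.0))  # 45.0 degrees
```

Averaging or taking the median of such angles over all surrounding facades yields the mean and median building-angle indicators described above.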

Eeftens, Marloes; Beekhuizen, Johan; Beelen, Rob; Wang, Meng; Vermeulen, Roel; Brunekreef, Bert; Huss, Anke; Hoek, Gerard



Quantifying brain shift during neurosurgery using spatially tracked ultrasound  

NASA Astrophysics Data System (ADS)

Brain shift during neurosurgery currently limits the effectiveness of stereotactic guidance systems that rely on preoperative image modalities like magnetic resonance (MR). The authors propose a process for quantifying intraoperative brain shift using spatially-tracked freehand intraoperative ultrasound (iUS). First, one segments a distinct feature from the preoperative MR (tumor, ventricle, cyst, or falx) and extracts a faceted surface using the marching cubes algorithm. Planar contours are then semi-automatically segmented from two sets of iUS b-planes obtained (a) prior to the dural opening and (b) after the dural opening. These two sets of contours are reconstructed in the reference frame of the MR, composing two distinct sparsely-sampled surface descriptions of the same feature segmented from MR. Using the Iterative Closest Point (ICP) algorithm one obtains discrete estimates of the feature deformation performing point-to-surface matching. Vector subtraction of the matched points then can be used as sparse deformation data inputs for inverse biomechanical brain tissue models. The results of these simulations are then used to modify the pre-operative MR to account for intraoperative changes. The proposed process has undergone preliminary evaluations in a phantom study and was applied to data from two clinical cases. In the phantom study, the process recovered controlled deformations with an RMS error of 1.1 mm. These results also suggest that clinical accuracy would be on the order of 1-2 mm. This finding is consistent with prior work by the Dartmouth Image-Guided Neurosurgery (IGNS) group. In the clinical cases, the deformations obtained were used to produce qualitatively reasonable updated guidance volumes.
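The point matching and vector-subtraction step can be sketched as a single nearest-neighbour pass; full ICP iterates this together with a rigid-alignment update, and the vertex list below stands in for the marching-cubes mesh. All coordinates are hypothetical.

```python
import math

def match_displacements(contour_pts, surface_pts):
    """For each iUS contour point, find the closest MR surface vertex and
    return the displacement vector (a sparse deformation estimate)."""
    displacements = []
    for p in contour_pts:
        q = min(surface_pts, key=lambda s: math.dist(p, s))  # Python 3.8+
        displacements.append(tuple(qi - pi for qi, pi in zip(q, p)))
    return displacements

# A feature shifted 2 mm along x yields displacement vectors of (2, 0, 0).
surface = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
contour = [(-2.0, 0.0, 0.0), (8.0, 0.0, 0.0)]
print(match_displacements(contour, surface))
```

These sparse displacement vectors are exactly the kind of boundary data the abstract feeds into the inverse biomechanical tissue model.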

Blumenthal, Tico; Hartov, Alex; Lunn, Karen; Kennedy, Francis E.; Roberts, David W.; Paulsen, Keith D.



Ancient bacteria show evidence of DNA repair  

PubMed Central

Recent claims of cultivable ancient bacteria within sealed environments highlight our limited understanding of the mechanisms behind long-term cell survival. It remains unclear how dormancy, a favored explanation for extended cellular persistence, can cope with spontaneous genomic decay over geological timescales. There has been no direct evidence in ancient microbes for the most likely mechanism, active DNA repair, or for the metabolic activity necessary to sustain it. In this paper, we couple PCR and enzymatic treatment of DNA with direct respiration measurements to investigate long-term survival of bacteria sealed in frozen conditions for up to one million years. Our results show evidence of bacterial survival in samples up to half a million years in age, making this the oldest independently authenticated DNA to date obtained from viable cells. Additionally, we find strong evidence that this long-term survival is closely tied to cellular metabolic activity and DNA repair that over time proves to be superior to dormancy as a mechanism in sustaining bacteria viability.

Johnson, Sarah Stewart; Hebsgaard, Martin B.; Christensen, Torben R.; Mastepanov, Mikhail; Nielsen, Rasmus; Munch, Kasper; Brand, Tina; Gilbert, M. Thomas P.; Zuber, Maria T.; Bunce, Michael; Rønn, Regin; Gilichinsky, David; Froese, Duane; Willerslev, Eske



Quantifying the Impact of Dust on Heterogeneous Ice Generation in Midlevel Supercooled Stratiform Clouds  

SciTech Connect

Dust aerosols have been regarded as effective ice nuclei (IN), but large uncertainties regarding their efficiencies remain. Here, four years of collocated CALIPSO and CloudSat measurements are used to quantify the impact of dust on heterogeneous ice generation in midlevel supercooled stratiform clouds (MSSCs) over the ‘dust belt’. The results show that the dusty MSSCs have an up to 20% higher mixed-phase cloud occurrence, up to 8 dBZ higher mean maximum Ze (Ze_max), and up to 11.5 g/m2 higher ice water path (IWP) than similar MSSCs under background aerosol conditions. Assuming similar ice growth and fallout history in similar MSSCs, the significant differences in Ze_max between dusty and non-dusty MSSCs reflect ice particle number concentration differences. Therefore, observed Ze_max differences indicate that dust could enhance ice particle concentration in MSSCs by a factor of 2 to 6 at temperatures colder than -12°C. The enhancements are strongly dependent on the cloud top temperature, large dust particle concentration and chemical compositions. These results imply an important role of dust particles in modifying mixed-phase cloud properties globally.
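The step from a reflectivity difference to a concentration enhancement follows from the decibel scale: under the abstract's assumption that Ze differences reflect number-concentration differences (i.e., Ze scales linearly with N for otherwise similar clouds), a dB difference maps to a factor of 10^(dB/10).

```python
def concentration_factor(delta_dbz):
    """Ratio N2/N1 implied by a reflectivity difference in dB, assuming
    Ze scales linearly with ice particle number concentration."""
    return 10.0 ** (delta_dbz / 10.0)

print(round(concentration_factor(8.0), 2))  # ~6.31, near the top of the 2-6 range
print(round(concentration_factor(3.0), 2))  # ~2.0, the low end of the range
```

So the reported factor of 2 to 6 corresponds to roughly 3-8 dB of Ze_max enhancement.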

Zhang, Damao; Wang, Zhien; Heymsfield, Andrew J.; Fan, Jiwen; Liu, Dong; Zhao, Ming



Symmetry in external work (SEW): a novel method of quantifying gait differences between prosthetic feet.  


Unilateral transtibial amputees (TTAs) show subtle gait variations while using different prosthetic feet. These variations have not been detected consistently with previous experimental measures. We introduce a novel measure called Symmetry in External Work (SEW) for quantifying kinetic gait differences between prosthetic feet. External work is the result of changes in kinetic and potential energy of the body center of mass (CoM). SEW is computed by integrating vertical ground reaction forces obtained using F-scan in-sole sensors. Since various prosthetic feet have different designs, we hypothesized that SEW will vary with the type of foot used. This hypothesis was tested with a single unilateral TTA using four prosthetic feet (Proprio, Trias+, Seattle Lite and SACH). The Proprio (mean symmetry 94.5% +/- 1.1%) and the Trias+ (92.1% +/- 2.5%) feet exhibited higher symmetry between the intact and prosthetic limbs, as compared to the Seattle (67.8% +/- 19.3%) and SACH (35.7% +/- 11.1%) feet. There was also good agreement in vertical CoM excursion between the intact foot and prosthetic feet with heel-toe foot plate designs. Results indicate that the SEW measure may be a viable method to detect kinetic differences between prosthetic feet and could have clinical applications because of its relatively low-cost instrumentation and minimal subject intervention. PMID:19367518
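The abstract reports symmetry as a percentage between the limbs. One plausible index (hypothetical; the paper's exact formula is not given here) is the ratio of the smaller to the larger per-step external work:

```python
def symmetry_percent(work_intact, work_prosthetic):
    """Percent symmetry between external work of intact and prosthetic limbs.
    100% means identical work; the min/max ratio is a hypothetical index."""
    return 100.0 * min(work_intact, work_prosthetic) / max(work_intact, work_prosthetic)

# e.g. a prosthetic side producing 9 J against the intact limb's 10 J scores 90%.
print(symmetry_percent(10.0, 9.0))
```

With such an index, the reported values (94.5% for the Proprio down to 35.7% for the SACH) read directly as how closely the prosthetic limb's external work matches the intact limb's.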

Agrawal, Vibhor; Gailey, Robert; O'Toole, Christopher; Gaunaurd, Ignacio; Dowell, Tomas



Quantifying macropore recharge: Examples from a semi-arid area  

USGS Publications Warehouse

The purpose of this paper is to illustrate the significantly increased resolution of determining macropore recharge by combining physical, chemical, and isotopic methods of analysis. Techniques for quantifying macropore recharge were developed for both small-scale (1 to 10 km2) and regional-scale areas in arid and semi-arid areas. The Southern High Plains region of Texas and New Mexico was used as a representative field site to test these methods. Macropore recharge in small-scale areas is considered to be the difference between total recharge through floors of topographically closed basins and interstitial recharge through the same area. On the regional scale, macropore recharge was considered to be the difference between regional average annual recharge and interstitial recharge measured in the unsaturated zone. Stable isotopic composition of ground water and precipitation was used as an independent estimate of macropore recharge on the regional scale. Results of this analysis suggest that in the Southern High Plains recharge flux through macropores is between 60 and 80 percent of the total 11 mm/y. Between 15 and 35 percent of the recharge occurs by interstitial recharge through the basin floors. Approximately 5 percent of the total recharge occurs as either interstitial or matrix recharge between the basin floors, representing approximately 95 percent of the area. The approach is applicable to other arid and semi-arid areas that focus rainfall into depressions or valleys.
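The partitioning reported above is simple arithmetic on the 11 mm/y total; laying it out makes the flux ranges explicit:

```python
total = 11.0  # mm/yr, total regional recharge from the abstract

macropore = (0.60 * total, 0.80 * total)     # 6.6-8.8 mm/yr through macropores
interstitial = (0.15 * total, 0.35 * total)  # 1.65-3.85 mm/yr through basin floors
between_basins = 0.05 * total                # ~0.55 mm/yr over ~95% of the area

print(macropore, interstitial, between_basins)
```

The striking point is the last line: the ~95% of the landscape between basin floors contributes only about 5% of the recharge.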

Wood, W. W.; Rainwater, K. A.; Thompson, D. B.



Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method  

NASA Astrophysics Data System (ADS)

Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas where significant petrochemical exploration, drilling, transport, processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high quality, quantitative measurements of methane fluxes in these different environments have not been available, due both to the lack of robust field-deployable instrumentation and to the fact that most significant sources of methane extend over large areas (from 10's to 1,000,000's of square meters) and are heterogeneous emitters - i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the same vicinity as the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far-field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well-mixed). In this regime, the methane emissions are given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer.
We present detailed methane flux results from four different landfills in the United States, using a commercially available Cavity Ringdown Spectroscopy (CRDS) dual-species (methane-acetylene) analyzer. This instrument, because of its high precision, mobility, and ease of use, enables quantification of the methane flux from a variety of extended area sources. The instrument was operated on battery power and mounted in a four-wheel-drive vehicle. A high-precision GPS and a two-dimensional self-aligning anemometer were integrated directly with the instrument. Concentration data on methane and acetylene were collected every second and, together with the wind and GPS data, were processed to provide quantitative measurements of total methane fluxes on a time scale of just minutes. The landfills studied varied widely in their size, location, topography, and physical access. Data were collected using three variants of the method: the Mobile Transect Method, in which the dual-species analyzer is transported rapidly through the plumes in the far field; the Stationary Plume Method, in which the analyzer is situated in a fixed location downwind of the release point; and a new method called the Mid-Field Stationary Method, in which the instrument is located at a fixed position closer than the true far field, where the plume overlap is not ideal. The resulting methane fluxes varied over a wide range, from just a few kg of methane per minute to over 20 kg per minute. Finally, we describe how these methods can be used to quantify methane emissions from other natural and anthropogenic extended-area sources, such as wetlands.
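The far-field ratio calculation at the heart of the tracer dilution method can be sketched in a few lines. The excess concentrations and tracer release rate below are illustrative values, not measurements from the study:

```python
def methane_flux(ch4_excess_ppb, tracer_excess_ppb, tracer_release_kg_min,
                 mw_ch4=16.04, mw_tracer=26.04):
    """Far-field tracer dilution: the methane emission rate equals the
    ratio of the background-corrected concentrations (a molar ratio,
    since both are mixing ratios) times the known tracer release rate.

    mw_tracer defaults to acetylene (C2H2, 26.04 g/mol).
    """
    molar_ratio = ch4_excess_ppb / tracer_excess_ppb
    # convert mol CH4 per mol tracer to kg CH4 per kg tracer
    return molar_ratio * (mw_ch4 / mw_tracer) * tracer_release_kg_min

# hypothetical transect-averaged excesses above background
flux = methane_flux(ch4_excess_ppb=120.0, tracer_excess_ppb=15.0,
                    tracer_release_kg_min=0.5)
print(f"{flux:.2f} kg CH4 / min")   # → 2.46 kg CH4 / min
```

Because both species are measured by the same analyzer at the same point, the atmospheric dilution factor cancels in the ratio, which is what makes the method robust to variable transport.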

Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben



Quantifying near-surface thermal perturbations with multikinetic thermochronologic systems  

NASA Astrophysics Data System (ADS)

Processes causing near-surface heating of rocks at depths much shallower than nominal thermochronologic closure depths include hydrothermal fluid flow, wildfire, brittle faulting, magmatism, diurnal/radiation effects, and impact. The frequency, intensity, and spatial distribution of these processes vary with tectonic and geomorphic setting, and although samples in some regions are unlikely to be susceptible to near-surface reheating, some are strongly affected and may provide constraints on the dynamics of such shallow processes. Burial and metamorphic histories can also be interpreted and quantitatively modeled as nonmonotonic thermal perturbations. When thermochronologic systems are completely reset by a heating event they provide useful information on its timing, but no constraint on its temperature and duration. When systems are only partially reset, however, characteristics of thermal perturbations can be quantified from the ages of two or more thermochronometers, provided the systems have diffusion kinetics with different activation energies and the measured ages can be interpreted as fractional resetting extents. Fractional resetting extents can be converted to Fourier numbers (Dt/a²) for each system and combined to yield a unique duration and temperature of an equivalent square-pulse perturbation. More realistic model perturbations involving finite prograde and retrograde paths can also be modeled using this approach; these consistently involve higher maximum temperatures and longer durations than equivalent square-pulse events. Relationships between model durations and temperatures of thermal perturbations for multiple samples from spatially restricted regions show distinctive patterns characteristic of the geometry and dynamics of the heating event(s) that created them. 
Applying this approach to centimeter-scale transects of exposed bedrock shows a strong positive correlation between model durations and temperatures of the perturbation(s) inverted from partially reset He and fission-track ages on single crystals of apatite. The majority of crystals require temperatures between 250-500°C and durations of seconds to hours, though a significant number imply (probably unrealistically) higher temperatures and shorter durations. The temperature-duration correlations for samples at different depths from the exposed surface are most consistent with one or more short-duration (<2 minutes) and high-temperature (~500-800°C) heating events at the surface. Different types of heating events, such as hydrothermal fluid flow or magmatic heating, lead to other types of correlations between model durations and temperatures. Given estimated fractional resetting extents for two different kinetic systems with distinct activation energies, this general approach may be used to constrain characteristics of a wide range of nonmonotonic thermal histories affecting material at shallow depths, as well as metamorphism and burial.
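The two-chronometer inversion described above can be illustrated with a short sketch: given Fourier numbers for two systems whose diffusivities follow Arrhenius laws with different activation energies, the temperature and duration of an equivalent square-pulse event follow directly. The kinetic parameters and Fourier numbers below are hypothetical, not the apatite He and fission-track kinetics used in the study:

```python
import math

R = 8.314  # gas constant, J/(mol K)

# hypothetical multikinetic pair: two systems in the same rock with
# different activation energies Ea (J/mol) and frequency factors
# k = D0/a^2 (1/s); values are illustrative only
Ea1, k1 = 140e3, 5e7      # lower-retentivity system
Ea2, k2 = 190e3, 1e10     # higher-retentivity system

# Fourier numbers Fo = D(T)*t/a^2 converted from fractional resetting extents
Fo1, Fo2 = 1e-2, 1e-3

# Fo1/Fo2 = (k1/k2) * exp(-(Ea1 - Ea2)/(R*T))  ->  solve for T, then t
T_K = (Ea1 - Ea2) / (R * (math.log(k1 / k2) - math.log(Fo1 / Fo2)))
t_s = Fo1 / (k1 * math.exp(-Ea1 / (R * T_K)))
print(T_K, t_s)   # equivalent square-pulse temperature (K) and duration (s)
```

With these assumed inputs the inversion returns a short, hot pulse (roughly 790 K for a fraction of a second), the same qualitative character as the wildfire-like events inferred in the abstract.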

Reiners, P. W.



Quantifying mesoscale soil moisture with the cosmic-ray rover  

NASA Astrophysics Data System (ADS)

Soil moisture governs the surface fluxes of mass and energy and is a major influence on floods and drought. Existing techniques measure soil moisture either at a point or over a large area many kilometers across. To bridge these two scales we used the cosmic-ray rover, an instrument similar to the recently developed COSMOS probe, but bigger and mobile. This paper explores the challenges and opportunities for mapping soil moisture over large areas using the cosmic-ray rover. In 2012, soil moisture was mapped 22 times in a 25 km × 40 km survey area of the Tucson Basin at 1 km² resolution, i.e., a survey area comparable in extent to a single pixel of the Soil Moisture and Ocean Salinity (SMOS) satellite mission. The soil moisture distribution is dominated by climatic variations, notably the North American monsoon, which results in a systematic increase in the standard deviation (observed up to 0.022 m³ m⁻³) as a function of the mean (between 0.06 and 0.14 m³ m⁻³). Two techniques are explored to use the cosmic-ray rover data for hydrologic applications: (1) interpolation of the 22 surveys into a daily soil moisture product by defining an approach that utilizes and quantifies the observed temporal stability, producing an average correlation coefficient of 0.82 for the surveyed soil moisture distributions, and (2) estimation of soil moisture profiles by combining surface moisture from satellite microwave sensors with deeper measurements from the cosmic-ray rover. The interpolated soil moisture and soil moisture profile estimates allow for a basin-wide mass balance calculation of evapotranspiration, totaling 241 mm for the year 2012. Generating soil moisture maps with the cosmic-ray rover at this intermediate scale may help in the calibration and validation of satellite campaigns and may also aid in various large-scale hydrologic studies.
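The temporal-stability idea behind technique (1) can be sketched minimally: if the spatial pattern of soil moisture anomalies persists between surveys, successive survey maps correlate strongly and can be used to interpolate between survey dates. The synthetic pixel values below are assumptions for illustration, not rover data:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical 1-km2 pixel means for two surveys: a time-invariant
# spatial anomaly pattern plus survey-to-survey noise (all in m3/m3)
n_pix = 500
pattern = rng.normal(0.0, 0.02, n_pix)              # persistent spatial anomaly
survey_a = 0.08 + pattern + rng.normal(0, 0.008, n_pix)   # dry-season survey
survey_b = 0.12 + pattern + rng.normal(0, 0.008, n_pix)   # monsoon-season survey

r = np.corrcoef(survey_a, survey_b)[0, 1]
print(round(r, 2))
# a high inter-survey correlation is what justifies interpolating a
# daily product from the time-stable spatial pattern
```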

Chrisman, B.; Zreda, M.



Bayesian Glaciological Modelling to quantify uncertainties in ice core chronologies  

NASA Astrophysics Data System (ADS)

Valuable information about the environment and climate of the past is preserved in ice cores which are drilled through ice sheets in polar and alpine regions. A pivotal part of interpreting the information held within the cores is to build ice core chronologies, i.e. to relate time to depth. Existing dating methods can be categorised as follows: (1) layer counting using the seasonality in signals, (2) glaciological modelling describing processes such as snow accumulation and plastic deformation of ice, (3) comparison with other dated records, or (4) any combination of these. Conventionally, implementation of these approaches does not use statistical methods. In order to quantify dating uncertainties, in this paper we develop the approach of category (2) further. First, the sources of uncertainty involved in glaciological models are formalised. Feeding these into a statistical framework, that we call Bayesian Glaciological Modelling (BGM), allows us to demonstrate the effect that uncertainty in the glaciological model has on the chronology. BGM may also include additional information to constrain the resulting chronology, for example from layer counting or other dated records such as traces from volcanic eruptions. Our case study involves applying BGM to date an Antarctic ice core (a Dyer Plateau core). Working through this example allows us to emphasise the importance of properly assessing uncertain elements in order to arrive at accurate chronologies, including valid dating uncertainties. Valid dating uncertainties, in turn, facilitate the interpretation of environmental and climatic conditions at the location of the ice core as well as the comparison and development of ice core chronologies from different locations.
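A toy version of the BGM idea can be sketched as a grid posterior over a single uncertain glaciological parameter, here the accumulation rate in a simple Nye age-depth model, constrained by one dated volcanic tie point. All numbers (ice thickness, tie-point depth and age) are hypothetical, and the real framework handles many more uncertainty sources:

```python
import numpy as np

H = 2000.0                                  # ice thickness (m), hypothetical
z_tie, age_tie, sd = 500.0, 1000.0, 50.0    # dated volcanic layer: depth, age (yr), 1-sigma

def nye_age(z, b):
    """Age-depth relation for the Nye model with accumulation rate b (m/yr)."""
    return (H / b) * np.log(H / (H - z))

b_grid = np.linspace(0.3, 0.9, 601)                  # uniform prior on b
loglik = -0.5 * ((nye_age(z_tie, b_grid) - age_tie) / sd) ** 2
post = np.exp(loglik - loglik.max())                 # unnormalised posterior
post /= post.sum()
b_mean = float(np.sum(b_grid * post))

# dating uncertainty at another depth follows by propagating the b posterior
ages = nye_age(800.0, b_grid)
age_mean = float(np.sum(ages * post))
age_sd = float(np.sqrt(np.sum((ages - age_mean) ** 2 * post)))
print(b_mean, age_mean, age_sd)
```

The point of the exercise is the last two lines: once model-parameter uncertainty is represented as a posterior, every depth in the chronology inherits a valid dating uncertainty rather than a single deterministic age.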

Klauenberg, Katy; Blackwell, Paul G.; Buck, Caitlin E.; Mulvaney, Robert; Röthlisberger, Regine; Wolff, Eric W.



Quantifying the Uncertainty in Land Loss Estimates in Coastal Louisiana  

NASA Astrophysics Data System (ADS)

For the past twenty-five years the land loss along the Louisiana coast has been recognized as a growing problem. One of the clearest indicators of this land loss is that in 2000 smooth cordgrass (Spartina alterniflora) was turning brown well before its normal dormancy period. In 2001 data were collected using low-altitude helicopter-based transects of the coast, with 8,400 data points being collected. The surveys contained data describing the characteristics of the marsh, including latitude, longitude, marsh condition, marsh color, percent vegetated, and marsh die-back. The 2001 data were compared with previously collected data from 1997. Over 100,000 acres of marsh were affected by the 2000 browning. Satellite imagery can be used to monitor changes in coastlines, vegetation health, and conversion of land to open water. An unsupervised classification was applied to 1997 Landsat TM imagery from the Louisiana coast. Based on the classification, polygons were delineated surrounding areas of water. Using the Kappa Classification Statistical Analysis extension in ArcView, kappa statistics were calculated to quantify the amount of agreement between the unsupervised classification and field-checked data while correcting for agreement due to chance. Numerical results reveal that a straightforward unsupervised classification does a reasonable job of approximating the actual field-checked data. Kappa values of 0.57 and higher have been obtained, which is considered fair to good agreement. This agreement adds credibility to imagery-based estimates of coastal land loss, which affords the opportunity for significant savings of time, labor, and cost compared to field-based monitoring. Refined classifications and use of higher-resolution imagery are expected to yield improved coastal land loss estimates.
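The chance-corrected agreement statistic used above is Cohen's kappa, which can be computed directly from a confusion matrix. The water/land matrix below is hypothetical, chosen only to show the mechanics:

```python
def cohens_kappa(matrix):
    """Cohen's kappa: agreement between two classifications corrected for
    chance, from a square confusion matrix (rows: classification A,
    columns: reference data)."""
    n = sum(sum(row) for row in matrix)
    p_obs = sum(matrix[i][i] for i in range(len(matrix))) / n
    row_tot = [sum(row) for row in matrix]
    col_tot = [sum(matrix[i][j] for i in range(len(matrix)))
               for j in range(len(matrix))]
    # expected agreement if the two classifications were independent
    p_exp = sum(row_tot[i] * col_tot[i] for i in range(len(matrix))) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# hypothetical water vs. land counts: classification rows, field-check columns
m = [[70, 10],
     [15, 105]]
print(round(cohens_kappa(m), 2))   # → 0.74
```

Kappa of 1 means perfect agreement and 0 means agreement no better than chance; values around 0.57, as reported in the study, fall in the commonly cited "fair to good" band.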

Wales, P. M.; Kuszmaul, J. S.; Roberts, C.



Quantifying conformational dynamics using solid-state R1ρ experiments  

NASA Astrophysics Data System (ADS)

We demonstrate the determination of quantitative rates of molecular reorientation in the solid state with rotating-frame (R1ρ) relaxation measurements. Reorientation of the carbon chemical shift anisotropy (CSA) tensor was used to probe site-specific conformational exchange in a model system, d6-dimethyl sulfone (d6-DMS). The CSA as a probe of exchange has the advantage that it can still be utilized when there is no dipolar mechanism (i.e. no protons attached to the site of interest). Other works have presented R1ρ measurements as a general indicator of dynamics, but this study extracts quantitative rates of molecular reorientation from the R1ρ values. Challenges of this technique include precise knowledge of the sample temperature and determination of the R2⁰ contribution to the observed relaxation rate from interactions other than molecular reorientation, such as residual dipolar couplings or fast-timescale dynamics; determination of this term is necessary in order to quantify the exchange rate because of covariance between the two terms. Low-temperature experiments measured an R2⁰ value of 1.8 ± 0.2 s⁻¹. Allowing for an additional relaxation term (R2⁰), modeled as either temperature-dependent or temperature-independent, rates of molecular reorientation were extracted from field-strength-dependent R1ρ measurements at four different temperatures, and the activation energy was determined from these exchange rates. The activation energies were 74.7 ± 4.3 kJ/mol and 71.7 ± 2.9 kJ/mol for the temperature-independent and temperature-dependent R2⁰ models, respectively, in excellent agreement with literature values. The results of this study suggest important methodological considerations for applying the method to more complicated systems such as proteins, including the importance of deuterating samples and the need to make assumptions regarding the R2⁰ contribution to relaxation.
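The final step of the analysis, extracting an activation energy from exchange rates measured at several temperatures, is a linear Arrhenius fit of ln k against 1/T. The sketch below uses synthetic rates generated from an assumed Ea near the reported values, not the actual DMS data:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# hypothetical exchange rates k (1/s) at four temperatures (K), generated
# here from an assumed Ea = 73 kJ/mol and prefactor A for illustration
T = np.array([240.0, 250.0, 260.0, 270.0])
Ea_true, A = 73e3, 1e14
k = A * np.exp(-Ea_true / (R * T))

# Arrhenius: ln k = ln A - Ea/(R*T), so a fit of ln k vs 1/T has slope -Ea/R
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_fit = -slope * R / 1e3   # kJ/mol
print(round(Ea_fit, 1))     # → 73.0
```

With real data each k carries an uncertainty, so a weighted fit (and error propagation on the slope) would replace the plain `polyfit` call.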

Quinn, Caitlin M.; McDermott, Ann E.



Verifying and Quantifying Helicobacter pylori Infection Status of Research Mice  

PubMed Central

Mice used to model helicobacter gastritis should be screened by PCR prior to experimental dosing to confirm the absence of enterohepatic Helicobacter species (EHS) that colonize the cecum and colon of mice. Natural infections with EHS are common, and the impact of concurrent EHS infection on Helicobacter pylori-induced gastric pathology has been demonstrated. PCR of DNA isolated from gastric tissue is the most sensitive and efficient technique to confirm the H. pylori infection status of research mice after experimental dosing. To determine the level of colonization, quantitative PCR to estimate the equivalent colony-forming units of H. pylori per µg of mouse DNA is less labor-intensive than limiting-dilution culture methods. Culture recovery of H. pylori is a less sensitive technique due to its fastidious in vitro culture requirements; however, recovery of viable organisms confirms persistent colonization and allows for further molecular characterization of wild-type or mutant H. pylori strains. ELISA is useful to confirm PCR and culture results and to correlate pro- and anti-inflammatory host immune responses with lesion severity and cytokine gene or protein expression. Histologic assessment with a silver stain has a role in identifying gastric bacteria with spiral morphology consistent with H. pylori but is a relatively insensitive technique and lacks specificity. A variety of spiral bacteria colonizing the lower bowel of mice can be observed in the stomach, particularly if gastric atrophy develops, and these species are not morphologically distinct at the level of light microscopy either in the stomach or lower bowel. Other less commonly used techniques to localize H. pylori in tissues include immunohistochemistry using labeled polyclonal antisera or in situ hybridization for H. pylori rRNA. In this chapter, we will summarize strategies to allow initiation of experiments with helicobacter-free mice and then focus on PCR and ELISA techniques to verify and quantify H. pylori infection of research mice.
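The quantitative PCR step, estimating colony-forming-unit equivalents per µg of mouse DNA from a dilution-series standard curve, might be sketched as follows. The Cq values, dilution series, and DNA input are hypothetical, chosen only to illustrate the standard-curve arithmetic:

```python
import numpy as np

# hypothetical standard curve: Cq values for 10-fold dilutions of
# H. pylori genomic DNA of known copy number
log10_copies = np.array([6.0, 5.0, 4.0, 3.0, 2.0])
cq_standards = np.array([16.1, 19.5, 22.8, 26.2, 29.6])

# linear regression Cq = m*log10(copies) + b; a slope near -3.32
# corresponds to ~100% amplification efficiency
m, b = np.polyfit(log10_copies, cq_standards, 1)
efficiency = 10 ** (-1.0 / m) - 1.0

def copies_from_cq(cq):
    """Interpolate genome-copy number from a measured Cq."""
    return 10 ** ((cq - b) / m)

# hypothetical sample: Cq 24.0 from a reaction with 0.05 ug gastric DNA
cfu_equiv_per_ug = copies_from_cq(24.0) / 0.05
print(m, efficiency, cfu_equiv_per_ug)
```

Reporting copies as "CFU equivalents" assumes one genome copy per colony-forming unit, which is the usual convention for this assay.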

Whary, Mark T.; Ge, Zhongming; Fox, James G.



Quantifying conformational dynamics using solid-state R1ρ experiments  

PubMed Central

We demonstrate the determination of quantitative rates of molecular reorientation in the solid state with rotating-frame (R1ρ) relaxation measurements. Reorientation of the carbon chemical shift anisotropy (CSA) tensor was used to probe site-specific conformational exchange in a model system, d6-dimethyl sulfone (d6-DMS). The CSA as a probe of exchange has the advantage that it can still be utilized when there is no dipolar mechanism (i.e. no protons attached to the site of interest). Other works have presented R1ρ measurements as a general indicator of dynamics, but this study extracts quantitative rates of molecular reorientation from the R1ρ values. Challenges of this technique include precise knowledge of the sample temperature and determination of the R2⁰ contribution to the observed relaxation rate from interactions other than molecular reorientation, such as residual dipolar couplings or fast-timescale dynamics; determination of this term is necessary in order to quantify the exchange rate because of covariance between the two terms. Low-temperature experiments measured an R2⁰ value of 1.8 ± 0.2 s⁻¹. Allowing for an additional relaxation term (R2⁰), modeled as either temperature-dependent or temperature-independent, rates of molecular reorientation were extracted from field-strength-dependent R1ρ measurements at four different temperatures, and the activation energy was determined from these exchange rates. The activation energies were 74.7 ± 4.3 kJ/mol and 71.7 ± 2.9 kJ/mol for the temperature-independent and temperature-dependent R2⁰ models, respectively, in excellent agreement with literature values. The results of this study suggest important methodological considerations for applying the method to more complicated systems such as proteins, including the importance of deuterating samples and the need to make assumptions regarding the R2⁰ contribution to relaxation.

Quinn, Caitlin M.




SciTech Connect

Spectroscopic selection has been the most productive technique for the selection of galaxy-scale strong gravitational lens systems with known redshifts. Statistically significant samples of strong lenses provide a powerful method for measuring the mass-density parameters of the lensing population, but results can only be generalized to the parent population if the lensing selection biases are sufficiently understood. We perform controlled Monte Carlo simulations of spectroscopic lens surveys in order to quantify the bias of lenses relative to parent galaxies in velocity dispersion, mass axis ratio, and mass-density profile. For parameters typical of the SLACS and BELLS surveys, we find (1) no significant mass axis ratio detection bias of lenses relative to parent galaxies; (2) a very small detection bias toward shallow mass-density profiles, which is likely negligible compared to other sources of uncertainty in this parameter; (3) a detection bias toward smaller Einstein radius for systems drawn from parent populations with group- and cluster-scale lensing masses; and (4) a lens-modeling bias toward larger velocity dispersions for systems drawn from parent samples with sub-arcsecond mean Einstein radii. This last finding indicates that the incorporation of velocity-dispersion upper limits of non-lenses is an important ingredient for unbiased analyses of spectroscopically selected lens samples. In general, we find that the completeness of spectroscopic lens surveys in the plane of Einstein radius and mass-density profile power-law index is quite uniform, up to a sharp drop in the region of large Einstein radius and steep mass-density profile, and hence that such surveys are ideally suited to the study of massive field galaxies.
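The kind of controlled Monte Carlo comparison described here can be illustrated with a toy selection-bias experiment: for singular isothermal sphere lenses the Einstein radius scales as σ², so the lensing cross-section scales roughly as σ⁴, and a lens-selected sample is biased toward high velocity dispersion relative to its parent population. The population parameters below are assumptions, not the SLACS/BELLS survey values:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical parent population of velocity dispersions (km/s)
sigma_parent = rng.normal(220.0, 40.0, 200_000)
sigma_parent = sigma_parent[sigma_parent > 100.0]   # drop unphysical tail

# SIS lensing cross-section ~ theta_E^2 ~ sigma^4: sample lenses with
# probability proportional to sigma^4
weights = sigma_parent ** 4
weights /= weights.sum()
lens_idx = rng.choice(sigma_parent.size, size=5_000, p=weights)
sigma_lens = sigma_parent[lens_idx]

print(sigma_parent.mean(), sigma_lens.mean())
# the lens-selected mean sits well above the parent mean: a pure
# cross-section selection bias, before any survey-specific effects
```

Comparing the two distributions is exactly the parent-versus-lenses comparison the simulations perform; the paper's point is quantifying how much of such bias survives the full spectroscopic selection function.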

Arneson, Ryan A.; Brownstein, Joel R.; Bolton, Adam S. (Department of Physics and Astronomy, University of Utah, Salt Lake City, UT 84112, United States)



Quantifying Hg within ectomycorrhizal fruiting bodies, from emergence to senescence.  


Ectomycorrhizal fruiting bodies (basidiomata) collected from forested areas in southwestern New Brunswick were analyzed for total mercury, sulphur, nitrogen, and carbon concentrations (THg, TS, TN, and TC, respectively). This analysis was done for caps and stalks and by development stage (emergent, mature, senescent) across 27 species associated with five classes, eight families, and 13 genera. Across the species, THg correlated positively with TN and TS, implying that N as well as S mediated transfer of Hg from the mycelia into the basidiomata, with THg ranging from 3 to 10,457 ppb. TS, TN, and TC varied from 0.07 to 1, 1 to 11, and 43 to 53 %, respectively. Cap and stalk THg, TS, TN, and TC were also correlated to one another, with mean stalk/cap ratios of 0.59, 0.76, 0.71, and 0.98, respectively. Soil availability indexed by THg, TS, TN, and TC within the forest floor contributed to basidiomatal THg as well. THg, THg/TS, and THg/TN varied strongly by species. These variations involved: (i) no growth dilution and no volatilization (Group I), (ii) growth dilution only (Group II), (iii) growth dilution followed by loss during senescence (Group III), and (iv) growth dilution combined with loss from emergence onward (Group IV). Depending on species, TN and TS remained the same or declined from 100 % at emergence to about 80 and 70 % at senescence. The lack of THg decline for the Group I species would be due to HgS encapsulation. Reanalyzing the freeze-dried samples revealed that THg continued to drop during the first year of air-dry storage for the Group II, III, and IV species, but TS, TN, and TC remained stable. The results were quantified by way of best-fitted regression models. PMID:23153807

Nasr, Mina; Malloch, David W; Arp, Paul A



Talk shows’ representations of interpersonal conflicts  

Microsoft Academic Search

In the past ten years, daytime talk shows became very popular among television programmers and viewers alike. Given the large audiences to whom talk shows communicate, it is important to analyze the messages contained in the programs. Remarkably little academic attention has been paid to this phenomenon, however. The present study focuses on the presentation of interpersonal conflicts, particularly regarding

Susan L. Brinson; J. Emmett Winn



International Plowing Match & Farm Machinery Show  

NSDL National Science Digital Library

The 1995 International Plowing Match & Farm Machinery Show in Ontario, Canada has a site on the Web. The IPM is a non-profit organization of volunteers that annually organizes Canada's largest farm machinery show. The event is both commercial and educational. Thousands of school children and educators attend and participate in organized educational activities.



The Language of Show Biz: A Dictionary.  

ERIC Educational Resources Information Center

This dictionary of the language of show biz provides the layman with definitions and essays on terms and expressions often used in show business. The overall pattern of selection was intended to be more rather than less inclusive, though radio, television, and film terms were deliberately omitted. Lengthy explanations are sometimes used to express…

Sergel, Sherman Louis, Ed.


Show Me: Automatic Presentation for Visual Analysis  

Microsoft Academic Search

This paper describes Show Me, an integrated set of user interface commands and defaults that incorporate automatic presentation into a commercial visual analysis system called Tableau. A key aspect of Tableau is VizQL, a language for specifying views, which is used by Show Me to extend automatic presentation to the generation of tables of views (commonly called small multiple displays).

Jock D. Mackinlay; Pat Hanrahan; Chris Stolte



Acculturation, Cultivation, and Daytime TV Talk Shows.  

ERIC Educational Resources Information Center

Explores the cultivation phenomenon among international college students in the United States by examining the connection between levels of acculturation, daytime TV talk show viewing, and beliefs about social reality. Finds that students who scored low on acculturation and watched a great deal of daytime talk shows had a more negative perception…

Woo, Hyung-Jin; Dominick, Joseph R.



The Language of Show Biz: A Dictionary.  

ERIC Educational Resources Information Center

This dictionary of the language of show biz provides the layman with definitions and essays on terms and expressions often used in show business. The overall pattern of selection was intended to be more rather than less inclusive, though radio, television, and film terms were deliberately omitted. Lengthy explanations are sometimes used to…

Sergel, Sherman Louis, Ed.


Revelle revisited: Buffer factors that quantify the response of ocean chemistry to changes in DIC and alkalinity  

NASA Astrophysics Data System (ADS)

We derive explicit expressions of the Revelle factor and several other buffer factors of interest to climate change scientists and those studying ocean acidification. These buffer factors quantify the sensitivity of CO2 and H+ concentrations ([CO2] and [H+]) and CaCO3 saturation state (Ω) to changes in dissolved inorganic carbon concentration (DIC) and alkalinity (Alk). The explicit expressions of these buffer factors provide a convenient means to compare the degree of buffering of [CO2], [H+], and Ω in different regions of the oceans and at different times in the future and to gain insight into the buffering mechanisms. All six buffer factors have roughly similar values, and all reach an absolute minimum when DIC = Alk (pH ≈ 7.5). Surface maps of the buffer factors generally show stronger buffering capacity in the subtropical gyres relative to the polar regions. As the dissolution of anthropogenic CO2 increases the DIC of surface seawater over the next century, all the buffer factors will decrease, resulting in a much greater sensitivity to local variations in DIC and Alk. For example, diurnal and seasonal variations in pH and Ω caused by photosynthesis and respiration will be greatly amplified. Buffer factors provide convenient means to quantify the effect that changes in DIC and Alk have on seawater chemistry. They should also help illuminate the role that various physical and biological processes have in determining the oceanic response to an increase in atmospheric CO2.
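A numerical Revelle factor can be sketched for a toy carbonate system that keeps only carbonate alkalinity (no borate or water terms). The equilibrium constants below are assumed round values for warm surface seawater, so the result only roughly approximates the full-system buffer factor:

```python
# Toy carbonate system: DIC = [CO2] + [HCO3-] + [CO3--],
# carbonate alkalinity Alk = [HCO3-] + 2[CO3--], all in mol/kg.
K1, K2 = 1.4e-6, 1.2e-9   # assumed first/second dissociation constants

def h_from_dic_alk(dic, alk):
    """Solve Alk = DIC*(K1*h + 2*K1*K2)/(h^2 + K1*h + K1*K2) for [H+]
    by bisection (alkalinity decreases monotonically with [H+])."""
    lo, hi = 1e-10, 1e-6
    for _ in range(200):
        h = 0.5 * (lo + hi)
        alk_calc = dic * (K1 * h + 2 * K1 * K2) / (h * h + K1 * h + K1 * K2)
        if alk_calc > alk:
            lo = h        # too little H+ -> too much carbonate -> raise H+
        else:
            hi = h
    return h

def co2_aq(dic, alk):
    h = h_from_dic_alk(dic, alk)
    return dic * h * h / (h * h + K1 * h + K1 * K2)

def revelle(dic, alk, eps=1e-8):
    """Revelle factor: (d[CO2]/[CO2]) / (dDIC/DIC) at constant alkalinity."""
    c0, c1 = co2_aq(dic, alk), co2_aq(dic + eps, alk)
    return ((c1 - c0) / c0) / (eps / dic)

# assumed surface-ocean values: DIC 2000, Alk 2300 umol/kg
print(revelle(2.0e-3, 2.3e-3))
```

Even this stripped-down system reproduces the familiar order of magnitude (a Revelle factor around 10), i.e. a 1% rise in DIC drives roughly a 10% rise in [CO2].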

Egleston, Eric S.; Sabine, Christopher L.; Morel, François M. M.



Quantifying the degradation and dilution contribution to natural attenuation of contaminants by means of an open system rayleigh equation.  


Quantifying the contributions of destructive and nondestructive processes to natural attenuation (NA) of groundwater pollution plumes is of high importance to the evaluation and acceptance of NA as a remediation strategy. Dilution as a consequence of hydrodynamic dispersion may contribute considerably to NA, however, without reducing the mass of pollution. Unfortunately, tracers to quantify dilution are usually lacking. Degradation of low-molecular-weight organic chemicals such as BTEX, chlorinated ethenes, and MTBE, by contrast, is uniquely associated with increases in isotope ratios for steady-state plumes. Compound-specific isotope analysis (CSIA) data are commonly interpreted by means of the Rayleigh equation, originally developed for closed systems, to calculate the extent of degradation under open-system field conditions. For that reason, the validity of this approach has been questioned. The Rayleigh equation was accordingly modified to account for dilution; applied to a groundwater benzene plume, the modified equation showed that dilution contributed several to many times more to NA than biodegradation. The derived equations also (i) underlined that field-derived isotopic enrichment factors underestimate the actual values operative, as a consequence of dilution, and (ii) provided a check on the lower limit of isotopic fractionation, thereby resulting in more reliable predictions of the extent of degradation. PMID:17711212
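The closed-system Rayleigh bookkeeping that the paper extends can be sketched as follows: isotope shift gives the fraction degraded, and any concentration decline beyond that is attributed to dilution. The isotope values, enrichment factor, and concentrations below are hypothetical:

```python
import math

def degradation_extent(delta, delta0, eps):
    """Closed-system Rayleigh: fraction degraded B = 1 - f, where the
    fraction remaining is f = exp((delta - delta0) / eps); delta values
    in permil, enrichment factor eps negative for normal fractionation."""
    f = math.exp((delta - delta0) / eps)
    return 1.0 - f

# hypothetical benzene plume: source d13C = -28.0 permil, downgradient
# sample -25.0 permil, enrichment factor eps = -2.0 permil
B = degradation_extent(-25.0, -28.0, -2.0)

# if the total concentration fell from 10 to 1 mg/L, the decline not
# explained by degradation is attributed to dilution
f_total = 1.0 / 10.0             # fraction remaining overall
f_deg = 1.0 - B                  # fraction remaining after degradation alone
f_dil = f_total / f_deg          # fraction remaining after dilution alone
print(B, f_dil)
```

In this illustration degradation accounts for about 78% mass removal, while dilution reduces concentration by a further factor of ~2.2, the kind of partitioning the open-system modification formalizes.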

Van Breukelen, Boris M



An experimental model to quantify horizontal transmission of Mycoplasma gallisepticum  

Microsoft Academic Search

Before interventions to control horizontal transmission of Mycoplasma gallisepticum can be tested, a suitable experimental model should be available. Transmission dynamics in a flock can be quantified by two parameters: the average number of secondary cases infected by one typical infectious case (R0) and the number of new infections that occur due to one infectious animal per unit of time

A. Feberwee; D. R. Mekkes; D. Klinkenberg; J. C. M. Vernooij; A. L. J. Gielkens; J. A. Stegeman



Quantifying NUMA and contention effects in multi-GPU systems  

Microsoft Academic Search

As system architects strive for increased density and power efficiency, the traditional compute node is being augmented with an increasing number of graphics processing units (GPUs). The integration of multiple GPUs per node introduces complex performance phenomena including non-uniform memory access (NUMA) and contention for shared system resources. Utilizing the Keeneland system, this paper quantifies these effects and presents some

Kyle Spafford; Jeremy S. Meredith; Jeffrey S. Vetter



Quantifying Water Stress Using Total Water Volumes and GRACE  

NASA Astrophysics Data System (ADS)

Water will follow oil as the next critical resource leading to unrest and uprisings globally. To better manage this threat, an improved understanding of the distribution of water stress is required today. This study builds upon previous efforts to characterize water stress by improving both the quantification of human water use and the definition of water availability. Current statistics on human water use are often outdated or inaccurately reported nationally, especially for groundwater. This study improves these estimates by defining human water use in two ways. First, we use NASA's Gravity Recovery and Climate Experiment (GRACE) to isolate the anthropogenic signal in water storage anomalies, which we equate to water use. Second, we quantify an ideal water demand by using average water requirements for the domestic, industrial, and agricultural water use sectors. Water availability has traditionally been limited to "renewable" water, which ignores large, stored water sources that humans use. We compare water stress estimates derived using either renewable water or the total volume of water globally. We use the best-available data to quantify total aquifer and surface water volumes, as compared to groundwater recharge and surface water runoff from land-surface models. The work presented here should provide a more realistic image of water stress by explicitly quantifying groundwater, defining water availability as total water supply, and using GRACE to more accurately quantify water use.

Richey, A. S.; Famiglietti, J. S.; Druffel-Rodriguez, R.



Quantifying an imagery system's performance with transformational mission data analysis  

Microsoft Academic Search

Traditionally, the performance of an imagery intelligence collection system is quantified by a satisfaction percentage. The mission satisfaction is the number of images collected divided by the number of images requested. This paradigm assumes the information needed is generated from the collected imagery data if the data is delivered on time to the consumer. As persistent surveillance requirements become more

Alisha W. Mauck



Choosing among Techniques for Quantifying Single-Case Intervention Effectiveness  

ERIC Educational Resources Information Center

If single-case experimental designs are to be used to establish guidelines for evidence-based interventions in clinical and educational settings, numerical values that reflect treatment effect sizes are required. The present study compares four recently developed procedures for quantifying the magnitude of intervention effect using data with…

Manolov, Rumen; Solanas, Antonio; Sierra, Vicenta; Evans, Jonathan J.



Sampling and quantifying invertebrates from drinking water distribution mains  

Microsoft Academic Search

Water utilities in the Netherlands aim at controlling the multiplication of (micro-) organisms by distributing biologically stable water through biologically stable materials. Disinfectant residuals are absent or very low. To be able to assess invertebrate abundance, methods for sampling and quantifying these animals from distribution mains were optimised and evaluated. The presented method for collecting invertebrates consists of unidirectionally flushing

J. Hein M. van Lieverloo; Dick W. Bosboom; Geo L. Bakker; Anke J. Brouwer; Remko Voogt; Josje E. M. De Roos



Design of Experiments to Quantify Communication Satellite System Performance.  

National Technical Information Service (NTIS)

The report describes the steps for designing experiments to quantify the performance of a communication satellite system according to the methods specified by ANS X3.141. Performance is described in terms of performance parameters that are user-oriented a...

R. D. Cass; M. J. Miles



Field sampling method for quantifying odorants in humid environments  

Technology Transfer Automated Retrieval System (TEKTRAN)

Most air quality studies in agricultural environments typically use thermal desorption analysis for quantifying volatile organic compounds (VOC) associated with odor. Carbon molecular sieves (CMS) are popular sorbent materials used in these studies. However, there is little information on the effe...


Quantifying Qualitative Analyses of Verbal Data: A Practical Guide  

Microsoft Academic Search

This article provides one example of a method of analyzing qualitative data in an objective and quantifiable way. Although the application of the method is illustrated in the context of verbal data such as explanations, interviews, problem-solving protocols, and retrospective reports, in principle, the mechanics of the method can be adapted for coding other types of qualitative data such as

Michelene T. H. Chi
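Quantified coding schemes of the kind the article describes are typically checked for inter-coder agreement; a minimal hypothetical sketch using Cohen's kappa (the codes and data are illustrative, and kappa is a common reliability check rather than necessarily the article's own procedure):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two coders beyond what chance alone would produce."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two coders label the same six verbal-protocol segments.
a = ["explain", "explain", "monitor", "other", "explain", "monitor"]
b = ["explain", "monitor", "monitor", "other", "explain", "monitor"]
print(round(cohens_kappa(a, b), 2))  # 0.74
```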



The next step: quantifying infrastructure interdependencies to improve security  

Microsoft Academic Search

Understanding cascading effects among interdependent infrastructure systems can have an important effect on public policies that aim to address vulnerabilities in critical infrastructures, especially those policies pertaining to infrastructure security. Efforts to quantify these cascading effects and illustrative examples of such metrics are presented. The first set of examples is based upon various impacts that the 14 August 2003 blackout

Rae Zimmerman; Carlos E. Restrepo



Quantifying Variation in the Strengths of Species Interactions  

Microsoft Academic Search

Understanding how the strengths of species interactions are distributed among species is critical for developing predictive models of natural food webs as well as for developing management and conservation strategies. Recently a number of ecologists have attempted to clarify the concepts of "strong" and "weak" interactors in a community, and to derive techniques for quantifying interaction strengths in the field, using

Eric L. Berlow; Sergio A. Navarrete; Cheryl J. Briggs; Mary E. Power; Bruce A. Menge



Trophic field overlap: A new approach to quantify keystone species  

Microsoft Academic Search

It is a current challenge to better understand the relative importance of species in ecosystems, and the network perspective is able to offer quantitative tools for this. It is plausible to assume, in general, that well-linked species, being key interactors, are also more important for the community. Recently a number of methods have been suggested for quantifying the network position

Ferenc Jordán; Wei-chung Liu; Ágnes Mike
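Quantifying the network position of a species, as this entry describes, starts from a food-web graph; a minimal hypothetical sketch ranking species by a simple degree-based index (the web and the index are illustrative assumptions, not the paper's trophic-field-overlap metric):

```python
# Adjacency list: each species maps to the species that eat it.
food_web = {
    "plankton": ["mussel", "barnacle"],
    "algae":    ["snail"],
    "mussel":   ["seastar"],
    "barnacle": ["seastar"],
    "snail":    ["seastar"],
    "seastar":  [],
}

def degree(species):
    """Total trophic links: consumers of this species plus its own prey."""
    out_links = len(food_web[species])
    in_links = sum(species in consumers for consumers in food_web.values())
    return out_links + in_links

ranked = sorted(food_web, key=degree, reverse=True)
print(ranked[0])  # seastar, the best-linked species in this toy web
```

A well-linked species scores highest here, matching the abstract's working assumption that key interactors tend to be the most important for the community.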



Quantifying the indirect effects of a marketing contact  

Microsoft Academic Search

Internet banner advertising has become an important marketing channel in recent years. There is a strong demand in the industry to quantify the expected return from a marketing contact. Contacts have both direct effects, such as a banner ad prompting the viewer to click and buy, and indirect effects, such as building awareness so that future exposures are more

Dingxi Qiu; Edward C. Malthouse



A Sustainability Initiative to Quantify Carbon Sequestration by Campus Trees  

ERIC Educational Resources Information Center

Over 3,900 trees on a university campus were inventoried by an instructor-led team of geography undergraduates in order to quantify the carbon sequestration associated with biomass growth. The setting of the project is described, together with its logistics, methodology, outcomes, and benefits. This hands-on project provided a team of students…

Cox, Helen M.
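Tree-inventory carbon estimates of this kind commonly apply an allometric equation to measured trunk diameters; a minimal hypothetical sketch, with placeholder coefficients that are illustrative assumptions rather than the study's values:

```python
def tree_carbon_kg(dbh_cm, a=0.1, b=2.4, carbon_fraction=0.5):
    """Estimate stored carbon from diameter at breast height (DBH).

    Uses the generic allometric form biomass = a * dbh**b, with roughly
    half of dry biomass counted as carbon. Coefficients are placeholders.
    """
    biomass_kg = a * dbh_cm ** b
    return carbon_fraction * biomass_kg

# Summing over a small sample of inventoried trees.
campus_trees_dbh = [12.0, 30.5, 45.0]
total = sum(tree_carbon_kg(d) for d in campus_trees_dbh)
print(f"estimated carbon stock: {total:.1f} kg")
```

Real inventories draw species-specific coefficients from published allometric tables; the structure of the calculation is what this sketch shows.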



Quantifying variable rainfall intensity events on runoff and sediment losses  

Technology Transfer Automated Retrieval System (TEKTRAN)

Coastal Plain soils in Georgia are susceptible to runoff, sediment, and chemical losses from short duration-high intensity, runoff producing storms at critical times during the growing season. We quantified runoff and sediment losses from a Tifton loamy sand managed under conventional- (CT) and stri...


Designing a systematic landscape monitoring approach for quantifying ecosystem services  

EPA Science Inventory

A key problem encountered early on by governments striving to incorporate the ecosystem services concept into decision making is quantifying ecosystem services across large landscapes. Basically, they are faced with determining what to measure, how to measure it and how to aggre...


Raman spectroscopy for quantifying cholesterol in intact coronary artery wall  

Microsoft Academic Search

The chemical composition of vascular lesions, an important determinant of plaque progression and rupture, cannot presently be determined in vivo. Prior studies have shown that Raman spectroscopy can accurately quantify the amounts of major lipid classes and calcium salts in homogenized coronary artery tissue. This study determines how the relative cholesterol content, which is calculated from Raman spectra collected

Tjeerd J. Römer; James F. Brennan III; Tom C. Bakker Schut; Rolf Wolthuis; Ria C. M. van den Hoogen; Jef J. Emeis; Arnoud van der Laarse; Albert V. G. Bruschke; Gerwin J. Puppels
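Estimating relative component amounts from a measured Raman spectrum is commonly framed as a linear least-squares fit against reference spectra of the pure components; a minimal sketch with synthetic data (the mock spectra and weights are illustrative, not the authors' calibration):

```python
import numpy as np

# Mock reference spectra for two components, each a single Gaussian band.
wavenumbers = np.linspace(0.0, 1.0, 200)
ref_chol = np.exp(-((wavenumbers - 0.3) ** 2) / 0.002)  # mock cholesterol band
ref_calc = np.exp(-((wavenumbers - 0.7) ** 2) / 0.002)  # mock calcium-salt band

# Simulated measurement: a known mixture plus instrument noise.
rng = np.random.default_rng(0)
measured = 0.6 * ref_chol + 0.4 * ref_calc + rng.normal(0.0, 0.01, wavenumbers.size)

# Solve measured ~ A @ weights by ordinary least squares.
A = np.column_stack([ref_chol, ref_calc])
weights, *_ = np.linalg.lstsq(A, measured, rcond=None)
relative_chol = weights[0] / weights.sum()
print(f"relative cholesterol content ~ {relative_chol:.2f}")
```

The fit recovers the mixing weights because the reference bands are linearly independent; real spectra need baseline correction and a fuller component basis before this step.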