Note: This page contains sample records for the topic "quantified results show" from Science.gov.
While these samples are representative of the content of Science.gov,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of Science.gov
to obtain the most current and comprehensive results.
Last update: August 15, 2014.
1

Quantifying causal emergence shows that macro can beat micro  

PubMed Central

Causal interactions within complex systems can be analyzed at multiple spatial and temporal scales. For example, the brain can be analyzed at the level of neurons, neuronal groups, and areas, over tens, hundreds, or thousands of milliseconds. It is widely assumed that, once a micro level is fixed, macro levels are fixed too, a relation called supervenience. It is also assumed that, although macro descriptions may be convenient, only the micro level is causally complete, because it includes every detail, thus leaving no room for causation at the macro level. However, this assumption can only be evaluated under a proper measure of causation. Here, we use a measure [effective information (EI)] that depends on both the effectiveness of a system’s mechanisms and the size of its state space: EI is higher the more the mechanisms constrain the system’s possible past and future states. By measuring EI at micro and macro levels in simple systems whose micro mechanisms are fixed, we show that for certain causal architectures EI can peak at a macro level in space and/or time. This happens when coarse-grained macro mechanisms are more effective (more deterministic and/or less degenerate) than the underlying micro mechanisms, to an extent that overcomes the smaller state space. Thus, although the macro level supervenes upon the micro, it can supersede it causally, leading to genuine causal emergence—the gain in EI when moving from a micro to a macro level of analysis.

Hoel, Erik P.; Albantakis, Larissa; Tononi, Giulio

2013-01-01
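
The effective information (EI) measure described in the abstract above has a compact operational form: impose a uniform (maximum-entropy) distribution over the system's states and compute the mutual information between that intervention and the resulting next state. A minimal Python sketch, using a hypothetical 8-state toy network of my own construction (not the authors' code or examples) in which coarse-graining seven noisy, degenerate micro states into one macro state raises EI:

```python
import numpy as np

def effective_information(tpm):
    """EI = mutual information between a uniform (max-entropy) intervention
    on the current state and the resulting next state."""
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    p_next = tpm.mean(axis=0)            # next-state marginal under do(uniform)
    h_cond = np.mean([entropy(row) for row in tpm])
    return entropy(p_next) - h_cond

# Hypothetical micro mechanism: states 0-6 mix uniformly among themselves
# (noisy and degenerate), state 7 maps deterministically to itself.
micro = np.zeros((8, 8))
micro[:7, :7] = 1.0 / 7.0
micro[7, 7] = 1.0

# Coarse-graining {0..6} -> A, {7} -> B makes the dynamics deterministic.
macro = np.eye(2)

print(f"micro EI = {effective_information(micro):.3f} bits")   # ~0.54
print(f"macro EI = {effective_information(macro):.3f} bits")   # 1.00
```

The roughly 0.46-bit gain at the macro level is the causal emergence the abstract defines: the coarse-grained mechanism is more deterministic and less degenerate, and here that outweighs its smaller state space.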

2

Quantifying causal emergence shows that macro can beat micro.  

PubMed

Causal interactions within complex systems can be analyzed at multiple spatial and temporal scales. For example, the brain can be analyzed at the level of neurons, neuronal groups, and areas, over tens, hundreds, or thousands of milliseconds. It is widely assumed that, once a micro level is fixed, macro levels are fixed too, a relation called supervenience. It is also assumed that, although macro descriptions may be convenient, only the micro level is causally complete, because it includes every detail, thus leaving no room for causation at the macro level. However, this assumption can only be evaluated under a proper measure of causation. Here, we use a measure [effective information (EI)] that depends on both the effectiveness of a system's mechanisms and the size of its state space: EI is higher the more the mechanisms constrain the system's possible past and future states. By measuring EI at micro and macro levels in simple systems whose micro mechanisms are fixed, we show that for certain causal architectures EI can peak at a macro level in space and/or time. This happens when coarse-grained macro mechanisms are more effective (more deterministic and/or less degenerate) than the underlying micro mechanisms, to an extent that overcomes the smaller state space. Thus, although the macro level supervenes upon the micro, it can supersede it causally, leading to genuine causal emergence--the gain in EI when moving from a micro to a macro level of analysis. PMID:24248356

Hoel, Erik P; Albantakis, Larissa; Tononi, Giulio

2013-12-01

3

13. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF ...  

Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

13. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF POOR CONSTRUCTION WORK. THOUGH NOT A SERIOUS STRUCTURAL DEFICIENCY, THE 'HONEYCOMB' TEXTURE OF THE CONCRETE SURFACE WAS THE RESULT OF INADEQUATE TAMPING AT THE TIME OF THE INITIAL 'POUR'. - Hume Lake Dam, Sequoia National Forest, Hume, Fresno County, CA

4

Quantifying IOHDR brachytherapy underdosage resulting from an incomplete scatter environment  

SciTech Connect

Purpose: Most brachytherapy planning systems are based on a dose calculation algorithm that assumes an infinite scatter environment surrounding the target volume and applicator. Dosimetric errors from this assumption are generally negligible. However, in intraoperative high-dose-rate brachytherapy (IOHDR) where treatment catheters are typically laid either directly on a tumor bed or within applicators that may have little or no scatter material above them, the lack of scatter from one side of the applicator can result in underdosage during treatment. This study was carried out to investigate the magnitude of this underdosage. Methods: IOHDR treatment geometries were simulated using a solid water phantom beneath an applicator with varying amounts of bolus material on the top and sides of the applicator to account for missing tissue. Treatment plans were developed for 3 different treatment surface areas (4 x 4, 7 x 7, 12 x 12 cm²), each with prescription points located at 3 distances (0.5 cm, 1.0 cm, and 1.5 cm) from the source dwell positions. Ionization measurements were made with a liquid-filled ionization chamber linear array with a dedicated electrometer and data acquisition system. Results: Measurements showed that the magnitude of the underdosage varies from about 8% to 13% of the prescription dose as the prescription depth is increased from 0.5 cm to 1.5 cm. This treatment error was found to be independent of the irradiated area and strongly dependent on the prescription distance. Furthermore, for a given prescription depth, measurements in planes parallel to an applicator at distances up to 4.0 cm from the applicator plane showed that the dose delivery error is equal in magnitude throughout the target volume. Conclusion: This study demonstrates the magnitude of underdosage in IOHDR treatments delivered in a geometry that may not result in a full scatter environment around the applicator. This implies that the target volume and, specifically, the prescription depth (tumor bed) may get a dose significantly less than prescribed. It might be clinically relevant to correct for this inaccuracy.

Raina, Sanjay; Avadhani, Jaiteerth S.; Oh, Moonseong; Malhotra, Harish K.; Jaggernauth, Wainwright; Kuettel, Michael R.; Podgorsak, Matthew B. [Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, NY (United States)]. E-mail: matthew.podgorsak@roswellpark.org

2005-04-01
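
The reported depth dependence (roughly 8% underdosage at 0.5 cm rising to roughly 13% at 1.5 cm, independent of field size) suggests a simple first-order correction estimate. The interpolation below is only an illustrative reading of those two reported endpoints, not a clinically validated model:

```python
import numpy as np

# Reported endpoints: ~8% underdosage at 0.5 cm depth, ~13% at 1.5 cm.
depths_cm = np.array([0.5, 1.5])
underdose_fraction = np.array([0.08, 0.13])

def estimated_underdosage(depth_cm):
    """Linearly interpolate between the two reported measurements."""
    return np.interp(depth_cm, depths_cm, underdose_fraction)

for d in (0.5, 1.0, 1.5):
    print(f"depth {d:.1f} cm -> ~{estimated_underdosage(d):.1%} below prescription")
```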

5

Breast vibro-acoustography: initial results show promise  

PubMed Central

Introduction Vibro-acoustography (VA) is a recently developed imaging modality that is sensitive to the dynamic characteristics of tissue. It detects low-frequency harmonic vibrations in tissue that are induced by the radiation force of ultrasound. Here, we have investigated applications of VA for in vivo breast imaging. Methods A recently developed combined mammography-VA system for in vivo breast imaging was tested on female volunteers, aged 25 years or older, with suspected breast lesions on their clinical examination. After mammography, a set of VA scans was acquired by the experimental device. In a masked assessment, VA images were evaluated independently by 3 reviewers who identified mass lesions and calcifications. The diagnostic accuracy of this imaging method was determined by comparing the reviewers' responses with clinical data. Results We collected images from 57 participants: 7 were used for training and 48 for evaluation of diagnostic accuracy (images from 2 participants were excluded because of unexpected imaging artifacts). In total, 16 malignant and 32 benign lesions were examined. Specificity for diagnostic accuracy was 94% or higher for all 3 reviewers, but sensitivity varied (69% to 100%). All reviewers were able to detect 97% of masses, but sensitivity for detection of calcification was lower (≤72% for all reviewers). Conclusions VA can be used to detect various breast abnormalities, including calcifications and benign and malignant masses, with relatively high specificity. VA technology may lead to a new clinical tool for breast imaging applications.

2012-01-01
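
Diagnostic accuracy figures like those quoted (specificity 94% or higher, sensitivity 69% to 100%) come from tabulating each reviewer's calls against the clinical truth. A generic sketch; the 2 x 2 counts below are hypothetical, chosen only so the totals match the 16 malignant and 32 benign lesions examined:

```python
def diagnostic_accuracy(tp, fn, tn, fp):
    """Sensitivity and specificity from a reviewer-vs-clinical-truth 2x2 table."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for one reviewer (16 malignant, 32 benign lesions).
sens, spec = diagnostic_accuracy(tp=11, fn=5, tn=30, fp=2)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")   # 69%, 94%
```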

6

A new way to quantify the fidelity of imitation: preliminary results with gesture sequences  

Microsoft Academic Search

Imitation is a common and effective way for humans to learn new behaviors. Until now, the study of imitation has been hampered by the challenge of measuring how well an attempted imitation corresponds to its stimulus model. We describe a new method for quantifying the fidelity with which observers imitate complex series of gestures. Wearing a data glove that transduced

Brian J. Gold; Marc Pomplun; Nichola J. Rice; Robert Sekuler

2008-01-01

7

Astronomy Diagnostic Test Results Reflect Course Goals and Show Room for Improvement  

Microsoft Academic Search

The results of administering the Astronomy Diagnostic Test (ADT) to introductory astronomy students at Henry Ford Community College over three years have shown gains comparable with national averages. Results have also accurately corresponded to course goals, showing greater gains in topics covered in more detail, and lower gains in topics covered in less detail. Also evident in the results were

Michael C. Lopresto

2006-01-01

8

Comparison of some results of program SHOW with other solar hot water computer programs  

NASA Astrophysics Data System (ADS)

The SHOW (solar hot water) computer program is capable of simulating both one and two tank designs of thermosiphon and pumped solar domestic hot water systems. SHOW differs in a number of ways from other programs, the most notable of which is the emphasis on a thermal/hydraulic model of the stratified storage tank. The predicted performance of a typical two tank pumped system computed by program SHOW is compared with results computed using F-CHART and TRNSYS. The results show fair to good agreement between the various computer programs when comparing the annual percent solar contributions. SHOW is also used to compute the expected performance of a two tank thermosiphon system and to compare its performance to the two tank pumped system.

Young, M. F.; Baughn, J. W.

9

Astronomy Diagnostic Test Results Reflect Course Goals and Show Room for Improvement  

ERIC Educational Resources Information Center

The results of administering the Astronomy Diagnostic Test (ADT) to introductory astronomy students at Henry Ford Community College over three years have shown gains comparable with national averages. Results have also accurately corresponded to course goals, showing greater gains in topics covered in more detail, and lower gains in topics covered…

LoPresto, Michael C.

2007-01-01

10

Quantifying the effects of root reinforcing on slope stability: results of the first tests with a new shearing device

NASA Astrophysics Data System (ADS)

The role of vegetation in preventing shallow soil mass movements such as shallow landslides and soil erosion is generally well recognized and, correspondingly, soil bioengineering on steep slopes has been widely used in practice. However, the precise effectiveness of vegetation regarding slope stability is still difficult to determine. A recently designed inclinable shearing device for large scale vegetated soil samples allows quantitative evaluation of the additional shear strength provided by roots of specific plant species. In the following we describe the results of a first series of shear strength experiments with this apparatus focusing on root reinforcement of White Alder (Alnus incana) and Silver Birch (Betula pendula) in large soil block samples (500 x 500 x 400 mm). The specimens with partly saturated soil of a maximum grain size of 10 mm were slowly sheared at an inclination of 35° with low normal stresses of 3.2 kPa accounting for natural conditions on a typical slope prone to mass movements. Measurements during the experiments involved shear stress, shear displacement and normal displacement, all recorded with high accuracy. In addition, dry weights of sprout and roots were measured to quantify plant growth of the planted specimens. The results with the new apparatus indicate a considerable reinforcement of the soil due to plant roots, i.e. maximum shear stresses of the vegetated specimens were substantially higher compared to non-vegetated soil and the additional strength was a function of species and growth. Soil samples with seedlings planted five months prior to the test yielded an important increase in maximum shear stress of 250% for White Alder and 240% for Silver Birch compared to non-vegetated soil. The results of a second test series with 12 month old plants showed even clearer enhancements in maximum shear stress (390% for Alder and 230% for Birch). Overall the results of this first series of shear strength experiments with the new apparatus using planted and unplanted soil specimens confirm the importance of plants in soil stabilisation. Furthermore, they demonstrate the suitability of the apparatus to quantify the additional strength of specific vegetation as a function of species and growth under clearly defined conditions in the laboratory.

Rickli, Christian; Graf, Frank

2013-04-01
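
The percentage gains quoted above compare the maximum shear stress of vegetated specimens against non-vegetated soil under the same loading. A small worked sketch of that comparison; only the 3.2 kPa normal stress and the 500 x 500 mm shear plane come from the abstract, the shear stresses themselves are hypothetical:

```python
# Shear plane of the 500 x 500 mm specimens and the applied normal stress.
area_m2 = 0.5 * 0.5                       # m^2
normal_stress_pa = 3.2e3                  # 3.2 kPa
print(f"normal force ~ {normal_stress_pa * area_m2:.0f} N")   # ~800 N

def gain_percent(tau_vegetated_kpa, tau_bare_kpa):
    """Increase in maximum shear stress relative to non-vegetated soil."""
    return 100.0 * (tau_vegetated_kpa - tau_bare_kpa) / tau_bare_kpa

# Hypothetical maximum shear stresses (kPa): bare soil vs. a planted specimen.
print(f"gain ~ {gain_percent(8.75, 2.5):.0f} %")              # 250 %
```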

11

Testing Delays Resulting in Increased Identification Accuracy in Line-Ups and Show-Ups.  

ERIC Educational Resources Information Center

Investigated time delays (immediate, two-three days, one week) between viewing a staged theft and attempting an eyewitness identification. Compared lineups to one-person showups in a laboratory analogue involving 412 subjects. Results show that across all time delays, participants maintained a higher identification accuracy with the showup…

Dekle, Dawn J.

1997-01-01

12

Nanotribology Results Show that DNA Forms a Mechanically Resistant 2D Network in Metaphase Chromatin Plates  

PubMed Central

In a previous study, we found that metaphase chromosomes are formed by thin plates, and here we have applied atomic force microscopy (AFM) and friction force measurements at the nanoscale (nanotribology) to analyze the properties of these planar structures in aqueous media at room temperature. Our results show that high concentrations of NaCl and EDTA and extensive digestion with protease and nuclease enzymes cause plate denaturation. Nanotribology studies show that native plates under structuring conditions (5 mM Mg2+) have a relatively high friction coefficient (μ ≈ 0.3), which is markedly reduced when high concentrations of NaCl or EDTA are added (μ ≈ 0.1). This lubricant effect can be interpreted considering the electrostatic repulsion between DNA phosphate groups and the AFM tip. Protease digestion increases the friction coefficient (μ ≈ 0.5), but the highest friction is observed when DNA is cleaved by micrococcal nuclease (μ ≈ 0.9), indicating that DNA is the main structural element of plates. Whereas nuclease-digested plates are irreversibly damaged after the friction measurement, native plates can absorb kinetic energy from the AFM tip without suffering any damage. These results suggest that plates are formed by a flexible and mechanically resistant two-dimensional network which allows the safe storage of DNA during mitosis.

Gallego, Isaac; Oncins, Gerard; Sisquella, Xavier; Fernandez-Busquets, Xavier; Daban, Joan-Ramon

2010-01-01
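
The friction coefficients quoted above (μ ≈ 0.3 for native plates, down to μ ≈ 0.1 under high salt) are, in friction force microscopy, the slope of lateral friction force versus applied normal load. A generic numpy sketch with hypothetical force data:

```python
import numpy as np

# Hypothetical AFM data: normal loads and measured lateral friction forces (nN).
normal_load = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
friction = np.array([0.7, 1.3, 1.9, 2.5, 3.1])

# Friction coefficient = slope of the friction-vs-load line (Amontons' law).
mu, offset = np.polyfit(normal_load, friction, 1)
print(f"friction coefficient mu ~ {mu:.2f}")   # ~0.30
```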

13

Long-Term Trial Results Show No Mortality Benefit from Annual Prostate Cancer Screening  

Cancer.gov

Thirteen year follow-up data from the Prostate, Lung, Colorectal and Ovarian (PLCO) cancer screening trial show higher incidence but similar mortality among men screened annually with the prostate-specific antigen (PSA) test and digital rectal examination (DRE).

14

Comparison of some results of program SHOW with other solar hot water computer programs  

NASA Astrophysics Data System (ADS)

Subroutines and the driver program for the simulation code SHOW (solar hot water) for solar thermosiphon systems are discussed, and simulations are compared with predictions by the F-CHART and TRNSYS codes. SHOW has the driver program MAIN, which defines the system control logic for choosing the appropriate system subroutine for analysis. Ten subroutines are described, which account for the solar system physical parameters, the weather data, the manufacturer-supplied system specifications, mass flow rates, pumped systems, total transformed radiation, load use profiles, stratification in storage, an electric water heater, and economic analyses. The three programs are employed to analyze a thermosiphon installation in Sacramento with two storage tanks. TRNSYS and SHOW were in agreement and lower than F-CHART for annual predictions, although significantly more computer time was necessary to make TRNSYS converge.

Young, M. F.; Baughn, J. W.

15

NIH trial shows promising results in treating a lymphoma in young people  

Cancer.gov

Patients with a type of cancer known as primary mediastinal B-cell lymphoma who received infusions of chemotherapy, but who did not have radiation therapy to an area of the thorax known as the mediastinum, had excellent outcomes, according to clinical trial results.

16

Lung cancer trial results show mortality benefit with low-dose CT

Cancer.gov

The NCI has released initial results from a large-scale test of screening methods to reduce deaths from lung cancer by detecting cancers at relatively early stages. The National Lung Screening Trial, a randomized national trial involving more than 53,000 current and former heavy smokers ages 55 to 74, compared the effects of two screening procedures for lung cancer -- low-dose helical computed tomography (CT) and standard chest X-ray -- on lung cancer mortality and found 20 percent fewer lung cancer deaths among trial participants screened with low-dose helical CT.

17

Results From Mars Show Electrostatic Charging of the Mars Pathfinder Sojourner Rover  

NASA Technical Reports Server (NTRS)

Indirect evidence (dust accumulation) has been obtained indicating that the Mars Pathfinder rover, Sojourner, experienced electrostatic charging on Mars. Lander camera images of the Sojourner rover provide distinctive evidence of dust accumulation on rover wheels during traverses, turns, and crabbing maneuvers. The sol 22 (22nd Martian "day" after Pathfinder landed) end-of-day image clearly shows fine red dust concentrated around the wheel edges with additional accumulation in the wheel hubs. A sol 41 image of the rover near the rock "Wedge" (see the next image) shows a more uniform coating of dust on the wheel drive surfaces with accumulation in the hubs similar to that in the previous image. In the sol 41 image, note particularly the loss of black-white contrast on the Wheel Abrasion Experiment strips (center wheel). This loss of contrast was also seen when dust accumulated on test wheels in the laboratory. We believe that this accumulation occurred because the Martian surface dust consists of clay-sized particles, similar to those detected by Viking, which have become electrically charged. By adhering to the wheels, the charged dust carries a net nonzero charge to the rover, raising its electrical potential relative to its surroundings. Similar charging behavior was routinely observed in an experimental facility at the NASA Lewis Research Center, where a Sojourner wheel was driven in a simulated Martian surface environment. There, as the wheel moved and accumulated dust (see the following image), electrical potentials in excess of 100 V (relative to the chamber ground) were detected by a capacitively coupled electrostatic probe located 4 mm from the wheel surface. The measured wheel capacitance was approximately 80 picofarads (pF), and the calculated charge, 8 x 10⁻⁹ coulombs (C). Voltage differences of 100 V and greater are believed sufficient to produce Paschen electrical discharge in the Martian atmosphere. With an accumulated net charge of 8 x 10⁻⁹ C, and an average arc time of 1 msec, arcs can also occur with estimated arc currents approaching 10 microamperes (µA). Discharges of this magnitude could interfere with the operation of sensitive electrical or electronic elements and logic circuits. Sojourner rover wheel tested in laboratory before launch to Mars. Before launch, we believed that the dust would become triboelectrically charged as it was moved about and compacted by the rover wheels. In all cases observed in the laboratory, the test wheel charged positively, and the wheel tracks charged negatively. Dust samples removed from the laboratory wheel averaged from a few to tens of micrometers in size (clay size). Coarser grains were left behind in the wheel track. On Mars, grain size estimates of 2 to 10 µm were derived for the Martian surface materials from the Viking Gas Exchange Experiment. These size estimates approximately match the laboratory samples. Our tentative conclusion for the Sojourner observations is that fine clay-sized particles acquired an electrostatic charge during rover traverses and adhered to the rover wheels, carrying electrical charge to the rover. Since the Sojourner rover carried no instruments to measure this mission's onboard electrical charge, confirmatory measurements from future rover missions on Mars are desirable so that the physical and electrical properties of the Martian surface dust can be characterized. Sojourner was protected by discharge points, and Faraday cages were placed around sensitive electronics.
But larger systems than Sojourner are being contemplated for missions to the Martian surface in the foreseeable future. The design of such systems will require a detailed knowledge of how they will interact with their environment. Validated environmental interaction models and guidelines for the Martian surface must be developed so that design engineers can test new ideas prior to cutting hardware. These models and guidelines cannot be validated without actual flight data. Electrical charging of vehicles and, one day, astronauts moving across the Martian surface…

Kolecki, Joseph C.; Siebert, Mark W.

1998-01-01
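
The charge figure in the abstract follows directly from Q = CV with the quoted 80 pF wheel capacitance and 100 V potential, and dividing by the quoted ~1 ms arc time gives the average arc current, which is how the microampere-scale estimate above can be checked:

```python
capacitance = 80e-12    # F, measured wheel capacitance
voltage = 100.0         # V, potential seen by the electrostatic probe
arc_time = 1e-3         # s, average arc time quoted above

charge = capacitance * voltage       # Q = C * V
current = charge / arc_time          # mean current if Q drains in one arc

print(f"Q = {charge:.1e} C")                 # 8.0e-09 C, as in the abstract
print(f"I ~ {current * 1e6:.0f} microamps")  # ~8 uA, i.e. approaching 10 uA
```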

18

Updated clinical results show experimental agent ibrutinib as highly active in CLL patients  

Cancer.gov

Updated results from a Phase Ib/II clinical trial led by the Ohio State University Comprehensive Cancer Center – Arthur G. James Cancer Hospital and Richard J. Solove Research Institute indicate that a novel therapeutic agent for chronic lymphocytic leukemia (CLL) is highly active and well tolerated in patients who have relapsed and are resistant to other therapy. The agent, ibrutinib (PCI-32765), is the first drug designed to target Bruton's tyrosine kinase (BTK), a protein essential for CLL-cell survival and proliferation. CLL is the most common form of leukemia, with about 15,000 new cases annually in the U.S. About 4,400 Americans die of the disease each year.

19

Animation shows promise in initiating timely cardiopulmonary resuscitation: results of a pilot study.  

PubMed

Delayed responses during cardiac arrest are common. Timely interventions during cardiac arrest have a direct impact on patient survival. Integration of technology in nursing education is crucial to enhance teaching effectiveness. The goal of this study was to investigate the effect of animation on nursing students' response time to cardiac arrest, including initiation of timely chest compression. Nursing students were randomized into experimental and control groups prior to practicing in a high-fidelity simulation laboratory. The experimental group was educated, by discussion and animation, about the importance of starting cardiopulmonary resuscitation upon recognizing an unresponsive patient. Afterward, a discussion session allowed students in the experimental group to gain more in-depth knowledge about the most recent changes in the cardiac resuscitation guidelines from the American Heart Association. A linear mixed model was run to investigate differences in time of response between the experimental and control groups while controlling for differences in those with additional degrees, prior code experience, and basic life support certification. The experimental group had a faster response time compared with the control group and initiated timely cardiopulmonary resuscitation upon recognition of deteriorating conditions (P < .0001). The results demonstrated the efficacy of combined teaching modalities for timely cardiopulmonary resuscitation. Providing opportunities for repetitious practice when a patient's condition is deteriorating is crucial for teaching safe practice. PMID:24473120

Attin, Mina; Winslow, Katheryn; Smith, Tyler

2014-04-01

20

QUantifying the Aerosol Direct and Indirect Effect over Eastern Mediterranean from Satellites (QUADIEEMS): Overview and preliminary results  

NASA Astrophysics Data System (ADS)

An overview and preliminary results from the research implemented within the framework of the QUADIEEMS project are presented. For the scopes of the project, satellite data from five sensors (MODIS aboard EOS TERRA, MODIS aboard EOS AQUA, TOMS aboard Earth Probe, OMI aboard EOS AURA and CALIOP aboard CALIPSO) are used in conjunction with meteorological data from the ECMWF ERA-Interim reanalysis and data from a global chemical-aerosol-transport model as well as simulation results from a regional climate model (RegCM4) coupled with a simplified aerosol scheme. QUADIEEMS focuses on the Eastern Mediterranean [30°N-45°N, 17.5°E-37.5°E], a region situated at the crossroad of different aerosol types and thus ideal for the investigation of the direct and indirect effects of various aerosol types at a high spatial resolution. The project consists of five components. First, raw data from various databases are acquired, analyzed and spatially homogenized, with the outcome being a high resolution (0.1 x 0.1 degree) and a moderate resolution (1.0 x 1.0 degree) gridded dataset of aerosol and cloud optical properties. The marine, dust and anthropogenic fractions of aerosols over the region are quantified making use of the homogenized dataset. Regional climate model simulations with RegCM4/aerosol are also implemented for the greater European region for the period 2000-2010 at a resolution of 50 km. RegCM4's ability to simulate AOD550 over Europe is evaluated. The aerosol-cloud relationships, for sub-regions of the Eastern Mediterranean characterized by the presence of predominant aerosol types, are examined. The aerosol-cloud relationships are also examined taking into account the relative position of aerosol and cloud layers as defined by CALIPSO observations. Within the final component of the project, results and data that emerged from all the previous components are used in satellite-based parameterizations in order to quantify the direct and indirect (first) radiative effect of the different aerosol types at a resolution of 0.1 x 0.1 degrees. The procedure is repeated using a 1.0 x 1.0 degree resolution, in order to examine the footprint of the aerosol direct and indirect effects. The project ends with the evaluation of RegCM4's ability to simulate the aerosol direct radiative effect over the region. QUADIEEMS is co-financed by the European Social Fund (ESF) and national resources under the operational programme Education and Lifelong Learning (EdLL) within the framework of the Action "Supporting Postdoctoral Researchers".

Georgoulias, Aristeidis K.; Zanis, Prodromos; Pöschl, Ulrich; Kourtidis, Konstantinos A.; Alexandri, Georgia; Ntogras, Christos; Marinou, Eleni; Amiridis, Vassilis

2013-04-01

21

Quantifying contextuality.  

PubMed

Contextuality is central both to the foundations of quantum theory and to novel information processing tasks. Despite some recent proposals, it still faces a fundamental problem: how to quantify its presence? In this work, we provide a universal framework for quantifying contextuality. We conduct two complementary approaches: (i) the bottom-up approach, where we introduce a communication game, which grasps the phenomenon of contextuality in a quantitative manner; (ii) the top-down approach, where we just postulate two measures, relative entropy of contextuality and contextuality cost, analogous to existing measures of nonlocality (a special case of contextuality). We then match the two approaches by showing that the measure emerging from the communication scenario turns out to be equal to the relative entropy of contextuality. Our framework allows for the quantitative, resource-type comparison of completely different games. We give analytical formulas for the proposed measures for some contextual systems, showing in particular that the Peres-Mermin game is by an order of magnitude more contextual than that of Klyachko et al. Furthermore, we explore properties of these measures such as monotonicity or additivity. PMID:24724629

Grudka, A; Horodecki, K; Horodecki, M; Horodecki, P; Horodecki, R; Joshi, P; Kłobus, W; Wójcik, A

2014-03-28
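
The relative entropy of contextuality is postulated by analogy with relative-entropy measures of nonlocality, which score a behavior by its divergence from the nearest classical model. Schematically, under that analogy (an assumed general form for illustration, not the paper's exact definition),

\[
X(p) \;=\; \min_{q \in \mathrm{NC}} D(p \,\|\, q) \;=\; \min_{q \in \mathrm{NC}} \sum_{x} p(x) \log_2 \frac{p(x)}{q(x)},
\]

where NC denotes the set of non-contextual behaviors; the contextuality cost is defined analogously as the minimal weight of a contextual resource needed in a mixture reproducing p.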

22

Quantifying Contextuality  

NASA Astrophysics Data System (ADS)

Contextuality is central both to the foundations of quantum theory and to novel information processing tasks. Despite some recent proposals, it still faces a fundamental problem: how to quantify its presence? In this work, we provide a universal framework for quantifying contextuality. We conduct two complementary approaches: (i) the bottom-up approach, where we introduce a communication game, which grasps the phenomenon of contextuality in a quantitative manner; (ii) the top-down approach, where we just postulate two measures, relative entropy of contextuality and contextuality cost, analogous to existing measures of nonlocality (a special case of contextuality). We then match the two approaches by showing that the measure emerging from the communication scenario turns out to be equal to the relative entropy of contextuality. Our framework allows for the quantitative, resource-type comparison of completely different games. We give analytical formulas for the proposed measures for some contextual systems, showing in particular that the Peres-Mermin game is by an order of magnitude more contextual than that of Klyachko et al. Furthermore, we explore properties of these measures such as monotonicity or additivity.

Grudka, A.; Horodecki, K.; Horodecki, M.; Horodecki, P.; Horodecki, R.; Joshi, P.; Kłobus, W.; Wójcik, A.

2014-03-01

23

Quantifying microwear on experimental Mistassini quartzite scrapers: preliminary results of exploratory research using LSCM and scale-sensitive fractal analysis.  

PubMed

Although previous use-wear studies involving quartz and quartzite have been undertaken by archaeologists, these are comparatively few in number. Moreover, there has been relatively little effort to quantify use-wear on stone tools made from quartzite. The purpose of this article is to determine the effectiveness of a measurement system, laser scanning confocal microscopy (LSCM), to document the surface roughness or texture of experimental Mistassini quartzite scrapers used on two different contact materials (fresh and dry deer hide). As in previous studies using LSCM on chert, flint, and obsidian, this exploratory study incorporates a mathematical algorithm that permits the discrimination of surface roughness based on comparisons at multiple scales. Specifically, we employ measures of relative area (RelA) coupled with the F-test to discriminate used from unused stone tool surfaces, as well as surfaces of quartzite scrapers used on dry and fresh deer hide. Our results further demonstrate the effect of raw material variation on use-wear formation and its documentation using LSCM and RelA. PMID:22688593

Stemp, W James; Lerner, Harry J; Kristant, Elaine H

2013-01-01
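
Relative area (RelA), the discriminating measure named above, is the ratio of the virtually triangulated surface area at a given measurement scale to the projected area; worn surfaces flatten, so their RelA decays differently across scales. A simplified numpy sketch for a gridded confocal height map (the brute-force triangulation and all names here are my assumptions, not the authors' implementation):

```python
import numpy as np

def relative_area(z, dx, step):
    """Triangulated surface area / projected area at lateral scale step*dx.

    z    : 2-D height map from the confocal microscope (same units as dx)
    dx   : lateral sample spacing
    step : subsampling factor setting the measurement scale
    """
    zs = z[::step, ::step]
    d = dx * step
    area = 0.0
    for i in range(zs.shape[0] - 1):
        for j in range(zs.shape[1] - 1):
            # Split each grid cell into two triangles and sum their areas.
            p00 = np.array([0.0, 0.0, zs[i, j]])
            p10 = np.array([d, 0.0, zs[i + 1, j]])
            p01 = np.array([0.0, d, zs[i, j + 1]])
            p11 = np.array([d, d, zs[i + 1, j + 1]])
            area += 0.5 * np.linalg.norm(np.cross(p10 - p00, p01 - p00))
            area += 0.5 * np.linalg.norm(np.cross(p10 - p11, p01 - p11))
    projected = d * (zs.shape[0] - 1) * d * (zs.shape[1] - 1)
    return area / projected

# RelA across scales for a hypothetical rough surface (rougher -> RelA > 1).
rng = np.random.default_rng(0)
surface = rng.normal(scale=0.5, size=(128, 128))
for step in (1, 2, 4, 8):
    print(step, relative_area(surface, dx=0.25, step=step))
```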

24

News Note: Long-term Results from Study of Tamoxifen and Raloxifene Show Lower Toxicities of Raloxifene

Cancer.gov

Initial results in 2006 of the NCI-sponsored Study of Tamoxifen and Raloxifene (STAR) showed that a common osteoporosis drug, raloxifene, prevented breast cancer to the same degree as tamoxifen, a drug used for many years for breast cancer prevention as well as treatment, but with fewer serious side effects. The longer-term results show that raloxifene retained 76 percent of the effectiveness of tamoxifen in preventing invasive disease and grew closer to tamoxifen in preventing noninvasive disease, while remaining far less toxic; in particular, there was significantly less endometrial cancer with raloxifene use.

25

Quantifying Quality  

ERIC Educational Resources Information Center

Speeches and tutorials of the ASIS Workshop "Quantifying Quality" are summarized. Topics include quantitative methods for measuring performance; queueing theory in libraries; data base value analysis; performance standards for libraries; use of Statistical Package for the Social Sciences in decision making; designing optimal information access…

Kazlauskas, Edward J.; Bennion, Bruce

1977-01-01

26

Quantifying the uncertainty in heritability.  

PubMed

The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large. PMID:24670270

Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

2014-05-01
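
A toy version of the Bayesian route the abstract describes: put a grid prior on heritability h², evaluate the marginal likelihood of y ~ N(0, σ²(h²K + (1−h²)I)) at each grid point, and normalize. Everything below (the simulated kinship matrix, the flat grid prior, fixing σ² at the sample variance) is a simplifying assumption for illustration, not the authors' algorithm:

```python
import numpy as np

def h2_posterior(y, K, grid=np.linspace(0.01, 0.99, 99)):
    """Grid posterior over h2 for y ~ N(0, s2 * (h2*K + (1-h2)*I))."""
    n = len(y)
    s2 = y.var()                  # fix total variance at the sample variance
    logliks = []
    for h2 in grid:
        V = s2 * (h2 * K + (1.0 - h2) * np.eye(n))
        sign, logdet = np.linalg.slogdet(V)
        quad = y @ np.linalg.solve(V, y)
        logliks.append(-0.5 * (logdet + quad + n * np.log(2 * np.pi)))
    logliks = np.array(logliks)
    post = np.exp(logliks - logliks.max())    # flat prior on the grid
    return grid, post / post.sum()

# Hypothetical data: a SNP-based kinship matrix and a simulated phenotype.
rng = np.random.default_rng(1)
n = 200
G = rng.normal(size=(n, 500))                 # standardized "genotypes"
K = G @ G.T / G.shape[1]
y = rng.multivariate_normal(np.zeros(n), 0.5 * K + 0.5 * np.eye(n))

grid, post = h2_posterior(y, K)
mean = (grid * post).sum()
sd = np.sqrt(((grid - mean) ** 2 * post).sum())
print(f"posterior h2 ~ {mean:.2f} +/- {sd:.2f}")
```

The posterior standard deviation printed at the end is exactly the "variance/uncertainty" the abstract warns can remain large even when a point estimate looks crisp.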

27

Recombinant PNPLA3 protein shows triglyceride hydrolase activity and its I148M mutation results in loss of function.  

PubMed

The patatin-like phospholipase domain containing 3 (PNPLA3, also called adiponutrin, ADPN) is a membrane-bound protein highly expressed in the liver. The genetic variant I148M (rs738409) was found to be associated with progression of chronic liver disease. We aimed to establish a protein purification protocol in a yeast system (Pichia pastoris) and to examine the human PNPLA3 enzymatic activity, substrate specificity, and the effect of the I148M mutation. hPNPLA3 148I wild type and 148M mutant cDNA were cloned into P. pastoris expression vectors. Yeast cells were grown in 3 L fermentors. PNPLA3 protein was purified from membrane fractions by Ni-affinity chromatography. Enzymatic activity was assessed using radiolabeled substrates. Both 148I wild type and 148M mutant proteins are localized to the membrane. The wild type protein shows a predominant lipase activity with mild lysophosphatidic acid acyl transferase (LPAAT) activity, and the I148M mutation results in a loss of function of both these activities. Our data show that PNPLA3 has a predominant lipase activity and that the I148M mutation results in a loss of function. PMID:24369119

Pingitore, Piero; Pirazzi, Carlo; Mancina, Rosellina M; Motta, Benedetta M; Indiveri, Cesare; Pujia, Arturo; Montalcini, Tiziana; Hedfalk, Kristina; Romeo, Stefano

2014-04-01

28

Transgenic plants expressing HC-Pro show enhanced virus sensitivity while silencing of the transgene results in resistance.  

PubMed

Nicotiana benthamiana plants were engineered to express sequences of the helper component-proteinase (HC-Pro) of Cowpea aphid-borne mosaic potyvirus (CABMV). The sensitivity of the transgenic plants to infection with parental and heterologous viruses was studied. The lines expressing HC-Pro showed enhanced symptoms after infection with the parental CABMV isolate and also after infection with a heterologous potyvirus, Potato virus Y (PVY), and a comovirus, Cowpea mosaic virus (CPMV). On the other hand, transgenic lines expressing nontranslatable HC-Pro or translatable HC-Pro with a deletion of the central domain showed wild type symptoms after infection with the parental CABMV isolate and heterologous viruses. These results showed that CABMV HC-Pro is a pathogenicity determinant that conditions enhanced sensitivity to virus infection in plants, and that the central domain of the protein is essential for this. The severe symptoms in CABMV-infected HC-Pro expressing lines were remarkably followed by brief recovery and subsequent re-establishment of infection, possibly indicating counteracting effects of HC-Pro expression and a host defense response. One of the HC-Pro expressing lines (h48) was found to contain low levels of transgenic HC-Pro RNA and to be resistant to CABMV and to recombinant CPMV expressing HC-Pro. This indicated that h48 was (partially) posttranscriptionally silenced for the HC-Pro transgene in spite of the established role of HC-Pro as a suppressor of posttranscriptional gene silencing. Line h48 was not resistant to PVY, but instead showed enhanced symptoms compared to nontransgenic plants. This may be due to relief of silencing of the HC-Pro transgene by HC-Pro expressed by PVY. PMID:12206307

Mlotshwa, Sizolwenkosi; Verver, Jan; Sithole-Niang, Idah; Prins, Marcel; Van Kammen, A B; Wellink, Joan

2002-01-01

29

The ankle ergometer: A new tool for quantifying changes in mechanical properties of human muscle as a result of spaceflight  

NASA Astrophysics Data System (ADS)

A mechanical device for studying changes in mechanical properties of human muscle as a result of spaceflight is presented. Its main capacity is to allow, during a given experiment, investigation of both the contractile and visco-elastic properties of a musculo-articular complex using, respectively, isometric contractions, isokinetic movements, quick-release tests and sinusoidal perturbations. This device is a motor-driven ergometer associated with an experimental protocol designed for pre- and post-flight experiments. As microgravity preferentially affects postural muscles, the apparatus was designed to test muscle groups crossing the ankle joint. Three subjects were tested during the Euromir '94 mission. Preliminary results obtained on the European astronaut are briefly reported. During the next two years the experiments will be performed during six missions.

Mainar, A.; Vanhoutte, C.; Pérot, C.; Voronine, L.; Goubel, F.

30

Acute Myocardial Infarction and Pulmonary Diseases Result in Two Different Degradation Profiles of Elastin as Quantified by Two Novel ELISAs  

PubMed Central

Background Elastin is a signature protein of the arteries and lungs, thus it was hypothesized that elastin is subject to enzymatic degradation during cardiovascular and pulmonary diseases. The aim was to investigate if different fragments of the same protein entail different information associated with two different diseases and if these fragments have the potential of being diagnostic biomarkers. Methods Monoclonal antibodies were raised against an identified fragment (the ELM-2 neoepitope) generated at amino acid position 552 in elastin by matrix metalloproteinase (MMP)-9/-12. A newly identified ELM neoepitope was generated by the same proteases but at amino acid position 441. The distribution of ELM-2 and ELM in human arterial plaques and fibrotic lung tissues was investigated by immunohistochemistry. A competitive ELISA for ELM-2 was developed. The clinical relevance of the ELM and ELM-2 ELISAs was evaluated in patients with acute myocardial infarction (AMI), no AMI, high coronary calcium, or low coronary calcium. The serological release of ELM-2 in patients with chronic obstructive pulmonary disease (COPD) or idiopathic pulmonary fibrosis (IPF) was compared to controls. Results ELM and ELM-2 neoepitopes were both localized in diseased carotid arteries and fibrotic lungs. In the cardiovascular cohort, ELM-2 levels were 66% higher in serum from AMI patients compared to patients with no AMI (p<0.01). Levels of ELM were not significantly increased in these patients and no correlation was observed between ELM-2 and ELM. ELM-2 was not elevated in the COPD and IPF patients and was not correlated to ELM. ELM was shown to be correlated with smoking habits (p<0.01). Conclusions The ELM-2 neoepitope was related to AMI whereas the ELM neoepitope was related to pulmonary diseases. These results indicate that elastin neoepitopes generated by the same proteases but at different amino acid sites provide different tissue-related information depending on the disease in question.

Skjøt-Arkil, Helene; Clausen, Rikke E.; Rasmussen, Lars M.; Wang, Wanchun; Wang, Yaguo; Zheng, Qinlong; Mickley, Hans; Saaby, Lotte; Diederichsen, Axel C. P.; Lambrechtsen, Jess; Martinez, Fernando J.; Hogaboam, Cory M.; Han, MeiLan; Larsen, Martin R.; Nawrocki, Arkadiusz; Vainer, Ben; Krustrup, Dorrit; Bjørling-Poulsen, Marina; Karsdal, Morten A.; Leeming, Diana J.

2013-01-01
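
Competitive ELISAs such as the ELM-2 assay described above are typically read off a four-parameter logistic (4PL) standard curve. The sketch below shows that generic workflow (fit the curve to standards, invert it for unknowns); the values and the 4PL choice are assumptions, not the authors' exact protocol:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: response falls from a toward d as analyte concentration x rises."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standards (ng/mL) and their measured optical densities.
std_conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
std_od = np.array([1.95, 1.80, 1.40, 0.85, 0.45, 0.25])

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[2.0, 1.0, 2.0, 0.2])

def od_to_conc(od, a, b, c, d):
    """Invert the fitted 4PL to read concentration off a measured OD."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(f"sample ~ {od_to_conc(1.1, *params):.2f} ng/mL")
```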

31

Genomic and Enzymatic Results Show Bacillus cellulosilyticus Uses a Novel Set of LPXTA Carbohydrases to Hydrolyze Polysaccharides  

PubMed Central

Background Alkaliphilic Bacillus species are intrinsically interesting due to the bioenergetic problems posed by growth at high pH and high salt. Three alkaline cellulases have been cloned, sequenced and expressed from Bacillus cellulosilyticus N-4 (Bcell), making it an excellent target for genomic sequencing and mining of biomass-degrading enzymes. Methodology/Principal Findings The genome of Bcell is a single chromosome of 4.7 Mb with no plasmids present and three large phage insertions. The most unusual feature of the genome is the presence of 23 LPXTA membrane anchor proteins; 17 of these are annotated as involved in polysaccharide degradation. These two values are significantly higher than seen in any other Bacillus species. This high number of membrane anchor proteins is seen only in pathogenic Gram-positive organisms such as Listeria monocytogenes or Staphylococcus aureus. Bcell also possesses four sortase D subfamily 4 enzymes that incorporate LPXTA-bearing proteins into the cell wall; three of these are closely related to each other and unique to Bcell. Cell fractionation and enzymatic assay of Bcell cultures show that the majority of polysaccharide degradation is associated with the cell wall LPXTA-enzymes, an unusual feature in Gram-positive aerobes. Genomic analysis and growth studies both strongly argue against Bcell being a truly cellulolytic organism, in spite of its name. Preliminary results suggest that fungal mycelia may be the natural substrate for this organism. Conclusions/Significance Bacillus cellulosilyticus N-4, in spite of its name, does not possess any of the genes necessary for crystalline cellulose degradation, demonstrating the risk of classifying microorganisms without the benefit of genomic analysis. Bcell is the first Gram-positive aerobic organism shown to use predominantly cell-bound, non-cellulosomal enzymes for polysaccharide degradation. The LPXTA-sortase system utilized by Bcell may have applications both in anchoring cellulases and other biomass-degrading enzymes to Bcell itself and in anchoring proteins in other Gram-positive organisms.

Mead, David; Drinkwater, Colleen; Brumm, Phillip J.

2013-01-01

32

Children of Low Socioeconomic Status Show Accelerated Linear Growth in Early Childhood; Results from the Generation R Study  

PubMed Central

Objectives People of low socioeconomic status are shorter than those of high socioeconomic status. The first two years of life being critical for height development, we hypothesized that a low socioeconomic status is associated with a slower linear growth in early childhood. We studied maternal educational level (high, mid-high, mid-low, and low) as a measure of socioeconomic status and its association with repeatedly measured height in children aged 0–2 years, and also examined to what extent known determinants of postnatal growth contribute to this association. Methods This study was based on data from 2972 mothers with a Dutch ethnicity, and their children participating in The Generation R Study, a population-based cohort study in Rotterdam, the Netherlands (participation rate 61%). All children were born between April 2002 and January 2006. Height was measured at 2 months (mid-90% range 1.0–3.9), 6 months (mid-90% range 5.6–11.4), 14 months (mid-90% range 13.7–17.9) and 25 months of age (mid-90% range 23.6–29.6). Results At 2 months, children in the lowest educational subgroup were shorter than those in the highest (difference: −0.87 cm; 95% CI: −1.16, −0.58). Between 1 and 18 months, they grew faster than their counterparts. By 14 months, children in the lowest educational subgroup were taller than those in the highest (difference at 14 months: 0.40 cm; 95% CI: 0.08, 0.72). Adjustment for other determinants of postnatal growth did not explain the taller height. On the contrary, the differences became even larger (difference at 14 months: 0.61 cm; 95% CI: 0.26, 0.95; and at 25 months: 1.00 cm; 95% CI: 0.57, 1.43). Conclusions Compared with children of high socioeconomic status, those of low socioeconomic status show an accelerated linear growth until the 18th month of life, leading to an overcompensation of their initial height deficit. The long-term consequences of these findings remain unclear and require further study.

Silva, Lindsay M.; van Rossem, Lenie; Jansen, Pauline W.; Hokken-Koelega, Anita C. S.; Moll, Henriette A.; Hofman, Albert; Mackenbach, Johan P.; Jaddoe, Vincent W. V.; Raat, Hein

2012-01-01

33

QUANTIFYING FOREST ABOVEGROUND CARBON POOLS AND FLUXES USING MULTI-TEMPORAL LIDAR: A report on field monitoring, remote sensing MMV, GIS integration, and modeling results for forestry field validation test to quantify aboveground tree biomass and carbon

SciTech Connect

Sound policy recommendations relating to the role of forest management in mitigating atmospheric carbon dioxide (CO₂) depend upon establishing accurate methodologies for quantifying forest carbon pools for large tracts of land that can be dynamically updated over time. Light Detection and Ranging (LiDAR) remote sensing is a promising technology for achieving accurate estimates of aboveground biomass and thereby carbon pools; however, not much is known about the accuracy of estimating biomass change and carbon flux from repeat LiDAR acquisitions containing different data sampling characteristics. In this study, discrete return airborne LiDAR data was collected in 2003 and 2009 across ~20,000 hectares (ha) of an actively managed, mixed conifer forest landscape in northern Idaho, USA. Forest inventory plots, located via a random stratified sampling design, were established and sampled in 2003 and 2009. The Random Forest machine learning algorithm was used to establish statistical relationships between inventory data and forest structural metrics derived from the LiDAR acquisitions. Aboveground biomass maps were created for the study area based on statistical relationships developed at the plot level. Over this 6-year period, we found that the mean increase in biomass due to forest growth across the non-harvested portions of the study area was 4.8 metric tons per hectare (Mg/ha). In these non-harvested areas, we found a significant difference in biomass increase among forest successional stages, with a higher biomass increase in mature and old forest compared to stand initiation and young forest. Approximately 20% of the landscape had been disturbed by harvest activities during the six-year time period, representing a biomass loss of >70 Mg/ha in these areas. During the study period, these harvest activities outweighed growth at the landscape scale, resulting in an overall loss in aboveground carbon at this site. The 30-fold increase in sampling density between the 2003 and 2009 acquisitions did not affect the biomass estimates. Overall, LiDAR data coupled with field reference data offer a powerful method for calculating pools and changes in aboveground carbon in forested systems. The results of our study suggest that multitemporal LiDAR-based approaches are likely to be useful for high quality estimates of aboveground carbon change in conifer forest systems.

Lee Spangler; Lee A. Vierling; Eva K. Strand; Andrew T. Hudak; Jan U.H. Eitel; Sebastian Martinuzzi

2012-04-01
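
The plot-level modeling step described above pairs field-measured biomass with LiDAR-derived structure metrics, lets Random Forest learn the mapping, and then applies it wall-to-wall for each acquisition year so the maps can be differenced. A schematic scikit-learn sketch; the metric names and all data are placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Placeholder plot data: LiDAR metrics (e.g., height percentiles, canopy
# cover) paired with field-measured aboveground biomass (Mg/ha).
X_plots = rng.uniform(size=(150, 5))          # p25, p50, p95, cover, rumple
y_biomass = 300 * X_plots[:, 2] + 50 * X_plots[:, 3] + rng.normal(0, 10, 150)

model = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
model.fit(X_plots, y_biomass)
print("OOB R^2:", round(model.oob_score_, 2))

# Apply the model to gridded metrics from each acquisition, then difference
# the predictions to estimate biomass change (growth minus harvest losses).
grid_2003 = rng.uniform(size=(1000, 5))
grid_2009 = rng.uniform(size=(1000, 5))
change = model.predict(grid_2009) - model.predict(grid_2003)
print("mean biomass change (Mg/ha):", round(change.mean(), 1))
```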

34

QUANTIFYING SPICULES  

SciTech Connect

Understanding the dynamic solar chromosphere is fundamental in solar physics. Spicules are an important feature of the chromosphere, connecting the photosphere to the corona, potentially mediating the transfer of energy and mass. The aim of this work is to study the properties of spicules over different regions of the Sun. Our goal is to investigate if there is more than one type of spicule, and how spicules behave in the quiet Sun, coronal holes, and active regions. We make use of high cadence and high spatial resolution Ca II H observations taken by Hinode/Solar Optical Telescope. Making use of a semi-automated detection algorithm, we self-consistently track and measure the properties of 519 spicules over different regions. We find clear evidence of two types of spicules. Type I spicules show a rise and fall and have typical lifetimes of 150-400 s and maximum ascending velocities of 15-40 km s⁻¹, while type II spicules have shorter lifetimes of 50-150 s, faster velocities of 30-110 km s⁻¹, and are not seen to fall down, but rather fade at around their maximum length. Type II spicules are the most common, seen in the quiet Sun and coronal holes. Type I spicules are seen mostly in active regions. There are regional differences between quiet-Sun and coronal hole spicules, likely attributable to the different field configurations. The properties of type II spicules are consistent with published results of rapid blueshifted events (RBEs), supporting the hypothesis that RBEs are their disk counterparts. For type I spicules we find the relations between their properties to be consistent with a magnetoacoustic shock wave driver, and with dynamic fibrils as their disk counterpart. The driver of type II spicules remains unclear from limb observations.

Pereira, Tiago M. D. [NASA Ames Research Center, Moffett Field, CA 94035 (United States); De Pontieu, Bart [Lockheed Martin Solar and Astrophysics Laboratory, Org. A021S, Building 252, 3251 Hanover Street, Palo Alto, CA 94304 (United States); Carlsson, Mats, E-mail: tiago.pereira@nasa.gov [Institute of Theoretical Astrophysics, P.O. Box 1029, Blindern, NO-0315 Oslo (Norway)

2012-11-01
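
The two populations reported above separate on lifetime, maximum ascending velocity, and whether the spicule is seen to fall back. An illustrative threshold rule built from the quoted ranges (a caricature for clarity, not the semi-automated detection algorithm itself):

```python
def classify_spicule(lifetime_s, v_max_km_s, falls_back):
    """Rough type I / type II split using the ranges quoted in the abstract."""
    if falls_back and 150 <= lifetime_s <= 400 and 15 <= v_max_km_s <= 40:
        return "type I"
    if not falls_back and 50 <= lifetime_s <= 150 and 30 <= v_max_km_s <= 110:
        return "type II"
    return "unclassified"

print(classify_spicule(300, 25, falls_back=True))    # type I
print(classify_spicule(90, 70, falls_back=False))    # type II
```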

35

Presentation Showing Results of a Hydrogeochemical Investigation of the Standard Mine Vicinity, Upper Elk Creek Basin, Colorado  

USGS Publications Warehouse

PREFACE This Open-File Report consists of a presentation given in Crested Butte, Colorado on December 13, 2007 to the Standard Mine Advisory Group. The presentation was paired with another presentation given by the Colorado Division of Reclamation, Mining, and Safety on the physical features and geology of the Standard Mine. The presentation in this Open-File Report summarizes the results and conclusions of a hydrogeochemical investigation of the Standard Mine performed by the U.S. Geological Survey (Manning and others, in press). The purpose of the investigation was to aid the U.S. Environmental Protection Agency in evaluating remediation options for the Standard Mine site. Additional details and supporting data related to the information in this presentation can be found in Manning and others (in press).

Manning, Andrew H.; Verplanck, Philip L.; Mast, M. Alisa; Wanty, Richard B.

2008-01-01

36

"The Show"  

ERIC Educational Resources Information Center

For the past 16 years, the blue-collar city of Huntington, West Virginia, has rolled out the red carpet to welcome young wrestlers and their families as old friends. They have come to town chasing the same dream for a spot in what many of them call "The Show". For three days, under the lights of an arena packed with 5,000 fans, the state's best…

Gehring, John

2004-01-01

37

Review of genetic parameters estimated at stallion and young horse performance tests and their correlations with later results in dressage and show-jumping competition  

Microsoft Academic Search

Results from performance tests and competitions of young horses are used by major European warmblood horse breeding associations for genetic evaluations. The aim of this review was to compare genetic parameters for various tests of young horses to assess their efficiency in selection for dressage and show-jumping. Improved understanding of genetic information across countries is also necessary, as foreign trade

E. Thorén Hellsten; Å. Viklund; E. P. C. Koenen; A. Ricard; E. Bruns; J. Philipsson

2006-01-01

38

Streamlined system for purifying and quantifying a diverse library of compounds and the effect of compound concentration measurements on the accurate interpretation of biological assay results.  

PubMed

As part of an overall systems approach to generating highly accurate screening data across large numbers of compounds and biological targets, we have developed and implemented streamlined methods for purifying and quantitating compounds at various stages of the screening process, coupled with automated "traditional" storage methods (DMSO, -20 degrees C). Specifically, all of the compounds in our druglike library are purified by LC/MS/UV and are then controlled for identity and concentration in their respective DMSO stock solutions by chemiluminescent nitrogen detection (CLND)/evaporative light scattering detection (ELSD) and MS/UV. In addition, the compound-buffer solutions used in the various biological assays are quantitated by LC/UV/CLND to determine the concentration of compound actually present during screening. Our results show that LC/UV/CLND/ELSD/MS is a widely applicable method that can be used to purify, quantitate, and identify most small organic molecules from compound libraries. The LC/UV/CLND technique is a simple and sensitive method that can be easily and cost-effectively employed to rapidly determine the concentrations of even small amounts of any N-containing compound in aqueous solution. We present data to establish error limits for concentration determination that are well within the overall variability of the screening process. This study demonstrates that there is a significant difference between the predicted amount of soluble compound from stock DMSO solutions following dilution into assay buffer and the actual amount present in assay buffer solutions, even at the low concentrations employed for the assays. We also demonstrate that knowledge of the concentrations of compounds to which the biological target is exposed is critical for accurate potency determinations. Accurate potency values are in turn particularly important for drug discovery, for understanding structure-activity relationships, and for building useful empirical models of protein-ligand interactions. Our new understanding of relative solubility demonstrates that most, if not all, decisions that are made in early discovery are based upon missing or inaccurate information. Finally, we demonstrate that careful control of compound handling and concentration, coupled with accurate assay methods, allows the use of both positive and negative data in analyzing screening data sets for structure-activity relationships that determine potency and selectivity. PMID:15595870

Popa-Burke, Ioana G; Issakova, Olga; Arroway, James D; Bernasconi, Paul; Chen, Min; Coudurier, Louis; Galasinski, Scott; Jadhav, Ajit P; Janzen, William P; Lagasca, Dennis; Liu, Darren; Lewis, Roderic S; Mohney, Robert P; Sepetov, Nikolai; Sparkman, Darren A; Hodge, C Nicholas

2004-12-15
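
CLND quantitation exploits the detector's (approximately) equimolar response per nitrogen atom: one nitrogen standard fixes the response factor, after which any N-containing compound's solution concentration follows from its peak area and nitrogen count. A sketch of that arithmetic with hypothetical values:

```python
# Calibrate the per-nitrogen response factor from a standard of known
# concentration (CLND responds near-equimolar per nitrogen atom).
std_area = 1.2e6          # CLND peak area of the standard
std_conc_um = 10.0        # standard concentration, micromolar
std_n_atoms = 4           # nitrogen atoms in the standard
k = std_area / (std_conc_um * std_n_atoms)   # area per uM of nitrogen

def clnd_concentration(peak_area, n_nitrogen):
    """Concentration (uM) of an unknown from its CLND area and N count."""
    return peak_area / (k * n_nitrogen)

print(clnd_concentration(peak_area=5.4e5, n_nitrogen=3))  # ~6 uM
```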

39

Quantifiable Lateral Flow Assay Test Strips  

NASA Technical Reports Server (NTRS)

As easy to read as a home pregnancy test, three Quantifiable Lateral Flow Assay (QLFA) strips used to test water for E. coli show different results. The brightly glowing control line on the far right of each strip indicates that all three tests ran successfully. But the glowing test lines on the middle left and bottom strips reveal that their samples were contaminated with E. coli bacteria at two different concentrations. The color intensity correlates with the concentration of contamination.

2003-01-01

40

How often do German children and adolescents show signs of common mental health problems? Results from different methodological approaches - a cross-sectional study  

PubMed Central

Background Child and adolescent mental health problems are ubiquitous and burdensome. Their impact on functional disability, the high rates of accompanying medical illnesses and the potential to last until adulthood make them a major public health issue. While methodological factors cause variability of the results from epidemiological studies, there is a lack of prevalence rates of mental health problems in children and adolescents according to ICD-10 criteria from nationally representative samples. International findings suggest only a small proportion of children with function-impairing mental health problems receive treatment, but information about the health care situation of children and adolescents is scarce. The aim of this epidemiological study was a) to classify symptoms of common mental health problems according to ICD-10 criteria in order to compare the statistical and clinical case definition strategies using a single set of data and b) to report ICD-10 codes from health insurance claims data. Methods a) Based on a clinical expert rating, questionnaire items were mapped on ICD-10 criteria; data from the Mental Health Module (BELLA study) were analyzed for relevant ICD-10 and cut-off criteria; b) Claims data were analyzed for relevant ICD-10 codes. Results According to parent report 7.5% (n = 208) met the ICD-10 criteria of a mild depressive episode and 11% (n = 305) showed symptoms of depression according to cut-off score; anxiety is reported in 5.6% (n = 156) and 11.6% (n = 323), conduct disorder in 15.2% (n = 373) and 14.6% (n = 357). Self-reported symptoms in 11 to 17 year olds resulted in 15% (n = 279) reporting signs of a mild depression according to ICD-10 criteria (vs. 16.7% (n = 307) based on cut-off) and 10.9% (n = 201) reported symptoms of anxiety (vs. 15.4% (n = 283)). Results from routine data identify 0.9% (n = 1,196) with a depression diagnosis, 3.1% (n = 6,729) with anxiety and 1.4% (n = 3,100) with conduct disorder in outpatient health care. Conclusions Statistical and clinical case definition strategies show moderate concordance in depression and conduct disorder in a German national sample. Comparatively lower rates of children and adolescents with diagnosed mental health problems in the outpatient health care setting support the assumption that only a small number of children and adolescents in need of treatment receive it.

2014-01-01

41

Wireless quantified reflex device  

NASA Astrophysics Data System (ADS)

The deep tendon reflex is a fundamental aspect of a neurological examination. The two major parameters of the tendon reflex are response and latency, which are presently evaluated qualitatively during a neurological examination. The reflex loop is capable of providing insight into the status and therapy response of both upper and lower motor neuron syndromes. Attempts have been made to ascertain reflex response and latency; however, these systems are relatively complex and resource-intensive, and have struggled to achieve consistent and reliable accuracy. The solution presented is a wireless quantified reflex device using tandem three-dimensional wireless accelerometers to obtain response based on acceleration waveform amplitude and latency derived from temporal acceleration waveform disparity. Three specific aims have been established for the proposed wireless quantified reflex device: 1. Demonstrate that the wireless quantified reflex device is reliably capable of ascertaining quantified reflex response and latency using a quantified input. 2. Evaluate the precision of the device using an artificial reflex system. 3. Conduct a longitudinal study of subjects with healthy patellar tendon reflexes, using the wireless quantified reflex evaluation device to obtain quantified reflex response and latency. Aim 1 has led to the steady evolution of the wireless quantified reflex device from a single two-dimensional wireless accelerometer capable of measuring reflex response to a tandem three-dimensional wireless accelerometer system capable of reliably measuring reflex response and latency. The hypothesis for aim 1 is that a reflex quantification device, comprised of an integrated system of wireless three-dimensional MEMS accelerometers, can be established for reliably measuring reflex response and latency for the patellar tendon reflex. Aim 2 further emphasized the reliability of the wireless quantified reflex device by evaluating an artificial reflex system. The hypothesis for aim 2 is that the wireless quantified reflex device can obtain reliable reflex parameters (response and latency) from an artificial reflex device. Aim 3 synthesizes the findings of aims 1 and 2, applying the wireless accelerometer reflex quantification device to a longitudinal study of healthy patellar tendon reflexes. The hypothesis for aim 3 is that during a longitudinal evaluation of the deep tendon reflex the parameters of reflex response and latency can be measured with a considerable degree of accuracy, reliability, and reproducibility. Enclosed is a detailed description of a wireless quantified reflex device with research findings and potential utility of the system, inclusive of a comprehensive description of tendon reflexes, prior reflex quantification systems, and correlated applications.
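
The two parameters described above can be computed directly from a pair of acceleration traces. A minimal sketch follows; the signal names are hypothetical and no filtering or artifact rejection is included, so this is an illustration of the amplitude/latency idea, not the device's actual processing:

```python
import numpy as np

def reflex_parameters(t, a_input, a_response):
    """Estimate reflex response and latency from two acceleration traces.

    t          : sample times (s), shared by both traces
    a_input    : acceleration magnitude at the input (hammer strike) site
    a_response : acceleration magnitude at the responding limb site
    """
    i_in = np.argmax(np.abs(a_input))         # time of the strike event
    i_out = np.argmax(np.abs(a_response))     # peak reflex acceleration
    response = np.abs(a_response[i_out])      # amplitude-based response
    latency = t[i_out] - t[i_in]              # temporal waveform disparity
    return response, latency
```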

Lemoyne, Robert Charles

42

Quantified Coalition Logic  

Microsoft Academic Search

We add a limited but useful form of quantification to Coalition Logic, a popular formalism for reasoning about cooperation in game-like multi-agent systems. The basic constructs of Quantified Coalition Logic (QCL) allow us to express such properties as "every coalition satisfying property P can achieve φ".

Thomas Ågotnes; Wiebe Van Der Hoek; Michael Wooldridge

2007-01-01

43

Quantifying anoxia in lakes  

Microsoft Academic Search

The anoxic factor (AF, days per year or per season) can be used to quantify anoxia in stratified lakes. AF is calculated from oxygen profiles measured in the stratified season and the lake surface area (A0); AF represents the number of days that a sediment area, equal to the whole-lake surface area, is overlain by anoxic water. Average AF for
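
Read this way, the definition reduces to a short formula. The following is a sketch of the calculation as described above, with t_i the duration (days) of measurement interval i, a_i the anoxic sediment area during that interval, and A0 the whole-lake surface area; the symbols are assumed from context rather than taken from the paper:

```latex
\[
\mathrm{AF} \;=\; \frac{\sum_{i=1}^{n} t_i\, a_i}{A_0}
\qquad \text{(days per year or per season)}
\]
```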

Gertrud K. Nürnberg

1995-01-01

44

Comparison of assay kits for unconjugated estriol shows that expressing results as multiples of the median causes unacceptable variation in calculated risk factors for Down syndrome.  

PubMed

We compared the performance of two methods for assaying unconjugated estriol in serum: the modified Amerlex third-trimester RIA kit, as used in seminal papers on unconjugated estriol in Down syndrome screening, and the new optimized Amerlex-M second-trimester kit. The significant difference between the results of each assay could cause unacceptable changes in the detection rate and false-positive rate of Down syndrome screening programs, especially if previously published values for estriol are used in the risk calculation. It is not possible to define new calculation parameters for every assay kit because new parameters will need to be defined every time kit changes occur, which would require a large collection of samples from Down syndrome pregnancies for standardization. Possible solutions to this problem are discussed. PMID:1388113

Reynolds, T; John, R

1992-09-01

45

Quantifying tumour heterogeneity with CT  

PubMed Central

Abstract Heterogeneity is a key feature of malignancy associated with adverse tumour biology. Quantifying heterogeneity could provide a useful non-invasive imaging biomarker. Heterogeneity on computed tomography (CT) can be quantified using texture analysis which extracts spatial information from CT images (unenhanced, contrast-enhanced and derived images such as CT perfusion) that may not be perceptible to the naked eye. The main components of texture analysis can be categorized into image transformation and quantification. Image transformation filters the conventional image into its basic components (spatial, frequency, etc.) to produce derived subimages. Texture quantification techniques include structural-, model- (fractal dimensions), statistical- and frequency-based methods. The underlying tumour biology that CT texture analysis may reflect includes (but is not limited to) tumour hypoxia and angiogenesis. Emerging studies show that CT texture analysis has the potential to be a useful adjunct in clinical oncologic imaging, providing important information about tumour characterization, prognosis and treatment prediction and response.

Miles, Kenneth A.

2013-01-01

46

Quantifying tumour heterogeneity with CT.  

PubMed

Heterogeneity is a key feature of malignancy associated with adverse tumour biology. Quantifying heterogeneity could provide a useful non-invasive imaging biomarker. Heterogeneity on computed tomography (CT) can be quantified using texture analysis which extracts spatial information from CT images (unenhanced, contrast-enhanced and derived images such as CT perfusion) that may not be perceptible to the naked eye. The main components of texture analysis can be categorized into image transformation and quantification. Image transformation filters the conventional image into its basic components (spatial, frequency, etc.) to produce derived subimages. Texture quantification techniques include structural-, model- (fractal dimensions), statistical- and frequency-based methods. The underlying tumour biology that CT texture analysis may reflect includes (but is not limited to) tumour hypoxia and angiogenesis. Emerging studies show that CT texture analysis has the potential to be a useful adjunct in clinical oncologic imaging, providing important information about tumour characterization, prognosis and treatment prediction and response. PMID:23545171
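
The two-stage pipeline described in both records above (image transformation, then statistical quantification) can be illustrated compactly. A minimal sketch using a Laplacian-of-Gaussian band-pass filter followed by histogram statistics; the function and parameter names are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def texture_features(image, sigma=2.0):
    """Image transformation (Laplacian-of-Gaussian band-pass) followed by
    histogram-based quantification of the filtered intensities."""
    filtered = gaussian_laplace(image.astype(float), sigma=sigma)
    counts, _ = np.histogram(filtered, bins=64)
    p = counts[counts > 0] / counts.sum()
    return {
        "mean": filtered.mean(),
        "sd": filtered.std(),
        "entropy": -(p * np.log2(p)).sum(),   # irregularity of texture
        "uniformity": (p ** 2).sum(),         # inverse of heterogeneity
    }
```

Varying sigma plays the role of the fine-to-coarse filtration step, with the statistics recomputed at each scale.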

Ganeshan, Balaji; Miles, Kenneth A

2013-01-01

47

Value of Fused 18F-Choline-PET/MRI to Evaluate Prostate Cancer Relapse in Patients Showing Biochemical Recurrence after EBRT: Preliminary Results  

PubMed Central

Purpose. We compared the accuracy of 18F-Choline-PET/MRI with that of multiparametric MRI (mMRI), 18F-Choline-PET/CT, 18F-Fluoride-PET/CT, and contrast-enhanced CT (CeCT) in detecting relapse in patients with suspected relapse of prostate cancer (PC) after external beam radiotherapy (EBRT). We assessed the association between standard uptake value (SUV) and apparent diffusion coefficient (ADC). Methods. We evaluated 21 patients with biochemical relapse after EBRT. Patients underwent 18F-Choline-PET/contrast-enhanced (Ce)CT, 18F-Fluoride-PET/CT, and mMRI. Imaging coregistration of PET and mMRI was performed. Results. 18F-Choline-PET/MRI was positive in 18/21 patients, with a detection rate (DR) of 86%. DRs of 18F-Choline-PET/CT, CeCT, and mMRI were 76%, 43%, and 81%, respectively. In terms of DR the only significant difference was between 18F-Choline-PET/MRI and CeCT. On lesion-based analysis, the accuracy of 18F-Choline-PET/MRI, 18F-Choline-PET/CT, CeCT, and mMRI was 99%, 95%, 70%, and 85%, respectively. Accuracy, sensitivity, and NPV of 18F-Choline-PET/MRI were significantly higher than those of both mMRI and CeCT. On whole-body assessment of bone metastases, the sensitivity of 18F-Choline-PET/CT and 18F-Fluoride-PET/CT was significantly higher than that of CeCT. Regarding local and lymph node relapse, we found a significant inverse correlation between ADC and SUV-max. Conclusion. 18F-Choline-PET/MRI is a promising technique in detecting PC relapse.

Piccardo, Arnoldo; Paparo, Francesco; Picazzo, Riccardo; Naseri, Mehrdad; Ricci, Paolo; Marziano, Andrea; Bacigalupo, Lorenzo; Biscaldi, Ennio; Rollandi, Gian Andrea; Grillo-Ruggieri, Filippo; Farsad, Mohsen

2014-01-01

48

Quantified coalition logic  

Microsoft Academic Search

We add a limited but useful form of quantification to Coalition Logic, a popular formalism for reasoning about cooperation in game-like multi-agent systems. The basic constructs of Quantified Coalition Logic (QCL) allow us to express such properties as "every coalition satisfying property P can achieve φ" and "there exists a coalition C satisfying property P such that C can achieve φ".

Thomas Ågotnes; Wiebe Van Der Hoek; Michael Wooldridge

2008-01-01

49

Avastin Shows Mixed Results Against Different Cancers  

MedlinePLUS

... longer survival," said Ramondetta, a professor at the University of Texas M.D. Anderson Cancer Center and chief of ... M.D., professor, M.D. Anderson Cancer Center, University of Texas, and chief, gynecologic oncology, Lyndon B. Johnson General ...

50

Preliminary Results Show Improvement in MS Symptoms  

MedlinePLUS


51

Negation and Quantifiers in NU-Prolog  

Microsoft Academic Search

We briefly discuss the shortcomings of negation in conventional Prolog systems. The design and implementation of the negation constructs in NU-Prolog are then presented. The major difference is the presence of explicit quantifiers. However, several other innovations are used to extract the maximum flexibility from current implementation techniques. These result in improved treatment of if, existential quantifiers, inequality and non-logical

Lee Naish

1986-01-01

52

The transferable belief model for quantified belief representation  

Microsoft Academic Search

The transferable belief model is a model to represent quantified beliefs based on the use of belief functions, as initially proposed by Shafer. It is developed independently from any underlying related probability model. We summarize our interpretation of the model and present several recent results that characterize the model. We show how rational decision must be made when beliefs are

Philippe Smets

1997-01-01

53

Quantifying light pollution  

NASA Astrophysics Data System (ADS)

In this paper we review new available indicators useful to quantify and monitor light pollution, defined as the alteration of the natural quantity of light in the night environment due to the introduction of man-made light. With the introduction of recent radiative transfer methods for the computation of light pollution propagation, several new indicators have become available. These indicators represent a primary step in light pollution quantification, beyond the bare evaluation of night sky brightness, which is an observational effect integrated along the line of sight and thus lacks three-dimensional information.

Cinzano, P.; Falchi, F.

2014-05-01

54

Covariation and quantifier polarity: What determines causal attribution in vignettes?  

Microsoft Academic Search

Tests of causal attribution often use verbal vignettes, with covariation information provided through statements quantified with natural language expressions. The effect of covariation information has typically been taken to show that set size information affects attribution. However, recent research shows that quantifiers provide information about discourse focus as well as covariation information. In the attribution literature, quantifiers are used to

Asifa Majid; Anthony J. Sanford; Martin J. Pickering

2006-01-01

55

Quantifying water diffusion in secondary organic material  

NASA Astrophysics Data System (ADS)

Recent research suggests that some secondary organic aerosol (SOA) is highly viscous under certain atmospheric conditions. This may have important consequences for equilibration timescales, SOA growth, heterogeneous chemistry and ice nucleation. In order to quantify these effects, knowledge of the diffusion coefficients of relevant gas species within aerosol particles is vital. In this work, a Raman isotope tracer method is used to quantify water diffusion coefficients over a range of atmospherically relevant humidity and temperature conditions. D2O is observed as it diffuses from the gas phase into a disk of aqueous solution, without the disk changing in size or viscosity. An analytical solution of Fick's second law is then used with a fitting procedure to determine water diffusion coefficients in reference materials for method validation. The technique is then extended to compounds of atmospheric relevance and α-pinene secondary organic material. We produce water diffusion coefficients from 20 to 80% RH at 23.5 °C for sucrose, levoglucosan, M5AS and MgSO4. For levoglucosan we show that under conditions where a particle bounces, water diffusion in aqueous solutions can be fast (a fraction of a second for a 100 nm radius). For sucrose solutions, we also show that the Stokes-Einstein relation breaks down at high viscosity and cannot be used to predict water diffusion timescales with accuracy. In addition, we also quantify water diffusion coefficients in α-pinene SOM from 20 to 80% RH and over temperatures from 6 to -30 °C. Our results suggest that, at 6 °C, water diffusion in α-pinene SOA is not kinetically limited on the second timescale, even at 20% RH. As temperatures decrease, however, diffusion slows and may become an increasingly limiting factor for atmospheric processes. A parameterization for the diffusion coefficient of water in α-pinene secondary organic material, as a function of relative humidity and temperature, is presented. The implications for atmospheric processes such as ice nucleation and heterogeneous chemistry in the mid- and upper-troposphere will be discussed.
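
The fitting step described above can be illustrated with a simplified geometry. This sketch assumes a semi-infinite aqueous layer with constant surface concentration, for which Fick's second law has a closed-form erfc solution; the paper's actual disk geometry and boundary conditions will differ, and all numbers below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def d2o_profile(x, D, t=3600.0, c0=1.0):
    """D2O concentration at depth x (m) after time t (s), semi-infinite slab."""
    return c0 * erfc(x / (2.0 * np.sqrt(D * t)))

x = np.linspace(0.0, 50e-6, 25)                 # depths to 50 um
c_meas = d2o_profile(x, 1e-12) + 0.01 * np.random.default_rng(0).normal(size=x.size)

# Recover the diffusion coefficient by least-squares fit of the solution.
D_fit, _ = curve_fit(lambda x, D: d2o_profile(x, D), x, c_meas,
                     p0=[1e-11], bounds=(1e-16, 1e-8))
print(f"fitted D = {D_fit[0]:.2e} m^2/s")
```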

Price, Hannah; Murray, Benjamin; Mattsson, Johan; O'Sullivan, Daniel; Wilson, Theodore; Zhang, Yue; Martin, Scot

2014-05-01

56

Generalized Quantifiers and Modal Logic.  

National Technical Information Service (NTIS)

The authors study several modal languages in which some (sets of) generalized quantifiers can be represented; the main language considered is suitable for defining any first order definable quantifier, but also considered is a sublanguage thereof, as well...

W. van der Hoek M. de Rijke

1991-01-01

57

Quantifying social group evolution  

NASA Astrophysics Data System (ADS)

The rich set of interactions between individuals in society results in complex community structure, capturing highly connected circles of friends, families or professional cliques in a social network. Thanks to frequent changes in the activity and communication patterns of individuals, the associated social and communication network is subject to constant evolution. Our knowledge of the mechanisms governing the underlying community dynamics is limited, but is essential for a deeper understanding of the development and self-optimization of society as a whole. We have developed an algorithm based on clique percolation that allows us to investigate the time dependence of overlapping communities on a large scale, and thus uncover basic relationships characterizing community evolution. Our focus is on networks capturing the collaboration between scientists and the calls between mobile phone users. We find that large groups persist for longer if they are capable of dynamically altering their membership, suggesting that an ability to change the group composition results in better adaptability. The behaviour of small groups displays the opposite tendency: the condition for stability is that their composition remains unchanged. We also show that knowledge of the time commitment of members to a given community can be used for estimating the community's lifetime. These findings offer insight into the fundamental differences between the dynamics of small groups and large institutions.
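
Clique-percolation community detection of the kind described above is available off the shelf. A minimal sketch that extracts k = 3 communities from two toy network snapshots and matches them by overlap; the Jaccard matching rule here is an illustrative choice, not the authors' algorithm:

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

# Two snapshots of a toy collaboration network.
g1 = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4), (2, 4), (5, 6), (5, 7), (6, 7)])
g2 = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4), (2, 4), (4, 8), (2, 8)])

c1 = [frozenset(c) for c in k_clique_communities(g1, 3)]
c2 = [frozenset(c) for c in k_clique_communities(g2, 3)]

# Match communities across snapshots by relative overlap (Jaccard index).
for a in c1:
    for b in c2:
        j = len(a & b) / len(a | b)
        if j > 0:
            print(sorted(a), "->", sorted(b), f"overlap={j:.2f}")
```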

Palla, Gergely; Barabási, Albert-László; Vicsek, Tamás

2007-04-01

58

How to quantify structural anomalies in fluids?  

PubMed

Some fluids are known to behave anomalously. The so-called structural anomaly, which means that the fluid becomes less structured under isothermal compression, is among the most frequently discussed ones. Several methods for quantifying the degree of structural order are described in the literature and are used for calculating the region of structural anomaly. It is generally thought that all of the structural order determinations yield qualitatively identical results. However, no explicit comparison has been made. This paper presents such a comparison for the first time. The results of some definitions are shown to contradict the intuitive notion of a fluid. On the basis of this comparison, we show that the region of structural anomaly can be most reliably determined from the behavior of the excess entropy. PMID:25053327
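
One widely used structural-order measure of the kind compared above is the two-body excess entropy computed from the radial distribution function; this particular measure is chosen here for illustration (the paper's recommendation concerns the full excess entropy):

```python
import numpy as np
from scipy.integrate import trapezoid

def pair_excess_entropy(r, g, rho):
    """Two-body excess entropy s2/kB from a radial distribution function:
    s2 = -2*pi*rho * integral of [g ln g - g + 1] r^2 dr."""
    g = np.clip(g, 1e-12, None)                  # avoid log(0) where g = 0
    integrand = (g * np.log(g) - g + 1.0) * r**2
    return -2.0 * np.pi * rho * trapezoid(integrand, r)
```

A structurally anomalous region would show s2 increasing (less negative) along an isotherm as density increases.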

Fomin, Yu D; Ryzhov, V N; Klumov, B A; Tsiok, E N

2014-07-21

59

Quantifying T Lymphocyte Turnover  

PubMed Central

Peripheral T cell populations are maintained by production of naive T cells in the thymus, clonal expansion of activated cells, cellular self-renewal (or homeostatic proliferation), and density dependent cell life spans. A variety of experimental techniques have been employed to quantify the relative contributions of these processes. In modern studies lymphocytes are typically labeled with 5-bromo-2′-deoxyuridine (BrdU), deuterium, or the fluorescent dye carboxy-fluorescein diacetate succinimidyl ester (CFSE), their division history has been studied by monitoring telomere shortening and the dilution of T cell receptor excision circles (TRECs) or the dye CFSE, and clonal expansion has been documented by recording changes in the population densities of antigen specific cells. Proper interpretation of such data in terms of the underlying rates of T cell production, division, and death has proven to be notoriously difficult and involves mathematical modeling. We review the various models that have been developed for each of these techniques, discuss which models seem most appropriate for what type of data, reveal open problems that require better models, and pinpoint how the assumptions underlying a mathematical model may influence the interpretation of data. Elaborating various successful cases where modeling has delivered new insights in T cell population dynamics, this review provides quantitative estimates of several processes involved in the maintenance of naive and memory, CD4+ and CD8+ T cell pools in mice and men.

De Boer, Rob J.; Perelson, Alan S.

2013-01-01

60

Quantifying traffic exposure.  

PubMed

Living near traffic adversely affects health outcomes. Traffic exposure metrics include distance to high-traffic roads, traffic volume on nearby roads, traffic within buffer distances, measured pollutant concentrations, land-use regression estimates of pollution concentrations, and others. We used Geographic Information System software to explore a new approach using traffic count data and a kernel density calculation to generate a traffic density surface with a resolution of 50 m. The density value in each cell reflects all the traffic on all the roads within the distance specified in the kernel density algorithm. The effect of a given roadway on the raster cell value depends on the amount of traffic on the road segment, its distance from the raster cell, and the form of the algorithm. We used a Gaussian algorithm in which traffic influence became insignificant beyond 300 m. This metric integrates the deleterious effects of traffic rather than focusing on one pollutant. The density surface can be used to impute exposure at any point, and it can be used to quantify integrated exposure along a global positioning system route. The traffic density calculation compares favorably with other metrics for assessing traffic exposure and can be used in a variety of applications. PMID:24045427
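
The metric described above lends itself to a compact implementation. A sketch with illustrative names, using a Gaussian kernel scaled so that influence is negligible beyond 300 m, on a 50 m raster; the paper's exact kernel parameters are not reproduced here:

```python
import numpy as np

def traffic_density_raster(road_pts, counts, xmin, ymin, nx, ny,
                           cell=50.0, cutoff=300.0):
    """Gaussian kernel traffic-density surface on a 50 m raster.

    road_pts : (n, 2) road-segment point coordinates (m)
    counts   : traffic volume attached to each point
    """
    sigma = cutoff / 3.0                        # ~zero influence beyond 300 m
    xs = xmin + cell * (np.arange(nx) + 0.5)
    ys = ymin + cell * (np.arange(ny) + 0.5)
    gx, gy = np.meshgrid(xs, ys)
    dens = np.zeros_like(gx)
    for (px, py), c in zip(road_pts, counts):
        r2 = (gx - px) ** 2 + (gy - py) ** 2
        w = np.exp(-r2 / (2.0 * sigma ** 2))    # Gaussian distance weight
        w[r2 > cutoff ** 2] = 0.0               # hard cutoff at 300 m
        dens += c * w
    return dens
```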

Pratt, Gregory C; Parson, Kris; Shinoda, Naomi; Lindgren, Paula; Dunlap, Sara; Yawn, Barbara; Wollan, Peter; Johnson, Jean

2014-01-01

61

Quantifying the Arctic methane budget  

NASA Astrophysics Data System (ADS)

The Arctic is a major source of atmospheric methane, containing climate-sensitive emissions from natural wetlands and gas hydrates, as well as the fossil fuel industry. Both wetland and gas hydrate methane emissions from the Arctic may increase with increasing temperature, resulting in a positive feedback leading to enhancement of climate warming. It is important that these poorly-constrained sources are quantified by location and strength and that their vulnerability to change be assessed. The MAMM project ('Methane and other greenhouse gases in the Arctic: Measurements, process studies and Modelling') addresses these issues as part of the UK NERC Arctic Programme. A global chemistry transport model has been used, along with MAMM and other long-term observations, to assess our understanding of the different source and sink terms in the Arctic methane budget. Simulations including methane coloured by source and latitude are used to distinguish between Arctic seasonal variability arising from transport and that arising from changes in Arctic sources and sinks. Methane isotopologue tracers provide a further constraint on modelled methane variability, distinguishing between isotopically light and heavy sources (e.g. wetlands and gas fields). We focus on quantifying the magnitude and seasonal variability of Arctic wetland emissions.

Warwick, Nicola; Cain, Michelle; Pyle, John

2014-05-01

62

The Understanding of Quantifiers in Semantic Dementia: A Single-Case Study  

PubMed Central

This study investigates the processing of quantifiers in a patient (AM) with semantic dementia. Quantifiers are verbal expressions such as “many” or “a few”, which refer semantically to quantity concepts although lexically they are like non-quantity words. Patient AM presented with preserved understanding of quantifier words and impaired understanding of non-quantifier words of the same frequency. In parallel to this, he showed preserved numerical knowledge and impaired comprehension of the meaning of words, objects, and of linguistic concepts. These results suggest that the neural organization of quantifiers is within the numerical domain as they pattern with numerical concepts rather than linguistic concepts. These data reinforce the evidence that numerical knowledge is functionally distinct from non-numerical knowledge in the semantic system and indicate that the semantic referent rather than the stimulus format is more relevant for semantic processing.

CAPPELLETTI, MARINELLA; BUTTERWORTH, BRIAN; KOPELMAN, MICHAEL

2008-01-01

63

Quantifying cosmic superstructures  

NASA Astrophysics Data System (ADS)

The large-scale structure (LSS) found in galaxy redshift surveys and in computer simulations of cosmic structure formation shows a very complex network of galaxy clusters, filaments and sheets around large voids. Here, we introduce a new algorithm, based on a minimal spanning tree, to find basic structural elements of this network and their properties. We demonstrate how the algorithm works using simple test cases and then apply it to haloes from the Millennium Run simulation. We show that about 70 per cent of the total halo mass is contained in a structure composed of more than 74,000 individual elements, the vast majority of which are filamentary, with lengths of up to 15 h^-1 Mpc preferred. Spatially more extended structures do exist, as do examples of what appear to be sheet-like configurations of matter. What is more, LSS appears to be composed of a fixed set of basic building blocks. The LSS formed by mass-selected subsamples of haloes shows a clear correlation between the threshold mass and the mean extent of major branches, with cluster-size haloes forming structures whose branches can extend to almost 200 h^-1 Mpc - the backbone of LSS to which smaller branches consisting of smaller haloes are attached.
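
The structural-element extraction described above starts from a minimal spanning tree. A sketch on mock halo positions; the 5 Mpc/h pruning threshold is an illustrative choice, not the paper's:

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, size=(500, 3))        # mock halo positions (Mpc/h)

# MST over the complete pairwise-distance graph of the haloes.
mst = minimum_spanning_tree(distance_matrix(pos, pos)).tocoo()

# Prune long edges; the surviving short edges trace filamentary segments.
keep = mst.data < 5.0
print(f"{keep.sum()} of {mst.data.size} MST edges shorter than 5 Mpc/h")
```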

Colberg, Jörg M.

2007-02-01

64

Quantifying nonisothermal subsurface soil water evaporation  

NASA Astrophysics Data System (ADS)

Accurate quantification of energy and mass transfer during soil water evaporation is critical for improving understanding of the hydrologic cycle and for many environmental, agricultural, and engineering applications. Drying of soil under radiation boundary conditions results in formation of a dry surface layer (DSL), which is accompanied by a shift in the position of the latent heat sink from the surface to the subsurface. Detailed investigation of evaporative dynamics within this active near-surface zone has mostly been limited to modeling, with few measurements available to test models. Soil column studies were conducted to quantify nonisothermal subsurface evaporation profiles using a sensible heat balance (SHB) approach. Eleven-needle heat pulse probes were used to measure soil temperature and thermal property distributions at the millimeter scale in the near-surface soil. Depth-integrated SHB evaporation rates were compared with mass balance evaporation estimates under controlled laboratory conditions. The results show that the SHB method effectively measured total subsurface evaporation rates with only 0.01-0.03 mm h^-1 difference from mass balance estimates. The SHB approach also quantified millimeter-scale nonisothermal subsurface evaporation profiles over a drying event, which has not been previously possible. Thickness of the DSL was also examined using measured soil thermal conductivity distributions near the drying surface. Estimates of the DSL thickness were consistent with observed evaporation profile distributions from SHB. Estimated thickness of the DSL was further used to compute diffusive vapor flux. The diffusive vapor flux also closely matched both mass balance evaporation rates and subsurface evaporation rates estimated from SHB.
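
The sensible heat balance idea above infers the latent heat sink as the residual of measured sensible heat terms for a thin soil layer. A minimal sketch under assumed inputs (thermal conductivities, temperature gradients, and storage change of the kind a heat-pulse probe would supply; variable names and numbers are illustrative):

```python
# Sensible-heat-balance sketch for a thin soil layer between two depths.
LV = 2.45e6            # latent heat of vaporization, J/kg (approximate)

def shb_evaporation(lam_top, dTdz_top, lam_bot, dTdz_bot, dS_dt):
    """Latent heat sink as the residual LE = q_in - q_out - dS/dt, with
    conductive fluxes q = -lambda * dT/dz. Returns evaporation in mm/h."""
    q_in = -lam_top * dTdz_top     # flux entering at the top, W/m2
    q_out = -lam_bot * dTdz_bot    # flux leaving at the bottom, W/m2
    LE = q_in - q_out - dS_dt      # residual = latent heat sink, W/m2
    return LE / LV * 3600.0        # kg/m2/s -> mm/h (1 kg/m2 = 1 mm water)

print(f"{shb_evaporation(0.8, -40.0, 0.9, -10.0, 5.0):.3f} mm/h")
```

With these invented inputs the result is about 0.026 mm/h, i.e. the same order as the subsurface rates discussed above.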

Deol, Pukhraj; Heitman, Josh; Amoozegar, Aziz; Ren, Tusheng; Horton, Robert

2012-11-01

65

Methods to quantify tank losses are improved  

Microsoft Academic Search

A major revision to the American Petroleum Institute's Publication 2519 provides a tool to accurately quantify evaporative losses and the resultant atmospheric emissions from petroleum stocks stored in internal floating-roof tanks (IFRTs). As a result of significant improvements in the loss calculation procedures included in the revised publication, entitled "Evaporation Loss from Internal Floating Roof Tanks," more accurate stock inventory

K. M. Hanzevack; B. D. Anderson; R. L. Russell

1983-01-01

66

Show-Me Center  

NSDL National Science Digital Library

The Show-Me Center, located at the University of Missouri, is a math education project of the National Science Foundation. The center's Web site "provides information and resources needed to support selection and implementation of standards-based middle grades mathematic curricula." There are some sample lesson plans offered, but most of the material is solely for use by teachers. Five different middle grade math curriculums were started in 1992, and now, the implementation and results of each curriculum are presented on this site. Teachers can examine each one, view video clips, and read case studies and other reports to choose which parts of the curriculums would fit best into their own classes.

67

Results.  

ERIC Educational Resources Information Center

Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)

Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

2001-01-01

68

What Do Blood Tests Show?  

MedlinePLUS

What Do Blood Tests Show? Blood tests show whether the levels ... changes may work best. Result Ranges for Common Blood Tests This section presents the result ranges for ...

69

Television Quiz Show Simulation  

ERIC Educational Resources Information Center

This article explores the simulation of four television quiz shows for students in China studying English as a foreign language (EFL). It discusses the adaptation and implementation of television quiz shows and how the students reacted to them.

Hill, Jonnie Lynn

2007-01-01

70

Quantifying decoherence in continuous variable systems  

Microsoft Academic Search

We present a detailed report on the decoherence of quantum states of continuous variable systems under the action of a quantum optical master equation resulting from the interaction with general Gaussian uncorrelated environments. The rate of decoherence is quantified by relating it to the decay rates of various, complementary measures of the quantum nature of a state, such as the

A Serafini; M G A Paris; F. Illuminati; S. De Siena

2005-01-01

71

Quantifying electrostatic interactions in pharmaceutical solid systems.  

PubMed

Triboelectrification of pharmaceutical powders with stainless steel and polymer contact surfaces was investigated. alpha-Lactose monohydrate, in size fractions from 90-125 microm up to 355-500 microm, was used to quantify electrostatic interactions with negligible powder adhesion to the contact surface. Size fractions down to 53-75 microm, alone and in binary mixtures with <10 microm lactose or micronized salbutamol, were used to investigate triboelectrification with powder adhered to the contact surface. Triboelectrification was performed in a cyclone charger fitted with interchangeable contact surfaces of steel and polymers, representing the surfaces of pharmaceutical processing and manufacturing equipment, packaging materials and components of dry powder inhaler devices. The results for single-component powders showed charge acquisition was inversely related to particle size, where contact surface contamination was negligible. However, with particulate contamination, triboelectrification was more complex due to particle collisions with clean and contaminated contact surfaces. Analysis of adhered and non-adhered powder provided information about changes in composition of two-component powders during triboelectrification. Particle size and chemical analyses showed that composition changes of mixtures may be related to powder/contact surface affinity and interparticulate forces for separation of components in a cohesive mix during triboelectrification. PMID:11564540

Rowley, G

2001-10-01

72

Homemade Laser Show  

NSDL National Science Digital Library

With a laser pointer and some household items, learners can create their own laser light show. They can explore diffuse reflection, refraction and diffraction. The webpage includes a video which shows how to set up the activity and also includes scientific explanation. Because this activity involves lasers, it requires adult supervision.

Houston, Children'S M.

2011-01-01

73

Developing accurate quantified speckle shearing data  

NASA Astrophysics Data System (ADS)

Electronic Speckle Pattern Shearing Interferometry (ESPSI) is becoming a common tool for the qualitative analysis of material defects in the aerospace and marine industries. A current trend in the development of this optical metrology nondestructive testing (NDT) technique is the introduction of quantitative analysis, which attempts to characterize the defects examined and identified by the ESPSI systems. Commercial systems use divergent laser illumination, a design feature imposed by the typically large sizes of the objects being examined, which precludes the use of collimated optics. Furthermore, commercial systems are being applied to complex surfaces, which complicates the interpretation of the instrumentation results. The growing commercial demand for quantitative out-of-plane and in-plane ESPSI for NDT is raising the demands placed on the quality of the optical and analysis instrumentation. However, very little attention is currently being paid to understanding, quantifying and compensating for the numerous error sources that are inherent in ESPSI interferometers. This paper presents work on the measurement error due to the divergence of the illumination wavefront and its dependence on the magnitude of the lateral shearing function. The error is measured by comparing measurements using divergent (curved-wavefront) illumination with measurements using collimated illumination. Results show that the error increases by approximately a power factor as the distance from the illumination source to the object surface decreases.

Wan Abdullah, W. S.; Petzing, Jon N.; Tyrer, John R.

1999-08-01

74

A Holographic Road Show.  

ERIC Educational Resources Information Center

Describes the viewing sessions and the holograms of a holographic road show. The traveling exhibits, believed to stimulate interest in physics, include a wide variety of holograms and demonstrate several physical principles. (GA)

Kirkpatrick, Larry D.; Rugheimer, Mac

1979-01-01

75

Trade Show Managers  

Microsoft Academic Search

Trade show management is a multi-faceted field, requiring a breadth of skills on the part of those engaged in the craft. Whether they go by the title of Show Manager, Director of Marketing, Vice President of Meetings/Conventions, or Director of Meetings/Conventions, these professionals work with exhibitors, attendees, and service providers to produce their events. The managers of the 200 largest

Susan Gregory; Deborah Breiter

2001-01-01

76

Demonstration Road Show  

NSDL National Science Digital Library

The Idaho State University Department of Physics conducts science demonstration shows at S. E. Idaho schools. Four different presentations are currently available; "Forces and Motion", "States of Matter", "Electricity and Magnetism", and "Sound and Waves". Information provided includes descriptions of the material and links to other resources.

Shropshire, Steven

2009-04-06

77

Showing What They Know  

ERIC Educational Resources Information Center

Having students show their skills in three dimensions, known as performance-based assessment, dates back at least to Socrates. Individual schools such as Barrington High School--located just outside of Providence--have been requiring students to actively demonstrate their knowledge for years. The Rhode Island's high school graduating class became…

Cech, Scott J.

2008-01-01

78

The Ozone Show.  

ERIC Educational Resources Information Center

Uses a talk show activity for a final assessment tool for students to debate about the ozone hole. Students are assessed on five areas: (1) cooperative learning; (2) the written component; (3) content; (4) self-evaluation; and (5) peer evaluation. (SAH)

Mathieu, Aaron

2000-01-01

79

Blue Ribbon Art Show.  

ERIC Educational Resources Information Center

Describes the process of selecting judges for a Blue Ribbon Art Show (Springfield, Missouri). Used adults (teachers, custodians, professional artists, parents, and principals) chosen by the Willard South Elementary School art teacher to judge student artwork. States that nominated students received blue ribbons. (CMK)

Bowen, Judy Domeny

2002-01-01

80

Show-Me Center  

NSDL National Science Digital Library

The Show-Me Center is a partnership of four NSF-sponsored middle grades mathematics curriculum development Satellite Centers (University of Wisconsin, Michigan State University, University of Montana, and the Educational Development Center). The group's website provides "information and resources needed to support selection and implementation of standards-based middle grades mathematics curricula." The Video Showcase includes segments on Number, Algebra, Geometry, Measure, and Data Analysis, with information on ways to obtain the complete video set. The Curricula Showcase provides general information, unit goals, sample lessons and teacher pages spanning four projects: the Connected Mathematics Project (CMP), Mathematics in Context (MiC), MathScape: Seeing and Thinking Mathematically, and Middle Grades Math Thematics. The website also posts Show-Me Center newsletters, information on upcoming conferences and workshops, and links to resources including published articles and unpublished commentary on mathematics school reform.

81

The Graphing Game Show  

NSDL National Science Digital Library

This lesson plan assesses student interpretation of graphs, utilizing cooperative learning to further students' understanding. Types of graphs used are horizontal and vertical bar graphs, picture graphs, and pictographs. In the lesson, students play a game called the Graphing Game Show, in which they must work as a team to answer questions about specific graphs. The lesson includes four student resource worksheets and suggestions for extension and differentiation.

2011-01-01

82

Education Statistics Slide Show  

NSDL National Science Digital Library

Created by Grace York, coordinator of the University of Michigan's Documents Center, the Education Statistics Slide Show is an online presentation demonstrating how to locate, obtain, and manipulate educational data on the Web. The presentation consists of 72 slides and offers instruction on the use of several Websites including the US Census Bureau's American Factfinder site (see the April 2, 1999 Scout Report), the Center for International Earth Science Information Network (CIESIN) Census Mapping site, the National Center for Education Statistics (NCES) site, the FEDSTATS site (see the May 30, 1997 Scout Report), and many more. The tutorial presentation also provides ten practice questions and a detailed Webliography.

83

Quantifying reliability uncertainty : a proof of concept.  

SciTech Connect

This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
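
One concrete instance of the Bayesian side of such an analysis is conjugate Beta-Binomial updating per component, with Monte Carlo propagation to the system level. A sketch for a small series system; the data and the uniform prior are illustrative, not from the report (note the zero-failure component, whose handling the abstract flags as sensitive):

```python
import numpy as np

rng = np.random.default_rng(1)

# Go/no-go test data per component: (successes, trials).
series_components = [(49, 50), (20, 20), (97, 100)]   # middle one: no failures

# Beta(1,1) prior -> Beta(s+1, n-s+1) posterior for each component;
# series-system reliability is the product of component reliabilities.
draws = np.ones(100_000)
for s, n in series_components:
    draws *= rng.beta(s + 1, n - s + 1, size=draws.size)

lo, med, hi = np.percentile(draws, [5, 50, 95])
print(f"system reliability ~ {med:.3f} (90% credible interval {lo:.3f}-{hi:.3f})")
```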

Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna (Los Alamos National Laboratory, Los Alamos, NM); Lorio, John F.; Fatherley, Quinn (Los Alamos National Laboratory, Los Alamos, NM); Anderson-Cook, Christine (Los Alamos National Laboratory, Los Alamos, NM); Wilson, Alyson G. (Los Alamos National Laboratory, Los Alamos, NM); Zurn, Rena M.

2009-10-01

84

A methodology for quantifying uncertainty in models  

SciTech Connect

This paper, condensed from McKay et al. (1992), outlines an analysis of uncertainty in the output of computer models arising from uncertainty in inputs (parameters). Uncertainty of this type most often arises when proper input values are imprecisely known. Uncertainty in the output is quantified in its probability distribution, which results from treating the inputs as random variables. The assessment of which inputs are important (sensitivity analysis) with respect to uncertainty is done relative to the probability distribution of the output.
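
The input-to-output propagation described above can be sketched with Latin hypercube sampling (the stratified scheme associated with McKay) plus a rank-correlation screen for sensitivity; the model and bounds below are stand-ins:

```python
import numpy as np
from scipy.stats import qmc, spearmanr

def model(x):                                   # stand-in computer model
    return x[:, 0] ** 2 + 3.0 * x[:, 1] + 0.1 * x[:, 2]

sampler = qmc.LatinHypercube(d=3, seed=0)
u = sampler.random(n=1000)                      # stratified sample in [0,1)^3
x = qmc.scale(u, l_bounds=[0.0, 0.0, 0.0], u_bounds=[1.0, 2.0, 5.0])

y = model(x)                                    # empirical output distribution
print("output mean/sd:", y.mean(), y.std())

for i in range(3):                              # crude sensitivity ranking
    rho, _ = spearmanr(x[:, i], y)
    print(f"input {i}: Spearman rho = {rho:.2f}")
```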

McKay, M.D.; Beckman, R.J.

1993-09-01

85

Mucoid-to-nonmucoid conversion in alginate-producing Pseudomonas aeruginosa often results from spontaneous mutations in algT, encoding a putative alternate sigma factor, and shows evidence for autoregulation.  

PubMed Central

The mucoid phenotype is common among strains of Pseudomonas aeruginosa that cause chronic pulmonary infections in patients with cystic fibrosis and is due to overproduction of an exopolysaccharide called alginate. However, the mucoid phenotype is unstable in vitro, especially when the cells are incubated under low oxygen tension. Spontaneous conversion to the nonmucoid form is typically due to mutations (previously called algS) that are closely linked to the alginate regulatory gene algT, located at 68 min on the chromosome. Our sequence analysis of algT showed that its 22-kDa gene product shares homology with several alternate sigma factors in bacteria, suggesting that AlgT (also known as AlgU) interacts directly with RNA polymerase core to activate the promoters of alginate genes. AlgT showed striking sequence similarity (79%) to sigma E of Escherichia coli, an alternate sigma factor involved in high-temperature gene expression. Our analysis of the molecular basis for spontaneous conversion from mucoid to nonmucoid, in the cystic fibrosis isolate FRD, revealed that nonmucoid conversion was often due to one of two distinct missense mutations in algT that occurred at codons 18 and 29. RNase protection assays showed that spontaneous nonmucoid strains with the algT18 and algT29 alleles have a four- to fivefold reduction in the accumulation of algT transcripts compared with the wild-type mucoid strain. Likewise, a plasmid-borne algT-cat transcriptional fusion was about 3-fold less active in the algT18 and algT29 backgrounds compared with the mucoid wild-type strain, and it was 20-fold less active in an algT::Tn501 background. These data indicate that algT is autoregulated. The spontaneous algT missense alleles also caused about fivefold-reduced expression of the adjacent negative regulator, algN (also known as mucB). Transcripts of algN were essentially absent in the algT::Tn501 strain. Thus, algT regulates the algTN cluster, and the two genes may be cotranscribed. A primer extension analysis showed that algT transcription starts 54 bp upstream of the start of translation. Although the algT promoter showed little similarity to promoters recognized by the vegetative sigma factor, it was similar to the algR promoter. This finding suggests that AlgT may function as a sigma factor to activate its own promoter and those of other alginate genes. The primer extension analysis also showed that algT transcripts were readily detectable in the typical nonmucoid strain PAO1, which was in contrast to a weak signal seen in the algT18 mutant of FRD. A plasmid-borne algT gene in PAO1 resulted in both the mucoid phenotype and high levels of algT transcripts, further supporting the hypothesis that AlgT controls its own gene expression and expression of genes of the alginate regulon.

DeVries, C A; Ohman, D E

1994-01-01

86

Quantifying pulsed laser induced damage to graphene  

NASA Astrophysics Data System (ADS)

As an emerging optical material, graphene's ultrafast dynamics are often probed using pulsed lasers yet the region in which optical damage takes place is largely uncharted. Here, femtosecond laser pulses induced localized damage in single-layer graphene on sapphire. Raman spatial mapping, SEM, and AFM microscopy quantified the damage. The resulting size of the damaged area has a linear correlation with the optical fluence. These results demonstrate local modification of sp2-carbon bonding structures with optical pulse fluences as low as 14 mJ/cm2, an order-of-magnitude lower than measured and theoretical ablation thresholds.

Currie, Marc; Caldwell, Joshua D.; Bezares, Francisco J.; Robinson, Jeremy; Anderson, Travis; Chun, Hayden; Tadjer, Marko

2011-11-01

87

Quantifying torso deformity in scoliosis  

NASA Astrophysics Data System (ADS)

Scoliosis affects the alignment of the spine and the shape of the torso. Most scoliosis patients and their families are more concerned about the effect of scoliosis on the torso than its effect on the spine. There is a need to develop robust techniques for quantifying torso deformity based on full torso scans. In this paper, deformation indices obtained from orthogonal maps of full torso scans are used to quantify torso deformity in scoliosis. 'Orthogonal maps' are obtained by applying orthogonal transforms to 3D surface maps. (An 'orthogonal transform' maps a cylindrical coordinate system to a Cartesian coordinate system.) The technique was tested on 361 deformed computer models of the human torso and on 22 scans of volunteers (8 normal and 14 scoliosis). Deformation indices from the orthogonal maps correctly classified up to 95% of the volunteers with a specificity of 1.00 and a sensitivity of 0.91. In addition to classifying scoliosis, the system gives a visual representation of the entire torso in one view and is viable for use in a clinical environment for managing scoliosis.
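
One reading of the 'orthogonal map' construction above is a cylindrical-to-Cartesian unrolling of the torso surface with radius as pixel intensity. The following sketch encodes that interpretation; it is an assumption for illustration, since the paper's exact transform is not reproduced here:

```python
import numpy as np

def orthogonal_map(points, n_theta=180, n_z=100):
    """Unroll a torso scan given as (x, y, z) surface points into a 2-D
    image over (theta, z), with mean radius as pixel intensity -- a
    cylindrical-to-Cartesian ('orthogonal') transform of the scan."""
    x, y, z = points.T
    theta = np.arctan2(y, x)                    # angle about the torso axis
    r = np.hypot(x, y)                          # radial distance from axis
    ti = np.clip(np.digitize(theta, np.linspace(-np.pi, np.pi, n_theta + 1)) - 1,
                 0, n_theta - 1)
    zi = np.clip(np.digitize(z, np.linspace(z.min(), z.max(), n_z + 1)) - 1,
                 0, n_z - 1)
    img = np.zeros((n_z, n_theta))
    cnt = np.zeros((n_z, n_theta))
    np.add.at(img, (zi, ti), r)                 # accumulate radii per cell
    np.add.at(cnt, (zi, ti), 1)
    return np.divide(img, cnt, out=np.zeros_like(img), where=cnt > 0)
```

Deformation indices would then be derived from left-right asymmetries of such a map.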

Ajemba, Peter O.; Kumar, Anish; Durdle, Nelson G.; Raso, V. James

2006-03-01

88

Use of the Concept of Equivalent Biologically Effective Dose (BED) to Quantify the Contribution of Hyperthermia to Local Tumor Control in Radiohyperthermia Cervical Cancer Trials, and Comparison With Radiochemotherapy Results  

SciTech Connect

Purpose: To express the magnitude of contribution of hyperthermia to local tumor control in radiohyperthermia (RT/HT) cervical cancer trials, in terms of the radiation-equivalent biologically effective dose (BED) and to explore the potential of the combined modalities in the treatment of this neoplasm. Materials and Methods: Local control rates of both arms of each study (RT vs. RT+HT) reported from randomized controlled trials (RCT) on concurrent RT/HT for cervical cancer were reviewed. By comparing the two tumor control probabilities (TCPs) from each study, we calculated the HT-related log cell-kill and then expressed it in terms of the number of 2 Gy fraction equivalents, for a range of tumor volumes and radiosensitivities. We have compared the contribution of each modality and made some exploratory calculations on the TCPs that might be expected from a combined trimodality treatment (RT+CT+HT). Results: The HT-equivalent number of 2-Gy fractions ranges from 0.6 to 4.8 depending on radiosensitivity. Opportunities for clinically detectable improvement by the addition of HT are only available in tumors with an alpha value in the approximate range of 0.22-0.28 Gy^-1. A combined treatment (RT+CT+HT) is not expected to improve prognosis in radioresistant tumors. Conclusion: The most significant improvements in TCP, which may result from the combination of RT/CT/HT for locally advanced cervical carcinomas, are likely to be limited only to those patients with tumors of relatively low-intermediate radiosensitivity.
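
The conversion described above can be reconstructed under standard assumptions (Poisson TCP and linear-quadratic cell kill). This sketch is an illustrative reconstruction, not the paper's exact algebra, and the example TCP values are invented:

```python
import numpy as np

def ht_equivalent_fractions(tcp_rt, tcp_rtht, alpha=0.25, ab=10.0, d=2.0):
    """Express the local-control gain from added hyperthermia as an
    equivalent number of 2 Gy fractions."""
    # Poisson TCP = exp(-N*S): the ratio of ln(TCP) between arms gives the
    # surviving-fraction ratio, hence the extra (log10) cell kill from HT.
    extra_log_kill = np.log10(np.log(tcp_rt) / np.log(tcp_rtht))
    # log10 cell kill delivered by one 2 Gy fraction under the LQ model.
    per_fraction = alpha * d * (1.0 + d / ab) / np.log(10.0)
    return extra_log_kill / per_fraction

print(f"{ht_equivalent_fractions(0.45, 0.65):.1f} x 2 Gy fractions")
```

With these invented control rates and alpha = 0.25 Gy^-1 the gain is about one 2 Gy fraction, within the 0.6-4.8 range reported above.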

Plataniotis, George A. [Department of Oncology, Aberdeen Royal Infirmary, Aberdeen (United Kingdom)], E-mail: george.plataniotis@nhs.net; Dale, Roger G. [Imperial College Healthcare NHS Trust, London (United Kingdom)

2009-04-01

89

Quantifying entanglement with witness operators  

SciTech Connect

We present a unifying approach to the quantification of entanglement based on entanglement witnesses, which includes several already established entanglement measures such as the negativity, the concurrence, and the robustness of entanglement. We then introduce an infinite family of new entanglement quantifiers, having as its limits the best separable approximation measure and the generalized robustness. Gaussian states, states with symmetry, states constrained to super-selection rules, and states composed of indistinguishable particles are studied under the view of the witnessed entanglement. We derive new bounds to the fidelity of teleportation d_min, for the distillable entanglement E_D and for the entanglement of formation. A particular measure, the PPT-generalized robustness, stands out due to its easy calculability and provides sharper bounds to d_min and E_D than the negativity in most of the states. We illustrate our approach studying thermodynamical properties of entanglement in the Heisenberg XXX and dimerized models.
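
The unifying construction summarized above quantifies entanglement through an optimized witness expectation. In the notation commonly used for witnessed entanglement (a paraphrase of the idea, with M denoting the restricted set of witness operators that defines each particular measure):

```latex
\[
E_{\mathcal{M}}(\rho) \;=\; \max\Big\{\,0,\; -\min_{W \in \mathcal{M}} \operatorname{Tr}(W\rho)\Big\}
\]
```

Different choices of the witness set M then recover the negativity, the robustness measures, and the bounds on d_min and E_D mentioned in the abstract.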

Brandao, Fernando G.S.L. [Grupo de Informacao Quantica, Departamento de Fisica, Universidade Federal de Minas Gerais, Caixa Postal 702, Belo Horizonte, 30.123-970, MG (Brazil)

2005-08-15

90

Quantifying offshore wind resources from satellite wind maps: study area the North Sea  

NASA Astrophysics Data System (ADS)

Offshore wind resources are quantified from satellite synthetic aperture radar (SAR) and satellite scatterometer observations at local and regional scale respectively at the Horns Rev site in Denmark. The method for wind resource estimation from satellite observations interfaces with the wind atlas analysis and application program (WAsP). An estimate of the wind resource at the new project site at Horns Rev is given based on satellite SAR observations. The comparison of offshore satellite scatterometer winds, global model data and in situ data shows good agreement. Furthermore, the wake effect of the Horns Rev wind farm is quantified from satellite SAR images and compared with state-of-the-art wake model results with good agreement. It is a unique method using satellite observations to quantify the spatial extent of the wake behind large offshore wind farms.
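
Resource quantification in the WAsP tradition mentioned above typically reduces wind statistics to Weibull parameters and a mean power density. A sketch of that standard formula (the paper's actual workflow is more involved, and the example parameters are invented):

```python
import numpy as np
from scipy.special import gamma

def mean_power_density(c, k, rho=1.225):
    """Mean wind power density (W/m^2) from Weibull scale c (m/s) and
    shape k: E = 0.5 * rho * c^3 * Gamma(1 + 3/k)."""
    return 0.5 * rho * c**3 * gamma(1.0 + 3.0 / k)

print(f"{mean_power_density(9.0, 2.2):.0f} W/m^2")
```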

Hasager, C. B.; Barthelmie, R. J.; Christiansen, M. B.; Nielsen, M.; Pryor, S. C.

2006-01-01

91

An algorithm for quantifying dependence in multivariate data sets  

NASA Astrophysics Data System (ADS)

We describe an algorithm to quantify dependence in a multivariate data set. The algorithm is able to identify any linear and non-linear dependence in the data set by performing a hypothesis test for two variables being independent. As a result we obtain a reliable measure of dependence. In high energy physics, understanding dependencies is especially important in multidimensional maximum likelihood analyses. We therefore describe the problem of a multidimensional maximum likelihood analysis applied to a multivariate data set with variables that are dependent on each other. We review common procedures used in high energy physics, show that general dependence is not the same as linear correlation, and discuss their limitations in practical application. Finally we present the tool CAT, which is able to perform all reviewed methods in a fully automatic mode and creates an analysis report document with numeric results and visual review.
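
A minimal instance of such an independence hypothesis test is a permutation test on binned mutual information, which responds to non-linear dependence that linear correlation misses. This sketch is illustrative and is not the CAT tool's implementation:

```python
import numpy as np

def independence_test(x, y, bins=10, n_perm=2000, seed=0):
    """Permutation test for independence using binned mutual information."""
    rng = np.random.default_rng(seed)

    def mi(a, b):
        h, _, _ = np.histogram2d(a, b, bins=bins)
        p = h / h.sum()
        px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
        nz = p > 0
        return (p[nz] * np.log(p[nz] / (px @ py)[nz])).sum()

    obs = mi(x, y)
    null = np.array([mi(x, rng.permutation(y)) for _ in range(n_perm)])
    return obs, (null >= obs).mean()             # MI and permutation p-value

x = np.random.default_rng(1).normal(size=500)
y = x ** 2 + 0.3 * np.random.default_rng(2).normal(size=500)   # non-linear
print(independence_test(x, y))   # small p-value despite ~zero correlation
```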

Feindt, M.; Prim, M.

2013-01-01

92

Evaluation of two methods for quantifying passeriform lice  

PubMed Central

Two methods commonly used to quantify ectoparasites on live birds are visual examination and dust-ruffling. Visual examination provides an estimate of ectoparasite abundance based on an observer’s timed inspection of various body regions on a bird. Dust-ruffling involves application of insecticidal powder to feathers that are then ruffled to dislodge ectoparasites onto a collection surface where they can then be counted. Despite the common use of these methods in the field, the proportion of actual ectoparasites they account for has only been tested with Rock Pigeons (Columba livia), a relatively large-bodied species (238–302 g) with dense plumage. We tested the accuracy of the two methods using European Starlings (Sturnus vulgaris; ~75 g). We first quantified the number of lice (Brueelia nebulosa) on starlings using visual examination, followed immediately by dust-ruffling. Birds were then euthanized and the proportion of lice accounted for by each method was compared to the total number of lice on each bird as determined with a body-washing method. Visual examination and dust-ruffling each accounted for a relatively small proportion of total lice (14% and 16%, respectively), but both were still significant predictors of abundance. The number of lice observed by visual examination accounted for 68% of the variation in total abundance. Similarly, the number of lice recovered by dust-ruffling accounted for 72% of the variation in total abundance. Our results show that both methods can be used to reliably quantify the abundance of lice on European Starlings and other similar-sized passerines.

Koop, Jennifer A. H.; Clayton, Dale H.

2013-01-01

93

Quantifying of bactericide properties of medicinal plants.  

PubMed

Extended research has been carried out to clarify the ecological role of plant secondary metabolites (SMs). Although their primary ecological function is self-defence, bioactive compounds have long been used in alternative medicine or in biological control of pests. Several members of the family Labiatae are known to have strong antimicrobial capacity. For testing and quantifying antibacterial activity, standard microbial protocols are most often used, assessing inhibitory activity on a selected strain. In this study the applicability of a microbial ecotoxicity test was evaluated to quantify the aggregate bactericide capacity of Labiatae species, based on the bioluminescence inhibition of the bacterium Vibrio fischeri. Striking differences were found among the herbs, with toxicities differing by as much as 10-fold. Glechoma hederacea L. proved to be the most toxic, with an EC50 of 0.4073 g dried plant/l. EC50 values generated by the standard bioassay appear to be a good indicator of the bactericide properties of herbs. Traditional use of the selected herbs shows a good correlation with bioactivity expressed as bioluminescence inhibition, leading to the conclusion that the Vibrio fischeri bioassay can be a good indicator of the overall antibacterial capacity of herbs, at least at a screening level. PMID:21502819

Kováts, Nora; Ács, András; Gölöncsér, Flóra; Barabás, Anikó

2011-06-01

94

Quantifying cell behaviors during embryonic wound healing  

NASA Astrophysics Data System (ADS)

During embryogenesis, internal forces induce motions in cells leading to widespread motion in tissues. We previously developed laser hole-drilling as a consistent, repeatable way to probe such epithelial mechanics. The initial recoil (less than 30s) gives information about physical properties (elasticity, force) of cells surrounding the wound, but the long-term healing process (tens of minutes) shows how cells adjust their behavior in response to stimuli. To study this biofeedback in many cells through time, we developed tools to quantify statistics of individual cells. By combining watershed segmentation with a powerful and efficient user interaction system, we overcome problems that arise in any automatic segmentation from poor image quality. We analyzed cell area, perimeter, aspect ratio, and orientation relative to wound for a wide variety of laser cuts in dorsal closure. We quantified statistics for different regions as well, i.e. cells near to and distant from the wound. Regional differences give a distribution of wound-induced changes, whose spatial localization provides clues into the physical/chemical signals that modulate the wound healing response.
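
The segmentation-plus-statistics pipeline can be sketched with standard scikit-image tools; here user-corrected seed points stand in for the interactive correction step described, and the function names are illustrative, not the authors' software:

    from skimage.filters import sobel
    from skimage.measure import label, regionprops
    from skimage.segmentation import watershed

    def cell_stats(image, seeds):
        """Watershed-segment cells from a grayscale image given seed
        points (one per cell), then quantify per-cell shape statistics."""
        markers = label(seeds)            # one marker per hand-placed seed
        edges = sobel(image)              # membrane signal as relief
        segmentation = watershed(edges, markers)
        stats = []
        for region in regionprops(segmentation):
            stats.append({
                "area": region.area,
                "perimeter": region.perimeter,
                # aspect ratio from the fitted ellipse axes
                "aspect_ratio": region.major_axis_length
                                / max(region.minor_axis_length, 1e-9),
                "orientation": region.orientation,  # radians, image axes
            })
        return stats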

Mashburn, David; Ma, Xiaoyan; Crews, Sarah; Lynch, Holley; McCleery, W. Tyler; Hutson, M. Shane

2011-03-01

95

Quantifying uncertainty in global aerosol and forcing  

NASA Astrophysics Data System (ADS)

Aerosol-cloud-climate effects are a major source of uncertainty in climate models so it is important to identify and quantify the sources of the uncertainty and thereby direct research efforts. Here we perform a variance-based sensitivity analysis of a global 3-D aerosol microphysics model to quantify the magnitude and leading causes of uncertainty in model-estimated present-day CCN concentrations, cloud drop number concentrations and indirect forcing. New emulator techniques enable an unprecedented amount of statistical information to be extracted from a global aerosol model. Twenty-eight model parameters covering essentially all important aerosol processes and emissions were defined based on expert elicitation. A sensitivity analysis was then performed based on a Monte Carlo-type sampling of an emulator built for each monthly model grid cell from an ensemble of 168 one-year model simulations covering the uncertainty space of the 28 parameters. Variance decomposition enables the importance of the parameters for CCN uncertainty to be ranked from local to global scales. Among the most important parameters are the sizes of primary particles and the cloud-processing of aerosol, but most of the parameters are important for CCN uncertainty somewhere on the globe. We also show that uncertainties in forcing over the industrial period are sensitive to a different set of parameters than those that are important for present-day CCN.
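
Variance-based (Sobol) sensitivity analysis of the kind described can be sketched with the SALib package; the three-parameter toy response below stands in for the 28-parameter aerosol model and its per-grid-cell emulators:

    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["primary_size", "cloud_proc", "emission"],
        "bounds": [[10, 100], [0.1, 1.0], [0.5, 2.0]],
    }

    def model(p):
        size, proc, emis = p
        return emis * proc / np.log(size)   # arbitrary toy response

    X = saltelli.sample(problem, 1024)    # Saltelli sampling for Sobol
    Y = np.apply_along_axis(model, 1, X)  # emulator evaluations in practice
    Si = sobol.analyze(problem, Y)
    print(dict(zip(problem["names"], Si["S1"])))  # first-order shares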

Carslaw, Ken; Lee, Lindsay; Reddington, Carly; Mann, Graham; Spracklen, Dominick; Stier, Philip; Pierce, Jeffrey

2013-04-01

96

Quantifying Uncertainty in Epidemiological Models  

SciTech Connect

Modern epidemiology has made use of a number of mathematical models, including ordinary differential equation (ODE) based models and agent based models (ABMs), to describe how a disease may spread within a population and to enable the rational design of intervention strategies that effectively contain the spread. Although such predictions are of fundamental importance in preventing the next global pandemic, there remains a significant gap in the extent to which outcomes/predictions based solely on such models can be trusted. Hence, there is a need for approaches by which mathematical models can be calibrated against historical data. In addition, there is a need for rigorous uncertainty quantification approaches that can provide insight into when a model will fail and characterize the confidence in the (possibly multiple) model outcomes/predictions when such retrospective analysis cannot be performed. In this paper, we outline an approach to uncertainty quantification for epidemiological models using formal methods and model checking. By specifying the outcomes expected from a model in a suitable spatio-temporal logic, we use probabilistic model checking methods to quantify the probability with which the epidemiological model satisfies the specification. We argue that statistical model checking methods can solve the uncertainty quantification problem for complex epidemiological models.
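
Statistical model checking in this spirit reduces, in its simplest form, to a Monte Carlo estimate of the probability that sampled model trajectories satisfy a specification; the SIR model and the threshold property below are illustrative assumptions, not the paper's case study:

    import numpy as np
    from scipy.integrate import solve_ivp

    def sir(t, y, beta, gamma):
        s, i, r = y
        return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

    def satisfies(beta, gamma, threshold=0.3):
        """Specification: infected fraction never exceeds `threshold`."""
        sol = solve_ivp(sir, (0, 160), [0.99, 0.01, 0.0],
                        args=(beta, gamma), max_step=1.0)
        return sol.y[1].max() < threshold

    # Monte Carlo estimate of P(model satisfies spec) under parameter
    # uncertainty: the core of statistical model checking.
    rng = np.random.default_rng(0)
    results = [satisfies(rng.uniform(0.2, 0.5), rng.uniform(0.05, 0.15))
               for _ in range(1000)]
    print(f"P(spec satisfied) ~ {np.mean(results):.3f}")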

Ramanathan, Arvind [ORNL]; Jha, Sumit Kumar [University of Central Florida]

2012-01-01

97

Physiological Relevance of Quantifying Segmental Contraction Synchrony  

PubMed Central

Background Most current indices of synchrony quantify left ventricular (LV) contraction pattern in terms of a single, global (integrated) measure. We report the development and physiological relevance of a novel method to quantify LV segmental contraction synchrony. Methods LV pressure-volume and echocardiographic data were collected in seven anesthetized, opened-chest dogs under several pacing modes: right atrial (RA) (control), right ventricular (RV) (dyssynchrony), and additional LV pacing at either apex (CRTa) or free wall (CRTf). Cross-correlation-based integrated (CCSIint) and segmental (CCSIseg) measures of synchrony were calculated from speckle-tracking derived radial strain, along with a commonly used index (maximum time delay). LV contractility was quantified using either Ees (ESPVR slope) or ESPVRarea (defined in the manuscript). Results RV pacing decreased CCSIint at LV base (0.95 ± 0.02 [RA] vs 0.64 ± 0.14 [RV]; P < 0.05) and only CRTa improved it (0.93 ± 0.03; P < 0.05 vs RV). The CCSIseg analysis identified anteroseptal and septal segments as being responsible for the low CCSIint during RV pacing and inferior segment for poor resynchronization with CRTf. Changes in ESPVRarea, and not in Ees, indicated depressed LV contractility with RV pacing, an observation consistent with significantly decreased global LV performance (stroke work [SW]: 252 ± 23 [RA] vs 151 ± 24 [RV] mJ; P < 0.05). Only CRTa improved SW and contractility (SW: 240 ± 19 mJ; ESPVRarea: 545 ± 175 mmHg•mL; both P < 0.01 vs RV). Only changes in CCSIseg and global LV contractility were strongly correlated (R2 = 0.698, P = 0.005). Conclusion CCSIseg provided insights into the changes in LV integrated contraction pattern and a better link to global LV contractility changes.
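
A cross-correlation based synchrony index of this general kind can be sketched from segmental strain curves; the zero-lag version below is a simplified stand-in for the paper's CCSI definitions:

    import numpy as np

    def ccsi(strains):
        """strains: (n_segments, n_samples) radial strain curves.
        Returns an integrated index (mean pairwise correlation) and a
        per-segment index (each segment's mean correlation with the
        remaining segments)."""
        z = (strains - strains.mean(axis=1, keepdims=True)) \
            / strains.std(axis=1, keepdims=True)
        corr = (z @ z.T) / z.shape[1]   # pairwise correlation matrix
        n = corr.shape[0]
        integrated = corr[~np.eye(n, dtype=bool)].mean()
        segmental = (corr.sum(axis=1) - 1.0) / (n - 1)
        return integrated, segmental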

JOHNSON, LAUREN; LAMIA, BOUCHRA; KIM, HYUNG KOOK; TANABE, MASAKI; GORCSAN, JOHN; SCHWARTZMAN, DAVID; SHROFF, SANJEEV G.; PINSKY, MICHAEL R.

2013-01-01

98

Methods to quantify tank losses are improved  

SciTech Connect

A major revision to the American Petroleum Institute's Publication 2519 provides a tool to accurately quantify evaporative losses and the resultant atmospheric emissions from petroleum stocks stored in internal floating-roof tanks (IFRTs). As a result of significant improvements in the loss calculation procedures included in the revised publication, entitled "Evaporation Loss from Internal Floating Roof Tanks," more accurate stock inventory accounting and improved stock loss control can now be achieved. Also, the cost-effectiveness of alternate floating-roof tank equipment can now be assessed for existing and new tanks. Environmental uses include the development of more accurate emission inventories for existing tankage and the emission estimates required for permitting new tanks. Also, the environmental benefits of proposed emission controls can be evaluated to allow for informed regulatory development. Specifically, the effects of various generic equipment types, generic design details, and stock parameters on total evaporative loss can now be determined, and the three primary sources of loss from IFRTs (rim seals, deck fittings, and bolted deck seams) can be independently quantified. These loss calculation details, developed specifically for IFRTs, greatly enhance the accuracy and usefulness of the third edition over the second edition (1976), which it supersedes. The 1976 edition, which provided only a rough estimate of total loss based on a loss equation previously developed for external floating-roof tanks (EFRTs), had become inadequate for current industry needs.

Hanzevack, K.M.; Anderson, B.D.; Russell, R.L.

1983-09-01

99

Career and Technical Education: Show Us the Buck, We'll Show You the Bang!  

ERIC Educational Resources Information Center

Adult and CTE programs in California have been cut by about 60 percent over the past three years. A number of school districts have summarily eliminated these programs to preserve funding for other educational endeavors. The author says part of the problem has been the community's inability to communicate quantifiable results. One of the hottest…

Whetstone, Ryan

2011-01-01

100

Quantifying the fluvial autogenic processes: Tank Experiments  

NASA Astrophysics Data System (ADS)

The evolution of deltaic shorelines has long been explained by allogenic changes in the environment such as changes in tectonics, base level, and sediment supply. Recently, the importance of autogenic cyclicity has been recognized in concert with allogenic forcing. Decoupling autogenic variability from allogenic signatures is essential in order to understand depositional systems and the stratigraphic record; however, autogenic behavior in sedimentary environments is not understood well enough to separate it from allogenic factors. Data drawn from model experiments that isolate autogenic variability from allogenic forcing are the key to understanding and predicting autogenic responses in fluvial and deltaic systems. Here, three experiments using a constant water discharge (Qw) with a varying sediment flux (Qs) were conducted to examine autogenic variability in a fluviodeltaic system. The experimental basin has dimensions of 1 m x 1 m, and a sediment/water mixture containing 50% fine sand (0.1 mm) and 50% coarse sand (2 mm) by volume was delivered into the basin. The delta was built over a flat, non-erodible surface into a standing body of water with a constant base level and no subsidence. The autogenic responses of the fluvial and deltaic systems were captured by time-lapse images, and the shoreline position was mapped to quantify the autogenic processes. The autogenic responses to varying sediment supply under constant water supply include changes in 1) the slope of the fluvial surface, 2) the frequency of autogenic storage and release events, and 3) shoreline roughness. Interestingly, the data show a non-linear relationship between the frequency of autogenic cyclicity and the ratio of sediment supply to water discharge. Successive increases in sediment supply, and thus in the ratio of Qs to Qw, caused the slope of the fluvial surface to increase and the frequency of autogenic sediment storage and release events to increase, but in a non-linear fashion: the autogenic frequency does not increase by a factor of 2 when the sediment flux increases by a factor of 2. Since the experimental data suggest that the frequency of autogenic variability is also related to the slope of the fluvial surface, an increase in the fluvial slope would force the fluvial system to experience larger autogenic processes over a longer period of time. These three experiments are part of a larger matrix of nine flume experiments exploring variations in sediment supply, water discharge, and Qs/Qw to better understand fluvial autogenic processes.

Powell, E. J.; Kim, W.; Muto, T.

2010-12-01

101

In favour of the definition "adolescents with idiopathic scoliosis": juvenile and adolescent idiopathic scoliosis braced after ten years of age, do not show different end results. SOSORT award winner 2014  

PubMed Central

Background The most important factor discriminating juvenile (JIS) from adolescent idiopathic scoliosis (AIS) is the risk of deformity progression. Brace treatment can change natural history, even when risk of progression is high. The aim of this study was to compare the end of growth results of JIS subjects, treated after 10 years of age, with the final results of AIS. Methods Design: prospective observational controlled cohort study nested in a prospective database. Setting: outpatient tertiary referral clinic specialized in conservative treatment of spinal deformities. Inclusion criteria: idiopathic scoliosis; European Risser 0-2; 25 degrees to 45 degrees Cobb; start treatment age: 10 years or more, never treated before. Exclusion criteria: secondary scoliosis, neurological etiology, prior treatment for scoliosis (brace or surgery). Groups: 27 patients met the inclusion criteria for the AJIS group (juvenile idiopathic scoliosis treated in adolescence), demonstrated by an x-ray before 10 years of age and treatment started after 10 years of age. The AIS group included 45 adolescents with a diagnostic x-ray made after the threshold of age 10 years. Results at the end of growth were analysed; a threshold of 5 Cobb degrees was used to define worsened, improved and stabilized curves. Statistics: Means and SD were used for descriptive statistics of clinical and radiographic changes. The Relative Risk of failure (RR), Chi-square and T-tests were calculated to find differences between the two groups; 95% Confidence Intervals (CI) of radiographic changes were also calculated. Results We did not find any significant Cobb angle differences between groups at baseline or at the end of treatment. The only difference was in the number of patients who progressed above 45 degrees, found in the JIS group. The RR of progression for AJIS versus AIS was 1.35 (95% CI 0.57-3.17), which was not statistically significant (p = 0.5338). Conclusion There are no significant differences in the final results of AIS and JIS treated in adolescence with total respect of the SRS and SOSORT criteria. Brace efficacy can neutralize the risk of progression.

2014-01-01

102

Quantifying dielectrophoretic nanoparticle response to amplitude modulated input signal  

NASA Astrophysics Data System (ADS)

A new experimental system and theoretical model have been developed to systematically quantify and analyse the movement of nanoparticles subjected to a continuously pulsed, or amplitude modulated, dielectrophoretic (DEP) input signal. Modulation of DEP-induced concentration fluctuations of fluorescently labelled 0.5 µm and 1.0 µm diameter latex nanospheres, localized near castellated electrode edges, was quantified using real-time fluorescence microscope dielectrophoretic spectroscopy. Experimental measurements show that the fluorescence fluctuations decrease as the modulation frequency increases, in agreement with model predictions. The modulation frequency was varied from 25 × 10^-3 to 25 Hz and the duty-cycle ratios ranged from zero to unity. Two new parameters for characterizing DEP nanoparticle transport are defined: the modulation frequency bandwidth and the optimal duty-cycle ratio. The 'on/off' modulation bandwidth, for micrometre scale movement, was measured to be 0.6 Hz and 1.0 Hz for 1.0 µm and 0.5 µm diameter nanospheres, respectively. At these cut-off frequencies very little movement of the nanospheres could be observed microscopically. Optimal fluorescence fluctuations, for modulation frequencies ranging from 0.25 to 1.0 Hz, occurred for duty-cycle ratio values ranging from 0.3 to 0.7, in agreement with theory. The results are useful for automated DEP investigations and associated technologies.

Bakewell, D. J.; Chichenkov, A.

2012-09-01

103

A Generalizable Methodology for Quantifying User Satisfaction  

NASA Astrophysics Data System (ADS)

Quantifying user satisfaction is essential, because the results can help service providers deliver better services. In this work, we propose a generalizable methodology, based on survival analysis, to quantify user satisfaction in terms of session times, i.e., the length of time users stay with an application. Unlike subjective human surveys, our methodology is based solely on passive measurement, which is more cost-efficient and better able to capture subconscious reactions. Furthermore, by using session times rather than a specific performance indicator, such as the level of distortion of voice signals, the effects of other factors, like loudness and sidetone, can also be captured by the developed models. Like survival analysis, our methodology is characterized by low complexity and a simple model-developing process. The feasibility of our methodology is demonstrated through case studies of ShenZhou Online, a commercial MMORPG in Taiwan, and the most prevalent VoIP application in the world, namely Skype. Through the model development process, we can also identify the most significant performance factors and their impacts on user satisfaction, and discuss how they can be exploited to improve user experience and optimize resource allocation.
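
Survival analysis of session times can be sketched with the lifelines package; the session records and the latency covariate below are hypothetical:

    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    # Hypothetical sessions: duration (min), whether the session ended
    # normally (1) or was still running when measurement stopped (0),
    # and one QoS covariate.
    df = pd.DataFrame({
        "duration": [12, 45, 30, 5, 60, 22, 8, 50],
        "ended":    [1, 1, 1, 1, 0, 1, 1, 0],
        "latency":  [180, 40, 90, 250, 30, 120, 220, 35],
    })

    kmf = KaplanMeierFitter()
    kmf.fit(df["duration"], event_observed=df["ended"])  # survival curve

    cph = CoxPHFitter()
    cph.fit(df, duration_col="duration", event_col="ended")
    cph.print_summary()  # hazard ratio of latency: effect on quitting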

Huang, Te-Yuan; Chen, Kuan-Ta; Huang, Polly; Lei, Chin-Laung

104

Quantifying einstein-podolsky-rosen steering.  

PubMed

Einstein-Podolsky-Rosen steering is a form of bipartite quantum correlation that is intermediate between entanglement and Bell nonlocality. It allows for entanglement certification when the measurements performed by one of the parties are not characterized (or are untrusted) and has applications in quantum key distribution. Despite its foundational and applied importance, Einstein-Podolsky-Rosen steering lacks a quantitative assessment. Here we propose a way of quantifying this phenomenon and use it to study the steerability of several quantum states. In particular, we show that every pure entangled state is maximally steerable and the projector onto the antisymmetric subspace is maximally steerable for all dimensions; we provide a new example of one-way steering and give strong support that states with positive-partial transposition are not steerable. PMID:24856679

Skrzypczyk, Paul; Navascués, Miguel; Cavalcanti, Daniel

2014-05-01

105

Message passing for quantified Boolean formulas  

NASA Astrophysics Data System (ADS)

We introduce two types of message passing algorithms for quantified Boolean formulas (QBF). The first type is a message-passing-based heuristic that can prove unsatisfiability of a QBF by assigning the universal variables in such a way that the remaining formula is unsatisfiable. In the second type, we use message passing to guide the branching heuristics of a Davis-Putnam-Logemann-Loveland (DPLL) complete solver. Numerical experiments show that on random QBFs our branching heuristics give a robust exponential efficiency gain with respect to state-of-the-art solvers. We also manage to solve some previously unsolved benchmarks from the QBFLIB library. Apart from this, our study sheds light on using message passing in small systems and as subroutines in complete solvers.
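
The second idea, message passing as a branching guide inside a DPLL search, can be illustrated on plain SAT (the universal-variable handling that makes this a QBF solver is omitted); the frequency heuristic below is a stand-in for the marginals a message-passing routine would supply:

    from collections import Counter

    def dpll(clauses, assignment, choose):
        """Minimal DPLL over CNF clauses (sets of non-zero ints, where
        a negative int is a negated variable). `choose(clauses)` returns
        the branching literal."""
        simplified = []
        for c in clauses:
            if c & assignment:
                continue                      # clause already satisfied
            reduced = {l for l in c if -l not in assignment}
            if not reduced:
                return None                   # conflict: empty clause
            simplified.append(reduced)
        if not simplified:
            return assignment                 # all clauses satisfied
        for c in simplified:                  # unit propagation (forced)
            if len(c) == 1:
                return dpll(simplified, assignment | c, choose)
        lit = choose(simplified)
        for branch in (lit, -lit):
            model = dpll(simplified, assignment | {branch}, choose)
            if model is not None:
                return model
        return None

    def most_frequent(clauses):
        """Branch on the most frequent literal; a message-passing guide
        would instead rank literals by estimated marginals."""
        return Counter(l for c in clauses for l in c).most_common(1)[0][0]

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(dpll([{1, 2}, {-1, 3}, {-2, -3}], frozenset(), most_frequent))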

Zhang, Pan; Ramezanpour, Abolfazl; Zdeborová, Lenka; Zecchina, Riccardo

2012-05-01

106

Quantifying Einstein-Podolsky-Rosen Steering  

NASA Astrophysics Data System (ADS)

Einstein-Podolsky-Rosen steering is a form of bipartite quantum correlation that is intermediate between entanglement and Bell nonlocality. It allows for entanglement certification when the measurements performed by one of the parties are not characterized (or are untrusted) and has applications in quantum key distribution. Despite its foundational and applied importance, Einstein-Podolsky-Rosen steering lacks a quantitative assessment. Here we propose a way of quantifying this phenomenon and use it to study the steerability of several quantum states. In particular, we show that every pure entangled state is maximally steerable and the projector onto the antisymmetric subspace is maximally steerable for all dimensions; we provide a new example of one-way steering and give strong support that states with positive-partial transposition are not steerable.

Skrzypczyk, Paul; Navascués, Miguel; Cavalcanti, Daniel

2014-05-01

107

Stimfit: quantifying electrophysiological data with Python  

PubMed Central

Intracellular electrophysiological recordings provide crucial insights into elementary neuronal signals such as action potentials and synaptic currents. Analyzing and interpreting these signals is essential for a quantitative understanding of neuronal information processing, and requires both fast data visualization and ready access to complex analysis routines. To achieve this goal, we have developed Stimfit, a free software package for cellular neurophysiology with a Python scripting interface and a built-in Python shell. The program supports most standard file formats for cellular neurophysiology and other biomedical signals through the Biosig library. To quantify and interpret the activity of single neurons and communication between neurons, the program includes algorithms to characterize the kinetics of presynaptic action potentials and postsynaptic currents, estimate latencies between pre- and postsynaptic events, and detect spontaneously occurring events. We validate and benchmark these algorithms, give estimation errors, and provide sample use cases, showing that Stimfit represents an efficient, accessible and extensible way to accurately analyze and interpret neuronal signals.
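
Spontaneous event detection of the kind mentioned can be sketched as a derivative-threshold crossing with a refractory interval; this is an illustration of the idea in plain NumPy, not the Stimfit API:

    import numpy as np

    def detect_events(trace, dt, threshold, min_interval=0.005):
        """Return event times (s) where the first derivative of `trace`
        crosses `threshold` upward, at least `min_interval` apart."""
        dvdt = np.gradient(trace, dt)
        crossings = np.flatnonzero((dvdt[:-1] < threshold)
                                   & (dvdt[1:] >= threshold))
        events, last = [], -np.inf
        for idx in crossings:
            t = idx * dt
            if t - last >= min_interval:      # refractory period
                events.append(t)
                last = t
        return np.asarray(events)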

Guzman, Segundo J.; Schlogl, Alois; Schmidt-Hieber, Christoph

2013-01-01

108

Severe arrhythmia as a result of the interaction of cetirizine and pilsicainide in a patient with renal insufficiency: first case presentation showing competition for excretion via renal multidrug resistance protein 1 and organic cation transporter 2.  

PubMed

A 72-year-old woman with renal insufficiency who was taking oral pilsicainide (150 mg/d) complained of feeling faint 3 days after she was prescribed oral cetirizine (20 mg/d). She was found to have a wide QRS wave with bradycardia. Her symptoms were relieved by termination of pilsicainide. The plasma concentrations of both drugs were significantly increased during the coadministration, and the cetirizine concentration decreased on cessation of pilsicainide despite the fact that treatment with cetirizine was continued, which suggested that the fainting was induced by the pharmacokinetic drug interaction. A pharmacokinetic study in 6 healthy male volunteers after a single dose of either cetirizine (20 mg) or pilsicainide (50 mg), or both, found that the renal clearance of each drug was significantly decreased by the coadministration of the drugs (from 475 +/- 101 mL/min to 279 +/- 117 mL/min for pilsicainide and from 189 +/- 37 mL/min to 118 +/- 28 mL/min for cetirizine; P = .008 and .009, respectively). In vitro studies using Xenopus oocytes with microinjected human organic cation transporter 2 and renal cells transfected with human multidrug resistance protein 1 revealed that the transport of the substrates of these transporters was inhibited by either cetirizine or pilsicainide. Thus elevated concentrations of these drugs as a result of a pharmacokinetic drug-drug interaction via either human multidrug resistance protein 1 or human organic cation transporter 2 (or both) in the renal tubular cells might have caused the arrhythmia in our patient. Although cetirizine has less potential for causing arrhythmias than other histamine 1 blockers, such an interaction should be considered, especially in patients with renal insufficiency who are receiving pilsicainide. PMID:16580907

Tsuruoka, Shuichi; Ioka, Takashi; Wakaumi, Michi; Sakamoto, Koh-ichi; Ookami, Hitoshi; Fujimura, Akio

2006-04-01

109

Fuzzy Entropy Method for Quantifying Supply Chain Networks Complexity  

NASA Astrophysics Data System (ADS)

A supply chain is a special kind of complex network. Its complexity and uncertainty make it very difficult to control and manage. Supply chains are faced with a rising complexity of products, structures, and processes. Because of the strong link between a supply chain's complexity and its efficiency, supply chain complexity management has become a major challenge of today's business management. The aim of this paper is to quantify the complexity and organization level of an industrial network, working towards the development of a 'Supply Chain Network Analysis' (SCNA). By measuring flows of goods and interaction costs between different sectors of activity within the supply chain borders, a network of flows is built and successively investigated by network analysis. The results of this study show that our approach can provide an interesting conceptual perspective in which the modern supply network can be framed, and that network analysis can handle these issues in practice.
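
As a rough illustration of entropy-based complexity for a flow network, the sketch below computes the Shannon entropy of an inter-sector goods-flow matrix; the paper's fuzzy entropy formulation differs in detail, and the flow values are invented:

    import numpy as np

    def flow_entropy(flows):
        """Shannon entropy (bits) of the flow distribution between
        sectors; higher entropy = flows spread more evenly, i.e. a
        less organized, more complex network."""
        p = flows / flows.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # Hypothetical matrix: flows[i, j] = goods from sector i to sector j.
    flows = np.array([[0, 40, 10],
                      [5,  0, 30],
                      [20, 5,  0]], dtype=float)
    print(f"network entropy: {flow_entropy(flows):.2f} bits")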

Zhang, Jihui; Xu, Junqin

110

Quantifying Drosophila food intake: comparative analysis of current methodology.  

PubMed

Food intake is a fundamental parameter in animal studies. Despite the prevalent use of Drosophila in laboratory research, precise measurements of food intake remain challenging in this model organism. Here, we compare several common Drosophila feeding assays: the capillary feeder (CAFE), food labeling with a radioactive tracer or colorimetric dye and observations of proboscis extension (PE). We show that the CAFE and radioisotope labeling provide the most consistent results, have the highest sensitivity and can resolve differences in feeding that dye labeling and PE fail to distinguish. We conclude that performing the radiolabeling and CAFE assays in parallel is currently the best approach for quantifying Drosophila food intake. Understanding the strengths and limitations of methods for measuring food intake will greatly advance Drosophila studies of nutrition, behavior and disease. PMID:24681694

Deshpande, Sonali A; Carvalho, Gil B; Amador, Ariadna; Phillips, Angela M; Hoxha, Sany; Lizotte, Keith J; Ja, William W

2014-05-01

111

Quantifying Global Uncertainties in a Simple Microwave Rainfall Algorithm  

NASA Technical Reports Server (NTRS)

While a large number of methods exist in the literature for retrieving rainfall from passive microwave brightness temperatures, little has been written about the quantitative assessment of the expected uncertainties in these rainfall products at various time and space scales. The latter is the result of two factors: sparse validation sites over most of the world's oceans, and algorithm sensitivities to rainfall regimes that cause inconsistencies against validation data collected at different locations. To make progress in this area, a simple probabilistic algorithm is developed. The algorithm uses an a priori database constructed from the Tropical Rainfall Measuring Mission (TRMM) radar data coupled with radiative transfer computations. Unlike efforts designed to improve rainfall products, this algorithm takes a step backward in order to focus on uncertainties. In addition to inversion uncertainties, the construction of the algorithm allows errors resulting from incorrect databases, incomplete databases, and time- and space-varying databases to be examined. These are quantified. Results show that the simple algorithm reduces errors introduced by imperfect knowledge of precipitation radar (PR) rain by a factor of 4 relative to an algorithm that is tuned to the PR rainfall. Database completeness does not introduce any additional uncertainty at the global scale, while climatologically distinct space/time domains add approximately 25% uncertainty that cannot be detected by a radiometer alone. Of this value, 20% is attributed to changes in cloud morphology and microphysics, while 5% is a result of changes in the rain/no-rain thresholds. All but 2%-3% of this variability can be accounted for by considering the implicit assumptions in the algorithm. Additional uncertainties introduced by the details of the algorithm formulation are not quantified in this study because of the need for independent measurements that are beyond the scope of this paper. A validation strategy for these errors is outlined.
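
The a priori database idea reduces, in its simplest form, to a Bayesian weighted average over database profiles; the Gaussian channel-error model and all numbers below are illustrative assumptions, not the paper's TRMM-based algorithm:

    import numpy as np

    def bayes_rain(tb_obs, db_tb, db_rain, sigma=2.0):
        """Posterior-mean rain rate from an a priori database.
        db_tb: (n, k) brightness temperatures of n database profiles;
        db_rain: (n,) matching rain rates; tb_obs: (k,) observation;
        sigma: assumed channel error (K)."""
        d2 = np.sum((db_tb - tb_obs) ** 2, axis=1) / sigma**2
        w = np.exp(-0.5 * d2)             # Gaussian likelihood weights
        w /= w.sum()
        mean = np.sum(w * db_rain)
        var = np.sum(w * (db_rain - mean) ** 2)
        return mean, np.sqrt(var)         # retrieval and its uncertainty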

Kummerow, Christian; Berg, Wesley; Thomas-Stahle, Jody; Masunaga, Hirohiko

2006-01-01

112

Processing queries with quantifiers a horticultural approach  

Microsoft Academic Search

Most research on query processing has focussed on quantifier-free conjunctive queries. Existing techniques for processing queries with quantifiers either compile the query into a nested loop program or use variants of Codd's reduction from the Relational Calculus to the Relational Algebra. In this paper we propose an alternative technique that uses an algebra of graft and prune operations on trees.

Umeshwar Dayal

1983-01-01

113

Locality in Syntax and Floating Numeral Quantifiers  

Microsoft Academic Search

We defend the idea that a floating quantifier observes syntactic locality with its associated noun phrase. This idea has given rise to a number of important empirical insights, including the VP-internal subject position, intermediate traces, and NP-traces. Recently, this syntactic locality of floating quantifiers has been questioned in a number of languages. We take up evidence from Japanese that purports

Shigeru Miyagawa; Koji Arikawa

2007-01-01

114

Scalar Quantifiers: Logic, Acquisition, and Processing  

ERIC Educational Resources Information Center

Superlative quantifiers ("at least 3", "at most 3") and comparative quantifiers ("more than 2", "fewer than 4") are traditionally taken to be interdefinable: the received view is that "at least n" and "at most n" are equivalent to "more than n-1" and "fewer than n+1", respectively. Notwithstanding the prima facie plausibility of this claim, Geurts…

Geurts, Bart; Katsos, Napoleon; Cummins, Chris; Moons, Jonas; Noordman, Leo

2010-01-01

115

Quantifying Coral Reef Ecosystem Services  

EPA Science Inventory

Coral reefs have been declining during the last four decades as a result of both local and global anthropogenic stresses. Numerous research efforts to elucidate the nature, causes, magnitude, and potential remedies for the decline have led to the widely held belief that the recov...

116

Quantifying diet for nutrigenomic studies.  

PubMed

The field of nutrigenomics shows tremendous promise for improved understanding of the effects of dietary intake on health. The knowledge that metabolic pathways may be altered in individuals with genetic variants in the presence of certain dietary exposures offers great potential for personalized nutrition advice. However, although considerable resources have gone into improving technology for measurement of the genome and biological systems, dietary intake assessment remains inadequate. Each of the methods currently used has limitations that may be exaggerated in the context of gene × nutrient interaction in large multiethnic studies. Because of the specificity of most gene × nutrient interactions, valid data are needed for nutrient intakes at the individual level. Most statistical adjustment efforts are designed to improve estimates of nutrient intake distributions in populations and are unlikely to solve this problem. An improved method of direct measurement of individual usual dietary intake that is unbiased across populations is urgently needed. PMID:23642200

Tucker, Katherine L; Smith, Caren E; Lai, Chao-Qiang; Ordovas, Jose M

2013-01-01

117

Quantifying global international migration flows.  

PubMed

Widely available data on the number of people living outside of their country of birth do not adequately capture contemporary intensities and patterns of global migration flows. We present data on bilateral flows between 196 countries from 1990 through 2010 that provide a comprehensive view of international migration flows. Our data suggest a stable intensity of global 5-year migration flows at ~0.6% of world population since 1995. In addition, the results aid the interpretation of trends and patterns of migration flows to and from individual countries by placing them in a regional or global context. We estimate the largest movements to occur between South and West Asia, from Latin to North America, and within Africa. PMID:24675962

Abel, Guy J; Sander, Nikola

2014-03-28

118

Quantifying comparisons of threshold resummations  

NASA Astrophysics Data System (ADS)

We explore similarities and differences between widely-used threshold resummation formalisms, employing electroweak boson production as an instructive example. Resummations based on both full QCD and soft-collinear effective theory (SCET) share common underlying factorizations and resulting evolution equations. The formalisms differ primarily in their choices of boundary conditions for evolution, in moment space for many treatments based on full QCD, and in momentum space for treatments based on soft-collinear effective theory. At the level of factorized hadronic cross sections, these choices lead to quite different expressions. Nevertheless, we can identify a natural expansion for parton luminosity functions, in which SCET and full QCD resummations agree for the first term, and for which subsequent terms provide differences that are small in most cases. We also clarify the roles of the non-leading resummation constants in the two formalisms, and observe a relationship of the QCD resummation function D(α_s) to the web expansion.

Sterman, George; Zeng, Mao

2014-05-01

119

Asia: Showing the Changing Seasons  

NSDL National Science Digital Library

SeaWiFS false color data showing seasonal change in the oceans and on land for Asia. The data is seasonally averaged, and shows the sequence: fall, winter, spring, summer, fall, winter, spring (for the Northern Hemisphere).

Allen, Jesse; Newcombe, Marte; Feldman, Gene

1998-09-09

120

Quantifying the value of redundant measurements at GRUAN sites  

NASA Astrophysics Data System (ADS)

The potential for measurement redundancy to reduce uncertainty in atmospheric variables has not been investigated comprehensively for climate observations. We evaluated the usefulness of entropy and mutual correlation concepts, as defined in information theory, for quantifying random uncertainty and redundancy in time series of atmospheric water vapor provided by five highly instrumented GRUAN (GCOS [Global Climate Observing System] Reference Upper-Air Network) stations in 2010-2012. Results show that the random uncertainties for radiosonde, frost-point hygrometer, Global Positioning System, microwave and infrared radiometers, and Raman lidar measurements differed by less than 8%. Comparisons of time series of the Integrated Water Vapor (IWV) content from ground-based remote sensing instruments with in situ soundings showed that microwave radiometers have the highest redundancy and therefore the highest potential to reduce the random uncertainty of IWV time series estimated by radiosondes. Moreover, the random uncertainty of a time series from one instrument can be reduced by ~60% by constraining the measurements with those from another instrument. The best reduction of random uncertainty resulted from conditioning Raman lidar measurements on microwave radiometer measurements. Specific instruments are recommended for atmospheric water vapor measurements at GRUAN sites. This approach can be applied to the study of redundant measurements for other climate variables.
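
The entropy and mutual-information quantities used here can be estimated from two time series with simple histogram estimators; the sketch below is a generic illustration with synthetic water-vapor series, not the GRUAN processing:

    import numpy as np

    def mutual_info(x, y, bins=20):
        """Histogram estimates of H(X), H(Y) and I(X;Y) in bits; I(X;Y)
        is the reduction in uncertainty of X from conditioning on Y."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        def h(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))
        hx, hy = h(pxy.sum(axis=1)), h(pxy.sum(axis=0))
        return hx, hy, hx + hy - h(pxy.ravel())  # I = H(X)+H(Y)-H(X,Y)

    # Two correlated, hypothetical IWV series (e.g. sonde vs radiometer).
    rng = np.random.default_rng(0)
    iwv_sonde = rng.gamma(4.0, 5.0, 5000)
    iwv_mwr = iwv_sonde + rng.normal(0, 1.5, 5000)
    hx, hy, mi = mutual_info(iwv_sonde, iwv_mwr)
    print(f"H(sonde) = {hx:.2f} bits, I = {mi:.2f} bits "
          f"(~{mi/hx:.0%} uncertainty reduction)")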

Madonna, F.; Rosoldi, M.; Güldner, J.; Haefele, A.; Kivi, R.; Cadeddu, M. P.; Sisterson, D.; Pappalardo, G.

2014-06-01

121

Quantifying climatological ranges and anomalies for Pacific coral reef ecosystems.  

PubMed

Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic-biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations (Jarvis, Howland, Baker, Palmyra and Kingman) are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations. These metrics will help identify reef ecosystems most exposed to environmental stress as well as systems that may be more resistant or resilient to future climate change. PMID:23637939

Gove, Jamison M; Williams, Gareth J; McManus, Margaret A; Heron, Scott F; Sandin, Stuart A; Vetter, Oliver J; Foley, David G

2013-01-01

122

Quantifying Tsunami Impact on Structures  

NASA Astrophysics Data System (ADS)

Tsunami impact is usually assessed through inundation simulations and maps which provide estimates of coastal flooding zones based on "credible worst case" scenarios. Earlier maps relied on one-dimensional computations, but two-dimensional computations are now employed routinely. In some cases, the maps do not represent flooding from any particular scenario event, but present an inundation line that reflects the worst inundation at this particular location among a range of scenario events. Current practice in tsunami resistant design relies on estimates of tsunami impact forces derived from empirical relationships that have been borrowed from riverine flooding calculations, which involve only inundation elevations. We examine this practice critically. Recent computational advances allow for calculation of additional parameters from scenario events such as the detailed distributions of tsunami currents and fluid accelerations, and this suggests that alternative and more comprehensive expressions for calculating tsunami impact and tsunami forces should be examined. We do so, using model output for multiple inundation simulations of Seaside, Oregon, as part of a pilot project to develop probabilistic tsunami hazard assessment methodologies for incorporation into FEMA Flood Insurance Rate Maps. We consider three different methods, compare the results with existing methodology for estimating forces and impact, and discuss the implications of these methodologies for probabilistic tsunami hazard assessment.

Yalciner, A. C.; Kanoglu, U.; Titov, V.; Gonzalez, F.; Synolakis, C. E.

2004-12-01

123

Deciphering faces: quantifiable visual cues to weight.  

PubMed

Body weight plays a crucial role in mate choice, as weight is related to both attractiveness and health. People are quite accurate at judging weight in faces, but the cues used to make these judgments have not been defined. This study consisted of two parts. First, we wanted to identify quantifiable facial cues that are related to body weight, as defined by body mass index (BMI). Second, we wanted to test whether people use these cues to judge weight. In study 1, we recruited two groups of Caucasian and two groups of African participants, determined their BMI and measured their 2-D facial images for: width-to-height ratio, perimeter-to-area ratio, and cheek-to-jaw-width ratio. All three measures were significantly related to BMI in males, while the width-to-height and cheek-to-jaw-width ratios were significantly related to BMI in females. In study 2, these images were rated for perceived weight by Caucasian observers. We showed that these observers use all three cues to judge weight in African and Caucasian faces of both sexes. These three facial cues, width-to-height ratio, perimeter-to-area ratio, and cheek-to-jaw-width ratio, are therefore not only related to actual weight but provide a basis for perceptual attributes as well. PMID:20301846

Coetzee, Vinet; Chen, Jingying; Perrett, David I; Stephen, Ian D

2010-01-01

124

Quantifying asymmetry: Ratios and alternatives.  

PubMed

Traditionally, the study of metric skeletal asymmetry has relied largely on univariate analyses, utilizing ratio transformations when the goal is comparing asymmetries in skeletal elements or populations of dissimilar dimensions. Under this approach, raw asymmetries are divided by a size marker, such as a bilateral average, in an attempt to produce size-free asymmetry indices. Henceforth, this will be referred to as "controlling for size" (see Smith: Curr Anthropol 46 (2005) 249-273). Ratios obtained in this manner often require further transformations to interpret the meaning and sources of asymmetry. This model frequently ignores the fundamental assumption of ratios: the relationship between the variables entered in the ratio must be isometric. Violations of this assumption can obscure existing asymmetries and render spurious results. In this study, we examined the performance of the classic indices in detecting and portraying the asymmetry patterns in four human appendicular bones and explored potential methodological alternatives. Examination of the ratio model revealed that it does not fulfill its intended goals in the bones examined, as the numerator and denominator are independent in all cases. The ratios also introduced strong biases in the comparisons between different elements and variables, generating spurious asymmetry patterns. Multivariate analyses strongly suggest that any transformation to control for overall size or variable range must be conducted before, rather than after, calculating the asymmetries. A combination of exploratory multivariate techniques, such as Principal Components Analysis, and confirmatory linear methods, such as regression and analysis of covariance, appear as a promising and powerful alternative to the use of ratios. Am J Phys Anthropol 154:498-511, 2014. © 2014 Wiley Periodicals, Inc. PMID:24842694

Franks, Erin M; Cabo, Luis L

2014-08-01

125

Quantifying drug-protein binding in vivo.  

SciTech Connect

Accelerator mass spectrometry (AMS) provides precise quantitation of isotope labeled compounds that are bound to biological macromolecules such as DNA or proteins. The sensitivity is high enough to allow for sub-pharmacological ("micro-") dosing to determine macromolecular targets without inducing toxicities or altering the system under study, whether it is healthy or diseased. We demonstrated an application of AMS in quantifying the physiologic effects of one dosed chemical compound upon the binding level of another compound in vivo at sub-toxic doses [4]. We are using tissues left from this study to develop protocols for quantifying specific binding to isolated and identified proteins. We also developed a new technique to quantify nanogram to milligram amounts of isolated protein at precisions that are comparable to those for quantifying the bound compound by AMS.

Buchholz, B; Bench, G; Keating III, G; Palmblad, M; Vogel, J; Grant, P G; Hillegonds, D

2004-02-17

126

Quantifying Effective Flow and Transport Properties in Heterogeneous Porous Media  

NASA Astrophysics Data System (ADS)

Spatial heterogeneity, the spatial variation in physical and chemical properties, exists at almost all scales and is an intrinsic property of natural porous media. It is important to understand and quantify how small-scale spatial variations determine large-scale "effective" properties in order to predict fluid flow and transport behavior in the natural subsurface. In this work, we aim to systematically understand and quantify the role of the spatial distribution of sand grains of different sizes in determining effective dispersivity and effective permeability using quasi-2D flow-cell experiments and numerical simulations. Two-dimensional flow cells (20 cm by 20 cm) were packed with the same total amount of fine and coarse sands, however with different spatial patterns. The homogeneous case has completely mixed fine and coarse sands. The four zone case distributes the fine sand in four identical square zones within the coarse sand matrix. The one square case has all the fine sand in one square block. With the one square case pattern, two more experiments were designed in order to examine the effect of grain size contrast on effective permeability and dispersivity. Effective permeability was calculated based on both experimental and modeling results. Tracer tests were run for all cases. Advection dispersion equations were solved to match breakthrough data and to obtain average dispersivity. We also used Continuous Time Random Walk (CTRW) to quantify the non-Fickian transport behavior for each case. For the three cases with the same grain size contrast, the results show that the effective permeability does not differ significantly. The effective dispersivity is smallest for the homogeneous case (0.05 cm) and largest for the four zone case (0.27 cm). With the same pattern, the dispersivity is largest with the highest size contrast (0.28 cm), a factor of 2 higher than that of the lowest-contrast case. The non-Fickian behavior was quantified by the β value within the CTRW framework. Fickian transport results in β values larger than 2, while deviation from 2 indicates the extent of non-Fickian behavior. Among the three cases with the same grain size contrast, the β value is closest to 2 in the homogeneous case (1.95) and smallest in the four zone case (1.89). In the one square case with the highest size contrast, the β value was 1.57, indicating an increasing extent of non-Fickian behavior with higher size contrast. This study is one step toward understanding how small-scale spatial variation in physical properties affects large-scale flow and transport behavior. This step is important in predicting subsurface transport processes that are relevant to earth sciences, environmental engineering, and petroleum engineering.
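
Average dispersivity of the kind fitted here is commonly extracted by matching breakthrough data to the one-dimensional advection-dispersion solution; the sketch below fits the leading Ogata-Banks term, with column length, velocity, and data all assumed for illustration:

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    L, V = 0.20, 1.0e-4   # flow length (m), pore velocity (m/s), assumed

    def breakthrough(t, alpha):
        """C/C0 at x = L (leading Ogata-Banks term), with D = alpha*V."""
        D = alpha * V
        return 0.5 * erfc((L - V * t) / (2.0 * np.sqrt(D * t)))

    # Hypothetical tracer data: relative concentration vs time (s).
    t = np.linspace(500, 4000, 15)
    c = breakthrough(t, 0.003) \
        + np.random.default_rng(3).normal(0, 0.01, t.size)

    (alpha_fit,), _ = curve_fit(breakthrough, t, c, p0=[0.001])
    print(f"fitted dispersivity = {alpha_fit * 100:.2f} cm")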

Heidari, P.; Li, L.

2012-12-01

127

Planning a Successful Tech Show  

ERIC Educational Resources Information Center

Tech shows are a great way to introduce prospective students, parents, and local business and industry to a technology and engineering or career and technical education program. In addition to showcasing instructional programs, a tech show allows students to demonstrate their professionalism and skills, practice public presentations, and interact…

Nikirk, Martin

2011-01-01

128

Quantifying Urban Groundwater in Environmental Field Observatories  

NASA Astrophysics Data System (ADS)

Despite the growing footprint of urban landscapes and their impacts on hydrologic and biogeochemical cycles, comprehensive field studies of urban water budgets are few. The cumulative effects of urban infrastructure (buildings, roads, culverts, storm drains, detention ponds, leaking water supply and wastewater pipe networks) on temporal and spatial patterns of groundwater stores, fluxes, and flowpaths are poorly understood. The goal of this project is to develop expertise and analytical tools for urban groundwater systems that will inform future environmental observatory planning and that can be shared with research teams working in urban environments elsewhere. The work plan for this project draws on a robust set of information resources in Maryland provided by ongoing monitoring efforts of the Baltimore Ecosystem Study (BES), USGS, and the U.S. Forest Service working together with university scientists and engineers from multiple institutions. A key concern is to bridge the gap between small-scale intensive field studies and larger-scale and longer-term hydrologic patterns using synoptic field surveys, remote sensing, numerical modeling, data mining and visualization tools. Using the urban water budget as a unifying theme, we are working toward estimating the various elements of the budget in order to quantify the influence of urban infrastructure on groundwater. Efforts include: (1) comparison of base flow behavior from stream gauges in a nested set of watersheds at four different spatial scales from 0.8 to 171 km2, with diverse patterns of impervious cover and urban infrastructure; (2) synoptic survey of well water levels to characterize the regional water table; (3) use of airborne thermal infrared imagery to identify locations of groundwater seepage into streams across a range of urban development patterns; (4) use of seepage transects and tracer tests to quantify the spatial pattern of groundwater fluxes to the drainage network in selected subwatersheds; (5) development of a mass balance for precipitation over a 170 km2 area on a 1x1 km2 grid using recording rain gages for bias correction of weather radar products; (6) calculation of urban evapotranspiration using the Penman-Monteith method compared with results from an eddy correlation station; (7) use of a numerical groundwater model in a screening mode to estimate the depth of groundwater contributing surface water flow; and (8) data mining of public agency records of potable water and wastewater flows to estimate leakage rates and flowpaths in relation to streamflow and groundwater fluxes.
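
For item (1), base flow is often separated from total streamflow with a recursive digital filter; a minimal sketch of the standard one-parameter (Lyne-Hollick) filter follows, as one plausible way to compare base flow behavior across gauges, not necessarily the BES procedure:

    import numpy as np

    def baseflow_lyne_hollick(q, alpha=0.925, passes=3):
        """One-parameter recursive filter for baseflow separation.
        q: streamflow series; alpha: filter parameter (0.9-0.95 typical).
        Alternating forward/backward passes smooth the estimate."""
        b = np.asarray(q, dtype=float)
        for p in range(passes):
            series = b if p % 2 == 0 else b[::-1]
            qf = np.zeros_like(series)        # filtered quickflow
            for i in range(1, len(series)):
                qf[i] = alpha * qf[i - 1] \
                        + 0.5 * (1 + alpha) * (series[i] - series[i - 1])
            bf = np.clip(series - np.maximum(qf, 0.0), 0.0, series)
            b = bf if p % 2 == 0 else bf[::-1]
        return b                              # baseflow; quickflow = q - b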

Welty, C.; Miller, A. J.; Belt, K.; Smith, J. A.; Band, L. E.; Groffman, P.; Scanlon, T.; Warner, J.; Ryan, R. J.; Yeskis, D.; McGuire, M. P.

2006-12-01

129

Quantifying uncertainty, variability and likelihood for ordinary differential equation models  

Microsoft Academic Search

BACKGROUND: In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both, uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. RESULTS: The partial differential equation that describes the evolution of this probability density function has a form that is particularly

Andrea Y Weiße; Richard H Middleton; Wilhelm Huisinga

2010-01-01

130

Quantifying Hepatic Shear Modulus In Vivo Using Acoustic Radiation Force  

Microsoft Academic Search

The speed at which shear waves propagate in tissue can be used to quantify the shear modulus of the tissue. As many groups have shown, shear waves can be generated within tissues using focused, impulsive, acoustic radiation force excitations, and the resulting displacement response can be ultrasonically tracked through time. The goals of the work herein are twofold: (i) to
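
Under the usual simplifying assumptions (linear elastic, isotropic, locally homogeneous tissue), the first sentence corresponds to mu = rho * c^2, so converting a tracked wave speed into a modulus is one line; the numbers below are hypothetical:

    rho = 1000.0     # soft-tissue density, kg/m^3 (assumed)
    c_shear = 1.8    # tracked shear wave speed, m/s (hypothetical)
    mu = rho * c_shear**2                        # shear modulus, Pa
    print(f"shear modulus ~ {mu / 1000:.1f} kPa")  # ~3.2 kPa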

M. L. Palmeri; M. H. Wang; J. J. Dahl; K. D. Frinkley; K. R. Nightingale

2008-01-01

131

QUANTIFYING HEPATIC SHEAR MODULUS IN VIVO USING ACOUSTIC RADIATION FORCE  

Microsoft Academic Search

The speed at which shear waves propagate in tissue can be used to quantify the shear modulus of the tissue. As many groups have shown, shear waves can be generated within tissues using focused, impulsive, acoustic radiation force excitations, and the resulting displacement response can be ultrasonically tracked through time. The goals of the work herein are twofold: (i) to

M. L. Palmeri; M. H. Wang; J. J. Dahl; K. D. Frinkley; K. R. Nightingale

2008-01-01

132

Quantifying Wrinkle Features of Thin Membrane Structures  

NASA Technical Reports Server (NTRS)

For future micro-systems utilizing membrane based structures, quantified predictions of wrinkling behavior in terms of amplitude, angle and wavelength are needed to optimize the efficiency and integrity of such structures, as well as their associated control systems. For numerical analyses performed in the past, limitations on the accuracy of membrane distortion simulations have often been related to the assumptions made. This work demonstrates that critical assumptions include: effects of gravity, supposed initial or boundary conditions, and the type of element used to model the membrane. In this work, a 0.2 m x 0.2 m membrane is treated as a structural material with non-negligible bending stiffness. Finite element modeling is used to simulate wrinkling behavior due to a constant applied in-plane shear load. Membrane thickness, gravity effects, and initial imperfections with respect to flatness were varied in numerous nonlinear analysis cases. Significant findings include notable variations in wrinkle modes for thicknesses in the range of 50 microns to 1000 microns, which also depend on the presence of an applied gravity field. However, it is revealed that the relationships between overall strain energy density and thickness for cases with differing initial conditions are independent of the assumed initial conditions. In addition, analysis results indicate that the relationship between wrinkle amplitude scale (W/t) and structural scale (L/t) is independent of the nonlinear relationship between thickness and stiffness.

Jacobson, Mindy B.; Iwasa, Takashi; Naton, M. C.

2004-01-01

133

Quantifying selection in immune receptor repertoires  

PubMed Central

The efficient recognition of pathogens by the adaptive immune system relies on the diversity of receptors displayed at the surface of immune cells. T-cell receptor diversity results from an initial random DNA editing process, called VDJ recombination, followed by functional selection of cells according to the interaction of their surface receptors with self and foreign antigenic peptides. Using high-throughput sequence data from the β-chain of human T-cell receptors, we infer factors that quantify the overall effect of selection on the elements of receptor sequence composition: the V and J gene choice and the length and amino acid composition of the variable region. We find a significant correlation between biases induced by VDJ recombination and our inferred selection factors together with a reduction of diversity during selection. Both effects suggest that natural selection acting on the recombination process has anticipated the selection pressures experienced during somatic evolution. The inferred selection factors differ little between donors or between naive and memory repertoires. The number of sequences shared between donors is well-predicted by our model, indicating a stochastic origin of such public sequences. Our approach is based on a probabilistic maximum likelihood method, which is necessary to disentangle the effects of selection from biases inherent in the recombination process.
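
The simplest version of such selection factors is the ratio of a feature's frequency in observed sequences to its frequency under the recombination model; the sketch below illustrates this ratio estimator (the paper's maximum likelihood inference over joint features is more involved), with invented CDR3 lengths:

    from collections import Counter

    def selection_factors(observed, generated):
        """Q(k) = f_observed(k) / f_generated(k) for each feature value
        k (e.g. CDR3 length or V-gene choice). Q > 1 suggests enrichment
        by selection, Q < 1 selection against."""
        obs, gen = Counter(observed), Counter(generated)
        n_obs, n_gen = sum(obs.values()), sum(gen.values())
        return {k: (obs[k] / n_obs) / (gen[k] / n_gen)
                for k in gen if obs[k] > 0}

    # Hypothetical CDR3 lengths: data repertoire vs raw VDJ-model draws.
    observed = [14, 15, 15, 16, 14, 15, 13, 15, 14, 16]
    generated = [12, 14, 15, 17, 13, 16, 15, 18, 14, 15]
    print(selection_factors(observed, generated))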

Elhanati, Yuval; Murugan, Anand; Callan, Curtis G.; Mora, Thierry; Walczak, Aleksandra M.

2014-01-01

134

Quantifying selection in immune receptor repertoires.  

PubMed

The efficient recognition of pathogens by the adaptive immune system relies on the diversity of receptors displayed at the surface of immune cells. T-cell receptor diversity results from an initial random DNA editing process, called VDJ recombination, followed by functional selection of cells according to the interaction of their surface receptors with self and foreign antigenic peptides. Using high-throughput sequence data from the β-chain of human T-cell receptors, we infer factors that quantify the overall effect of selection on the elements of receptor sequence composition: the V and J gene choice and the length and amino acid composition of the variable region. We find a significant correlation between biases induced by VDJ recombination and our inferred selection factors together with a reduction of diversity during selection. Both effects suggest that natural selection acting on the recombination process has anticipated the selection pressures experienced during somatic evolution. The inferred selection factors differ little between donors or between naive and memory repertoires. The number of sequences shared between donors is well-predicted by our model, indicating a stochastic origin of such public sequences. Our approach is based on a probabilistic maximum likelihood method, which is necessary to disentangle the effects of selection from biases inherent in the recombination process. PMID:24941953

Elhanati, Yuval; Murugan, Anand; Callan, Curtis G; Mora, Thierry; Walczak, Aleksandra M

2014-07-01

135

Quantifying Ant Activity Using Vibration Measurements  

PubMed Central

Ant behaviour is of great interest due to their sociality. Ant behaviour is typically observed visually; however, there are many circumstances where visual observation is not possible. It may be possible to assess ant behaviour using the vibration signals produced by their physical movement. We demonstrate through a series of bioassays with different stimuli that the level of activity of meat ants (Iridomyrmex purpureus) can be quantified using vibrations, corresponding to observations with video. We found that ants exposed to physical shaking produced the highest average vibration amplitudes, followed by ants with stones to drag, then ants with neighbours, illuminated ants and ants in darkness. In addition, we devised a novel method based on wavelet decomposition to separate the vibration signal due to the initial ant behaviour from the substrate response, which will allow signals recorded from different substrates to be compared directly. Our results indicate the potential to use vibration signals to classify some ant behaviours in situations where visual observation could be difficult.

Oberst, Sebastian; Baro, Enrique Nava; Lai, Joseph C. S.; Evans, Theodore A.

2014-01-01
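
The wavelet-decomposition idea in this abstract can be sketched as follows, assuming a synthetic vibration trace, a 'db4' wavelet and a five-level decomposition; the wavelet family, decomposition depth and sampling rate are illustrative choices, not taken from the paper.

    import numpy as np
    import pywt

    # Synthetic stand-in for a recorded substrate vibration trace.
    fs = 1000.0
    t = np.arange(0, 2.0, 1.0 / fs)
    signal = np.sin(2 * np.pi * 40 * t) + 0.3 * np.random.randn(t.size)

    # Multilevel discrete wavelet decomposition: the coarse approximation
    # tracks the slow substrate response, while the detail coefficients
    # carry the faster, behaviour-related components.
    coeffs = pywt.wavedec(signal, 'db4', level=5)
    approx, details = coeffs[0], coeffs[1:]

    # Per-level signal energy as a crude activity measure.
    energies = [float(np.sum(c ** 2)) for c in details]
    print(energies)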

136

Quantifying ant activity using vibration measurements.  

PubMed

Ant behaviour is of great interest due to their sociality. Ant behaviour is typically observed visually; however, there are many circumstances where visual observation is not possible. It may be possible to assess ant behaviour using the vibration signals produced by their physical movement. We demonstrate through a series of bioassays with different stimuli that the level of activity of meat ants (Iridomyrmex purpureus) can be quantified using vibrations, corresponding to observations with video. We found that ants exposed to physical shaking produced the highest average vibration amplitudes, followed by ants with stones to drag, then ants with neighbours, illuminated ants and ants in darkness. In addition, we devised a novel method based on wavelet decomposition to separate the vibration signal due to the initial ant behaviour from the substrate response, which will allow signals recorded from different substrates to be compared directly. Our results indicate the potential to use vibration signals to classify some ant behaviours in situations where visual observation could be difficult. PMID:24658467

Oberst, Sebastian; Baro, Enrique Nava; Lai, Joseph C S; Evans, Theodore A

2014-01-01

137

Quantified Energy Dissipation Rates in the Terrestrial Bow Shock  

NASA Astrophysics Data System (ADS)

We present the first observationally quantified measure of the energy dissipation rate due to wave-particle interactions in the transition region of the Earth's collisionless bow shock using data from the THEMIS spacecraft. Each of more than 11 bow shock crossings examined with available wave burst data showed both low frequency (<10 Hz) magnetosonic-whistler waves and high frequency (≥10 Hz) electromagnetic and electrostatic waves throughout the entire transition region and into the magnetosheath. The high frequency waves were identified as combinations of ion-acoustic waves, electron cyclotron drift instability driven waves, electrostatic solitary waves, and electromagnetic whistler mode waves. These waves were found to have: (1) amplitudes capable of exceeding δB ~ 10 nT and δE ~ 300 mV/m, though more typical values were δB ~ 0.1-1.0 nT and δE ~ 10-50 mV/m; (2) energy fluxes in excess of 2000 μW m-2; (3) resistivities > 9000 Ω m; and (4) energy dissipation rates > 3 μW m-3. The high frequency (>10 Hz) electromagnetic waves produce such excessive energy dissipation that they need only be, at times, < 0.01% efficient to produce the observed increase in entropy across the shocks necessary to balance the nonlinear wave steepening that produces the shocks. These results show that wave-particle interactions have the capacity to regulate the global structure and dominate the energy dissipation of collisionless shocks.

Wilson, L. B., III; Sibeck, D. G.; Breneman, A. W.; Le Contel, O.; Cully, C.; Turner, D. L.; Angelopoulos, V.; Malaspina, D. M.

2013-12-01
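
The quoted peak amplitudes can be checked for rough consistency with the quoted energy fluxes using the plane-wave Poynting flux |S| = δE·δB/μ0. This is a back-of-envelope estimate only, not the authors' full wave-particle calculation.

    import math

    mu0 = 4 * math.pi * 1e-7   # vacuum permeability [H/m]
    dE = 300e-3                # peak wave electric field [V/m]
    dB = 10e-9                 # peak wave magnetic field [T]

    # Plane-wave Poynting flux |S| = dE * dB / mu0; the quoted peak
    # amplitudes give ~2.4e-3 W/m^2, i.e. ~2400 uW/m^2, consistent with
    # the ">2000 uW/m^2" energy fluxes reported in the abstract.
    S = dE * dB / mu0
    print(f"{S:.2e} W/m^2 = {S * 1e6:.0f} uW/m^2")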

138

Quantifying variability within water samples: the need for adequate subsampling.  

PubMed

Accurate and precise determination of the concentration of nutrients and other substances in waterbodies is an essential requirement for supporting effective management and legislation. Owing primarily to logistic and financial constraints, however, national and regional agencies responsible for monitoring surface waters tend to quantify chemical indicators of water quality using a single sample from each waterbody, thus largely ignoring spatial variability. We show here that total sample variability, which comprises both analytical variability and within-sample heterogeneity, of a number of important chemical indicators of water quality (chlorophyll a, total phosphorus, total nitrogen, soluble molybdate-reactive phosphorus and dissolved inorganic nitrogen) varies significantly both over time and among determinands, and can be extremely high. Within-sample heterogeneity, whose mean contribution to total sample variability ranged between 62% and 100%, was significantly higher in samples taken from rivers than in those from lakes, and was shown to be reduced by filtration. Our results show clearly that neither a single sample nor even two sub-samples from that sample are adequate for the reliable, and statistically robust, detection of changes in the quality of surface waters. We recommend strongly that, in situations where it is practicable to take only a single sample from a waterbody, a minimum of three sub-samples be analysed from that sample for robust quantification of both the concentrations of determinands and total sample variability. PMID:17706740

Donohue, Ian; Irvine, Kenneth

2008-01-01
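
The split of total sample variability into analytical variability and within-sample heterogeneity can be sketched with a classic one-way variance-components estimate. The data below are hypothetical (three sub-samples, two analytical replicates each); the study's actual design and determinands differ.

    import numpy as np

    # Hypothetical concentrations (ug/L): rows are sub-samples of one
    # water sample, columns are analytical replicates of each sub-sample.
    replicates = np.array([[52.0, 54.0],
                           [61.0, 60.0],
                           [47.0, 49.0]])

    # Analytical variability: spread of replicates within sub-samples.
    var_analytical = replicates.var(axis=1, ddof=1).mean()

    # Within-sample heterogeneity: spread of sub-sample means, corrected
    # for the analytical contribution to that spread.
    n_reps = replicates.shape[1]
    var_between = replicates.mean(axis=1).var(ddof=1)
    var_heterogeneity = max(var_between - var_analytical / n_reps, 0.0)

    total = var_analytical + var_heterogeneity
    print(f"heterogeneity share: {100 * var_heterogeneity / total:.0f}%")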

139

Quantifying Uncertainties in Land Surface Microwave Emissivity Retrievals  

NASA Technical Reports Server (NTRS)

Uncertainties in the retrievals of microwave land surface emissivities were quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including SSM/I, TMI and AMSR-E, were studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging from 1-4% (3-12 K) over desert and 1-7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5-2% (2-6 K). In particular, at 85.0/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.

Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko

2012-01-01
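
The distinction between systematic and random differences among retrieval datasets amounts to separating the mean and the scatter of their difference. A minimal sketch with hypothetical emissivity time series follows; the assumed surface temperature of 300 K used to convert to brightness-temperature units is illustrative.

    import numpy as np

    # Hypothetical monthly emissivity retrievals at one frequency over a
    # desert site from two different sensors/groups.
    retrieval_a = np.array([0.92, 0.93, 0.91, 0.94, 0.92, 0.93])
    retrieval_b = np.array([0.95, 0.94, 0.95, 0.96, 0.94, 0.95])

    diff = retrieval_b - retrieval_a
    systematic = diff.mean()        # bias between the two datasets
    random_err = diff.std(ddof=1)   # scatter about that bias

    # Rough conversion to brightness temperature, assuming a surface
    # temperature of ~300 K (delta-T_b ~ delta-emissivity * T_s).
    print(systematic * 300.0, random_err * 300.0)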

140

Comparisons show construction management's benefits.  

PubMed

Analysis of competitive bid, design-build, and construction management approaches shows that construction management offers owners substantial advantages regarding project cost and quality and substantial involvement and control throughout the process. PMID:110667

Payette, T M

1979-09-01

141

Using Graphs to Show Connections  

NSDL National Science Digital Library

The purpose of this resource is to show how graphs of GLOBE data over time reveal the interconnectedness of Earth's system components at the local level. Students visit a study site, where they observe and recall their existing knowledge of air, water, soil, and living things to make a list of interconnections among the four Earth system components. They make predictions about the effects of a change in a system, inferring ways these changes affect the characteristics of other related components.

The GLOBE Program, University Corporation for Atmospheric Research (UCAR)

2003-08-01

142

Shakespeare and other English Renaissance authors as characterized by Information Theory complexity quantifiers  

NASA Astrophysics Data System (ADS)

We introduce novel Information Theory quantifiers in a computational linguistic study that involves a large corpus of English Renaissance literature. The 185 texts studied (136 plays and 49 poems in total), with first editions that range from 1580 to 1640, form a representative set of its period. Our data set includes 30 texts unquestionably attributed to Shakespeare; in addition we also included A Lover’s Complaint, a poem which generally appears in Shakespeare collected editions but whose authorship is currently in dispute. Our statistical complexity quantifiers combine the power of Jensen-Shannon’s divergence with the entropy variations as computed from a probability distribution function of the observed word use frequencies. Our results show, among other things, that for a given entropy poems display higher complexity than plays, that Shakespeare’s work falls into two distinct clusters in entropy, and that his work is remarkable for its homogeneity and for its closeness to overall means.

Rosso, Osvaldo A.; Craig, Hugh; Moscato, Pablo

2009-03-01
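
The core quantities behind these complexity quantifiers, Shannon entropy and the Jensen-Shannon divergence of word-frequency distributions, can be sketched directly. The five-word distributions below are toy stand-ins for the observed word-use frequencies of two texts.

    import numpy as np

    def entropy(p):
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    def jensen_shannon(p, q):
        # JSD(p, q) = H((p + q) / 2) - (H(p) + H(q)) / 2
        m = 0.5 * (p + q)
        return entropy(m) - 0.5 * entropy(p) - 0.5 * entropy(q)

    # Toy word-frequency distributions over a shared vocabulary.
    text_a = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
    text_b = np.array([0.35, 0.20, 0.20, 0.10, 0.15])

    print(entropy(text_a), entropy(text_b), jensen_shannon(text_a, text_b))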

143

Quantifying Arm Non-use in Individuals Post-stroke  

PubMed Central

Background: Arm non-use, defined as the difference between what the individual can do when constrained to use the paretic arm and what the individual does do when given a free choice to use either arm, has not yet been quantified in individuals post-stroke. Objectives: (1) to quantify non-use post-stroke and (2) to develop and test a novel, simple, objective, reliable, and valid instrument, the Bilateral Arm Reaching Test (BART), to quantify arm use and non-use post-stroke. Methods: First, we quantify non-use with the Quality of Movement (QOM) subscale of the Actual Amount of Use Test (AAUT) by subtracting the AAUT QOM score in the spontaneous use condition from the AAUT QOM score in a subsequent constrained use condition. Second, we quantify arm use and non-use with BART by comparing reaching performance to visual targets projected over a 2D horizontal hemi-workspace in a spontaneous-use condition (in which participants are free to use either arm at each trial) to reaching performance in a constrained-use condition. Results: All participants (N = 24) with chronic stroke and with mild to moderate impairment exhibited non-use with the AAUT QOM. Non-use with BART had excellent test-retest reliability and good external validity. Conclusions: BART is the first instrument that can be used repeatedly and practically in the clinic to quantify the effects of neuro-rehabilitation on arm use and non-use, and in the laboratory for advancing theoretical knowledge about the recovery of arm use and the development of non-use and ‘learned non-use’ after stroke.

Han, Cheol E.; Kim, Sujin; Chen, Shuya; Lai, Yi-Hsuan; Lee, Jeong-Yoon; Lee, Jihye; Osu, Rieko; Winstein, Carolee J.; Schweighofer, Nicolas

2014-01-01

144

Towards quantifying cochlear implant localization performance in complex acoustic environments.  

PubMed

Cochlear implant (CI) users frequently report listening difficulties in reverberant and noisy spaces. While it is common to assess speech understanding with implants in background noise, binaural hearing performance has rarely been quantified in the presence of other sources, although the binaural system is a major contributor to the robustness of speech understanding in noisy situations with normal hearing. Here, a pointing task was used to measure horizontal localization ability of a bilateral CI user in quiet and in a continuous diffuse noise interferer at a signal-to-noise ratio of 0 dB. Results were compared to localization performance of six normal hearing listeners. The average localization error of the normal hearing listeners was within normal ranges reported previously and only increased by 1.8° when the interfering noise was introduced. In contrast, the bilateral CI user showed a localization error of 22° in quiet which rose to 31° in noise. This increase was partly due to target sounds being inaudible when presented from frontal locations between -20° and +20°. With the noise present, the implant user was only able to reliably hear target sounds presented from locations well off the median plane. The results give support to the informal complaints raised by CI users and can help to define targets for the design of, e.g., noise reduction algorithms for implant processors. PMID:21917214

Kerber, S; Seeber, B U

2011-08-01

145

Quantifying effects of ramp metering on freeway safety.  

PubMed

This study presents a real-time crash prediction model and uses this model to investigate the effect of a local traffic-responsive ramp metering strategy on freeway safety. Safety benefits of ramp metering are quantified in terms of the reduced crash potential estimated by the real-time crash prediction model. Driver responses to ramp metering and the consequent traffic flow changes were observed using a microscopic traffic simulation model, and crash potential was estimated for a 14.8 km section of I-880 in Hayward, California and a hypothetical isolated on-ramp network. The results showed that ramp metering reduced crash potential by 5-37% compared to the no-control case. It was found that the safety benefits of the local ramp metering strategy were restricted to the freeway sections in the vicinity of the ramp, and were highly dependent on the existing traffic conditions and the spatial extent over which the evaluation was conducted. The results provide some insight into how a local ramp metering strategy can be modified to improve safety (by reducing total crash potential) on a longer stretch of freeway over a wide range of traffic conditions. PMID:16329982

Lee, Chris; Hellinga, Bruce; Ozbay, Kaan

2006-03-01

146

The OOPSLA trivia show (TOOTS)  

Microsoft Academic Search

OOPSLA has a longstanding tradition of being a forum for discussing the cutting edge of technology in a fun and participatory environment. The type of events sponsored by OOPSLA sometimes border on the unconventional. This event represents an atypical panel that conforms to the concept of a game show that is focused on questions and answers related to OOPSLA themes.

Jeff Gray; Douglas C. Schmidt

2009-01-01

147

COMPLEXITY & APPROXIMABILITY OF QUANTIFIED & STOCHASTIC CONSTRAINT SATISFACTION PROBLEMS  

SciTech Connect

Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be arbitrary finite sets of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (respectively, to variables and symbols in C) by SAT(S) (respectively, SATc(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S), etc., for many different such D, C, S, T. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend the earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems for a number of the problems above, including the problems MAX-Q-SAT(S) and MAX-S-SAT(S). The discovery of such dichotomy theorems, for unquantified formulas, has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97]. Keywords: NP-hardness; Approximation Algorithms; PSPACE-hardness; Quantified and Stochastic Constraint Satisfaction Problems.

H. B. HUNT; M. V. MARATHE; R. E. STEARNS

2001-06-01

148

Quantifying the Ripple: Word-of-Mouth and Advertising Effectiveness  

Microsoft Academic Search

In this article the authors demonstrate how a customer lifetime value approach can provide a better assessment of advertising effectiveness that takes into account postpurchase behaviors such as word-of-mouth. Although for many advertisers word-of-mouth is viewed as an alternative to advertising, the authors show that it is possible to quantify the way in which word-of-mouth often complements and extends the

JOHN E. HOGAN; KATHERINE N. LEMON; BARAK LIBAI

2004-01-01

149

On the Freeze Quantifier in Constraint LTL: Decidability and Complexity  

Microsoft Academic Search

Constraint LTL, a generalization of LTL over Presburger constraints, is often used as a formal language to specify the behavior of operational models with constraints. The freeze quantifier can be part of the language, as in some real-time logics, but this variable-binding mechanism is quite general and ubiquitous in many logical languages (first-order temporal logics, hybrid logics, logics for sequence diagrams, navigation logics, etc.). We show that Constraint

Stéphane Demri; Ranko Lazic; David Nowak

2005-01-01

150

Taking the high (or low) road: a quantifier priming perspective on basic anchoring effects.  

PubMed

Current explanations of basic anchoring effects, defined as the influence of an arbitrary number standard on an uncertain judgment, confound numerical values with vague quantifiers. I show that the consideration of numerical anchors may bias subsequent judgments primarily through the priming of quantifiers, rather than the numbers themselves. Study 1 varied the target of a numerical comparison judgment in a between-participants design, while holding the numerical anchor value constant. This design yielded an anchoring effect consistent with a quantifier priming hypothesis. Study 2 included a direct manipulation of vague quantifiers in the traditional anchoring paradigm. Finally, Study 3 examined the notion that specific associations between quantifiers, reflecting values on separate judgmental dimensions (i.e., the price and height of a target), can affect the direction of anchoring effects. Discussion focuses on the nature of vague quantifier priming in numerically anchored judgments. PMID:23951950

Sleeth-Keppler, David

2013-01-01

151

Cross-Linguistic Relations between Quantifiers and Numerals in Language Acquisition: Evidence from Japanese  

ERIC Educational Resources Information Center

A study of 104 Japanese-speaking 2- to 5-year-olds tested the relation between numeral and quantifier acquisition. A first study assessed Japanese children's comprehension of quantifiers, numerals, and classifiers. Relative to English-speaking counterparts, Japanese children were delayed in numeral comprehension at 2 years of age but showed no…

Barner, David; Libenson, Amanda; Cheung, Pierina; Takasaki, Mayu

2009-01-01

152

Quantifying solute transport processes: are chemically "conservative" tracers electrically conservative?  

USGS Publications Warehouse

The concept of a nonreactive or conservative tracer, commonly invoked in investigations of solute transport, requires additional study in the context of electrical geophysical monitoring. Tracers that are commonly considered conservative may undergo reactive processes, such as ion exchange, thus changing the aqueous composition of the system. As a result, the measured electrical conductivity may reflect not only solute transport but also reactive processes. We have evaluated the impacts of ion exchange reactions, rate-limited mass transfer, and surface conduction on quantifying tracer mass, mean arrival time, and temporal variance in laboratory-scale column experiments. Numerical examples showed that (1) ion exchange can lead to resistivity-estimated tracer mass, velocity, and dispersivity that may be inaccurate; (2) mass transfer leads to an overestimate in the mobile tracer mass and an underestimate in velocity when using electrical methods; and (3) surface conductance does not notably affect estimated moments when high-concentration tracers are used, although this phenomenon may be important at low concentrations or in sediments with high and/or spatially variable cation-exchange capacity. In all cases, colocated groundwater concentration measurements are of high importance for interpreting geophysical data with respect to the controlling transport processes of interest.

Singha, Kamini; Li, Li; Day-Lewis, Frederick D.; Regberg, Aaron B.

2012-01-01
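
The moments used to compare electrical and concentration estimates, tracer mass, mean arrival time and temporal variance, are the temporal moments of a breakthrough curve. A minimal sketch with a synthetic pulse follows.

    import numpy as np

    # Synthetic breakthrough curve: concentration vs. time at the outlet.
    t = np.linspace(0, 100, 201)              # time [h]
    c = np.exp(-0.5 * ((t - 40) / 8) ** 2)    # arbitrary pulse shape

    # Zeroth moment ~ recovered mass, normalized first moment ~ mean
    # arrival time, second central moment ~ temporal variance (spreading).
    m0 = np.trapz(c, t)
    mean_arrival = np.trapz(t * c, t) / m0
    variance = np.trapz((t - mean_arrival) ** 2 * c, t) / m0
    print(m0, mean_arrival, variance)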

153

Tunable path centrality: Quantifying the importance of paths in networks  

NASA Astrophysics Data System (ADS)

Centrality is a fundamental measure in network analysis. Specifically, the centrality of a path describes the importance of the path with respect to the remaining part of the network. In this paper, we propose a tunable path centrality (TPC) measure, which quantifies the centrality of a path by integrating the path degree (PD) (the number of neighbors of the path) and the path bridge (PB) (the number of bridges in the path) through a control parameter. Considering the complexity of the large-scale and dynamical topologies of many real-world networks, both PD and PB are computed with only the local topological structure of a path. We demonstrate the distribution of the three path centralities (TPC, PD and PB) in computer-generated networks and real-world networks. Furthermore, we apply the three path centralities to the network fragility problem, and explore the distribution of the optimal control parameter through simulation and analysis. Finally, the simulation results show that TPC is generally more efficient than PD and PB in the network fragility problem. These path centralities are also applicable to many other network problems, including spread, control, prediction and so on.

Pu, Cun-Lai; Cui, Wei; Yang, Jian

2014-07-01
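
The two ingredients of the tunable measure, the path degree (neighbours of the path) and the path bridge (bridges on the path), are straightforward to compute from local structure. The sketch below combines them as a weighted sum with a control parameter lam; the published measure couples them through a control parameter as well, but its exact functional form may differ from this weighted sum.

    import networkx as nx

    def path_degree(G, path):
        # Nodes adjacent to the path but not on it.
        on_path = set(path)
        nbrs = set()
        for v in path:
            nbrs.update(G.neighbors(v))
        return len(nbrs - on_path)

    def path_bridge(G, path):
        # Path edges that are bridges of the whole graph.
        bridges = set(map(frozenset, nx.bridges(G)))
        return sum(frozenset(e) in bridges for e in zip(path[:-1], path[1:]))

    def tunable_path_centrality(G, path, lam=0.5):
        # One plausible combination of the two ingredients.
        return lam * path_degree(G, path) + (1 - lam) * path_bridge(G, path)

    G = nx.karate_club_graph()
    print(tunable_path_centrality(G, [0, 1, 2, 3], lam=0.7))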

154

Quantifying Interactions of β-Synuclein and γ-Synuclein with Model Membranes  

PubMed Central

The synucleins are a family of proteins involved in numerous neurodegenerative pathologies [α-synuclein and β-synuclein (βS)], as well as in various types of cancers [γ-synuclein (γS)]. While the connection between α-synuclein and Parkinson's disease is well established, recent evidence links point mutants of βS to dementia with Lewy bodies. Overexpression of γS has been associated with enhanced metastasis and cancer drug resistance. Despite their prevalence in such a variety of diseases, the native functions of the synucleins remain unclear. They have a lipid-binding motif in their N-terminal region, which suggests interactions with biological membranes in vivo. In this study, we used fluorescence correlation spectroscopy to monitor the binding properties of βS and γS to model membranes and to determine the free energy of the interactions. Our results show that the interactions are most strongly affected by the presence of both anionic lipids and bilayer curvature, while membrane fluidity plays a very minor role. Quantifying the lipid-binding properties of βS and γS provides additional insights into the underlying factors governing the protein–membrane interactions. Such insights not only are relevant to the native functions of these proteins but also highlight their contributions to pathological conditions that are either mediated or characterized by perturbations of these interactions.

Ducas, Vanessa C.; Rhoades, Elizabeth

2012-01-01

155

Quantifying thiol-gold interactions towards the efficient strength control.  

PubMed

The strength of the thiol-gold interactions provides the basis to fabricate robust self-assembled monolayers for diverse applications. Investigation on the stability of thiol-gold interactions has thus become a hot topic. Here we use atomic force microscopy to quantify the stability of individual thiol-gold contacts formed both by isolated single thiols and in self-assembled monolayers on gold surface. Our results show that the oxidized gold surface can enhance greatly the stability of gold-thiol contacts. In addition, the shift of binding modes from a coordinate bond to a covalent bond with the change in environmental pH and interaction time has been observed experimentally. Furthermore, isolated thiol-gold contact is found to be more stable than that in self-assembled monolayers. Our findings revealed mechanisms to control the strength of thiol-gold contacts and will help guide the design of thiol-gold contacts for a variety of practical applications. PMID:25000336

Xue, Yurui; Li, Xun; Li, Hongbin; Zhang, Wenke

2014-01-01

156

Statistical physics approach to quantifying differences in myelinated nerve fibers  

NASA Astrophysics Data System (ADS)

We present a new method to quantify differences in myelinated nerve fibers. These differences range from morphologic characteristics of individual fibers to differences in macroscopic properties of collections of fibers. Our method uses statistical physics tools to improve on traditional measures, such as fiber size and packing density. As a case study, we analyze cross-sectional electron micrographs from the fornix of young and old rhesus monkeys using a semi-automatic detection algorithm to identify and characterize myelinated axons. We then apply a feature selection approach to identify the features that best distinguish between the young and old age groups, achieving a maximum accuracy of 94% when assigning samples to their age groups. This analysis shows that the best discrimination is obtained using the combination of two features: the fraction of occupied axon area and the effective local density. The latter is a modified calculation of axon density, which reflects how closely axons are packed. Our feature analysis approach can be applied to characterize differences that result from biological processes such as aging, damage from trauma or disease or developmental differences, as well as differences between anatomical regions such as the fornix and the cingulum bundle or corpus callosum.

Comin, César H.; Santos, João R.; Corradini, Dario; Morrison, Will; Curme, Chester; Rosene, Douglas L.; Gabrielli, Andrea; da F. Costa, Luciano; Stanley, H. Eugene

2014-03-01

157

Statistical physics approach to quantifying differences in myelinated nerve fibers.  

PubMed

We present a new method to quantify differences in myelinated nerve fibers. These differences range from morphologic characteristics of individual fibers to differences in macroscopic properties of collections of fibers. Our method uses statistical physics tools to improve on traditional measures, such as fiber size and packing density. As a case study, we analyze cross-sectional electron micrographs from the fornix of young and old rhesus monkeys using a semi-automatic detection algorithm to identify and characterize myelinated axons. We then apply a feature selection approach to identify the features that best distinguish between the young and old age groups, achieving a maximum accuracy of 94% when assigning samples to their age groups. This analysis shows that the best discrimination is obtained using the combination of two features: the fraction of occupied axon area and the effective local density. The latter is a modified calculation of axon density, which reflects how closely axons are packed. Our feature analysis approach can be applied to characterize differences that result from biological processes such as aging, damage from trauma or disease or developmental differences, as well as differences between anatomical regions such as the fornix and the cingulum bundle or corpus callosum. PMID:24676146

Comin, César H; Santos, João R; Corradini, Dario; Morrison, Will; Curme, Chester; Rosene, Douglas L; Gabrielli, Andrea; Costa, Luciano da F; Stanley, H Eugene

2014-01-01

158

Quantifying the effect of intertrial dependence on perceptual decisions.  

PubMed

In the perceptual sciences, experimenters study the causal mechanisms of perceptual systems by probing observers with carefully constructed stimuli. It has long been known, however, that perceptual decisions are not only determined by the stimulus, but also by internal factors. Internal factors could lead to a statistical influence of previous stimuli and responses on the current trial, resulting in serial dependencies, which complicate the causal inference between stimulus and response. However, the majority of studies do not take serial dependencies into account, and it has been unclear how strongly they influence perceptual decisions. We hypothesize that one reason for this neglect is that there has been no reliable tool to quantify them and to correct for their effects. Here we develop a statistical method to detect, estimate, and correct for serial dependencies in behavioral data. We show that even trained psychophysical observers suffer from strong history dependence. A substantial fraction of the decision variance on difficult stimuli was independent of the stimulus but dependent on experimental history. We discuss the strong dependence of perceptual decisions on internal factors and its implications for correct data interpretation. PMID:24944238

Fründ, Ingo; Wichmann, Felix A; Macke, Jakob H

2014-01-01
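
A common way to detect the serial dependencies described here is to regress the current response on the stimulus together with history terms; the sketch below simulates an observer with a known history weight and recovers it by logistic regression. All weights and the simulation itself are illustrative, not the authors' model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    stimulus = rng.uniform(-1, 1, n)

    # Simulated observer: the response depends on the stimulus and, with
    # weight 0.8, on the previous response (serial dependence).
    resp = np.zeros(n, dtype=int)
    for i in range(1, n):
        drive = 3.0 * stimulus[i] + 0.8 * (2 * resp[i - 1] - 1)
        resp[i] = int(rng.random() < 1.0 / (1.0 + np.exp(-drive)))

    # Regress the current response on stimulus and previous response;
    # a clearly non-zero second coefficient flags history dependence.
    X = np.column_stack([stimulus[1:], 2 * resp[:-1] - 1])
    model = LogisticRegression().fit(X, resp[1:])
    print(model.coef_)   # [stimulus weight, history weight]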

159

Quantifying intrachromosomal GC heterogeneity in prokaryotic genomes  

Microsoft Academic Search

The sequencing of prokaryotic genomes covering a wide taxonomic range has sparked renewed interest in intrachromosomal compositional (GC) heterogeneity, largely in view of lateral transfers. We present here a brief overview of some methods for visualizing and quantifying GC variation in prokaryotes. We used these methods to examine heterogeneity levels in sequenced prokaryotes, for a range of scales or stringencies.

P. Bernaola-Galvan; Jose L. Oliver; Pedro Carpena; Oliver Clay; Giorgio Bernardi

2004-01-01
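
The simplest way to visualize intrachromosomal GC variation, before any of the segmentation or stringency analyses mentioned above, is a sliding-window GC fraction; a minimal sketch follows, with a toy sequence standing in for a chromosome read from FASTA.

    def gc_windows(seq, window=1000, step=500):
        # GC fraction in overlapping windows along the sequence.
        seq = seq.upper()
        out = []
        for start in range(0, len(seq) - window + 1, step):
            chunk = seq[start:start + window]
            out.append((start, (chunk.count('G') + chunk.count('C')) / window))
        return out

    # Toy sequence; a real analysis would read a prokaryotic chromosome.
    print(gc_windows("ATGC" * 600, window=800, step=400)[:3])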

160

A model to quantify wastewater odor strength  

Microsoft Academic Search

A method of quantifying the odor strength of wastewater samples has been investigated. Wastewater samples from two locations of a wastewater treatment plant were collected and subjected to air stripping. The off-gas odor concentration was measured by a dynamic olfactometer at various time intervals. Applying a first order model to the decay of odorous substances in the wastewater under air

Lawrence C. C. Koe; N. C. Tan

1988-01-01
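
The first-order model mentioned in this (truncated) abstract is linear in log space, so the decay constant follows from a straight-line fit to ln(C) versus time. The concentrations below are hypothetical.

    import numpy as np

    # Hypothetical off-gas odor concentrations (odor units) at successive
    # air-stripping times (hours).
    t = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
    c = np.array([1200.0, 850.0, 610.0, 310.0, 80.0])

    # C(t) = C0 * exp(-k t)  =>  ln C = ln C0 - k t, a straight line.
    slope, ln_c0 = np.polyfit(t, np.log(c), 1)
    print(f"k = {-slope:.2f} per hour, C0 = {np.exp(ln_c0):.0f} odor units")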

161

Quantifiable Effects of Noise on Humans.  

National Technical Information Service (NTIS)

Quantifiable effects of noise are defined as those effects that demonstrate both a clear causal relationship, and also an increase of severity of the effect with the magnitude of the noise exposure. While possible noise effects, such as elevated blood pre...

D. L. Johnson

1980-01-01

162

Quantifying the Reuse of Learning Objects  

ERIC Educational Resources Information Center

This paper reports the findings of one case study from a larger project, which aims to quantify the claimed efficiencies of reusing learning objects to develop e-learning resources. The case study describes how an online inquiry project "Diabetes: A waste of energy" was developed by searching for, evaluating, modifying and then integrating as many…

Elliott, Kristine; Sweeney, Kevin

2008-01-01

163

Quantifying Learning Outcomes: A Gentle Approach.  

ERIC Educational Resources Information Center

In fall 1993, Coffeyville Community College (CCC) in Kansas announced the implementation of a new program known as Quantifying Learning Outcomes (QLO), one component of the college's Student Outcomes Assessment Plan. QLO calls for a clear articulation of CCC's goals regarding instructional support materials within the next 6 to 12 months; the…

Lind, Donald J.

164

Quantifying cellular alignment on anisotropic biomaterial platforms.  

PubMed

How do we quantify cellular alignment? Cellular alignment is an important technique used to study and promote tissue regeneration in vitro and in vivo. Indeed, regenerative outcomes are often strongly correlated with the efficacy of alignment, making quantitative, automated assessment an important goal for the field of tissue engineering. There currently exist various classes of algorithms, which effectively address the problem of quantifying individual cellular alignments using Fourier methods, kernel methods, and elliptical approximation; however, these algorithms often yield population distributions and are limited by their inability to yield a scalar metric quantifying the efficacy of alignment. The current work builds on these classes of algorithms by adapting the signal processing methods previously used by our group to study the alignment of cellular processes. We use an automated, ellipse-fitting algorithm to approximate cell body alignment with respect to a silk biomaterial scaffold, followed by the application of the normalized cumulative periodogram criterion to produce a scalar value quantifying alignment. The proposed work offers a generalized method for assessing cellular alignment in complex, two-dimensional environments. This method may also offer a novel alternative for assessing the alignment of cell types with polarity, such as fibroblasts, endothelial cells, and mesenchymal stem cells, as well as nuclei. PMID:23520051

Nectow, Alexander R; Kilmer, Misha E; Kaplan, David L

2014-02-01

165

Improved estimates show large circumpolar stocks of permafrost carbon while quantifying substantial uncertainty ranges and identifying remaining data gaps  

NASA Astrophysics Data System (ADS)

Soils and other unconsolidated deposits in the northern circumpolar permafrost region store large amounts of soil organic carbon (SOC). This SOC is potentially vulnerable to remobilization following soil warming and permafrost thaw, but stock estimates are poorly constrained and quantitative error estimates were lacking. This study presents revised estimates of the permafrost SOC pool, including quantitative uncertainty estimates, in the 0-3 m depth range in soils as well as for deeper sediments (>3 m) in deltaic deposits of major rivers and in the Yedoma region of Siberia and Alaska. The revised estimates are based on significantly larger databases compared to previous studies. Compared to previous studies, the number of individual sites/pedons has increased by a factor of 8-11 for soils in the 1-3 m depth range, a factor of 8 for deltaic alluvium and a factor of 5 for Yedoma region deposits. Upscaled based on regional soil maps, estimated permafrost region SOC stocks are 217 ± 15 and 472 ± 34 Pg for the 0-0.3 m and 0-1 m soil depths, respectively (±95% confidence intervals). Depending on the regional subdivision used to upscale 1-3 m soils (following physiography or continents), estimated 0-3 m SOC storage is 1034 ± 183 Pg or 1104 ± 133 Pg. Of this, 34 ± 16 Pg C is stored in thin soils of the High Arctic. Based on generalised calculations, storage of SOC in deep deltaic alluvium (>3 m to ≤60 m depth) of major Arctic rivers is estimated at 91 ± 39 Pg (of which 69 ± 34 Pg is in permafrost). In the Yedoma region, estimated >3 m SOC stocks are 178 +140/-146 Pg, of which 74 +54/-57 Pg is stored in intact, frozen Yedoma (late Pleistocene ice- and organic-rich silty sediments) with the remainder in refrozen thermokarst deposits (16th/84th percentiles of bootstrapped estimates). A total estimated mean storage for the permafrost region of ca. 1300-1370 Pg with an uncertainty range of 930-1690 Pg encompasses the combined revised estimates. Of this, about 819-836 Pg is perennially frozen. While some components of the revised SOC stocks are similar in magnitude to those previously reported for this region, there are also substantial differences in individual components. There is evidence of remaining regional data gaps. Estimates remain particularly poorly constrained for soils in the High Arctic region and physiographic regions with thin sedimentary overburden (mountains, highlands and plateaus) as well as for >3 m depth deposits in deltas and the Yedoma region.

Hugelius, G.; Strauss, J.; Zubrzycki, S.; Harden, J. W.; Schuur, E. A. G.; Ping, C. L.; Schirrmeister, L.; Grosse, G.; Michaelson, G. J.; Koven, C. D.; O'Donnell, J. A.; Elberling, B.; Mishra, U.; Camill, P.; Yu, Z.; Palmtag, J.; Kuhry, P.

2014-03-01
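
The bootstrapped percentile ranges quoted for the Yedoma stocks can be illustrated with a generic resampling sketch; the per-site stocks below are randomly generated stand-ins, and the 16th/84th-percentile convention follows the abstract.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical per-site carbon stocks (kg C m^-2) for one region.
    site_stocks = rng.lognormal(mean=3.0, sigma=0.6, size=50)

    # Bootstrap the regional mean and report a 16th/84th-percentile range.
    boot_means = np.array([
        rng.choice(site_stocks, size=site_stocks.size, replace=True).mean()
        for _ in range(10_000)
    ])
    lo, mid, hi = np.percentile(boot_means, [16, 50, 84])
    print(f"{mid:.1f} (+{hi - mid:.1f}/-{mid - lo:.1f}) kg C m^-2")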

166

Control in bioreactors showing gradients  

Microsoft Academic Search

In large-scale bioreactors gradients often occur as a result of non-ideal mixing. This phenomenon complicates design and control of large-scale bioreactors. Gradients in the oxygen concentration can be modeled with a two-compartment model of the liquid phase. Application of this model had been suggested for the control of the dissolved oxygen concentration with a batch gluconic acid fermentation process as

S. R. Weijers; G. Honderd; K. Ch. A. M. Luyben

1990-01-01

167

Casimir experiments showing saturation effects  

NASA Astrophysics Data System (ADS)

We address several different Casimir experiments where theory and experiment disagree. First out is the classical Casimir force measurement between two metal half spaces; here both in the form of the torsion pendulum experiment by Lamoreaux and in the form of the Casimir pressure measurement between a gold sphere and a gold plate as performed by Decca et al.; theory predicts a large negative thermal correction, absent in the high precision experiments. The third experiment is the measurement of the Casimir force between a metal plate and a laser irradiated semiconductor membrane as performed by Chen et al.; the change in force with laser intensity is larger than predicted by theory. The fourth experiment is the measurement of the Casimir force between an atom and a wall in the form of the measurement by Obrecht et al. of the change in oscillation frequency of a 87Rb Bose-Einstein condensate trapped to a fused silica wall; the change is smaller than predicted by theory. We show that saturation effects can explain the discrepancies between theory and experiment observed in all these cases.

Sernelius, Bo E.

2009-10-01

168

Casimir experiments showing saturation effects  

SciTech Connect

We address several different Casimir experiments where theory and experiment disagree. First out is the classical Casimir force measurement between two metal half spaces; here both in the form of the torsion pendulum experiment by Lamoreaux and in the form of the Casimir pressure measurement between a gold sphere and a gold plate as performed by Decca et al.; theory predicts a large negative thermal correction, absent in the high precision experiments. The third experiment is the measurement of the Casimir force between a metal plate and a laser irradiated semiconductor membrane as performed by Chen et al.; the change in force with laser intensity is larger than predicted by theory. The fourth experiment is the measurement of the Casimir force between an atom and a wall in the form of the measurement by Obrecht et al. of the change in oscillation frequency of a 87Rb Bose-Einstein condensate trapped to a fused silica wall; the change is smaller than predicted by theory. We show that saturation effects can explain the discrepancies between theory and experiment observed in all these cases.

Sernelius, Bo E. [Division of Theory and Modeling, Department of Physics, Chemistry and Biology, Linkoeping University, SE-581 83 Linkoeping (Sweden)

2009-10-15

169

Entropy generation method to quantify thermal comfort.  

PubMed

The present paper presents a thermodynamic approach to assess the quality of human-thermal environment interaction and quantify thermal comfort. The approach involves development of an entropy generation term by applying the second law of thermodynamics to the combined human-environment system. The entropy generation term combines both human thermal physiological responses and thermal environmental variables to provide an objective measure of thermal comfort. The original concepts and definitions form the basis for establishing the mathematical relationship between thermal comfort and the entropy generation term. As a result of this logic- and determinism-based approach, an Objective Thermal Comfort Index (OTCI) is defined and established as a function of entropy generation. In order to verify the entropy-based thermal comfort model, human thermal physiological responses due to changes in ambient conditions are simulated using a well established and validated human thermal model developed at the Institute of Environmental Research of Kansas State University (KSU). The finite element based KSU human thermal computer model is being utilized as a "Computational Environmental Chamber" to conduct a series of simulations to examine the human thermal responses to different environmental conditions. The outputs from the simulation, which include human thermal responses, and input data consisting of environmental conditions are fed into the thermal comfort model. Continuous monitoring of thermal comfort in comfortable and extreme environmental conditions is demonstrated. The Objective Thermal Comfort values obtained from the entropy-based model are validated against regression-based Predicted Mean Vote (PMV) values. Using the corresponding air temperatures and vapor pressures that were used in the computer simulation in the regression equation generates the PMV values. The preliminary results indicate that the OTCI and PMV values correlate well under ideal conditions. However, an experimental study is needed in the future to fully establish the validity of the OTCI formula and the model. One of the practical applications of this index is that it could be integrated in thermal control systems to develop human-centered environmental control systems for potential use in aircraft, mass transit vehicles, intelligent building systems, and space vehicles. PMID:12182196

Boregowda, S C; Tiwari, S N; Chaturvedi, S K

2001-12-01
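
The core of the entropy-generation idea can be sketched for the simplest case of steady sensible heat loss from the body to the surroundings; the paper's full term also covers evaporative and respiratory exchanges and feeds a regression-validated index (OTCI), which this sketch omits.

    # Entropy generated when heat Q flows from skin at T_skin to an
    # environment at T_env (second-law bookkeeping for the combined
    # human-environment system).
    def entropy_generation(q_watts, t_skin_k, t_env_k):
        return q_watts / t_env_k - q_watts / t_skin_k   # [W/K], >= 0

    # Example: ~100 W of heat loss, 34 C skin, 22 C air.
    print(entropy_generation(100.0, 307.15, 295.15))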

170

Quantifying Wetland Functions: A Case Study  

NASA Astrophysics Data System (ADS)

Wetlands are reputed to reduce peak flows and improve water quality by trapping sediment and phosphorus. However, there are relatively few studies that quantify these wetland functions. This paper reports on a study of a 45-hectare wetland in southern Wisconsin. The wetland is traversed by a stream channel that drains a predominantly agricultural 17.4 km2 watershed. During the spring and summer of 2006, we collected stage data and water samples at stations upstream and downstream of the wetland, with the former accounting for 82% of the contributing area. Continuous measurements of water stage at these stations were used to construct a streamflow record. During storm events water samples were taken automatically at 2-hour intervals for the first 12 samples and 8-hour intervals for the next 12 samples. Samples were analyzed for total suspended solids, total phosphorus, and dissolved reactive phosphorus. Ten events were observed during the observation period; the two largest events were 1 to 2-year storms. One-dimensional unsteady flow routing was used to estimate the maximum extent of wetland inundation for each event. When normalized for flow volume, all peak flows were attenuated by the wetland, with the maximum attenuation corresponding to the intermediate events. The reduced attenuation of the larger events appears to be due to filling of storage, either due to antecedent conditions or the event itself. In the case of sediment, the amount leaving the wetland in the two largest storms, which accounted for 96% of the exported sediment during the period of observation, was twice the amount entering the wetland. The failure of the wetland to trap sediment is apparently due to the role of drainage ditches, which trap sediment during the wetland-filling phase and release it during drainage. The export of sediment during the largest events appears to result from remobilization of sediment deposited in the low-gradient stream channel during smaller events. This hypothesis was supported by the finding that the estimated bed shear during large events exceeded laboratory measurements of the critical shear stress of bed sediment samples. In the case of total phosphorus, the inflow to the wetland about equaled the outflow, although the wetland sequestered 40% of the incoming dissolved reactive phosphorus. The discrepancy is almost certainly due to net export of sediment. Wetlands such as this are very common in the glaciated portion of the U.S., and many contain channels and ditches. The region is dominantly agricultural, and sediment and phosphorus are the primary causes of impaired surface-water quality. Our results suggest that these wetlands are not very effective in mitigating this impairment when flow is concentrated in channels.

Potter, K. W.; Rogers, J. S.; Hoffman, A. R.; Wu, C.; Hoopes, J. A.; Armstrong, D. E.

2007-05-01

171

Entropy generation method to quantify thermal comfort  

NASA Technical Reports Server (NTRS)

The present paper presents a thermodynamic approach to assess the quality of human-thermal environment interaction and quantify thermal comfort. The approach involves development of an entropy generation term by applying the second law of thermodynamics to the combined human-environment system. The entropy generation term combines both human thermal physiological responses and thermal environmental variables to provide an objective measure of thermal comfort. The original concepts and definitions form the basis for establishing the mathematical relationship between thermal comfort and the entropy generation term. As a result of this logic- and determinism-based approach, an Objective Thermal Comfort Index (OTCI) is defined and established as a function of entropy generation. In order to verify the entropy-based thermal comfort model, human thermal physiological responses due to changes in ambient conditions are simulated using a well established and validated human thermal model developed at the Institute of Environmental Research of Kansas State University (KSU). The finite element based KSU human thermal computer model is being utilized as a "Computational Environmental Chamber" to conduct a series of simulations to examine the human thermal responses to different environmental conditions. The outputs from the simulation, which include human thermal responses, and input data consisting of environmental conditions are fed into the thermal comfort model. Continuous monitoring of thermal comfort in comfortable and extreme environmental conditions is demonstrated. The Objective Thermal Comfort values obtained from the entropy-based model are validated against regression-based Predicted Mean Vote (PMV) values. Using the corresponding air temperatures and vapor pressures that were used in the computer simulation in the regression equation generates the PMV values. The preliminary results indicate that the OTCI and PMV values correlate well under ideal conditions. However, an experimental study is needed in the future to fully establish the validity of the OTCI formula and the model. One of the practical applications of this index is that it could be integrated in thermal control systems to develop human-centered environmental control systems for potential use in aircraft, mass transit vehicles, intelligent building systems, and space vehicles.

Boregowda, S. C.; Tiwari, S. N.; Chaturvedi, S. K.

2001-01-01

172

DOE: Quantifying the Value of Hydropower in the Electric Grid  

SciTech Connect

The report summarizes research to Quantify the Value of Hydropower in the Electric Grid. This 3-year DOE study focused on defining the value of hydropower assets in a changing electric grid. Methods are described for the valuation and planning of pumped storage and conventional hydropower. The project team conducted plant case studies, electric system modeling, market analysis, cost data gathering, and evaluations of operating strategies and constraints. Five other reports detailing these research results are available at the project website, www.epri.com/hydrogrid. With increasing deployment of wind and solar renewable generation, many owners, operators, and developers of hydropower have recognized the opportunity to provide more flexibility and ancillary services to the electric grid. To quantify the value of services, this study focused on the Western Electric Coordinating Council region. A security-constrained, unit commitment and economic dispatch model was used to quantify the role of hydropower for several future energy scenarios up to 2020. This hourly production simulation considered transmission requirements to deliver energy, including future expansion plans. Both energy and ancillary service values were considered. Addressing specifically the quantification of pumped storage value, no single value stream dominated predicted plant contributions in various energy futures. Modeling confirmed that service value depends greatly on location and on competition with other available grid support resources. In this summary, ten different value streams related to hydropower are described. These fell into three categories: operational improvements, new technologies, and electricity market opportunities. Of these ten, the study was able to quantify a monetary value in six by applying both present-day and future scenarios for operating the electric grid. This study confirmed that hydropower resources across the United States contribute significantly to operation of the grid in terms of energy, capacity, and ancillary services. Many potential improvements to existing hydropower plants were found to be cost-effective. Pumped storage is the most likely form of large new hydro asset expansion in the U.S.; however, justifying investments in new pumped storage plants remains very challenging with current electricity market economics. Even over a wide range of possible energy futures up to 2020, no energy future was found to bring quantifiable revenues sufficient to cover the estimated costs of plant construction. Value streams not quantified in this study may provide a different cost-benefit balance and an economic tipping point for hydro. Future studies are essential in the quest to quantify the full potential value. Additional research should consider the value of services provided by advanced storage hydropower and pumped storage at smaller time steps for integration of variable renewable resources, and should include all possible value streams, such as capacity value and portfolio benefits (i.e., reducing cycling on traditional generation).

None

2012-12-31

173

Quantifying measurement uncertainty in full-scale compost piles using organic micro-pollutant concentrations.  

PubMed

Reductions in measurement uncertainty for organic micro-pollutant concentrations in full-scale compost piles, achieved using comprehensive sampling and by allowing equilibration time before sampling, were quantified. Results showed that both the application of a comprehensive sampling procedure (involving sample crushing) and allowing one week of equilibration time before sampling reduce measurement uncertainty by about 50%. Results further showed that for measurements carried out on samples collected using a comprehensive procedure, measurement uncertainty was associated exclusively with the analytical methods applied. Application of statistical analyses confirmed that these results were significant at the 95% confidence level. The overall implications of these results are (1) that it is possible to eliminate uncertainty associated with material inhomogeneity and (2) that in order to reduce uncertainty, the sampling procedure is very important early in the composting process but less so later in the process. PMID:24729348

Sadef, Yumna; Poulsen, Tjalfe G; Bester, Kai

2014-05-01

174

Quantifying Shape Changes and Tissue Deformation in Leaf Development  

PubMed Central

The analysis of biological shapes has applications in many areas of biology, and tools exist to quantify organ shape and detect shape differences between species or among variants. However, such measurements do not provide any information about the mechanisms of shape generation. Quantitative data on growth patterns may provide insights into morphogenetic processes, but since growth is a complex process occurring in four dimensions, growth patterns alone cannot intuitively be linked to shape outcomes. Here, we present computational tools to quantify tissue deformation and surface shape changes over the course of leaf development, applied to the first leaf of Arabidopsis (Arabidopsis thaliana). The results show that the overall leaf shape does not change notably during the developmental stages analyzed, yet there is a clear upward radial deformation of the leaf tissue in early time points. This deformation pattern may provide an explanation for how the Arabidopsis leaf maintains a relatively constant shape despite spatial heterogeneities in growth. These findings highlight the importance of quantifying tissue deformation when investigating the control of leaf shape. More generally, experimental mapping of deformation patterns may help us to better understand the link between growth and shape in organ development.

Rolland-Lagan, Anne-Gaelle; Remmler, Lauren; Girard-Bock, Camille

2014-01-01

175

Digital Optical Method to quantify the visual opacity of fugitive plumes  

NASA Astrophysics Data System (ADS)

Fugitive emissions of particulate matter (PM) raise public concerns due to their adverse impacts on human health and atmospheric visibility. Although the United States Environmental Protection Agency (USEPA) has not developed a standard method for quantifying the opacities of fugitive plumes, select states have developed human vision-based opacity methods for such applications. A digital photographic method, Digital Optical Method for fugitive plumes (DOMfugitive), is described herein for quantifying the opacities of fugitive plume emissions. Field campaigns were completed to evaluate this method by driving vehicles on unpaved roads to generate dust plumes. DOMfugitive was validated by performing simultaneous measurements using a co-located laser transmissometer. For 84% of the measurements, the individual absolute opacity difference values between the two methods were ≤15%. The average absolute opacity difference for all the measurements was 8.5%. The paired t-test showed no significant difference between the two methods at the 99% confidence level. Comparisons of wavelength-dependent opacities with grayscale opacities indicated that DOMfugitive was not sensitive to the wavelength in the visible spectrum evaluated during these field campaigns. These results encourage the development of a USEPA standard method for quantifying the opacities of fugitive PM plumes using digital photography, as an alternative to human-vision based approaches.

Du, Ke; Shi, Peng; Rood, Mark J.; Wang, Kai; Wang, Yang; Varma, Ravi M.

2013-10-01
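
The transmission idea at the heart of digital opacity methods can be sketched as below: opacity is one minus the apparent transmission of light from the background through the plume. The pixel intensities are hypothetical, and the published DOMfugitive algorithm includes camera calibration and contrasting-background corrections omitted here.

    import numpy as np

    def plume_opacity(i_plume, i_background):
        # Apparent transmission of the background through the plume,
        # estimated from mean pixel intensities (0-255 grayscale).
        transmission = np.clip(i_plume / i_background, 0.0, 1.0)
        return 1.0 - transmission

    # Hypothetical intensities for two plume pixels viewed against a
    # brighter background, with and without the plume present.
    print(plume_opacity(np.array([180.0, 150.0]), np.array([220.0, 210.0])))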

176

Experimental verification of bridge seismic damage states quantified by calibrating analytical models with empirical field data  

NASA Astrophysics Data System (ADS)

Bridges are one of the most vulnerable components of a highway transportation network system subjected to earthquake ground motions. Prediction of resilience and sustainability of bridge performance in a probabilistic manner provides valuable information for pre-event system upgrading and post-event functional recovery of the network. The current study integrates bridge seismic damageability information obtained through empirical, analytical and experimental procedures and quantifies threshold limits of bridge damage states consistent with the physical damage description given in HAZUS. Experimental data from a large-scale shaking table test are utilized for this purpose. This experiment was conducted at the University of Nevada, Reno, where a research team from the University of California, Irvine, participated. Observed experimental damage data are processed to identify and quantify bridge damage states in terms of rotational ductility at bridge column ends. In parallel, a mechanistic model for fragility curves is developed in such a way that the model can be calibrated against empirical fragility curves that have been constructed from damage data obtained during the 1994 Northridge earthquake. This calibration quantifies threshold values of bridge damage states and makes the analytical study consistent with damage data observed in past earthquakes. The mechanistic model is transportable and applicable to most types and sizes of bridges. Finally, calibrated damage state definitions are compared with that obtained using experimental findings. Comparison shows excellent consistency among results from analytical, empirical and experimental observations.

Banerjee, Swagata; Shinozuka, Masanobu

2008-12-01

177

Quantifiers more or less quantify online: ERP evidence for partial incremental interpretation  

PubMed Central

Event-related brain potentials were recorded during RSVP reading to test the hypothesis that quantifier expressions are incrementally interpreted fully and immediately. In sentences tapping general knowledge (Farmers grow crops/worms as their primary source of income), Experiment 1 found larger N400s for atypical (worms) than typical objects (crops). Experiment 2 crossed object typicality with non-logical subject-noun phrase quantifiers (most, few). Off-line plausibility ratings exhibited the crossover interaction predicted by full quantifier interpretation: Most farmers grow crops and Few farmers grow worms were rated more plausible than Most farmers grow worms and Few farmers grow crops. Object N400s, although modulated in the expected direction, did not reverse. Experiment 3 replicated these findings with adverbial quantifiers (Farmers often/rarely grow crops/worms). Interpretation of quantifier expressions thus is neither fully immediate nor fully delayed. Furthermore, object atypicality was associated with a frontal slow positivity in few-type/rarely quantifier contexts, suggesting systematic processing differences among quantifier types.

Urbach, Thomas P.; Kutas, Marta

2010-01-01

178

Mimas Showing False Colors #1  

NASA Technical Reports Server (NTRS)

False color images of Saturn's moon, Mimas, reveal variation in either the composition or texture across its surface.

During its approach to Mimas on Aug. 2, 2005, the Cassini spacecraft narrow-angle camera obtained multi-spectral views of the moon from a range of 228,000 kilometers (142,500 miles).

The image at the left is a narrow angle clear-filter image, which was separately processed to enhance the contrast in brightness and sharpness of visible features. The image at the right is a color composite of narrow-angle ultraviolet, green, infrared and clear filter images, which have been specially processed to accentuate subtle changes in the spectral properties of Mimas' surface materials. To create this view, three color images (ultraviolet, green and infrared) were combined into a single black and white picture that isolates and maps regional color differences. This 'color map' was then superimposed over the clear-filter image at the left.

The combination of color map and brightness image shows how the color differences across the Mimas surface materials are tied to geological features. Shades of blue and violet in the image at the right are used to identify surface materials that are bluer in color and have a weaker infrared brightness than average Mimas materials, which are represented by green.

Herschel crater, a 140-kilometer-wide (88-mile) impact feature with a prominent central peak, is visible in the upper right of each image. The unusual bluer materials are seen to broadly surround Herschel crater. However, the bluer material is not uniformly distributed in and around the crater. Instead, it appears to be concentrated on the outside of the crater and more to the west than to the north or south. The origin of the color differences is not yet understood. It may represent ejecta material that was excavated from inside Mimas when the Herschel impact occurred. The bluer color of these materials may be caused by subtle differences in the surface composition or the sizes of grains making up the icy soil.

The images were obtained when the Cassini spacecraft was above 25 degrees south, 134 degrees west latitude and longitude. The Sun-Mimas-spacecraft angle was 45 degrees and north is at the top.

The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo.

For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov . The Cassini imaging team homepage is at http://ciclops.org .

2005-01-01

179

Quantifying Relative Autonomy in Multiagent Interaction  

Microsoft Academic Search

In the paper we introduce a quantitative measure of autonomy in multiagent interactions. We quantify and analyse different types of agent autonomy: (a) decision autonomy versus action autonomy, (b) autonomy with respect to an agent’s user, (c) autonomy with respect to other agents and groups of agents, and (d) a measure of group autonomy that accounts for the degree with …

Sviatoslav Braynov; Henry Hexmoor

180

Quantifying Sea Turtle Mortality with PSATs  

Microsoft Academic Search

While most turtles that interact with longline gear are eventually released alive, they are often released with hooks remaining in their mouths, throats, gastrointestinal tracts or flippers (Aguilar et al., 1995; Oravetz, 1999). The ultimate effects of these hooks and the stress of capture are unknown. Rates of post-release mortality have not yet been adequately quantified, and available estimates are …

Yonat Swimmer; Richard Brill; Michael Musyl

2002-01-01

181

Quantifying Spectral Features of Type Ia Supernovae  

Microsoft Academic Search

We introduce a new technique to quantify highly structured spectra for which the definition of continua or spectral features in the observed flux spectra is difficult. The method employs wavelet transformation, which allows the decomposition of the observed spectra into different scales. A procedure is formulated to define the strength of spectral features so that the measured spectral indices are …

A. Wagers; L. Wang; S. Asztalos

2009-01-01

182

Quantifying below-ground nitrogen of legumes  

Microsoft Academic Search

Quantifying below-ground nitrogen (N) of legumes is fundamental to understanding their effects on soil mineral N fertility and on the N economies of following or companion crops in legume-based rotations. Methodologies based on 15N shoot-labelling with subsequent measurement of 15N in recovered plant parts (shoots and roots) and in the root-zone soil have proved promising. We report four glasshouse experiments

W Dil F. Khan; Mark B. Peoples; David F. Herridge

2002-01-01

183

Choosing appropriate techniques for quantifying groundwater recharge  

Microsoft Academic Search

Various techniques are available to quantify recharge; however, choosing appropriate techniques is often difficult. Important considerations in choosing a technique include space/time scales, range, and reliability of recharge estimates based on different techniques; other factors may limit the application of particular techniques. The goal of the recharge study is important because it may dictate the required space/time scales of …

Bridget R. Scanlon; Richard W. Healy; Peter G. Cook

2002-01-01

184

COMPLEXITY & APPROXIMABILITY OF QUANTIFIED & STOCHASTIC CONSTRAINT SATISFACTION PROBLEMS  

SciTech Connect

Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be arbitrary finite sets of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SATc(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S), etc., for many different such D, C, S, T. These versions/variants include decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems for a number of the problems above, including the problems MAX-Q-SAT(S) and MAX-S-SAT(S). The discovery of such dichotomy theorems, for unquantified formulas, has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97].

Hunt, H. B. (Harry B.); Marathe, M. V. (Madhav V.); Stearns, R. E. (Richard E.)

2001-01-01

185

Quantifying the sources of error in measurements of urine activity  

SciTech Connect

Accurate scintigraphic measurements of radioactivity in the bladder and voided urine specimens can be limited by scatter, attenuation, and variations in the urine volume over which a given dose is distributed. The purpose of this study was to quantify some of the errors that these problems can introduce. Transmission scans and 41 conjugate images of the bladder were sequentially acquired on a dual-headed camera over 24 hours in 6 subjects after the intravenous administration of 100-150 MBq (2.7-3.6 mCi) of a novel I-123 labeled benzamide. Renal excretion fractions were calculated by measuring the counts in conjugate images of 41 sequentially voided urine samples. A correction for scatter was estimated by comparing the count rates in images that were acquired with the photopeak centered on 159 keV and images that were made simultaneously with the photopeak centered on 126 keV. The decay and attenuation corrected, geometric mean activities were compared to images of the net dose injected. Checks of the results were performed by measuring the total volume of each voided urine specimen and determining the activity in a 20 ml aliquot of it with a dose calibrator. Modeling verified the experimental results, which showed that 34% of the counts were attenuated when the bladder had been expanded to a volume of 300 ml. Corrections for attenuation that were based solely on the transmission scans were limited by the volume of non-radioactive urine in the bladder before the activity was administered. The attenuation of activity in images of the voided urine samples was dependent on the geometry of the specimen container. The images of urine in standard, 300 ml laboratory specimen cups had 39 ± 5% fewer counts than images of the same samples laid out in 3 liter bedpans. Scatter through the carbon fiber table substantially increased the number of counts in the images by an average of 14%.
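
The conjugate-view counting described above typically reduces to the standard geometric-mean formula with a transmission-based attenuation correction. A minimal sketch under that assumption follows; the helper name and all numbers are hypothetical, and decay correction is omitted.

```python
import numpy as np

def conjugate_view_activity(counts_ant, counts_post, transmission, sensitivity):
    """Geometric-mean activity estimate from anterior/posterior conjugate images:
    A = sqrt(Ia * Ip / t) / c, where t = I/I0 is the body transmission factor
    measured with a transmission scan and c is the camera sensitivity
    (counts per MBq for the acquisition time used)."""
    return np.sqrt(counts_ant * counts_post / transmission) / sensitivity

# Hypothetical values: 1.2e5 and 1.0e5 counts, transmission 0.25, 2000 counts/MBq
print(conjugate_view_activity(1.2e5, 1.0e5, 0.25, 2000.0))  # ~110 MBq
```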

Mozley, P.D.; Kim, H.J.; McElgin, W. [ORISE, Oak Ridge, TN (United States)] [and others]

1994-05-01

186

Quantifying Nitrogen Loss From Flooded Hawaiian Taro Fields  

NASA Astrophysics Data System (ADS)

In 2004 a field fertilization experiment showed that approximately 80% of the fertilizer nitrogen (N) added to flooded Hawaiian taro (Colocasia esculenta) fields could not be accounted for using classic N balance calculations. To quantify N loss through denitrification and anaerobic ammonium oxidation (anammox) pathways in these taro systems we utilized a slurry-based isotope pairing technique (IPT). Measured nitrification rates and porewater N profiles were also used to model ammonium and nitrate fluxes through the top 10 cm of soil. Quantitative PCR of nitrogen cycling functional genes was used to correlate porewater N dynamics with potential microbial activity. Rates of denitrification calculated using porewater profiles were compared to those obtained using the slurry method. Potential denitrification rates of surficial sediments obtained with the slurry method were found to drastically overestimate the calculated in-situ rates. The largest discrepancies were present in fields greater than one month after initial fertilization, reflecting a microbial community poised to denitrify the initial N pulse. Potential surficial nitrification rates varied from 1.3% of the slurry-measured denitrification potential in a heavily fertilized site to 100% in an unfertilized site. Compared to the use of urea, fish bone meal fertilizer use resulted in decreased N loss through denitrification in the surface sediment, according to both porewater modeling and IPT measurements. In addition, sub-surface porewater profiles point to root-mediated coupled nitrification/denitrification as a potential N loss pathway that is not captured in surface-based incubations. Profile-based surface plus subsurface coupled nitrification/denitrification estimates were between 1.1 and 12.7 times denitrification estimates from the surface only. These results suggest that the use of a ‘classic’ isotope pairing technique that employs 15NO3- in fertilized agricultural systems can lead to a drastic overestimation of in-situ denitrification rates and that root-associated subsurface coupled nitrification/denitrification may be a major N loss pathway in these flooded agricultural systems.
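
For readers unfamiliar with the slurry-based IPT, the technique rests on two classic rate equations relating the measured production of 29N2 and 30N2 after 15NO3- addition to denitrification of the labeled and ambient nitrate pools. A minimal sketch, with hypothetical rate values:

```python
def ipt_denitrification(p29, p30):
    """Isotope pairing technique: given production rates of 29N2 (14N15N) and
    30N2 (15N15N), D15 = p29 + 2*p30 is denitrification of the added 15NO3-,
    and D14 = D15 * p29 / (2*p30) is denitrification of ambient 14NO3-."""
    d15 = p29 + 2.0 * p30
    d14 = d15 * p29 / (2.0 * p30)
    return d14, d15

# Hypothetical production rates in umol N m-2 h-1
print(ipt_denitrification(p29=6.0, p30=2.0))  # (D14, D15) = (15.0, 10.0)
```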

Deenik, J. L.; Penton, C. R.; Bruland, G. L.; Popp, B. N.; Engstrom, P.; Mueller, J. A.; Tiedje, J.

2010-12-01

187

Quantifying variances in comparative RNA secondary structure prediction  

PubMed Central

Background: With the advancement of next-generation sequencing and transcriptomics technologies, regulatory effects involving RNA, in particular RNA structural changes, are being detected. These results often rely on RNA secondary structure predictions. However, current approaches to RNA secondary structure modelling produce predictions with a high variance in predictive accuracy, and we have little quantifiable knowledge about the reasons for these variances. Results: In this paper we explore a number of factors which can contribute to poor RNA secondary structure prediction quality. We establish a quantified relationship between alignment quality and loss of accuracy. Furthermore, we define two new measures to quantify uncertainty in alignment-based structure predictions. One of the measures improves on the “reliability score” reported by PPfold, and considers alignment uncertainty as well as base-pair probabilities. The other measure considers the information entropy for SCFGs over a space of input alignments. Conclusions: Our predictive accuracy improves on the PPfold reliability score. We can successfully characterize many of the underlying reasons for and variances in poor prediction. However, there is still variability unaccounted for, which we therefore suggest comes from the RNA secondary structure predictive model itself.

2013-01-01

188

A nondestructive method for quantifying the wood drying quality used to determine the intra and inter species variability  

SciTech Connect

A nondestructive method for surface strain measurements is proposed to quantify the wood drying quality during convective drying. This method uses a visible laser scan micrometer intercepting needles maintained vertical at the board surface with a special device, and then gives shrinkage values at several surface points. The analysis of the results is related to heat and mass transfer phenomena. Experiments were made on softwoods and hardwoods, either with superheated steam or with moist air. The results show that the stages of shrinkage agree with the classical periods of transfer. Moreover, shrinkage results from a compromise between the global shrinkage of the board section and local effects linked to the drying conditions. The authors define two criteria for the drying quality. One compares experimental average shrinkage and free shrinkage, and the other quantifies the differences in shrinkage values between several surface points. Both criteria have to be considered together in order to analyze the species behavior in terms of checking during the second drying period.

Canteri, L.; Martin, M. [INPL-UHP, Vandoeuvre-les-Nancy (France). Lab. d'Energetique et de Mecanique Theorique et Appliquee; Perre, P. [Ecole Nationale du Genie Rural, Nancy (France)

1997-05-01

189

Quantifying Error in the CMORPH Satellite Precipitation Estimates  

NASA Astrophysics Data System (ADS)

As part of the collaboration between the China Meteorological Administration (CMA) National Meteorological Information Centre (NMIC) and the NOAA Climate Prediction Center (CPC), a new system is being developed to construct an hourly precipitation analysis on a 0.25° lat/lon grid over China by merging information derived from gauge observations and CMORPH satellite precipitation estimates. Foundational to the development of the gauge-satellite merging algorithm is the characterization of the systematic and random errors inherent in the CMORPH satellite precipitation estimates. In this study, we quantify the CMORPH error structures through comparisons against a gauge-based analysis of hourly precipitation derived from station reports from a dense network over China. First, the systematic error (bias) of the CMORPH satellite estimates is examined against co-located hourly gauge precipitation analysis over 0.25° lat/lon grid boxes with at least one reporting station. The CMORPH exhibits regionally varying biases, with overestimates over eastern China, and seasonal changes, with overestimates during warm seasons and underestimates during cold seasons. The CMORPH bias is also range-dependent: in general, CMORPH tends to overestimate weak rainfall and underestimate strong rainfall. The bias, when expressed as the ratio between the gauge observations and the CMORPH satellite estimates, increases with rainfall intensity but tends to saturate at a certain level for high rainfall. Based on the above results, a prototype algorithm is developed to remove the CMORPH bias by matching the PDF of the original CMORPH estimates against that of the gauge analysis, using data pairs co-located over grid boxes with at least one reporting gauge over a 30-day period ending at the target date. The spatial domain for collecting the co-located data pairs is expanded so that at least 5000 pairs of data are available to ensure statistical stability. The bias-corrected CMORPH is then compared against the gauge data to quantify the remaining random error. The results showed that the random error in the bias-corrected CMORPH is proportional to the smoothness of the target precipitation fields, expressed as the standard deviation of the CMORPH fields, and to the size of the spatial domain over which the data pairs used to construct the PDFs are collected. An empirical equation is then defined to compute the random error in the bias-corrected CMORPH from the CMORPH spatial standard deviation and the size of the data collection domain. An algorithm is being developed to combine the gauge analysis with the bias-corrected CMORPH through the optimal interpolation (OI) technique using the error statistics defined in this study. In this process, the bias-corrected CMORPH is used as the first guess, while the gauge data are used as observations to modify the first guess over regions with gauge network coverage. Detailed results will be reported at the conference.
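
The PDF-matching step described above is, in essence, empirical quantile mapping between co-located satellite and gauge samples. Here is a minimal sketch of that idea, with synthetic data standing in for the co-located CMORPH/gauge pairs; the function name and distributions are hypothetical.

```python
import numpy as np

def pdf_match(satellite, gauge, values):
    """Bias-correct satellite rain rates by matching empirical PDFs: each value
    is mapped to the gauge rate at the same cumulative probability, using
    co-located satellite/gauge data pairs from the trailing collection window."""
    sat_sorted = np.sort(satellite)
    probs = np.searchsorted(sat_sorted, values, side="right") / sat_sorted.size
    return np.quantile(gauge, probs)

rng = np.random.default_rng(0)
sat = rng.gamma(0.8, 2.0, 5000)   # synthetic co-located CMORPH rates (mm/h)
gau = rng.gamma(1.0, 1.5, 5000)   # synthetic co-located gauge analysis (mm/h)
print(pdf_match(sat, gau, np.array([0.5, 5.0, 20.0])))  # corrected rates
```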

Xu, B.; Yoo, S.; Xie, P.

2010-12-01

190

Quantifying nanoscale order in amorphous materials via fluctuation electron microscopy  

NASA Astrophysics Data System (ADS)

Fluctuation electron microscopy (FEM) has been used to study the nanoscale order in various amorphous materials. The method is explicitly sensitive to 3- and 4-body atomic correlation functions in amorphous materials; this is sufficient to establish the existence of structural order on the nanoscale, even when the radial distribution function extracted from diffraction data appears entirely amorphous. The variable resolution form of the technique can reveal the characteristic decay length over which topological order persists in amorphous materials. By changing the resolution, a characteristic length is obtained without the need for a priori knowledge of the structure. However, it remains a formidable challenge to invert the FEM data into a description of the structure that is free from error due to experimental noise and quantitative in both size and volume fraction. Here, we quantify the FEM method by (i) forward simulating the FEM data from a family of high quality atomistic a-Si models, (ii) reexamining the statistical origins of contributions to the variance due to artifacts, and (iii) comparing the measured experimental data with model simulations. From simulations at a fixed resolution, we show that the variance V(k) is a complex function of the size and volume fraction of the ordered regions present in the amorphous matrix. However, the ratio of the variance peaks as a function of diffraction vector k yields the size of the ordered regions, and the magnitude of the variance yields a quantitative measure of the volume fraction. From comparison of the measured characteristic length with model simulations, we are able to estimate the size and volume fraction of ordered regions. The use of the STEM mode of FEM offers significant advantages in identifying artifacts in the variances. Artifacts, caused by non-idealities in the sample unrelated to nanoscale order, can easily dominate the measured variance, producing erroneous results. We show that reexamination and correction of the contributions of artifacts to the variance is necessary to obtain an accurate and quantitative description of the structure of amorphous materials. Using variable resolution FEM we are able to extract a characteristic length of ordered regions in two different amorphous silicon samples. Having eliminated the noise contribution to the variance, we show here the first demonstration of a consistent characteristic length at all values of k. The experimental results presented here are the first to be consistent with both FEM theory and simulations.
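
The normalized variance at the heart of FEM has a compact definition worth stating explicitly: V(k) = <I(k)^2> / <I(k)>^2 - 1, with averages taken over probe positions. A minimal sketch, in which the array shapes and data are hypothetical:

```python
import numpy as np

def fem_variance(intensities):
    """Normalized FEM variance V(k) = <I^2>/<I>^2 - 1 at each scattering
    vector k, from dark-field intensities I(r, k) measured at many probe
    positions r. `intensities` has shape (n_positions, n_k_bins)."""
    mean_i = intensities.mean(axis=0)
    return (intensities ** 2).mean(axis=0) / mean_i ** 2 - 1.0

# Synthetic speckle intensities: 400 probe positions, 64 k bins
i_rk = np.random.default_rng(1).gamma(2.0, 1.0, size=(400, 64))
print(fem_variance(i_rk)[:4])  # ~0.5 for this synthetic gamma model
```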

Bogle, Stephanie Nicole

191

Quantifying uncertainty in state and parameter estimation  

NASA Astrophysics Data System (ADS)

Observability of state variables and parameters of a dynamical system from an observed time series is analyzed and quantified by means of the Jacobian matrix of the delay coordinates map. For each state variable and each parameter to be estimated, a measure of uncertainty is introduced depending on the current state and parameter values, which allows us to identify regions in state and parameter space where the specific unknown quantity can(not) be estimated from a given time series. The method is demonstrated using the Ikeda map and the Hindmarsh-Rose model.
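
One way to operationalize this kind of uncertainty measure is through the singular structure of the Jacobian of the delay-coordinates map: unknowns whose directions are weakly expressed in the observations correspond to large rows of the pseudoinverse. The sketch below is a generic stand-in for the measure defined in the paper, with a random Jacobian in place of one computed from a model such as the Ikeda map:

```python
import numpy as np

def estimation_uncertainty(jacobian):
    """Given the Jacobian of the delay-coordinates map with respect to the
    stacked initial states and parameters (rows: observed delay samples,
    columns: unknowns), return a per-unknown sensitivity: the row norms of
    the pseudoinverse. Larger values mean the unknown is harder to estimate
    from the observed time series at the current state/parameter values."""
    return np.linalg.norm(np.linalg.pinv(jacobian), axis=1)

J = np.random.default_rng(2).normal(size=(50, 4))  # 50 samples, 4 unknowns
print(estimation_uncertainty(J))
```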

Parlitz, Ulrich; Schumann-Bischoff, Jan; Luther, Stefan

2014-05-01

192

Application of stereo laser tracking methods for quantifying flight dynamics  

NASA Astrophysics Data System (ADS)

Conventional tracking systems measure time-space-position data and collect imagery to quantify the flight dynamics of tracked targets. One of the major obstacles that severely impacts the accuracy and fidelity of the target characterization is atmospheric-turbulence-induced distortion of the tracking laser beam at the target surface and degradation of the imagery. Tracking occurs in a continuously changing atmosphere, resulting in rapid variations in the tracking laser beam and distorted imagery. These atmospheric effects, in combination with other sources of degradation, such as measurement system motions (e.g. vibration/jitter), defocus blur, and spatially varying noise, severely limit the usefulness and accuracy of many tracking and analysis methods. This paper discusses the viability of employing stereo image correlation methods for high-speed moving target characterization through atmospheric turbulence. Stereo imaging methods have proven effective in the laboratory for quantifying temporally and spatially resolved 3D motions across a target surface. This technique acquires stereo views (two or more) of a test article that has a random speckle (dot) pattern painted on the surface to provide trackable features across the entire target surface. The stereo views are reconciled via coordinate transformations and correlation of the transformed images. The principal limitations of this method have been the need for clean imagery and fixed camera positions and orientations. However, recent field tests have demonstrated that these limitations can be overcome to provide a new method for quantifying flight dynamics with stereo laser tracking and multi-video imagery in the presence of atmospheric turbulence.

Schreier, Hubert W.; Miller, Timothy J.; Valley, Michael T.; Brown, Timothy L.

2007-10-01

193

Applicability of digital photogrammetry technique to quantify rock fracture surfaces  

NASA Astrophysics Data System (ADS)

Several automatic-recording mechanical profilographs have been used to perform fracture roughness measurements. Previous studies indicated that, for accurate quantification of roughness, fracture roughness measurements should be obtained at a much higher resolution than is possible using mechanical profilographs. With laser profilometers, roughness can be measured at very small step spacings to a high degree of precision. The laser profilometer, however, is limited to laboratory measurements, and only small-scale roughness is represented. Waviness, or large-scale roughness, can be considered using a digital photogrammetry technique through in situ measurements. The applicability of the digital photogrammetry technique for roughness estimation of fracture surfaces is addressed in this study. Measurements of fracture surfaces were performed for three rock fracture specimens using the laser profilometer and the digital photogrammetry technique. The conventional roughness parameters, such as Z2, SDSL, SDH and Rp, as well as fractal roughness parameters, were estimated for the roughness data obtained from each method. The results showed considerable discrepancies between the conventional roughness parameters estimated from the laser profilometer and those from the digital photogrammetry technique. On the other hand, the fractal roughness parameters estimated with the two methods were found to be close to each other. It is important to note that the fractal roughness parameters obtained from the digital photogrammetry technique were lower than those based on the laser profilometer, even though a high degree of correlation exists between them. To perform an accurate estimation of fracture roughness, values obtained from the digital photogrammetry technique have to be corrected. The research conducted in this study has shown that the digital photogrammetry technique has a strong capability to quantify the roughness of rock fractures accurately. Acknowledgements. This work was supported by the 2011 Energy Efficiency and Resources Program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant.

Seo, H. K.; Noh, Y. H.; Um, J. G.; Choi, Y. S.; Park, M. H.

2012-04-01

194

Quantifying occupant energy behavior using pattern analysis techniques  

SciTech Connect

Occupant energy behavior is widely agreed to have a major influence over the amount of energy used in buildings. Few attempts have been made to quantify this energy behavior, even though vast amounts of end-use data containing useful information lie fallow. This paper describes analysis techniques developed to extract behavioral information from collected residential end-use data. Analysis of the averages, standard deviations and frequency distributions of hourly data can yield important behavioral information. Pattern analysis can be used to group similar daily energy patterns together for a particular end-use or set of end-uses. The resulting pattern groups can then be examined statistically using multinomial logit modeling to find their likelihood of occurrence for a given set of daily conditions. These techniques were tested successfully using end-use data for families living in four heavily instrumented residences. Energy behaviors were analyzed for individual families during each heating season of the study. These behaviors (indoor temperature, ventilation load, water heating, large appliance energy, and miscellaneous outlet energy) capture how occupants directly control the residence. The pattern analysis and multinomial logit model were able to match the occupant behavior correctly 40 to 70% of the time. The steadier behaviors of indoor temperature and ventilation were matched most successfully. Simple changes to capture more detail during pattern analysis could increase accuracy for the more variable behavior patterns. The methods developed here show promise for extracting meaningful and useful information about occupant energy behavior from the stores of existing end-use data.
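
A compact way to reproduce the two-stage analysis described above is to cluster daily load profiles and then fit a multinomial logit from daily conditions to cluster membership. This sketch uses synthetic data and scikit-learn stand-ins rather than the authors' original implementation:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
daily_profiles = rng.gamma(2.0, 0.5, size=(365, 24))        # hourly kWh per day
outdoor_temp = rng.normal(10.0, 8.0, size=(365, 1))         # daily mean temp
weekend = (np.arange(365) % 7 >= 5).astype(float).reshape(-1, 1)

# Stage 1: group similar daily energy patterns
patterns = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(daily_profiles)

# Stage 2: multinomial logit for the likelihood of each pattern group
# occurring under a given set of daily conditions
conditions = np.hstack([outdoor_temp, weekend])
logit = LogisticRegression(max_iter=1000).fit(conditions, patterns)
print(logit.predict_proba([[0.0, 1.0]]))  # pattern probabilities, cold weekend day
```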

Emery, A. [Univ. of Washington, Seattle, WA (United States). Dept. of Mechanical Engineering; Gartland, L. [Lawrence Berkeley National Lab., CA (United States). Energy and Environment Div.

1996-08-01

195

Quantifying and scaling airplane performance in turbulence  

NASA Astrophysics Data System (ADS)

This dissertation studies the effects of turbulent wind on airplane airspeed and normal load factor, determining how these effects scale with airplane size and developing envelopes to account for them. The results have applications in the design and control of aircraft, especially small-scale aircraft, for robustness with respect to turbulence. Using linearized airplane dynamics and the Dryden gust model, this dissertation presents analytical and numerical scaling laws for airplane performance in gusts, safety margins that guarantee, with specified probability, that steady flight can be maintained when stochastic wind gusts act upon an airplane, and envelopes to visualize these safety margins. Presented here for the first time are scaling laws for the phugoid natural frequency, phugoid damping ratio, airspeed variance in turbulence, and flight path angle variance in turbulence. The results show that small aircraft are more susceptible to high frequency gusts, that the phugoid damping ratio does not depend directly on airplane size, that the airspeed and flight path angle variances can be parameterized by the ratio of the phugoid natural frequency to a characteristic turbulence frequency, and that the coefficient of variation of the airspeed decreases with increasing airplane size. Accompanying numerical examples validate the results using eleven different airplane models, focusing on NASA's hypothetical Boeing 757 analog, the Generic Transport Model, and its operational 5.5% scale model, the NASA T2. Also presented here for the first time are stationary flight, where the flight state is a stationary random process, and the stationary flight envelope, an adjusted steady flight envelope to visualize safety margins for stationary flight. The dissertation shows that driving the linearized airplane equations of motion with stationary, stochastic gusts results in stationary flight. It also shows how feedback control can enlarge the stationary flight envelope by alleviating gust loads, though the enlargement is significantly limited by control surface saturation. The dissertation concludes with a numerical example of a Navion general aviation aircraft performing various steady flight maneuvers in moderate turbulence, showing substantial reductions in the steady flight envelope for some combinations of maneuvers, turbulence, and safety margins.

Richardson, Johnhenri R.

196

Precise thermal NDE for quantifying structural damage  

SciTech Connect

The authors demonstrated a fast, wide-area, precise thermal NDE imaging system to quantify aircraft corrosion damage, such as percent metal loss, above a threshold of 5% with 3% overall uncertainties. The DBIR precise thermal imaging and detection method has been used successfully to characterize defect types, and their respective depths, in aircraft skins and in multi-layered composite materials used for wing patches, doublers and stiffeners. This precise thermal NDE inspection tool has long-term potential benefits for evaluating the structural integrity of airframes, pipelines and waste containers. The authors proved the feasibility of the DBIR thermal NDE imaging system for inspecting concrete and asphalt-concrete bridge decks. As a logical extension of the successful feasibility study, they plan to inspect a concrete bridge deck from a moving vehicle to quantify the volumetric damage within the deck and the percentage of the deck that has subsurface delaminations. Potential near-term benefits are in-service monitoring from a moving vehicle to inspect the structural integrity of the bridge deck. This would help prioritize the repair schedule for a reported 200,000 bridge decks in the US which need substantive repairs. Potential long-term benefits are affordable and reliable rehabilitation of bridge decks.

Del Grande, N.K.; Durbin, P.F.

1995-09-18

197

New measurements quantify atmospheric greenhouse effect  

NASA Astrophysics Data System (ADS)

In spite of a large body of existing measurements of incoming short-wave solar radiation and outgoing long-wave terrestrial radiation at the surface of the Earth and, more recently, in the upper atmosphere, there are few observations documenting how radiation profiles change through the atmosphere—information that is necessary to fully quantify the greenhouse effect of Earth's atmosphere. Through the use of existing technology but employing improvements in observational techniques it may now be possible not only to quantify but also to understand how different components of the atmosphere (e.g., concentration of gases, cloud cover, moisture, and aerosols) contribute to the greenhouse effect. Using weather balloons equipped with radiosondes, Philipona et al. continuously measured radiation fluxes from the surface of Earth up to altitudes of 35 kilometers in the upper stratosphere. Combining data from flights conducted during both day and night with continuous 24-hour measurements made at the surface of the Earth, the researchers created radiation profiles of all four components necessary to fully capture the radiation budget of Earth, namely, the upward and downward short-wave and long-wave radiation as a function of altitude.

Bhattacharya, Atreyee

2012-10-01

198

Quantifying Position-Dependent Codon Usage Bias  

PubMed Central

Although the mapping of codon to amino acid is conserved across nearly all species, the frequency at which synonymous codons are used varies both between organisms and between genes from the same organism. This variation affects diverse cellular processes including protein expression, regulation, and folding. Here, we mathematically model an additional layer of complexity and show that individual codon usage biases follow a position-dependent exponential decay model with unique parameter fits for each codon. We use this methodology to perform an in-depth analysis on codon usage bias in the model organism Escherichia coli. Our methodology shows that lowly and highly expressed genes are more similar in their codon usage patterns in the 5′ gene regions, but that these preferences diverge at distal sites, resulting in greater positional dependency (pD, which we mathematically define later) for highly expressed genes. We show that position-dependent codon usage bias is partially explained by the structural requirements of mRNAs, which result in increased usage of A/T-rich codons shortly after the gene start. However, we also show that the pD of 4- and 6-fold degenerate codons is partially related to the gene copy number of cognate tRNAs, supporting existing hypotheses that posit benefits to a region of slow translation in the beginning of coding sequences. Lastly, we demonstrate that viewing codon usage bias through a position-dependent framework has practical utility by improving accuracy of gene expression prediction when incorporating positional dependencies into the Codon Adaptation Index model.
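
The position-dependent model named in the abstract, an exponential decay with codon-specific parameters, is easy to state and fit. A minimal sketch with synthetic frequencies follows; the parameter values are illustrative, not fits from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def codon_decay(position, a, tau, c):
    """Position-dependent usage of one codon: an exponential decay from the
    start of the coding sequence toward a stationary genome-wide level c."""
    return a * np.exp(-position / tau) + c

positions = np.arange(1, 201)
rng = np.random.default_rng(4)
observed = 0.08 * np.exp(-positions / 25.0) + 0.02 + rng.normal(0, 0.002, 200)
params, _ = curve_fit(codon_decay, positions, observed, p0=(0.05, 20.0, 0.02))
print(params)  # fitted (a, tau, c); a positional-dependency score would derive from these
```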

Hockenberry, Adam J.; Sirer, M. Irmak; Amaral, Luis A. Nunes; Jewett, Michael C.

2014-01-01

199

Effect of soil structure on the growth of bacteria in soil quantified using CARD-FISH  

NASA Astrophysics Data System (ADS)

It has been reported that compaction of soil due to the use of heavy machinery has resulted in reduced crop yields. Compaction affects the physical properties of soil such as bulk density, soil strength and porosity. This causes an alteration in the soil structure which limits the mobility of nutrients, water and air infiltration and root penetration in soil. Several studies have been conducted to explore the effect of soil compaction on plant growth and development. However, there is scant information on the effect of soil compaction on the microbial community and its activities in soil. Understanding the effect of soil compaction on the microbial community is essential as microbial activities are very sensitive to abrupt environmental changes in soil. Therefore, the aim of this work was to investigate the effect of soil structure on the growth of bacteria in soil. The bulk density of soil was used as a soil physical parameter to quantify the effect of soil compaction. To detect and quantify bacteria in soil, the method of catalyzed reporter deposition-fluorescence in situ hybridization (CARD-FISH) was used. This technique results in high-intensity fluorescent signals, which makes it easy to quantify bacteria against the high levels of autofluorescence emitted by soil particles and organic matter. In this study, the bacterial strains Pseudomonas fluorescens SBW25 and Bacillus subtilis DSM10 were used. Soils of aggregate size 1-2 mm were packed at five different bulk densities in polyethylene rings (4.25 cm3). The soil rings were sampled on four different days. Results showed that the total number of bacterial counts was reduced significantly (P

Juyal, Archana; Eickhorst, Thilo; Falconer, Ruth; Otten, Wilfred

2014-05-01

200

Quantifying litter decomposition losses to dissolved organic carbon and respiration  

NASA Astrophysics Data System (ADS)

As litter decomposes, its carbon is lost from the litter layer, largely through microbial processing. However, much of the carbon lost from the surface litter layer during decomposition is not truly lost from the ecosystem but is transferred to the soil through fragmentation and leaching of dissolved organic carbon (DOC). This DOC in the soil acts as a stock of soil organic matter (SOM) to be utilized by soil microbes, stabilized in the soil, or leached further through the soil profile. The total amount of C that leaches from litter to the soil, as well as its chemical composition, has important implications for the residence time of decomposing litter C in the soil and is not currently well parameterized in models. In this study we aim to quantify the proportional relationship between CO2 efflux and DOC partitioning during decomposition of fresh leaf litter with distinct structural and chemical composition. The results from this one-year laboratory incubation show a clear relationship between the lignin to cellulose ratios of litter and DOC to CO2 partitioning during four distinct phases of litter decomposition. For example, bluestem grass litter with a low lignin to cellulose ratio loses almost 50% of its C as DOC, whereas pine needles with a high lignin to cellulose ratio lose only 10% of their C as DOC, indicating a potential ligno-cellulose complexation effect on carbon use efficiency during litter decomposition. DOC production also decreases with time during decomposition, correlating with increasing lignin to cellulose ratios as decomposition progresses. Initial DOC leaching can be predicted based on the amount of labile fraction in each litter type. Field data using stable-isotope-labeled bluestem grass show that about 18% of the surface litter C lost in 18 months of decomposition is stored in the soil, and that over 50% of this is recovered in mineral-associated heavy SOM fractions, not as litter fragments, confirming the relative importance of the DOC flux of C from the litter layer to the soil for stable SOM formation. These results are being used to parameterize a new litter decomposition sub-model to more accurately represent the movement of decomposing surface litter C to CO2 and the mineral soil. This surface litter sub-model can be used to strengthen our understanding of the litter C and microbial processes that feed into larger ecosystem models such as Daycent.

Soong, J.; Parton, W. J.; Calderon, F. J.; Guilbert, K.; Cotrufo, M.

2013-12-01

201

Crisis of Japanese vascular flora shown by quantifying extinction risks for 1618 taxa.  

PubMed

Although many people have expressed alarm that we are witnessing a mass extinction, few projections have been quantified, owing to limited availability of time-series data on threatened organisms, especially plants. To quantify the risk of extinction, we need to monitor changes in population size over time for as many species as possible. Here, we present the world's first quantitative projection of plant species loss at a national level, with stochastic simulations based on the results of population censuses of 1618 threatened plant taxa in 3574 map cells of ca. 100 km2. More than 500 lay botanists helped monitor those taxa in 1994-1995 and in 2003-2004. We projected that between 370 and 561 vascular plant taxa will go extinct in Japan during the next century if past trends of population decline continue. This extinction rate is approximately two to three times the global rate. Using time-series data, we show that existing national protected areas (PAs) covering ca. 7% of Japan will not adequately prevent population declines: even core PAs can protect at best <60% of local populations from decline. Thus, the Aichi Biodiversity Target to expand PAs to 17% of land (and inland water) areas, as committed to by many national governments, is not enough: only 29.2% of currently threatened species will become non-threatened under the assumption that probability of protection success by PAs is 0.5, which our assessment shows is realistic. In countries where volunteers can be organized to monitor threatened taxa, censuses using our method should be able to quantify how fast we are losing species and to assess how effective current conservation measures such as PAs are in preventing species extinction. PMID:24922311

Kadoya, Taku; Takenaka, Akio; Ishihama, Fumiko; Fujita, Taku; Ogawa, Makoto; Katsuyama, Teruo; Kadono, Yasuro; Kawakubo, Nobumitsu; Serizawa, Shunsuke; Takahashi, Hideki; Takamiya, Masayuki; Fujii, Shinji; Matsuda, Hiroyuki; Muneda, Kazuo; Yokota, Masatsugu; Yonekura, Koji; Yahara, Tetsukazu

2014-01-01

202

Crisis of Japanese Vascular Flora Shown By Quantifying Extinction Risks for 1618 Taxa  

PubMed Central

Although many people have expressed alarm that we are witnessing a mass extinction, few projections have been quantified, owing to limited availability of time-series data on threatened organisms, especially plants. To quantify the risk of extinction, we need to monitor changes in population size over time for as many species as possible. Here, we present the world's first quantitative projection of plant species loss at a national level, with stochastic simulations based on the results of population censuses of 1618 threatened plant taxa in 3574 map cells of ca. 100 km2. More than 500 lay botanists helped monitor those taxa in 1994–1995 and in 2003–2004. We projected that between 370 and 561 vascular plant taxa will go extinct in Japan during the next century if past trends of population decline continue. This extinction rate is approximately two to three times the global rate. Using time-series data, we show that existing national protected areas (PAs) covering ca. 7% of Japan will not adequately prevent population declines: even core PAs can protect at best <60% of local populations from decline. Thus, the Aichi Biodiversity Target to expand PAs to 17% of land (and inland water) areas, as committed to by many national governments, is not enough: only 29.2% of currently threatened species will become non-threatened under the assumption that probability of protection success by PAs is 0.5, which our assessment shows is realistic. In countries where volunteers can be organized to monitor threatened taxa, censuses using our method should be able to quantify how fast we are losing species and to assess how effective current conservation measures such as PAs are in preventing species extinction.

Kadoya, Taku; Takenaka, Akio; Ishihama, Fumiko; Fujita, Taku; Ogawa, Makoto; Katsuyama, Teruo; Kadono, Yasuro; Kawakubo, Nobumitsu; Serizawa, Shunsuke; Takahashi, Hideki; Takamiya, Masayuki; Fujii, Shinji; Matsuda, Hiroyuki; Muneda, Kazuo; Yokota, Masatsugu; Yonekura, Koji; Yahara, Tetsukazu

2014-01-01

203

Quantifying Power Grid Risk from Geomagnetic Storms  

NASA Astrophysics Data System (ADS)

We are creating a statistical model of the geophysical environment that can be used to quantify the geomagnetic storm hazard to power grid infrastructure. Our model is developed using a database of surface electric fields for the continental United States during a set of historical geomagnetic storms. These electric fields are derived from the SUPERMAG compilation of worldwide magnetometer data and surface impedances from the United States Geological Survey. These electric field data can be combined with a power grid model to determine GICs per node and reactive MVARs at each minute during a storm. Using publicly available substation locations, we derive location-specific relative risk maps by combining magnetic latitude and ground conductivity. We also estimate the surface electric fields during the August 1972 geomagnetic storm that caused a telephone cable outage across the middle of the United States. This event produced the largest surface electric fields in the continental U.S. in at least the past 40 years.
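
Once a storm-time surface electric field is in hand, the per-node GIC calculation reduces, in the standard engineering treatment, to a linear response in the two horizontal field components. A minimal sketch; the network coefficients below are placeholders, not values for any real substation.

```python
def gic_per_node(ex, ey, a, b):
    """Geomagnetically induced current at a substation node: GIC = a*Ex + b*Ey,
    where Ex, Ey are the northward/eastward surface electric fields (V/km) and
    a, b (A km/V) encode the surrounding network topology and resistances."""
    return a * ex + b * ey

# Hypothetical storm-time fields and coefficients
print(gic_per_node(ex=2.5, ey=-1.0, a=60.0, b=-45.0))  # ~195 A, illustrative
```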

Homeier, N.; Wei, L. H.; Gannon, J. L.

2012-12-01

204

World Health Organization: Quantifying environmental health impacts  

NSDL National Science Digital Library

The World Health Organization works in a number of public health areas, and their work on quantifying environmental health impacts has been receiving praise from many quarters. This site provides materials on their work in this area and visitors with a penchant for international health relief efforts and policy analysis will find the site invaluable. Along the left-hand side of the site, visitors will find topical sections that include "Methods", "Assessment at national level", "Global estimates", and "Publications". In the "Methods" area, visitors will learn about how the World Health Organization's methodology for studying environmental health impacts has been developed and they can also read a detailed report on the subject. The "Global Estimates" area is worth a look as well, and users can look at their complete report, "Preventing Disease Through Healthy Environments: Towards An Estimate Of the Global Burden Of Disease".

205

How to quantify conduits in wood?  

PubMed Central

Vessels and tracheids represent the most important xylem cells with respect to long distance water transport in plants. Wood anatomical studies frequently provide several quantitative details of these cells, such as vessel diameter, vessel density, vessel element length, and tracheid length, while important information on the three dimensional structure of the hydraulic network is not considered. This paper aims to provide an overview of various techniques, although there is no standard protocol to quantify conduits due to high anatomical variation and a wide range of techniques available. Despite recent progress in image analysis programs and automated methods for measuring cell dimensions, density, and spatial distribution, various characters remain time-consuming and tedious. Quantification of vessels and tracheids is not only important to better understand functional adaptations of tracheary elements to environment parameters, but will also be essential for linking wood anatomy with other fields such as wood development, xylem physiology, palaeobotany, and dendrochronology.

Scholz, Alexander; Klepsch, Matthias; Karimi, Zohreh; Jansen, Steven

2013-01-01

206

Quantifying the statistical complexity of low-frequency fluctuations in semiconductor lasers with optical feedback  

NASA Astrophysics Data System (ADS)

Low-frequency fluctuations (LFFs) represent a dynamical instability that occurs in semiconductor lasers when they are operated near the lasing threshold and subject to moderate optical feedback. LFFs consist of sudden power dropouts followed by gradual, stepwise recoveries. We analyze experimental time series of intensity dropouts and quantify the complexity of the underlying dynamics employing two tools from information theory, namely, Shannon’s entropy and the Martín, Plastino, and Rosso statistical complexity measure. These measures are computed using a method based on ordinal patterns, by which the relative length and ordering of consecutive interdropout intervals (i.e., the time intervals between consecutive intensity dropouts) are analyzed, disregarding the precise timing of the dropouts and the absolute durations of the interdropout intervals. We show that this methodology is suitable for quantifying subtle characteristics of the LFFs, and in particular the transition to fully developed chaos that takes place when the laser’s pump current is increased. Our method shows that the statistical complexity of the laser does not increase continuously with the pump current, but levels off before reaching the coherence collapse regime. This behavior coincides with that of the first- and second-order correlations of the interdropout intervals, suggesting that these correlations, and not the chaotic behavior, are what determine the level of complexity of the laser’s dynamics. These results hold for two different dynamical regimes, namely, sustained LFFs and coexistence between LFFs and steady-state emission.
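
The ordinal-pattern machinery used here (Bandt-Pompe symbolization, Shannon entropy, and the statistical complexity built on it) can be sketched compactly. The following is a generic implementation of the first two ingredients, applied to a synthetic interval series rather than the experimental dropout data:

```python
import numpy as np
from itertools import permutations
from math import factorial, log

def ordinal_probs(series, d=3):
    """Bandt-Pompe ordinal pattern probabilities: slide a window of length d
    over the series (here, interdropout intervals) and count the rank ordering
    of the values, ignoring absolute durations."""
    counts = dict.fromkeys(permutations(range(d)), 0)
    for i in range(len(series) - d + 1):
        counts[tuple(np.argsort(series[i:i + d]))] += 1
    p = np.array(list(counts.values()), dtype=float)
    return p / p.sum()

def permutation_entropy(series, d=3):
    """Shannon entropy of the ordinal distribution, normalized to [0, 1]."""
    p = ordinal_probs(series, d)
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum() / log(factorial(d)))

intervals = np.random.default_rng(5).exponential(1.0, 2000)  # synthetic intervals
print(permutation_entropy(intervals, d=4))  # near 1 for uncorrelated intervals
```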

Tiana-Alsina, J.; Torrent, M. C.; Rosso, O. A.; Masoller, C.; Garcia-Ojalvo, J.

2010-07-01

207

Quantifying the relative impact of climate and human activities on streamflow  

NASA Astrophysics Data System (ADS)

The objective of this study is to quantify the roles of climate and human impacts on streamflow conditions by using historical streamflow records in conjunction with trend analysis and hydrologic modeling. Four U.S. states (Indiana, New York, Arizona and Georgia) are used to represent varying levels of human activity, based on population change, and diverse climate conditions. The Mann-Kendall trend analysis is first used to examine the magnitude of changes in precipitation, streamflow and potential evapotranspiration for the four states. Four hydrologic modeling methods, including linear regression, hydrologic simulation, annual balance, and Budyko analysis, are then used to quantify the respective climate and human impacts on streamflow. All four methods show that, compared to the climate impact, the human impact on streamflow is higher at most gauging stations in all four states. Among the four methods used, the linear regression approach produced the best hydrologic output in terms of a higher Nash-Sutcliffe coefficient. The methodology used in this study is also able to correctly highlight areas with higher human impact, such as the modified channelized reaches in the northwestern part of Indiana. The results from this study show that population alone cannot capture all the changes caused by human activities in a region. However, this approach provides a starting point towards understanding the role of individual human activities in streamflow changes.
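
Of the four methods compared, the linear-regression approach is the simplest to state: fit streamflow against climate drivers over a baseline period, project climate-only flow over the impacted period, and attribute the residual to human activity. A minimal sketch with synthetic annual series; the baseline year and coefficients are hypothetical.

```python
import numpy as np

def separate_impacts(p, pet, q, baseline):
    """Fit Q ~ 1 + P + PET on the baseline years, predict climate-only Q for
    the impacted years, and split the observed mean change into climate and
    human contributions. `baseline` is a boolean mask over the annual series."""
    X = np.column_stack([np.ones_like(p), p, pet])
    coef, *_ = np.linalg.lstsq(X[baseline], q[baseline], rcond=None)
    dq_total = q[~baseline].mean() - q[baseline].mean()
    dq_climate = (X[~baseline] @ coef).mean() - q[baseline].mean()
    return dq_climate, dq_total - dq_climate

years = np.arange(1950, 2011)
baseline = years < 1980
rng = np.random.default_rng(6)
p = rng.normal(1000, 120, years.size)                 # precipitation (mm)
pet = rng.normal(800, 60, years.size)                 # potential ET (mm)
q = 0.6 * p - 0.2 * pet + rng.normal(0, 20, years.size)
q[~baseline] -= 50.0                                  # imposed human-induced loss
print(separate_impacts(p, pet, q, baseline))          # human term ~ -50
```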

Ahn, Kuk-Hyun; Merwade, Venkatesh

2014-07-01

208

Quantifying nonverbal communicative behavior in face-to-face human dialogues  

NASA Astrophysics Data System (ADS)

The study referred to here is based on the assumption that understanding how humans use nonverbal behavior in dialogues can be very useful in the design of more natural-looking animated talking heads. The goal of the study is twofold: (1) to explore how people use specific facial expressions and head movements to serve important dialogue functions, and (2) to show evidence that it is possible to measure and quantify the extent of these movements with the Qualisys MacReflex motion tracking system. Naturally elicited dialogues between humans were analyzed, focusing attention on those nonverbal behaviors that serve the very relevant functions of regulating the conversational flow (i.e., turn taking) and producing information about the state of communication (i.e., feedback). The results show that eyebrow raising, head nods, and head shakes are typical signals involved in the exchange of speaking turns, as well as in the production and elicitation of feedback. These movements can be easily measured and quantified, and this measure can be implemented in animated talking heads.

Skhiri, Mustapha; Cerrato, Loredana

2002-11-01

209

A mass-balance model to separate and quantify colloidal and solute redistributions in soil  

USGS Publications Warehouse

Studies of weathering and pedogenesis have long used calculations based upon low-solubility index elements to determine mass gains and losses in open systems. One of the questions currently unanswered in these settings is the degree to which mass is transferred in solution (solutes) versus suspension (colloids). Here we show that differential mobility of the low-solubility, high field strength (HFS) elements Ti and Zr can trace colloidal redistribution, and we present a model for distinguishing between mass transfer in suspension and solution. The model is tested on a well-differentiated granitic catena located in Kruger National Park, South Africa. Ti and Zr ratios from parent material, soil and colloidal material are substituted into a mixing equation to quantify colloidal movement. The results show zones of both colloid removal and augmentation along the catena. Colloidal losses of 110 kg m-2 (-5% relative to parent material) are calculated for one eluviated soil profile. A downslope illuviated profile has gained 169 kg m-2 (10%) of colloidal material. Elemental losses by mobilization in true solution are ubiquitous across the catena, even in zones of colloidal accumulation, and range from 1418 kg m-2 (-46%) for an eluviated profile to 195 kg m-2 (-23%) at the bottom of the catena. Quantification of simultaneous mass transfers in solution and suspension provides greater specificity on processes within soils and across hillslopes. Additionally, because colloids include both HFS and other elements, the ability to quantify their redistribution has implications for standard calculations of soil mass balances using such index elements. © 2011.
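
The index-element mass balance that this model builds on can be made concrete with the standard open-system mass-transport coefficient, computed here for Zr against Ti as the immobile index; the concentrations are illustrative, not data from the catena.

```python
def tau(c_j_soil, c_i_soil, c_j_parent, c_i_parent):
    """Open-system mass-transport coefficient tau for mobile element j relative
    to immobile index element i: fractional gain (tau > 0) or loss (tau < 0)
    of j in soil relative to parent material."""
    return (c_j_soil / c_j_parent) * (c_i_parent / c_i_soil) - 1.0

# Hypothetical ppm concentrations: j = Zr, i = Ti
print(tau(c_j_soil=260.0, c_i_soil=4200.0, c_j_parent=250.0, c_i_parent=3800.0))
# ~ -0.06, i.e. an apparent 6% Zr loss; differential Ti/Zr mobility of this
# kind is what the model interprets as potential colloidal redistribution
```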

Bern, C. R.; Chadwick, O. A.; Hartshorn, A. S.; Khomo, L. M.; Chorover, J.

2011-01-01

210

3D Wind: Quantifying wind speed and turbulence intensity  

NASA Astrophysics Data System (ADS)

Integrating measurements and modeling of wind characteristics for wind resource assessment and wind farm control is increasingly challenging as the scale of wind farms increases. Even offshore or in relatively homogeneous landscapes, there are significant gradients of both wind speed and turbulence intensity on scales that typify large wind farms. Our project is, therefore, focused on (i) improving methods to integrate remote sensing and in situ measurements with model simulations to produce a 3-dimensional view of the flow field on wind farm scales and (ii) investigating important controls on the spatiotemporal variability of flow fields within the coastal zone. The instrument suite deployed during the field experiments includes 3-D sonic and cup anemometers deployed on meteorological masts and buoys, anemometers deployed on tethersondes and an Unmanned Aerial Vehicle, and multiple vertically-pointing continuous-wave lidars and scanning Doppler lidars. We also integrate data from satellite-borne instrumentation, specifically synthetic aperture radar and scatterometers, and output from the Weather Research and Forecasting (WRF) model. Spatial wind fields and vertical profiles of wind speed from WRF and from the full in situ observational suite exhibited excellent agreement in a proof-of-principle experiment conducted in northern Indiana, particularly during convective conditions, but showed some discrepancies during the breakdown of the nocturnal stable layer. Our second experiment, in May 2013, focused on triangulating a volume above an area of coastal water extending from the port in Cleveland out to an offshore water intake crib (about 5 km) and back to the coast, and included extremely high resolution WRF simulations designed to characterize the coastal zone. Vertically pointing continuous-wave lidars were operated at each apex of the triangle, while the scanning Doppler lidar scanned out across the water over a 90-degree azimuth angle. Preliminary results pertaining to objective (i) indicate good agreement between wind and turbulence profiles to 200 m as measured by the vertically pointing lidars, with the expected modification of the offshore profiles relative to the profiles at the coastline. However, these profiles do not always fully agree with wind speed and direction profiles measured by the scanning Doppler lidar. Further investigation is required to elucidate these results and to analyze whether these discrepancies occur during particular atmospheric conditions. Preliminary results regarding controls on flow in the coastal zone (i.e. objective (ii)) include clear evidence that the wind profile to 200 m was modified by swell during unstable conditions, even under moderate to high wind speed conditions. The measurement campaigns will be described in detail, with a view to evaluating optimal strategies for offshore measurement campaigns and in the context of quantifying wind and turbulence in a 3D volume.

Barthelmie, R. J.; Pryor, S. C.; Wang, H.; Crippa, P.

2013-12-01

211

Olaparib shows promise in multiple tumor types.  

PubMed

A phase II study of the PARP inhibitor olaparib (AstraZeneca) for cancer patients with inherited BRCA1 and BRCA2 gene mutations confirmed earlier results showing clinical benefit for advanced breast and ovarian cancers, and demonstrated evidence of effectiveness against pancreatic and prostate cancers. PMID:23847380

2013-07-01

212

Quantifying Hierarchy Stimuli in Systematic Desensitization Via GSR: A Preliminary Investigation  

ERIC Educational Resources Information Center

The aim of the method for quantifying hierarchy stimuli by Galvanic Skin Resistance recordings is to improve the results of systematic desensitization by attenuating the subjective influences in hierarchy construction which are common in traditional procedures. (Author/CS)

Barabasz, Arreed F.

1974-01-01

213

Quantifying the local Seebeck coefficient with scanning thermoelectric microscopy  

NASA Astrophysics Data System (ADS)

We quantify the local Seebeck coefficient with scanning thermoelectric microscopy, using a direct approach to convert temperature gradient-induced voltages (V) to Seebeck coefficients (S). We use a quasi-3D conversion matrix that considers both the sample geometry and the temperature profile. For a GaAs p-n junction, the resulting S-profile is consistent with that computed using the free carrier concentration profile. This combined computational-experimental approach is expected to enable nanoscale measurements of S across a wide variety of heterostructure interfaces.
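The direct conversion described here can be viewed as a linear inverse problem: if a conversion matrix M relates the local Seebeck profile S to the measured thermovoltages V (V = M S), then S follows from a linear solve. A toy sketch; the matrix and voltage values below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Hypothetical 3-point conversion matrix encoding sample geometry and
# the temperature profile (illustrative numbers only).
M = np.array([[1.0, 0.2, 0.0],
              [0.1, 1.0, 0.1],
              [0.0, 0.2, 1.0]]) * 1e-3  # effective temperature weights, K

V = np.array([1.2, 0.9, 0.4]) * 1e-6    # measured thermovoltages, V

S = np.linalg.solve(M, V)               # local Seebeck profile, V/K
print(S * 1e6)                          # in microvolts per kelvin
```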

Walrath, J. C.; Lin, Y. H.; Pipe, K. P.; Goldman, R. S.

2013-11-01

214

Quantifying Subsidence in the 1999-2000 Arctic Winter Vortex  

NASA Technical Reports Server (NTRS)

Quantifying the subsidence of the polar winter stratospheric vortex is essential to the analysis of ozone depletion, as chemical destruction often occurs against a large, altitude-dependent background ozone concentration. Using N2O measurements made during SOLVE on a variety of platforms (ER-2, in-situ balloon and remote balloon), the 1999-2000 Arctic winter subsidence is determined from N2O-potential temperature correlations along several N2O isopleths. The subsidence rates are compared to those determined in other winters, and comparison is also made with results from the SLIMCAT stratospheric chemical transport model.
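A hedged sketch of the kind of calculation implied by the tracer-correlation approach: track the potential temperature at which a fixed N2O mixing ratio is found through the winter, and estimate the mean diabatic descent rate as the slope of a linear fit. All numbers below are synthetic.

```python
import numpy as np

# Synthetic record: potential temperature (K) of the N2O = 150 ppbv
# isopleth on several days of the winter (illustrative values only).
day = np.array([0, 20, 40, 60, 80, 100], dtype=float)
theta = np.array([460.0, 448.0, 437.0, 425.0, 415.0, 404.0])

# Diabatic descent (subsidence) rate in K/day = slope of theta(t).
rate = np.polyfit(day, theta, 1)[0]
print(f"subsidence rate = {rate:.2f} K/day")  # ~ -0.56 K/day
```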

Greenblatt, Jeffery B.; Jost, Hans-juerg; Loewenstein, Max; Podolske, James R.; Bui, T. Paul; Elkins, James W.; Moore, Fred L.; Ray, Eric A.; Sen, Bhaswar; Margitan, James J.; Hipskind, R. Stephen (Technical Monitor)

2000-01-01

215

Quantifiers More or Less Quantify On-Line: ERP Evidence for Partial Incremental Interpretation  

ERIC Educational Resources Information Center

Event-related brain potentials were recorded during RSVP reading to test the hypothesis that quantifier expressions are incrementally interpreted fully and immediately. In sentences tapping general knowledge ("Farmers grow crops/worms as their primary source of income"), Experiment 1 found larger N400s for atypical ("worms") than typical objects…

Urbach, Thomas P.; Kutas, Marta

2010-01-01

216

Quantifying touch feel perception: tribological aspects  

NASA Astrophysics Data System (ADS)

We report a new investigation into how surface topography and friction affect human touch-feel perception. In contrast with previous work based on micro-scale mapping of surface mechanical and tribological properties, this investigation focuses on the direct measurement of the friction generated when a fingertip is stroked on a test specimen. A special friction apparatus was built for the in situ testing, based on a linear flexure mechanism with both contact force and frictional force measured simultaneously. Ten specimens, already independently assessed in a 'perception clinic', with materials including natural wood, leather, engineered plastics and metal, were tested and the results compared with the perceived rankings. Because surface geometrical features are suspected to play a significant role in perception, a second set of samples, all of one material, was prepared and tested in order to minimize the influence of properties such as hardness and thermal conductivity. To minimize subjective effects, all specimens were also tested in a roller-on-block configuration based upon the same friction apparatus, with the roller materials being steel, brass and rubber. This paper reports the detailed design and instrumentation of the friction apparatus, the experimental set-up and the friction test results. Attempts have been made to correlate the measured properties and the perceived feelings for both roughness and friction. The results show that the measured roughness and friction coefficient both have a strong correlation with the rough-smooth and grippy-slippery feelings.
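Correlating measured friction with perceived rankings is naturally done with a rank correlation. A minimal sketch, assuming illustrative data for the ten specimens (the values are invented, not the paper's):

```python
import numpy as np
from scipy.stats import spearmanr

# Perceived grippy-slippery rank (1 = most slippery) and measured
# fingertip friction coefficient for ten specimens (made-up values).
perceived_rank = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
friction_coeff = np.array([0.21, 0.25, 0.24, 0.33, 0.38,
                           0.41, 0.40, 0.52, 0.55, 0.61])

rho, p = spearmanr(perceived_rank, friction_coeff)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```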

Liu, X.; Yue, Z.; Cai, Z.; Chetwynd, D. G.; Smith, S. T.

2008-08-01

217

Quantifying spin mixing conductance in F/Pt (F =Ni, Fe, and Ni81Fe19) bilayer film  

NASA Astrophysics Data System (ADS)

The spin-mixing conductances in F/Pt (F = Ni, Fe, and Ni81Fe19) bilayer films were quantified from the peak-to-peak linewidth of ferromagnetic resonance (FMR) spectra, based on the spin-pumping model. When the Pt layer is attached to the F layer, we find an enhancement of the FMR linewidth due to spin pumping. The experimental results show that the spin-mixing conductances in F/Pt (F = Ni, Fe, and Ni81Fe19) bilayer films have the same order of magnitude, indicating that the spin injection efficiency of spin pumping is almost identical in these films.
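For orientation, the textbook spin-pumping analysis (Tserkovnyak et al.) converts the damping enhancement produced by the Pt layer into an effective spin-mixing conductance. The sketch below assumes that standard relation with the damping change supplied directly, omits the lineshape prefactors that connect peak-to-peak linewidths to damping, and uses illustrative CGS numbers rather than the paper's data.

```python
import numpy as np

MU_B = 9.274e-21  # Bohr magneton, erg/G (CGS)

def spin_mixing_conductance(delta_alpha, Ms, t_f, g=2.0):
    """Effective spin-mixing conductance (cm^-2) from the damping
    enhancement delta_alpha when Pt is attached to the ferromagnet.
    Standard spin-pumping relation; Ms in emu/cm^3, t_f in cm."""
    return 4.0 * np.pi * Ms * t_f * delta_alpha / (g * MU_B)

# Illustrative Ni81Fe19/Pt numbers (not the paper's data):
# Ms ~ 800 emu/cm^3, film thickness 10 nm, damping increase 3e-3.
g_mix = spin_mixing_conductance(3e-3, 800.0, 10e-7)
print(f"g_mix ~ {g_mix:.2e} cm^-2")  # order 1e15 cm^-2
```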

Yoshino, T.; Ando, K.; Harii, K.; Nakayama, H.; Kajiwara, Y.; Saitoh, E.

2011-01-01

218

A Time-Domain Hybrid Analysis Method for Detecting and Quantifying T-Wave Alternans  

PubMed Central

T-wave alternans (TWA) in surface electrocardiograph (ECG) signals has been recognized as a marker of cardiac electrical instability and is hypothesized to be associated with increased risk for ventricular arrhythmias among patients. A novel time-domain TWA hybrid analysis method (HAM) utilizing the correlation method and the least squares regression technique is described in this paper. Simulated ECGs containing artificial TWA (cases of absence of TWA and presence of stationary, time-varying, or phase-reversal TWA) under different baseline wanderings are used to test the method, and the results show that HAM has a better ability to quantify TWA amplitude compared with the correlation method (CM) and the adaptive matched filter method (AMFM). The HAM is subsequently used to analyze clinical ECGs, and results produced by the HAM have, in general, demonstrated consistency with those produced by the CM and the AMFM, while the TWA amplitudes quantified by the HAM are consistently higher than those produced by the other two methods.
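As a point of reference for the methods compared in this record, the simplest view of TWA is a comparison of mean even-beat and odd-beat T-wave templates. The sketch below implements that crude even/odd estimate (not the HAM itself) on synthetic beats:

```python
import numpy as np

def twa_amplitude(t_waves):
    """Crude TWA estimate: max absolute difference between the mean
    even-beat and mean odd-beat T-wave templates.
    t_waves: (n_beats, n_samples) aligned T-wave segments."""
    even = t_waves[0::2].mean(axis=0)
    odd = t_waves[1::2].mean(axis=0)
    return np.abs(even - odd).max()

# Synthetic beats: a T wave whose peak alternates by 20 microvolts.
n, m = 64, 100
base = np.sin(np.linspace(0.0, np.pi, m)) * 300.0     # microvolts
beats = np.tile(base, (n, 1))
beats[0::2] += 10.0 * np.sin(np.linspace(0.0, np.pi, m))
beats[1::2] -= 10.0 * np.sin(np.linspace(0.0, np.pi, m))
print(f"TWA ~ {twa_amplitude(beats):.1f} uV")  # ~20 uV
```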

Wan, Xiangkui; Yan, Kanghui; Zhang, Linlin; Zeng, Yanjun

2014-01-01

219

Quantifying Biofilm in Porous Media Using Rock Physics Models  

NASA Astrophysics Data System (ADS)

Biofilm formation and growth in porous rocks can change material properties such as porosity and permeability, which in turn will impact fluid flow. Finding a non-intrusive method to quantify biofilms and their byproducts in rocks is key to understanding and modeling bioclogging in porous media. Previous geophysical investigations have documented that seismic techniques are sensitive to biofilm growth. These studies pointed to the fact that microbial growth and biofilm formation induce heterogeneity in the seismic properties. Currently there are no rock physics models to explain these observations and to provide quantitative interpretation of the seismic data. Our objective is to develop a new class of rock physics models that incorporate microbial processes and their effect on seismic properties. Using the assumption that biofilms can grow within pore spaces or as a layer coating the mineral grains, P-wave (Vp) and S-wave (Vs) velocity models were constructed using travel-time and waveform tomography techniques. We used generic rock physics schematics to represent our rock system numerically. We simulated the arrival times as well as the waveforms by treating biofilms either as fluid (filling pore spaces) or as part of the matrix (coating sand grains). The preliminary results showed a 1% change in Vp and a 3% change in Vs when biofilms are represented as discrete structures in pore spaces. On the other hand, a 30% change in Vp and a 100% change in Vs were observed when biofilm was represented as part of the matrix coating the sand grains. Therefore, Vp and Vs change more rapidly when biofilm grows as a grain-coating phase. The significant change in Vs associated with biofilms suggests that shear velocity can be used as a diagnostic tool for imaging zones of bioclogging in the subsurface. The results obtained from this study have significant implications for the study of the rheological properties of biofilms in geological media. Other applications include assessing biofilms used as barriers in CO2 sequestration studies as well as assisting in evaluating microbial enhanced oil recovery (MEOR) methods, where microorganisms are used to plug highly porous rocks for efficient oil production.

Alhadhrami, F. M.; Jaiswal, P.; Atekwana, E. A.

2012-12-01

220

Peptoid analogues of anoplin show antibacterial activity.  

PubMed

We have synthesised nine analogues of the antibacterial peptide anoplin with a peptoid residue at position 5 (H-GLLKXIKTLL-NH2). The most active compounds showed MIC values of 12.5 and 25 µM against E. coli and S. aureus. These MIC values are comparable with those of anoplin, which showed 23 µM and 11 µM against E. coli and S. aureus. However, the selectivity was reversed. Our results indicate that peptoid analogues of anoplin are promising lead structures for developing new antibacterial agents. PMID:19799550

Meinike, K; Hansen, P R

2009-01-01

221

Quantifying the motion of Kager's fat pad.  

PubMed

Kager's fat pad is located in Kager's triangle between the Achilles tendon, the superior cortex of the calcaneus, and flexor hallucis longus (FHL) muscle and tendon. Its biomechanical functions are not yet established, but recent studies suggest it performs important biomechanical roles as it is lined by a synovial membrane and its retrocalcaneal protruding wedge can be observed moving into the bursal space during ankle plantarflexion. Such features have prompted hypotheses that the protruding wedge assists in the lubrication of the Achilles tendon subtendinous area, distributes stress at the Achilles enthesis, and removes debris from within the retrocalcaneal bursa. This study examined the influence of FHL activity and Achilles tendon load on the protruding wedge sliding distance, using both dynamic ultrasound imaging and surface electromyogram. Intervolunteer results showed sliding distance was independent of FHL activity. This study has shown the protruding wedge to slide on average 60% further into the retrocalcaneal bursa when comparing the Achilles tendon loaded versus unloaded, consistently reaching the distal extremity. Sliding distance was dependent on a change in the Achilles tendon insertion angle. Our results support a number of hypothesized biomechanical functions of the protruding wedge including: lubrication of the subtendinous region; reduction of pressure change within the Achilles tendon enthesis organ; and removal of debris from within the retrocalcaneal bursa. PMID:19396861

Ghazzawi, Ahmad; Theobald, Peter; Pugh, Neil; Byrne, Carl; Nokes, Len

2009-11-01

222

Quantifying individual performance in Cricket — A network analysis of batsmen and bowlers  

NASA Astrophysics Data System (ADS)

Quantifying individual performance in the game of cricket is critical for team selection in international matches. The number of runs scored by batsmen and wickets taken by bowlers serves as a natural way of quantifying the performance of a cricketer. Traditionally, batsmen and bowlers are rated on their batting or bowling averages respectively. However, in a game like cricket the manner in which one scores runs or claims a wicket is always important. Scoring runs against a strong bowling line-up, or delivering a brilliant performance against a team with a strong batting line-up, deserves more credit. A player's average is not able to capture this aspect of the game. In this paper we present a refined method to quantify the 'quality' of runs scored by a batsman or wickets taken by a bowler. We explore the application of Social Network Analysis (SNA) to rate players on their contribution to team performance. We generate a directed and weighted network of batsmen and bowlers using the player-vs-player information available for Test cricket and ODI cricket. Additionally we generate a network of batsmen and bowlers based on the dismissal records of batsmen in the history of cricket: Test (1877-2011) and ODI (1971-2011). Our results show that M. Muralitharan is the most successful bowler in the history of cricket. Our approach could potentially be applied in domestic matches to judge a player's performance, which in turn paves the way for a balanced team selection for international matches.
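The network construction can be sketched in a few lines: dismissals define directed, weighted edges from batsman to bowler, and a centrality measure then ranks bowlers, crediting dismissals of strong batsmen more. Names, weights, and the use of PageRank below are illustrative; the paper's exact index may differ.

```python
import networkx as nx

# Edges: batsman -> bowler, weighted by number of dismissals
# (illustrative entries, not the historical record).
dismissals = [
    ("Tendulkar", "Muralitharan", 8),
    ("Ponting", "Muralitharan", 5),
    ("Tendulkar", "Warne", 4),
    ("Lara", "Muralitharan", 6),
    ("Lara", "McGrath", 5),
    ("Ponting", "Harbhajan", 10),
]

G = nx.DiGraph()
for batsman, bowler, w in dismissals:
    G.add_edge(batsman, bowler, weight=w)

# PageRank with edge weights: credit flows from batsmen to the
# bowlers who dismiss them, and more credit from strong batsmen.
rank = nx.pagerank(G, weight="weight")
for player, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{player:14s} {score:.3f}")
```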

Mukherjee, Satyam

2014-01-01

223

Model for quantifying photoelastic fringe degradation by imperfect retroreflective backings.  

PubMed

In any automated algorithm for interpreting photoelastic fringe patterns it is necessary to understand and quantify sources of error in the measurement system. We have been considering how the various components of the coating affect the photoelastic measurement, because this source of error has received fairly little attention in the literature. Because the reflective backing is not a perfect retroreflector, it does not preserve the polarization of light and thereby introduces noise into the measurement that depends on the angle of obliqueness and roughness of the reflective surface. This is of particular concern in resolving the stress tensor through the combination of thermoelasticity and photoelasticity where the components are sensitive to errors in the principal angle and difference of the principal stresses. We have developed a physical model that accounts for this and other sources of measurement error to be introduced in a systematic way so that the individual effects on the fringe patterns can be quantified. Simulations show altered photoelastic fringes when backing roughness and oblique incident angles are incorporated into the model. PMID:18345104

Woolard, D; Hinders, M

2000-05-01

224

A framework for quantifying net benefits of alternative prognostic models

PubMed Central

New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd.
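For orientation, the decision-analytic net benefit that this framework generalises has a compact standard form: benefit from true positives traded against harm from treating false positives at a risk threshold p_t. A minimal sketch of that familiar special case (per person, in true-positive units), not the authors' life-year formulation:

```python
def net_benefit(tp, fp, n, p_t):
    """Standard decision-curve net benefit at risk threshold p_t:
    (TP - FP * p_t/(1-p_t)) / n. Not the life-year extension
    developed in the paper, just the familiar special case."""
    return (tp - fp * p_t / (1.0 - p_t)) / n

# Example: 1000 people, the model treats 200, of whom 60 have events.
print(f"{net_benefit(tp=60, fp=140, n=1000, p_t=0.10):.4f}")
```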

Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G

2012-01-01

225

A framework for quantifying net benefits of alternative prognostic models.  

PubMed

New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. PMID:21905066

Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G

2012-01-30

226

Concurrent schedules: Quantifying the aversiveness of noise  

PubMed Central

Four hens worked under independent multiple concurrent variable-interval schedules with an overlaid aversive stimulus (the sound of hens in a poultry shed at 100 dBA) activated by the first peck on a key. The sound remained on until a response was made on the other key. The key that activated the sound in each component was varied over a series of conditions. When the sound was activated by the left (or right) key in one component, it was activated by the right (or left) key in the other component. Bias was examined under a range of different variable-interval schedules, and the applicability of the generalized matching law was examined. It was found that the hens' behavior was biased away from the sound independently of the schedule in effect and that this bias could be quantified using a modified version of the generalized matching law. Behavior during the changeover delays was not affected by the presence of the noise or by changes in reinforcement rate, even though the total response measures were. The insensitivity shown during the delay suggests that behavior after the changeover delay may be more appropriate as a measure of preference (or aversiveness) of stimuli than are overall behavior measures.
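In log form the generalized matching law is a straight line, log(B1/B2) = a log(R1/R2) + log c, so sensitivity a and bias log c drop out of a linear regression. A sketch with synthetic data (a negative log c would correspond to bias away from the noisy key):

```python
import numpy as np

# Synthetic concurrent-schedule data: response ratios B1/B2 under
# several reinforcement-rate ratios R1/R2 (illustrative values).
log_r = np.log10(np.array([0.25, 0.5, 1.0, 2.0, 4.0]))
log_b = 0.8 * log_r - 0.15  # sensitivity a=0.8, bias log c=-0.15
log_b += np.random.default_rng(1).normal(0.0, 0.02, log_b.size)

a, log_c = np.polyfit(log_r, log_b, 1)
print(f"sensitivity a = {a:.2f}, bias log c = {log_c:.2f}")
```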

McAdie, Tina M.; Foster, T. Mary; Temple, William

1996-01-01

227

Quantifying the evolutionary dynamics of language  

PubMed Central

Human language is based on grammatical rules [1-4]. Cultural evolution allows these rules to change over time [5]. Rules compete with each other: as new rules rise to prominence, old ones die away. To quantify the dynamics of language evolution, we studied the regularization of English verbs over the last 1200 years. Although an elaborate system of productive conjugations existed in English's proto-Germanic ancestor, modern English uses the dental suffix, -ed, to signify past tense [6]. Here, we describe the emergence of this linguistic rule amidst the evolutionary decay of its exceptions, known to us as irregular verbs. We have generated a dataset of verbs whose conjugations have been evolving for over a millennium, tracking inflectional changes to 177 Old English irregulars. Of these irregulars, 145 remained irregular in Middle English and 98 are still irregular today. We study how the rate of regularization depends on the frequency of word usage. The half-life of an irregular verb scales as the square root of its usage frequency: a verb that is 100 times less frequent regularizes 10 times as fast. Our study provides a quantitative analysis of the regularization process by which ancestral forms gradually yield to an emerging linguistic rule.
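The reported scaling law is compact enough to state directly: if the half-life of irregularity scales as the square root of usage frequency, a verb used 100 times less often regularizes 10 times as fast. A one-line check:

```python
import math

def relative_half_life(freq_ratio):
    """Half-life of irregularity scales as sqrt(usage frequency):
    returns the half-life ratio for two verbs with the given
    frequency ratio (the paper's reported scaling law)."""
    return math.sqrt(freq_ratio)

print(relative_half_life(1 / 100))  # 0.1 -> regularizes 10x as fast
```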

Lieberman, Erez; Michel, Jean-Baptiste; Jackson, Joe; Tang, Tina; Nowak, Martin A.

2008-01-01

228

Quantifying Climate Risks for Urban Environments  

NASA Astrophysics Data System (ADS)

High-density urban areas are both uniquely vulnerable and uniquely able to adapt to climate change. Enabling this potential requires identifying the vulnerabilities, however, and these depend strongly on location: the challenges climate change poses for a southern coastal city such as Miami, for example, have little in common with those facing a northern inland city such as Chicago. By combining local knowledge with climate science, risk assessment, engineering analysis, and adaptation planning, it is possible to develop relevant climate information that feeds directly into vulnerability assessment and long-term planning. Key steps include: developing climate projections tagged to long-term weather stations within the city itself that reflect local characteristics; mining local knowledge to identify existing vulnerabilities to, and costs of, weather and climate extremes; understanding how future projections can be integrated into the planning process; and identifying ways in which the city may adapt. Using examples from our work in the cities of Boston, Chicago, and Mobile, we illustrate the practical application of this approach to quantify the impacts of climate change on these cities and identify robust adaptation options as diverse as reducing the urban heat island effect, protecting essential infrastructure, changing design standards and building codes, developing robust emergency management plans, and rebuilding storm sewer systems.

Hayhoe, K.; Stoner, A. K.; Dickson, L.

2013-12-01

229

Quantifying instantaneous performance in alpine ski racing.  

PubMed

Alpine ski racing is a popular sport in many countries and a lot of research has gone into optimising athlete performance. Two factors influence athlete performance in a ski race: speed and the chosen path between the gates. However, to date there is no objective, quantitative method to determine instantaneous skiing performance that takes both of these factors into account. The purpose of this short communication was to define a variable quantifying instantaneous skiing performance and to study how this variable depended on the skiers' speed and on their chosen path. Instantaneous skiing performance was defined as the time loss per elevation difference, dt/dz, which depends on the skier's speed v(z) and on the distance travelled per elevation difference, ds/dz. Using kinematic data collected in an earlier study, it was evaluated how these variables can be used to assess the individual performance of six ski racers in two slalom turns. The performance analysis conducted in this study might be a useful tool not only for athletes and coaches preparing for competition, but also for sports scientists investigating skiing techniques or engineers developing and testing skiing equipment. PMID:22620279
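With sampled kinematics, the performance variable defined here reduces to a ratio of finite differences: dt/dz from time stamps and elevations along the trajectory (ds/dz follows the same pattern). A sketch with synthetic samples, using the elevation drop so the quantity is positive:

```python
import numpy as np

# Synthetic trajectory samples: time (s) and elevation (m).
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
z = np.array([100.0, 98.8, 97.4, 95.9, 94.5])

# Time loss per metre of elevation drop: dt/dz along the run.
# Lower values mean the skier converts elevation to speed faster.
dt_dz = np.gradient(t) / np.gradient(-z)   # use the drop -z so dt/dz > 0
print(np.round(dt_dz, 3))
```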

Federolf, Peter Andreas

2012-01-01

230

Quantifying Acute Myocardial Injury Using Ratiometric Fluorometry  

PubMed Central

Early reperfusion is the best therapy for myocardial infarction (MI). Effectiveness, however, varies significantly between patients and has implications for long-term prognosis and treatment. A technique to assess the extent of myocardial salvage after reperfusion therapy would allow high-risk patients to be identified in the early post-MI period. Mitochondrial dysfunction is associated with cell death following myocardial reperfusion and can be quantified by fluorometry. Therefore, we hypothesized that variations in the fluorescence of mitochondrial nicotinamide adenine dinucleotide (NADH) and flavoprotein (FP) can be used acutely to predict the degree of myocardial injury. Thirteen rabbits had coronary occlusion for 30 min followed by 3 h of reperfusion. To produce a spectrum of infarct sizes, six animals were infused with cyclosporine A prior to ischemia. Using a specially designed fluorometric probe, NADH and FP fluorescence were measured in the ischemic area. Changes in NADH and FP fluorescence, as early as 15 min after reperfusion, correlated with postmortem assessment of infarct size (r = 0.695, p < 0.01). This correlation strengthened with time (r = 0.827, p < 0.001 after 180 min). Clinical application of catheter-based myocardial fluorometry may provide a minimally invasive technique for assessing the early response to reperfusion therapy.
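A common ratiometric convention in this literature expresses mitochondrial redox state as FP/(FP + NADH), which cancels overall intensity scaling. The sketch below assumes that convention for illustration; it is not necessarily the exact quantity computed in the paper.

```python
import numpy as np

def redox_ratio(fp, nadh):
    """Ratiometric redox index FP / (FP + NADH) from fluorescence
    intensities; robust to overall intensity scaling."""
    fp, nadh = np.asarray(fp, float), np.asarray(nadh, float)
    return fp / (fp + nadh)

# Illustration: ischemia raises NADH (reduced state), lowering the index.
print(f"{redox_ratio(100.0, 150.0):.2f}")  # baseline       -> 0.40
print(f"{redox_ratio(80.0, 250.0):.2f}")   # after ischemia -> 0.24
```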

Ranji, Mahsa; Matsubara, Muneaki; Leshnower, Bradley G.; Hinmon, Robin H.; Jaggard, Dwight L.; Chance, Britton; Gorman, Robert C.

2011-01-01

231

Quantifying bank storage of variably saturated aquifers.  

PubMed

Numerical simulations were conducted to quantify bank storage in a variably saturated, homogeneous, and anisotropic aquifer abutting a stream during rising stream stage. Seepage faces and bank slopes ranging from 1/3 to 100/3 were simulated. The initial conditions were assumed steady-state flow with water draining toward the stream. Then, the stream level rose at a constant rate to the specified elevation of the water table given by the landward boundary condition and stayed there until the system reached a new steady state. This represents a highly simplified version of a real-world hydrograph. For the specific examples considered, the following conclusions can be made. The volume of surface water entering the bank increased with the rate of stream level rise, became negligible when the rate of rise was slow, and approached a positive constant when the rate was large. Also, the volume decreased with the dimensionless parameter M (the product of the anisotropy ratio and the square of the domain's aspect ratio). When M was large (>10), bank storage was small because most pore space was initially saturated with ground water due to the presence of a significant seepage face. When M was small, the seepage face became insignificant and capillarity began to play a role. The weaker the capillary effect, the easier it was for surface water to enter the bank. The effect of the capillary forces on the volume of surface water entering the bank was significant and could not be neglected. PMID:18657116

Li, Hailong; Boufadel, Michel C; Weaver, James W

2008-01-01

232

Quantifying non-Markovianity via correlations  

NASA Astrophysics Data System (ADS)

In the study of open quantum systems, memory effects are usually ignored, and this leads to dynamical semigroups and Markovian dynamics. However, in practice, non-Markovian dynamics is the rule rather than the exception. With the recent emergence of quantum information theory, there is a flurry of investigations of non-Markovian dynamics, and several significant measures for non-Markovianity have been introduced from various perspectives, such as deviation from divisibility, information exchange between a system and its environment, or entanglement with the environment. In this work, by exploiting the flow of correlations between a system and an arbitrary ancilla, we propose a considerably intuitive measure for non-Markovianity that uses correlations as quantified by the quantum mutual information rather than entanglement. The fundamental properties, physical significance, and differences and relations with existing measures for non-Markovianity are elucidated. The measure captures quite directly and deeply the characteristics of non-Markovianity from the perspective of information. A simplified version based on the Jamiołkowski-Choi isomorphism, which encodes operations via bipartite states and does not involve any optimization, is also proposed.
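The correlation measure underlying the proposal, quantum mutual information I(A:B) = S(A) + S(B) - S(AB), is easy to evaluate for small systems. A self-contained two-qubit sketch (the full non-Markovianity measure would additionally track how I evolves under the dynamics):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed via eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def mutual_information(rho_ab):
    """I(A:B) = S(A) + S(B) - S(AB) for a two-qubit state rho_ab."""
    r = rho_ab.reshape(2, 2, 2, 2)        # indices [a, b, a', b']
    rho_a = np.trace(r, axis1=1, axis2=3)  # trace out B
    rho_b = np.trace(r, axis1=0, axis2=2)  # trace out A
    return (von_neumann_entropy(rho_a) + von_neumann_entropy(rho_b)
            - von_neumann_entropy(rho_ab))

# Maximally entangled Bell state: I(A:B) = 2 bits.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi.conj())
print(f"I(A:B) = {mutual_information(rho):.2f} bits")
```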

Luo, Shunlong; Fu, Shuangshuang; Song, Hongting

2012-10-01

233

Data Used in Quantified Reliability Models  

NASA Technical Reports Server (NTRS)

Data are the crux of developing quantitative risk and reliability models; without data there is no quantification. Finding and identifying reliability data or failure numbers to quantify fault tree models during conceptual and design phases is often the quagmire that precludes early decision makers from considering the potential risk drivers that will influence a design. The analyst tasked with addressing system or product reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have data similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment are often the first and only places checked. A common philosophy is "I have a number - that is good enough." But is it? Have you ever considered the difference in reported data from various federal datasets and technical reports when compared to similar sources from national and/or international datasets? Just how well does your data compare? Understanding how the reported data were derived, and interpreting the information and details associated with the data, is as important as the data itself.

DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.

2014-01-01

234

Quantifying the Magnetic Advantage in Magnetotaxis  

PubMed Central

Magnetotactic bacteria are characterized by the production of magnetosomes, nanoscale particles of lipid bilayer encapsulated magnetite, that act to orient the bacteria in magnetic fields. These magnetosomes allow magneto-aerotaxis, which is the motion of the bacteria along a magnetic field and toward preferred concentrations of oxygen. Magneto-aerotaxis has been shown to direct the motion of these bacteria downward toward sediments and microaerobic environments favorable for growth. Herein, we compare the magneto-aerotaxis of wild-type, magnetic Magnetospirillum magneticum AMB-1 with a nonmagnetic mutant we have engineered. Using an applied magnetic field and an advancing oxygen gradient, we have quantified the magnetic advantage in magneto-aerotaxis as a more rapid migration to preferred oxygen levels. Magnetic, wild-type cells swimming in an applied magnetic field more quickly migrate away from the advancing oxygen than either wild-type cells in a zero field or the nonmagnetic cells in any field. We find that the responses of the magnetic and mutant strains are well described by a relatively simple analytical model, an analysis of which indicates that the key benefit of magnetotaxis is an enhancement of a bacterium's ability to detect oxygen, not an increase in its average speed moving away from high oxygen concentrations.

Smith, M. J.; Sheehan, P. E.; Perry, L. L.; O'Connor, K.; Csonka, L. N.; Applegate, B. M.; Whitman, L. J.

2006-01-01

235

Fluorescence imaging to quantify crop residue cover  

NASA Technical Reports Server (NTRS)

Crop residues, the portion of the crop left in the field after harvest, can be an important management factor in controlling soil erosion. Methods to quantify residue cover are needed that are rapid, accurate, and objective. Scenes with known amounts of crop residue were illuminated with long-wave ultraviolet (UV) radiation and fluorescence images were recorded with an intensified video camera fitted with a 453 to 488 nm band-pass filter. A light colored soil and a dark colored soil were used as background for the weathered soybean stems. Residue cover was determined by counting the proportion of the pixels in the image with fluorescence values greater than a threshold. Soil pixels had the lowest gray levels in the images. The values of the soybean residue pixels spanned nearly the full range of the 8-bit video data. Classification accuracies typically were within 3 percentage points (absolute) of measured cover values. Video imaging can provide an intuitive understanding of the fraction of the soil covered by residue.
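The cover estimate described here reduces to a per-pixel threshold on the fluorescence image. A minimal sketch on a synthetic 8-bit frame:

```python
import numpy as np

def residue_cover(image, threshold):
    """Fraction of pixels whose fluorescence exceeds the threshold,
    taken as the crop residue cover of the scene."""
    return float((np.asarray(image) > threshold).mean())

# Synthetic 8-bit frame: dim soil background with a brighter residue strip.
rng = np.random.default_rng(2)
frame = rng.integers(5, 40, size=(240, 320))
frame[:, :96] = rng.integers(90, 255, size=(240, 96))  # residue strip
print(f"cover = {residue_cover(frame, 60):.2f}")       # ~0.30
```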

Daughtry, C. S. T.; Mcmurtrey, J. E., III; Chappelle, E. W.

1994-01-01

236

How to quantify energy landscapes of solids.  

PubMed

We explore whether the topology of energy landscapes in chemical systems obeys any rules and what these rules are. To answer this and related questions we use several tools: (i) the reduced energy surface and its density of states, (ii) a descriptor of structure called the fingerprint function, which can be represented as a one-dimensional function or a vector in an abstract multidimensional space, (iii) a definition of a "distance" between two structures enabling quantification of energy landscapes, (iv) a definition of the degree of order of a structure, and (v) definitions of the quasi-entropy quantifying structural diversity. Our approach can be used for rationalizing large databases of crystal structures and for tuning computational algorithms for structure prediction. It enables quantitative and intuitive representations of energy landscapes and a reappraisal of some of the traditional chemical notions and rules. Our analysis confirms the expectations that low-energy minima are clustered in compact regions of configuration space ("funnels") and that chemical systems tend to have very few funnels, sometimes only one. This analysis can be applied to the physical properties of solids, opening new ways of discovering structure-property relations. We quantitatively demonstrate that crystals tend to adopt one of the few simplest structures consistent with their chemistry, providing a thermodynamic justification of Pauling's fifth rule. PMID:19292538
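Once structures are reduced to fingerprint vectors, a "distance" between two structures can be defined from the angle between those vectors. The cosine-distance sketch below is one such definition, offered as an illustration rather than the authors' exact metric:

```python
import numpy as np

def structure_distance(f1, f2):
    """Cosine distance between two structure fingerprints:
    0 for identical shapes, approaching 1 for unrelated ones."""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    cos = f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2))
    return 1.0 - cos

a = np.array([0.9, 0.1, 0.4, 0.2])  # illustrative fingerprint vectors
b = np.array([0.8, 0.2, 0.5, 0.1])
print(f"d = {structure_distance(a, b):.3f}")
```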

Oganov, Artem R; Valle, Mario

2009-03-14

237

Quantifying galaxy shapes: sérsiclets and beyond  

NASA Astrophysics Data System (ADS)

Parametrization of galaxy morphologies is a challenging task, for instance in shear measurements of weak gravitational lensing or investigations of formation and evolution of galaxies. The huge variety of different morphologies requires a parametrization scheme that is highly flexible and that accounts for certain morphological observables, such as ellipticity, steepness of the radial light profile and azimuthal structure. In this article, we revisit the method of sérsiclets, where galaxy morphologies are decomposed into a set of polar basis functions that are based on the Sérsic profile. This approach is justified by the fact that the Sérsic profile is the first-order Taylor expansion of any real light profile. We show that sérsiclets indeed overcome the modelling failures of shapelets in the case of early-type galaxies. However, sérsiclets implicate an unphysical relation between the steepness of the light profile and the spatial scale of the polynomial oscillations, which is not necessarily obeyed by real galaxy morphologies and can therefore give rise to modelling failures. Moreover, we demonstrate that sérsiclets are prone to undersampling, which restricts sérsiclet modelling to highly resolved galaxy images. Analysing data from the weak-lensing GREAT08 challenge, we demonstrate that sérsiclets should not be used in weak-lensing studies. We conclude that although the sérsiclet approach appears very promising at first glance, it suffers from conceptual and practical problems that severely limit its usefulness. In particular, sérsiclets do not provide high-precision results in weak-lensing studies. Finally, we show that the Sérsic profile can be enhanced by higher order terms in the Taylor expansion, which can drastically improve model reconstructions of galaxy images. When orthonormalized, these higher order profiles can overcome the problems of sérsiclets, while preserving their mathematical justification. However, this method is computationally expensive.
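For reference, the profile on which the basis set is built is I(r) = I_e exp(-b_n[(r/r_e)^(1/n) - 1]), with b_n fixed so that r_e encloses half the light; the sketch below uses the common approximation b_n ≈ 2n - 1/3.

```python
import numpy as np

def sersic(r, I_e, r_e, n):
    """Sersic surface-brightness profile; b_n uses the common
    approximation b_n ~ 2n - 1/3 (adequate for n >~ 1)."""
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.1, 5.0, 5)
print(np.round(sersic(r, I_e=1.0, r_e=1.0, n=4.0), 4))  # de Vaucouleurs
```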

Andrae, René; Melchior, Peter; Jahnke, Knud

2011-11-01

238

Quantifying VOC emissions for the strategic petroleum reserve.  

SciTech Connect

A very important aspect of the Department of Energy's (DOE's) Strategic Petroleum Reserve (SPR) program is regulatory compliance. One of the regulatory compliance issues deals with limiting the amount of volatile organic compounds (VOCs) that are emitted into the atmosphere from brine wastes when they are discharged to brine holding ponds. The US Environmental Protection Agency (USEPA) has set limits on the amount of VOCs that can be discharged to the atmosphere. Several attempts have been made to quantify the VOC emissions associated with the brine ponds, going back to the late 1970s. There are potential issues associated with each of these quantification efforts. Two efforts were made to quantify VOC emissions by analyzing the VOC content of brine samples obtained from wells. Efforts to measure air concentrations were mentioned in historical reports, but no data have been located to confirm these assertions. A modeling effort was also performed to quantify the VOC emissions. More recently, in 2011-2013, additional brine sampling was performed to update the VOC emissions estimate. An analysis of the statistical confidence in these results is presented here. Arguably, there are uncertainties associated with each of these efforts. The analysis herein indicates that the upper confidence limit on VOC emissions based on recent brine sampling is very close to the 0.42 ton/MMB limit used historically on the project. Refining this estimate would require considerable investment in additional sampling, analysis, and monitoring. An analysis of the VOC emissions at each site suggests that additional discharges could be made while staying within current regulatory limits.
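The "upper confidence limit" referred to here is, in its simplest form, a one-sided t-interval on the mean emission factor. A sketch with made-up sample values in ton VOC per MMB (the real analysis may treat the data differently):

```python
import numpy as np
from scipy import stats

# Hypothetical brine-sample emission factors, ton VOC per MMB.
x = np.array([0.31, 0.44, 0.38, 0.35, 0.47, 0.40, 0.36, 0.42])

n, mean, se = x.size, x.mean(), x.std(ddof=1) / np.sqrt(x.size)
ucl95 = mean + stats.t.ppf(0.95, n - 1) * se  # one-sided 95% UCL
print(f"mean = {mean:.3f}, 95% UCL = {ucl95:.3f} ton/MMB")
```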

Knowlton, Robert G.; Lord, David L.

2013-06-01

239

Quantifying uncertainty in LCA-modelling of waste management systems  

SciTech Connect

Highlights: • Uncertainty in LCA-modelling of waste management is significant. • Model, scenario and parameter uncertainties contribute. • A sequential procedure for quantifying uncertainty is proposed. • Application of the procedure is illustrated by a case-study. - Abstract: Uncertainty analysis in LCA studies has been subject to major progress over the last years. In the context of waste management, various methods have been implemented, but a systematic method for uncertainty analysis of waste-LCA studies is lacking. The objective of this paper is (1) to present the sources of uncertainty specifically inherent to waste-LCA studies, (2) to select and apply several methods for uncertainty analysis and (3) to develop a general framework for quantitative uncertainty assessment of LCA of waste management systems. The suggested method is a sequence of four steps combining the selected methods: (Step 1) a sensitivity analysis evaluating the sensitivities of the results with respect to the input uncertainties, (Step 2) an uncertainty propagation providing appropriate tools for representing uncertainties and calculating the overall uncertainty of the model results, (Step 3) an uncertainty contribution analysis quantifying the contribution of each parameter uncertainty to the final uncertainty and (Step 4), as a new approach, a combined sensitivity analysis providing a visualisation of the shift in the ranking of different options due to variations of selected key parameters. This tiered approach optimises the resources available to LCA practitioners by only propagating the most influential uncertainties.
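Step 2 of the proposed sequence, uncertainty propagation, is commonly implemented by Monte Carlo sampling of the input parameters. A generic sketch; the toy landfill-gas model and the distributions are placeholders, not the paper's case study:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

# Placeholder waste-LCA parameters with assumed distributions:
# methane collection efficiency, oxidation fraction, waste carbon.
collect = rng.normal(0.60, 0.05, N).clip(0.0, 1.0)
oxidise = rng.uniform(0.05, 0.15, N)
carbon = rng.normal(120.0, 15.0, N)          # kg CH4 potential per tonne

# Toy impact model: CH4 escaping collection and oxidation,
# converted to CO2-eq (GWP100 ~ 28).
emission = carbon * (1.0 - collect) * (1.0 - oxidise) * 28.0

print(f"mean = {emission.mean():.0f} kg CO2-eq/t, "
      f"95% CI = [{np.percentile(emission, 2.5):.0f}, "
      f"{np.percentile(emission, 97.5):.0f}]")
```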

Clavreul, Julie, E-mail: julc@env.dtu.dk [Department of Environmental Engineering, Technical University of Denmark, Miljoevej, Building 113, DK-2800 Kongens Lyngby (Denmark); Guyonnet, Dominique [BRGM, ENAG BRGM-School, BP 6009, 3 Avenue C. Guillemin, 45060 Orleans Cedex (France); Christensen, Thomas H. [Department of Environmental Engineering, Technical University of Denmark, Miljoevej, Building 113, DK-2800 Kongens Lyngby (Denmark)

2012-12-15

240

Quantifying the Decanalizing Effects of Spontaneous Mutations in Rhabditid Nematodes  

PubMed Central

The evolution of canalization, the robustness of the phenotype to environmental or genetic perturbation, has attracted considerable recent interest. A key step toward understanding the evolution of any phenotype is characterizing the rate at which mutation introduces genetic variation for the trait (the mutational variance, VM) and the average directional effects of mutations on the trait mean (ΔM). In this study, the mutational parameters for canalization of productivity and body volume are quantified in two sets of mutation accumulation lines of nematodes in the genus Caenorhabditis and are compared with the mutational parameters for the traits themselves. Four results emerge: (1) spontaneous mutations consistently decanalize the phenotype; (2) the mutational parameters for decanalization, VM (quantified as mutational heritability) and ΔM, are of the same order of magnitude as the same parameters for the traits themselves; (3) the mutational parameters for canalization are roughly correlated with the parameters for the traits themselves across taxa; and (4) there is no evidence that residual segregating overdominant loci contribute to the decay of canalization. These results suggest that canalization is readily evolvable and that any evolutionary factor that causes mutations to accumulate will, on average, decanalize the phenotype.

Baer, Charles F.

2013-01-01

241

Species determination - Can we detect and quantify meat adulteration?  

PubMed

Proper labelling of meat products is important to support fair trade and to enable consumers to make informed choices. However, it has been shown that labelling of species, expressed as weight/weight (w/w), on meat product labels was incorrect in more than 20% of cases. Enforcement of labelling regulations requires reliable analytical methods. Analytical methods are often based on protein or DNA measurements, which are not directly comparable to labelled meat expressed as w/w. This review discusses a wide range of analytical methods with focus on their ability to quantify and their limits of detection (LOD). In particular, problems associated with correlating quantitative DNA-based results to meat content (w/w) are discussed. The hope is to make researchers aware of the problems of expressing DNA results as meat content (w/w) in order to find better alternatives. One alternative is to express DNA results as genome/genome equivalents. PMID:20416768

Ballin, Nicolai Z; Vogensen, Finn K; Karlsson, Anders H

2009-10-01

242

Approach to quantify human dermal skin aging using multiphoton laser scanning microscopy  

NASA Astrophysics Data System (ADS)

Extracellular skin structures in human skin are impaired during intrinsic and extrinsic aging. Assessment of these dermal changes is conducted by subjective clinical evaluation and histological and molecular analysis. We aimed to develop a new parameter for the noninvasive quantitative determination of dermal skin alterations utilizing the high-resolution three-dimensional multiphoton laser scanning microscopy (MPLSM) technique. To quantify structural differences between chronically sun-exposed and sun-protected human skin, the respective collagen-specific second harmonic generation and the elastin-specific autofluorescence signals were recorded in young and elderly volunteers using the MPLSM technique. After image processing, the elastin-to-collagen ratio (ELCOR) was calculated. Results show that the ELCOR parameter of volar forearm skin significantly increases with age. For elderly volunteers, the ELCOR value calculated for the chronically sun-exposed temple area is significantly augmented compared to the sun-protected upper arm area. Based on the MPLSM technology, we introduce the ELCOR parameter as a new means to quantify accurately age-associated alterations in the extracellular matrix.
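The ELCOR parameter reduces two registered image channels to one number: total above-background elastin autofluorescence divided by total above-background collagen SHG. A minimal sketch; the flat background subtraction stands in for the paper's image-processing pipeline:

```python
import numpy as np

def elcor(elastin_af, collagen_shg, background=10.0):
    """Elastin-to-collagen ratio from two registered image channels:
    sum of above-background elastin autofluorescence over sum of
    above-background collagen SHG (simplified preprocessing)."""
    e = np.clip(np.asarray(elastin_af, float) - background, 0, None)
    c = np.clip(np.asarray(collagen_shg, float) - background, 0, None)
    return e.sum() / c.sum()

rng = np.random.default_rng(4)
el = rng.poisson(30, (64, 64))   # elastin channel (synthetic counts)
co = rng.poisson(80, (64, 64))   # collagen SHG channel
print(f"ELCOR = {elcor(el, co):.2f}")
```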

Puschmann, Stefan; Rahn, Christian-Dennis; Wenck, Horst; Gallinat, Stefan; Fischer, Frank

2012-03-01

243

A Synthetic Phased Array Surface Acoustic Wave Sensor for Quantifying Bolt Tension  

PubMed Central

In this paper, we report our findings on implementing a synthetic phased array surface acoustic wave sensor to quantify bolt tension. Maintaining proper bolt tension is important in many fields, such as ensuring the safe operation of civil infrastructure. Significant advantages of this relatively simple methodology are its capability to assess bolt tension without any contact with the bolt, thus enabling measurement at inaccessible locations; its capability to measure multiple bolts at a time; the fact that it requires no data collection during installation; and the absence of calibration requirements. We performed detailed experiments on a custom-built flexible bench-top experimental setup consisting of a 1018 steel plate of 12.7 mm (½ in) thickness, a 6.4 mm (¼ in) grade 8 bolt and a stainless steel washer with 19 mm (¾ in) external diameter. Our results indicate that this method is not only capable of clearly distinguishing properly bolted joints from loosened joints but is also capable of quantifying how loose the bolt actually is. We also conducted a detailed signal-to-noise ratio (SNR) analysis and showed that the SNR value for the entire bolt tension range was sufficient for image reconstruction.
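The "synthetic phased array" idea is delay-and-sum processing: traces recorded at several transducer positions are sampled at the geometric round-trip delay to each candidate image point and summed, so echoes from a real reflector add coherently. A one-dimensional toy sketch of the principle, with assumed wave speed and geometry:

```python
import numpy as np

C = 3000.0          # assumed SAW speed, m/s
FS = 10_000_000.0   # sample rate, Hz

# Element positions (m) along the plate and a reflector at 60 mm.
elems = np.array([0.000, 0.005, 0.010, 0.015])
target = 0.060

# Synthetic pulse-echo traces: a spike at the round-trip delay.
traces = np.zeros((elems.size, 4096))
for i, x in enumerate(elems):
    delay = 2.0 * abs(target - x) / C
    traces[i, int(round(delay * FS))] = 1.0

def das_image(traces, elems, points):
    """Delay-and-sum: coherently sum traces at the round-trip
    delay for each candidate image point."""
    img = np.zeros(points.size)
    for j, p in enumerate(points):
        for i, x in enumerate(elems):
            k = int(round(2.0 * abs(p - x) / C * FS))
            img[j] += traces[i, k]
    return img

points = np.linspace(0.04, 0.08, 81)
img = das_image(traces, elems, points)
print(f"peak at {points[img.argmax()] * 1000:.1f} mm")  # ~60.0 mm
```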

Martinez, Jairo; Sisman, Alper; Onen, Onursal; Velasquez, Dean; Guldiken, Rasim

2012-01-01

244

Quantifying the effect of environment stability on the transcription factor repertoire of marine microbes  

PubMed Central

Background: DNA-binding transcription factors (TFs) regulate cellular functions in prokaryotes, often in response to environmental stimuli. Thus, the environment exerts constant selective pressure on the TF gene content of microbial communities. Recently a study on marine Synechococcus strains detected differences in their genomic TF content related to environmental adaptation, but so far the effect of environmental parameters on the content of TFs in bacterial communities has not been systematically investigated. Results: We quantified the effect of environment stability on the transcription factor repertoire of marine pelagic microbes from the Global Ocean Sampling (GOS) metagenome using interpolated physico-chemical parameters and multivariate statistics. Thirty-five percent of the difference in relative TF abundances between samples could be explained by environment stability. Six percent was attributable to spatial distance but none to a combination of both spatial distance and stability. Some individual TFs showed a stronger relationship to environment stability and space than the total TF pool. Conclusions: Environmental stability appears to have a clearly detectable effect on TF gene content in bacterioplanktonic communities described by the GOS metagenome. Interpolated environmental parameters were shown to compare well to in situ measurements and were essential for quantifying the effect of the environment on the TF content. It is demonstrated that comprehensive and well-structured contextual data will strongly enhance our ability to interpret the functional potential of microbes from metagenomic data.

2011-01-01

245

Quantifying Mountain Block Recharge by Means of Catchment-Scale Storage-Discharge Relationships  

NASA Astrophysics Data System (ADS)

Despite the hydrologic significance of mountainous catchments in providing freshwater resources, especially in semi-arid regions, little is known about key hydrological processes in these systems, such as mountain block recharge (MBR). We developed an empirical approach, based on the storage sensitivity function introduced by Kirchner (2009), to derive storage-discharge relationships from stream flow analysis. We investigated the sensitivity of MBR estimates to uncertainty in the derivation of the catchment storage-discharge relations. We implemented this technique in a semi-arid mountainous catchment in southeastern Arizona, USA (the Marshall Gulch catchment in the Santa Catalina Mountains near Tucson), with two distinct rainy seasons, winter frontal storms and the summer monsoon, separated by prolonged dry periods. Developing the storage-discharge relation from baseflow data in the dry period allowed us to quantify the change in fractured-bedrock storage caused by MBR. The contribution of fractured bedrock to stream flow was confirmed using stable isotope data. Our results show that (1) incorporating scalable time steps to correct for stream flow measurement errors improves the model fit; (2) the quantile method is more suitable for stream flow data binning; (3) the choice of the regression model is more critical when the storage-discharge function is used to predict changes in bedrock storage beyond the maximum observed flow in the catchment; and (4) application of daily versus hourly flow did not affect the storage-discharge relationship. This methodology allowed quantifying MBR using stream flow recession analysis from within the mountain system.
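The recession analysis can be sketched in the spirit of Kirchner (2009): during rain-free recession dS/dt = -Q, so the storage sensitivity g(Q) = dQ/dS equals -(dQ/dt)/Q, and a power-law fit in log space characterises the storage-discharge relation. Synthetic data below:

```python
import numpy as np

# Synthetic hourly recession flows (mm/h) following dQ/dt = -a * Q**b.
a, b = 0.02, 1.5
q = [1.0]
for _ in range(500):
    q.append(q[-1] - a * q[-1] ** b)
q = np.array(q)

# Storage sensitivity g(Q) = dQ/dS = -(dQ/dt)/Q during recession,
# since dS/dt = -Q when rain and evaporation are negligible.
dq_dt = np.diff(q)
g = -dq_dt / q[:-1]

# A power-law fit in log space recovers g(Q) = a * Q**(b-1).
slope, intercept = np.polyfit(np.log(q[:-1]), np.log(g), 1)
print(f"exponent ~ {slope:.2f} (true {b - 1:.2f}), "
      f"coefficient ~ {np.exp(intercept):.3f} (true {a})")
```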

Ajami, H.; Troch, P. A.; Maddock, T.; Meixner, T.; Eastoe, C. J.

2009-12-01

246

Chimpanzees (Pan troglodytes) and bonobos (Pan paniscus) quantify split solid objects.  

PubMed

Recent research suggests that gorillas' and orangutans' object representations survive cohesion violations (e.g., a split of a solid object into two halves), but that their processing of quantities may be affected by them. We assessed chimpanzees' (Pan troglodytes) and bonobos' (Pan paniscus) reactions to various fission events in the same series of action tasks modelled after infant studies previously run on gorillas and orangutans (Cacchione and Call in Cognition 116:193-203, 2010b). Results showed that all four non-human great ape species managed to quantify split objects but that their performance varied as a function of the non-cohesiveness produced in the splitting event. Spatial ambiguity and shape invariance had the greatest impact on apes' ability to represent and quantify objects. Further, we observed species differences with gorillas performing lower than other species. Finally, we detected a substantial age effect, with ape infants below 6 years of age being outperformed by both juvenile/adolescent and adult apes. PMID:22875724

Cacchione, Trix; Hrubesch, Christine; Call, Josep

2013-01-01

247

Quantifying Riverscape Connectivity with Graph Theory  

NASA Astrophysics Data System (ADS)

Fluvial catchments convey fluxes of water, sediment, nutrients and aquatic biota. At continental scales, crustal topography defines the overall path of channels, whilst at local scales depositional and/or erosional features generally determine the exact path of a channel. Furthermore, constructions such as dams, for either water abstraction or hydropower, often have a significant impact on channel networks. The concept of 'connectivity' is commonly invoked when conceptualising the structure of a river network. This concept is easy to grasp, but there have been uneven efforts across the environmental sciences to actually quantify connectivity. Currently there have only been a few studies reporting quantitative indices of connectivity in river sciences, notably in the study of avulsion processes. However, the majority of current work describing some form of environmental connectivity in a quantitative manner is in the field of landscape ecology. Driven by the need to quantify habitat fragmentation, landscape ecologists have returned to graph theory. Within this formal setting, landscape ecologists have successfully developed a range of indices which can model connectivity loss. Such formal connectivity metrics are currently needed for a range of applications in fluvial sciences. One of the most urgent needs relates to dam construction. In the developed world, hydropower development has generally slowed and in many countries dams are actually being removed. However, this is not the case in the developing world, where hydropower is seen as a key element of low-emissions power security. For example, several dam projects are envisaged in Himalayan catchments in the next two decades. This region is already under severe pressure from climate change and urbanisation, and a better understanding of the network fragmentation which can be expected in this system is urgently needed. In this paper, we apply and adapt connectivity metrics from landscape ecology. We then examine the connectivity structure of the Gangetic riverscape with fluvial remote sensing. Our study reach extends from the heavily dammed headwaters of the Bhagirathi, Mandakini and Alaknanda rivers, which form the source of the Ganga, to Allahabad ~900 km downstream on the main stem. We use Landsat-8 imagery as the baseline dataset. Channel width along the Ganga (i.e. Ganges) is often several kilometres. Therefore, the pan-sharpened 15 m pixels of Landsat-8 are in fact capable of resolving inner channel features for over 80% of the channel length, thus allowing a riverscape approach to be adopted. We examine the following connectivity metrics: the size distribution of connected components, betweenness centrality and the integrated index of connectivity. A geographic perspective is added by mapping local (25 km-scale) values for these metrics in order to examine spatial patterns of connectivity. This approach allows us to map the impacts of dam construction and has the potential to inform policy decisions in the area as well as open up new avenues of investigation.
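The graph machinery itself is compact: with confluences as nodes and reaches as edges, dams remove edges, and component sizes plus betweenness centrality quantify the fragmentation. A small sketch on a made-up dendritic network:

```python
import networkx as nx

# Toy dendritic network: nodes are confluences, edges are reaches.
G = nx.Graph([("src1", "c1"), ("src2", "c1"), ("c1", "c2"),
              ("src3", "c2"), ("c2", "c3"), ("c3", "outlet")])

def fragment_sizes(graph, dammed_reaches):
    """Connected-component sizes after removing dammed reaches."""
    H = graph.copy()
    H.remove_edges_from(dammed_reaches)
    return sorted((len(c) for c in nx.connected_components(H)),
                  reverse=True)

print("betweenness:", nx.betweenness_centrality(G))
print("before:", fragment_sizes(G, []))              # [7]
print("after :", fragment_sizes(G, [("c1", "c2")]))  # [4, 3]
```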

Carbonneau, P.; Milledge, D.; Sinha, R.; Tandon, S. K.

2013-12-01

248

Quantifying Collective Attention from Tweet Stream  

PubMed Central

Online social media are increasingly facilitating our social interactions, thereby making available a massive “digital fossil” of human behavior. Discovering and quantifying distinct patterns using these data is important for studying social behavior, although the rapid time-variant nature and large volumes of these data make this task difficult and challenging. In this study, we focused on the emergence of “collective attention” on Twitter, a popular social networking service. We propose a simple method for detecting and measuring the collective attention evoked by various types of events. This method exploits the fact that tweeting activity exhibits a burst-like increase and an irregular oscillation when a particular real-world event occurs; otherwise, it follows regular circadian rhythms. The difference between regular and irregular states in the tweet stream was measured using the Jensen-Shannon divergence, which corresponds to the intensity of collective attention. We then associated irregular incidents with their corresponding events that attracted the attention and elicited responses from large numbers of people, based on the popularity and the enhancement of key terms in posted messages or “tweets.” Next, we demonstrate the effectiveness of this method using a large dataset that contained approximately 490 million Japanese tweets by over 400,000 users, in which we identified 60 cases of collective attentions, including one related to the Tohoku-oki earthquake. “Retweet” networks were also investigated to understand collective attention in terms of social interactions. This simple method provides a retrospective summary of collective attention, thereby contributing to the fundamental understanding of social behavior in the digital era.
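The core measurement is the Jensen-Shannon divergence between an observed activity distribution and a regular circadian baseline. A self-contained sketch on hourly counts:

```python
import numpy as np

def jsd(p, q):
    """Jensen-Shannon divergence (bits) between two distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float((a[mask] * np.log2(a[mask] / b[mask])).sum())
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hourly tweet counts: circadian baseline vs. a bursty event day.
baseline = np.array([2, 1, 1, 1, 2, 4, 7, 9, 10, 11, 12, 12,
                     12, 12, 11, 11, 12, 13, 14, 15, 14, 10, 6, 3])
event = baseline.copy()
event[14:18] += 60   # afternoon burst of attention
print(f"JSD = {jsd(baseline, event):.3f} bits")
```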

Sasahara, Kazutoshi; Hirata, Yoshito; Toyoda, Masashi; Kitsuregawa, Masaru; Aihara, Kazuyuki

2013-01-01

249

A new model for quantifying climate episodes  

NASA Astrophysics Data System (ADS)

When long records of climate (precipitation, temperature, stream runoff, etc.) are available, either from instrumental observations or from proxy records, the objective evaluation and comparison of climatic episodes becomes necessary. Such episodes can be quantified in terms of duration (the number of time intervals, e.g. years, the process remains continuously above or below a reference level) and magnitude (the sum of all series values for a given duration). The joint distribution of duration and magnitude is represented here by a stochastic model called BEG, for bivariate distribution with exponential and geometric marginals. The model is based on the theory of random sums, and its mathematical derivation confirms and extends previous empirical findings. Probability statements that can be obtained from the model are illustrated by applying it to a 2300-year dendroclimatic reconstruction of water-year precipitation for the eastern Sierra Nevada-western Great Basin. Using the Dust Bowl drought period as an example, the chance of a longer or greater drought is 8%. Conditional probabilities are much higher, i.e. a drought of that magnitude has a 62% chance of lasting for 11 years or longer, and a drought that lasts 11 years has a 46% chance of having an equal or greater magnitude. In addition, because of the bivariate model, we can estimate a 6% chance of witnessing a drought that is both longer and greater. Additional examples of model application are also provided. This type of information provides a way to place any climatic episode in a temporal perspective, and such numerical statements help with reaching science-based management and policy decisions.
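Duration and magnitude as defined here are simple to extract from a series: find maximal runs below (or above) the reference level; duration is the run length and magnitude the summed departure. A sketch:

```python
import numpy as np

def episodes(x, ref, below=True):
    """Maximal runs of x below (or above) ref; returns a list of
    (duration, magnitude) pairs, magnitude = summed departure."""
    x = np.asarray(x, float)
    dev = (ref - x) if below else (x - ref)
    out, run = [], []
    for d in dev:
        if d > 0:
            run.append(d)
        elif run:
            out.append((len(run), sum(run)))
            run = []
    if run:
        out.append((len(run), sum(run)))
    return out

# Water-year precipitation around a long-term mean of 30 (toy values).
series = [32, 28, 25, 27, 31, 29, 26, 24, 28, 33]
print(episodes(series, ref=30))  # [(3, 10.0), (4, 13.0)]
```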

Biondi, Franco; Kozubowski, Tomasz J.; Panorska, Anna K.

2005-07-01

250

Quantifying Missing Heritability at Known GWAS Loci  

PubMed Central

Recent work has shown that much of the missing heritability of complex traits can be resolved by estimates of heritability explained by all genotyped SNPs. However, it is currently unknown how much heritability is missing due to poor tagging or additional causal variants at known GWAS loci. Here, we use variance components to quantify the heritability explained by all SNPs at known GWAS loci in nine diseases from WTCCC1 and WTCCC2. After accounting for expectation, we observed all SNPs at known GWAS loci to explain more heritability than GWAS-associated SNPs on average (). For some diseases, this increase was individually significant: for Multiple Sclerosis (MS) () and for Crohn's Disease (CD) (); all analyses of autoimmune diseases excluded the well-studied MHC region. Additionally, we found that GWAS loci from other related traits also explained significant heritability. The union of all autoimmune disease loci explained more MS heritability than known MS SNPs () and more CD heritability than known CD SNPs (), with an analogous increase for all autoimmune diseases analyzed. We also observed significant increases in an analysis of Rheumatoid Arthritis (RA) samples typed on ImmunoChip, with more heritability from all SNPs at GWAS loci () and more heritability from all autoimmune disease loci () compared to known RA SNPs (including those identified in this cohort). Our methods adjust for LD between SNPs, which can bias standard estimates of heritability from SNPs even if all causal variants are typed. By comparing adjusted estimates, we hypothesize that the genome-wide distribution of causal variants is enriched for low-frequency alleles, but that causal variants at known GWAS loci are skewed towards common alleles. These findings have important ramifications for fine-mapping study design and our understanding of complex disease architecture.
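
The paper's estimates come from variance-components models fitted to case-control data; as a rough illustration of the underlying idea only, the sketch below simulates a polygenic quantitative trait, builds a genetic relationship matrix (GRM) from standardized genotypes, and recovers heritability with Haseman-Elston regression, a simple moment-based relative of REML. All sample sizes and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, h2 = 1000, 500, 0.5            # individuals, SNPs, simulated heritability

# Simulate standardized genotypes and a polygenic phenotype.
freqs = rng.uniform(0.05, 0.5, m)
X = rng.binomial(2, freqs, (n, m)).astype(float)
X = (X - X.mean(0)) / X.std(0)
beta = rng.normal(0.0, np.sqrt(h2 / m), m)
y = X @ beta + rng.normal(0.0, np.sqrt(1.0 - h2), n)
y = (y - y.mean()) / y.std()

# GRM from standardized genotypes; under the polygenic model,
# E[y_i * y_j] = h2 * G_ij for i != j, so a regression through the
# origin of phenotype products on GRM entries estimates h2.
G = X @ X.T / m
iu = np.triu_indices(n, k=1)
g, yy = G[iu], np.outer(y, y)[iu]
print(f"Haseman-Elston h2 estimate: {(g @ yy) / (g @ g):.2f}")   # ~0.5
```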

Gusev, Alexander; Bhatia, Gaurav; Zaitlen, Noah; Vilhjalmsson, Bjarni J.; Diogo, Dorothee; Stahl, Eli A.; Gregersen, Peter K.; Worthington, Jane; Klareskog, Lars; Raychaudhuri, Soumya; Plenge, Robert M.; Pasaniuc, Bogdan; Price, Alkes L.

2013-01-01

251

Quantifying Relative Diver Effects in Underwater Visual Censuses  

PubMed Central

Diver-based Underwater Visual Censuses (UVCs), particularly transect-based surveys, are key tools in the study of coral reef fish ecology. These techniques, however, have inherent problems that make it difficult to collect accurate numerical data. One of these problems is the diver effect (defined as the reaction of fish to a diver). Although widely recognised, its effects have yet to be quantified and the extent of taxonomic variation remains to be determined. We therefore examined relative diver effects on a reef fish assemblage on the Great Barrier Reef. Using common UVC methods, the recorded abundances of seven reef fish groups were significantly affected by the ongoing presence of SCUBA divers. Overall, the diver effect resulted in a 52% decrease in the mean number of individuals recorded, with declines of up to 70% in individual families. Although the diver effect appears to be a significant problem, UVCs remain a useful approach for quantifying spatial and temporal variation in relative fish abundances, especially if using methods that minimise the exposure of fishes to divers. Fixed distance transects using tapes or lines deployed by a second diver (or GPS-calibrated timed swims) would appear to maximise fish counts and minimise diver effects.

Dickens, Luke C.; Goatley, Christopher H. R.; Tanner, Jennifer K.; Bellwood, David R.

2011-01-01

252

Quantifiable outcomes from corporate and higher education learning collaborations  

NASA Astrophysics Data System (ADS)

The study investigated the existence of measurable learning outcomes that emerged from the shared strengths of collaborating sponsors, and identified quantifiable learning outcomes that confirm corporate, academic, and learner participation in learning collaborations. Each of the three hypotheses and the synergy indicator quantitatively and qualitatively confirmed learning outcomes benefiting participants. The academic indicator confirmed that learning outcomes attract learners to the institution. The corporate indicator confirmed that learning outcomes include knowledge exchange and enhanced workforce talents for careers in the energy-utility industry. The learner indicator confirmed that learning outcomes provide professional development opportunities for employment. The synergy indicator confirmed that best practices in learning collaborations emanate from the sponsors' shared strengths, and that organizational processes can elevate partnerships to strategic alliances that go beyond responding to sponsors' desires and create learner-centered cultures. The study's series of qualitative questions confirmed prior success factors, while verifying the hypothesis results and providing insight not available from quantitative data. The direct beneficiaries of the study are the energy-utility learning-collaboration participants: the corporations, academic institutions, and learners of the collaboration. The indirect beneficiaries are the stakeholders of future learning collaborations, through improved knowledge of the existence or absence of quantifiable learning outcomes.

Devine, Thomas G.

253

A novel technique to quantify the instantaneous mitral regurgitant rate  

PubMed Central

Background: The systolic variation of mitral regurgitation (MR) is a pitfall in its quantification. Current recommendations advocate quantitative echocardiographic techniques that account for this systolic variation. While prior studies have qualitatively described patterns of systolic variation, none has quantified this variation. Methods: This study includes 41 patients who underwent cardiovascular magnetic resonance (CMR) evaluation for the assessment of MR. Systole was divided into 3 equal parts: early, mid, and late. The MR jets were categorized as holosystolic, early, or late based on the portions of systole during which the jet was visible. The aortic flow and left ventricular stroke volume (LVSV) acquired by CMR were plotted against time. The instantaneous regurgitant rate was calculated for each third of systole as the difference between the LVSV and the aortic flow. Results: The regurgitant rate varied widely, with a 1.9-fold, 3.4-fold, and 1.6-fold difference between the lowest and highest rates in patients with early, late, and holosystolic jets, respectively. There was overlap of peak regurgitant rates among patients with mild, moderate, and severe MR. The greatest variation of regurgitant rate was seen among patients with mild MR. Conclusion: CMR can quantify the systolic temporal variation of MR. There is significant variation of the mitral regurgitant rate even among patients with holosystolic MR jets. These findings highlight the need to use quantitative measures of MR severity that take into consideration the temporal variation of MR.
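
A minimal sketch of the stated calculation, using invented curves: the instantaneous regurgitant rate is the LV outflow rate (the negative time derivative of LV volume) minus the aortic flow rate, averaged over each third of systole. Neither the curves nor the values come from the paper.

```python
import numpy as np

# Hypothetical systolic curves: LV volume (mL) and aortic flow (mL/s).
t = np.linspace(0.0, 300.0, 31)                    # systole, ms
lv_volume = 150.0 - 90.0 * (t / 300.0) ** 1.2      # emptying left ventricle
aortic_flow = 250.0 * np.sin(np.pi * t / 300.0)    # forward aortic flow

lv_outflow = -np.gradient(lv_volume, t / 1000.0)   # mL/s, total LV output rate
regurg_rate = lv_outflow - aortic_flow             # mL/s, mitral regurgitant rate

for name, part in zip(("early", "mid", "late"), np.array_split(regurg_rate, 3)):
    print(f"{name} systole: mean regurgitant rate = {part.mean():6.1f} mL/s")
```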

2013-01-01

254

Phosphorus-based absolutely quantified standard peptides for quantitative proteomics.  

PubMed

An innovative method for the production of absolutely quantified peptide standards is described. These are named phosphorus-based absolutely quantified standard (PASTA) peptides. As the first step, synthetic phosphopeptides are calibrated via a hybrid LC-(ICP+ESI)-MS system. Quantification is achieved by ICP-MS detection of 31P, and identification is performed by ESI-MS. Generation of phosphopeptide standard solutions with this system is demonstrated to provide absolute concentrations with an accuracy better than 10%. The concept was extended to the production of peptide standards by subjecting a PASTA phosphopeptide to gentle and complete dephosphorylation to obtain the cognate PASTA peptide. It is demonstrated that both enzymatic hydrolysis (by alkaline or Antarctic phosphatase) and chemical hydrolysis (by hydrofluoric acid) can be employed for this purpose. Further, the introduction of one or more stable isotope-labeled amino acids (preferably labeled with 13C and 15N) results in a labeled PASTA peptide, which can then be employed as an internal standard for quantitative analysis by LC-ESI-MS. Using a 1:1 mixture of a stable isotope-labeled PASTA peptide/phosphopeptide pair as a dual standard, relative quantification of active and inactive recombinant MAP kinase p38alpha was performed by a combination of tryptic digestion and nanoLC-MS. PMID:19663461

Zinn, Nico; Hahn, Bettina; Pipkorn, Rüdiger; Schwarzer, Dominik; Lehmann, Wolf D

2009-10-01

255

Quantifying the information transmitted in a single stimulus.  

PubMed

Information theory, in particular mutual information, has been widely used to investigate neural processing in various brain areas. Shannon mutual information quantifies how much information is, on average, contained in a set of neural activities about a set of stimuli. To extend a similar approach to single-stimulus encoding, we need to introduce a quantity specific to a single stimulus. This quantity has been defined in the literature by four different measures, but none of them satisfies the same intuitive properties (non-negativity, additivity) that characterize mutual information. We present here a detailed analysis of the different meanings and properties of these four definitions. We show that all these measures satisfy, at least, a weaker additivity condition, i.e. one limited to the response set. This allows us to use them for analysing correlated coding, as we illustrate in a toy example from hippocampal place cells. PMID:17296260
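
To make the issue concrete, here is a small sketch, under an invented joint distribution, of two of the candidate single-stimulus quantities: the stimulus-specific surprise (a KL divergence, always non-negative) and the stimulus-specific information H(R) - H(R|s), which can be negative for particular stimuli. Both average to the Shannon mutual information.

```python
import numpy as np

def specific_info_measures(joint):
    """Two single-stimulus information measures from a joint p(s, r) table."""
    joint = joint / joint.sum()
    ps = joint.sum(1)                       # p(s)
    pr = joint.sum(0)                       # p(r)
    pr_s = joint / ps[:, None]              # p(r|s)
    with np.errstate(divide="ignore", invalid="ignore"):
        # Stimulus-specific surprise: KL(p(r|s) || p(r)), non-negative.
        i_ssi = np.nansum(pr_s * np.log2(pr_s / pr), axis=1)
        # Stimulus-specific information: H(R) - H(R|s), can be negative.
        h_r = -np.sum(pr * np.log2(pr))
        h_r_s = -np.nansum(pr_s * np.log2(pr_s), axis=1)
    return i_ssi, h_r - h_r_s, ps

# Toy joint distribution over 2 stimuli x 3 responses.
joint = np.array([[0.30, 0.10, 0.10],
                  [0.05, 0.05, 0.40]])
i_ssi, i_2, ps = specific_info_measures(joint)
print("per-stimulus values:", np.round(i_ssi, 3), np.round(i_2, 3))
# Both measures average (over stimuli) to the mutual information I(S;R):
print("averages:", round(ps @ i_ssi, 3), round(ps @ i_2, 3))
```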

Bezzi, Michele

2007-01-01

256

Quantifying Disorder through Conditional Entropy: An Application to Fluid Mixing  

PubMed Central

In this paper, we present a method to quantify the extent of disorder in a system by using conditional entropies. Our approach is especially useful when other global, or mean field, measures of disorder fail. The method is equally suited for both continuum and lattice models, and it can be made rigorous for the latter. We apply it to mixing and demixing in multicomponent fluid membranes, and show that it has advantages over previous measures based on Shannon entropies, such as a much diminished dependence on binning and the ability to capture local correlations. Further potential applications are very diverse, and could include the study of local and global order in fluid mixtures, liquid crystals, magnetic materials, and particularly biomolecular systems.
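
A minimal lattice sketch of the idea, with invented data: the conditional entropy of a site's label given its horizontal neighbour's label is close to 1 bit for a well-mixed binary lattice and close to 0 bits for a fully demixed one, exactly the kind of local signal a purely global composition measure would miss.

```python
import numpy as np

def neighbor_conditional_entropy(lattice):
    """H(neighbor | site) over horizontal neighbor pairs of a label lattice."""
    a = lattice[:, :-1].ravel()
    b = lattice[:, 1:].ravel()
    k = int(lattice.max()) + 1
    joint = np.zeros((k, k))
    np.add.at(joint, (a, b), 1.0)           # empirical pair counts
    joint /= joint.sum()
    p_site = joint.sum(1)
    nz = joint > 0
    h_joint = -np.sum(joint[nz] * np.log2(joint[nz]))
    h_site = -np.sum(p_site[p_site > 0] * np.log2(p_site[p_site > 0]))
    return h_joint - h_site                 # H(A,B) - H(A) = H(B|A)

rng = np.random.default_rng(2)
mixed = rng.integers(0, 2, (64, 64))        # fully mixed binary lattice
demixed = np.zeros((64, 64), dtype=int)
demixed[:, 32:] = 1                         # two separated phases
print(f"mixed:   H(B|A) = {neighbor_conditional_entropy(mixed):.3f} bits")   # ~1
print(f"demixed: H(B|A) = {neighbor_conditional_entropy(demixed):.3f} bits") # ~0
```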

Brandani, Giovanni B.; Schor, Marieke; MacPhee, Cait E.; Grubmuller, Helmut; Zachariae, Ulrich; Marenduzzo, Davide

2013-01-01

257

Quantifying disorder through conditional entropy: an application to fluid mixing.  

PubMed

In this paper, we present a method to quantify the extent of disorder in a system by using conditional entropies. Our approach is especially useful when other global, or mean field, measures of disorder fail. The method is equally suited for both continuum and lattice models, and it can be made rigorous for the latter. We apply it to mixing and demixing in multicomponent fluid membranes, and show that it has advantages over previous measures based on Shannon entropies, such as a much diminished dependence on binning and the ability to capture local correlations. Further potential applications are very diverse, and could include the study of local and global order in fluid mixtures, liquid crystals, magnetic materials, and particularly biomolecular systems. PMID:23762401

Brandani, Giovanni B; Schor, Marieke; Macphee, Cait E; Grubmüller, Helmut; Zachariae, Ulrich; Marenduzzo, Davide

2013-01-01

258

Molecular Marker Approach on Characterizing and Quantifying Charcoal in Environmental Media  

NASA Astrophysics Data System (ADS)

Black carbon (BC) is widely distributed in natural environments including soils, sediments, freshwater, seawater, and the atmosphere. It is produced mostly from the incomplete combustion of fossil fuels and vegetation. In recent years, increasing attention has been given to BC due to its potential influence on many biogeochemical processes. In the environment, BC exists as a continuum ranging from partly charred plant materials and charcoal residues to highly condensed soot and graphite particles. The heterogeneous nature of black carbon means that BC is always operationally defined, highlighting the need for standard methods that support data comparisons. Unlike soot and graphite, which can be quantified with well-established methods, charcoal is difficult to quantify directly in geologic media due to its chemical and physical heterogeneity. Most of the available charcoal quantification methods detect unknown fractions of the BC continuum. To specifically identify and quantify charcoal in soils and sediments, we adopted and validated an innovative molecular marker approach that quantifies levoglucosan, a pyrogenic derivative of cellulose, as a proxy of charcoal. Levoglucosan is source-specific, stable, and detectable at low concentrations using gas chromatography-mass spectrometry (GC-MS). In the present study, two different plant species, honey mesquite and cordgrass, were selected as the raw materials to synthesize charcoals. The lab-synthesized charcoals were made under controlled conditions to eliminate the high heterogeneity often found in natural charcoals. The effects of two major combustion factors, temperature and duration, on the yield of levoglucosan were characterized in the lab-synthesized charcoals. Our results showed that significant levoglucosan production in the two types of charcoal was restricted to relatively low combustion temperatures (150-350 °C). The combustion duration did not cause significant differences in the yield of levoglucosan in the two charcoals. Interestingly, the low-temperature charcoals are undetectable by the acid dichromate oxidation method, a popular soot/charcoal analytical approach. Our study demonstrates that levoglucosan can serve as a proxy of low-temperature charcoals that are undetectable using other BC methods. Moreover, our study highlights the limitations of the common BC quantification methods in characterizing the entire BC continuum.

Kuo, L.; Herbert, B. E.; Louchouarn, P.

2006-12-01

259

Quantifying distributed damage in composites via the thermoelastic effect  

SciTech Connect

A new approach toward quantifying transverse matrix cracking in composite laminates using the thermoelastic effect is developed. The thermoelastic effect refers to the small temperature changes that are generated in components under dynamic loading. Two models are derived, and the theoretical predictions are experimentally verified for three types of laminates. Both models include damage-induced changes in the lamina stress state, lamina coefficients of thermal expansion, conduction effects, and epoxy thickness. The first model relates changes in the laminate TSA signal to changes in longitudinal laminate stiffness and Poisson's ratio. This model is based on gross simplifying assumptions and can be used on any composite laminate layup undergoing transverse matrix cracking. The second model relates TSA signal changes to longitudinal laminate stiffness, Poisson's ratio, and microcrack density for (0_p/90_q)_s and (90_q/0_p)_s cross-ply laminates. Both models yield virtually identical results for the cross-ply laminates considered. A sensitivity analysis is performed on both models to quantify the effects of reasonable property variations on the normalized stiffness vs. normalized TSA signal results for the three laminates under consideration. The results for the cross-ply laminates are very insensitive, while the (±45)_5s laminates are particularly sensitive to epoxy thickness and longitudinal lamina coefficient of thermal expansion. Experiments are conducted on (0_3/90_3)_s and (90_3/0_3)_s Gl/Ep laminates and (±45)_5s Gr/Ep laminates to confirm the theoretical developments of the thesis. There is very good correlation between the theoretical predictions and experimental results for the Gl/Ep laminates.

Mahoney, B.J.

1992-01-01

260

Quantifying lithological variability in the mantle  

NASA Astrophysics Data System (ADS)

We present a method that can be used to estimate the amount of recycled material present in the source region of mid-ocean ridge basalts by combining three key constraints: (1) the melting behaviour of the lithologies identified to be present in a mantle source, (2) the overall volume of melt production, and (3) the proportion of melt production attributable to melting of each lithology. These constraints are unified in a three-lithology melting model containing lherzolite, pyroxenite and harzburgite, representative products of mantle differentiation, to quantify their abundance in igneous source regions. As a case study we apply this method to Iceland, a location with sufficient geochemical and geophysical data to meet the required observational constraints. We find that generating the 20 km of igneous crustal thickness at Iceland's coasts, with 30±10% of the crust produced from melting a pyroxenitic lithology, requires an excess mantle potential temperature (ΔTp) of ~130 °C (Tp ~1460 °C) and a source consisting of at least 5% recycled basalt. Therefore, the mantle beneath Iceland requires a significant excess temperature to match geophysical and geochemical observations: lithological variation alone cannot account for the high crustal thickness. Determining a unique source solution is only possible if mantle potential temperature is known precisely and independently; otherwise a family of possible lithology mixtures is obtained across the range of viable ΔTp. For Iceland this uncertainty in ΔTp means that the mantle could be >20% harzburgitic if ΔTp > 150 °C (Tp > 1480 °C). The consequences of lithological heterogeneity for plume dynamics in various geological contexts are also explored through thermodynamic modelling of the densities of lherzolite, basalt, and harzburgite mixtures in the mantle. All lithology solutions for Iceland are buoyant in the shallow mantle at the ΔTp for which they are valid; however, only lithology mixtures incorporating a significant harzburgite component are able to reproduce recent estimates of the Iceland plume's volume flux. Using literature estimates of the amount of recycled basalt in the sources of Hawaiian and Siberian volcanism, we find that they are negatively buoyant in the upper mantle, even at the extremes of their expected ΔTp. One solution to this problem is that low-density refractory harzburgite is a more ubiquitous component in mantle plumes than previously acknowledged.

Shorttle, Oliver; Maclennan, John; Lambart, Sarah

2014-06-01

261

Quantifying Permafrost Characteristics with DCR-ERT  

NASA Astrophysics Data System (ADS)

Geophysical methods are efficient tools for quantifying permafrost characteristics for Arctic road design and engineering. In the Alaskan Arctic, construction and maintenance of roads requires accounting for permafrost: ground that remains at or below 0 °C for two or more years. Features such as ice content and temperature are critical for understanding current and future ground conditions for planning, design, and evaluation of engineering applications. This study focused on the proposed Foothills West Transportation Access project corridor, where the purpose is to construct a new all-season road connecting the Dalton Highway to Umiat. Four major areas were chosen that represented a range of conditions including gravel bars, alluvial plains, tussock tundra (both unburned and burned conditions), high- and low-centered ice-wedge polygons, and an active thermokarst feature. Direct-current resistivity using galvanic contact (DCR-ERT) was applied over transects. Complementary site data, including boreholes, active-layer depths, vegetation descriptions, and site photographs, were obtained in conjunction. The boreholes provided information on soil morphology, ice texture, and gravimetric moisture content. Horizontal and vertical resolutions in the DCR-ERT were varied to determine the presence or absence of ground ice; subsurface heterogeneity; and the depth to groundwater (if present). The four main DCR-ERT configurations used were: 84 electrodes with 2 m spacing; 42 electrodes with 0.5 m spacing; 42 electrodes with 2 m spacing; and 84 electrodes with 1 m spacing. In terms of identifying the ground ice characteristics, the higher horizontal resolution DCR-ERT transects, with either 42 or 84 electrodes and 0.5 or 1 m spacing, were best able to differentiate wedge ice. This evaluation is based on a combination of both borehole stratigraphy and surface characteristics. Simulated apparent resistivity values for permafrost areas varied from a low of 4582 Ω·m to a high of 10034 Ω·m. Previous studies found permafrost conditions with corresponding resistivity values as low as 5000 Ω·m. This work emphasizes the necessity of tailoring the DCR-ERT survey to verified ground ice characteristics.

Schnabel, W.; Trochim, E.; Munk, J.; Kanevskiy, M. Z.; Shur, Y.; Fortier, R.

2012-12-01

262

Quantifying the impacts of global disasters  

NASA Astrophysics Data System (ADS)

The US Geological Survey, National Oceanic and Atmospheric Administration, California Geological Survey, and other entities are developing a Tsunami Scenario, depicting a realistic outcome of a hypothetical but plausible large tsunami originating in the eastern Aleutian Arc, affecting the west coast of the United States, including Alaska and Hawaii. The scenario includes earth-science effects, damage and restoration of the built environment, and social and economic impacts. Like the earlier ShakeOut and ARkStorm disaster scenarios, the purpose of the Tsunami Scenario is to apply science to quantify the impacts of natural disasters in a way that can be used by decision makers in the affected sectors to reduce the potential for loss. Most natural disasters are local. A major hurricane can destroy a city or damage a long swath of coastline while mostly sparing inland areas. The largest earthquake on record caused strong shaking along 1500 km of Chile, but left the capital relatively unscathed. Previous scenarios have used the local nature of disasters to focus interaction with the user community. However, the capacity for global disasters is growing with the interdependency of the global economy. Earthquakes have disrupted global computer chip manufacturing and caused stock market downturns. Tsunamis, however, can be global in their extent and direct impact. Moreover, the vulnerability of seaports to tsunami damage can increase the global consequences. The Tsunami Scenario is trying to capture the widespread effects while maintaining the close interaction with users that has been one of the most successful features of the previous scenarios. The scenario tsunami occurs in the eastern Aleutians with a source similar to the 2011 Tohoku event. Geologic similarities support the argument that a Tohoku-like source is plausible in Alaska. It creates a major nearfield tsunami in the Aleutian arc and peninsula, a moderate tsunami in the US Pacific Northwest, a large but not maximal tsunami in Hawaii, and the largest plausible tsunami in southern California. To support the analysis of global impacts, we begin with the Ports of Los Angeles and Long Beach, which account for >40% of the imports to the United States. We expand from there throughout California for the first-level economic analysis. We are looking to work with Alaska and Hawaii, especially on similar economic issues in ports, over the next year, and to expand the analysis to consideration of economic interactions between the regions.

Jones, L. M.; Ross, S.; Wilson, R. I.; Borrero, J. C.; Brosnan, D.; Bwarie, J. T.; Geist, E. L.; Hansen, R. A.; Johnson, L. A.; Kirby, S. H.; Long, K.; Lynett, P. J.; Miller, K. M.; Mortensen, C. E.; Perry, S. C.; Porter, K. A.; Real, C. R.; Ryan, K. J.; Thio, H. K.; Wein, A. M.; Whitmore, P.; Wood, N. J.

2012-12-01

263

Normalization of gas shows improves evaluation  

SciTech Connect

A normalization scheme has been developed that allows mud-log gas curves to be correlated with each other and with other well logs. The normalized mud logs may also be used to enhance formation and geopressure evaluation. The method, which requires relatively simple calculations and uses data already available in the mud logging unit, overcomes a major weakness of traditional mud logging methods: too many factors, other than reservoir content, affected gas show magnitude. As a result, mud log gas analyses could not be numerically integrated with other well logs. Often, even mud logs from nearby wells might not be reliably correlated with each other.

Whittaker, A.; Sellens, G.

1987-04-20

264

Dissociated neural correlates of quantity processing of quantifiers, numbers, and numerosities.  

PubMed

Quantities can be represented using either mathematical language (i.e., numbers) or natural language (i.e., quantifiers). Previous studies have shown that numerical processing elicits greater activation in the brain regions around the intraparietal sulcus (IPS) relative to other semantic processes. However, little research has been conducted to investigate whether the IPS is also critical for the semantic processing of quantifiers in natural language. In this study, 20 adults were scanned with functional magnetic resonance imaging while they performed semantic distance judgment involving six types of materials (i.e., frequency adverbs, quantity pronouns and nouns, animal names, Arabic digits, number words, and dot arrays). Conjunction analyses of brain activation showed that numbers and dot arrays elicited greater activation in the right IPS than did words (i.e., animal names) or quantifiers (i.e., frequency adverbs and quantity pronouns and nouns). Quantifiers elicited more activation in left middle temporal gyrus and inferior frontal gyrus than did numbers and dot arrays. No differences were found between quantifiers and animal names. These findings suggest that, although quantity processing for numbers and dot arrays typically relies on the right IPS region, quantity processing for quantifiers typically relies on brain regions for general semantic processing. Thus, the IPS does not appear to be the only brain region for quantity processing. PMID:23019128

Wei, Wei; Chen, Chuansheng; Yang, Tao; Zhang, Han; Zhou, Xinlin

2014-02-01

265

Radiative transfer modeling for quantifying lunar surface minerals, particle size, and submicroscopic metallic Fe  

NASA Astrophysics Data System (ADS)

The main objective of this work is to quantify lunar surface minerals (agglutinate, clinopyroxene, orthopyroxene, plagioclase, olivine, ilmenite, and volcanic glass), particle sizes, and the abundance of submicroscopic metallic Fe (SMFe) from the Lunar Soil Characterization Consortium (LSCC) data set with Hapke's radiative transfer theory. The model is implemented for both forward and inverse modeling. We implement Hapke's radiative transfer theory in the inverse mode in which, instead of commonly used look-up tables, Newton's method and least squares are jointly used to solve the nonlinear equations. Although the effects of temperature and surface roughness are incorporated into the implementation to improve the model performance for application to lunar spacecraft data, these effects cannot be extensively addressed in the current work because of the use of lab-measured reflectance data. Our forward radiative transfer model results show that the correlation coefficients between modeled and measured spectra are over 0.99. For the inverse model, the distribution of the particle sizes is all within their measured range. The range of modeled SMFe for highland samples is 0.01%-0.5%, and for mare samples it is 0.03%-1%. The linear trend between SMFe and ferromagnetic resonance (Is) for all the LSCC samples is consistent with laboratory measurements. For quantifying lunar mineral abundances, the results show that the R-squared values for the training samples (Is/FeO ≤ 65) are over 0.65, with plagioclase having the highest correlation (0.94) and pyroxene the lowest (0.68). In future work, the model needs to be improved for handling more mature lunar soil samples.

Li, Shuai; Li, Lin

2011-09-01

266

Quantifying complexity of the chaotic regime of a semiconductor laser subject to feedback via information theory measures  

NASA Astrophysics Data System (ADS)

The time evolution of the output of a semiconductor laser subject to optical feedback can exhibit high-dimensional chaotic fluctuations. In this contribution, our aim is to quantify the complexity of the chaotic time-trace generated by a semiconductor laser subject to delayed optical feedback. To that end, we discuss the properties of two recently introduced complexity measures based on information theory, namely the permutation entropy (PE) and the statistical complexity measure (SCM). The PE and SCM are defined as a functional of a symbolic probability distribution, evaluated using the Bandt-Pompe recipe to assign a probability distribution function to the time series generated by the chaotic system. In order to evaluate the performance of these novel complexity quantifiers, we compare them to a more standard chaos quantifier, namely the Kolmogorov-Sinai entropy. Here, we present numerical results showing that the statistical complexity and the permutation entropy, evaluated at the different time-scales involved in the chaotic regime of the laser subject to optical feedback, give valuable information about the complexity of the laser dynamics.
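
As a rough sketch of the Bandt-Pompe recipe, the code below estimates normalized permutation entropy from the ordinal patterns of a scalar series; a chaotic logistic-map orbit stands in for the laser intensity trace, and the embedding order and delay are illustrative choices.

```python
import math
import numpy as np

def permutation_entropy(x, order=4, delay=1):
    """Normalized Bandt-Pompe permutation entropy of a 1-D time series."""
    x = np.asarray(x)
    n = len(x) - (order - 1) * delay
    patterns = {}
    for i in range(n):
        window = x[i : i + order * delay : delay]
        key = tuple(np.argsort(window))       # ordinal pattern of the window
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), float) / n
    return -np.sum(p * np.log2(p)) / math.log2(math.factorial(order))

# Chaotic logistic map as a stand-in for the laser intensity time trace.
x = np.empty(5000)
x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

noise = np.random.default_rng(0).normal(size=5000)
print(f"PE(chaos) = {permutation_entropy(x):.3f}")      # high, but below 1
print(f"PE(noise) = {permutation_entropy(noise):.3f}")  # ~1 for white noise
```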

Soriano, Miguel C.; Zunino, Luciano; Rosso, Osvaldo A.; Mirasso, Claudio R.

2010-04-01

267

Using InSAR to Quantify Seasonal Fluctuations in Landslide Velocity, Eel River, Northern California  

NASA Astrophysics Data System (ADS)

Large, slow-moving, deep-seated landslides are hydrologically driven and respond to precipitation over seasonal time scales. Precipitation causes changes in pore pressure, which alters effective stress and landslide velocity. Here, we use InSAR to quantify changes in landslide velocity for 32 landslides between February 2007 and January 2011 in the Eel River catchment, northern California. We investigate relationships between lithology and landslide properties (including aspect ratio, planform area, depth) and landslide dynamics. The time series behavior of each landslide was calculated by performing an inversion of small-baseline interferograms. We produced 165 differential interferograms with a minimum satellite return interval of 46 days using ALOS PALSAR data from tracks 223 and 224 with the ROI_PAC processing package. Climatic data and geologic maps were provided by NOAA and the California State Geological Survey, respectively. For each landslide we analyzed the planform area, depth, slope, and drainage area using DEMs derived from LiDAR and SRTM data. To quantify the resolution of our time series methodology, we performed a sensitivity analysis using a synthetic data set to determine the minimum detectable temporal signal given the temporal distribution of interferograms. This analysis shows that the temporal sampling of the data set is sufficient to resolve a seasonal signal with a wavelength of ~1 year, which is consistent with the expected seasonal response time of these landslides. Preliminary results show that, holding lithology and climate constant, landslides move continuously through the year, accelerating well into the wet season and decelerating during the dry season with a lag time of weeks to months. The 32 identified landslides move at line-of-sight rates ranging from 0.1 m yr^-1 to 0.45 m yr^-1, and have dimensions ranging from 0.5 to 5 km long and 0.27 to 3 km wide. Each landslide has distinct kinematic zones (e.g. source, transport, toe) that exhibit different seasonal behaviors; the largest seasonal response occurs in the source and toe zones for the largest landslides. Landslide size (i.e. planform area) also appears to influence temporal changes in velocity, as smaller landslides respond to precipitation before larger landslides. Because landslide area scales with depth, this implies that the depth to the shear zone modulates landslide response time. Future work is aimed at quantifying relationships between topography (e.g. drainage area, slope, convergent vs. divergent drainage) and landslide velocity. We hypothesize that landslides with large upslope convergent zones receive high recharge. As a result, they experience shorter response times than landslides that lie within planar or divergent areas.
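
The time-series step, an inversion of a small-baseline interferogram network, reduces to least squares: each interferogram observes the displacement difference between its two acquisition dates. The sketch below uses an invented network, dates, and noise level, not the ALOS PALSAR data.

```python
import numpy as np

# Hypothetical acquisition dates (days) and small-baseline pairs (i, j), i < j.
dates = np.array([0, 46, 92, 138, 184, 230])
pairs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (3, 5), (4, 5)]

# Synthetic line-of-sight displacement (m): slow trend plus a seasonal term.
t = dates / 365.25
true_disp = 0.20 * t + 0.02 * np.sin(2.0 * np.pi * t)

rng = np.random.default_rng(3)
obs = np.array([true_disp[j] - true_disp[i] for i, j in pairs])
obs += rng.normal(0.0, 0.002, len(obs))            # interferogram noise

# Design matrix for displacement at dates 1..n (date 0 is the reference).
A = np.zeros((len(pairs), len(dates) - 1))
for k, (i, j) in enumerate(pairs):
    if i > 0:
        A[k, i - 1] = -1.0
    A[k, j - 1] = 1.0

est, *_ = np.linalg.lstsq(A, obs, rcond=None)
print("true:     ", np.round(true_disp[1:], 4))
print("estimated:", np.round(est, 4))
```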

Handwerger, A. L.; Roering, J. J.; Schmidt, D. A.

2011-12-01

268

Identifying and quantifying interactions in a laboratory swarm  

NASA Astrophysics Data System (ADS)

Emergent collective behavior, such as in flocks of birds or swarms of bees, is exhibited throughout the animal kingdom. Many models have been developed to describe swarming and flocking behavior using systems of self-propelled particles obeying simple rules or interacting via various potentials. However, due to experimental difficulties and constraints, little empirical data exists for characterizing the exact form of the biological interactions. We study laboratory swarms of flying Chironomus riparius midges, using stereoimaging and particle tracking techniques to record three-dimensional trajectories for all the individuals in the swarm. We describe methods to identify and quantify interactions by examining these trajectories, and report results on interaction magnitude, frequency, and mutuality.

Puckett, James G.; Kelley, Douglas H.; Ouellette, Nicholas T.

2013-03-01

269

Quantifying anthropogenic and natural contributions to thermosteric sea level rise  

NASA Astrophysics Data System (ADS)

Changes in thermosteric sea level at decadal and longer time scales respond to anthropogenic forcing and natural variability of the climate system. Disentangling these contributions is essential to quantify the impact of human activity in the past and to anticipate thermosteric sea level rise under global warming. Climate models, fed with radiative forcing, display a large spread of outputs with limited correspondence to the observationally based estimates of thermosteric sea level during the last decades of the twentieth century. Here we extract the common signal of climate models from the Coupled Model Intercomparison Project Phase 5 using a signal-to-noise maximizing empirical orthogonal function technique for the period 1950-2005. Our results match the observed trends, improving on the widely used approach of multimodel ensemble averaging. We then compute the fraction of the observed thermosteric sea level rise of anthropogenic origin and conclude that 87% of the observed trend in the upper 700 m since 1970 is induced by human activity.

Marcos, Marta; Amores, Angel

2014-04-01

270

Quantifying the Relationship Between Financial News and the Stock Market  

NASA Astrophysics Data System (ADS)

The complex behavior of financial markets emerges from decisions made by many traders. Here, we exploit a large corpus of daily print issues of the Financial Times from 2nd January 2007 until 31st December 2012 to quantify the relationship between decisions taken in financial markets and developments in financial news. We find a positive correlation between the daily number of mentions of a company in the Financial Times and the daily transaction volume of a company's stock both on the day before the news is released, and on the same day as the news is released. Our results provide quantitative support for the suggestion that movements in financial markets and movements in financial news are intrinsically interlinked.
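
The reported relationship is a same-day and lagged correlation between two daily series. A minimal pandas sketch follows; the DataFrame, its column names, and the numbers are hypothetical stand-ins for the mentions and trading-volume series, not the paper's data.

```python
import pandas as pd

# Hypothetical daily observations: "mentions" counts articles naming a
# company; "volume" is that company's trading volume (millions of shares).
df = pd.DataFrame({
    "mentions": [3, 1, 0, 5, 2, 8, 4, 1, 0, 6],
    "volume":   [2.1, 1.4, 0.9, 3.2, 1.8, 4.5, 2.9, 1.2, 1.0, 3.6],
})

same_day = df["mentions"].corr(df["volume"])
prior_day = df["mentions"].corr(df["volume"].shift(1))  # volume the day before
print(f"same-day r = {same_day:.2f}, prior-day r = {prior_day:.2f}")
```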

Alanyali, Merve; Moat, Helen Susannah; Preis, Tobias

2013-12-01

271

Quantifying the Behavior of Stock Correlations Under Market Stress  

NASA Astrophysics Data System (ADS)

Understanding correlations in complex systems is crucial in the face of turbulence, such as the ongoing financial crisis. However, in complex systems, such as financial systems, correlations are not constant but instead vary in time. Here we address the question of quantifying state-dependent correlations in stock markets. Reliable estimates of correlations are absolutely necessary to protect a portfolio. We analyze 72 years of daily closing prices of the 30 stocks forming the Dow Jones Industrial Average (DJIA). We find the striking result that the average correlation among these stocks scales linearly with market stress reflected by normalized DJIA index returns on various time scales. Consequently, the diversification effect which should protect a portfolio melts away in times of market losses, just when it would most urgently be needed. Our empirical analysis is consistent with the interesting possibility that one could anticipate diversification breakdowns, guiding the design of protected portfolios.
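
The measurement itself is simple to sketch: mean pairwise correlation of the 30 stocks in short windows, regressed against normalized index returns in those windows. The synthetic one-factor returns below reproduce the direction of the effect mechanically (high-variance market windows show higher average correlation); the linear scaling with losses reported above is an empirical finding about real DJIA data.

```python
import numpy as np

rng = np.random.default_rng(4)
n_days, n_stocks, window = 5000, 30, 22

# Synthetic one-factor daily returns: common market factor + idiosyncratic noise.
market = rng.standard_t(4, n_days) * 0.01
returns = market[:, None] + rng.normal(0.0, 0.015, (n_days, n_stocks))

index_ret, mean_corr = [], []
for s in range(0, n_days - window + 1, window):
    r = returns[s : s + window]
    c = np.corrcoef(r.T)
    mean_corr.append(c[np.triu_indices(n_stocks, k=1)].mean())
    index_ret.append(r.mean(axis=1).sum())       # window return of the index

z = (np.array(index_ret) - np.mean(index_ret)) / np.std(index_ret)
losses = z < 0                                   # stress = normalized losses
slope = np.polyfit(-z[losses], np.array(mean_corr)[losses], 1)[0]
print(f"mean correlation vs. market stress: slope = {slope:+.3f}")  # positive
```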

Preis, Tobias; Kenett, Dror Y.; Stanley, H. Eugene; Helbing, Dirk; Ben-Jacob, Eshel

2012-10-01

272

The Local Dimension: a method to quantify the Cosmic Web  

NASA Astrophysics Data System (ADS)

It is now well accepted that the galaxies are distributed in filaments, sheets and clusters, all of which form an interconnected network known as the Cosmic Web. It is a big challenge to quantify the shapes of the interconnected structural elements that form this network. Tools like the Minkowski functionals which use global properties, though well-suited for an isolated object like a single sheet or filament, are not suited for an interconnected network of such objects. We consider the Local Dimension D, defined through N(R) = A RD, where N(R) is the galaxy number count within a sphere of comoving radius R centred on a particular galaxy, as a tool to locally quantify the shape in the neighbourhood of different galaxies along the Cosmic Web. We expect D ~ 1, 2 and 3 for a galaxy located in a filament, sheet and cluster, respectively. Using LCDM N-body simulations, we find that it is possible to determine D through a power-law fit to N(R) across the length-scales 2 to 10Mpc for ~33 per cent of the galaxies. We have visually identified the filaments and sheets corresponding to many of the galaxies with D ~ 1 and 2, respectively. In several other situations, the structure responsible for the D value could not be visually identified, either due to it being tenuous or due to other dominating structures in the vicinity. We also show that the global distribution of the D values can be used to visualize and interpret how the different structural elements are woven into the Cosmic Web.
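
A minimal sketch of the estimator on mock point sets: count neighbours N(R) around a chosen galaxy over a range of R and fit D as the log-log slope. The synthetic "filament" and "cluster" below are invented stand-ins for structures in the simulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_dimension(points, center, radii):
    """Fit D in N(R) = A * R**D as the log-log slope of neighbour counts."""
    tree = cKDTree(points)
    counts = np.array([len(tree.query_ball_point(center, r)) for r in radii])
    D, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return D

rng = np.random.default_rng(5)
radii = np.linspace(2.0, 10.0, 9)     # the 2-10 Mpc fitting range used above

# Mock structures embedded in 3-D: a thin filament and a uniform ball cluster.
filament = np.column_stack([rng.uniform(-50, 50, 4000),
                            rng.normal(0, 0.5, 4000),
                            rng.normal(0, 0.5, 4000)])
directions = rng.normal(size=(4000, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
cluster = directions * 15.0 * rng.uniform(0, 1, (4000, 1)) ** (1 / 3)

print(f"filament: D = {local_dimension(filament, [0.0, 0.0, 0.0], radii):.2f}")  # ~1
print(f"cluster:  D = {local_dimension(cluster,  [0.0, 0.0, 0.0], radii):.2f}")  # ~3
```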

Sarkar, Prakash; Bharadwaj, Somnath

2009-03-01

273

Sodium borohydride/chloranil-based assay for quantifying total flavonoids.  

PubMed

A novel sodium borohydride/chloranil-based (SBC) assay for quantifying total flavonoids, including flavones, flavonols, flavonones, flavononols, isoflavonoids, flavanols, and anthocyanins, has been developed. Flavonoids with a 4-carbonyl group were reduced to flavanols using sodium borohydride catalyzed with aluminum chloride. Then the flavan-4-ols were oxidized to anthocyanins by chloranil in an acetic acid solution. The anthocyanins were reacted with vanillin in concentrated hydrochloric acid and then quantified spectrophotometrically at 490 nm. A representative of each common flavonoid class including flavones (baicalein), flavonols (quercetin), flavonones (hesperetin), flavononols (silibinin), isoflavonoids (biochanin A), and flavanols (catechin) showed excellent linear dose-responses in the general range of 0.1-10.0 mM. For most flavonoids, the detection limit was about 0.1 mM in this assay. The recoveries of quercetin from spiked samples of apples and red peppers were 96.5 +/- 1.4% (CV = 1.4%, n = 4) and 99.0 +/- 4.2% (CV = 4.2%, n = 4), respectively. The recovery of catechin from spiked samples of cranberry extracts was 97.9 +/- 2.0% (CV = 2.0%, n = 4). The total flavonoids of selected common fruits and vegetables were measured using this assay. Among the samples tested, blueberry had the highest total flavonoid content (689.5 +/- 10.7 mg of catechin equiv per 100 g of sample), followed by cranberry, apple, broccoli, and red pepper. This novel SBC total flavonoid assay can be widely used to measure the total flavonoid content of fruits, vegetables, whole grains, herbal products, dietary supplements, and nutraceutical products. PMID:18798633

He, Xiangjiu; Liu, Dong; Liu, Rui Hai

2008-10-22

274

Quantifying Local Radiation-Induced Lung Damage From Computed Tomography  

SciTech Connect

Purpose: Optimal implementation of new radiotherapy techniques requires accurate predictive models for normal tissue complications. Since clinically used dose distributions are nonuniform, local tissue damage needs to be measured and related to local tissue dose. In lung, radiation-induced damage results in density changes that have been measured by computed tomography (CT) imaging noninvasively, but not yet on a localized scale. Therefore, the aim of the present study was to develop a method for quantification of local radiation-induced lung tissue damage using CT. Methods and Materials: CT images of the thorax were made 8 and 26 weeks after irradiation of 100%, 75%, 50%, and 25% lung volume of rats. Local lung tissue structure (S_L) was quantified from the local mean and local standard deviation of the CT density in Hounsfield units in 1-mm^3 subvolumes. The relation of changes in S_L (ΔS_L) to histologic changes and breathing rate was investigated. Feasibility for clinical application was tested by applying the method to CT images of a patient with non-small-cell lung carcinoma and investigating the local dose-effect relationship of ΔS_L. Results: In rats, a clear dose-response relationship of ΔS_L was observed at different time points after radiation. Furthermore, ΔS_L correlated strongly with histologic endpoints (infiltrates and inflammatory cells) and breathing rate. In the patient, progressive local dose-dependent increases in ΔS_L were observed. Conclusion: We developed a method to quantify local radiation-induced tissue damage in the lung using CT. This method can be used in the development of more accurate predictive models for normal tissue complications.

Ghobadi, Ghazaleh; Hogeweg, Laurens E. [Department of Radiation Oncology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands); Faber, Hette [Department of Radiation Oncology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands); Department of Cell Biology, Section of Radiation and Stress Cell Biology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands); Tukker, Wim G.J. [Department of Radiology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands); Schippers, Jacobus M. [Department of Radiation Oncology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands); Accelerator Department, Paul Scherrer Institut, Villigen (Switzerland); Brandenburg, Sytze [Kernfysisch Versneller Instituut, Groningen (Netherlands); Langendijk, Johannes A. [Department of Radiation Oncology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands); Coppes, Robert P. [Department of Radiation Oncology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands); Department of Cell Biology, Section of Radiation and Stress Cell Biology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands); Luijk, Peter van, E-mail: p.van.luijk@rt.umcg.n [Department of Radiation Oncology, University Medical Center Groningen/University of Groningen, Groningen (Netherlands)

2010-02-01

275

Quantifying heat losses using aerial thermography  

Microsoft Academic Search

A theoretical model is described for calculating flat roof total heat losses and thermal conductances from aerial infrared data. Three empirical methods for estimating convective losses are described. The disagreement between the methods shows that they are prone to large (20%) errors, and that the survey should be carried out in low wind speeds, in order to minimize the effect

Gillian A. Haigh; Susan E. Pritchard

1980-01-01

276

Use of short half-life cosmogenic isotopes to quantify sediment mixing and transport in karst conduits  

NASA Astrophysics Data System (ADS)

Particulate inorganic carbon (PIC) transport and flux in karst aquifers is poorly understood. Methods to quantify PIC flux are needed in order to account for total inorganic carbon removal (chemical plus mechanical) from karst settings. Quantifying PIC flux will allow more accurate calculations of landscape denudation and global carbon sink processes. The study concentrates on the critical processes of the suspended sediment component of mass flux: surface soil/stored sediment mixing, transport rates and distance, and sediment storage times. The primary objective of the study is to describe transport and mixing with the resolution of single storm-flow events. To quantify the transport processes, short half-life cosmogenic isotopes are utilized. The isotopes 7Be (t1/2 = 53 d) and 210Pb (t1/2 = 22 y) are the primary isotopes measured, and other potential isotopes such as 137Cs and 241Am are investigated. The study location is at Mammoth Cave National Park within the Logsdon River watershed. The Logsdon River conduit is continuously traversable underground for two kilometers. Background levels and input concentrations of isotopes are determined from soil samples taken at random locations in the catchment area, and from suspended sediment collected from the primary sinking stream during a storm event. Suspended sediment was also collected from the downstream end of the conduit during the storm event. After the storm flow receded, fine sediment samples were taken from the cave stream at regular intervals to determine transport distances and mixing ratios along the conduit. Samples were analyzed with a Canberra Industries gamma-ray spectrometer and counted for 24 hours to increase detection of low radionuclide activities. The measured activity levels of radionuclides in the samples were adjusted for decay from the time of sampling using standard decay curves. The results of the study show that surface sediment mixing, transport, and storage in karst conduits is a dynamic but potentially quantifiable process at the storm-event scale.
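
The decay adjustment mentioned in the final step is a one-liner: scale the measured activity by exp(lambda * delay) with lambda = ln 2 / half-life. The numbers below are invented for illustration.

```python
import numpy as np

def decay_correct(activity_measured, half_life_days, delay_days):
    """Back-correct a measured activity to the time of sampling."""
    lam = np.log(2.0) / half_life_days          # decay constant, 1/day
    return activity_measured * np.exp(lam * delay_days)

# A hypothetical 7Be activity counted 10 days after sampling (arbitrary units):
print(f"{decay_correct(25.0, 53.0, 10.0):.1f}")  # ~28.5, i.e. ~14% higher
```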

Paylor, R.

2011-12-01

277

Land cover change and remote sensing: Examples of quantifying spatiotemporal dynamics in tropical forests  

SciTech Connect

Research on human impacts or natural processes that operate over broad geographic areas must explicitly address issues of scale and spatial heterogeneity. While the tropical forests of Southeast Asia and Mexico have been occupied and used to meet human needs for thousands of years, traditional forest management systems are currently being transformed by rapid and far-reaching demographic, political, economic, and environmental changes. The dynamics of population growth, migration into the remaining frontiers, and responses to national and international market forces result in a demand for land to produce food and fiber. These results illustrate some of the mechanisms that drive current land use changes, especially in the tropical forest frontiers. By linking the outcome of individual land use decisions and measures of landscape fragmentation and change, the aggregated results show the hierarchy of temporal and spatial events that in summation result in global changes to the most complex and sensitive biome: tropical forests. By quantifying the spatial and temporal patterns of tropical forest change, researchers can assist policy makers by showing how landscape systems in these tropical forests are controlled by physical, biological, social, and economic parameters.

Krummel, J.R.; Su, Haiping [Argonne National Lab., IL (United States); Fox, J. [East-West Center, Honolulu, HI (United States); Yarnasan, S.; Ekasingh, M. [Chiang Mai Univ. (Thailand)

1995-06-01

278

Lemurs and macaques show similar numerical sensitivity.  

PubMed

We investigated the precision of the approximate number system (ANS) in three lemur species (Lemur catta, Eulemur mongoz, and Eulemur macaco flavifrons), one Old World monkey species (Macaca mulatta) and humans (Homo sapiens). In Experiment 1, four individuals of each nonhuman primate species were trained to select the numerically larger of two visual arrays on a touchscreen. We estimated numerical acuity by modeling Weber fractions (w) and found quantitatively equivalent performance among all four nonhuman primate species. In Experiment 2, we tested adult humans in a similar procedure, and they outperformed the four nonhuman species but showed qualitatively similar performance. These results indicate that the ANS is conserved over the primate order. PMID:24068469

Jones, Sarah M; Pearson, John; DeWind, Nicholas K; Paulsen, David; Tenekedjieva, Ana-Maria; Brannon, Elizabeth M

2014-05-01

279

Quantifying invasion resistance: the use of recruitment functions to control for propagule pressure.  

PubMed

Invasive species distributions tend to be biased towards some habitats compared to others due to the combined effects of habitat-specific resistance to invasion and non-uniform propagule pressure. These two factors may also interact, with habitat resistance varying as a function of propagule supply rate. Recruitment experiments, in which the number of individuals recruiting into a population is measured under different propagule supply rates, can help us understand these interactions and quantify habitat resistance to invasion while controlling for variation in propagule supply rate. Here, we constructed recruitment functions for the invasive herb Hieracium lepidulum by sowing seeds at five different densities into six different habitat types in New Zealand's Southern Alps, repeated over two successive years, and monitored seedling recruitment and survival over a four-year period. We fitted recruitment functions that allowed us to estimate the total number of safe sites available for plants to occupy, which we used as a measure of invasion resistance, and tested several hypotheses concerning how invasion resistance differed among habitats and over time. We found significant differences in levels of H. lepidulum recruitment among habitats, which did not match the species' current distribution in the landscape. Local biotic and abiotic characteristics helped explain some of the between-habitat variation, with vascular plant species richness, vascular plant cover, and light availability all positively correlated with the number of safe sites for recruitment. Resistance also varied over time, however, with cohorts sown in successive years showing different levels of recruitment in some habitats but not others. These results show that recruitment functions can be used to quantify habitat resistance to invasion and to identify potential mechanisms of invasion resistance. PMID:24933811

Miller, Alice L; Diez, Jeffrey M; Sullivan, Jon J; Wangen, Steven R; Wiser, Susan K; Meffin, Ross; Duncan, Richard P

2014-04-01

280

Quantifying the Ease of Scientific Discovery  

PubMed Central

It has long been known that scientific output proceeds on an exponential increase, or more properly, a logistic growth curve. The interplay between effort and discovery is clear, and the nature of the functional form has been thought to be due to many changes in the scientific process over time. Here I show a quantitative method for examining the ease of scientific progress, another necessary component in understanding scientific discovery. Using examples from three different scientific disciplines – mammalian species, chemical elements, and minor planets – I find the ease of discovery to conform to an exponential decay. In addition, I show how the pace of scientific discovery can be best understood as the outcome of both scientific output and ease of discovery. A quantitative study of the ease of scientific discovery in the aggregate, such as done here, has the potential to provide a great deal of insight into both the nature of future discoveries and the technical processes behind discoveries in science.

Arbesman, Samuel

2012-01-01

281

Shortcuts to Quantifier Interpretation in Children and Adults  

ERIC Educational Resources Information Center

Errors involving universal quantification are common in contexts depicting sets of individuals in partial, one-to-one correspondence. In this article, we explore whether quantifier-spreading errors are more common with the distributive quantifiers "each" and "every" than with "all." In Experiments 1 and 2, 96 children (5- to 9-year-olds) viewed pairs of…

Brooks, Patricia J.; Sekerina, Irina

2006-01-01

282

Children's Knowledge of the Quantifier "Dou" in Mandarin Chinese  

ERIC Educational Resources Information Center

The quantifier "dou" (roughly corresponding to English "all") in Mandarin Chinese has been the topic of much discussion in the theoretical literature. This study investigated children's knowledge of this quantifier using a new methodological technique, which we dubbed the Question-Statement Task. Three questions were addressed: (i) whether young…

Zhou, Peng; Crain, Stephen

2011-01-01

283

Identifying and Quantifying Landscape Patterns in Space and Time  

Microsoft Academic Search

In landscape ecology, approaches to identify and quantify landscape patterns are well developed for discrete landscape representations. Discretisation is often seen as a form of generalisation and simplification. Landscape patterns however are shaped by complex dynamic processes acting at various spatial and temporal scales. Thus, standard landscape metrics that quantify static, discrete overall landscape pattern or individual patch properties may

Janine Bolliger; Helene H. Wagner; Monica G. Turner

284

Quantifying interpretability for motion imagery with applications to image compression  

Microsoft Academic Search

For still imagery, the national imagery interpretability rating scale (NIIRS) has served as a community standard for quantifying interpretability. No comparable scale exists for motion imagery. This paper summarizes a series of user evaluations to understand and quantify the effects of critical factors affecting the perceived interpretability of motion imagery. These evaluations provide the basis for relating perceived image interpretability

John M. Irvine; David M. Cannon; Steven A. Israel; Gary O'brien; Charles Fenimore; John Roberts; Ana Ivelisse Avilés

2008-01-01

285

Quantifier elimination for real closed fields by cylindrical algebraic decomposition  

Microsoft Academic Search

Tarski, in 1948 [18], published a quantifier elimination method for the elementary theory of real closed fields (which he had discovered in 1930). As noted by Tarski, any quantifier elimination method for this theory also provides a decision method, which enables one to decide whether any sentence of the theory is true or false. Since many important and difficult mathematical

George E. Collins

1975-01-01

286

QUANTIFYING DIFFUSIVE MASS TRANSFER IN FRACTURED SHALE BEDROCK  

EPA Science Inventory

A significant limitation in defining remediation needs at contaminated sites often results from an insufficient understanding of the transport processes that control contaminant migration. The objectives of this research were to help resolve this dilemma by providing an improved ...

287

Quantified LSD Effects on Ego Strength.  

National Technical Information Service (NTIS)

It was found, in support of the postulated nature of hallucination as an inadequate integration of new with stored information resulting in aberrant perception, that subclinical doses of LSD bring out a latent or accentuate an existing difficulty in resol...

Marrazzi, A. S.; Meisch, R. A.; Pew, W. L.; Bieter, T. G.

1966-01-01

288

Quantifying Drawdown in Complex Geologic Terrain with Theis Transforms  

NASA Astrophysics Data System (ADS)

Bulk hydraulic properties of aquifers and fault structures are most accurately quantified with multi-well aquifer tests. Detecting and quantifying pumping-induced drawdown at observation wells distant (>1 km) from the pumping well greatly expands the aquifer volume being investigated. Drawdown analyses at these greater distances, however, are often confounded because environmental water-level fluctuations typically mask the pumping signal. Environmental fluctuations (e.g. barometric and tidal potential) can be simulated and separated from the pumping signal with analytical water-level models if the period of pre-pumping data exceeds the period of drawdown and recovery to be analyzed. These circumstances occur infrequently, however, as a result of incomplete datasets and/or pervasive pumping or climatic trends. Pumping-induced changes can be differentiated reliably from environmental fluctuations in pumping-affected water-level records by simultaneously simulating pumping and environmental effects with analytical water-level models. Pumping effects are simulated with Theis transforms, which translate pumping schedules into water-level change by superimposing multiple Theis solutions. Environmental fluctuations are simulated by summing multiple time-series of barometric pressure change, tidal potential, and background water levels when available. Differences between simulated and measured water levels are minimized by adjusting the transmissivity and storage coefficient of pumping components and amplitude and phase of non-pumping components. Water levels simulated with the relatively simple Theis-transform method are capable of matching pumping signals generated with complex, three-dimensional numerical models. Pumping-induced changes simulated with Theis transforms and with a numerical model agreed (standard deviations as low as 0.003 m) in cases where pumping signals crossed confining units and fault structures and traveled distances of more than 1 km.
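
A Theis transform is superposition of Theis solutions, one per change in pumping rate, with the well function given by the exponential integral. The sketch below implements that superposition for a hypothetical aquifer and stepped schedule; all parameter values are invented, and no environmental components are included.

```python
import numpy as np
from scipy.special import exp1

def theis_drawdown(t, r, T, S, schedule):
    """Drawdown (m) at distance r (m) and times t (days) for a stepped
    pumping schedule [(t_start_days, Q_m3_per_day), ...] via superposition."""
    s = np.zeros_like(t)
    prev_q = 0.0
    for t_on, q in schedule:
        dq = q - prev_q                 # superpose each *change* in rate
        dt = t - t_on
        active = dt > 0
        u = r**2 * S / (4.0 * T * dt[active])
        s[active] += dq / (4.0 * np.pi * T) * exp1(u)   # W(u) = exp1(u)
        prev_q = q
    return s

# Hypothetical aquifer and schedule: pump on at t = 0, off at t = 30 days.
t = np.linspace(0.1, 60.0, 200)
s = theis_drawdown(t, r=1500.0, T=500.0, S=1e-4,
                   schedule=[(0.0, 2000.0), (30.0, 0.0)])
print(f"peak drawdown ~ {s.max():.2f} m at t = {t[s.argmax()]:.1f} d")
```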

Garcia, C.; Halford, K. J.; Fenelon, J. M.

2011-12-01

289

Quantifying the Ease of Scientific Discovery.  

PubMed

It has long been known that scientific output proceeds on an exponential increase, or more properly, a logistic growth curve. The interplay between effort and discovery is clear, and the nature of the functional form has been thought to be due to many changes in the scientific process over time. Here I show a quantitative method for examining the ease of scientific progress, another necessary component in understanding scientific discovery. Using examples from three different scientific disciplines (mammalian species, chemical elements, and minor planets), I find the ease of discovery to conform to an exponential decay. In addition, I show how the pace of scientific discovery can be best understood as the outcome of both scientific output and ease of discovery. A quantitative study of the ease of scientific discovery in the aggregate, such as done here, has the potential to provide a great deal of insight into both the nature of future discoveries and the technical processes behind discoveries in science. PMID:22328796
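
As a rough illustration of fitting an exponential decay to a discovery-ease proxy, the sketch below uses SciPy's curve_fit on invented, clearly hypothetical numbers; it is not the paper's data or code.

```python
import numpy as np
from scipy.optimize import curve_fit

def ease(t, a, k):
    return a * np.exp(-k * t)  # exponential decay of discovery ease

# Hypothetical proxy series (e.g., mean body mass of newly described mammals
# per period, as a stand-in for how "easy" the remaining discoveries are).
years = np.array([1760, 1780, 1800, 1820, 1840, 1860, 1880, 1900])
proxy = np.array([55.0, 38.0, 30.0, 21.0, 16.0, 11.0, 8.5, 6.0])

(a, k), _ = curve_fit(ease, years - years[0], proxy, p0=(50.0, 0.01))
halving_time = np.log(2) / k   # years for the ease proxy to halve
```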

Arbesman, Samuel

2011-02-01

290

Quantifying tissue mechanical properties using photoplethysmography  

PubMed Central

Photoplethysmography (PPG) is a non-invasive optical method that can be used to detect blood volume changes in the microvascular bed of tissue. The PPG signal comprises two components: a pulsatile waveform (AC) attributed to changes in the interrogated blood volume with each heartbeat, and a slowly varying baseline (DC) combining low frequency fluctuations mainly due to respiration and sympathetic nervous system activity. In this report, we investigate the AC pulsatile waveform of the PPG pulse for ultimate use in extracting information regarding the biomechanical properties of tissue and vasculature. By analyzing the rise time of the pulse in the diastole period, we show that PPG is capable of measuring changes in the Young’s modulus of tissue mimicking phantoms with a resolution of 4 kPa in the range of 12 to 61 kPa. In addition, the shape of the pulse can potentially be used to diagnose vascular complications by differentiating upstream from downstream complications. A Windkessel model was used to model changes in the biomechanical properties of the circulation and to test the proposed concept. The modeling data confirmed the response seen in vitro and showed the same trends in the PPG rise and fall times with changes in compliance and vascular resistance.
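
A two-element Windkessel is the simplest version of the model class mentioned here. The sketch below is a generic illustration under assumed, illustrative parameter values; it is not the authors' implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-element Windkessel: C * dP/dt = Q_in(t) - P / R
R = 1.0   # peripheral resistance (mmHg·s/mL) -- illustrative value
C = 1.5   # arterial compliance (mL/mmHg)     -- illustrative value

def q_in(t, hr=1.2, q0=300.0):
    """Pulsatile inflow: half-sine ejection during systole, zero in diastole."""
    phase = (t * hr) % 1.0
    return q0 * np.sin(np.pi * phase / 0.35) if phase < 0.35 else 0.0

def windkessel(t, p):
    return [(q_in(t) - p[0] / R) / C]

sol = solve_ivp(windkessel, (0.0, 10.0), [80.0], max_step=1e-3)
# Lowering C (stiffer vessels, higher Young's modulus) steepens the upstroke
# and shortens the pressure rise time, mirroring the trend reported for PPG.
```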

Akl, Tony J.; Wilson, Mark A.; Ericson, M. Nance; Cote, Gerard L.

2014-01-01

291

Quantifying transport properties by exchange matrix method  

NASA Astrophysics Data System (ADS)

The exchange matrix method is described for studying transport properties in chaotic geophysical flows. This study is important for application to problems of pollutant transport (such as petroleum patches) in tidal flows and others. In order to construct this special exchange matrix (first suggested by Spencer & Wiley) we use an approximation of such flows made by Zimmerman, who adopted the idea of chaotic advection, first put forward by Aref. Then, for a quantitative estimation of the transport properties, we explore a coarse-grained density description introduced by Gibbs and Welander. Such coarse-grained representations over an investigation area show a ``residence place'' for the pollutant material at any instant. The orbit expansion method, which exploits the assumption that the contributions of tidal and residual currents are of different orders (the tidal being much stronger), does not give answers in many real situations. The exchange matrix can show the transport of patches or particles from any place in the area under consideration to an arbitrary location in the tidal sea at any later time.
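
The basic bookkeeping of an exchange matrix can be illustrated directly: a row-stochastic matrix maps a coarse-grained density from one tidal period to the next. The matrix entries below are invented for illustration only.

```python
import numpy as np

# Row-stochastic exchange matrix E: E[i, j] is the fraction of material in
# cell i that is transferred to cell j over one tidal period.
def evolve(density, E, n_periods):
    """Coarse-grained density after n tidal periods."""
    d = np.asarray(density, dtype=float)
    for _ in range(n_periods):
        d = d @ E
    return d

# Toy 3-cell domain: cell 0 leaks into cell 1; cell 1 exchanges with both sides
E = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
patch = [1.0, 0.0, 0.0]            # pollutant initially confined to cell 0
print(evolve(patch, E, 10))        # "residence places" after 10 periods
```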

Krasnopolskaya, Tatyana; Meleshko, Vyacheslav

2005-11-01

292

Quantifying the impact of metamorphic reactions on strain localization in the mantle  

NASA Astrophysics Data System (ADS)

Metamorphic reactions are most often considered as a passive record of changes in pressure, temperature and fluid conditions that rocks experience. In that way, they provide key constraints on the tectonic evolution of the crust and the mantle. However, natural examples show that metamorphism can also modify the strength of rocks and affect strain localization in ductile shear zones. Hence, metamorphic reactions have an active role in tectonics by inducing softening and/or hardening depending on the involved reactions. Quantifying the mechanical effect of such metamorphic reactions is, therefore, a crucial task for determining both the strength distribution in the lithosphere and its evolution. However, the estimation of the effective strength of such polyphase rocks remains an open issue. Some flow laws (determined experimentally) already exist for monophase aggregates and polyphase rocks for rheologically important materials. They provide good constraints on lithology-controlled lithospheric strength variations. Unfortunately, since the whole range of mineralogical and chemical rock compositions cannot be experimentally tested, the variations of strength due to metamorphic reactions cannot be systematically and fully characterized. In order to tackle this issue, we here present the results of a study coupling thermodynamic and mechanical modeling that allows us to predict the mechanical impact of metamorphic reactions on the strength of the mantle. Thermodynamic modeling (using Theriak-Domino) is used for calculating the mineralogical composition of a typical peridotite as a function of pressure, temperature and water content. The calculated modes and flow law parameters for monophase aggregates are then used as inputs to the Minimized Power Geometric model for predicting the polyphase aggregate strength. Our results are then used to quantify the strength evolution of the mantle as a function of pressure, temperature and water content in two characteristic tectonic contexts by following the P-T evolutions undergone by the lithospheric mantle in both subduction zones and rifts. The mechanical consequences of metamorphic reactions at convergent and divergent plate boundaries are finally discussed.
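
As a minimal sketch of how monophase flow laws feed a polyphase strength estimate, the code below inverts a power-law creep law for stress and then combines monophase strengths with a simple mode-weighted geometric mean. The parameter values are illustrative, and the averaging is a crude stand-in for the Minimized Power Geometric model used in the study.

```python
import numpy as np

R_GAS = 8.314  # J/mol/K

def creep_strength(strain_rate, A, n, Q, T):
    """Stress (MPa) sustaining a given strain rate (1/s) for a power-law
    creep flow law: strain_rate = A * sigma^n * exp(-Q / (R*T))."""
    return (strain_rate / (A * np.exp(-Q / (R_GAS * T)))) ** (1.0 / n)

# Illustrative dry-olivine-like parameters (A in MPa^-n s^-1); these are
# placeholders, not the calibrated values used by the authors.
sigma_ol = creep_strength(1e-14, A=1.1e5, n=3.5, Q=530e3, T=1100 + 273.15)

def polyphase_strength(strengths, modes):
    """Mode-weighted geometric mean of monophase strengths: a crude
    stand-in for a proper polyphase mixing model."""
    w = np.asarray(modes) / np.sum(modes)
    return float(np.exp(np.sum(w * np.log(strengths))))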

Huet, Benjamin; Yamato, Philippe

2014-05-01

293

Quantifying Magnetic Reconnection in the Solar Corona  

Microsoft Academic Search

Magnetic reconnection is believed to play a role in many aspects of solar activity including flares, CMEs and quiet sun brightenings. The process itself is fundamentally a change in field line topology resulting from some non-ideal term in the generalized Ohm's law, such as collisional resistivity or electron inertia. Such non-ideal effects may or may not dissipate energy directly but do

D. W. Longcope

2005-01-01

294

Techniques to quantify TSH receptor antibodies  

Microsoft Academic Search

The presence of antibodies to the TSH receptor (TSHR) is the hallmark of Graves' disease (GD). These antibodies mimic the action of TSH, resulting in TSHR stimulation and hyperthyroidism, and have been associated with the extrathyroidal manifestations of GD. TSH binding inhibition assays and bioassays for measurement of TSHR antibody levels have been used for clinical and research purposes. In the former, inhibition

AP Weetman; RA Ajjan

2008-01-01

295

The Interpretation of Classically Quantified Sentences: A Set-Theoretic Approach  

ERIC Educational Resources Information Center

We present a set-theoretic model of the mental representation of classically quantified sentences (All P are Q, Some P are Q, Some P are not Q, and No P are Q). We take inclusion, exclusion, and their negations to be primitive concepts. We show that although these sentences are known to have a diagrammatic expression (in the form of the Gergonne…

Politzer, Guy; Van der Henst, Jean-Baptiste; Delle Luche, Claire; Noveck, Ira A.

2006-01-01

296

Quantifying hepatic shear modulus in vivo using acoustic radiation force.  

PubMed

The speed at which shear waves propagate in tissue can be used to quantify the shear modulus of the tissue. As many groups have shown, shear waves can be generated within tissues using focused, impulsive, acoustic radiation force excitations, and the resulting displacement response can be ultrasonically tracked through time. The goals of the work herein are twofold: (i) to develop and validate an algorithm to quantify shear wave speed from radiation force-induced, ultrasonically-detected displacement data that is robust in the presence of poor displacement signal-to-noise ratio and (ii) to apply this algorithm to in vivo datasets acquired in human volunteers to demonstrate the clinical feasibility of using this method to quantify the shear modulus of liver tissue in longitudinal studies. The ultimate clinical application of this work is noninvasive quantification of liver stiffness in the setting of fibrosis and steatosis. In the proposed algorithm, time-to-peak displacement data in response to impulsive acoustic radiation force outside the region of excitation are used to characterize the shear wave speed of a material, which is used to reconstruct the material's shear modulus. The algorithm is developed and validated using finite element method simulations. By using this algorithm on simulated displacement fields, reconstructions for materials with shear moduli (mu) ranging from 1.3-5 kPa are accurate to within 0.3 kPa, whereas stiffer shear moduli ranging from 10-16 kPa are accurate to within 1.0 kPa. Ultrasonically tracking the displacement data, which introduces jitter in the displacement estimates, does not impede the use of this algorithm to reconstruct accurate shear moduli. By using in vivo data acquired intercostally in 20 volunteers with body mass indices ranging from normal to obese, liver shear moduli have been reconstructed between 0.9 and 3.0 kPa, with an average precision of +/-0.4 kPa. These reconstructed liver moduli are consistent with those reported in the literature (mu = 0.75-2.5 kPa) with a similar precision (+/-0.3 kPa). Repeated intercostal liver shear modulus reconstructions were performed on nine different days in two volunteers over a 105-day period, yielding an average shear modulus of 1.9 +/- 0.50 kPa (1.3-2.5 kPa) in the first volunteer and 1.8 +/- 0.4 kPa (1.1-3.0 kPa) in the second volunteer. The simulation and in vivo data to date demonstrate that this method is capable of generating accurate and repeatable liver stiffness measurements and appears promising as a clinical tool for quantifying liver stiffness. PMID:18222031
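
The reconstruction step described here (wave speed from time-to-peak data, then mu = rho * c^2) can be sketched compactly. The numbers below are hypothetical, and the function is not the authors' algorithm, which includes robustness features for noisy, jittered displacement estimates.

```python
import numpy as np

def shear_modulus_from_ttp(lateral_mm, ttp_ms, density=1000.0):
    """Estimate shear modulus (Pa) from time-to-peak displacement data.

    The shear wave speed c is the slope of lateral position vs. arrival
    time (linear fit), and mu = rho * c^2 for a linear, isotropic,
    elastic solid."""
    x = np.asarray(lateral_mm) * 1e-3        # m
    t = np.asarray(ttp_ms) * 1e-3            # s
    c = np.polyfit(t, x, 1)[0]               # wave speed (m/s)
    return density * c**2

# Hypothetical time-to-peak profile: peaks arrive ~0.7 ms per mm -> c ~ 1.4 m/s
mu = shear_modulus_from_ttp([2, 3, 4, 5, 6], [1.4, 2.1, 2.8, 3.5, 4.2])
# mu ~ 2.0 kPa, within the in vivo liver range reported above
```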

Palmeri, M L; Wang, M H; Dahl, J J; Frinkley, K D; Nightingale, K R

2008-04-01

297

Complexity and approximability of quantified and stochastic constraint satisfaction problems  

SciTech Connect

Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S be an arbitrary finite set of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SAT_c(S)). Here, we simultaneously study the complexity of, and the existence of efficient approximation algorithms for, a number of variants of the problems SAT(S) and SAT_c(S), for many different D, C, and S. These problem variants include decision and optimization problems for formulas, quantified formulas, and stochastically-quantified formulas. We denote these problems by Q-SAT(S), MAX-Q-SAT(S), S-SAT(S), MAX-S-SAT(S), MAX-NSF-Q-SAT(S), and MAX-NSF-S-SAT(S). The main contribution is the development of a unified predictive theory for characterizing the complexity of these problems. Our unified approach is based on two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Let k ≥ 2, and let S be a finite set of finite-arity relations on Σ_k satisfying the following condition: all finite-arity relations on Σ_k can be represented as finite existentially-quantified conjunctions of relations in S applied to variables (to variables and constant symbols in C). Then we prove the following new results: (1) The problems SAT(S) and SAT_c(S) are both NQL-complete and ≤_logn^bw-complete for NP. (2) The problems Q-SAT(S) and Q-SAT_c(S) are PSPACE-complete. Letting k = 2, the problems S-SAT(S) and S-SAT_c(S) are PSPACE-complete. (3) There exists an ε > 0 for which approximating the problems MAX-Q-SAT(S) within ε times optimum is PSPACE-hard. Letting k = 2, there exists an ε > 0 for which approximating the problems MAX-S-SAT(S) within ε times optimum is PSPACE-hard. (4) For all ε > 0, the problems MAX-NSF-Q-SAT(S) and MAX-NSF-S-SAT(S) are PSPACE-hard to approximate within a factor of n^ε times optimum. These results significantly extend the earlier results of (i) Papadimitriou [Pa85] on the complexity of stochastic satisfiability and (ii) Condon, Feigenbaum, Lund, and Shor [CF+93, CF+94], by identifying natural classes of PSPACE-hard optimization problems with provably PSPACE-hard ε-approximation problems. Moreover, most of our results hold not just for Boolean relations; most previous results were obtained only in the context of Boolean domains. The results also constitute a significant step towards obtaining dichotomy theorems for the problems MAX-S-SAT(S) and MAX-Q-SAT(S): a research area of recent interest [CF+93, CF+94, Cr95, KSW97, LMP99].

Hunt, H. B. (Harry B.); Stearns, R. L.; Marathe, M. V. (Madhav V.)

2001-01-01

298

Children with Autism Show Reduced Somatosensory Response: An MEG Study  

PubMed Central

Lay Abstract Autism spectrum disorders are reported to affect nearly one out of every one hundred children, with over 90% of these children showing behavioral disturbances related to the processing of basic sensory information. Behavioral sensitivity to light touch, such as profound discomfort with clothing tags and physical contact, is a ubiquitous finding in children on the autism spectrum. In this study, we investigate the strength and timing of brain activity in response to simple, light taps to the fingertip. Our results suggest that children with autism show a diminished early response in the primary somatosensory cortex (S1). This finding is most evident in the left hemisphere. In exploratory analysis, we also show that tactile sensory behavior, as measured by the Sensory Profile, may be a better predictor of the intensity and timing of brain activity related to touch than a clinical autism diagnosis. We report that children with atypical tactile behavior have significantly lower amplitude somatosensory cortical responses in both hemispheres. Thus, sensory behavioral phenotype appears to be a more powerful strategy for investigating neural activity in this cohort. This study provides evidence for atypical brain activity during sensory processing in autistic children and suggests that our sensory-behavior-based methodology may be an important approach to investigating brain activity in people with autism and neurodevelopmental disorders. Scientific Abstract The neural underpinnings of sensory processing differences in autism remain poorly understood. This prospective magnetoencephalography (MEG) study investigates whether children with autism show atypical cortical activity in the primary somatosensory cortex (S1) in comparison to matched controls. Tactile stimuli were clearly detectable, painless taps applied to the distal phalanx of the second (D2) and third (D3) fingers of the right and left hands. Three tactile paradigms were administered: an oddball paradigm (standard taps to D3 at an inter-stimulus interval (ISI) of 0.33 s and deviant taps to D2 with ISI ranging from 1.32–1.64 s); a slow-rate paradigm (D2) with an ISI matching the deviant taps in the oddball paradigm; and a fast-rate paradigm (D2) with an ISI matching the standard taps in the oddball. Study subjects were boys (age 7–11 years) with and without autism disorder. Sensory behavior was quantified using the Sensory Profile questionnaire. Boys with autism exhibited smaller amplitude left hemisphere S1 response to slow and deviant stimuli during the right hand paradigms. In post-hoc analysis, tactile behavior directly correlated with the amplitude of cortical response. Consequently, the children were re-categorized by degree of parent-report tactile sensitivity. This regrouping created a more robust distinction between the groups with amplitude diminution in the left and right hemispheres and latency prolongation in the right hemisphere in the deviant and slow-rate paradigms for the affected children. This study suggests that children with autism have early differences in somatosensory processing, which likely influence later stages of cortical activity from integration to motor response.

Marco, Elysa J.; Khatibi, Kasra; Hill, Susanna S.; Siegel, Bryna; Arroyo, Monica S.; Dowling, Anne F.; Neuhaus, John M.; Sherr, Elliott H.; Hinkley, Leighton N. B.; Nagarajan, Srikantan S.

2012-01-01

299

Quantifying Irregularity in Pulsating Red Giants  

NASA Astrophysics Data System (ADS)

Hundreds of red giant variable stars are classified as “type L,” which the General Catalogue of Variable Stars (GCVS) defines as “slow irregular variables of late spectral type...which show no evidence of periodicity, or any periodicity present is very poorly defined....” Self-correlation (Percy and Muhammed 2004) is a simple form of time-series analysis which determines the cycle-to-cycle behavior of a star, averaged over all the available data. It is well suited for analyzing stars which are not strictly periodic. Even for non-periodic stars, it provides a “profile” of the variability, including the average “characteristic time” of variability. We have applied this method to twenty-three L-type variables which have been measured extensively by AAVSO visual observers. We find a continuous spectrum of behavior, from irregular to semiregular.
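
The self-correlation profile described above is straightforward to compute: take every pair of observations, bin the magnitude differences by time separation, and average each bin. A minimal sketch (assuming magnitudes mag observed at times t; not the authors' code):

```python
import numpy as np

def self_correlation(t, mag, n_bins=50):
    """Percy-style self-correlation: for every pair of observations, compute
    the time separation dt and magnitude difference |dm|, bin by dt, and
    average |dm| in each bin.  Minima of the profile near multiples of a
    characteristic time reveal (semi)periodicity; a flat profile suggests
    irregular variability."""
    t, mag = np.asarray(t, float), np.asarray(mag, float)
    i, j = np.triu_indices(len(t), k=1)
    dt = t[j] - t[i]
    dm = np.abs(mag[j] - mag[i])
    edges = np.linspace(0.0, dt.max(), n_bins + 1)
    idx = np.digitize(dt, edges) - 1
    profile = [dm[idx == b].mean() if np.any(idx == b) else np.nan
               for b in range(n_bins)]
    return 0.5 * (edges[:-1] + edges[1:]), np.array(profile)
```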

Percy, J. R.; Esteves, S.; Lin, A.; Menezes, C.; Wu, S.

2009-12-01

300

Quantifying non-Gaussianity for quantum information  

SciTech Connect

We address the quantification of non-Gaussianity (nG) of states and operations in continuous-variable systems and its use in quantum information. We start by illustrating in detail the properties and the relationships of two recently proposed measures of nG based on the Hilbert-Schmidt distance and the quantum relative entropy (QRE) between the state under examination and a reference Gaussian state. We then evaluate the non-Gaussianities of several families of non-Gaussian quantum states and show that the two measures have the same basic properties and also share the same qualitative behavior in most of the examples taken into account. However, we also show that they introduce a different relation of order; that is, they are not strictly monotone to each other. We exploit the nG measures for states in order to introduce a measure of nG for quantum operations, to assess Gaussification and de-Gaussification protocols, and to investigate in detail the role played by nG in entanglement-distillation protocols. In addition, we exploit the QRE-based nG measure to provide a different insight into the extremality of Gaussian states for some entropic quantities such as conditional entropy, mutual information, and the Holevo bound. We also deal with parameter estimation and present a theorem connecting the QRE nG to the quantum Fisher information. Finally, since evaluation of the QRE nG measure requires knowledge of the full density matrix, we derive some experimentally friendly lower bounds to nG for some classes of states, obtained by considering the possibility of performing on the states only certain efficient or inefficient measurements.

Genoni, Marco G.; Paris, Matteo G. A. [Consorzio Nazionale Interuniversitario per le Scienze Fisiche della Materia, UdR Milano, I-20133 Milano (Italy); Dipartimento di Fisica, Universita degli Studi di Milano, I-20133 Milano (Italy)

2010-11-15

301

A graph-theoretic method to quantify the airline route authority  

NASA Technical Reports Server (NTRS)

The paper introduces a graph-theoretic method to quantify the legal statements in a route certificate, which specifies an airline's routing restrictions. All the authorized nonstop and multistop routes, including the shortest-time routes, can be obtained, and the method suggests profitable route structure alternatives to airline analysts. This method of quantifying the C.A.B. route authority was programmed in a software package, Route Improvement Synthesis and Evaluation, and demonstrated in a case study with a commercial airline. The study showed the utility of this technique in suggesting route alternatives and the possibility of improvements in the U.S. route system.
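
With a modern graph library, the route-enumeration part of such a method reduces to standard path queries. The sketch below uses networkx on a hypothetical certificate; the original 1979 RISE package obviously worked differently.

```python
import networkx as nx

# Authorized route segments from a hypothetical route certificate,
# weighted by block time in minutes.
g = nx.DiGraph()
g.add_weighted_edges_from([
    ("DCA", "ORD", 125), ("ORD", "DEN", 150),
    ("DCA", "ATL", 105), ("ATL", "DEN", 180), ("DEN", "SFO", 155),
])

# All authorized multistop routings, and the shortest-time route
print(list(nx.all_simple_paths(g, "DCA", "SFO")))
print(nx.shortest_path(g, "DCA", "SFO", weight="weight"))
```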

Chan, Y.

1979-01-01

302

Flat Globe: Showing the Changing Seasons  

NSDL National Science Digital Library

SeaWiFS false color data showing seasonal change in the oceans and on land for the entire globe. The data is seasonally averaged, and shows the sequence: fall, winter, spring, summer, fall, winter, spring (for the Northern Hemisphere).

Allen, Jesse; Newcombe, Marte; Feldman, Gene

1998-09-09

303

The Franklin Institute's Traveling Science Shows  

NSDL National Science Digital Library

The Franklin Institute's team of science educators are available for shows on a variety of science topics. Traveling Science shows are aligned with National Science Education Standards, and focus on Physics, Biology and Chemistry.

Shows, Traveling S.

2004-04-05

304

A Learning Model of Trade Show Attendance  

Microsoft Academic Search

This study examines trade show attendees' show choice behavior from a learning perspective. The purpose is to create a model that can be useful in attracting attendees, either to a show or even to a specific booth. Reasons for attending were found to be three-dimensional: shopping, career development, and general industry awareness. These dimensions were then used to develop clusters,

John F. Tanner Jr; Lawrence B. Chonko; Thomas V. Ponzurick

2001-01-01

305

The Wonders of Physics Traveling Show  

NSDL National Science Digital Library

The Wonders of Physics is a live physics show designed to stimulate interest in science in people of all ages and backgrounds. The program's fast-paced presentation is supplemented by a variety of media tools. In addition, a smaller traveling show is based in Madison, WI, but performs all over the United States and Canada.

Physics, The W.

2004-06-02

306

Quantifying Ice Nucleation by Silver Iodide Aerosols.  

NASA Astrophysics Data System (ADS)

Laboratory studies of artificial ice nucleating aerosols used for weather modification by cloud seeding have generally been inadequate for describing their complex action in the varied temperature, pressure, humidity, and cloud conditions that can be encountered in the atmosphere. This study provides a quantitative framework for predicting ice formation by aerosol particles based on experiments which specifically target currently accepted mechanisms by which ice can form. A physical system for reproducing realistic atmospheric cloud conditions, the Colorado State University dynamic (controlled expansion) cloud chamber, is described. Physical simulations of adiabatic cloud formation and growth are demonstrated. Methodologies were also formulated to use the cloud chamber and other available instrumentation to isolate the action of ice nucleating aerosols by accepted primary ice nucleation modes. These methods were applied to the study of two chemically different silver iodide (AgI)-type aerosols, generated in the exact form in which they have been used for seeding natural clouds. The results were formulated as a function of basic thermodynamic quantities and particle size. An available one dimensional numerical cloud model with microphysical detail was adapted for the equivalent simulation of experiments performed in the cloud chamber. The model was utilized as a diagnostic tool to estimate water supersaturation in association with experiments and it was used for comparison of the predictions of new ice nucleus formulations with observations from generalized seeding simulations conducted in the cloud chamber. The nucleant and mode-specific formulations represent vast improvements compared to available formulations for "pure" AgI. The general implications of these new results were tested by using the model to simulate a few common seeding situations in the atmosphere, and the transferability of results was evaluated by modeling two actual seeding experiments conducted in summertime cumuli. Within the limitations of the cloud model used, agreement with the atmospheric results was very good. The results of this study should be most useful for designing standard and better methods for the quantitative study of ice nucleation by artificially generated and natural aerosols, and for evaluating cloud seeding methodologies and potential seeding effects using more complex microphysical-dynamic cloud models.

Demott, Paul Judson

307

Plant species descriptions show signs of disease.  

PubMed

It is well known that diseases can greatly influence the morphology of plants, but often the incidence of disease is either too rare or the symptoms too obvious for the 'abnormalities' to cause confusion in systematics. However, we have recently come across several misinterpretations of disease-induced traits that may have been perpetuated into modern species inventories. Anther-smut disease (caused by the fungus Microbotryum violaceum) is common in many members of the Caryophyllaceae and related plant families. This disease causes anthers of infected plants to be filled with dark-violet fungal spores rather than pollen. Otherwise, their vegetative morphology is within the normal range of healthy plants. Here, we present the results of a herbarium survey showing that a number of type specimens (on which the species name and original description are based) in the genus Silene from Asia are diseased with anther smut. The primary visible disease symptom, namely the dark-violet anthers, is incorporated into the original species descriptions and some of these descriptions have persisted unchanged into modern floras. This raises the question of whether diseased type specimens have erroneously been given unique species names. PMID:14667368

Hood, Michael E; Antonovics, Janis

2003-11-01

308

Quantified EEG in different G situations  

NASA Astrophysics Data System (ADS)

The electrical activity of the brain (EEG) has been recorded during parabolic flights in both trained astronauts and untrained volunteers. The Fast Fourier analysis of the EEG activity evidenced more asymmetry between the two brain hemispheres in the subjects who suffered from motion sickness than in the others. However, such an FFT classification does not lead to a discrimination between deterministic and stochastic events. Therefore, a first attempt was made to calculate the dimensionality of "chaotic attractors" in the EEG patterns as a function of the different g-epochs of one parabola. Very preliminary results are given here.
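
A standard way to estimate the dimensionality of a putative chaotic attractor from a scalar EEG trace is the Grassberger-Procaccia correlation dimension. The abstract does not specify which estimator was used, so the sketch below is one plausible, generic choice rather than the authors' method.

```python
import numpy as np

def correlation_dimension(x, m=4, tau=5, radii=None):
    """Grassberger-Procaccia estimate of the correlation dimension of a
    scalar series x, using a time-delay embedding of dimension m, lag tau."""
    x = np.asarray(x, float)
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])
    i, j = np.triu_indices(n, k=1)
    d = np.linalg.norm(emb[i] - emb[j], axis=1)   # all pairwise distances
    if radii is None:
        radii = np.logspace(np.log10(np.percentile(d, 1)),
                            np.log10(np.percentile(d, 50)), 12)
    c = np.array([np.mean(d < r) for r in radii])  # correlation integral C(r)
    slope = np.polyfit(np.log(radii), np.log(c), 1)[0]
    return slope   # scaling exponent ~ attractor dimension
```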

de Metz, K.; Quadens, O.; De Graeve, M.

309

Misleading estimates of forecast quality: quantifying skill with sequential forecasts  

NASA Astrophysics Data System (ADS)

Quantifying the skill in probability forecasts is complicated by the fact that the behaviour of many physical systems changes slowly over long timescales. This can lead to unreliable evaluation of the skill and the value of a forecast system. Demonstrating robust out-of-sample skill requires a sufficient sample of probability forecasts and corresponding verifications. Larger samples of evaluations result in more precise confidence intervals but what often appears to be overlooked (D. Wilks, QJRMS, 136(653), pp. 2109-2118, 2010) is the effect of serial correlation in a forecast/outcome archive on skill score statistics. Linear autocorrelation in a forecast time series can contribute to variance inflation in the sampling distribution, and thus result in falsely narrowed confidence intervals. The investigation is applied to the ignorance score in a broader range of systems where sample variance inflation effects have been observed. Sampling variances can sometimes be deflated in a linearly autocorrelated time series. In fact, it is not merely linear correlation but the lack of independence of consecutive forecasts which can lead to a misleading estimate of skill. This is demonstrated using a chaotic time series which lacks independence yet has no linear autocorrelation in the forecast/observation time series. These results support the need for sample size corrections to avoid overconfidence in forecast skill but also indicate that a forecast user should be aware of the implications of any serial correlation for statistical inference with skill scores.
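
As a concrete illustration, the ignorance score is simply the negative log probability assigned to the verified outcome, and a common AR(1)-based correction shows how serial correlation shrinks the effective sample size and widens the uncertainty of the mean score. Note the abstract's caveat that linear autocorrelation does not capture all forms of dependence; this sketch is only the simplest correction, not the authors' analysis.

```python
import numpy as np

def ignorance(p_outcome):
    """Ignorance score: -log2 of the probability assigned to the outcome."""
    return -np.log2(np.asarray(p_outcome, float))

def effective_sample_size(scores):
    """Crude AR(1) correction: lag-1 serial correlation rho in the score
    series reduces the effective number of independent evaluations."""
    s = np.asarray(scores, float) - np.mean(scores)
    rho = np.sum(s[:-1] * s[1:]) / np.sum(s * s)
    n = len(scores)
    return n * (1.0 - rho) / (1.0 + rho)

def mean_score_stderr(scores):
    """Standard error of the mean score using the corrected sample size."""
    return np.std(scores, ddof=1) / np.sqrt(effective_sample_size(scores))
```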

Jarman, A. S.; Smith, L. A.

2012-04-01

310

Quantifying heat losses using aerial thermography  

SciTech Connect

A theoretical model is described for calculating flat roof total heat losses and thermal conductances from aerial infrared data. Three empirical methods for estimating convective losses are described. The disagreement between the methods shows that they are prone to large (20%) errors, and that the survey should be carried out in low wind speeds, in order to minimize the effect of these errors on the calculation of total heat loss. The errors associated with knowledge of ground truth data are discussed for a high emissivity roof and three sets of environmental conditions. It is shown that the error in the net radiative loss is strongly dependent on the error in measuring the broad-band radiation incident on the roof. This is minimized for clear skies, but should be measured. Accurate knowledge of roof emissivity and the radiation reflected from the roof is shown to be less important. Simple techniques are described for measuring all three factors. Using these techniques in good conditions it should be possible to measure total heat losses to within 15%.

Haigh, G.A.; Pritchard, S.E.

1980-01-01

311

Quantifying the effects of melittin on liposomes.  

PubMed

Melittin, the soluble peptide of bee venom, has been demonstrated to induce lysis of phospholipid liposomes. We have investigated the dependence of the lytic activity of melittin on lipid composition. The lysis of liposomes, measured by following their mass and dimensions when immobilised on a solid substrate, was close to zero when the negatively charged lipids phosphatidyl glycerol or phosphatidyl serine were used as the phospholipid component of the liposome. Whilst there was significant binding of melittin to the liposomes, there was little net change in their diameter, with melittin binding reversed upon salt injection. For the zwitterionic phosphatidyl choline the lytic ability of melittin is dependent on the degree of acyl chain unsaturation, with melittin able to induce lysis of liposomes in the liquid crystalline state, whilst those in the gel state showed strong resistance to lysis. By directly measuring the dimensions and mass changes of liposomes on exposure to melittin using Dual Polarisation Interferometry, rather than following the fluorescence of entrapped dyes, we attained further information about the initial stages of melittin binding to liposomes. PMID:17092481

Popplewell, J F; Swann, M J; Freeman, N J; McDonnell, C; Ford, R C

2007-01-01

312

Quantifying selective linear erosion in Antarctica  

NASA Astrophysics Data System (ADS)

David Sugden (1978) coined the term 'selective linear erosion' to describe landscapes, characteristic of high-latitude glaciated areas, that are distinguished by deep glacially excavated troughs separated by low-relief upland surfaces that show no evidence of glacial erosion. Sugden (and later researchers) proposed that this landscape form owed its existence to the thermal distribution within polar ice sheets: ice at high elevations is thin, frozen to its bed, and therefore protects rather than erodes the landscape; thicker ice in topographic depressions can sustain basal melting with consequent erosion by hydraulic and thermodynamic processes. This contrast in basal thermal regime implies an extreme contrast in erosion rates, which amplifies preexisting relief and gives rise to landscapes of selective linear erosion. These landscapes are currently exposed in formerly glaciated high-latitude regions of the northern continents. They also exist beneath the Antarctic ice sheets, where presumably the processes responsible for their formation are currently active. Here we argue that understanding how and when these landscapes form is important to understanding how ice sheets mediate climate-landscape interactions. However, the facts that: i) the processes in question occur beneath the modern Antarctic ice sheet, and ii) currently unglaciated portions of glacier troughs in Arctic and Antarctic landscapes are nearly universally submerged, present several challenges to attaining this understanding. Here we summarize geochemical and geochronological means of addressing these challenges. These include: first, cosmogenic-nuclide measurements that establish the Plio-Pleistocene erosion history of high-elevation plateau surfaces; second, thermochronometric observations on debris shed by glaciers occupying major troughs that provide information about when and how fast these troughs formed.

Balco, G.; Shuster, D. L.

2012-12-01

313

Quantifying defects in zeolites and zeolite membranes  

NASA Astrophysics Data System (ADS)

Zeolites are crystalline aluminosilicates that are frequently used as catalysts to transform chemical feedstocks into more useful materials in a size- or shape-selective fashion; they are one of the earliest forms of nanotechnology. Zeolites can also be used, especially in the form of zeolite membranes (layers of zeolite on a support), to separate mixtures based on the size of the molecules. Recent advances have also created the possibility of using zeolites as alkaline catalysts, in addition to their traditional applications as acid catalysts and catalytic supports. Transport and catalysis in zeolites are greatly affected by physical and chemical defects. Such defects can be undesirable (in the case of zeolite membranes), or desirable (in the case of nitrogen-doped alkaline zeolites). Studying zeolites at the relevant length scales requires indirect experimental methods such as vapor adsorption or atomic-scale modeling such as electronic structure calculations. This dissertation explores both experimental and theoretical characterization of zeolites and zeolite membranes. Physical defects, important in membrane permeation, are studied using physical adsorption experiments and models of membrane transport. The results indicate that zeolite membranes can be modeled as a zeolite powder on top of a support---a "supported powder," so to speak---for the purposes of adsorption. Mesoporosity that might be expected based on permeation and confocal microscopy measurements is not observed. Chemical defects---substitutions of nitrogen for oxygen---are studied using quantum mechanical models that predict spectroscopic properties. These models provide a method for simulating the 29Si NMR spectra of nitrogendefected zeolites. They also demonstrate that nitrogen substitutes into the zeolite framework (not just on the surface) under the proper reaction conditions. The results of these studies will be valuable to experimentalists and theorists alike in our efforts to understand the versatile and complicated materials that are zeolites.

Hammond, Karl Daniel

314

A new HPLC-ELSD method to quantify indican in Polygonum tinctorium L. and to evaluate beta-glucosidase hydrolysis of indican for indigo production.  

PubMed

A method to quantify the indigo precursor indican (indoxyl-beta-D-glucoside) in Polygonum tinctorium L. has been developed. Plant material was extracted in deionized water, and indican was identified and quantified using high performance liquid chromatography (HPLC) coupled to an evaporative light scattering detector (ELSD). Results confirmed that with this method it is possible to measure indican content in a short time, obtaining reliable and reproducible data. Using this method, leaf indican content was quantified every 15 days during the growing season (from May to October) in P. tinctorium crops grown in a field experiment in Central Italy. Results showed that indican increased over the growing season until flowering and was positively affected by photosynthetically active radiation (PAR). Indican is naturally hydrolyzed by native beta-glucosidase to indoxyl and glucose, the indoxyl yielding indigo. The activities of two enzymes, sweet almond beta-glucosidase and the Novarom G preparation, were compared with P. tinctorium native beta-glucosidase to evaluate indigo production. Results showed that the ability to promote indigo formation increased as follows: almond beta-glucosidase

Angelini, Luciana G; Campeol, Elisabetta; Tozzi, Sabrina; Gilbert, Kerry G; Cooke, David T; John, Philip

2003-01-01

315

Quantifying Spatial Variability of Selected Soil Trace Elements and Their Scaling Relationships Using Multifractal Techniques  

PubMed Central

Multifractal techniques were utilized to quantify the spatial variability of selected soil trace elements and their scaling relationships in a 10.24-ha agricultural field in northeast China. 1024 soil samples were collected from the field and available Fe, Mn, Cu and Zn were measured in each sample. Descriptive results showed that Mn deficiencies were widespread throughout the field while Fe and Zn deficiencies tended to occur in patches. By estimating single multifractal spectra, we found that available Fe, Cu and Zn in the study soils exhibited high spatial variability and the existence of anomalies ([α(q)max − α(q)min] ≥ 0.54), whereas available Mn had a relatively uniform distribution ([α(q)max − α(q)min] ≤ 0.10). The joint multifractal spectra revealed that the strong positive relationships (r ≥ 0.86, P<0.001) among available Fe, Cu and Zn were all valid across a wider range of scales and over the full range of data values, whereas available Mn was weakly related to available Fe and Zn (r ≤ 0.18, P<0.01) but not related to available Cu (r = −0.03, P = 0.40). These results show that the variability and singularities of selected soil trace elements as well as their scaling relationships can be characterized by single and joint multifractal parameters. The findings presented in this study could be extended to predict selected soil trace elements at larger regional scales with the aid of geographic information systems.

Zhang, Fasheng; Yin, Guanghua; Wang, Zhenying; McLaughlin, Neil; Geng, Xiaoyuan; Liu, Zuoxin

2013-01-01

316

Quantifying spatial variability of selected soil trace elements and their scaling relationships using multifractal techniques.  

PubMed

Multifractal techniques were utilized to quantify the spatial variability of selected soil trace elements and their scaling relationships in a 10.24-ha agricultural field in northeast China. 1024 soil samples were collected from the field and available Fe, Mn, Cu and Zn were measured in each sample. Descriptive results showed that Mn deficiencies were widespread throughout the field while Fe and Zn deficiencies tended to occur in patches. By estimating single multifractal spectra, we found that available Fe, Cu and Zn in the study soils exhibited high spatial variability and the existence of anomalies ([α(q)max − α(q)min] ≥ 0.54), whereas available Mn had a relatively uniform distribution ([α(q)max − α(q)min] ≤ 0.10). The joint multifractal spectra revealed that the strong positive relationships (r ≥ 0.86, P<0.001) among available Fe, Cu and Zn were all valid across a wider range of scales and over the full range of data values, whereas available Mn was weakly related to available Fe and Zn (r ≤ 0.18, P<0.01) but not related to available Cu (r = −0.03, P = 0.40). These results show that the variability and singularities of selected soil trace elements as well as their scaling relationships can be characterized by single and joint multifractal parameters. The findings presented in this study could be extended to predict selected soil trace elements at larger regional scales with the aid of geographic information systems. PMID:23874944
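
The spectrum width [α(q)max − α(q)min] used above can be estimated with the standard moment method on a coarse-grained measure. The sketch below is a generic 1-D version for illustration, not the authors' (2-D, joint-spectrum) code.

```python
import numpy as np

def multifractal_width(p, qs=np.linspace(-5, 5, 21)):
    """Width alpha(q)max - alpha(q)min of the singularity spectrum of a
    1-D measure p (non-negative, summing to 1), via the tau(q) moment
    method: tau(q) is the scaling exponent of the partition function."""
    p = np.asarray(p, float)
    n = len(p)
    scales = [2**k for k in range(1, int(np.log2(n)))]
    tau = []
    for q in qs:
        logchi, logeps = [], []
        for s in scales:
            # coarse-grain the measure into boxes of size s
            box = p[: n - n % s].reshape(-1, s).sum(axis=1)
            box = box[box > 0]
            logchi.append(np.log(np.sum(box**q)))
            logeps.append(np.log(s / n))
        tau.append(np.polyfit(logeps, logchi, 1)[0])
    alpha = np.gradient(np.array(tau), qs)   # alpha(q) = d tau / dq
    return alpha.max() - alpha.min()
```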

Zhang, Fasheng; Yin, Guanghua; Wang, Zhenying; McLaughlin, Neil; Geng, Xiaoyuan; Liu, Zuoxin

2013-01-01

317

Solar System Odyssey - Fulldome Digital Planetarium Show  

NSDL National Science Digital Library

This is a Fulldome Digital Planetarium Show. Learners go on a futuristic journey through our Solar System. They explore the inner and outer planets, then the moons Titan, Europa, and Callisto as possible places to establish a human colony. A full-length preview of the show is available on the website; scroll down about three-quarters of the page to the section on children's shows (no direct link is available).

318

Quantifying the kinetic stability of hyperstable proteins via time-dependent SDS trapping.  

PubMed

Globular proteins are usually in equilibrium with unfolded conformations, whereas kinetically stable proteins (KSPs) are conformationally trapped by their high unfolding transition state energy. Kinetic stability (KS) could allow proteins to maintain their activity under harsh conditions, increase a protein's half-life, or protect against misfolding-aggregation. Here we show the development of a simple method for quantifying a protein's KS that involves incubating a protein in SDS at high temperature as a function of time, running the unheated samples on SDS-PAGE, and quantifying the bands to determine the time-dependent loss of a protein's SDS resistance. Six diverse proteins, including two monomers, two dimers, and two tetramers, were studied by this method, and the kinetics of the loss of SDS resistance correlated linearly with their unfolding rate determined by circular dichroism. These results imply that the mechanism by which SDS denatures proteins involves conformational trapping, with a trapping rate that is determined and limited by the rate of protein unfolding. We applied the SDS trapping of proteins (S-TraP) method to superoxide dismutase (SOD) and transthyretin (TTR), which are highly KSPs with native unfolding rates that are difficult to measure by conventional spectroscopic methods. A combination of S-TraP experiments between 75 and 90 °C combined with Eyring plot analysis yielded an unfolding half-life of 70 ± 37 and 18 ± 6 days at 37 °C for SOD and TTR, respectively. The S-TraP method shown here is extremely accessible, sample-efficient, cost-effective, compatible with impure or complex samples, and will be useful for exploring the biological and pathological roles of kinetic stability. PMID:22106876
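
The final extrapolation step (an Eyring plot of high-temperature unfolding rates, extended down to 37 °C) amounts to a linear fit of ln(k/T) against 1/T. The rate values below are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np

def extrapolate_half_life(T_celsius, k_unfold, T_target=37.0):
    """Fit the Eyring form ln(k/T) = a + b*(1/T) to high-temperature
    unfolding rates (1/s) and extrapolate the half-life (days) to a
    physiological temperature."""
    T = np.asarray(T_celsius, float) + 273.15
    b, a = np.polyfit(1.0 / T, np.log(np.asarray(k_unfold) / T), 1)
    Tt = T_target + 273.15
    k_t = Tt * np.exp(a + b / Tt)          # extrapolated rate (1/s)
    return np.log(2.0) / k_t / 86400.0     # half-life in days

# Hypothetical unfolding rates measured between 75 and 90 C (1/s)
t_half = extrapolate_half_life([75, 80, 85, 90],
                               [3e-4, 8e-4, 2.2e-3, 5.5e-3])
```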

Xia, Ke; Zhang, Songjie; Bathrick, Brendan; Liu, Shuangqi; Garcia, Yeidaliz; Colón, Wilfredo

2012-01-10

319

Quantifying metal ions binding onto dissolved organic matter using log-transformed absorbance spectra.  

PubMed

This study introduces the concept of consistent examination of changes of log-transformed absorbance spectra of dissolved organic matter (DOM) at incrementally increasing concentrations of heavy metal cations such as copper, cadmium, and aluminum at environmentally relevant concentrations. The approach is designed to highlight contributions of low-intensity absorbance features that appear to be especially sensitive to DOM reactions. In accord with this approach, log-transformed absorbance spectra of fractions of DOM from the Suwannee River were acquired at varying pHs and concentrations of copper, cadmium, and aluminum. These log-transformed spectra were processed using the differential approach and used to examine the nature of the observed changes of DOM absorbance and correlate them with the extent of Me-DOM complexation. Two alternative parameters, namely the change of the spectral slope in the range of wavelengths 325-375 nm (DSlope325-375) and the differential logarithm of DOM absorbance at 350 nm (DLnA350), were introduced to quantify Cu(II), Cd(II), and Al(III) binding onto DOMs. DLnA350 and DSlope325-375 datasets were compared with the amount of DOM-bound Cu(II), Cd(II), and Al(III) estimated based on NICA-Donnan model calculations. This examination showed that the DLnA350 and DSlope325-375 acquired at various pH values, metal ions concentrations, and DOM types were strongly and unambiguously correlated with the concentration of DOM-bound metal ions. The obtained experimental results and their interpretation indicate that the introduced DSlope325-375 and DLnA350 parameters are predictive of and can be used to quantify in situ metal ion interactions with DOMs. The presented approach can be used to gain more information about DOM-metal interactions and for further optimization of existing formal models of metal-DOM complexation. PMID:23490103
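
Both parameters are simple to compute from a pair of spectra (a metal-free reference versus a metal-titrated sample). A minimal sketch, assuming absorbance arrays on a common wavelength grid; the function names are hypothetical.

```python
import numpy as np

def dlna_350(a_ref, a_metal, wavelengths):
    """Differential logarithm of absorbance at 350 nm:
    DLnA350 = ln A_metal(350) - ln A_ref(350)."""
    i = np.argmin(np.abs(np.asarray(wavelengths, float) - 350.0))
    return np.log(a_metal[i]) - np.log(a_ref[i])

def dslope_325_375(a_ref, a_metal, wavelengths):
    """Change in the spectral slope of ln A(lambda) over 325-375 nm between
    the metal-titrated and metal-free spectra (linear fits in that window)."""
    w = np.asarray(wavelengths, float)
    sel = (w >= 325.0) & (w <= 375.0)
    slope = lambda a: np.polyfit(w[sel], np.log(np.asarray(a, float)[sel]), 1)[0]
    return slope(a_metal) - slope(a_ref)
```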

Yan, Mingquan; Wang, Dongsheng; Korshin, Gregory V; Benedetti, Marc F

2013-05-01

320

Quantifying fluvial topography using UAS imagery and SfM photogrammetry  

NASA Astrophysics Data System (ADS)

The measurement and monitoring of fluvial topography at high spatial and temporal resolutions is in increasing demand for a range of river science and management applications, including change detection, hydraulic models, habitat assessments, river restorations and sediment budgets. Existing approaches are yet to provide a single technique for rapidly quantifying fluvial topography in both exposed and submerged areas, with high spatial resolution, reach-scale continuous coverage, high accuracy and reasonable cost. In this paper, we explore the potential of using imagery acquired from a small unmanned aerial system (UAS) and processed using Structure-from-Motion (SfM) photogrammetry for filling this gap. We use a rotary winged hexacopter known as the Draganflyer X6, a consumer grade digital camera (Panasonic Lumix DMC-LX3) and the commercially available PhotoScan Pro SfM software (Agisoft LLC). We test the approach on three contrasting river systems; a shallow margin of the San Pedro River in the Valdivia region of south-central Chile, the lowland River Arrow in Warwickshire, UK, and the upland Coledale Beck in Cumbria, UK. Digital elevation models (DEMs) and orthophotos of hyperspatial resolution (0.01-0.02 m) are produced. Mean elevation errors are found to vary somewhat between sites, dependent on vegetation coverage and the spatial arrangement of ground control points (GCPs) used to georeference the data. Mean errors are in the range 4-44 mm for exposed areas and 17-89 mm for submerged areas. Errors in submerged areas can be improved to 4-56 mm with the application of a simple refraction correction procedure. Multiple surveys of the River Arrow site show consistently high quality results, indicating the repeatability of the approach. This work therefore demonstrates the potential of a UAS-SfM approach for quantifying fluvial topography.
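
The "simple refraction correction procedure" is not spelled out in this record; one widely used small-angle approximation multiplies the apparent (through-water) depth by the refractive index of water, roughly 1.34. Whether this matches the authors' exact procedure is an assumption.

```python
import numpy as np

def correct_submerged_depth(apparent_depth, n_water=1.337):
    """Simple refraction correction for through-water photogrammetry:
    near-vertical views underestimate true depth by roughly the refractive
    index of water, so multiply apparent depth to correct (an assumed,
    generic procedure, not necessarily the authors' own)."""
    return np.asarray(apparent_depth, float) * n_water

dem_depth_apparent = np.array([0.10, 0.25, 0.40])   # m, hypothetical values
dem_depth_true = correct_submerged_depth(dem_depth_apparent)
```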

Woodget, Amy; Carbonneau, Patrice; Visser, Fleur; Maddock, Ian; Habit, Evelyn

2014-05-01

321

Locally-calibrated light transmission visualization methods to quantify nonaqueous phase liquid mass in porous media  

NASA Astrophysics Data System (ADS)

Five locally-calibrated light transmission visualization (LTV) methods were tested to quantify nonaqueous phase liquid (NAPL) mass and mass reduction in porous media. Tetrachloroethylene (PCE) was released into a two-dimensional laboratory flow chamber packed with water-saturated sand which was then flushed with a surfactant solution (2% Tween 80) until all of the PCE had been dissolved. In all the LTV methods employed here, the water phase was dyed, rather than the more common approach of dyeing the NAPL phase, such that the light adsorption characteristics of NAPL did not change as dissolution progressed. Also, none of the methods used here required the use of external calibration chambers. The five visualization approaches evaluated included three methods developed from previously published models, a binary method, and a novel multiple wavelength method that has the advantage of not requiring any assumptions about the intra-pore interface structure between the various phases (sand/water/NAPL). The new multiple wavelength method is also expected to be applicable to any translucent porous media containing two immiscible fluids (e.g., water-air, water-NAPL). Results from the sand-water-PCE system evaluated here showed that the model that assumes wetting media of uniform pore size (Model C of Niemet and Selker, 2001) and the multiple wavelength model with no interface structure assumptions were able to accurately quantify PCE mass reduction during surfactant flushing. The average mass recoveries from these two imaging methods were greater than 95% for domain-average NAPL saturations of approximately 2.6 × 10⁻², and were approximately 90% during seven cycles of surfactant flushing that sequentially reduced the average NAPL saturation to 7.5 × 10⁻⁴.

Wang, Huaguo; Chen, Xiaosong; Jawitz, James W.

2008-11-01

322

Generic metric to quantify quorum sensing activation dynamics.  

PubMed

Quorum sensing (QS) enables bacteria to sense and respond to changes in their population density. It plays a critical role in controlling different biological functions, including bioluminescence and bacterial virulence. It has also been widely adapted to program robust dynamics in one or multiple cellular populations. While QS systems across bacteria all appear to function similarly (as density-dependent control systems), there is tremendous diversity among these systems in terms of signaling components and network architectures. This diversity hampers efforts to quantify the general control properties of QS. For a specific QS module, it remains unclear how to most effectively characterize its regulatory properties in a manner that allows quantitative predictions of the activation dynamics of the target gene. Using simple kinetic models, here we show that the dominant temporal dynamics of QS-controlled target activation can be captured by a generic metric, 'sensing potential', defined at a single time point. We validate these predictions using synthetic QS circuits in Escherichia coli. Our work provides a computational framework and experimental methodology to characterize diverse natural QS systems and provides a concise yet quantitative criterion for selecting or optimizing a QS system for synthetic biology applications. PMID:24011134

Pai, Anand; Srimani, Jaydeep K; Tanouchi, Yu; You, Lingchong

2014-04-18

323

Quantifying the global cellular thiol-disulfide status.  

PubMed

It is widely accepted that the redox status of protein thiols is of central importance to protein structure and folding and that glutathione is an important low-molecular-mass redox regulator. However, the total cellular pools of thiols and disulfides and their relative abundance have never been determined. In this study, we have assembled a global picture of the cellular thiol-disulfide status in cultured mammalian cells. We have quantified the absolute levels of protein thiols, protein disulfides, and glutathionylated protein (PSSG) in all cellular protein, including membrane proteins. These data were combined with quantification of reduced and oxidized glutathione in the same cells. Of the total protein cysteines, 6% and 9.6% are engaged in disulfide bond formation in HEK and HeLa cells, respectively. Furthermore, the steady-state level of PSSG is <0.1% of the total protein cysteines in both cell types. However, when cells are exposed to a sublethal dose of the thiol-specific oxidant diamide, PSSG levels increase to >15% of all protein cysteine. Glutathione is typically characterized as the "cellular redox buffer"; nevertheless, our data show that protein thiols represent a larger active redox pool than glutathione. Accordingly, protein thiols are likely to be directly involved in the cellular defense against oxidative stress. PMID:19122143

Hansen, Rosa E; Roth, Doris; Winther, Jakob R

2009-01-13

324

Quantifying the global cellular thiol-disulfide status  

PubMed Central

It is widely accepted that the redox status of protein thiols is of central importance to protein structure and folding and that glutathione is an important low-molecular-mass redox regulator. However, the total cellular pools of thiols and disulfides and their relative abundance have never been determined. In this study, we have assembled a global picture of the cellular thiol–disulfide status in cultured mammalian cells. We have quantified the absolute levels of protein thiols, protein disulfides, and glutathionylated protein (PSSG) in all cellular protein, including membrane proteins. These data were combined with quantification of reduced and oxidized glutathione in the same cells. Of the total protein cysteines, 6% and 9.6% are engaged in disulfide bond formation in HEK and HeLa cells, respectively. Furthermore, the steady-state level of PSSG is <0.1% of the total protein cysteines in both cell types. However, when cells are exposed to a sublethal dose of the thiol-specific oxidant diamide, PSSG levels increase to >15% of all protein cysteine. Glutathione is typically characterized as the “cellular redox buffer”; nevertheless, our data show that protein thiols represent a larger active redox pool than glutathione. Accordingly, protein thiols are likely to be directly involved in the cellular defense against oxidative stress.

Hansen, Rosa E.; Roth, Doris; Winther, Jakob R.

2009-01-01

325

Quantifying Ocean Mixing Processes Through Stochastic Heterogeneity Mapping  

NASA Astrophysics Data System (ADS)

Stochastic heterogeneity mapping based on the band-limited Von Karman function is applied to stacked, migrated seismic data, allowing the extraction of several stochastic parameters that may elucidate ocean mixing processes. In particular, the Von Karman method enables extraction from the reflectivity field of: 1) the power spectrum, a combined estimate of amplitude and coherence in the analysis window; 2) the correlation length, an estimator of the maximum length over which the event distribution is described by a power law; and 3) the Hurst number, which is the exponent of the power law and is directly related to the fractal dimension, a measure of how completely a fractal fills space. With the extraction of these parameters we aim to quantify mixing processes at various scales in the ocean. Curiously, a single scaling law derived from percolation theory asserts that Hurst numbers between 0 and 0.5 are indicative of sub-diffusive behavior and Hurst numbers between 0.5 and 1 indicate super-diffusive behavior. Moreover, double-diffusion regimes are represented by a Hurst number of 0.25. Low Hurst numbers represent a rich range of scale lengths and, accordingly, correspond to well-mixed regimes. Preliminary analysis of GO seismic profiles acquired in April-May 2007 shows that zones corresponding to particular water masses display varying degrees of diffusive behavior. We believe that this method of analysis can address multi-scale mixing processes in the ocean from seismic data alone.

Buffett, G.; Hurich, C.; Carbonell, R.; Biescas, B.; Sallarès, V.; Kläschen, D.

2009-04-01

326

Quantifying Correlations Between Allosteric Sites in Thermodynamic Ensembles  

PubMed Central

Allostery describes altered protein function at one site due to a perturbation at another site. One mechanism of allostery involves correlated motions, which can occur even in the absence of substantial conformational change. We present a novel method, “MutInf”, to identify statistically significant correlated motions from equilibrium molecular dynamics simulations. Our approach analyzes both backbone and sidechain motions using internal coordinates to account for the gear-like twists that can take place even in the absence of the large conformational changes typical of traditional allosteric proteins. We quantify correlated motions using a mutual information metric, which we extend to incorporate data from multiple short simulations and to filter out correlations that are not statistically significant. Applying our approach to uncover mechanisms of cooperative small molecule binding in human interleukin-2, we identify clusters of correlated residues from 50 ns of molecular dynamics simulations. Interestingly, two of the clusters with the strongest correlations highlight known cooperative small-molecule binding sites and show substantial correlations between these sites. These cooperative binding sites on interleukin-2 are correlated not only through the hydrophobic core of the protein but also through a dynamic polar network of hydrogen bonding and electrostatic interactions. Since this approach identifies correlated conformations in an unbiased, statistically robust manner, it should be a useful tool for finding novel or “orphan” allosteric sites in proteins of biological and therapeutic importance.
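
The core quantity here is the mutual information between two internal-coordinate distributions, estimated from histograms. The sketch below omits the finite-sample significance filtering and multi-simulation combination that distinguish the full MutInf method.

```python
import numpy as np

def mutual_information(x, y, bins=24):
    """Mutual information (in nats) between two torsion-angle series,
    estimated from a joint histogram over [-180, 180) degrees."""
    joint, _, _ = np.histogram2d(x, y, bins=bins,
                                 range=[[-180, 180], [-180, 180]])
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```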

McClendon, Christopher L.; Friedland, Gregory; Mobley, David L.; Amirkhani, Homeira; Jacobson, Matthew P.

2009-01-01

327

Quantifying stochastic effects in biochemical reaction networks using partitioned leaping  

NASA Astrophysics Data System (ADS)

“Leaping” methods show great promise for significantly accelerating stochastic simulations of complex biochemical reaction networks. However, few practical applications of leaping have appeared in the literature to date. Here, we address this issue using the “partitioned leaping algorithm” (PLA) [L. A. Harris and P. Clancy, J. Chem. Phys. 125, 144107 (2006)], a recently introduced multiscale leaping approach. We use the PLA to investigate stochastic effects in two model biochemical reaction networks. The networks that we consider are simple enough so as to be accessible to our intuition but sufficiently complex so as to be generally representative of real biological systems. We demonstrate how the PLA allows us to quantify subtle effects of stochasticity in these systems that would be difficult to ascertain otherwise as well as not-so-subtle behaviors that would strain commonly used “exact” stochastic methods. We also illustrate bottlenecks that can hinder the approach and exemplify and discuss possible strategies for overcoming them. Overall, our aim is to aid and motivate future applications of leaping by providing stark illustrations of the benefits of the method while at the same time elucidating obstacles that are often encountered in practice.
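
For intuition, here is plain tau-leaping, the simpler relative of the PLA, on a one-species birth-death network; the PLA extends this idea by classifying each reaction on the fly as exact-stochastic, Langevin-leaping, or deterministic. Rate constants and the leap size are arbitrary illustration values.

    import numpy as np

    # Birth-death network: 0 -> X at rate k1; X -> 0 at rate k2*X
    k1, k2 = 10.0, 0.1
    x, t, tau = 0, 0.0, 0.05          # copy number, time, fixed leap size
    rng = np.random.default_rng(2)

    while t < 100.0:
        a_birth, a_death = k1, k2 * x          # reaction propensities
        # Each reaction fires a Poisson-distributed number of times per leap
        x += rng.poisson(a_birth * tau) - rng.poisson(a_death * tau)
        x = max(x, 0)                          # guard against negative counts
        t += tau

    print("final copy number:", x, "(deterministic steady state: k1/k2 = 100)")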

Harris, Leonard A.; Piccirilli, Aaron M.; Majusiak, Emily R.; Clancy, Paulette

2009-05-01

328

QUANTIFYING THE EVOLVING MAGNETIC STRUCTURE OF ACTIVE REGIONS  

SciTech Connect

The topical and controversial issue of parameterizing the magnetic structure of solar active regions has vital implications in the understanding of how these structures form, evolve, produce solar flares, and decay. This interdisciplinary and ill-constrained problem of quantifying complexity is addressed by using a two-dimensional wavelet transform modulus maxima (WTMM) method to study the multifractal properties of active region photospheric magnetic fields. The WTMM method provides an adaptive space-scale partition of a fractal distribution, from which one can extract the multifractal spectra. The use of a novel segmentation procedure allows us to remove the quiet Sun component and reliably study the evolution of active region multifractal parameters. It is shown that prior to the onset of solar flares, the magnetic field undergoes restructuring as Dirac-like features (with a Hölder exponent, h = -1) coalesce to form step functions (where h = 0). The resulting configuration has a higher concentration of gradients along neutral line features. We propose that when sufficient flux is present in an active region for a period of time, it must be structured with a fractal dimension greater than 1.2, and a Hölder exponent greater than -0.7, in order to produce M- and X-class flares. This result has immediate applications in the study of the underlying physics of active region evolution and space weather forecasting.

Conlon, Paul A.; McAteer, R.T. James; Gallagher, Peter T.; Fennell, Linda, E-mail: mcateer@nmsu.ed [School of Physics, Trinity College Dublin, Dublin 2 (Ireland)

2010-10-10

329

Quantifying the direct use value of Condor seamount  

NASA Astrophysics Data System (ADS)

Seamounts often satisfy numerous uses and interests. Multiple uses can generate multiple benefits but also conflicts and impacts, calling, therefore, for integrated and sustainable management. To assist in developing comprehensive management strategies, policymakers recognise the need to include measures of socioeconomic analysis alongside ecological data so that practical compromises can be made. This study assessed the direct output impact (DOI) of the relevant marine activities operating at Condor seamount (Azores, central northeast Atlantic) as proxies of the direct use values provided by the resource system. Results demonstrated that Condor seamount supported a wide range of uses yielding distinct economic outputs. Demersal fisheries, scientific research and shark diving were the top-three activities generating the highest revenues, while tuna fisheries, whale watching and scuba diving had marginal economic significance. Results also indicated that the economic importance of non-extractive uses of Condor is considerable, highlighting the importance of these uses as alternative income-generating opportunities for local communities. It is hoped that quantifying the direct use values provided by Condor seamount will contribute to the decision-making process towards its long-term conservation and sustainable use.

Ressurreição, Adriana; Giacomello, Eva

2013-12-01

330

Quantifying Representative Hydraulic Conductivity for Three-Dimensional Fractured Formations  

NASA Astrophysics Data System (ADS)

The fractures and pores in rock formations are the fundamental units for flow and contaminant transport simulations. Due to technical and logistical limitations, it is difficult in practice to account for such small units when modeling flow and transport in large-scale problems. The concept of continuum representations of fractured rocks is then used as an alternative to solve for flow and transport in complex fractured formations. For these types of approaches, the determination of representative parameters such as hydraulic conductivity and the dispersion coefficient plays an important role in controlling the accuracy of simulation results for large-scale problems. The objective of this study is to develop a discrete fracture network (DFN) model and the associated unstructured mesh generation system to characterize the continuum hydraulic conductivity for fractured rocks on different scales. In this study, a coupled three-dimensional model of water flow, thermal transport, solute transport, and geochemical kinetic/equilibrium reactions in saturated/unsaturated porous media (HYDROGEOCHEM) is employed as the flow simulator to analyze flow behaviors in fractured formations. The fracture network model and the corresponding continuum model are simulated for problems of the same scale. Based on the concept of mass conservation in flow, the correlations between statistics of fracture structure and the representative continuum parameters are quantified for a variety of fracture distribution scenarios and scales. The results of this study are expected to provide general insight into the procedures and the associated techniques for analyzing flow in complex large-scale fractured rock systems.
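
The upscaling rests on mass conservation: once a DFN simulation yields the total flux through a block under an imposed head difference, Darcy's law gives the block's representative conductivity. A sketch with hypothetical numbers (not HYDROGEOCHEM output):

    # Darcy's law rearranged: K = Q * L / (A * dh)
    Q  = 3.2e-4    # simulated outflow through the downstream face (m^3/s)
    L  = 50.0      # block length along the mean flow direction (m)
    A  = 50.0**2   # cross-sectional area of the face (m^2)
    dh = 5.0       # imposed head difference across the block (m)

    K = Q * L / (A * dh)
    print(f"representative hydraulic conductivity K = {K:.2e} m/s")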

Lee, I.; Ni, C.

2013-12-01

331

Method for quantifying optical properties of the human lens  

DOEpatents

A method is disclosed for quantifying optical properties of the human lens. The present invention includes the application of fiberoptic, OMA-based instrumentation as an in vivo diagnostic tool for the human ocular lens. Rapid, noninvasive and comprehensive assessment of the optical characteristics of a lens using very modest levels of exciting light is described. Typically, the backscatter and fluorescence spectra (from about 300- to 900-nm) elicited by each of several exciting wavelengths (from about 300- to 600-nm) are collected within a few seconds. The resulting optical signature of individual lenses is then used to assess the overall optical quality of the lens by comparing the results with a database of similar measurements obtained from a reference set of normal human lenses having various ages. Several metrics have been identified which gauge the optical quality of a given lens relative to the norm for the subject's chronological age. These metrics may also serve to document accelerated optical aging and/or as early indicators of cataract or other disease processes. 8 figs.

Loree, T.R.; Bigio, I.J.; Zuclich, J.A.; Shimada, Tsutomu; Strobl, K.

1999-04-13

332

Quantifying the Restorable Water Volume of California's Sierra Nevada Meadows  

NASA Astrophysics Data System (ADS)

The Sierra Nevada is estimated to provide over 66% of California's water supply, which is largely derived from snowmelt. Global climate warming is expected to result in a decrease in snowpack and an increase in melting rate, making the attenuation of snowmelt by any means an important ecosystem service for ensuring water availability. Montane meadows are dispersed throughout the mountain range and can act like natural reservoirs; they also provide wildlife habitat, water filtration, and water storage. Despite the important role of meadows in the Sierra Nevada, a large proportion is degraded by stream incision, which increases volume outflows and reduces overbank flooding, thus reducing infiltration and potential water storage. Restoration of meadow stream channels would therefore improve hydrological functioning, including increased water storage. The potential water holding capacity of restored meadows has yet to be quantified, so this research addresses this knowledge gap by estimating the restorable water volume lost to stream incision. More than 17,000 meadows were analyzed by categorizing their erosion potential using channel slope and soil texture, ultimately resulting in six general erodibility types. Field measurements of over 100 meadows, stratified by latitude, elevation, and geologic substrate, were then taken and analyzed for each erodibility type to determine the average depth of incision. Restorable water volume was then quantified as a function of the water holding capacity of the soil, meadow area and incised depth. Total restorable water volume was found to be 120 x 10^6 m3, or approximately 97,000 acre-feet. Using 95% confidence intervals for incised depth, the upper and lower bounds of the total restorable water volume were found to be 107-140 x 10^6 m3. Though this estimate of restorable water volume is small relative to the storage capacity of typical California reservoirs, restoration of Sierra Nevada meadows remains an important objective. Storage of water in meadows benefits California wildlife, potentially attenuates floods, and elevates base flows, which can ease effects on the spring recession curve from the expected decline in Sierran snowpack with atmospheric warming.
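
The bookkeeping behind the estimate is a product summed over meadows, restorable volume = meadow area x incised depth x soil water-holding capacity; the per-meadow values below are hypothetical placeholders, not the study's field measurements.

    # (area m^2, mean incised depth m, soil water-holding capacity, -)
    meadows = [
        (2.0e5, 1.2, 0.35),
        (8.5e4, 0.8, 0.30),
        (4.1e5, 1.5, 0.40),
    ]
    total = sum(area * depth * phi for area, depth, phi in meadows)
    print(f"restorable water volume = {total:.3e} m^3")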

Emmons, J. D.; Yarnell, S. M.; Fryjoff-Hung, A.; Viers, J.

2013-12-01

333

The Language of Show Biz: A Dictionary.  

ERIC Educational Resources Information Center

This dictionary of the language of show biz provides the layman with definitions and essays on terms and expressions often used in show business. The overall pattern of selection was intended to be more rather than less inclusive, though radio, television, and film terms were deliberately omitted. Lengthy explanations are sometimes used to express…

Sergel, Sherman Louis, Ed.

334

International Plowing Match & Farm Machinery Show  

NSDL National Science Digital Library

The 1995 International Plowing Match & Farm Machinery Show in Ontario, Canada has a site on the Web. The IPM is a non-profit organization of volunteers which annually organizes Canada's largest farm machinery show. The event is commercial and educational. Thousands of school children and educators attend and participate in organized educational activities.

1995-01-01

335

Show Me: Automatic Presentation for Visual Analysis  

Microsoft Academic Search

This paper describes Show Me, an integrated set of user interface commands and defaults that incorporate automatic presentation into a commercial visual analysis system called Tableau. A key aspect of Tableau is VizQL, a language for specifying views, which is used by Show Me to extend automatic presentation to the generation of tables of views (commonly called small multiple displays).

Jock D. Mackinlay; Pat Hanrahan; Chris Stolte

2007-01-01

336

Solar Wind Dynamic Pressure Variations: Quantifying the Statistical Magnetospheric Response.  

National Technical Information Service (NTIS)

Solar wind dynamic pressure variations are common and have large amplitudes. Existing models for the transient magnetospheric and ionospheric response to the solar wind dynamic pressure variation are quantified. The variations drive large amplitude (appro...

D. G. Sibeck

1990-01-01

337

Analyzing complex networks evolution through Information Theory quantifiers  

NASA Astrophysics Data System (ADS)

A methodology to analyze dynamical changes in complex networks based on Information Theory quantifiers is proposed. The square root of the Jensen-Shannon divergence, a measure of dissimilarity between two probability distributions, and the MPR Statistical Complexity are used to quantify states in the network evolution process. Three cases are analyzed: the Watts-Strogatz model, a gene network during the progression of Alzheimer's disease, and a climate network for the Tropical Pacific region to study the El Niño/Southern Oscillation (ENSO) dynamics. We find that the proposed quantifiers are able not only to capture changes in the dynamics of the processes but also to quantify and compare states in their evolution.
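
The first quantifier is straightforward to compute; a sketch of the square root of the Jensen-Shannon divergence between, say, a network's degree distributions at two stages of its evolution (the distributions here are made up for illustration):

    import numpy as np

    def jsd_sqrt(p, q):
        """Square root of the Jensen-Shannon divergence (base-2 logs),
        a bounded metric between two probability distributions."""
        p = np.asarray(p, float); p = p / p.sum()
        q = np.asarray(q, float); q = q / q.sum()
        m = 0.5 * (p + q)
        def kl(a, b):
            nz = a > 0
            return np.sum(a[nz] * np.log2(a[nz] / b[nz]))
        return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

    # Hypothetical degree distributions before and after a rewiring process
    before = [0.40, 0.30, 0.20, 0.10]
    after  = [0.10, 0.20, 0.30, 0.40]
    print(f"sqrt(JSD) = {jsd_sqrt(before, after):.3f}")  # 0 identical, 1 maximal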

Carpi, Laura C.; Rosso, Osvaldo A.; Saco, Patricia M.; Ravetti, Martín Gómez

2011-01-01

338

Workshop Report on Quantifying Environmental Damage from Energy Activities.  

National Technical Information Service (NTIS)

Data and methods for quantifying environmental damage from energy activities were evaluated. Specifically, discussions were designed to identify the types and amounts of pollutants emitted by energy technologies that may affect the environment adversely, ...

P. D. Moskowitz; M. D. Rowe; S. C. Morris; L. D. Hamilton

1977-01-01

339

Quantifying the Carbon Intensity of Biomass Energy  

NASA Astrophysics Data System (ADS)

Regulatory agencies at the national and regional level have recognized the importance of quantitative information about greenhouse gas emissions from biomass used in transportation fuels or in electricity generation. For example, in the recently enacted California Low-Carbon Fuel Standard, the California Air Resources Board conducted a comprehensive study to determine an appropriate methodology for setting carbon intensities for biomass-derived transportation fuels. Furthermore, the U.S. Environmental Protection Agency is currently conducting a multi-year review to develop a methodology for estimating biogenic carbon dioxide (CO2) emissions from stationary sources. Our study develops and explores a methodology to compute carbon emission intensities (CIs) per unit of biomass energy, which is a metric that could be used to inform future policy development exercises. To compute CIs for biomass, we use the Global Change Assessment Model (GCAM), which is an integrated assessment model that represents global energy, agriculture, land and physical climate systems with regional, sectoral, and technological detail. The GCAM land use and land cover component includes both managed and unmanaged land cover categories such as food crop production, forest products, and various non-commercial land uses, and it is subdivided into 151 global land regions (wiki.umd.edu/gcam), ten of which are located in the U.S. To illustrate a range of values for different biomass resources, we use GCAM to compute CIs for a variety of biomass crops grown in different land regions of the U.S. We investigate differences in emissions for biomass crops such as switchgrass, miscanthus and willow. Specifically, we use GCAM to compute global carbon emissions from the land use change caused by a marginal increase in the amount of biomass crop grown in a specific model region. Thus, we are able to explore how land use change emissions vary by the type and location of biomass crop grown in the U.S. Direct emissions occur when biomass production used for energy displaces land used for food crops, forest products, pasture, or other arable land in the same region. Indirect emissions occur when increased food crop production, compensating for displaced food crop production in the biomass production region, displaces land in regions outside of the region of biomass production. Initial results from this study suggest that indirect land use emissions, mainly from converting unmanaged forest land, are likely to be as important as direct land use emissions in determining the carbon intensity of biomass energy. Finally, we value the emissions of a marginal unit of biomass production for a given carbon price path and a range of assumed social discount rates. We also compare the cost of bioenergy emissions as valued by a hypothetical private actor to the relevant cost of emissions from conventional fossil fuels, such as coal or natural gas.

Hodson, E. L.; Wise, M.; Clarke, L.; McJeon, H.; Mignone, B.

2012-12-01

340

Comparison of Weather Shows in Eastern Europe  

NASA Astrophysics Data System (ADS)

Television weather shows in Eastern Europe are in most cases of a high graphical standard, but there are vast differences in their duration and information content. A few characteristics and regularities reveal the character of a weather show. The main differences are largely driven by the income structure of the TV station: either it is a fully privately funded station relying on advertising income, or it is a public service station funded mainly by the national budget or a fixed fee/tax. These funding models lead to substantial differences in duration and even in the graphical presentation of the weather. The next important aspect is the supplier and/or processor of the weather information. In short, when the show is produced by the national met office, it contains more scientific terms, synoptic maps, satellite imagery, etc. If the supplier is a private meteorological company, the show is more user-friendly and lay-oriented, with fewer scientific terms. We are experiencing a massive shift in public weather knowledge and demand for information. In the past, weather shows consisted only of maps with weather icons. In today's world, even lay-oriented weather shows consist partly of numerical weather model outputs, designed of course to be understandable and graphically attractive. Outputs of numerical weather models used to be part of the daily life of professional meteorologists only; today they are a common part of the lives of ordinary people. Video samples are part of this presentation.

Najman, M.

2009-09-01

341

Quantifying the toxic and mutagenic activity of complex mixtures with Salmonella typhimurium  

SciTech Connect

The toxicity and mutagenicity of 11 compounds individually and in mixtures were quantified in Salmonella typhimurium strains TA98, TA100, and TA1537 by a modification of the Ames spot test. The distance (millimeters) from the center of the petri dish to the bacterial growth front represented the toxic response. When mutagenicity occurred, the distance from the inner radius to the outer radius of the mutagenic growth represented the mutagenic response. Multiple regression analysis was used to quantify the toxicity and mutagenicity of individual compounds in the mixtures. The results indicate that the effects of compounds in mixtures are generally additive.
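
A compact illustration of the additivity analysis: regress the responses of mixtures on the component concentrations and compare the additive prediction with the measured mixture response. All numbers are hypothetical placeholders, not the paper's data.

    import numpy as np

    # Rows: mixtures; columns: concentration of each compound in the mixture.
    # y: measured toxic response (mm to the bacterial growth front).
    X = np.array([
        [1.0, 0.0, 0.5],
        [0.5, 1.0, 0.0],
        [0.0, 0.5, 1.0],
        [1.0, 1.0, 1.0],
        [0.5, 0.5, 0.5],
    ])
    y = np.array([12.0, 10.5, 9.0, 21.0, 10.8])

    # Least-squares fit of the additive model y = X @ beta
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("per-compound contributions:", beta.round(2))
    print("additive prediction for the full mixture:", float(X[3] @ beta))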

Somani, S.M.; Schaeffer, D.J.; Mack, J.O.

1981-03-01

342

Melanoma Drug Trials Show Significant Promise  

MedlinePLUS

By targeting immune ... Monday, June 2, 2014 (HealthDay News) -- A relatively ...

343

Spacecraft Image Mashup Shows Galactic Collision  

NASA Video Gallery

This new composite image from the Chandra X-ray Observatory, the Hubble Space Telescope, and the Spitzer Space Telescope shows two colliding galaxies more than 100 million years after they first ...

344

Adult Stem Cell Research Shows Promise  

MedlinePLUS

... re looking at a particular kind of multipotent adult stem cell—the MSC—which is being used in a ...

345

TRMM Satellite Shows Heavy Rainfall in Cristina  

NASA Video Gallery

NASA's TRMM satellite rainfall data was overlaid on an enhanced visible/infrared image from NOAA's GOES-East satellite showing cloud and rainfall extent. Green areas indicate rainfall at over 20 mm...

346

GOES Satellite Data Shows Tornado Development  

NASA Video Gallery

This animation of NOAA's GOES-East satellite data shows the development and movement of the weather system that spawned tornadoes affecting the southern and eastern U.S. states on April 27-29, 2014...

347

New Psoriasis Drug Shows Promise in Trials  

MedlinePLUS

Secukinumab appears more ... Wednesday, July 9, 2014 (HealthDay News) -- A new ...

348

Map showing predicted habitat potential for tortoise  

USGS Multimedia Gallery

This map shows the spatial representation of the predicted habitat potential index values for desert tortoise in the Mojave and parts of the Sonoran Deserts of California, Nevada, Utah, and Arizona. Map: USGS. ...

2009-05-21

349

Differential GPS measurements as a tool to quantify Late Cenozoic crustal deformation (Oman, Arabian Peninsula)  

NASA Astrophysics Data System (ADS)

The Sultanate of Oman is situated in the north-eastern part of the Arabian Plate. It therefore represents the leading edge as the plate is drifting north relative to the Eurasian Plate. The movement results in continent-continent collision in the northwest (Zagros fold and thrust belt) and ocean-continent collision in the northeast (Makran subduction zone). We follow the hypothesis that this plate tectonic setting results in an internal deformation of the Arabian Plate. The study presented here is part of a larger project that aims at quantifying the forcing factors of coastal evolution (Hoffmann et al. 2012). The sea level development, climate - and associated rates of weathering and sediment supply - and differential land movement (neotectonics) are identified as key factors during the Late Cenozoic. Recent vertical land movement is obvious and expressed in differences of the coastal morphology. Parts of the coastline are subsiding: these areas show drowned wadi mouths. Other parts are characterised by a straightened coastline, and raised wave-cut terraces are evident well above present mean sea level. Despite these erosional terraces, depositional terraces on alluvial fans are also encountered in close vicinity to the mountain chain. Detailed topographic profile measurements are carried out using a LEICA Viva GNSS-GS15 differential GPS. The instrument yields data with an accuracy of 1-2 cm relative to the base station. The profile measurements are oriented perpendicular to the coastline and therefore perpendicular to the raised wave-cut terraces. Up to six terraces are encountered at elevations up to 400 m above present sea level, with the oldest ones being the highest. The data allow us to calculate the scarp height, tread length and tread angle of the terraces. The results indicate that the terraces show increasing seaward tilting with age. This observation is interpreted as reflecting ongoing uplift. A coast-parallel deformation pattern becomes obvious when comparing parallel profiles. Profiles measured along depositional fluvial terraces also indicate a direct correlation between the age of the deposits and the dip angle of the surface. Further evidence for ongoing uplift is seen as the older fluvial terraces are situated further inland. Additional dating evidence is needed to quantify the uplift and to resolve the differential land movement in time and space.

Rupprechter, M.; Roepert, A.; Hoffmann, G.

2012-04-01

350

Baltimore WATERS Test Bed -- Quantifying Groundwater in Urban Areas  

NASA Astrophysics Data System (ADS)

The purpose of this project is to quantify the urban water cycle, with an emphasis on urban groundwater, using investigations at multiple spatial scales. The overall study focuses on the 171 sq km Gwynns Falls watershed, which spans an urban to rural gradient of land cover and is part of the Baltimore Ecosystem Study LTER. Within the Gwynns Falls, finer-scale studies focus on the 14.3 sq km Dead Run and its subwatersheds. A coarse-grid MODFLOW model has been set up to quantify groundwater flow magnitude and direction at the larger watershed scale. Existing wells in this urban area are sparse, but are being located through mining of USGS NWIS and local well data bases. Wet and dry season water level synoptics, stream seepage transects, and existing permeability data are being used in model calibration. In collaboration with CUAHSI HMF Geophysics, a regional-scale microgravity survey was conducted over the watershed in July 2007 and will be repeated in spring 2008. This will enable calculation of the change in groundwater levels for use in model calibration. At the smaller spatial scale (Dead Run catchment), three types of data have been collected to refine our understanding of the groundwater system. (1) Multiple bromide tracer tests were conducted along a 4 km reach of Dead Run under low-flow conditions to examine groundwater- surface water exchange as a function of land cover type and stream position in the watershed. The tests will be repeated under higher base flow conditions in early spring 2008. Tracer test data will be interpreted using the USGS OTIS model and results will be incorporated into the MODFLOW model. (2) Riparian zone geophysical surveys were carried out with support from CUAHSI HMF Geophysics to delineate depth to bedrock and the water table topography as a function of distance from the stream channel. Resistivity, ground penetrating radar, and seismic refraction surveys were run in ten transects across and around the stream channels. (3) A finer-scale microgravity survey was conducted over this area and will be repeated in spring. Efforts to quantify other components of the water cycle include: (1) deployment of an eddy covariance station for ET measurement; (2) mining flow metering records; (3) evaluation of long-term stream-flow data records; and (4) processing precipitation fields. The objective of the precipitation analysis is to obtain rainfall fields at a spatial scale of 1 sq km for the study area. Analyses are based on rain gage observations and radar reflectivity observations from the Sterling, Virginia WSR-88D radar. Radar rainfall analyses utilize the HydroNEXRAD system. Data is being managed using the CUAHSI HIS Observations Data Model housed on a HIS server. The dataset will be made accessible through web services and the Data Access System for Hydrology.

Welty, C.; Miller, A. J.; Ryan, R. J.; Crook, N.; Kerchkof, T.; Larson, P.; Smith, J.; Baeck, M. L.; Kaushal, S.; Belt, K.; McGuire, M.; Scanlon, T.; Warner, J.; Shedlock, R.; Band, L.; Groffman, P.

2007-12-01

351

A two phase circular regression algorithm for quantifying wear in CV joint ball race tracks  

Microsoft Academic Search

In this paper, a two-phase circular regression algorithm is presented for extracting wear profiles from Rzeppa-type constant velocity (CV) joints and for quantifying race track wear. In ball races operating under harsh cyclic loading conditions, the predominant brinelling and “false brinelling” wear mechanisms result in small indentations or grooves in the race track. These are particularly difficult to measure as

Mike L. Philpott; Brain P. Welcher; Dale R. Pankow; Douglas Vandenberg

1996-01-01

352

Quantifying the error in estimated transfer functions with application to model order selection  

Microsoft Academic Search

Previous results on estimating errors or error bounds on identified transfer functions have relied on prior assumptions about the noise and the unmodeled dynamics. This prior information took the form of parameterized bounding functions or parameterized probability density functions, in the time or frequency domain with known parameters. It is shown that the parameters that quantify this prior information can

Graham C. Goodwin; Michel Gevers; Brett Ninness

1992-01-01

353

A method for quantifying facial muscle movements in the smile during facial expression training  

Microsoft Academic Search

The purpose of this study is to propose an evaluation method capable of quantifying facial expressions during facial expression training that is intended to achieve a more expressive face. The specific aim was to investigate methods of estimating facial muscle movements from facial images and display our estimation results in an understandable way, as well as to evaluate the effectiveness

Ai Takami; Kyoko Ito; Shogo Nishida

2008-01-01

354

COMPARISON OF MEASUREMENT TECHNIQUES FOR QUANTIFYING SELECTED ORGANIC EMISSIONS FROM KEROSENE SPACE HEATERS  

EPA Science Inventory

The report gives results of (1) a comparison of the hood and chamber techniques for quantifying pollutant emission rates from unvented combustion appliances, and (2) an assessment of the semivolatile and nonvolatile organic-compound emissions from unvented kerosene space heaters. In ...

355

Quantifying Square Membrane Wrinkle Behavior Using MITC Shell Elements  

NASA Technical Reports Server (NTRS)

For future membrane-based structures, quantified predictions of membrane wrinkling behavior in terms of amplitude, angle, and wavelength are needed to optimize the efficiency and integrity of such structures, as well as their associated control systems. For numerical analyses performed in the past, limitations on the accuracy of membrane distortion simulations have often been related to the assumptions made while using finite elements. Specifically, this work demonstrates that critical assumptions include: effects of gravity, assumed initial or boundary conditions, and the type of element used to model the membrane. In this work, a 0.2 square meter membrane is treated as a structural material with non-negligible bending stiffness. Mixed Interpolation of Tensorial Components (MITC) shell elements are used to simulate wrinkling behavior due to a constant applied in-plane shear load. Membrane thickness, gravity effects, and initial imperfections with respect to flatness were varied in numerous nonlinear analysis cases. Significant findings include notable variations in wrinkle modes for thicknesses in the range of 50 microns to 1000 microns, which also depend on the presence of an applied gravity field. However, it is revealed that relationships between overall strain energy density for cases with differing initial conditions are independent of the assumed initial conditions. In addition, analysis results indicate that the relationship between amplitude scale (W/t) and structural scale (L/t) is linear in the presence of a gravity field.

Jacobson, Mindy B.; Iwasa, Takashi; Natori, M. C.

2004-01-01

356

Quantifying the Rheological and Hemodynamic Characteristics of Sickle Cell Anemia  

PubMed Central

Sickle erythrocytes exhibit abnormal morphology and membrane mechanics under deoxygenated conditions due to the polymerization of hemoglobin S. We employed dissipative particle dynamics to extend a validated multiscale model of red blood cells (RBCs) to represent different sickle cell morphologies based on a simulated annealing procedure and experimental observations. We quantified cell distortion using asphericity and elliptical shape factors, and the results were consistent with a medical image analysis. We then studied the rheology and dynamics of sickle RBC suspensions under constant shear and in a tube. In shear flow, the transition from shear-thinning to shear-independent flow revealed a profound effect of cell membrane stiffening during deoxygenation, with granular RBC shapes leading to the greatest viscosity. In tube flow, the increase of flow resistance by granular RBCs was also greater than the resistance of blood flow with sickle-shape RBCs. However, no occlusion was observed in a straight tube under any conditions unless an adhesive dynamics model was explicitly incorporated into simulations that partially trapped sickle RBCs, which led to full occlusion in some cases.
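
For concreteness, one common shape measure: asphericity from the eigenvalues of a point cloud's gyration tensor (normalization conventions vary across papers, so treat the exact formula as illustrative). The vertex clouds are synthetic, not RBC membrane meshes.

    import numpy as np

    def asphericity(points):
        """Asphericity from the gyration tensor: ~0 for a sphere,
        larger for elongated shapes; normalized by the trace here."""
        r = points - points.mean(axis=0)
        gyration = r.T @ r / len(r)
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(gyration))  # ascending
        return (l3 - 0.5 * (l1 + l2)) / (l1 + l2 + l3)

    rng = np.random.default_rng(4)
    sphere = rng.standard_normal((500, 3))
    elongated = sphere * np.array([3.0, 0.6, 0.4])  # sickle-like stretching
    print(f"sphere: {asphericity(sphere):.3f}, "
          f"elongated: {asphericity(elongated):.3f}")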

Lei, Huan; Karniadakis, George Em

2012-01-01

357

Quantifying the limitations of small animal positron emission tomography  

NASA Astrophysics Data System (ADS)

The application of position sensitive semiconductor detectors in medical imaging is a field of global research interest. The Monte-Carlo simulation toolkit GEANT4 [ http://geant4.web.cern.ch/geant4/] was employed to improve the understanding of detailed γ-ray interactions within the small animal Positron Emission Tomography (PET), high-purity germanium (HPGe) imaging system, SmartPET [A.J. Boston, et al., Oral contribution, ANL, Chicago, USA, 2006]. This system has shown promising results in the field of PET [R.J. Cooper, et al., Nucl. Instr. and Meth. A (2009), accepted for publication] and Compton camera imaging [J.E. Gillam, et al., Nucl. Instr. and Meth. A 579 (2007) 76]. Images for a selection of single and multiple point, line and phantom sources were successfully reconstructed using both a filtered-back-projection (FBP) [A.R. Mather, Ph.D. Thesis, University of Liverpool, 2007] and an iterative reconstruction algorithm [A.R. Mather, Ph.D. Thesis, University of Liverpool, 2007]. Simulated data were exploited as an alternative route to a reconstructed image, allowing full quantification of the image distortions introduced in each phase of the data processing. Quantifying the contribution of uncertainty in all system components, from detector to reconstruction algorithm, allows the areas most in need of attention in the SmartPET project and semiconductor PET to be addressed.

Oxley, D. C.; Boston, A. J.; Boston, H. C.; Cooper, R. J.; Cresswell, J. R.; Grint, A. N.; Nolan, P. J.; Scraggs, D. P.; Lazarus, I. H.; Beveridge, T. E.

2009-06-01

358

Quantifying the abnormal hemodynamics of sickle cell anemia  

NASA Astrophysics Data System (ADS)

Sickle red blood cells (SS-RBC) exhibit heterogeneous morphologies and abnormal hemodynamics in deoxygenated states. A multi-scale model for SS-RBC is developed based on the Dissipative Particle Dynamics (DPD) method. Different cell morphologies (sickle, granular, elongated shapes) typically observed in deoxygenated states are constructed and quantified by the asphericity and elliptical shape factors. The hemodynamics of SS-RBC suspensions is studied in both shear and pipe flow systems. The flow resistance obtained from both systems exhibits a larger value than the healthy blood flow due to the abnormal cell properties. Moreover, SS-RBCs exhibit abnormal adhesive interactions with both the vessel endothelium cells and the leukocytes. The effect of the abnormal adhesive interactions on the hemodynamics of sickle blood is investigated using the current model. It is found that both the SS-RBC - endothelium and the SS-RBC - leukocyte interactions can potentially trigger the vicious “sickling and entrapment” cycles, resulting in the vaso-occlusion phenomena widely observed in micro-circulation experiments.

Lei, Huan; Karniadakis, George

2012-02-01

359

New Severity Indices for Quantifying Single-suture Metopic Craniosynostosis  

PubMed Central

OBJECTIVE To describe novel severity indices with which to quantify severity of trigonocephaly malformation in children diagnosed with isolated metopic synostosis. METHODS Computed tomographic scans of the cranium were obtained from 38 infants diagnosed with isolated metopic synostosis and 53 age-matched control patients. Volumetric reformations of the cranium were used to trace two-dimensional planes defined by the cranium-base plane and well-defined brain landmarks. For each patient, novel trigonocephaly severity indices (TSI) were computed from outline cranium shapes on each of these planes. The metopic severity index based on measurements of interlandmark distances was also computed and a receiver operating characteristic analysis used to compare the accuracy of classification based on TSIs versus that based on the metopic severity index. RESULTS The proposed TSIs are a sensitive measure of trigonocephaly malformation that can provide a classification accuracy of 96% with a specificity of 95%, in contrast with 82% of the metopic severity index at the same specificity level. CONCLUSIONS We completed exploratory analysis of outline-based severity measurements computed from computed tomographic image planes of the cranium. These TSIs enable quantitative analysis of cranium features in isolated metopic synostosis that may not be accurately detected by analytic tools derived from a sparse set of traditional interlandmark and semilandmark distances.

Ruiz-Correa, Salvador; Starr, Jacqueline R.; Lin, H. Jill; Kapp-Simon, Kathleen A.; Sze, Raymond W.; Ellenbogen, Richard G.; Speltz, Matthew L.; Cunningham, Michael L.

2012-01-01

360

Graphical methods for quantifying macromolecules through bright field imaging  

PubMed Central

Bright field imaging of biological samples stained with antibodies and/or special stains provides a rapid protocol for visualizing various macromolecules. However, this method of sample staining and imaging is rarely employed for direct quantitative analysis due to variations in sample fixations, ambiguities introduced by color composition and the limited dynamic range of imaging instruments. We demonstrate that, through the decomposition of color signals, staining can be scored on a cell-by-cell basis. We have applied our method to fibroblasts grown from histologically normal breast tissue biopsies obtained from two distinct populations. Initially, nuclear regions are segmented through conversion of color images into grayscale and detection of dark elliptic features. Subsequently, the strength of staining is quantified by a color decomposition model that is optimized by a graph cut algorithm. In rare cases where the nuclear signal is significantly altered as a result of sample preparation, nuclear segmentation can be validated and corrected. Finally, segmented stained patterns are associated with each nuclear region following region-based tessellation. Compared to classical non-negative matrix factorization, the proposed method (i) improves color decomposition, (ii) has better noise immunity, (iii) is more invariant to initial conditions, and (iv) has superior computing performance. Contact: hchang@lbl.gov
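
As a point of comparison, the classical baseline the authors benchmark against can be approximated in a few lines: Beer-Lambert color deconvolution, where per-pixel optical density is unmixed against fixed stain vectors by non-negative least squares. The stain matrix below is a hypothetical, uncalibrated example.

    import numpy as np
    from scipy.optimize import nnls

    # Beer-Lambert: optical density OD = -log(I/I0) is linear in stain amounts.
    # Columns of S are unit RGB absorbance vectors for two stains
    # (hypothetical hematoxylin/DAB-like values, not calibrated ones).
    S = np.array([[0.65, 0.27],
                  [0.70, 0.57],
                  [0.29, 0.78]])
    S = S / np.linalg.norm(S, axis=0)

    rgb = np.array([180, 120, 200], float)        # one observed pixel, 8-bit
    od = -np.log(np.clip(rgb, 1, 255) / 255.0)    # optical density per channel

    amounts, residual = nnls(S, od)               # non-negative stain amounts
    print("stain amounts:", amounts.round(3), "residual:", round(residual, 4))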

Chang, Hang; DeFilippis, Rosa Anna; Tlsty, Thea D.; Parvin, Bahram

2009-01-01

361

Quantifying uncertainty, variability and likelihood for ordinary differential equation models  

PubMed Central

Background In many applications, ordinary differential equation (ODE) models are subject to uncertainty or variability in initial conditions and parameters. Both uncertainty and variability can be quantified in terms of a probability density function on the state and parameter space. Results The partial differential equation that describes the evolution of this probability density function has a form that is particularly amenable to application of the well-known method of characteristics. The value of the density at some point in time is directly accessible by the solution of the original ODE extended by a single extra dimension (for the value of the density). This leads to simple methods for studying uncertainty, variability and likelihood, with significant advantages over more traditional Monte Carlo and related approaches especially when studying regions with low probability. Conclusions While such approaches based on the method of characteristics are common practice in other disciplines, their advantages for the study of biological systems have so far remained unrecognized. Several examples illustrate performance and accuracy of the approach and its limitations.
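
A minimal sketch of the extended-ODE idea in one dimension: along a trajectory of dx/dt = f(x), the continuity equation reduces to d(log rho)/dt = -f'(x), so a single extra state carries the density value. The logistic right-hand side is an illustrative choice.

    import numpy as np
    from scipy.integrate import solve_ivp

    # dx/dt = f(x) with logistic growth; rho along the characteristic obeys
    # d(log rho)/dt = -df/dx, so one extra state dimension suffices.
    r = 1.0
    f  = lambda x: r * x * (1.0 - x)
    fp = lambda x: r * (1.0 - 2.0 * x)

    def extended(t, y):
        x, logrho = y
        return [f(x), -fp(x)]

    # Propagate one point of the initial density (rho0(x0) = 1 here)
    sol = solve_ivp(extended, (0.0, 5.0), [0.1, 0.0])
    xT, logrhoT = sol.y[:, -1]
    print(f"x(5) = {xT:.4f}, density along characteristic = {np.exp(logrhoT):.4f}")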

2010-01-01

362

Quantifying Kinematic Substructure in the Milky Way's Stellar Halo  

NASA Astrophysics Data System (ADS)

We present and analyze the positions, distances, and radial velocities for over 4000 blue horizontal-branch (BHB) stars in the Milky Way's halo, drawn from SDSS DR8. We search for position-velocity substructure in these data, a signature of the hierarchical assembly of the stellar halo. Using a cumulative "close pair distribution" as a statistic in the four-dimensional space of sky position, distance, and velocity, we quantify the presence of position-velocity substructure at high statistical significance among the BHB stars: pairs of BHB stars that are close in position on the sky tend to have more similar distances and radial velocities compared to a random sampling of these overall distributions. We make analogous mock observations of 11 numerical halo formation simulations, in which the stellar halo is entirely composed of disrupted satellite debris, and find a level of substructure comparable to that seen in the actually observed BHB star sample. This result quantitatively confirms the hierarchical build-up of the stellar halo through a signature in phase (position-velocity) space. In detail, the structure present in the BHB stars is somewhat less prominent than that seen in most simulated halos, quite possibly because BHB stars represent an older sub-population. BHB stars located beyond 20 kpc from the Galactic center exhibit stronger substructure than at r_gc < 20 kpc.
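
The statistic can be prototyped directly: among pairs of stars close on the sky, count the fraction that are also close in distance and radial velocity, and compare against a null in which distances and velocities are shuffled. The catalog below is random, so the two fractions should agree; for a structured halo the observed fraction exceeds the null.

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical catalog: sky position (deg), distance (kpc), velocity (km/s)
    n = 2000
    ra, dec = rng.uniform(0, 360, n), rng.uniform(-30, 60, n)
    dist, vrad = rng.uniform(5, 60, n), rng.normal(0, 120, n)

    def close_pair_fraction(ra, dec, dist, vrad,
                            sky_deg=2.0, ddist=2.0, dvel=30.0):
        """Among sky-close pairs, the fraction also close in distance
        and radial velocity (a 4-D close-pair statistic)."""
        hits = tot = 0
        for i in range(len(ra)):
            sep = np.hypot((ra[i+1:] - ra[i]) * np.cos(np.radians(dec[i])),
                           dec[i+1:] - dec[i])
            sky = sep < sky_deg
            tot += sky.sum()
            hits += np.sum(sky
                           & (np.abs(dist[i+1:] - dist[i]) < ddist)
                           & (np.abs(vrad[i+1:] - vrad[i]) < dvel))
        return hits / max(tot, 1)

    observed = close_pair_fraction(ra, dec, dist, vrad)
    null = close_pair_fraction(ra, dec, rng.permutation(dist),
                               rng.permutation(vrad))
    print(f"observed fraction = {observed:.3f}, shuffled null = {null:.3f}")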

Xue, Xiang-Xiang; Rix, Hans-Walter; Yanny, Brian; Beers, Timothy C.; Bell, Eric F.; Zhao, Gang; Bullock, James S.; Johnston, Kathryn V.; Morrison, Heather; Rockosi, Constance; Koposov, Sergey E.; Kang, Xi; Liu, Chao; Luo, Ali; Lee, Young Sun; Weaver, Benjamin A.

2011-09-01

363

Methods of Quantifying Change in Multiple Risk Factor Interventions  

PubMed Central

Objective Risky behaviors such as smoking, alcohol abuse, physical inactivity, and poor diet are detrimental to health, costly, and often co-occur. Greater efforts are being targeted at changing multiple risk behaviors to more comprehensively address the health needs of individuals and populations. With increased interest in multiple risk factor interventions, the field will need ways to conceptualize the issue of overall behavior change. Method Analyzing data from over 8,000 participants in four multibehavioral interventions, we present five different methods for quantifying and reporting changes in multiple risk behaviors. Results The methods are: (a) the traditional approach of reporting changes in individual risk behaviors; (b) creating a combined statistical index of overall behavior change, standardizing scores across behaviors on different metrics; (c) using a behavioral index; (d) calculating an overall impact factor; and (e) using overarching outcome measures such as quality of life, related biometrics, or cost outcomes. We discuss the methods’ interpretations, strengths, and limitations. Conclusion Given the lack of consensus in the field on how to examine change in multiple risk behaviors, we recommend researchers employ and compare multiple methods in their publications. A dialogue is needed to work towards developing a consensus for optimal ways of conceptualizing and reporting changes in multibehavioral interventions.
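
Method (b) is simple to make concrete: z-score each behavior's change across participants to remove the metric differences, then average within participant to get one overall index. The change scores below are hypothetical.

    import numpy as np

    # Change scores for three behaviors on different metrics:
    # cigarettes/day reduced, exercise minutes gained, fruit servings gained
    changes = np.array([
        [5.0,  30.0, 1.0],
        [0.0,  10.0, 0.5],
        [10.0,  0.0, 2.0],
        [2.0,  45.0, 0.0],
    ])

    # Standardize each behavior across participants, then average within
    # participant to get one overall behavior-change index.
    z = (changes - changes.mean(axis=0)) / changes.std(axis=0, ddof=1)
    overall = z.mean(axis=1)
    print("overall change index per participant:", overall.round(2))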

Prochaska, Judith J.; Velicer, Wayne F.; Nigg, Claudio R.; Prochaska, James O.

2008-01-01

364

Quantifying residual hydrogen adsorption in low-temperature STMs  

NASA Astrophysics Data System (ADS)

We report on low-temperature scanning tunneling microscopy observations demonstrating that individual Ti atoms on hexagonal boron nitride dissociate and adsorb hydrogen without measurable reaction barrier. The clean and hydrogenated states of the adatoms are clearly discerned by their apparent height and their differential conductance revealing the Kondo effect upon hydrogenation. Measurements at 50 K and 5 × 10^-11 mbar indicate a sizable hydrogenation within only 1 h originating from the residual gas pressure, whereas measurements at 4.7 K can be carried out for days without H2 contamination problems. However, heating up a low-T STM to operate it at variable temperature results in very sudden hydrogenation at around 17 K that correlates with a sharp peak in the total chamber pressure. From a quantitative analysis we derive the desorption energies of H2 on the cryostat walls. We find evidence for hydrogen contamination also during Ti evaporation and propose a strategy on how to dose transition metal atoms in the cleanliest fashion. The present contribution raises awareness of hydrogenation under seemingly ideal ultra-high vacuum conditions, it quantifies the H2 uptake by isolated transition metal atoms and its thermal desorption from the gold plated cryostat walls.
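
To connect a desorption-peak temperature to an energy, a first-order Redhead-style estimate can be sketched; the attempt frequency and heating rate below are assumed round numbers, not the paper's fitted values.

    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def redhead_energy(T_peak, nu=1e13, beta=0.1):
        """First-order Redhead estimate of the desorption energy (J/mol).
        nu: assumed attempt frequency (1/s); beta: heating rate (K/s)."""
        return R * T_peak * (np.log(nu * T_peak / beta) - 3.46)

    E = redhead_energy(17.0)   # pressure burst near 17 K while warming
    print(f"H2 desorption energy ~ {E/1e3:.1f} kJ/mol "
          f"(~{E/96485:.3f} eV per molecule)")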

Natterer, F. D.; Patthey, F.; Brune, H.

2013-09-01

365

Quantifying interictal metabolic activity in human temporal lobe epilepsy  

SciTech Connect

The majority of patients with complex partial seizures of unilateral temporal lobe origin have interictal temporal hypometabolism on [18F]fluorodeoxyglucose positron emission tomography (FDG PET) studies. Often, this hypometabolism extends to ipsilateral extratemporal sites. The use of accurately quantified metabolic data has been limited by the absence of an equally reliable method of anatomical analysis of PET images. We developed a standardized method for visual placement of anatomically configured regions of interest on FDG PET studies, which is particularly adapted to the widespread, asymmetric, and often severe interictal metabolic alterations of temporal lobe epilepsy. This method was applied by a single investigator, who was blind to the identity of subjects, to 10 normal control and 25 interictal temporal lobe epilepsy studies. All subjects had normal brain anatomical volumes on structural neuroimaging studies. The results demonstrate ipsilateral thalamic and temporal lobe involvement in the interictal hypometabolism of unilateral temporal lobe epilepsy. Ipsilateral frontal, parietal, and basal ganglial metabolism is also reduced, although not as markedly as is temporal and thalamic metabolism.

Henry, T.R.; Mazziotta, J.C.; Engel, J. Jr.; Christenson, P.D.; Zhang, J.X.; Phelps, M.E.; Kuhl, D.E. (Univ. of California, Los Angeles (USA))

1990-09-01

366

Choosing among techniques for quantifying single-case intervention effectiveness.  

PubMed

If single-case experimental designs are to be used to establish guidelines for evidence-based interventions in clinical and educational settings, numerical values that reflect treatment effect sizes are required. The present study compares four recently developed procedures for quantifying the magnitude of intervention effect using data with known characteristics. Monte Carlo methods were used to generate AB designs data with potential confounding variables (serial dependence, linear and curvilinear trend, and heteroscedasticity between phases) and two types of treatment effect (level and slope change). The results suggest that data features are important for choosing the appropriate procedure and, thus, inspecting the graphed data visually is a necessary initial stage. In the presence of serial dependence or a change in data variability, the nonoverlap of all pairs (NAP) and the slope and level change (SLC) were the only techniques of the four examined that performed adequately. Introducing a data correction step in NAP renders it unaffected by linear trend, as is also the case for the percentage of nonoverlapping corrected data and SLC. The performance of these techniques indicates that professionals' judgments concerning treatment effectiveness can be readily complemented by both visual and statistical analyses. A flowchart to guide selection of techniques according to the data characteristics identified by visual inspection is provided. PMID:21658534
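
Among the four procedures, the nonoverlap of all pairs (NAP) is the simplest to state: the share of all baseline-intervention data pairs in which the intervention point improves on the baseline point, with ties counted half. A sketch with hypothetical AB-design data:

    import numpy as np

    def nap(phase_a, phase_b):
        """Nonoverlap of All Pairs: fraction of (A, B) comparisons in which
        the B-phase point improves on the A-phase point (ties count half).
        Assumes higher values mean improvement."""
        a = np.asarray(phase_a)[:, None]
        b = np.asarray(phase_b)[None, :]
        wins = (b > a).sum() + 0.5 * (b == a).sum()
        return wins / (a.size * b.size)

    # Hypothetical AB single-case data (baseline A, intervention B)
    A = [3, 4, 3, 5, 4]
    B = [6, 7, 5, 8, 7, 9]
    print(f"NAP = {nap(A, B):.2f}")   # 1.0 = complete nonoverlap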

Manolov, Rumen; Solanas, Antonio; Sierra, Vicenta; Evans, Jonathan J

2011-09-01

367

Cardiovascular regulation during sleep quantified by symbolic coupling traces  

NASA Astrophysics Data System (ADS)

Sleep is a complex regulated process with short periods of wakefulness and different sleep stages. These sleep stages modulate autonomous functions such as blood pressure and heart rate. The method of symbolic coupling traces (SCT) is used to analyze and quantify time-delayed coupling of these measurements during different sleep stages. The symbolic coupling traces, defined as the symmetric and diametric traces of the bivariate word distribution matrix, allow the quantification of time-delayed coupling. In this paper, the method is applied to heart rate and systolic blood pressure time series during different sleep stages for healthy controls as well as for normotensive and hypertensive patients with sleep apneas. Using the SCT, significant different cardiovascular mechanisms not only between the deep sleep and the other sleep stages but also between healthy subjects and patients can be revealed. The SCT method is applied to model systems, compared with established methods, such as cross correlation, mutual information, and cross recurrence analysis and demonstrates its advantages especially for nonstationary physiological data. As a result, SCT proves to be more specific in detecting delays of directional interactions than standard coupling analysis methods and yields additional information which cannot be measured by standard parameters of heart rate and blood pressure variability. The proposed method may help to indicate the pathological changes in cardiovascular regulation and also the effects of continuous positive airway pressure therapy on the cardiovascular system.

Suhrbier, A.; Riedl, M.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.

2010-12-01

368

Rapidly quantifying the relative distention of a human bladder  

NASA Technical Reports Server (NTRS)

A device and method was developed to rapidly quantify the relative distention of the bladder of a human subject. An ultrasonic transducer is positioned on the human subject near the bladder. A microprocessor controlled pulser excites the transducer by sending an acoustic wave into the human subject. This wave interacts with the bladder walls and is reflected back to the ultrasonic transducer where it is received, amplified, and processed by the receiver. The resulting signal is digitized by an analog to digital converter, controlled by the microprocessor again, and is stored in data memory. The software in the microprocessor determines the relative distention of the bladder as a function of the propagated ultrasonic energy. Based on programmed scientific measurements and the human subject's past history as contained in program memory, the microprocessor sends out a signal to turn on any or all of the available alarms. The alarm system includes and audible alarm, the visible alarm, the tactile alarm, and the remote wireless alarm.

Companion, John A. (inventor); Heyman, Joseph S. (inventor); Mineo, Beth A. (inventor); Cavalier, Albert R. (inventor); Blalock, Travis N. (inventor)

1991-01-01

369

Rapidly quantifying the relative distention of a human bladder  

NASA Technical Reports Server (NTRS)

A device and method of rapidly quantifying the relative distention of the bladder in a human subject are disclosed. The ultrasonic transducer which is positioned on the subject in proximity to the bladder is excited by a pulser under the command of a microprocessor to launch an acoustic wave into the patient. This wave interacts with the bladder walls and is reflected back to the ultrasonic transducer, when it is received, amplified and processed by the receiver. The resulting signal is digitized by an analog-to-digital converter under the command of the microprocessor and is stored in the data memory. The software in the microprocessor determines the relative distention of the bladder as a function of the propagated ultrasonic energy; and based on programmed scientific measurements and individual, anatomical, and behavioral characterists of the specific subject as contained in the program memory, sends out a signal to turn on any or all of the audible alarm, the visible alarm, the tactile alarm, and the remote wireless alarm.

Companion, John A. (inventor); Heyman, Joseph S. (inventor); Mineo, Beth A. (inventor); Cavalier, Albert R. (inventor); Blalock, Travis N. (inventor)

1989-01-01

370

Quantifying and identifying the overlapping community structure in networks  

NASA Astrophysics Data System (ADS)

It has been shown that the communities of complex networks often overlap with each other. However, there is no effective method to quantify the overlapping community structure. In this paper, we propose a metric to address this problem. Instead of assuming that one node can only belong to one community, our metric assumes that a maximal clique only belongs to one community. In this way, the overlaps between communities are allowed. To identify the overlapping community structure, we construct a maximal clique network from the original network, and prove that the optimization of our metric on the original network is equivalent to the optimization of Newman's modularity on the maximal clique network. Thus the overlapping community structure can be identified through partitioning the maximal clique network using any modularity optimization method. The effectiveness of our metric is demonstrated by extensive tests on both artificial networks and real world networks with a known community structure. The application to the word association network also reproduces excellent results.
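
The construction is easy to prototype with networkx. Note the paper's equivalence holds for a specific weighting of the maximal clique network; this sketch uses unweighted clique adjacency and a greedy modularity optimizer purely for illustration.

    import networkx as nx
    from itertools import combinations
    from networkx.algorithms.community import greedy_modularity_communities

    # Toy network with two clusters joined through nodes 3 and 4
    G = nx.Graph([(0, 1), (0, 2), (1, 2), (2, 3), (3, 4),
                  (4, 5), (5, 6), (4, 6), (6, 7), (5, 7)])

    # Maximal clique network: one node per maximal clique; cliques are
    # linked when they share at least one node of the original network.
    cliques = [frozenset(c) for c in nx.find_cliques(G)]
    C = nx.Graph()
    C.add_nodes_from(range(len(cliques)))
    for i, j in combinations(range(len(cliques)), 2):
        if cliques[i] & cliques[j]:
            C.add_edge(i, j)

    # Partitioning the clique network assigns each clique to one community;
    # an original node may appear in several communities (the overlaps).
    for k, comm in enumerate(greedy_modularity_communities(C)):
        members = set().union(*(cliques[i] for i in comm))
        print(f"community {k}: cliques {sorted(comm)} -> nodes {sorted(members)}")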

Shen, Hua-Wei; Cheng, Xue-Qi; Guo, Jia-Feng

2009-07-01

371

Quantifying nanoscale order in amorphous materials: simulating fluctuation electron microscopy of amorphous silicon  

NASA Astrophysics Data System (ADS)

Fluctuation electron microscopy (FEM) is explicitly sensitive to 3- and 4-body atomic correlation functions in amorphous materials; this is sufficient to establish the existence of structural order on the nanoscale, even when the radial distribution function extracted from diffraction data appears entirely amorphous. However, it remains a formidable challenge to invert the FEM data into a quantitative model of the structure. Here, we quantify the FEM method for a-Si by forward simulating the FEM data from a family of high quality atomistic models. Using a modified WWW method, we construct computational models that contain 10-40 vol% of topologically crystalline grains, 1-3 nm in diameter, in an amorphous matrix and calculate the FEM signal, which consists of the statistical variance V(k) of the dark-field image as a function of scattering vector k. We show that V(k) is a complex function of the size and volume fraction of the ordered regions present in the amorphous matrix. However, the ratio of the variance peaks as a function of k affords the size of the ordered regions, and the magnitude of the variance affords a semi-quantitative measure of the volume fraction. We have also compared models that contain various amounts of strain in the ordered regions. This analysis shows that the amount of strain in realistic models is sufficient to mute variance peaks at high k. We conclude with a comparison between the model results and experimental data.

Bogle, Stephanie N.; Voyles, Paul M.; Khare, Sanjay V.; Abelson, John R.

2007-11-01

372

Step changes in the flood frequency curve - Quantifying effects of catchment storage thresholds  

NASA Astrophysics Data System (ADS)

In previous work the authors have shown that nonlinear catchment response related to a storage threshold may lead to a step change in the flood frequency curve. In the present study, we quantify the impact of temporal and spatial changes in storage properties on the magnitude of the step change. We use the maximum of the second derivative (curvature) of the flood peaks with respect to their return period as a new measure for the magnitude of the step change. The results of the analysis apply to catchments where runoff is generated by the saturation excess mechanism and a clear separation exists between a permanently saturated region and a variably saturated region with spatially uniform storage deficits. A sensitivity analysis with a stochastic rainfall model and a simple rainfall-runoff model shows that the magnitude of the step change decreases with increasing temporal variability of antecedent soil storage, and increases with increasing area of the variably saturated region. The return period where the step change occurs is very similar to the return period of the rainfall volume that is needed to exceed the storage threshold. We present diagrams that show the joint effects of spatial and temporal storage variability on the magnitude and return period of the step change. The diagrams may be useful for assessing whether step changes in the flood frequency curve are likely to occur in catchments where the runoff generation characteristics are as examined here and the flood records are too short to indicate a step change.
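
The step-change measure is a numerical curvature; a sketch on a hypothetical flood frequency curve with a jump, using finite differences for the second derivative of peak discharge with respect to return period:

    import numpy as np

    # Hypothetical flood peaks Q (m^3/s) against return period T (years),
    # with a step change around T ~ 50
    T = np.array([2, 5, 10, 20, 50, 100, 200, 500], float)
    Q = np.array([40, 55, 65, 80, 140, 170, 185, 200], float)

    # Magnitude of the step change: maximum second derivative of Q w.r.t. T
    curvature = np.gradient(np.gradient(Q, T), T)
    i = int(np.argmax(curvature))
    print(f"max curvature at T ~ {T[i]:.0f} years (value {curvature[i]:.2f})")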

Rogger, Magdalena; Viglione, Alberto; Derx, Julia; Bloeschl, Guenter

2014-05-01

373

Educational Outreach: The Space Science Road Show  

NASA Astrophysics Data System (ADS)

The poster presented will give an overview of a study towards a "Space Road Show". The topic of this show is space science. The target group is adolescents, aged 12 to 15, at Dutch high schools. The show and its accompanying experiments would be supported with suitable educational material. Science teachers at schools can decide for themselves whether they want to use this material in advance, afterwards, or not at all. The aims of this outreach effort are: to motivate students toward space science and engineering, to help them understand the importance of (space) research, to give them a positive feeling about the possibilities offered by space, and in the process give them useful knowledge of space basics. The show revolves around three main themes: applications, science and society. First the students will get some historical background on the importance of space/astronomy to civilization. Secondly they will learn more about novel uses of space. On the one hand they will learn of "Views on Earth" involving technologies like Remote Sensing (or Spying), Communication, Broadcasting, GPS and Telemedicine. On the other hand they will experience "Views on Space" illustrated by past, present and future space research missions, like the space exploration missions (Cassini/Huygens, Mars Express and Rosetta) and the astronomy missions (Soho and XMM). Meanwhile, the students will learn more about the technology of launchers and satellites needed to accomplish these space missions. Throughout the show, and especially towards the end, attention will be paid to the third theme, "Why go to space?". Other reasons for people to go into space will be explored. An important question here is the commercial (manned) exploration of space. Questions about the benefit of space to society are thus integrated into the entire show, raising fundamental questions about the effects of space travel on the environment, on poverty, and on other moral issues. The show attempts to connect scientific and community thought. The difficulty with a show this elaborate and intricate is communicating at a level understandable to teenagers, whilst not treating them like children. Professional space scientists know how easy it is to lose oneself in technical specifics, which would, of course, only confuse young people. The author would like to discuss the ideas for this show with a knowledgeable audience and hopefully receive some (constructive) feedback.

Cox, N. L. J.

2002-01-01

374

QUANTIFYING DYNAMIC MPPT PERFORMANCE UNDER REALISTIC CONDITIONS FIRST TEST RESULTS - THE WAY FORWARD  

Microsoft Academic Search

While various support mechanisms are currently applied to overcome the cost barrier associated with photovoltaic (PV) systems, performance plays a critical role in the profitability calculation and in the determination of the payback period. In this context, increasing attention is paid to the efficiency of inverters and in particular to the performance of Maximum Power Point Trackers (MPPT).
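Although the record is truncated, the usual figure of merit behind dynamic MPPT tests is the ratio of energy actually harvested to the energy available at the maximum power point; a sketch under that assumption, with an invented test profile:

```python
# Sketch: dynamic MPPT efficiency = harvested energy / available MPP energy.
# The irradiance profile and tracker behaviour below are assumed, not from
# the tests the abstract refers to.
import numpy as np

def mppt_efficiency(t, p_actual, p_available):
    return np.trapz(p_actual, t) / np.trapz(p_available, t)

t = np.linspace(0, 600, 601)                        # 10-minute window, s
p_avail = 1000 + 500 * np.sin(2 * np.pi * t / 120)  # fluctuating MPP power, W
p_act = 0.97 * p_avail - 5                          # tracker lagging slightly
print(f"dynamic MPPT efficiency: {mppt_efficiency(t, p_act, p_avail):.1%}")
```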

B. Bletterie; R. Bruendlinger; S. Spielauer

375

Liquid Crystal Research Shows Deformation By Drying  

NASA Technical Reports Server (NTRS)

These images, from David Weitz's liquid crystal research, show ordered, uniformly sized droplets (upper left) before they are dried from their solution. After the droplets are dried (upper right), they are viewed through crossed polarizers, which reveal the deformation caused by drying, a process that orients the bipolar structure of the liquid crystal within the droplets. When an electric field is applied to the dried droplets (lower left) and then increased (lower right), the liquid crystal within the droplets switches its alignment, reducing the amount of light scattered by the droplets when a beam is shone through them.

2003-01-01

376

Quantified Self and Comprehensive Geriatric Assessment: Older Adults Are Able to Evaluate Their Own Health and Functional Status  

PubMed Central

Background There is an increased interest among individuals in quantifying their own health and functional status. The aim of this study was to examine the concordance of answers to a self-administered questionnaire exploring health and functional status with information collected during a full clinical examination performed by a physician among cognitively healthy individuals (CHI) and older patients with mild cognitive impairment (MCI) or mild-to-moderate Alzheimer disease (AD). Methods Based on a cross-sectional design, a total of 60 older adults (20 CHI, 20 patients with MCI, and 20 patients with mild-to-moderate AD) were recruited in the memory clinic of Angers, France. All participants completed a self-administered paper questionnaire composed of 33 items exploring age, gender, nutrition, place of living, social resources, drugs taken daily, memory complaint, mood and general feeling, fatigue, activities of daily living, physical activity and history of falls. Participants then underwent a full clinical examination by a physician exploring the same domains. Results High concordance between the self-administered questionnaire and the physician's clinical examination was shown. The few divergences were related to cognitive status, the answers of AD and MCI patients to the self-administered questionnaire being less reliable than those of CHI. Conclusion Older adults are able to evaluate their own health and functional status, regardless of their cognitive status. This result needs to be confirmed; it opens new perspectives for the quantified-self trend and could be helpful in the daily clinical practice of primary care.
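The abstract does not name its concordance statistic; Cohen's kappa is one common choice for agreement between paired categorical answers, sketched here on invented data.

```python
# Sketch: quantifying self-report vs. clinician concordance with Cohen's
# kappa. The choice of statistic and the toy answers are assumptions, not
# taken from the study.
from sklearn.metrics import cohen_kappa_score

self_report = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # e.g., "falls in past year": yes/no
clinician   = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
kappa = cohen_kappa_score(self_report, clinician)
print(f"Cohen's kappa = {kappa:.2f}")           # 1.0 = perfect concordance
```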

Beauchet, Olivier; Launay, Cyrille P.; Merjagnan, Christine; Kabeshova, Anastasiia; Annweiler, Cedric

2014-01-01

377

The object-oriented trivia show (TOOTS)  

Microsoft Academic Search

OOPSLA has a longstanding tradition of being a forum for discussing the cutting edge of technology in a fun and participatory environment. The type of events sponsored by OOPSLA sometimes border on the unconventional. This event represents an atypical panel that conforms to the concept of a game show that is focused on questions and answers related to SPLASH, OOPSLA,

Jeff Gray; Jules White

2010-01-01

378

Creating Computer Slide Shows the Easy Way.  

ERIC Educational Resources Information Center

Explains how to create slide shows via computer using common software programs such as ClarisWorks. Highlights include identifying the audience, organizing the information, becoming familiar with technical specifications of the equipment, the use of text, style consistency, the use of graphics, and the use of color. (LRW)

Anderson, Mary Alice

1996-01-01

379

Video showing use of circuit construction kit  

NSDL National Science Digital Library

The linked video shows "Mr. Anderson" using the circuit construction kit to explore Ohm's Law. The kit allows students to predict either general patterns or specific values when changes are made to a circuit. http://www.bozemanscience.com/science-videos/2011/6/26/voltage-current-and-resistance.html
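As a minimal illustration of the predictions the video invites, Ohm's law I = V/R in a few lines (the voltage and resistances are arbitrary examples):

```python
# Sketch: Ohm's law predictions -- halving resistance at fixed voltage
# doubles the current.
def current(voltage, resistance):
    return voltage / resistance

for R in (10.0, 5.0, 2.5):                    # ohms
    print(f"V = 9 V, R = {R:>4} ohm -> I = {current(9.0, R):.2f} A")
```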

2012-02-28

380

Showing Enantiomorphous Crystals of Tartaric Acid  

ERIC Educational Resources Information Center

Most of the articles and textbooks that show drawings of enantiomorphous crystals use an inadequate view for appreciating the fact that they are non-superimposable mirror images of one another. If a graphical presentation of crystal chirality is not evident, the main attribute of crystal enantiomorphism cannot be recognized by students. The classic…

Andrade-Gamboa, Julio

2007-01-01

381

Analysis shows process industry accident losses rising  

Microsoft Academic Search

An analysis of the 150 largest losses caused by accidents and natural phenomena in the hydrocarbon processing and chemical industries during a period of 30 years ending Jan. 1, 1989, shows that the cost and number of losses are increasing. The catastrophic losses analyzed were used to develop statistical trends from the losses in a data base. The trended data

J. A. Krembs; J. M. Connolly

1990-01-01

382

Where the Boys Are: Show Chorus.  

ERIC Educational Resources Information Center

Boys are given a chance to experience choral music through the aid of a system that lessens peer pressure. Through auditions, the teacher can evaluate musical ability, coordination level, interaction with peers, and ability to learn rapidly. Show chorus has brought a unique interest to the music program. (AU/CS)

Mancuso, Sandra L.

1983-01-01

383

Food safety at shows and fairs  

Microsoft Academic Search

Food events, such as food festivals, agricultural shows and village fetes, take place throughout the UK, usually in outdoor locations. Consumers' overall satisfaction with the food purchased at such events is high, and they have few or no concerns about the safety of the food on sale. A wide variety of foods, including some high-risk products, are offered for sale to the

Denise Worsfold

2003-01-01

384

Laser entertainment and light shows in education  

NASA Astrophysics Data System (ADS)

Laser shows and beam effects have been a source of entertainment since their first public performance on May 9, 1969, at Mills College in Oakland, California. Since 1997, the Photonics Center, Ngee Ann Polytechnic, Singapore, has been using laser shows as a teaching tool. Students are able to exhibit their creative skills and at the same time learn how lasers are used in the entertainment industry. Students acquire a number of skills, including handling a three-phase power supply, operating a cooling system, and aligning the laser. Students also acquire an appreciation of the arts, learning about shapes and contours as they develop graphics for the shows. After holography, laser show animation provides a combination of the arts and technology. This paper aims to briefly describe how a krypton-argon laser, galvanometer scanners, a polychromatic acousto-optic modulator and related electronics are put together to build a laser projector. The paper also describes how students are trained to make their own laser animation and beam effects set to music, and at the same time gain an appreciation of the operation of a Class IV laser and the handling of optical components.

Sabaratnam, Andrew T.; Symons, Charles

2002-05-01

385

Quantifiable effectiveness of experimental scaling of river- and delta morphodynamics and stratigraphy  

NASA Astrophysics Data System (ADS)

Laboratory experiments to simulate landscapes and stratigraphy often suffer from scale effects, because reducing length and time scales leads to different behaviour of water and sediment. Classically, scaling proceeded from dimensional analysis of the equations of motion and sediment transport, and minor concessions, such as vertical length scale distortion, led to acceptable results. In the past decade many experiments were done that seriously violated these scaling rules but nevertheless produced significant and insightful results that resemble the real world in quantifiable ways. Here we focus on self-formed fluvial channels and channel patterns in experiments. The objectives of this paper are 1) to identify which aspects of scaling considerations are most important for experiments that simulate the morphodynamics and stratigraphy of rivers and deltas, and 2) to establish a design strategy for experiments based on a combination of relaxed classical scale rules, the theory of bars and meanders, and small-scale experiments focused on specific processes. We present a number of small laboratory setups and protocols that we use to rapidly quantify erosional and depositional types of forms and dynamics that develop in the landscape experiments as a function of detailed properties, such as effective material strength, and to assess potential scale effects. Most importantly, the width-to-depth ratio of channels determines the bar pattern and meandering tendency. The strength of floodplain material determines these channel dimensions, and theory predicts that laboratory rivers should have 1.5 times larger width-to-depth ratios for the same bar pattern. We show how floodplain formation can be controlled by adding silt-sized silica flour, bentonite, Medicago sativa (alfalfa) or Partially Hydrolyzed PolyAcrylamide (a synthetic polymer) to poorly sorted sediment. The experiments demonstrate that there is a narrow range of conditions between no mobility of bed or banks and too much mobility. The density of vegetation and the volume proportion of silt allow well-controllable channel dimensions, whereas the polymer proved difficult to control. The theory, detailed methods of quantification, and experimental setups presented here show that the rivers and deltas created in the laboratory seem to behave like natural rivers when the experimental conditions adhere to the relaxed scaling rules identified herein, and that the required types of fluvio-deltaic morphodynamics can be reproduced based on conditions and sediments selected through a series of small-scale experiments.

Kleinhans, Maarten G.; van Dijk, Wout M.; van de Lageweg, Wietse I.; Hoyal, David C. J. D.; Markies, Henk; van Maarseveen, Marcel; Roosendaal, Chris; van Weesep, Wendell; van Breemen, Dimitri; Hoendervoogt, Remko; Cheshier, Nathan

2014-06-01

386

Quantifying the effects of material properties on analog models of critical taper wedges  

NASA Astrophysics Data System (ADS)

Analogue models are inherently handmade and reflect their creator's shaping character. For example, sieving style, in combination with grain geometry and size distribution, has been claimed to influence bulk material properties and the outcome of analogue experiments. Few studies exist that quantify these effects, and here we aim to investigate the impact of the bulk properties of granular materials on the structural development of convergent brittle wedges in analogue experiments. In a systematic fashion, natural sands as well as glass beads of different grain size and size distribution were sieved by different persons from different heights, and the resulting bulk density was measured. A series of analogue experiments in both the push and the pull setup was performed. The differences in the outcome of the experiments were analyzed based on sidewall pictures and 3D laser scanning of the surface. A new high-resolution approach to measuring surface slope automatically is introduced and applied to the evaluation of images and profiles. This procedure is compared to manual methods of determining surface slope. The effect of sidewall friction was quantified by measuring lateral changes in surface slope. The resulting dataset is used to identify the main differences between pushed and pulled wedge experiments in the light of critical taper theory. The bulk density of granular material was found to be highly dependent on sieve height. Sieve heights of less than 50 cm produced a bulk density that was up to 10% less than the maximum bulk density, an effect equally shown for different people sieving the material. Glass beads were found to produce a more regular structure of in-sequence thrusts, in both space and time, than sands, while displaying less variability. Surface slope was found to be highly transient in pushed wedge experiments, whereas it attained a stable value in pulled experiments. Pushed wedges are inferred to develop into a supercritical state because they exceed the theoretical critical surface slope by 5-15°. Since bulk density affects shear strength, different sieving styles could potentially alter the results of analogue models and must be taken into consideration when filling in material. Results from this study also show that only wedges in the pull setup are accurately described by critical taper theory.
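One plausible automated counterpart to manual slope picking is a least-squares fit to a scanned elevation profile; a sketch, with the fitting window and noise level assumed rather than taken from the study:

```python
# Sketch: wedge surface slope from a laser-scanned elevation profile via a
# linear fit. Profile geometry and noise are synthetic stand-ins.
import numpy as np

def surface_slope_deg(x, z):
    """x: horizontal positions (cm); z: surface elevations (cm)."""
    m, _ = np.polyfit(x, z, 1)                # least-squares slope of the taper
    return np.degrees(np.arctan(m))

x = np.linspace(0, 30, 121)                   # profile across the wedge, cm
z = np.tan(np.radians(8.0)) * x + np.random.default_rng(1).normal(0, 0.05, x.size)
print(f"fitted surface slope: {surface_slope_deg(x, z):.1f} deg")
```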

Hofmann, F.; Rosenau, M.; Schreurs, G.; Friedrich, A. M.

2012-04-01

387

Quantifying fluvial sediment flux on a monsoonal mega-river: the Mekong  

NASA Astrophysics Data System (ADS)

Quantifying sediment fluxes and distinguishing between bed-load and suspended-load transport (including suspended bed material) in large rivers remains a significant challenge. It is increasingly apparent that predicting large-river morphodynamics in response to environmental change requires a robust quantification of sediment fluxes across a range of discharges. Such quantification becomes even more problematic for monsoonal rivers, where large non-linearities in hydrology-sediment relations exist. This paper, as part of the NERC-funded STELAR-S2S project (www.stelar-s2s.org), presents a series of repeat multibeam sonar bed surveys and acoustic calibrations that allow simultaneous quantification of bed-load transport and suspended-load fluxes in the lower Mekong River. Results show how multibeam sonar can be used to map bedform evolution across a range of time scales and produce robust estimates of bed-load, whilst acoustic backscatter calibrated to suspended sediment load can be used in combination with Doppler flow velocity estimates to recover full sediment fluxes at the reach scale. The methods, estimates of error, and implications of the results for the functioning of large river systems will be discussed.
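Repeat multibeam surveys enable dune-tracking estimates of bedload; a sketch of the standard Simons-type relation q_b = (1 − p)·β·H·c, with porosity, shape factor, and dune statistics assumed for illustration rather than taken from the Mekong data:

```python
# Sketch: unit-width bedload flux from dune tracking between repeat bed
# surveys: q_b = (1 - porosity) * shape * dune_height * migration_rate.
# All values below are illustrative assumptions.
def bedload_flux(dune_height_m, celerity_m_per_s, porosity=0.4, shape=0.5):
    return (1.0 - porosity) * shape * dune_height_m * celerity_m_per_s  # m^2/s

H = 2.5                      # mean dune height from differenced bed surveys, m
c = 1.5 / 3600.0             # dune migration rate, m/s (1.5 m per hour)
print(f"unit bedload flux ~ {bedload_flux(H, c):.2e} m^2/s")
```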

Parsons, D. R.; Darby, S. E.; Hackney, C. R.; Best, J.; Aalto, R. E.; Nicholas, A. P.; Leyland, J.

2013-12-01

388

Using an ensemble of statistical metrics to quantify large sets of plant transcription factor binding sites  

PubMed Central

Background From initial seed germination through reproduction, plants continuously reprogram their transcriptional repertoire to facilitate growth and development. This dynamic is mediated by a diverse but inextricably linked catalog of regulatory proteins called transcription factors (TFs). Statistically quantifying TF binding site (TFBS) abundance in the promoters of differentially expressed genes can identify binding-site patterns that are closely related to stress response. Output from today's transcriptomic assays necessitates statistically oriented software that can handle large promoter-sequence sets in a computationally tractable fashion. Results We present Marina, an open-source software tool for identifying over-represented TFBSs amongst large sets of promoter sequences, using an ensemble of 7 statistical metrics and binding-site profiles. Through a software comparison, we show that Marina can identify considerably more over-represented plant TFBSs than a popular software alternative. Conclusions Marina was used to identify over-represented TFBSs in a two time-point RNA-Seq study exploring the transcriptomic interplay between soybean (Glycine max) and soybean rust (Phakopsora pachyrhizi). Marina identified numerous abundant TFBSs recognized by transcription factors that are associated with defense response, such as WRKY, HY5 and MYB2. Comparing results from Marina to those of a popular software alternative suggests that, regardless of the number of promoter sequences, Marina is able to identify significantly more over-represented TFBSs.
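Marina ensembles seven statistical metrics; a sketch of one representative member of that family, a hypergeometric over-representation test on invented TFBS counts (the abstract does not list Marina's exact metrics, so this is illustrative):

```python
# Sketch: hypergeometric test for TFBS over-representation in a promoter set
# against a genomic background. Counts are made up for illustration.
from scipy.stats import hypergeom

def tfbs_enrichment_pvalue(hits_in_set, set_size, hits_in_bg, bg_size):
    # P(X >= hits_in_set) when drawing set_size promoters from a background
    # of bg_size promoters, hits_in_bg of which carry the binding site.
    return hypergeom.sf(hits_in_set - 1, bg_size, hits_in_bg, set_size)

p = tfbs_enrichment_pvalue(hits_in_set=40, set_size=200, hits_in_bg=300, bg_size=5000)
print(f"hypothetical WRKY-site enrichment p-value: {p:.2e}")
```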

2013-01-01

389

Quantifying the impact of dust on heterogeneous ice generation in midlevel supercooled stratiform clouds  

NASA Astrophysics Data System (ADS)

Dust aerosols have been regarded as effective ice nuclei (IN), but large uncertainties about their efficiencies remain. Here, four years of collocated CALIPSO and CloudSat measurements are used to quantify the impact of dust on heterogeneous ice generation in midlevel supercooled stratiform clouds (MSSCs) over the 'dust belt'. The results show that dusty MSSCs have an up to 20% higher mixed-phase cloud occurrence, an up to 8 dBZ higher mean maximum reflectivity (Ze_max), and an up to 11.5 g/m2 higher ice water path (IWP) than similar MSSCs under background aerosol conditions. Assuming a similar ice growth and fallout history in similar MSSCs, the significant differences in Ze_max between dusty and non-dusty MSSCs reflect differences in ice particle number concentration. The observed Ze_max differences therefore indicate that dust could enhance the ice particle concentration in MSSCs by a factor of 2 to 6 at temperatures colder than -12°C. The enhancements depend strongly on cloud top temperature, large dust particle concentration, and chemical composition. These results imply an important role for dust particles in modifying mixed-phase cloud properties globally.
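The step from reflectivity to number concentration can be made explicit: for Rayleigh scatterers Ze ∝ N·D⁶, so under the abstract's similar-size assumption a reflectivity difference in dB maps to a concentration ratio of 10^(ΔdBZ/10). A sketch:

```python
# Sketch: converting a dBZ difference to an ice-number enhancement factor,
# assuming similar particle size distributions (as the abstract does).
def concentration_ratio(dbz_difference):
    return 10.0 ** (dbz_difference / 10.0)

for ddbz in (3, 5, 8):
    print(f"{ddbz} dBZ higher -> ~{concentration_ratio(ddbz):.1f}x more ice particles")
```

The 8 dBZ upper bound gives a factor of about 6.3, consistent with the reported factor of 2 to 6.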

Zhang, Damao; Wang, Zhien; Heymsfield, Andrew; Fan, Jiwen; Liu, Dong; Zhao, Ming

2012-09-01

390

Spectral imaging-based methods for quantifying autophagy and apoptosis  

PubMed Central

Spectral imaging systems are capable of detecting and quantifying subtle differences in light quality. In this study we coupled spectral imaging with fluorescence and white-light microscopy to develop new methods for quantifying autophagy and apoptosis. For autophagy, we employed multispectral imaging to examine spectral changes in the fluorescence of LC3-GFP, a chimeric protein commonly used to track autophagosome formation. We found that punctate, autophagosome-associated LC3-GFP exhibited a spectral profile distinctly different from that of diffuse cytosolic LC3-GFP. We then exploited this shift in spectral quality to quantify the amount of autophagosome-associated signal in single cells. Hydroxychloroquine (HCQ), an anti-malarial agent that increases autophagosome number, significantly increased the punctate LC3-GFP spectral signature, providing proof of principle for this approach. For studying apoptosis, we employed the Prism and Reflector Imaging Spectroscopy System (PARISS) hyperspectral imaging system to identify a spectral signature for active caspase-8 immunostaining in ex vivo tumor samples. This system was then used to rapidly quantify apoptosis induced by lexatumumab, an agonistic TRAIL-R2/DR5 antibody, in histological sections from a preclinical mouse model. We further found that the PARISS could accurately distinguish apoptotic tumor regions in hematoxylin- and eosin-stained sections, which allowed us to quantify death receptor-mediated apoptosis in the absence of an apoptotic marker. These spectral imaging systems provide unbiased, quantitative and fast means of studying autophagy and apoptosis and complement existing methods in their respective fields.
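A sketch of the general idea of classifying pixels by spectral signature, here with a simple spectral-angle rule on synthetic spectra; the study's instruments fit per-pixel spectra in a comparable spirit, though not necessarily with this rule.

```python
# Sketch: label pixels punctate vs. diffuse by comparing each pixel spectrum
# to two reference signatures via spectral angle. References and pixel
# spectra are synthetic stand-ins.
import numpy as np

def spectral_angle(s, ref):
    cos = s @ ref / (np.linalg.norm(s) * np.linalg.norm(ref))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(pixels, ref_punctate, ref_diffuse):
    ang_p = np.array([spectral_angle(s, ref_punctate) for s in pixels])
    ang_d = np.array([spectral_angle(s, ref_diffuse) for s in pixels])
    return ang_p < ang_d                      # True where punctate-like

ref_p = np.array([0.1, 0.5, 1.0, 0.4, 0.1])  # assumed punctate reference
ref_d = np.array([0.3, 1.0, 0.6, 0.2, 0.1])  # assumed diffuse reference
pix = np.random.default_rng(2).random((1000, 5)) * (ref_p + ref_d)
print(f"punctate fraction: {classify(pix, ref_p, ref_d).mean():.1%}")
```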

Dolloff, Nathan G; Ma, Xiahong; Dicker, David T; Humphreys, Robin C; Li, Lin Z

2011-01-01

391

Quantifying cortical EEG responses to TMS in (un)consciousness.  

PubMed

We normally assess another individual's level of consciousness based on her or his ability to interact with the surrounding environment and communicate. Usually, if we observe purposeful behavior, appropriate responses to sensory inputs, and, above all, appropriate answers to questions, we can be reasonably sure that the person is conscious. However, we know that consciousness can be generated entirely within the brain, even in the absence of any interaction with the external world; this happens almost every night, while we dream. Yet, to this day, we lack an objective, dependable measure of the level of consciousness that is independent of processing sensory inputs and producing appropriate motor outputs. Theoretically, consciousness is thought to require the joint presence of functional integration and functional differentiation, otherwise defined as brain complexity. Here we review a series of recent studies in which Transcranial Magnetic Stimulation combined with electroencephalography (TMS/EEG) has been employed to quantify brain complexity in wakefulness and during physiological (sleep), pharmacological (anesthesia) and pathological (brain injury) loss of consciousness. These studies invariably show that the complexity of the cortical response to TMS collapses when consciousness is lost during deep sleep, anesthesia and the vegetative state following severe brain injury, while it recovers when consciousness resurges in wakefulness, during dreaming, in the minimally conscious state, or in the locked-in syndrome. The present paper will also focus on how this approach may contribute to unveiling the pathophysiology of disorders of consciousness affecting brain-injured patients. Finally, we will underline some crucial methodological aspects concerning TMS/EEG measurements of brain complexity. PMID:24403317
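The complexity measures used in these studies rest on Lempel-Ziv compressibility of the binarized TMS-evoked response; a minimal sketch of the phrase-counting step on random stand-in data (real analyses normalize the count and work on source-localized significant activations):

```python
# Sketch: Lempel-Ziv phrase counting on a binarized response. A rich,
# differentiated response parses into many phrases; a stereotyped one
# compresses into few. Data are random stand-ins, not EEG.
import numpy as np

def lempel_ziv_complexity(bits):
    s = "".join("1" if b else "0" for b in bits)
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1                            # extend phrase while already seen
        c += 1                                # one new phrase parsed
        i += l
    return c

rng = np.random.default_rng(3)
awake = rng.integers(0, 2, 600)                  # differentiated pattern
asleep = np.repeat(rng.integers(0, 2, 30), 20)   # stereotyped, compressible
print("LZ complexity awake/asleep:",
      lempel_ziv_complexity(awake), "/", lempel_ziv_complexity(asleep))
```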

Sarasso, Simone; Rosanova, Mario; Casali, Adenauer G; Casarotto, Silvia; Fecchio, Matteo; Boly, Melanie; Gosseries, Olivia; Tononi, Giulio; Laureys, Steven; Massimini, Marcello

2014-01-01

392

Quantifying Community Dynamics of Nitrifiers in Functionally Stable Reactors  

PubMed Central

A sequencing batch reactor (SBR) and a membrane bioreactor (MBR) were inoculated with the same sludge from a municipal wastewater treatment plant, supplemented with ammonium, and operated in parallel for 84 days. It was investigated whether the functional stability of the nitrification process corresponded with a static ammonia-oxidizing bacterial (AOB) community. The SBR provided complete nitrification during nearly the whole experimental run, whereas the MBR showed a buildup of 0 to 2 mg nitrite-N liter⁻¹ from day 45 until day 84. Based on the denaturing gradient gel electrophoresis profiles, two novel approaches were introduced to characterize and quantify the community dynamics and interspecies abundance ratios: (i) the rate of change [Δt(week)] parameter and (ii) the Pareto-Lorenz curve distribution pattern. During the whole sampling period, it was observed that neither of the reactor types maintained a static microbial community and that the SBR evolved more gradually than the MBR, particularly with respect to AOB (i.e., average weekly community changes of 12.6% ± 5.2% for the SBR and 24.6% ± 14.3% for the MBR). Based on the Pareto-Lorenz curves, it was observed that only a small group of AOB species played a numerically dominant role in the nitritation of both reactors, especially in the MBR. The remaining, less dominant species were speculated to constitute a reserve of AOB that can proliferate to replace the dominant species. The value of these parameters as tools to assist the operation of activated-sludge systems is discussed.
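A sketch of the Pareto-Lorenz evenness analysis on hypothetical DGGE band intensities: cumulative intensity against cumulative band fraction, read off at the 20% mark.

```python
# Sketch: Pareto-Lorenz curve from band intensities -- sort bands by
# dominance and accumulate. Intensities below are invented for illustration.
import numpy as np

def lorenz_curve(abundances):
    x = np.sort(np.asarray(abundances, dtype=float))[::-1]   # dominant first
    cum = np.concatenate(([0.0], np.cumsum(x) / x.sum()))
    frac = np.linspace(0, 1, len(cum))
    return frac, cum

bands = [55, 20, 10, 6, 4, 2, 1, 1, 0.5, 0.5]  # hypothetical AOB band intensities
frac, cum = lorenz_curve(bands)
print(f"top 20% of bands carry {np.interp(0.2, frac, cum):.0%} of total intensity")
```

The further the curve bows away from the 1:1 diagonal, the more a few dominant species carry the function, which is the pattern the abstract reports for the MBR.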

Wittebolle, Lieven; Vervaeren, Han; Verstraete, Willy; Boon, Nico

2008-01-01

393

Quantified trends in the history of verbal behavior research.  

PubMed

The history of scientific research on verbal behavior, especially research based on Verbal Behavior (Skinner, 1957), can be assessed through a frequency and celeration analysis of the published and presented literature. In order to discover these quantified trends, a comprehensive bibliographic database was developed. Based on several literature searches, the database included papers pertaining to verbal behavior that were published in the Journal of the Experimental Analysis of Behavior, the Journal of Applied Behavior Analysis, Behaviorism, The Behavior Analyst, and The Analysis of Verbal Behavior. A nonbehavioral journal, the Journal of Verbal Learning and Verbal Behavior, was assessed as a nonexample comparison. The database also included a listing of verbal behavior papers presented at the meetings of the Association for Behavior Analysis. Papers were added to the database if they (a) were about verbal behavior, (b) referenced B.F. Skinner's (1957) book Verbal Behavior, or (c) did both. Because the references indicated the year of publication or presentation, a count per year could be measured. These yearly frequencies were plotted on Standard Celeration Charts. Once plotted, various celeration trends in the literature became visible, not the least of which was a greater quantity of verbal behavior research than is generally acknowledged. The data clearly show an acceleration of research across the past decade. The data also question the notion that a "paucity" of research based on Verbal Behavior currently exists. Explanations of the acceleration of verbal behavior research are suggested, and plausible reasons are offered as to why a relative lack of verbal behavior research extended from the mid-1960s to the late 1970s. PMID:22477630
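The frequency-and-celeration idea reduces to counting publications per year and summarizing growth multiplicatively (celeration is a multiply-or-divide-per-time measure); a sketch on fabricated publication years:

```python
# Sketch: yearly publication frequencies and a multiplicative growth summary.
# The years are fabricated stand-ins for the bibliographic database.
from collections import Counter

import numpy as np

years = [1978, 1982, 1985, 1985, 1987, 1988, 1988, 1989, 1989, 1990, 1990, 1990]
counts = Counter(years)
ys = np.array(sorted(counts))
freq = np.array([counts[y] for y in ys])
slope = np.polyfit(ys, np.log10(freq), 1)[0]   # log-linear trend, as on a
print(f"celeration: x{10 ** (slope * 10):.1f} per decade")  # celeration chart
```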

Eshleman, J W

1991-01-01

394

Quantified trends in the history of verbal behavior research  

PubMed Central

The history of scientific research on verbal behavior, especially research based on Verbal Behavior (Skinner, 1957), can be assessed through a frequency and celeration analysis of the published and presented literature. In order to discover these quantified trends, a comprehensive bibliographic database was developed. Based on several literature searches, the database included papers pertaining to verbal behavior that were published in the Journal of the Experimental Analysis of Behavior, the Journal of Applied Behavior Analysis, Behaviorism, The Behavior Analyst, and The Analysis of Verbal Behavior. A nonbehavioral journal, the Journal of Verbal Learning and Verbal Behavior, was assessed as a nonexample comparison. The database also included a listing of verbal behavior papers presented at the meetings of the Association for Behavior Analysis. Papers were added to the database if they (a) were about verbal behavior, (b) referenced B.F. Skinner's (1957) book Verbal Behavior, or (c) did both. Because the references indicated the year of publication or presentation, a count per year could be measured. These yearly frequencies were plotted on Standard Celeration Charts. Once plotted, various celeration trends in the literature became visible, not the least of which was a greater quantity of verbal behavior research than is generally acknowledged. The data clearly show an acceleration of research across the past decade. The data also question the notion that a “paucity” of research based on Verbal Behavior currently exists. Explanations of the acceleration of verbal behavior research are suggested, and plausible reasons are offered as to why a relative lack of verbal behavior research extended from the mid-1960s to the late 1970s.

Eshleman, John W.

1991-01-01