Science.gov

Sample records for quantified results show

  1. Btu accounting: Showing results

    SciTech Connect

    Nelson, K.E.

    1994-10-01

    In the preceding article in this series last month, the author showed how to calculate the energy consumed to make a pound of product. To realize a payoff, however, the results must be presented in graphs or tables that clearly display what has happened. They must call attention to plant performance and ultimately lead to more efficient use of energy. Energy-consumption reporting is particularly valuable when viewed over a period of time. The author recommends compiling data annually and maintaining a ten-year performance history. Four cases are considered: individual plant performance; site performance, for sites having more than one plant; company performance, for companies having more than one site; and performance based on product, for identical or similar products made at different plants or sites. Of these, individual plant performance is inherently the most useful. It also serves as the best basis for site, company and product performance reports. A key element in energy accounting is the relating of all energy consumption to a common basis. As developed last month in Part 1 in this series, the author chose Btu[sub meth] (i.e., Btu of methane equivalent, expressed as its higher heating value) for this purpose. It represents the amount of methane that would be needed to replace (in the case of fuels) or generate (in the case of steam and power) the energy being used.
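
    A minimal sketch of this methane-equivalent bookkeeping, assuming illustrative conversion factors (the boiler efficiency and power-plant heat rate below are assumptions made for the example, not values from the article):

    ```python
    # Hedged sketch: put all plant energy use on a common Btu_meth basis
    # (Btu of methane equivalent, higher heating value).
    BOILER_EFFICIENCY = 0.80  # assumed fraction of methane HHV delivered as steam
    POWER_HEAT_RATE = 9500.0  # assumed Btu of methane HHV per kWh generated

    def btu_meth(fuel_btu=0.0, steam_btu=0.0, power_kwh=0.0):
        """Total energy use in Btu of methane equivalent."""
        fuel_term = fuel_btu                        # fuels replace methane 1:1 on an HHV basis
        steam_term = steam_btu / BOILER_EFFICIENCY  # methane needed to raise the imported steam
        power_term = power_kwh * POWER_HEAT_RATE    # methane needed to generate the imported power
        return fuel_term + steam_term + power_term

    # One reporting year for one plant (illustrative numbers only).
    annual = btu_meth(fuel_btu=2.0e11, steam_btu=5.0e10, power_kwh=1.2e7)
    print(f"{annual / 4.0e8:.0f} Btu_meth per pound of product")  # 4.0e8 lb of product assumed
    ```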

  2. Quantifying causal emergence shows that macro can beat micro

    PubMed Central

    Hoel, Erik P.; Albantakis, Larissa; Tononi, Giulio

    2013-01-01

    Causal interactions within complex systems can be analyzed at multiple spatial and temporal scales. For example, the brain can be analyzed at the level of neurons, neuronal groups, and areas, over tens, hundreds, or thousands of milliseconds. It is widely assumed that, once a micro level is fixed, macro levels are fixed too, a relation called supervenience. It is also assumed that, although macro descriptions may be convenient, only the micro level is causally complete, because it includes every detail, thus leaving no room for causation at the macro level. However, this assumption can only be evaluated under a proper measure of causation. Here, we use a measure [effective information (EI)] that depends on both the effectiveness of a system’s mechanisms and the size of its state space: EI is higher the more the mechanisms constrain the system’s possible past and future states. By measuring EI at micro and macro levels in simple systems whose micro mechanisms are fixed, we show that for certain causal architectures EI can peak at a macro level in space and/or time. This happens when coarse-grained macro mechanisms are more effective (more deterministic and/or less degenerate) than the underlying micro mechanisms, to an extent that overcomes the smaller state space. Thus, although the macro level supervenes upon the micro, it can supersede it causally, leading to genuine causal emergence—the gain in EI when moving from a micro to a macro level of analysis. PMID:24248356
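
    A toy numerical illustration of effective information and the degeneracy case described above (a uniform, maximum-entropy intervention on the current state, with EI equal to the entropy of the resulting effect distribution minus the mean row entropy); this is a sketch for intuition, not the authors' code:

    ```python
    import numpy as np

    def effective_information(tpm):
        """EI of a system with row-stochastic transition matrix `tpm`: the mutual
        information between X_t and X_{t+1} when X_t is set to the uniform
        (maximum-entropy) distribution over states."""
        tpm = np.asarray(tpm, dtype=float)
        effect = tpm.mean(axis=0)  # distribution of X_{t+1} under the uniform intervention
        h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return h(effect) - np.mean([h(row) for row in tpm])

    # Deterministic but degenerate 4-state micro system: states 0-2 all map to state 0.
    micro = np.array([[1, 0, 0, 0],
                      [1, 0, 0, 0],
                      [1, 0, 0, 0],
                      [0, 0, 0, 1]], dtype=float)
    # Coarse-graining {0,1,2} -> A and {3} -> B removes the degeneracy.
    macro = np.array([[1, 0],
                      [0, 1]], dtype=float)
    print(effective_information(micro))  # ~0.81 bits
    print(effective_information(macro))  # 1.0 bit: the macro level beats the micro level
    ```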

  3. Different methods to quantify Listeria monocytogenes biofilms cells showed different profile in their viability

    PubMed Central

    Winkelströter, Lizziane Kretli; Martinis, Elaine C.P. De

    2015-01-01

    Listeria monocytogenes is a foodborne pathogen able to adhere and form biofilms on several materials commonly present in food processing plants. The aim of this study was to evaluate the resistance of Listeria monocytogenes attached to abiotic surfaces, after treatment with sanitizers, by a culture method, microscopy and Quantitative Real Time Polymerase Chain Reaction (qPCR). Biofilms of L. monocytogenes were obtained on stainless steel coupons immersed in Brain Heart Infusion Broth, under agitation at 37 °C for 24 h. The methods selected for this study were based on plate count, microscopic count with the aid of viability dyes (CTC-DAPI), and qPCR. Results of the culture method showed that peroxyacetic acid was efficient in killing sessile L. monocytogenes populations, while sodium hypochlorite was only partially effective against attached L. monocytogenes (p < 0.05). When viability dyes (CTC/DAPI) combined with fluorescence microscopy and qPCR were used, lower counts were found after treatments (p < 0.05). Selective quantification of viable cells of L. monocytogenes by qPCR using EMA revealed that the pre-treatment with EMA was not appropriate, since it also inhibited amplification of DNA from live cells by ca. 2 log. Thus, the use of CTC counts was the best method to count viable cells in biofilms. PMID:26221112

  4. Different methods to quantify Listeria monocytogenes biofilms cells showed different profile in their viability.

    PubMed

    Winkelströter, Lizziane Kretli; De Martinis, Elaine C P

    2015-03-01

    Listeria monocytogenes is a foodborne pathogen able to adhere and form biofilms on several materials commonly present in food processing plants. The aim of this study was to evaluate the resistance of Listeria monocytogenes attached to abiotic surfaces, after treatment with sanitizers, by a culture method, microscopy and Quantitative Real Time Polymerase Chain Reaction (qPCR). Biofilms of L. monocytogenes were obtained on stainless steel coupons immersed in Brain Heart Infusion Broth, under agitation at 37 °C for 24 h. The methods selected for this study were based on plate count, microscopic count with the aid of viability dyes (CTC-DAPI), and qPCR. Results of the culture method showed that peroxyacetic acid was efficient in killing sessile L. monocytogenes populations, while sodium hypochlorite was only partially effective against attached L. monocytogenes (p < 0.05). When viability dyes (CTC/DAPI) combined with fluorescence microscopy and qPCR were used, lower counts were found after treatments (p < 0.05). Selective quantification of viable cells of L. monocytogenes by qPCR using EMA revealed that the pre-treatment with EMA was not appropriate, since it also inhibited amplification of DNA from live cells by ca. 2 log. Thus, the use of CTC counts was the best method to count viable cells in biofilms. PMID:26221112

  5. 13. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF POOR CONSTRUCTION WORK. THOUGH NOT A SERIOUS STRUCTURAL DEFICIENCY, THE 'HONEYCOMB' TEXTURE OF THE CONCRETE SURFACE WAS THE RESULT OF INADEQUATE TAMPING AT THE TIME OF THE INITIAL 'POUR'. - Hume Lake Dam, Sequoia National Forest, Hume, Fresno County, CA

  6. Quantifying IOHDR brachytherapy underdosage resulting from an incomplete scatter environment

    SciTech Connect

    Raina, Sanjay; Avadhani, Jaiteerth S.; Oh, Moonseong; Malhotra, Harish K.; Jaggernauth, Wainwright; Kuettel, Michael R.; Podgorsak, Matthew B. E-mail: matthew.podgorsak@roswellpark.org

    2005-04-01

    Purpose: Most brachytherapy planning systems are based on a dose calculation algorithm that assumes an infinite scatter environment surrounding the target volume and applicator. Dosimetric errors from this assumption are negligible. However, in intraoperative high-dose-rate brachytherapy (IOHDR) where treatment catheters are typically laid either directly on a tumor bed or within applicators that may have little or no scatter material above them, the lack of scatter from one side of the applicator can result in underdosage during treatment. This study was carried out to investigate the magnitude of this underdosage. Methods: IOHDR treatment geometries were simulated using a solid water phantom beneath an applicator with varying amounts of bolus material on the top and sides of the applicator to account for missing tissue. Treatment plans were developed for 3 different treatment surface areas (4 x 4, 7 x 7, 12 x 12 cm{sup 2}), each with prescription points located at 3 distances (0.5 cm, 1.0 cm, and 1.5 cm) from the source dwell positions. Ionization measurements were made with a liquid-filled ionization chamber linear array with a dedicated electrometer and data acquisition system. Results: Measurements showed that the magnitude of the underdosage varies from about 8% to 13% of the prescription dose as the prescription depth is increased from 0.5 cm to 1.5 cm. This treatment error was found to be independent of the irradiated area and strongly dependent on the prescription distance. Furthermore, for a given prescription depth, measurements in planes parallel to an applicator at distances up to 4.0 cm from the applicator plane showed that the dose delivery error is equal in magnitude throughout the target volume. Conclusion: This study demonstrates the magnitude of underdosage in IOHDR treatments delivered in a geometry that may not result in a full scatter environment around the applicator. This implies that the target volume and, specifically, the prescription depth (tumor bed) may get a dose significantly less than prescribed. It might be clinically relevant to correct for this inaccuracy.

  7. Quantifier Scope Disambiguation Using Extracted Pragmatic Knowledge: Preliminary Results

    E-print Network

    Yates, Alexander

    Srinivasan, Temple University, 1805 N. Broad St., Wachman Hall 324, Philadelphia, PA 19122, prakash.srinivasan@temple.edu; Alexander Yates, Temple University, 1805 N. Broad St., Wachman Hall 324, Philadelphia, PA 19122, yates@temple… Snippet: "… large amounts of relational data from open-domain text with high accuracy. Here, we show how we can …"

  8. New Drug Shows Mixed Results Against Early Alzheimer's

    MedlinePLUS

  9. A direct approach to quantifying organic matter lost as a result of peatland wildfire

    E-print Network

    Turetsky, Merritt

    A direct approach to quantifying organic matter lost as a result of peatland wildfire. M.R. Turetsky … two continental bog and two permafrost bog sites, 3 months after a March 1999 wildfire. Results: … organic matter loss through combustion associated with peatland wildfire. Methods: In March of 1999 …

  10. QUANTIFYING THE ECONOMIC VALUE OF WEATHER FORECASTS: REVIEW OF METHODS AND RESULTS

    E-print Network

    Katz, Richard

    QUANTIFYING THE ECONOMIC VALUE OF WEATHER FORECASTS: REVIEW OF METHODS AND RESULTS. Rick Katz. Snippet: … weather (Z = z); conditional probability distribution for event Z = z indicates forecast for particular … Outline: (4) Prototypical Decision-Making Models; (5) Quality-Value Relationships; (6) Valuation Puzzles; (7) …

  11. Comb-Push Ultrasound Shear Elastography of Breast Masses: Initial Results Show Promise

    PubMed Central

    Song, Pengfei; Fazzio, Robert T.; Pruthi, Sandhya; Whaley, Dana H.; Chen, Shigao; Fatemi, Mostafa

    2015-01-01

    Purpose or Objective To evaluate the performance of Comb-push Ultrasound Shear Elastography (CUSE) for classification of breast masses. Materials and Methods CUSE is an ultrasound-based quantitative two-dimensional shear wave elasticity imaging technique, which utilizes multiple laterally distributed acoustic radiation force (ARF) beams to simultaneously excite the tissue and induce shear waves. Female patients who were categorized as having suspicious breast masses underwent CUSE evaluations prior to biopsy. An elasticity estimate within the breast mass was obtained from the CUSE shear wave speed map. Elasticity estimates of various types of benign and malignant masses were compared with biopsy results. Results Fifty-four female patients with suspicious breast masses from our ongoing study are presented. Our cohort included 31 malignant and 23 benign breast masses. Our results indicate that the mean shear wave speed was significantly higher in malignant masses (6 ± 1.58 m/s) in comparison to benign masses (3.65 ± 1.36 m/s). Therefore, the stiffness of the mass quantified by the Young’s modulus is significantly higher in malignant masses. According to the receiver operating characteristic curve (ROC), the optimal cut-off value of 83 kPa yields 87.10% sensitivity, 82.61% specificity, and 0.88 for the area under the curve (AUC). Conclusion CUSE has the potential for clinical utility as a quantitative diagnostic imaging tool adjunct to B-mode ultrasound for differentiation of malignant and benign breast masses. PMID:25774978
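
    For orientation, the Young's modulus values quoted above follow from the shear wave speed under the assumptions usual in shear wave elastography (tissue treated as nearly incompressible with an assumed density of about 1000 kg/m³, so E ≈ 3ρc²); a small illustrative check, not the study's code:

    ```python
    RHO = 1000.0  # assumed tissue density, kg/m^3

    def youngs_modulus_kpa(shear_speed):
        """E ~= 3 * rho * c^2 for nearly incompressible soft tissue, returned in kPa."""
        return 3.0 * RHO * shear_speed ** 2 / 1000.0

    print(youngs_modulus_kpa(6.0))   # mean malignant-mass shear speed -> ~108 kPa
    print(youngs_modulus_kpa(3.65))  # mean benign-mass shear speed    -> ~40 kPa
    print(youngs_modulus_kpa(5.26))  # ~5.3 m/s corresponds to the 83 kPa ROC cut-off
    ```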

  12. Astronomy Diagnostic Test Results Reflect Course Goals and Show Room for Improvement

    ERIC Educational Resources Information Center

    LoPresto, Michael C.

    2007-01-01

    The results of administering the Astronomy Diagnostic Test (ADT) to introductory astronomy students at Henry Ford Community College over three years have shown gains comparable with national averages. Results have also accurately corresponded to course goals, showing greater gains in topics covered in more detail, and lower gains in topics covered…

  13. Gun Shows and Gun Violence: Fatally Flawed Study Yields Misleading Results

    PubMed Central

    Hemenway, David; Webster, Daniel; Pierce, Glenn; Braga, Anthony A.

    2010-01-01

    A widely publicized but unpublished study of the relationship between gun shows and gun violence is being cited in debates about the regulation of gun shows and gun commerce. We believe the study is fatally flawed. A working paper entitled “The Effect of Gun Shows on Gun-Related Deaths: Evidence from California and Texas” outlined this study, which found no association between gun shows and gun-related deaths. We believe the study reflects a limited understanding of gun shows and gun markets and is not statistically powered to detect even an implausibly large effect of gun shows on gun violence. In addition, the research contains serious ascertainment and classification errors, produces results that are sensitive to minor specification changes in key variables and in some cases have no face validity, and is contradicted by 1 of its own authors’ prior research. The study should not be used as evidence in formulating gun policy. PMID:20724672

  14. Quantifying viruses and bacteria in wastewater—Results, interpretation methods, and quality control

    USGS Publications Warehouse

    Francy, Donna S.; Stelzer, Erin A.; Bushon, Rebecca N.; Brady, Amie M.G.; Mailot, Brian E.; Spencer, Susan K.; Borchardt, Mark A.; Elber, Ashley G.; Riddell, Kimberly R.; Gellner, Terry M.

    2011-01-01

    Membrane bioreactors (MBR), used for wastewater treatment in Ohio and elsewhere in the United States, have pore sizes small enough to theoretically reduce concentrations of protozoa and bacteria, but not viruses. Sampling for viruses in wastewater is seldom done and not required. Instead, the bacterial indicators Escherichia coli (E. coli) and fecal coliforms are the required microbial measures of effluents for wastewater-discharge permits. Information is needed on the effectiveness of MBRs in removing human enteric viruses from wastewaters, particularly as compared to conventional wastewater treatment before and after disinfection. A total of 73 regular and 28 quality-control (QC) samples were collected at three MBR and two conventional wastewater plants in Ohio during 23 regular and 3 QC sampling trips in 2008-10. Samples were collected at various stages in the treatment processes and analyzed for bacterial indicators E. coli, fecal coliforms, and enterococci by membrane filtration; somatic and F-specific coliphage by the single agar layer (SAL) method; adenovirus, enterovirus, norovirus GI and GII, rotavirus, and hepatitis A virus by molecular methods; and viruses by cell culture. While addressing the main objective of the study (comparing removal of viruses and bacterial indicators in MBR and conventional plants), it was realized that work was needed to identify data analysis and quantification methods for interpreting enteric virus and QC data. Therefore, methods for quantifying viruses, qualifying results, and applying QC data to interpretations are described in this report. During each regular sampling trip, samples were collected (1) before conventional or MBR treatment (post-preliminary), (2) after secondary or MBR treatment (post-secondary or post-MBR), (3) after tertiary treatment (one conventional plant only), and (4) after disinfection (post-disinfection). Glass-wool fiber filtration was used to concentrate enteric viruses from large volumes, and small volume grab samples were collected for direct-plating analyses for bacterial indicators and coliphage. After filtration, the viruses were eluted from the filter and further concentrated. The final concentrated sample volume (FCSV) was used for enteric virus analysis by use of two methods: cell culture and a molecular method, polymerase chain reaction (PCR). Quantitative PCR (qPCR) for DNA viruses and quantitative reverse-transcriptase PCR (qRT-PCR) for RNA viruses were used in this study. To support data interpretations, the assay limit of detection (ALOD) was set for each virus assay and used to determine sample reporting limits (SRLs). For qPCR and qRT-PCR the ALOD was an estimated value because it was not determined according to standard method detection limit procedures. The SRLs were different for each sample because effective sample volumes (the volume of the original sample that was actually used in each analysis) were different for each sample. Effective sample volumes were much less than the original sample volumes because of reductions from processing steps and (or) from when dilutions were made to minimize the effects from PCR-inhibiting substances. Codes were used to further qualify the virus data and indicate the level of uncertainty associated with each measurement. Quality-control samples were used to support data interpretations.
    Field and laboratory blanks for bacteria, coliphage, and enteric viruses were all below detection, indicating that it was unlikely that samples were contaminated from equipment or processing procedures. The absolute value log differences (AVLDs) between concurrent replicate pairs were calculated to identify the variability associated with each measurement. For bacterial indicators and coliphage, the AVLD results indicated that concentrations <10 colony-forming units or plaque-forming units per 100 mL can differ between replicates by as much as 1 log, whereas higher concentrations can differ by as much as 0.3 log. The AVLD results for viruses indicated that differences between replicates can be as great as 1.2 log …
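
    A minimal sketch of the absolute value log difference (AVLD) calculation between concurrent replicates described above; the replicate concentrations shown are made up for illustration:

    ```python
    import math

    def avld(conc_a, conc_b):
        """Absolute value of the log10 difference between concurrent replicate results."""
        return abs(math.log10(conc_a) - math.log10(conc_b))

    # Illustrative replicate pairs (CFU or PFU per 100 mL), not measured values.
    print(round(avld(9, 1), 2))        # low concentrations can differ by up to ~1 log
    print(round(avld(5200, 3100), 2))  # higher concentrations typically agree within ~0.3 log
    ```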

  15. Astronomy Diagnostic Test Results Reflect Course Goals and Show Room for Improvement

    NASA Astrophysics Data System (ADS)

    LoPresto, Michael C.

    The results of administering the Astronomy Diagnostic Test (ADT) to introductory astronomy students at Henry Ford Community College over three years have shown gains comparable with national averages. Results have also accurately corresponded to course goals, showing greater gains in topics covered in more detail, and lower gains in topics covered in less detail. Also evident in the results were topics for which improvement of instruction is needed. These factors and the ease with which the ADT can be administered constitute evidence of the usefulness of the ADT as an assessment instrument for introductory astronomy.

  16. Showing Value in Newborn Screening: Challenges in Quantifying the Effectiveness and Cost-Effectiveness of Early Detection of Phenylketonuria and Cystic Fibrosis

    PubMed Central

    Grosse, Scott D.

    2015-01-01

    Decision makers sometimes request information on the cost savings, cost-effectiveness, or cost-benefit of public health programs. In practice, quantifying the health and economic benefits of population-level screening programs such as newborn screening (NBS) is challenging. It requires that one specify the frequencies of health outcomes and events, such as hospitalizations, for a cohort of children with a given condition under two different scenarios—with or without NBS. Such analyses also assume that everything else, including treatments, is the same between groups. Lack of comparable data for representative screened and unscreened cohorts that are exposed to the same treatments following diagnosis can result in either under- or over-statement of differences. Accordingly, the benefits of early detection may be understated or overstated. This paper illustrates these common problems through a review of past economic evaluations of screening for two historically significant conditions, phenylketonuria and cystic fibrosis. In both examples qualitative judgments about the value of prompt identification and early treatment to an affected child were more influential than specific numerical estimates of lives or costs saved. PMID:26702401

  17. Quantifying the effects of root reinforcing on slope stability: results of the first tests with a new shearing device

    NASA Astrophysics Data System (ADS)

    Rickli, Christian; Graf, Frank

    2013-04-01

    The role of vegetation in preventing shallow soil mass movements such as shallow landslides and soil erosion is generally well recognized and, correspondingly, soil bioengineering on steep slopes has been widely used in practice. However, the precise effectiveness of vegetation regarding slope stability is still difficult to determine. A recently designed inclinable shearing device for large scale vegetated soil samples allows quantitative evaluation of the additional shear strength provided by roots of specific plant species. In the following we describe the results of a first series of shear strength experiments with this apparatus focusing on root reinforcement of White Alder (Alnus incana) and Silver Birch (Betula pendula) in large soil block samples (500 x 500 x 400 mm). The specimens, with partly saturated soil of a maximum grain size of 10 mm, were slowly sheared at an inclination of 35° with low normal stresses of 3.2 kPa, accounting for natural conditions on a typical slope prone to mass movements. Measurements during the experiments involved shear stress, shear displacement and normal displacement, all recorded with high accuracy. In addition, dry weights of sprout and roots were measured to quantify plant growth of the planted specimens. The results with the new apparatus indicate a considerable reinforcement of the soil due to plant roots, i.e. maximum shear stresses of the vegetated specimens were substantially higher compared to non-vegetated soil, and the additional strength was a function of species and growth. Soil samples with seedlings planted five months prior to the test yielded a substantial increase in maximum shear stress of 250% for White Alder and 240% for Silver Birch compared to non-vegetated soil. The results of a second test series with 12 month old plants showed even clearer enhancements in maximum shear stress (390% for Alder and 230% for Birch). Overall the results of this first series of shear strength experiments with the new apparatus using planted and unplanted soil specimens confirm the importance of plants in soil stabilisation. Furthermore, they demonstrate the suitability of the apparatus to quantify the additional strength of specific vegetation as a function of species and growth under clearly defined conditions in the laboratory.

  18. Preliminary Results In Quantifying The Climatic Impact Forcing Factors Around 3 Ma Ago

    NASA Astrophysics Data System (ADS)

    Fluteau, F.; Ramstein, G.; Duringer, P.; Schuster, M.; Tiercelin, J. J.

    What exactly is the control of climate changes on the development of the Hominids? Is it possible to quantify such changes, and which are the forcing factors that create them? We use here a General Circulation Model to investigate the climate sensitivity to 3 different forcing factors: the uplift of the East African Rift, the extent of the Chad Lake (more than twenty times its present-day surface), and, ultimately, with a coupled ocean-atmosphere GCM, the effect of Indonesian throughflow changes. To achieve these goals, we need a multidisciplinary group to assess the evolution of the Rift and the extent of the Lake. We prescribe these different boundary conditions to the GCM and use a biome model to assess the vegetation changes. In this presentation we will only focus on the impacts of the Rift uplift and the Chad Lake on atmospheric circulation, the monsoon, and their environmental consequences in terms of vegetation changes.

  19. Long-Term Trial Results Show No Mortality Benefit from Annual Prostate Cancer Screening

    Cancer.gov

    Thirteen-year follow-up data from the Prostate, Lung, Colorectal and Ovarian (PLCO) cancer screening trial show higher incidence but similar mortality among men screened annually with the prostate-specific antigen (PSA) test and digital rectal examination.

  20. Image analysis techniques: Used to quantify and improve the precision of coatings testing results

    SciTech Connect

    Duncan, D.J.; Whetten, A.R.

    1993-12-31

    Coating evaluations often specify tests to measure performance characteristics rather than coating physical properties. These evaluation results are often very subjective. A new tool, Digital Video Image Analysis (DVIA), is successfully being used for two automotive evaluations: cyclic (scab) corrosion and the gravelometer (chip) test. An experimental design was done to evaluate variability and interactions among the instrumental factors. This analysis method has proved to be an order of magnitude more sensitive and reproducible than the current evaluations. Coating characteristics that previously had no way of being expressed can now be described and measured. For example, DVIA chip evaluations can differentiate how much damage was done to the topcoat, the primer, and even the metal. DVIA, with or without magnification, has the capability to become the quantitative measuring tool for several other coating evaluations, such as T-bends, wedge bends, acid etch analysis, coating defects, observing cure, defect formation or elimination over time, etc.

  1. Results From Mars Show Electrostatic Charging of the Mars Pathfinder Sojourner Rover

    NASA Technical Reports Server (NTRS)

    Kolecki, Joseph C.; Siebert, Mark W.

    1998-01-01

    Indirect evidence (dust accumulation) has been obtained indicating that the Mars Pathfinder rover, Sojourner, experienced electrostatic charging on Mars. Lander camera images of the Sojourner rover provide distinctive evidence of dust accumulation on rover wheels during traverses, turns, and crabbing maneuvers. The sol 22 (22nd Martian "day" after Pathfinder landed) end-of-day image clearly shows fine red dust concentrated around the wheel edges with additional accumulation in the wheel hubs. A sol 41 image of the rover near the rock "Wedge" (see the next image) shows a more uniform coating of dust on the wheel drive surfaces with accumulation in the hubs similar to that in the previous image. In the sol 41 image, note particularly the loss of black-white contrast on the Wheel Abrasion Experiment strips (center wheel). This loss of contrast was also seen when dust accumulated on test wheels in the laboratory. We believe that this accumulation occurred because the Martian surface dust consists of clay-sized particles, similar to those detected by Viking, which have become electrically charged. By adhering to the wheels, the charged dust carries a net nonzero charge to the rover, raising its electrical potential relative to its surroundings. Similar charging behavior was routinely observed in an experimental facility at the NASA Lewis Research Center, where a Sojourner wheel was driven in a simulated Martian surface environment. There, as the wheel moved and accumulated dust (see the following image), electrical potentials in excess of 100 V (relative to the chamber ground) were detected by a capacitively coupled electrostatic probe located 4 mm from the wheel surface. The measured wheel capacitance was approximately 80 picofarads (pF), and the calculated charge, 8 x 10(exp -9) coulombs (C). Voltage differences of 100 V and greater are believed sufficient to produce Paschen electrical discharge in the Martian atmosphere. With an accumulated net charge of 8 x 10(exp -9) C, and average arc time of 1 msec, arcs can also occur with estimated arc currents approaching 10 milliamperes (mA). Discharges of this magnitude could interfere with the operation of sensitive electrical or electronic elements and logic circuits. Sojourner rover wheel tested in laboratory before launch to Mars. Before launch, we believed that the dust would become triboelectrically charged as it was moved about and compacted by the rover wheels. In all cases observed in the laboratory, the test wheel charged positively, and the wheel tracks charged negatively. Dust samples removed from the laboratory wheel averaged a few to tens of micrometers in size (clay size). Coarser grains were left behind in the wheel track. On Mars, grain size estimates of 2 to 10 µm were derived for the Martian surface materials from the Viking Gas Exchange Experiment. These size estimates approximately match the laboratory samples. Our tentative conclusion for the Sojourner observations is that fine clay-sized particles acquired an electrostatic charge during rover traverses and adhered to the rover wheels, carrying electrical charge to the rover. Since the Sojourner rover carried no instruments to measure this mission's onboard electrical charge, confirmatory measurements from future rover missions on Mars are desirable so that the physical and electrical properties of the Martian surface dust can be characterized. Sojourner was protected by discharge points, and Faraday cages were placed around sensitive electronics.
    But larger systems than Sojourner are being contemplated for missions to the Martian surface in the foreseeable future. The design of such systems will require a detailed knowledge of how they will interact with their environment. Validated environmental interaction models and guidelines for the Martian surface must be developed so that design engineers can test new ideas prior to cutting hardware. These models and guidelines cannot be validated without actual flight data. Electrical charging of vehicles and, one day, astronauts moving across t
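
    The wheel-charging numbers above are mutually consistent through the elementary relation Q = C x V; a quick check using the quoted values (the relation is standard electrostatics, not something specific to this report):

    ```python
    capacitance = 80e-12  # measured wheel capacitance, farads (80 pF, from the text)
    potential = 100.0     # detected potential relative to chamber ground, volts

    charge = capacitance * potential
    print(f"{charge:.1e} C")  # 8.0e-09 C, matching the 8 x 10^-9 C quoted above
    ```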

  2. Quantifying geological processes on Mars-Results of the high resolution stereo camera (HRSC) on Mars express

    NASA Astrophysics Data System (ADS)

    Jaumann, R.; Tirsch, D.; Hauber, E.; Ansan, V.; Di Achille, G.; Erkeling, G.; Fueten, F.; Head, J.; Kleinhans, M. G.; Mangold, N.; Michael, G. G.; Neukum, G.; Pacifici, A.; Platz, T.; Pondrelli, M.; Raack, J.; Reiss, D.; Williams, D. A.; Adeli, S.; Baratoux, D.; de Villiers, G.; Foing, B.; Gupta, S.; Gwinner, K.; Hiesinger, H.; Hoffmann, H.; Deit, L. Le; Marinangeli, L.; Matz, K.-D.; Mertens, V.; Muller, J. P.; Pasckert, J. H.; Roatsch, T.; Rossi, A. P.; Scholten, F.; Sowe, M.; Voigt, J.; Warner, N.

    2015-07-01

    This review summarizes the use of High Resolution Stereo Camera (HRSC) data as an instrumental tool and its application in the analysis of geological processes and landforms on Mars during the last 10 years of operation. High-resolution digital elevation models on a local to regional scale are the unique strength of the HRSC instrument. The analysis of these data products enabled quantifying geological processes such as effusion rates of lava flows, tectonic deformation, discharge of water in channels, formation timescales of deltas, geometry of sedimentary deposits as well as estimating the age of geological units by crater size-frequency distribution measurements. Both the quantification of geological processes and the age determination allow constraining the evolution of Martian geologic activity in space and time. A second major contribution of HRSC is the discovery of episodicity in the intensity of geological processes on Mars. This has been revealed by comparative age dating of volcanic, fluvial, glacial, and lacustrine deposits. Volcanic processes on Mars have been active over more than 4 Gyr, with peak phases in all three geologic epochs, generally ceasing towards the Amazonian. Fluvial and lacustrine activity phases span the period from Noachian until Amazonian times, but detailed studies show that they have been interrupted by multiple and long-lasting phases of quiescence. Glacial activity also shows discrete phases of enhanced intensity that may correlate with periods of increased spin-axis obliquity. The episodicity of geological processes like volcanism, erosion, and glaciation on Mars reflects close correlation between surface processes and endogenic activity as well as orbit variations and changing climate conditions.

  3. Quantifying microwear on experimental Mistassini quartzite scrapers: preliminary results of exploratory research using LSCM and scale-sensitive fractal analysis.

    PubMed

    Stemp, W James; Lerner, Harry J; Kristant, Elaine H

    2013-01-01

    Although previous use-wear studies involving quartz and quartzite have been undertaken by archaeologists, these are comparatively few in number. Moreover, there has been relatively little effort to quantify use-wear on stone tools made from quartzite. The purpose of this article is to determine the effectiveness of a measurement system, laser scanning confocal microscopy (LSCM), to document the surface roughness or texture of experimental Mistassini quartzite scrapers used on two different contact materials (fresh and dry deer hide). As in previous studies using LSCM on chert, flint, and obsidian, this exploratory study incorporates a mathematical algorithm that permits the discrimination of surface roughness based on comparisons at multiple scales. Specifically, we employ measures of relative area (RelA) coupled with the F-test to discriminate used from unused stone tool surfaces, as well as surfaces of quartzite scrapers used on dry and fresh deer hide. Our results further demonstrate the effect of raw material variation on use-wear formation and its documentation using LSCM and RelA. PMID:22688593

  4. Jaguar XK8 Test Results The table below shows data from an acceleration test of a 1997 Jaguar

    E-print Network

    Alexander, Roger K.

    Jaguar XK8 Test Results. The table below shows data from an acceleration test of a 1997 Jaguar XK8 (list price: $78,308), as reported in Car and Driver, November 1996. For example, 3.7 seconds into the test, the Jaguar was doing 40 miles per hour. t (sec): 0.0, 2.5, 3.7, 5.2, 6.9, 8.8, 11.2, 14.1, 17.5, 22.0, 28.5, 36.7, 49.6; v (mph): 0, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140. Problem 1. Plot the data from the table …

  5. Jaguar XK8 Test Results The table below shows data from an acceleration test of a 1997 Jaguar

    E-print Network

    Alexander, Roger K.

    Jaguar XK8 Test Results. The table below shows data from an acceleration test of a 1997 Jaguar XK8. For example, 3.7 seconds into the test, the Jaguar was doing 40 miles per hour. t (sec): 0.0, 2.5, 3.7, 5.2, 6.9, 8.8, 11.2, 14.1, 17.5, 22.0, 28.5, 36.7, 49.6; v (mph): 0, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140. … [estimate the distance travelled] by the Jaguar during each of the clock intervals [0, 2.5], [2.5, 3.7], etc. How far has the Jaguar gone …
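
    One way to answer the distance question in this snippet is to integrate the tabulated speeds numerically; a sketch using the trapezoidal rule over the reconstructed (t, v) table above (the 28.5 s entry is inferred from the truncated snippets):

    ```python
    # Reconstructed acceleration-test data; speeds converted from mph to ft/s.
    t = [0.0, 2.5, 3.7, 5.2, 6.9, 8.8, 11.2, 14.1, 17.5, 22.0, 28.5, 36.7, 49.6]  # seconds
    v_mph = [0, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140]
    v = [s * 5280 / 3600 for s in v_mph]  # feet per second

    # Trapezoidal estimate of the distance covered in each clock interval [t_i, t_{i+1}].
    d = [(t[i + 1] - t[i]) * (v[i] + v[i + 1]) / 2 for i in range(len(t) - 1)]
    print([round(x) for x in d[:2]])  # ~55 ft over [0, 2.5] and ~62 ft over [2.5, 3.7]
    print(round(sum(d)))              # total feet travelled over the 49.6-second test
    ```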

  6. Seeking to quantify the ferromagnetic-to-antiferromagnetic interface coupling resulting in exchange bias with various thin-film conformations

    SciTech Connect

    Hsiao, C. H.; Wang, S.; Ouyang, H.; Desautels, R. D.; Lierop, J. van; Lin, K. W.

    2014-08-07

    Ni{sub 3}Fe/(Ni, Fe)O thin films with bilayer and nanocrystallite dispersion morphologies are prepared with a dual ion beam deposition technique permitting precise control of nanocrystallite growth, composition, and admixtures. A bilayer morphology provides a Ni{sub 3}Fe-to-NiO interface, while the dispersion films have different mixtures of Ni{sub 3}Fe, NiO, and FeO nanocrystallites. Using detailed analyses of high resolution transmission electron microscopy images with Multislice simulations, the nanocrystallites' structures and phases are determined, and the intermixing between the Ni{sub 3}Fe, NiO, and FeO interfaces is quantified. From field-cooled hysteresis loops, the exchange bias loop shift from spin interactions at the interfaces are determined. With similar interfacial molar ratios of FM-to-AF, we find the exchange bias field essentially unchanged. However, when the interfacial ratio of FM to AF was FM rich, the exchange bias field increases. Since the FM/AF interface ‘contact’ areas in the nanocrystallite dispersion films are larger than that of the bilayer film, and the nanocrystallite dispersions exhibit larger FM-to-AF interfacial contributions to the magnetism, we attribute the changes in the exchange bias to be from increases in the interfacial segments that suffer defects (such as vacancies and bond distortions), that also affects the coercive fields.

  7. Quantifying the effect of crops surface albedo variability on GHG budgets in a life cycle assessment approach : methodology and results.

    NASA Astrophysics Data System (ADS)

    Ferlicoq, Morgan; Ceschia, Eric; Brut, Aurore; Tallec, Tiphaine

    2013-04-01

    We tested a new method to estimate the radiative forcing of several crops at the annual and rotation scales, using local measurement data from two ICOS experimental sites. We used jointly 1) the radiative forcing caused by greenhouse gas (GHG) net emissions, calculated by using a Life Cycle Analysis (LCA) approach and in situ measurements (Ceschia et al. 2010), and 2) the radiative forcing caused by rapid changes in surface albedo typical of those ecosystems and resulting from management and crop phenology. The carbon and GHG budgets (GHGB) of 2 crop sites with contrasted management located in South West France (Auradé and Lamasquère sites) were estimated over a complete rotation by combining a classical LCA approach with on-site flux measurements. At both sites, carbon inputs (organic fertilisation and seeds), carbon exports (harvest) and net ecosystem production (NEP), measured with the eddy covariance technique, were calculated. The variability of the different terms and their relative contributions to the net ecosystem carbon budget (NECB) were analysed for all site-years, and the effect of management on NECB was assessed. To account for GHG fluxes that were not directly measured on site, we estimated the emissions caused by field operations (EFO) for each site using emission factors from the literature. The EFO were added to the NECB to calculate the total GHGB for a range of cropping systems and management regimes. N2O emissions were calculated following the IPCC (2007) guidelines, and CH4 emissions were assumed to be negligible compared to other contributions to the net GHGB. Additionally, albedo was calculated continuously using the short wave incident and reflected radiation measurements in the field (0.3-3µm) from CNR1 sensors. Mean annual differences in albedo and deduced radiative forcing from a reference value were then compared for all site-years. Mean annual differences in radiative forcing were then converted into g C equivalent m-2 in order to add this effect to the GHG budget (Muñoz et al. 2010). Increasing the length of the vegetative period is considered one of the main levers for improving the NECB of crop ecosystems. Therefore, we also tested the effect of adding intermediate crops or maintaining crop voluntary re-growth on both the NECB and the radiative forcing caused by the changes in mean annual surface albedo. We showed that the NEP was improved and, as a consequence, the NECB and GHGB too. Intermediate crops also increased the mean annual surface albedo and therefore caused a negative radiative forcing (cooling effect) expressed in g C equivalent m-2 (sink). The use of an intermediate crop could in some cases switch the crop from a positive NEP (source) to a negative one (sink), and the change in radiative forcing (up to -110 g C-eq m-2 yr-1) could overwhelm the NEP term.

  8. Airborne Laser Swath Mapping: Results of Field Tests Conducted to Quantify the Effects of Different Ground Covers

    NASA Astrophysics Data System (ADS)

    Carter, W. E.; Shrestha, R. L.; Tuell, G.; Bloomquist, D.; Sartori, M.; Raabe, E.

    2001-12-01

    Most scientific and engineering applications of Airborne Laser Swath Mapping (ALSM) require precisions and/or repeatabilities (relative accuracies) of several decimeters in the horizontal coordinates and a few to several centimeters in the vertical coordinates of the point measurements, or ultimately of surface features derived from the point measurements. Manufacturers generally use components consistent with this level of performance and laboratory calibration and testing results indicate that instrumental errors are within these bounds. However, field observations include additional sources of error that can vary significantly from project to project. Comparisons of results from an ALSM system operated by the University of Florida (Optech Model 1210) and ground survey values, on a point-by-point basis, and as profiles cut from Digital Elevation Models, consistently yield RMS differences of 30 to 50 cm in horizontal coordinates, and 4 to 8 cm in the vertical coordinates, for points on smooth bare surfaces such as pavements, roofs, and sand beaches. These numbers increase in steep or rugged terrain, and in areas covered with vegetation. Results from recent projects will be presented that illustrate the effects of different ground covers, including grass, row crops, marsh grasses, coastal mangroves, open pine and dense mixed forests. Examples illustrating the use of laser intensity values, multiple stops per pulse, and filtering algorithms, to minimize the degradation caused by ground cover, will also be presented.

  9. Development and application of methods to quantify spatial and temporal hyperpolarized 3He MRI ventilation dynamics: preliminary results in chronic obstructive pulmonary disease

    NASA Astrophysics Data System (ADS)

    Kirby, Miranda; Wheatley, Andrew; McCormack, David G.; Parraga, Grace

    2010-03-01

    Hyperpolarized helium-3 (3He) magnetic resonance imaging (MRI) has emerged as a non-invasive research method for quantifying lung structural and functional changes, enabling direct visualization in vivo at high spatial and temporal resolution. Here we describe the development of methods for quantifying ventilation dynamics in response to salbutamol in Chronic Obstructive Pulmonary Disease (COPD). A whole-body 3.0 Tesla Excite 12.0 MRI system was used to obtain multi-slice coronal images acquired immediately after subjects inhaled hyperpolarized 3He gas. Ventilated volume (VV), ventilation defect volume (VDV) and thoracic cavity volume (TCV) were recorded following segmentation of 3He and 1H images respectively, and used to calculate percent ventilated volume (PVV) and ventilation defect percent (VDP). Manual segmentation and Otsu thresholding were significantly correlated for VV (r=.82, p=.001), VDV (r=.87, p=.0002), PVV (r=.85, p=.0005), and VDP (r=.85, p=.0005). The level of agreement between these segmentation methods was also evaluated using Bland-Altman analysis, and this showed that manual segmentation was consistently higher for VV (Mean=.22 L, SD=.05) and consistently lower for VDV (Mean=-.13, SD=.05) measurements than Otsu thresholding. To automate the quantification of newly ventilated pixels (NVp) post-bronchodilator, we used translation, rotation, and scaling transformations to register pre- and post-salbutamol images. There was a significant correlation between NVp and VDV (r=-.94, p=.005) and between percent newly ventilated pixels (PNVp) and VDP (r=-.89, p=.02), but not for VV or PVV. Evaluation of 3He MRI ventilation dynamics using Otsu thresholding and landmark-based image registration provides a way to regionally quantify functional changes in COPD subjects after treatment with beta-agonist bronchodilators, a common COPD and asthma therapy.
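
    A minimal sketch of the volume-percentage metrics named above, assuming the usual definitions PVV = 100 x VV / TCV and VDP = 100 x VDV / TCV (the abstract names the quantities but does not spell out the formulas); the volumes below are illustrative, not study values:

    ```python
    def ventilation_metrics(vv_l, vdv_l, tcv_l):
        """Percent ventilated volume (PVV) and ventilation defect percent (VDP),
        assuming PVV = 100*VV/TCV and VDP = 100*VDV/TCV."""
        return 100.0 * vv_l / tcv_l, 100.0 * vdv_l / tcv_l

    pvv, vdp = ventilation_metrics(vv_l=4.2, vdv_l=0.9, tcv_l=5.1)  # litres, illustrative
    print(round(pvv, 1), round(vdp, 1))  # ~82.4 and ~17.6
    ```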

  10. Classroom Assessments of 6000 Teachers: What Do the Results Show about the Effectiveness of Teaching and Learning?

    ERIC Educational Resources Information Center

    Hill, Flo H.; And Others

    This paper presents the results of a series of summary analyses of descriptive statistics concerning 5,720 Louisiana teachers who were assessed with the System for Teaching and Learning Assessment and Review (STAR)--a comprehensive on-the-job statewide teacher assessment system--during the second pilot year (1989-90). Data were collected by about…

  11. Early Results Show Reduced Infection Rate Using No-touch Technique for Expander/ADM Breast Reconstruction

    PubMed Central

    2015-01-01

    Summary: Infection is a common complication of immediate breast reconstruction that often leads to device removal, a result emotionally devastating to the patient and frustrating for her surgeon. “No-touch” techniques have been used in other surgical disciplines and plastic surgery, but they have not been reported for breast reconstruction with tissue expanders or implants and acellular dermis. We report a novel technique of tissue expander and acellular dermis placement using no-touch principles with a self-retaining retractor system that holds promise to decrease infectious complications of breast reconstruction. PMID:25878928

  12. Results of assessments by year of cohort The following pages show the results of the assessments carried out over the six-year period of the

    E-print Network

    Bradbeer, Robin Sarah

    Snippet: … a difference as big as this would have occurred if the samples were drawn from the same population. Conventionally … box-plots and parametric statistics. The first table below the box-plot chart shows the data used to draw the box-plots …

  13. QUANTIFYING FOREST ABOVEGROUND CARBON POOLS AND FLUXES USING MULTI-TEMPORAL LIDAR A report on field monitoring, remote sensing MMV, GIS integration, and modeling results for forestry field validation test to quantify aboveground tree biomass and carbon

    SciTech Connect

    Lee Spangler; Lee A. Vierling; Eva K. Strand; Andrew T. Hudak; Jan U.H. Eitel; Sebastian Martinuzzi

    2012-04-01

    Sound policy recommendations relating to the role of forest management in mitigating atmospheric carbon dioxide (CO{sub 2}) depend upon establishing accurate methodologies for quantifying forest carbon pools for large tracts of land that can be dynamically updated over time. Light Detection and Ranging (LiDAR) remote sensing is a promising technology for achieving accurate estimates of aboveground biomass and thereby carbon pools; however, not much is known about the accuracy of estimating biomass change and carbon flux from repeat LiDAR acquisitions containing different data sampling characteristics. In this study, discrete return airborne LiDAR data was collected in 2003 and 2009 across {approx}20,000 hectares (ha) of an actively managed, mixed conifer forest landscape in northern Idaho, USA. Forest inventory plots, located via a stratified random sampling design, were established and sampled in 2003 and 2009. The Random Forest machine learning algorithm was used to establish statistical relationships between inventory data and forest structural metrics derived from the LiDAR acquisitions. Aboveground biomass maps were created for the study area based on statistical relationships developed at the plot level. Over this 6-year period, we found that the mean increase in biomass due to forest growth across the non-harvested portions of the study area was 4.8 metric tons/hectare (Mg/ha). In these non-harvested areas, we found a significant difference in biomass increase among forest successional stages, with a higher biomass increase in mature and old forest compared to stand initiation and young forest. Approximately 20% of the landscape had been disturbed by harvest activities during the six-year time period, representing a biomass loss of >70 Mg/ha in these areas. During the study period, these harvest activities outweighed growth at the landscape scale, resulting in an overall loss in aboveground carbon at this site. The 30-fold increase in sampling density between the 2003 and 2009 acquisitions did not affect the biomass estimates. Overall, LiDAR data coupled with field reference data offer a powerful method for calculating pools and changes in aboveground carbon in forested systems. The results of our study suggest that multitemporal LiDAR-based approaches are likely to be useful for high quality estimates of aboveground carbon change in conifer forest systems.
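
    A minimal sketch of the plot-level modelling step described above (field-measured biomass regressed on LiDAR structural metrics with Random Forest), assuming a hypothetical plot table and column names; this is an illustration, not the project's code:

    ```python
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    # Hypothetical plot table: one row per inventory plot, LiDAR metrics as predictors.
    plots = pd.read_csv("plots_2009.csv")  # assumed file name and layout
    predictors = ["height_p95", "height_mean", "canopy_cover", "return_density"]
    X, y = plots[predictors], plots["aboveground_biomass_mg_ha"]

    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    print(cross_val_score(rf, X, y, cv=5, scoring="r2"))  # plot-level predictive skill

    # Fit on all plots; the fitted model would then be applied to wall-to-wall
    # LiDAR metric rasters to map biomass across the landscape (not shown).
    rf.fit(X, y)
    ```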

  14. Quantifying the Intermittency of Mars Surface Habitability Habitable-zone planets such as Mars host natural origin-of-life experiments whose results are

    E-print Network

    Quantifying the Intermittency of Mars Surface Habitability. Snippet: … requires quantifying the intermittency of habitable surface conditions [1-3]. We will do this by analyzing … supported microbial life [4], but habitability could not have been both long-lasting and global because …

  15. QUANTIFYING SPICULES

    SciTech Connect

    Pereira, Tiago M. D.; De Pontieu, Bart; Carlsson, Mats

    2012-11-01

    Understanding the dynamic solar chromosphere is fundamental in solar physics. Spicules are an important feature of the chromosphere, connecting the photosphere to the corona, potentially mediating the transfer of energy and mass. The aim of this work is to study the properties of spicules over different regions of the Sun. Our goal is to investigate if there is more than one type of spicule, and how spicules behave in the quiet Sun, coronal holes, and active regions. We make use of high cadence and high spatial resolution Ca II H observations taken by Hinode/Solar Optical Telescope. Making use of a semi-automated detection algorithm, we self-consistently track and measure the properties of 519 spicules over different regions. We find clear evidence of two types of spicules. Type I spicules show a rise and fall and have typical lifetimes of 150-400 s and maximum ascending velocities of 15-40 km s{sup -1}, while type II spicules have shorter lifetimes of 50-150 s, faster velocities of 30-110 km s{sup -1}, and are not seen to fall down, but rather fade at around their maximum length. Type II spicules are the most common, seen in the quiet Sun and coronal holes. Type I spicules are seen mostly in active regions. There are regional differences between quiet-Sun and coronal hole spicules, likely attributable to the different field configurations. The properties of type II spicules are consistent with published results of rapid blueshifted events (RBEs), supporting the hypothesis that RBEs are their disk counterparts. For type I spicules we find the relations between their properties to be consistent with a magnetoacoustic shock wave driver, and with dynamic fibrils as their disk counterpart. The driver of type II spicules remains unclear from limb observations.

  16. Uncertainty quantified trait predictions

    NASA Astrophysics Data System (ADS)

    Fazayeli, Farideh; Kattge, Jens; Banerjee, Arindam; Schrodt, Franziska; Reich, Peter

    2015-04-01

    Functional traits of organisms are key to understanding and predicting biodiversity and ecological change, which motivates continuous collection of traits and their integration into global databases. Such composite trait matrices are inherently sparse, severely limiting their usefulness for further analyses. On the other hand, traits are characterized by the phylogenetic trait signal, trait-trait correlations and environmental constraints, all of which provide information that could be used to statistically fill gaps. We propose the application of probabilistic models which, for the first time, utilize all three characteristics to fill gaps in trait databases and predict trait values at larger spatial scales. For this purpose we introduce BHPMF, a hierarchical Bayesian extension of Probabilistic Matrix Factorization (PMF). PMF is a machine learning technique which exploits the correlation structure of sparse matrices to impute missing entries. BHPMF additionally utilizes the taxonomic hierarchy for trait prediction. Implemented in the context of a Gibbs Sampler MCMC approach, BHPMF provides uncertainty estimates for each trait prediction. We present comprehensive experimental results on the problem of plant trait prediction using the largest database of plant traits, where BHPMF shows strong empirical performance in uncertainty quantified trait prediction, outperforming the state-of-the-art based on point estimates. Further, we show that BHPMF is more accurate when it is confident, whereas the error is high when the uncertainty is high.
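
    A minimal sketch of the plain probabilistic-matrix-factorization idea that BHPMF builds on: filling gaps in a sparse species-by-trait matrix with a low-rank factorization fitted to the observed entries only. It deliberately omits the taxonomic hierarchy and the Gibbs-sampler uncertainty estimates that distinguish BHPMF, and it is not the authors' implementation:

    ```python
    import numpy as np

    def pmf_fill(X, rank=2, n_iter=2000, lr=0.01, reg=0.05, seed=0):
        """Fill NaN entries of X with a low-rank factorization U @ V.T,
        fitted by gradient descent on the observed entries only."""
        rng = np.random.default_rng(seed)
        mask = ~np.isnan(X)
        U = 0.1 * rng.standard_normal((X.shape[0], rank))
        V = 0.1 * rng.standard_normal((X.shape[1], rank))
        for _ in range(n_iter):
            err = np.where(mask, X - U @ V.T, 0.0)  # residuals on observed cells only
            U_new = U + lr * (err @ V - reg * U)
            V = V + lr * (err.T @ U - reg * V)
            U = U_new
        return np.where(mask, X, U @ V.T)           # keep observations, impute the gaps

    # Toy trait matrix (rows: species, columns: traits); NaN marks unobserved values.
    X = np.array([[1.2, np.nan, 3.0],
                  [1.1, 2.1, np.nan],
                  [np.nan, 2.0, 2.9],
                  [1.3, 2.2, 3.1]])
    print(pmf_fill(X).round(2))
    ```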

  17. "The Show"

    ERIC Educational Resources Information Center

    Gehring, John

    2004-01-01

    For the past 16 years, the blue-collar city of Huntington, West Virginia, has rolled out the red carpet to welcome young wrestlers and their families as old friends. They have come to town chasing the same dream for a spot in what many of them call "The Show". For three days, under the lights of an arena packed with 5,000 fans, the state's best…

  18. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    SciTech Connect

    Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro

    2015-05-01

    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).

  19. Reanalysis of mGWAS results and in vitro validation show that lactate dehydrogenase interacts with branched-chain amino acid metabolism.

    PubMed

    Heemskerk, Mattijs M; van Harmelen, Vanessa Ja; van Dijk, Ko Willems; van Klinken, Jan Bert

    2016-01-01

    The assignment of causative genes to noncoding variants identified in genome-wide association studies (GWASs) is challenging. We show how combination of knowledge from gene and pathway databases and chromatin interaction data leads to reinterpretation of published quantitative trait loci for blood metabolites. We describe a previously unidentified link between the rs2403254 locus, which is associated with the ratio of 3-methyl-2-oxobutanoate and alpha-hydroxyisovalerate levels, and the distal LDHA gene. We confirmed that lactate dehydrogenase can catalyze the conversion between these metabolites in vitro, suggesting that it has a role in branched-chain amino acid metabolism. Examining datasets from the ENCODE project we found evidence that the locus and LDHA promoter physically interact, showing that LDHA expression is likely under control of distal regulatory elements. Importantly, this discovery demonstrates that bioinformatic workflows for data integration can have a vital role in the interpretation of GWAS results. PMID:26014429

  20. Two heteronuclear dipolar results at the price of one: Quantifying Na/P contacts in phosphosilicate glasses and biomimetic hydroxy-apatite

    NASA Astrophysics Data System (ADS)

    Stevensson, Baltzar; Mathew, Renny; Yu, Yang; Edén, Mattias

    2015-02-01

    The analysis of S{I} recoupling experiments applied to amorphous solids yields a heteronuclear second moment M2 (S-I) that represents the effective through-space dipolar interaction between the detected S spins and the neighboring I-spin species. We show that both M2 (S-I) and M2 (I-S) values are readily accessible from a sole S{I} or I{S} experiment, which may involve either S or I detection, and is naturally selected as the most favorable option under the given experimental conditions. For the common case where I has half-integer spin, an I{S} REDOR implementation is preferred to the S{I} REAPDOR counterpart. We verify the procedure by 23Na{31P} REDOR and 31P{23Na} REAPDOR NMR applied to Na2O-CaO-SiO2-P2O5 glasses and biomimetic hydroxyapatite, where the M2 (P-Na) values directly determined by REAPDOR agree very well with those derived from the corresponding M2 (Na-P) results measured by REDOR. Moreover, we show that dipolar second moments are readily extracted from the REAPDOR NMR protocol by a straightforward numerical fitting of the initial dephasing data, in direct analogy with the well-established procedure to determine M2 (S-I) values from REDOR NMR experiments applied to amorphous materials; this avoids the problems with time-consuming numerically exact simulations whose accuracy is limited for describing the dynamics of a priori unknown multi-spin systems in disordered structures.

  1. Two heteronuclear dipolar results at the price of one: quantifying Na/P contacts in phosphosilicate glasses and biomimetic hydroxy-apatite.

    PubMed

    Stevensson, Baltzar; Mathew, Renny; Yu, Yang; Edén, Mattias

    2015-02-01

    The analysis of S{I} recoupling experiments applied to amorphous solids yields a heteronuclear second moment M(2)(S-I) that represents the effective through-space dipolar interaction between the detected S spins and the neighboring I-spin species. We show that both M(2)(S-I) and M(2)(I-S) values are readily accessible from a sole S{I} or I{S} experiment, which may involve either S or I detection, and is naturally selected as the most favorable option under the given experimental conditions. For the common case where I has half-integer spin, an I{S} REDOR implementation is preferred to the S{I} REAPDOR counterpart. We verify the procedure by (23)Na{(31)P} REDOR and (31)P{(23)Na} REAPDOR NMR applied to Na(2)O-CaO-SiO(2)-P(2)O(5) glasses and biomimetic hydroxyapatite, where the M(2)(P-Na) values directly determined by REAPDOR agree very well with those derived from the corresponding M(2)(Na-P) results measured by REDOR. Moreover, we show that dipolar second moments are readily extracted from the REAPDOR NMR protocol by a straightforward numerical fitting of the initial dephasing data, in direct analogy with the well-established procedure to determine M(2)(S-I) values from REDOR NMR experiments applied to amorphous materials; this avoids the problems with time-consuming numerically exact simulations whose accuracy is limited for describing the dynamics of a priori unknown multi-spin systems in disordered structures. PMID:25557863

  2. Analysis of conservative tracer measurement results using the Frechet distribution at planted horizontal subsurface flow constructed wetlands filled with coarse gravel and showing the effect of clogging processes.

    PubMed

    Dittrich, Ernő; Klincsik, Mihály

    2015-11-01

    A mathematical process, developed in Maple environment, has been successful in decreasing the error of measurement results and in the precise calculation of the moments of corrected tracer functions. It was proved that with this process, the measured tracer results of horizontal subsurface flow constructed wetlands filled with coarse gravel (HSFCW-C) can be fitted more accurately than with the conventionally used distribution functions (Gaussian, Lognormal, Fick (Inverse Gaussian) and Gamma). This statement is true only for the planted HSFCW-Cs. The analysis of unplanted HSFCW-Cs needs more research. The result of the analysis shows that the conventional solutions (completely stirred series tank reactor (CSTR) model and convection-dispersion transport (CDT) model) cannot describe these types of transport processes with sufficient accuracy. These outcomes can help in developing better process descriptions of very difficult transport processes in HSFCW-Cs. Furthermore, a new mathematical process can be developed for the calculation of real hydraulic residence time (HRT) and dispersion coefficient values. The presented method can be generalized to other kinds of hydraulic environments. PMID:26126688
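    A minimal sketch of the core idea of describing a tracer response with a Fréchet distribution and reading hydraulic quantities off its moments; it uses SciPy's invweibull (Fréchet) parameterization and a synthetic breakthrough curve, and is not the Maple-based correction procedure developed in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import invweibull

rng = np.random.default_rng(0)

# Hypothetical tracer breakthrough curve from an HSFCW-C: time (h) vs. normalized concentration.
t = np.linspace(1.0, 300.0, 150)
c_meas = invweibull.pdf(t, c=2.5, scale=40.0) + rng.normal(0.0, 2e-4, t.size)

def frechet_rtd(t, shape, scale, area):
    # Frechet (inverse Weibull) density scaled by an area term for the recovered tracer mass.
    return area * invweibull.pdf(t, c=shape, scale=scale)

popt, _ = curve_fit(frechet_rtd, t, c_meas, p0=[2.0, 30.0, 1.0], maxfev=10000)
shape, scale, area = popt

# Moments of the fitted residence-time distribution; the mean serves as an HRT estimate.
hrt = invweibull.mean(c=shape, scale=scale)        # finite for shape > 1
spread = invweibull.std(c=shape, scale=scale)      # finite for shape > 2
print(f"fitted HRT ~ {hrt:.1f} h, spread ~ {spread:.1f} h")
```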

  3. Quantifiable Lateral Flow Assay Test Strips

    NASA Technical Reports Server (NTRS)

    2003-01-01

    As easy to read as a home pregnancy test, three Quantifiable Lateral Flow Assay (QLFA) strips used to test water for E. coli show different results. The brightly glowing control line on the far right of each strip indicates that all three tests ran successfully. But the glowing test line at the middle left of the middle and bottom strips reveals that their samples were contaminated with E. coli bacteria at two different concentrations. The color intensity correlates with the concentration of contamination.

  4. How often do German children and adolescents show signs of common mental health problems? Results from different methodological approaches – a cross-sectional study

    PubMed Central

    2014-01-01

    Background Child and adolescent mental health problems are ubiquitous and burdensome. Their impact on functional disability, the high rates of accompanying medical illnesses and the potential to last until adulthood make them a major public health issue. While methodological factors cause variability of the results from epidemiological studies, there is a lack of prevalence rates of mental health problems in children and adolescents according to ICD-10 criteria from nationally representative samples. International findings suggest only a small proportion of children with function impairing mental health problems receive treatment, but information about the health care situation of children and adolescents is scarce. The aim of this epidemiological study was a) to classify symptoms of common mental health problems according to ICD-10 criteria in order to compare the statistical and clinical case definition strategies using a single set of data and b) to report ICD-10 codes from health insurance claims data. Methods a) Based on a clinical expert rating, questionnaire items were mapped on ICD-10 criteria; data from the Mental Health Module (BELLA study) were analyzed for relevant ICD-10 and cut-off criteria; b) Claims data were analyzed for relevant ICD-10 codes. Results According to parent report, 7.5% (n = 208) met the ICD-10 criteria of a mild depressive episode and 11% (n = 305) showed symptoms of depression according to cut-off score; anxiety is reported in 5.6% (n = 156) and 11.6% (n = 323), conduct disorder in 15.2% (n = 373) and 14.6% (n = 357). Self-reported symptoms in 11 to 17 year olds resulted in 15% (n = 279) reporting signs of a mild depression according to ICD-10 criteria (vs. 16.7% (n = 307) based on cut-off) and 10.9% (n = 201) reporting symptoms of anxiety (vs. 15.4% (n = 283)). Results from routine data identify 0.9% (n = 1,196) with a depression diagnosis, 3.1% (n = 6,729) with anxiety and 1.4% (n = 3,100) with conduct disorder in outpatient health care. Conclusions Statistical and clinical case definition strategies show moderate concordance for depression and conduct disorder in a German national sample. The comparatively lower rates of children and adolescents with diagnosed mental health problems in the outpatient health care setting support the assumption that only a small number of children and adolescents in need of treatment actually receive it. PMID:24597565

  5. Transgene silencing of the Hutchinson-Gilford progeria syndrome mutation results in a reversible bone phenotype, whereas resveratrol treatment does not show overall beneficial effects.

    PubMed

    Strandgren, Charlotte; Nasser, Hasina Abdul; McKenna, Tomás; Koskela, Antti; Tuukkanen, Juha; Ohlsson, Claes; Rozell, Björn; Eriksson, Maria

    2015-08-01

    Hutchinson-Gilford progeria syndrome (HGPS) is a rare premature aging disorder that is most commonly caused by a de novo point mutation in exon 11 of the LMNA gene, c.1824C>T, which results in an increased production of a truncated form of lamin A known as progerin. In this study, we used a mouse model to study the possibility of recovering from HGPS bone disease upon silencing of the HGPS mutation, and the potential benefits from treatment with resveratrol. We show that complete silencing of the transgenic expression of progerin normalized bone morphology and mineralization already after 7 weeks. The improvements included lower frequencies of rib fractures and callus formation, an increased number of osteocytes in remodeled bone, and normalized dentinogenesis. The beneficial effects from resveratrol treatment were less significant and to a large extent similar to mice treated with sucrose alone. However, the reversal of the dental phenotype of overgrown and laterally displaced lower incisors in HGPS mice could be attributed to resveratrol. Our results indicate that the HGPS bone defects were reversible upon suppressed transgenic expression and suggest that treatments targeting aberrant progerin splicing give hope to patients who are affected by HGPS. PMID:25877214

  6. Magnetic Sphincter Augmentation for Gastroesophageal Reflux at 5 Years: Final Results of a Pilot Study Show Long-Term Acid Reduction and Symptom Improvement

    PubMed Central

    Saino, Greta; Bonavina, Luigi; Lipham, John C.; Dunn, Daniel

    2015-01-01

    Abstract Background: As previously reported, the magnetic sphincter augmentation device (MSAD) preserves gastric anatomy and results in less severe side effects than traditional antireflux surgery. The final 5-year results of a pilot study are reported here. Patients and Methods: A prospective, multicenter study evaluated safety and efficacy of the MSAD for 5 years. Prior to MSAD placement, patients had abnormal esophageal acid and symptoms poorly controlled by proton pump inhibitors (PPIs). Patients served as their own control, which allowed comparison between baseline and postoperative measurements to determine individual treatment effect. At 5 years, gastroesophageal reflux disease (GERD)-Health Related Quality of Life (HRQL) questionnaire score, esophageal pH, PPI use, and complications were evaluated. Results: Between February 2007 and October 2008, 44 patients (26 males) had an MSAD implanted by laparoscopy, and 33 patients were followed up at 5 years. Mean total percentage of time with pH < 4 was 11.9% at baseline and 4.6% at 5 years (P…). The study shows the relative safety and efficacy of magnetic sphincter augmentation for GERD. PMID:26437027

  7. Rapamycin and chloroquine: the in vitro and in vivo effects of autophagy-modifying drugs show promising results in valosin containing protein multisystem proteinopathy.

    PubMed

    Nalbandian, Angèle; Llewellyn, Katrina J; Nguyen, Christopher; Yazdi, Puya G; Kimonis, Virginia E

    2015-01-01

    Mutations in the valosin containing protein (VCP) gene cause hereditary Inclusion body myopathy (hIBM) associated with Paget disease of bone (PDB), frontotemporal dementia (FTD), more recently termed multisystem proteinopathy (MSP). Affected individuals exhibit scapular winging and die from progressive muscle weakness, and cardiac and respiratory failure, typically in their 40s to 50s. Histologically, patients show the presence of rimmed vacuoles and TAR DNA-binding protein 43 (TDP-43)-positive large ubiquitinated inclusion bodies in the muscles. We have generated a VCPR155H/+ mouse model which recapitulates the disease phenotype and impaired autophagy typically observed in patients with VCP disease. Autophagy-modifying agents, such as rapamycin and chloroquine, at pharmacological doses have previously been shown to alter the autophagic flux. Herein, we report results of administration of rapamycin, a specific inhibitor of the mechanistic target of rapamycin (mTOR) signaling pathway, and chloroquine, a lysosomal inhibitor that blocks autophagy by accumulating in lysosomes, in 20-month-old VCPR155H/+ mice. Rapamycin-treated mice demonstrated significant improvement in muscle performance and quadriceps histological analysis, and rescue of ubiquitin and TDP-43 pathology and of defective autophagy, as indicated by decreased protein expression levels of LC3-I/II, p62/SQSTM1, and optineurin and by inhibition of the mTORC1 substrates. Conversely, chloroquine-treated VCPR155H/+ mice revealed progressive muscle weakness, cytoplasmic accumulation of TDP-43, ubiquitin-positive inclusion bodies and increased LC3-I/II, p62/SQSTM1, and optineurin expression levels. Our in vitro studies of patient myoblasts treated with rapamycin demonstrated an overall improvement in the autophagy markers. Targeting the mTOR pathway ameliorates an increasing list of disorders, and these findings suggest that VCP disease and related neurodegenerative multisystem proteinopathies can now be included as disorders that can potentially be ameliorated by rapalogs. PMID:25884947

  8. Rapamycin and Chloroquine: The In Vitro and In Vivo Effects of Autophagy-Modifying Drugs Show Promising Results in Valosin Containing Protein Multisystem Proteinopathy

    PubMed Central

    Nalbandian, Angèle; Llewellyn, Katrina J.; Nguyen, Christopher; Yazdi, Puya G.; Kimonis, Virginia E.

    2015-01-01

    Mutations in the valosin containing protein (VCP) gene cause hereditary Inclusion body myopathy (hIBM) associated with Paget disease of bone (PDB), frontotemporal dementia (FTD), more recently termed multisystem proteinopathy (MSP). Affected individuals exhibit scapular winging and die from progressive muscle weakness, and cardiac and respiratory failure, typically in their 40s to 50s. Histologically, patients show the presence of rimmed vacuoles and TAR DNA-binding protein 43 (TDP-43)-positive large ubiquitinated inclusion bodies in the muscles. We have generated a VCPR155H/+ mouse model which recapitulates the disease phenotype and impaired autophagy typically observed in patients with VCP disease. Autophagy-modifying agents, such as rapamycin and chloroquine, at pharmacological doses have previously been shown to alter the autophagic flux. Herein, we report results of administration of rapamycin, a specific inhibitor of the mechanistic target of rapamycin (mTOR) signaling pathway, and chloroquine, a lysosomal inhibitor that blocks autophagy by accumulating in lysosomes, in 20-month-old VCPR155H/+ mice. Rapamycin-treated mice demonstrated significant improvement in muscle performance and quadriceps histological analysis, and rescue of ubiquitin and TDP-43 pathology and of defective autophagy, as indicated by decreased protein expression levels of LC3-I/II, p62/SQSTM1, and optineurin and by inhibition of the mTORC1 substrates. Conversely, chloroquine-treated VCPR155H/+ mice revealed progressive muscle weakness, cytoplasmic accumulation of TDP-43, ubiquitin-positive inclusion bodies and increased LC3-I/II, p62/SQSTM1, and optineurin expression levels. Our in vitro studies of patient myoblasts treated with rapamycin demonstrated an overall improvement in the autophagy markers. Targeting the mTOR pathway ameliorates an increasing list of disorders, and these findings suggest that VCP disease and related neurodegenerative multisystem proteinopathies can now be included as disorders that can potentially be ameliorated by rapalogs. PMID:25884947

  9. Modeling upward brine migration through faults as a result of CO2 storage in the Northeast German Basin shows negligible salinization in shallow aquifers

    NASA Astrophysics Data System (ADS)

    Kuehn, M.; Tillner, E.; Kempka, T.; Nakaten, B.

    2012-12-01

    The geological storage of CO2 in deep saline formations may cause salinization of shallower freshwater resources by upward flow of displaced brine from the storage formation into potable groundwater. In this regard, permeable faults or fractures can serve as potential leakage pathways for upward brine migration. The present study uses a regional-scale 3D model based on real structural data of a prospective CO2 storage site in Northeastern Germany to determine the impact of compartmentalization and fault permeability on upward brine migration as a result of pressure elevation by CO2 injection. To evaluate the degree of salinization in the shallower aquifers, different fault leakage scenarios were carried out using a newly developed workflow in which the model grid from the software package Petrel applied for pre-processing is transferred to the reservoir simulator TOUGH2-MP/ECO2N. A discrete fault description is achieved by using virtual elements. A static 3D geological model of the CO2 storage site with an areal size of 40 km x 40 km and a thickness of 766 m was implemented. Subsequently, large-scale numerical multi-phase multi-component (CO2, NaCl, H2O) flow simulations were carried out on a high performance computing system. The prospective storage site, located in the Northeast German Basin, is part of an anticline structure characterized by a saline multi-layer aquifer system. The NE and SW boundaries of the study area are confined by the Fuerstenwalde Gubener and the Lausitzer Abbruch fault zones represented by four discrete faults in the model. Two formations of the Middle Bunter were chosen to assess brine migration through faults triggered by an annual injection rate of 1.7 Mt CO2 into the lowermost formation over a time span of 20 years. In addition to varying fault permeabilities, different boundary conditions were applied to evaluate the effects of reservoir compartmentalization. Simulation results show that the highest pressurization within the storage formation, with a relative pressure increase of up to 150 % after 20 years of injection, is caused by strong compartmentalization effects if closed boundaries and closed faults are assumed. The CO2 plume is considerably smaller compared to those that develop when laterally open boundaries are applied. Laterally open boundaries and highly permeable faults lead to the strongest pressure dissipation and cause the CO2 plume to come up almost 3 km closer to the fault. Closed model boundaries in the lower aquifers and four highly permeable faults (> 1,000 mD) lead to the highest salinities in the uppermost Stuttgart formation, with an average salinity increase of 0.24 % (407 mg/l) after 20 years of injection. Smaller salinity changes in the uppermost aquifers are observed with closed boundaries in the lower aquifers and only one major fault open for brine flow. In this case, fault permeability unexpectedly does not significantly influence salinization in the uppermost Stuttgart formation. Salinity increases by 0.04% (75 mg/l) for a fault permeability of 1,000 mD and by at least 0.06 % (96 mg/l) for a fault permeability of 10,000 mD by the end of injection. Taking the modeling results into account, shallow aquifer salinization is not expected to be of concern for the investigated study area in the Northeast German Basin.

  10. Storytelling Slide Shows to Improve Diabetes and High Blood Pressure Knowledge and Self-Efficacy: Three-Year Results among Community Dwelling Older African Americans

    ERIC Educational Resources Information Center

    Bertera, Elizabeth M.

    2014-01-01

    This study combined the African American tradition of oral storytelling with the Hispanic medium of "Fotonovelas." A staggered pretest posttest control group design was used to evaluate four Storytelling Slide Shows on health that featured community members. A total of 212 participants were recruited for the intervention and 217 for the…

  11. Two yeast acid phosphatase structural genes are the result of a tandem duplication and show different degrees of homology in their promoter and coding sequences.

    PubMed Central

    Meyhack, B; Bajwa, W; Rudolph, H; Hinnen, A

    1982-01-01

    We have cloned the structural genes for a regulated (PHO5) and a constitutive (PHO3) acid phosphatase from yeast by transformation and complementation of a yeast pho3, pho5 double mutant. Both genes are located on a 5.1-kb BamHI fragment. The cloned genes were identified on the basis of genetic evidence and by hybrid selection of mRNA coupled with in vitro translation and immunoprecipitation. Subcloning of partial Sau3A digests and functional in vivo analysis by transformation together with DNA sequence analysis showed that the two genes are oriented in the order (5') PHO5, PHO3 (3'). While the nucleotide sequences of the two coding regions are quite similar, the putative promoter regions show a lower degree of sequence homology. Partly divergent promoter sequences may explain the different regulation of the two genes. PMID:6329697

  12. Map Showing Earthquake Shaking and Tsunami Hazard in Guadeloupe and Dominica, as a Result of an M8.0 Earthquake on the Lesser Antilles Megathrust

    USGS Multimedia Gallery

    Earthquake shaking (on land) and tsunami (ocean) hazard in Guadeloupe and Dominica, as a result of an M8.0 earthquake on the Lesser Antilles megathrust adjacent to Guadeloupe. Colors on land represent scenario earthquake shaking intensities calculated in USGS ShakeMap software (Wald et al. 20...

  13. Processing of Numerical and Proportional Quantifiers.

    PubMed

    Shikhare, Sailee; Heim, Stefan; Klein, Elise; Huber, Stefan; Willmes, Klaus

    2015-09-01

    Quantifier expressions like "many" and "at least" are part of a rich repository of words in language representing magnitude information. The role of numerical processing in comprehending quantifiers was studied in a semantic truth value judgment task, asking adults to quickly verify sentences about visual displays using numerical (at least seven, at least thirteen, at most seven, at most thirteen) or proportional (many, few) quantifiers. The visual displays were composed of systematically varied proportions of yellow and blue circles. The results demonstrated that numerical estimation and numerical reference information are fundamental in encoding the meaning of quantifiers in terms of response times and acceptability judgments. However, a difference emerges in the comparison strategies when a fixed external reference numerosity (seven or thirteen) is used for numerical quantifiers, whereas an internal numerical criterion is invoked for proportional quantifiers. Moreover, for both quantifier types, quantifier semantics and its polarity (positive vs. negative) biased the response direction (accept/reject). Overall, our results indicate that quantifier comprehension involves core numerical and lexical semantic properties, demonstrating integrated processing of language and numbers. PMID:25631283

  14. Quantifying Qualitative Learning.

    ERIC Educational Resources Information Center

    Bogus, Barbara

    1995-01-01

    A teacher at an alternative school for at-risk students discusses the development of student assessment that increases students' self-esteem, convinces students that learning is fun, and prepares students to return to traditional school settings. She found that allowing students to participate in the assessment process successfully quantified the…

  15. Quantifying Faculty Workloads.

    ERIC Educational Resources Information Center

    Archer, J. Andrew

    Teaching load depends on many variables; however, most colleges define it strictly in terms of contact or credit hours. The failure to give weight to variables such as the number of preparations, the number of students served, and committee and other noninstructional assignments is usually due to the lack of a formula that will quantify the effects of these…

  16. Meditations on Quantified Constraint Satisfaction

    E-print Network

    Chen, Hubie

    2012-01-01

    The quantified constraint satisfaction problem (QCSP) is the problem of deciding, given a structure and a first-order prenex sentence whose quantifier-free part is the conjunction of atoms, whether or not the sentence holds on the structure. One obtains a family of problems by defining, for each structure B, the problem QCSP(B) to be the QCSP where the structure is fixed to be B. In this article, we offer a viewpoint on the research program of understanding the complexity of the problems QCSP(B) on finite structures. In particular, we propose and discuss a group of conjectures; throughout, we attempt to place the conjectures in relation to existing results and to emphasize open issues and potential research directions.

  17. Quantifying magma segregation in dykes

    NASA Astrophysics Data System (ADS)

    Yamato, P.; Duretz, T.; May, D. A.; Tartèse, R.

    2015-10-01

    The dynamics of magma flow is highly affected by the presence of a crystalline load. During magma ascent, it has been demonstrated that crystal-melt segregation constitutes a viable mechanism for magmatic differentiation. Moreover, crystal-melt segregation during magma transport has important implications not only in terms of magma rheology, but also in terms of differentiation of the continental crust. However, the influences of the crystal volume percentage (φ), crystal geometry, size and density on crystal-melt segregation are still not well constrained. To address these issues, we performed a parametric study using 2D direct numerical simulations, which model the ascension of a crystal-bearing magma in a vertical dyke. Using these models, we have characterised the amount of segregation as a function of different physical properties including φ, the density contrast between crystals and the melt phase (Δρ), the size of the crystals (Ac) and their aspect ratio (R). Results show that small values of R do not affect the segregation. In this case, the amount of segregation depends upon four parameters. Segregation is highest when Δρ and Ac are large, and lowest for a large pressure gradient (Pd) and/or large values of dyke width (Wd). These four parameters can be combined into a single one, the S number, which can be used to quantify the amount of segregation occurring during magma ascent. Based on systematic numerical modelling and dimensional analysis, we provide a first order scaling law which allows quantification of the segregation for an arbitrary S number and φ, encompassing a wide range of typical parameters encountered in terrestrial magmatic systems. Although developed in a simplified system, this study has strong implications regarding our understanding of crystal segregation processes during magma transport. Our first order scaling law allows immediate determination of the amount of crystal-melt segregation occurring in any given magmatic dyke system.

  18. A new index quantifying the precipitation extremes

    NASA Astrophysics Data System (ADS)

    Busuioc, Aristita; Baciu, Madalina; Stoica, Cerasela

    2015-04-01

    Events of extreme precipitation have a great impact on society. They are associated with flooding, erosion and landslides. Various indices have been proposed to quantify these extreme events, and they are mainly related to daily precipitation amounts, which are usually available for long periods in many places over the world. The climate signal related to changes in the characteristics of precipitation extremes is different over various regions, and it is dependent on the season and the index used to quantify the precipitation extremes. Climate model simulations and empirical evidence suggest that warmer climates, due to increased water vapour, lead to more intense precipitation events, even when the total annual precipitation is slightly reduced. It was suggested that there is a shift in the nature of precipitation events towards more intense and less frequent rains, and increases in heavy rains are expected to occur in most places, even when the mean precipitation is not increasing. This conclusion was also proved for the Romanian territory in a recent study, showing a significant increasing trend of the rain shower frequency in the warm season over the entire country, despite no significant changes in the seasonal amount and the daily extremes. The shower events counted in that paper refer to all convective rains, including torrential ones that deliver a high rainfall amount in a very short time. The problem is to find an appropriate index to quantify such events in terms of their highest intensity in order to extract the maximum climate signal. In the present paper, a new index is proposed to quantify the maximum precipitation intensity in an extreme precipitation event, which can be directly related to the torrential rain intensity. This index is tested at nine Romanian stations (representing various physical-geographical conditions) and is based on the continuous rainfall records derived from the graphical registrations (pluviograms) available at the National Meteorological Administration in Romania. These types of records contain the rainfall intensity (mm/minute) over various intervals for which it remains constant. The maximum intensity for each continuous rain over the May-August interval has been calculated for each year. The corresponding time series over the 1951-2008 period have been analysed in terms of their long-term trends and shifts in the mean; the results have been compared to those resulting from other rainfall indices based on daily and hourly data, computed over the same interval, such as: total rainfall amount, maximum daily amount, contribution of total hourly amounts exceeding 10 mm/day, contribution of daily amounts exceeding the 90th percentile, and the 90th, 99th and 99.9th percentiles of 1-hour data. The results show that the proposed index exhibits a coherent and stronger climate signal (significant increase) for all analysed stations compared to the other indices associated with precipitation extremes, which show either no significant change or a weaker signal. This finding shows that the proposed index is most appropriate to quantify the climate change signal of the precipitation extremes. We consider that this index is more naturally connected to the maximum intensity of a real rainfall event.
The results presented in this study were funded by the Executive Agency for Higher Education, Research, Development and Innovation Funding (UEFISCDI) through the research project CLIMHYDEX, "Changes in climate extremes and associated impact in hydrological events in Romania", code PNII-ID-2011-2-0073 (http://climhydex.meteoromania.ro)
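    The sketch below illustrates the bookkeeping implied by the proposed index: take, for each year, the maximum intensity reached during any continuous rain in the May-August window, then test the annual series for a trend. The synthetic records, column names and the simple least-squares/Kendall trend diagnostics are assumptions for the example, not the Romanian pluviogram archive or the exact statistics used in the study.

```python
import numpy as np
import pandas as pd
from scipy.stats import kendalltau, linregress

rng = np.random.default_rng(1)

# Hypothetical per-event maximum intensities (mm/min) digitized from pluviograms, May-August only.
records = pd.DataFrame({
    "year": rng.integers(1951, 2009, 5000),
    "intensity_mm_per_min": rng.gamma(shape=0.8, scale=0.4, size=5000),
})

# Proposed index: the annual maximum of the per-event maximum rainfall intensity.
index = records.groupby("year")["intensity_mm_per_min"].max()

# Trend diagnostics: least-squares slope plus a rank-based (Kendall) association test.
fit = linregress(index.index, index.values)
tau, p_value = kendalltau(index.index, index.values)
print(f"trend {fit.slope:.4f} mm/min per year, Kendall tau = {tau:.2f} (p = {p_value:.3f})")
```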

  19. Quantifying air pollution removal by green roofs in Chicago

    E-print Network

    Yu, Qian

    The level of air pollution removal by green roofs in Chicago was quantified using a dry deposition model. The result showed that a total of 1675 kg of air pollutants was removed by 19.8 ha of green roofs in one year.

  20. Results.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)

  1. Terahertz spectroscopy for quantifying refined oil mixtures.

    PubMed

    Li, Yi-nan; Li, Jian; Zeng, Zhou-mo; Li, Jie; Tian, Zhen; Wang, Wei-kui

    2012-08-20

    In this paper, the absorption coefficient spectra of samples prepared as mixtures of gasoline and diesel in different proportions are obtained by terahertz time-domain spectroscopy. To quantify the components of refined oil mixtures, a method is proposed to evaluate the best frequency band for regression analysis. With the data in this frequency band, bivariate linear regression fitting is used to determine the volume fractions of gasoline and diesel in the mixture based on the Beer-Lambert law. The minimum regression-fit R-squared is 0.99967, and the mean error of the fitted volume fraction of 97# gasoline is 4.3%. Results show that refined oil mixtures can be quantitatively analyzed through absorption coefficient spectra at terahertz frequencies, which has promising application prospects in the storage and transportation field for refined oil. PMID:22907017
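    A worked sketch of the Beer-Lambert mixture analysis the abstract describes: within the chosen frequency band, the mixture absorption coefficient is modeled as a volume-fraction-weighted sum of the pure gasoline and diesel spectra, and the fraction follows from least squares. The spectra below are synthetic placeholders, not measured THz data.

```python
import numpy as np

# Placeholder absorption-coefficient spectra (cm^-1) on a common THz frequency grid.
freqs = np.linspace(0.2, 1.2, 200)                      # THz
alpha_gasoline = 5.0 + 10.0 * freqs
alpha_diesel   = 8.0 + 6.0  * freqs
true_frac = 0.35
alpha_mix = true_frac * alpha_gasoline + (1.0 - true_frac) * alpha_diesel

# Beer-Lambert additivity: alpha_mix = x_g*alpha_g + x_d*alpha_d with x_g + x_d = 1.
# Substituting x_d = 1 - x_g leaves one unknown, solved in closed form by least squares.
diff = alpha_gasoline - alpha_diesel
x_gasoline = np.dot(alpha_mix - alpha_diesel, diff) / np.dot(diff, diff)
print(f"estimated gasoline volume fraction: {x_gasoline:.3f}")
```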

  2. Quantifying actin wave modulation on periodic topography

    NASA Astrophysics Data System (ADS)

    Guven, Can; Driscoll, Meghan; Sun, Xiaoyu; Parker, Joshua; Fourkas, John; Carlsson, Anders; Losert, Wolfgang

    2014-03-01

    Actin is the essential builder of the cell cytoskeleton, whose dynamics are responsible for generating the necessary forces for the formation of protrusions. By exposing amoeboid cells to periodic topographical cues, we show that actin can be directionally guided via inducing preferential polymerization waves. To quantify the dynamics of these actin waves and their interaction with the substrate, we modify a technique from computer vision called "optical flow." We obtain vectors that represent the apparent actin flow and cluster these vectors to obtain patches of newly polymerized actin, which represent actin waves. Using this technique, we compare experimental results, including speed distribution of waves and distance from the wave centroid to the closest ridge, with actin polymerization simulations. We hypothesize the modulation of the activity of nucleation promotion factors on ridges (elevated regions of the surface) as a potential mechanism for the wave-substrate coupling. Funded by NIH grant R01GM085574.
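    The sketch below shows one plain off-the-shelf way to get flow vectors and wave "patches" from two frames, using OpenCV's Farneback dense optical flow and connected components; it stands in for, and is much simpler than, the modified optical-flow and clustering pipeline the authors describe, and the random frames are placeholders for real actin-reporter images.

```python
import numpy as np
import cv2  # opencv-python

rng = np.random.default_rng(2)
frame0 = rng.integers(0, 255, (128, 128), dtype=np.uint8)   # placeholder fluorescence frames
frame1 = rng.integers(0, 255, (128, 128), dtype=np.uint8)

# Dense Farneback optical flow: one apparent-motion vector per pixel between the two frames.
flow = cv2.calcOpticalFlowFarneback(frame0, frame1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
speed = np.linalg.norm(flow, axis=2)

# Threshold the flow magnitude and group connected pixels into candidate wave patches.
mask = (speed > np.percentile(speed, 95)).astype(np.uint8)
n_labels, labels = cv2.connectedComponents(mask)
print(f"{n_labels - 1} candidate patches; median pixel speed {np.median(speed):.2f} px/frame")
```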

  3. Quantifying errors in trace species transport modeling

    PubMed Central

    Prather, Michael J.; Zhu, Xin; Strahan, Susan E.; Steenrod, Stephen D.; Rodriguez, Jose M.

    2008-01-01

    One expectation when computationally solving an Earth system model is that a correct answer exists, that with adequate physical approximations and numerical methods our solutions will converge to that single answer. With such hubris, we performed a controlled numerical test of the atmospheric transport of CO2 using 2 models known for accurate transport of trace species. Resulting differences were unexpectedly large, indicating that in some cases, scientific conclusions may err because of lack of knowledge of the numerical errors in tracer transport models. By doubling the resolution, thereby reducing numerical error, both models show some convergence to the same answer. Now, under realistic conditions, we identify a practical approach for finding the correct answer and thus quantifying the advection error. PMID:19066224

  4. On quantifying insect movements

    SciTech Connect

    Wiens, J.A.; Crist, T.O.; Milne, B.T.

    1993-08-01

    We elaborate on methods described by Turchin, Odendaal and Rausher for quantifying insect movement pathways. We note the need to scale measurement resolution to the study insects and the questions being asked, and we discuss the use of surveying instrumentation for recording sequential positions of individuals on pathways. We itemize several measures that may be used to characterize movement pathways and illustrate these by comparisons among several Eleodes beetles occurring in shortgrass steppe. The fractal dimension of pathways may provide insights not available from absolute measures of pathway configuration. Finally, we describe a renormalization procedure that may be used to remove sequential interdependence among locations of moving individuals while preserving the basic attributes of the pathway.
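    A small sketch of the kinds of pathway measures mentioned above: step lengths, turning angles, and a fractal dimension from the divider (Richardson) method. The simulated correlated random walk, the divider sizes and the simple log-log fit are illustrative assumptions rather than the beetle data or exact procedures of the paper.

```python
import numpy as np

def path_metrics(xy):
    """Step lengths and wrapped turning angles from a sequence of (x, y) positions."""
    steps = np.diff(xy, axis=0)
    step_len = np.linalg.norm(steps, axis=1)
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    turns = np.angle(np.exp(1j * np.diff(headings)))
    return step_len, turns

def divider_dimension(xy, dividers):
    """Divider (Richardson) estimate: measure path length L(d) at several divider sizes d;
    for a fractal path L(d) ~ d**(1 - D), so D = 1 - slope of log L versus log d."""
    lengths = []
    for d in dividers:
        total, i = 0.0, 0
        while i < len(xy) - 1:
            j = i + 1
            while j < len(xy) and np.linalg.norm(xy[j] - xy[i]) < d:
                j += 1               # walk forward until the chord first reaches length d
            if j == len(xy):
                break
            total += np.linalg.norm(xy[j] - xy[i])
            i = j
        lengths.append(total)
    slope = np.polyfit(np.log(dividers), np.log(lengths), 1)[0]
    return 1.0 - slope

# Hypothetical pathway: a correlated random walk standing in for surveyed beetle positions.
rng = np.random.default_rng(3)
angles = np.cumsum(rng.normal(0.0, 0.4, 500))
xy = np.cumsum(np.column_stack([np.cos(angles), np.sin(angles)]), axis=0)

step_len, turns = path_metrics(xy)
D = divider_dimension(xy, dividers=[1, 2, 4, 8, 16])
print(f"mean step {step_len.mean():.2f}, mean |turn| {np.abs(turns).mean():.2f} rad, D ~ {D:.2f}")
```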

  5. Quantifying social group evolution.

    PubMed

    Palla, Gergely; Barabási, Albert-László; Vicsek, Tamás

    2007-04-01

    The rich set of interactions between individuals in society results in complex community structure, capturing highly connected circles of friends, families or professional cliques in a social network. Thanks to frequent changes in the activity and communication patterns of individuals, the associated social and communication network is subject to constant evolution. Our knowledge of the mechanisms governing the underlying community dynamics is limited, but is essential for a deeper understanding of the development and self-optimization of society as a whole. We have developed an algorithm based on clique percolation that allows us to investigate the time dependence of overlapping communities on a large scale, and thus uncover basic relationships characterizing community evolution. Our focus is on networks capturing the collaboration between scientists and the calls between mobile phone users. We find that large groups persist for longer if they are capable of dynamically altering their membership, suggesting that an ability to change the group composition results in better adaptability. The behaviour of small groups displays the opposite tendency: the condition for stability is that their composition remains unchanged. We also show that knowledge of the time commitment of members to a given community can be used for estimating the community's lifetime. These findings offer insight into the fundamental differences between the dynamics of small groups and large institutions. PMID:17410175
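    For readers who want to see the building block in code, the snippet below runs k-clique percolation on a toy co-authorship snapshot with NetworkX; the full method in the paper additionally tracks how these communities overlap and evolve between consecutive time steps, which is not shown here.

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

# Toy collaboration snapshot: nodes are scientists, edges are co-authorships in one time window.
G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("a", "c"), ("b", "c"),   # triangle 1
    ("b", "d"), ("c", "d"),               # triangle 2, sharing the edge b-c with triangle 1
    ("f", "g"), ("g", "h"), ("f", "h"),   # a separate triangle
])

# k-clique percolation: a community is a union of k-cliques that share k-1 nodes.
communities = [set(c) for c in k_clique_communities(G, 3)]
print(communities)    # e.g. [{'a', 'b', 'c', 'd'}, {'f', 'g', 'h'}]
```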

  6. Quantifying nonisothermal subsurface soil water evaporation

    NASA Astrophysics Data System (ADS)

    Deol, Pukhraj; Heitman, Josh; Amoozegar, Aziz; Ren, Tusheng; Horton, Robert

    2012-11-01

    Accurate quantification of energy and mass transfer during soil water evaporation is critical for improving understanding of the hydrologic cycle and for many environmental, agricultural, and engineering applications. Drying of soil under radiation boundary conditions results in formation of a dry surface layer (DSL), which is accompanied by a shift in the position of the latent heat sink from the surface to the subsurface. Detailed investigation of evaporative dynamics within this active near-surface zone has mostly been limited to modeling, with few measurements available to test models. Soil column studies were conducted to quantify nonisothermal subsurface evaporation profiles using a sensible heat balance (SHB) approach. Eleven-needle heat pulse probes were used to measure soil temperature and thermal property distributions at the millimeter scale in the near-surface soil. Depth-integrated SHB evaporation rates were compared with mass balance evaporation estimates under controlled laboratory conditions. The results show that the SHB method effectively measured total subsurface evaporation rates with only 0.01-0.03 mm h-1 difference from mass balance estimates. The SHB approach also quantified millimeter-scale nonisothermal subsurface evaporation profiles over a drying event, which has not been previously possible. Thickness of the DSL was also examined using measured soil thermal conductivity distributions near the drying surface. Estimates of the DSL thickness were consistent with observed evaporation profile distributions from SHB. Estimated thickness of the DSL was further used to compute diffusive vapor flux. The diffusive vapor flux also closely matched both mass balance evaporation rates and subsurface evaporation rates estimated from SHB.
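    A minimal sketch of the sensible-heat-balance bookkeeping for a single near-surface layer: the latent heat (and hence evaporation) is taken as the residual after subtracting the conducted sensible heat leaving the layer and the change in heat storage from the sensible heat entering it. All probe readings below are hypothetical illustrative values, not measurements from the study.

```python
# One-layer sensible heat balance (SHB); heat fluxes in W m^-2 throughout.
lam_top, lam_bot = 0.8, 1.0            # thermal conductivities above/below the layer, W m^-1 K^-1
dT_dz_top, dT_dz_bot = -120.0, -40.0   # temperature gradients, K m^-1
C_vol = 2.0e6                          # volumetric heat capacity, J m^-3 K^-1
dz = 0.003                             # layer thickness, m (millimetre scale)
dT_dt = 2.0 / 3600.0                   # layer warming rate, K s^-1

H_top = -lam_top * dT_dz_top           # conducted sensible heat entering from above
H_bot = -lam_bot * dT_dz_bot           # conducted sensible heat leaving below
storage = C_vol * dz * dT_dt           # change in heat stored within the layer

LE = H_top - H_bot - storage           # residual assigned to latent heat
L_v = 2.45e6                           # latent heat of vaporization, J kg^-1
evap_mm_per_h = LE / L_v * 3600.0      # kg m^-2 s^-1 equals mm s^-1 of water
print(f"layer evaporation ~ {evap_mm_per_h:.3f} mm/h")
```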

  7. Working Memory Mechanism in Proportional Quantifier Verification

    ERIC Educational Resources Information Center

    Zajenkowski, Marcin; Szymanik, Jakub; Garraffa, Maria

    2014-01-01

    The paper explores the cognitive mechanisms involved in the verification of sentences with proportional quantifiers (e.g. "More than half of the dots are blue"). The first study shows that the verification of proportional sentences is more demanding than the verification of sentences such as: "There are seven blue and eight yellow…

  8. Public medical shows.

    PubMed

    Walusinski, Olivier

    2014-01-01

    In the second half of the 19th century, Jean-Martin Charcot (1825-1893) became famous for the quality of his teaching and his innovative neurological discoveries, bringing many French and foreign students to Paris. A hunger for recognition, together with progressive and anticlerical ideals, led Charcot to invite writers, journalists, and politicians to his lessons, during which he presented the results of his work on hysteria. These events became public performances, for which physicians and patients were transformed into actors. Major newspapers ran accounts of these consultations, more like theatrical shows in some respects. The resultant enthusiasm prompted other physicians in Paris and throughout France to try and imitate them. We will compare the form and substance of Charcot's lessons with those given by Jules-Bernard Luys (1828-1897), Victor Dumontpallier (1826-1899), Ambroise-Auguste Liébault (1823-1904), Hippolyte Bernheim (1840-1919), Joseph Grasset (1849-1918), and Albert Pitres (1848-1928). We will also note their impact on contemporary cinema and theatre. PMID:25273491

  9. Investigations of information quantifiers for the Tavis-Cummings model

    NASA Astrophysics Data System (ADS)

    Obada, A.-S. F.; Abdel-Khalek, S.; Berrada, K.; Shaheen, M. E.

    2013-12-01

    In this article, a system of two two-level atoms interacting with a single-mode quantized electromagnetic field in a lossless resonant cavity via a multi-photon transition is considered. The quantum Fisher information, negativity, classical Fisher information, and reduced von Neumann entropy for the two atoms are investigated. We found that the number of photon transitions plays an important role in the dynamics of different information quantifiers in the cases of two symmetric and two asymmetric atoms. Our results show that there is a close relationship between the different quantifiers. Also, the quantum and classical Fisher information can be useful for studying the properties of quantum states which are important in quantum optics and information.

  10. The Great Cometary Show

    NASA Astrophysics Data System (ADS)

    2007-01-01

    The ESO Very Large Telescope Interferometer, which allows astronomers to scrutinise objects with a precision equivalent to that of a 130-m telescope, is proving itself an unequalled success every day. One of the latest instruments installed, AMBER, has led to a flurry of scientific results, an anthology of which is being published this week as special features in the research journal Astronomy & Astrophysics. ESO PR Photo 06a/07: The AMBER Instrument. "With its unique capabilities, the VLT Interferometer (VLTI) has created itself a niche in which it provides answers to many astronomical questions, from the shape of stars, to discs around stars, to the surroundings of the supermassive black holes in active galaxies," says Jorge Melnick (ESO), the VLT Project Scientist. The VLTI has led to 55 scientific papers already and is in fact producing more than half of the interferometric results worldwide. "With the capability of AMBER to combine up to three of the 8.2-m VLT Unit Telescopes, we can really achieve what nobody else can do," added Fabien Malbet, from the LAOG (France) and the AMBER Project Scientist. Eleven articles will appear this week in Astronomy & Astrophysics' special AMBER section. Three of them describe the unique instrument, while the other eight reveal completely new results about the early and late stages in the life of stars. ESO PR Photo 06b/07: The Inner Winds of Eta Carinae. The first results presented in this issue cover various fields of stellar and circumstellar physics. Two papers deal with very young solar-like stars, offering new information about the geometry of the surrounding discs and associated outflowing winds. Other articles are devoted to the study of hot active stars of particular interest: Alpha Arae, Kappa Canis Majoris, and CPD -57°2874. They provide new, precise information about their rotating gas envelopes. An important new result concerns the enigmatic object Eta Carinae. Using AMBER with its high spatial and spectral resolution, it was possible to zoom into the very heart of this very massive star. In this innermost region, the observations are dominated by the extremely dense stellar wind that totally obscures the underlying central star. The AMBER observations show that this dense stellar wind is not spherically symmetric, but exhibits a clearly elongated structure. Overall, the AMBER observations confirm that the extremely high mass loss of Eta Carinae's massive central star is non-spherical and much stronger along the poles than in the equatorial plane. This is in agreement with theoretical models that predict such an enhanced polar mass-loss in the case of rapidly rotating stars. ESO PR Photo 06c/07: RS Ophiuchi in Outburst. Several papers from this special feature focus on the later stages in a star's life. One looks at the binary system Gamma 2 Velorum, which contains the closest example of a star known as a Wolf-Rayet star. A single AMBER observation allowed the astronomers to separate the spectra of the two components, offering new insights into the modeling of Wolf-Rayet stars, but also made it possible to measure the separation between the two stars. This led to a new determination of the distance of the system, showing that previous estimates were incorrect. The observations also revealed information on the region where the winds from the two stars collide.
The famous binary system RS Ophiuchi, an example of a recurrent nova, was observed just 5 days after it was discovered to be in outburst on 12 February 2006, an event that had been expected for 21 years. AMBER was able to detect the extension of the expanding nova emission. These observations show a complex geometry and kinematics, far from the simple interpretation of a spherical fireball in extension. AMBER has detected a high velocity jet probably perpendicular to the orbital plane of the binary system, and allowed a precise and careful study of the wind and the shockwave coming from the nova. The stream of results from the VLTI and AMBER

  11. Quantifying renewable groundwater stress with GRACE

    NASA Astrophysics Data System (ADS)

    Richey, Alexandra S.; Thomas, Brian F.; Lo, Min-Hui; Reager, John T.; Famiglietti, James S.; Voss, Katalyn; Swenson, Sean; Rodell, Matthew

    2015-07-01

    Groundwater is an increasingly important water supply source globally. Understanding the amount of groundwater used versus the volume available is crucial to evaluate future water availability. We present a groundwater stress assessment to quantify the relationship between groundwater use and availability in the world's 37 largest aquifer systems. We quantify stress according to a ratio of groundwater use to availability, which we call the Renewable Groundwater Stress ratio. The impact of quantifying groundwater use based on nationally reported groundwater withdrawal statistics is compared to a novel approach to quantify use based on remote sensing observations from the Gravity Recovery and Climate Experiment (GRACE) satellite mission. Four characteristic stress regimes are defined: Overstressed, Variable Stress, Human-dominated Stress, and Unstressed. The regimes are a function of the sign of use (positive or negative) and the sign of groundwater availability, defined as mean annual recharge. The ability to mitigate and adapt to stressed conditions, where use exceeds sustainable water availability, is a function of economic capacity and land use patterns. Therefore, we qualitatively explore the relationship between stress and anthropogenic biomes. We find that estimates of groundwater stress based on withdrawal statistics are unable to capture the range of characteristic stress regimes, especially in regions dominated by sparsely populated biome types with limited cropland. GRACE-based estimates of use and stress can holistically quantify the impact of groundwater use on stress, resulting in both greater magnitudes of stress and more variability of stress between regions.
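    The arithmetic behind the Renewable Groundwater Stress ratio defined in the abstract is simple enough to show directly: use divided by availability (mean annual recharge), with use exceeding availability marking the stressed condition. The aquifer names and volumes below are illustrative placeholders, not GRACE-derived estimates.

```python
# Renewable Groundwater Stress (RGS) ratio = groundwater use / availability (mean annual recharge).
aquifers = {
    "Aquifer A": {"use_km3_yr": 12.0, "recharge_km3_yr": 3.0},
    "Aquifer B": {"use_km3_yr": 1.5,  "recharge_km3_yr": 8.0},
}

for name, a in aquifers.items():
    rgs = a["use_km3_yr"] / a["recharge_km3_yr"]
    # Stressed condition as described in the abstract: use exceeds sustainable availability.
    status = "use exceeds sustainable availability" if rgs > 1 else "use within availability"
    print(f"{name}: RGS = {rgs:.2f} ({status})")
```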

  12. The Art Show

    ERIC Educational Resources Information Center

    Scolarici, Alicia

    2004-01-01

    This article describes what once was thought to be impossible--a formal art show extravaganza at an elementary school with 1,000 students, a Department of Defense Dependent School (DODDS) located overseas, on RAF Lakenheath, England. The dream of this event involved the transformation of the school cafeteria into an elegant art show…

  13. Quantifying foot deformation using finite helical angle.

    PubMed

    Pothrat, Claude; Goislard de Monsabert, Benjamin; Vigouroux, Laurent; Viehweger, Elke; Berton, Eric; Rao, Guillaume

    2015-10-15

    Intrinsic foot motion originates from the combination of numerous joint motions, giving this segment a high adaptive ability. Existing foot kinematic models are mostly focused on analyzing small-scale bone-to-bone foot motions, which require both complex experimental methodology and complex interpretative work to assess global foot functionality. This study proposes a method to assess the total foot deformation by calculating a helical angle from the relative motions of the rearfoot and the forefoot. This method required a limited number of retro-reflective markers placed on the foot and was tested for five different movements (walking, forefoot impact running, heel impact running, 90° cutting, and 180° U-turn) and 12 participants. Over-time intraclass correlation coefficients were calculated to quantify the helical angle pattern repeatability for each movement. Our results indicated that the method was suitable to identify the different motions, as different amplitudes of helical angle were observed according to the flexibility required in each movement. Moreover, the results showed that the repeatability could be used to identify the mastering of each motion, as this repeatability was high for well-mastered movements. Together with existing methods, this new protocol could be applied to fully assess foot function in sport or clinical contexts. PMID:26319503
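    The core quantity is easy to compute once each segment's orientation is expressed as a rotation matrix: the finite helical angle is the rotation angle of the forefoot relative to the rearfoot, recovered from the trace of the relative rotation matrix. The helper below uses placeholder single-axis orientations; real use would build the matrices from the marker clusters at each frame.

```python
import numpy as np

def rotation_z(deg):
    """Placeholder segment orientation: a rotation about the z axis by `deg` degrees."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def finite_helical_angle(R_rearfoot, R_forefoot):
    """Rotation angle about the finite helical (screw) axis of the forefoot relative to the
    rearfoot: phi = arccos((trace(R_rel) - 1) / 2)."""
    R_rel = R_rearfoot.T @ R_forefoot
    cos_phi = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_phi))

R_rear = rotation_z(5.0)    # hypothetical marker-derived orientations in the lab frame
R_fore = rotation_z(23.0)
print(f"foot deformation (helical angle): {finite_helical_angle(R_rear, R_fore):.1f} deg")
```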

  14. Quantifying the quiet epidemic

    PubMed Central

    2014-01-01

    During the late 20th century numerical rating scales became central to the diagnosis of dementia and helped transform attitudes about its causes and prevalence. Concentrating largely on the development and use of the Blessed Dementia Scale, I argue that rating scales served professional ends during the 1960s and 1970s. They helped old age psychiatrists establish jurisdiction over conditions such as dementia and present their field as a vital component of the welfare state, where they argued that ‘reliable modes of diagnosis’ were vital to the allocation of resources. I show how these arguments appealed to politicians, funding bodies and patient groups, who agreed that dementia was a distinct disease and claimed research on its causes and prevention should be designated ‘top priority’. But I also show that worries about the replacement of clinical acumen with technical and depersonalized methods, which could conceivably be applied by anyone, led psychiatrists to stress that rating scales had their limits and could be used only by trained experts. PMID:25866448

  15. Quantifying the seismicity on Taiwan

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Turcotte, Donald L.; Rundle, John B.

    2013-07-01

    We quantify the seismicity on the island of Taiwan using the frequency-magnitude statistics of earthquakes since 1900. A break in Gutenberg-Richter scaling for large earthquakes in global seismicity has been observed; this break is also observed in our Taiwan study. The seismic data from the Central Weather Bureau Seismic Network are in good agreement with the Gutenberg-Richter relation taking b ≈ 1 when M < 7. For large earthquakes, M ≥ 7, the seismic data fit Gutenberg-Richter scaling with b ≈ 1.5. If the Gutenberg-Richter scaling for M < 7 earthquakes is extrapolated to larger earthquakes, we would expect an M > 8 earthquake in the study region about every 25 yr. However, our analysis shows a lower frequency of occurrence of large earthquakes, so that the expected recurrence interval of M > 8 earthquakes is about 200 yr. The level of seismicity for smaller earthquakes on Taiwan is about 12 times greater than in Southern California, and the possibility of an M ≥ 9 earthquake north or south of Taiwan cannot be ruled out. In light of the Fukushima, Japan nuclear disaster, we also discuss the implications of our study for the three operating nuclear power plants on the coast of Taiwan.
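    The recurrence figures quoted above follow directly from the Gutenberg-Richter law, log10 N(>=M) = a - bM with N in events per year. The snippet below illustrates the calculation; the productivity constant a is a hypothetical value chosen only so that the b = 1 extrapolation reproduces the roughly 25-yr M > 8 recurrence mentioned in the abstract, and is not a fit from the paper.

```python
def recurrence_years(magnitude, a, b):
    """Gutenberg-Richter frequency-magnitude law: log10 N(>=M) = a - b*M, N in events/yr;
    the mean recurrence interval is 1/N."""
    return 1.0 / 10.0 ** (a - b * magnitude)

a_fit, b_fit = 6.6, 1.0   # hypothetical values for illustration only
print(f"b = 1 extrapolation: one M >= 8 event every ~{recurrence_years(8.0, a_fit, b_fit):.0f} yr")
```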

  16. Quantifying periodicity in omics data

    PubMed Central

    Amariei, Cornelia; Tomita, Masaru; Murray, Douglas B.

    2014-01-01

    Oscillations play a significant role in biological systems, with many examples in the fast, ultradian, circadian, circalunar, and yearly time domains. However, determining periodicity in such data can be problematic. There are a number of computational methods to identify the periodic components in large datasets, such as signal-to-noise based Fourier decomposition, Fisher's g-test and autocorrelation. However, the available methods assume a sinusoidal model and do not attempt to quantify the waveform shape and the presence of multiple periodicities, which provide vital clues in determining the underlying dynamics. Here, we developed a Fourier based measure that generates a de-noised waveform from multiple significant frequencies. This waveform is then correlated with the raw data from the respiratory oscillation found in yeast, to provide oscillation statistics including waveform metrics and multi-periods. The method is compared and contrasted to commonly used statistics. Moreover, we show the utility of the program in the analysis of noisy datasets and other high-throughput analyses, such as metabolomics and flow cytometry, respectively. PMID:25364747
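    A stripped-down sketch of the Fourier-based measure described above: keep only spectral components that clear a significance cutoff, rebuild a de-noised waveform from them, and correlate it with the raw trace, while the retained frequencies report the multiple periods. The synthetic two-harmonic signal and the crude amplitude threshold stand in for real respiratory-oscillation data and a formal significance test.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical non-sinusoidal oscillation (two harmonics) sampled once per minute, plus noise.
t = np.arange(0, 2000)
raw = np.sin(2 * np.pi * t / 40) + 0.4 * np.sin(2 * np.pi * t / 20 + 1.0) + rng.normal(0, 0.5, t.size)

# Keep only Fourier components whose amplitude clears a simple threshold, then rebuild the waveform.
spec = np.fft.rfft(raw)
amp = np.abs(spec) / raw.size
threshold = amp.mean() + 5 * amp.std()           # crude stand-in for a formal significance test
clean_spec = np.where(amp > threshold, spec, 0)
clean_spec[0] = spec[0]                           # retain the mean level
denoised = np.fft.irfft(clean_spec, n=raw.size)

score = np.corrcoef(raw, denoised)[0, 1]          # one overall periodicity score
peak_idx = np.flatnonzero(amp > threshold)
peak_idx = peak_idx[peak_idx > 0]                 # ignore the DC component
periods = raw.size / peak_idx
print(f"periodicity score {score:.2f}; retained periods ~ {np.round(periods, 1)} min")
```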

  17. A Holographic Road Show.

    ERIC Educational Resources Information Center

    Kirkpatrick, Larry D.; Rugheimer, Mac

    1979-01-01

    Describes the viewing sessions and the holograms of a holographic road show. The traveling exhibits, believed to stimulate interest in physics, include a wide variety of holograms and demonstrate several physical principles. (GA)

  18. Quantifying Dictyostelium discoideum Aggregation

    NASA Astrophysics Data System (ADS)

    McCann, Colin; Kriebel, Paul; Parent, Carole; Losert, Wolfgang

    2008-03-01

    Upon nutrient deprivation, the social amoebae Dictyostelium discoideum enter a developmental program causing them to aggregate into multicellular organisms. During this process cells sense and secrete chemical signals, often moving in a head-to-tail fashion called a `stream' as they assemble into larger entities. We measure Dictyostelium speed, shape, and directionality, both inside and outside of streams, and develop methods to distinguish group dynamics from behavior of individual cells. We observe an overall increase in speed during aggregation and a decrease in speed fluctuations once a cell joins a stream. Initial results indicate that when cells are in close proximity the trailing cells migrate specifically toward the backs of leading cells.

  19. On the freeze quantifier in Constraint LTL: decidability and complexity

    E-print Network

    Doyen, Laurent

    … of operational models with constraints. The freeze quantifier can be part of the language, as in some real-time logics (with λ-abstraction etc.). We show that Constraint LTL over the simple domain ⟨N, =⟩ augmented with the freeze…

  20. Quantifying coherence of Gaussian states

    E-print Network

    Jianwei Xu

    2015-10-10

    Coherence arises from the superposition principle and plays a key role in quantum mechanics. Recently, Baumgratz et al. [T. Baumgratz, M. Cramer, and M. B. Plenio, Phys. Rev. Lett. 113, 140401 (2014)] established a rigorous framework for quantifying the coherence of finite dimensional quantum states. In this work we provide a framework for quantifying the coherence of Gaussian states and explicitly give a coherence measure based on the relative entropy.
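    For orientation, the finite-dimensional measure from the Baumgratz et al. framework that this work builds on can be written down in a few lines: the relative entropy of coherence is the entropy of the dephased (diagonal) state minus the entropy of the state itself. The snippet below computes it for a single qubit; it is not the Gaussian-state measure constructed in the paper.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), from the eigenvalues (zero eigenvalues contribute nothing)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def relative_entropy_of_coherence(rho):
    """C(rho) = S(rho_diag) - S(rho), with rho_diag the state dephased in the reference basis."""
    rho_diag = np.diag(np.diag(rho))
    return von_neumann_entropy(rho_diag) - von_neumann_entropy(rho)

# The pure qubit state |+> = (|0> + |1>)/sqrt(2) carries the maximal single-qubit coherence of 1.
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
print(f"C(|+><+|) = {relative_entropy_of_coherence(plus):.3f}")
```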

  1. Show Me the Way

    ERIC Educational Resources Information Center

    Dicks, Matthew J.

    2005-01-01

    Because today's students have grown up steeped in video games and the Internet, most of them expect feedback, and usually gratification, very soon after they expend effort on a task. Teachers can get quick feedback to students by showing them videotapes of their learning performances. The author, a 3rd grade teacher, describes how the seemingly…

  2. Stage a Water Show

    ERIC Educational Resources Information Center

    Frasier, Debra

    2008-01-01

    In the author's book titled "The Incredible Water Show," the characters from "Miss Alaineus: A Vocabulary Disaster" used an ocean of information to stage an inventive performance about the water cycle. In this article, the author relates how she turned the story into hands-on science teaching for real-life fifth-grade students. The author also…

  3. Showing What They Know

    ERIC Educational Resources Information Center

    Cech, Scott J.

    2008-01-01

    Having students show their skills in three dimensions, known as performance-based assessment, dates back at least to Socrates. Individual schools such as Barrington High School--located just outside of Providence--have been requiring students to actively demonstrate their knowledge for years. Rhode Island's high school graduating class became…

  4. Obesity in show cats.

    PubMed

    Corbee, R J

    2014-12-01

    Obesity is an important disease with a high prevalence in cats. Because obesity is related to several other diseases, it is important to identify the population at risk. Several risk factors for obesity have been described in the literature. A higher incidence of obesity in certain cat breeds has been suggested. The aim of this study was to determine whether obesity occurs more often in certain breeds. The second aim was to relate the increased prevalence of obesity in certain breeds to the official standards of that breed. To this end, 268 cats of 22 different breeds were investigated by determining their body condition score (BCS) on a nine-point scale by inspection and palpation, at two different cat shows. Overall, 45.5% of the show cats had a BCS > 5, and 4.5% of the show cats had a BCS > 7. There were significant differences between breeds, which could be related to the breed standards. Most overweight and obese cats were in the neutered group. This warrants firm discussions with breeders and cat show judges to arrive at different interpretations of the standards, in order to prevent overweight conditions in certain breeds from being the standard of beauty. Neutering predisposes to obesity and requires early nutritional intervention to prevent obese conditions. PMID:24612018

  5. The Ozone Show.

    ERIC Educational Resources Information Center

    Mathieu, Aaron

    2000-01-01

    Uses a talk show activity for a final assessment tool for students to debate about the ozone hole. Students are assessed on five areas: (1) cooperative learning; (2) the written component; (3) content; (4) self-evaluation; and (5) peer evaluation. (SAH)

  6. Quantifying Spatiotemporal Chaos in Rayleigh-Bénard Convection

    E-print Network

    Alireza Karimi; Mark R. Paul

    2012-03-16

    Using large-scale parallel numerical simulations we explore spatiotemporal chaos in Rayleigh-Bénard convection in a cylindrical domain with experimentally relevant boundary conditions. We use the variation of the spectrum of Lyapunov exponents and the leading order Lyapunov vector with system parameters to quantify states of high-dimensional chaos in fluid convection. We explore the relationship between the time dynamics of the spectrum of Lyapunov exponents and the pattern dynamics. For chaotic dynamics we find that all of the Lyapunov exponents are positively correlated with the leading order Lyapunov exponent and we quantify the details of their response to the dynamics of defects. The leading order Lyapunov vector is used to identify topological features of the fluid patterns that contribute significantly to the chaotic dynamics. Our results show a transition from boundary dominated dynamics to bulk dominated dynamics as the system size is increased. The spectrum of Lyapunov exponents is used to compute the variation of the fractal dimension with system parameters to quantify how the underlying high-dimensional strange attractor accommodates a range of different chaotic dynamics.
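
    The fractal dimension referred to above is commonly obtained from the Lyapunov spectrum via the Kaplan-Yorke (Lyapunov) dimension; the sketch below assumes that definition, and the example exponents are made up for illustration rather than taken from the simulations.

```python
import numpy as np

def kaplan_yorke_dimension(exponents):
    """Kaplan-Yorke dimension D = (j+1) + (sum of the j+1 largest exponents) / |lambda_{j+2}|,
    where j is the last index (0-based) at which the cumulative sum is still non-negative."""
    lam = np.sort(np.asarray(exponents, dtype=float))[::-1]   # descending order
    csum = np.cumsum(lam)
    nonneg = np.where(csum >= 0)[0]
    if len(nonneg) == 0:
        return 0.0
    j = nonneg[-1]
    if j == len(lam) - 1:          # all partial sums non-negative: use full phase-space dimension
        return float(len(lam))
    return float(j + 1 + csum[j] / abs(lam[j + 1]))

# Hypothetical spectrum (not from the paper): two positive, one zero, several negative exponents.
print(kaplan_yorke_dimension([0.9, 0.3, 0.0, -0.5, -1.2, -2.0]))   # ~4.58
```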

  7. Quantifying uncertainty in stable isotope mixing models

    DOE PAGESBeta

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
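
    As a rough illustration of the pure Monte Carlo (PMC) idea described above, the sketch below samples mixing fractions uniformly on the simplex and keeps those that reproduce a sample's isotopic signature within a tolerance. The source signatures, sample values, and tolerance are hypothetical, and this is not the SIRS model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical source signatures (d15N, d18O, per mil) and a measured sample.
sources = np.array([[ 5.0,  2.0],    # e.g. a soil-N-like source
                    [10.0, -2.0],    # e.g. a manure/septic-like source
                    [ 1.0, 60.0]])   # e.g. an atmospheric-deposition-like source
sample = np.array([6.0, 10.0])
tolerance = 0.5                      # acceptance window (per mil) on each tracer

# Draw uniform mixing fractions on the simplex and keep those that reproduce the sample.
fractions = rng.dirichlet(np.ones(len(sources)), size=200_000)
predicted = fractions @ sources
feasible = fractions[np.all(np.abs(predicted - sample) < tolerance, axis=1)]

print("feasible solutions:", len(feasible))
print("mean mixing fractions:", feasible.mean(axis=0).round(3))
print("5th-95th percentiles:", np.percentile(feasible, [5, 95], axis=0).round(3))
```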

  8. Quantifying uncertainty in stable isotope mixing models

    NASA Astrophysics Data System (ADS)

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-01

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: Stable Isotope Analysis in R (SIAR), a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.

  9. Quantifying uncertainty in stable isotope mixing models

    SciTech Connect

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.

  10. Mars Slide Show

    NASA Technical Reports Server (NTRS)

    2006-01-01

    15 September 2006 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a landslide that occurred off of a steep slope in Tithonium Chasma, part of the vast Valles Marineris trough system.

    Location near: 4.8°S, 84.6°W Image width: 3 km (1.9 mi) Illumination from: upper left Season: Southern Autumn

  11. Keeping Show Pigs Healthy 

    E-print Network

    Lawhorn, D. Bruce

    2006-10-13

    [Excerpt fragments] ... are approved antibiotics commonly used in rations. For more information on feed medication for specific diseases, check these Extension publications: Diarrheal Disease in Show Swine, L-5320; and Swine Pneumonia, L-5203. ... taking care during and after ... requiring anesthesia, such as removal of a retained testicle (cryptorchidism), removal of an infected and enlarged urine pocket (preputial diverticulum removal), repair of scrotal or umbilical hernia, and removal of tumors. Before a surgeon ...

  12. A semi-automated system for quantifying the oxidative potential of ambient particles in aqueous extracts using the dithiothreitol (DTT) assay: results from the Southeastern Center for Air Pollution and Epidemiology (SCAPE)

    NASA Astrophysics Data System (ADS)

    Fang, T.; Verma, V.; Guo, H.; King, L. E.; Edgerton, E. S.; Weber, R. J.

    2014-07-01

    A variety of methods are used to measure the capability of particulate matter (PM) to catalytically generate reactive oxygen species (ROS) in vivo, also defined as the aerosol oxidative potential. A widely used measure of aerosol oxidative potential is the dithiothreitol (DTT) assay, which monitors the depletion of DTT (a surrogate for cellular antioxidants) as catalyzed by the redox-active species in PM. However, a major constraint on the routine use of the DTT assay, and on its integration with large-scale health studies, is its labor-intensive and time-consuming protocol. To specifically address this concern, we have developed a semi-automated system for quantifying the oxidative potential of aerosol liquid extracts using the DTT assay. The system, capable of unattended analysis at one sample per hour, has a high analytical precision (coefficient of variation of 12% for standards, 4% for ambient samples) and a reasonably low limit of detection (0.31 nmol min-1). Comparison of the automated approach with the manual method conducted on ambient samples yielded good agreement (slope = 1.08 ± 0.12, r2 = 0.92, N = 9). The system was utilized for the Southeastern Center for Air Pollution and Epidemiology (SCAPE) to generate an extensive data set on DTT activity of ambient particles collected from contrasting environments (urban, road-side, and rural) in the southeastern US. We find that water-soluble PM2.5 DTT activity on a per-air-volume basis was spatially uniform and often well correlated with PM2.5 mass (r = 0.49 to 0.88), suggesting regional sources contributing to the PM oxidative potential in the southeastern US. However, the greater heterogeneity in the intrinsic DTT activity (per-PM-mass basis) across seasons indicates variability in the DTT activity associated with aerosols from sources that vary with season. Although developed for the DTT assay, the instrument can also be used to determine oxidative potential with other acellular assays.
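
    The DTT activity reported above (nmol min-1) is, in essence, the rate at which DTT is consumed; a generic way to obtain it from timed DTT measurements is a least-squares slope, as sketched below with illustrative numbers (this is not the instrument's actual data-processing code).

```python
import numpy as np

# Hypothetical measurements: remaining DTT (nmol) at several reaction times (min).
time_min = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
dtt_nmol = np.array([100.0, 93.8, 87.5, 81.9, 75.6])

# The DTT activity is the (positive) rate of DTT loss, i.e. minus the slope
# of a least-squares line through the points.
slope, intercept = np.polyfit(time_min, dtt_nmol, deg=1)
activity_nmol_per_min = -slope
print(f"DTT activity: {activity_nmol_per_min:.2f} nmol/min")   # ~0.61 for these numbers
```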

  13. Quantifying potential recharge in mantled sinkholes using ERT.

    PubMed

    Schwartz, Benjamin F; Schreiber, Madeline E

    2009-01-01

    Potential recharge through thick soils in mantled sinkholes was quantified using differential electrical resistivity tomography (ERT). Conversion of time series two-dimensional (2D) ERT profiles into 2D volumetric water content profiles using a numerically optimized form of Archie's law allowed us to monitor temporal changes in water content in soil profiles up to 9 m in depth. Combining Penman-Monteith daily potential evapotranspiration (PET) and daily precipitation data with potential recharge calculations for three sinkhole transects indicates that potential recharge occurred only during brief intervals over the study period and ranged from 19% to 31% of cumulative precipitation. Spatial analysis of ERT-derived water content showed that infiltration occurred both on sinkhole flanks and in sinkhole bottoms. Results also demonstrate that mantled sinkholes can act as regions of both rapid and slow recharge. Rapid recharge is likely the result of flow through macropores (such as root casts and thin gravel layers), while slow recharge is the result of unsaturated flow through fine-grained sediments. In addition to developing a new method for quantifying potential recharge at the field scale in unsaturated conditions, we show that mantled sinkholes are an important component of storage in a karst system. PMID:18823398
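
    The resistivity-to-water-content conversion mentioned above relies on a form of Archie's law; a textbook version is sketched below, with placeholder exponents and pore-water resistivity (the study used a numerically optimized form rather than these default values).

```python
import numpy as np

def water_content_from_resistivity(rho_bulk, porosity, rho_water,
                                   a=1.0, m=2.0, n=2.0):
    """Volumetric water content from bulk resistivity via Archie's law:
    rho_bulk = a * rho_water * porosity**(-m) * saturation**(-n)."""
    saturation = (a * rho_water / (rho_bulk * porosity**m)) ** (1.0 / n)
    saturation = np.clip(saturation, 0.0, 1.0)   # keep within physical bounds
    return porosity * saturation                 # theta = porosity * saturation

# Illustrative values only: 300 ohm-m soil, 35% porosity, 20 ohm-m pore water.
print(water_content_from_resistivity(300.0, 0.35, 20.0))   # ~0.26
```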

  14. A semi-automated system for quantifying the oxidative potential of ambient particles in aqueous extracts using the dithiothreitol (DTT) assay: results from the Southeastern Center for Air Pollution and Epidemiology (SCAPE)

    NASA Astrophysics Data System (ADS)

    Fang, T.; Verma, V.; Guo, H.; King, L. E.; Edgerton, E. S.; Weber, R. J.

    2015-01-01

    A variety of methods are used to measure the capability of particulate matter (PM) to catalytically generate reactive oxygen species (ROS) in vivo, also defined as the aerosol oxidative potential. A widely used measure of aerosol oxidative potential is the dithiothreitol (DTT) assay, which monitors the depletion of DTT (a surrogate for cellular antioxidants) as catalyzed by the redox-active species in PM. However, a major constraint in the routine use of the DTT assay for integrating it with large-scale health studies is its labor-intensive and time-consuming protocol. To specifically address this concern, we have developed a semi-automated system for quantifying the oxidative potential of aerosol liquid extracts using the DTT assay. The system, capable of unattended analysis at one sample per hour, has a high analytical precision (coefficient of variation of 15% for positive control, 4% for ambient samples) and reasonably low limit of detection (0.31 nmol min-1). Comparison of the automated approach with the manual method conducted on ambient samples yielded good agreement (slope = 1.08 ± 0.12, r2 = 0.92, N = 9). The system was utilized for the Southeastern Center for Air Pollution & Epidemiology (SCAPE) to generate an extensive data set on DTT activity of ambient particles collected from contrasting environments (urban, roadside, and rural) in the southeastern US. We find that water-soluble PM2.5 DTT activity on a per-air-volume basis was spatially uniform and often well correlated with PM2.5 mass (r = 0.49 to 0.88), suggesting regional sources contributing to the PM oxidative potential in the southeastern US. The correlation may also suggest a mechanistic explanation (oxidative stress) for observed PM2.5 mass-health associations. The heterogeneity in the intrinsic DTT activity (per-PM-mass basis) across seasons indicates variability in the DTT activity associated with aerosols from sources that vary with season. Although developed for the DTT assay, the instrument can also be used to determine oxidative potential with other acellular assays.

  15. Tracking and Quantifying Objects and Non-Cohesive Substances

    ERIC Educational Resources Information Center

    van Marle, Kristy; Wynn, Karen

    2011-01-01

    The present study tested infants' ability to assess and compare quantities of a food substance. Contrary to previous findings, the results suggest that by 10 months of age infants can quantify non-cohesive substances, and that this ability is different in important ways from their ability to quantify discrete objects: (1) In contrast to even much…

  16. Quantifying reliability uncertainty : a proof of concept.

    SciTech Connect

    Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna; Lorio, John F.; Fatherley, Quinn; Anderson-Cook, Christine; Wilson, Alyson G.; Zurn, Rena M.

    2009-10-01

    This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
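
    One common way to realize the Bayesian side of such an analysis (not necessarily the exact approach of the paper) is to place Beta posteriors on component go/no-go reliabilities and propagate samples through the series/parallel structure; the component data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical go/no-go test data: (successes, trials) for each component.
components = {"A": (48, 50), "B": (29, 30), "C": (19, 20)}

def posterior_samples(successes, trials, size, a0=1.0, b0=1.0):
    """Beta(a0 + successes, b0 + failures) posterior draws for a binomial reliability."""
    return rng.beta(a0 + successes, b0 + trials - successes, size)

n = 100_000
rA = posterior_samples(*components["A"], n)
rB = posterior_samples(*components["B"], n)
rC = posterior_samples(*components["C"], n)

# Example system structure: A in series with the parallel pair (B, C).
r_system = rA * (1.0 - (1.0 - rB) * (1.0 - rC))

print("posterior mean system reliability:", r_system.mean().round(3))
print("90% credible interval:", np.quantile(r_system, [0.05, 0.95]).round(3))
```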

  17. Gaussian intrinsic entanglement: An entanglement quantifier based on secret correlations

    NASA Astrophysics Data System (ADS)

    Mišta, Ladislav; Tatham, Richard

    2015-06-01

    Intrinsic entanglement (IE) is a quantity which aims at quantifying bipartite entanglement carried by a quantum state as an optimal amount of the intrinsic information that can be extracted from the state by measurement. We investigate in detail the properties of a Gaussian version of IE, the so-called Gaussian intrinsic entanglement (GIE). We show explicitly how GIE simplifies to the mutual information of a distribution of outcomes of measurements on a conditional state obtained by a measurement on a purifying subsystem of the analyzed state, which is first minimized over all measurements on the purifying subsystem and then maximized over all measurements on the conditional state. By constructing for any separable Gaussian state a purification and a measurement on the purifying subsystem which projects the purification onto a product state, we prove that GIE vanishes on all Gaussian separable states. Via realization of quantum operations by teleportation, we further show that GIE is nonincreasing under Gaussian local trace-preserving operations and classical communication. For pure Gaussian states and a reduction of the continuous-variable GHZ state, we calculate GIE analytically and we show that it is always equal to the Gaussian Rényi-2 entanglement. We also extend the analysis of IE to a non-Gaussian case by deriving an analytical lower bound on IE for a particular form of the non-Gaussian continuous-variable Werner state. Our results indicate that mapping of entanglement onto intrinsic information is capable of transmitting also quantitative properties of entanglement and that this property can be used for introduction of a quantifier of Gaussian entanglement which is a compromise between computable and physically meaningful entanglement quantifiers.

  18. Quantifying Order in Poly(3-hexylthiophene)

    NASA Astrophysics Data System (ADS)

    Snyder, Chad; Nieuwendaal, Ryan; Delongchamp, Dean; Luscombe, Christine; Sista, Prakash; Boyd, Shane

    2014-03-01

    While poly(3-hexylthiophene) (P3HT) is one of the most studied polymers in organic electronics, it remains one of the most challenging in terms of quantitative measures of its order, e.g., crystallinity. To address this challenge, we prepared a series of highly regioregular P3HT fractions ranging from 3.3 kg/mol to 23 kg/mol. Using this series plus a high molar mass (62 kg/mol) commercial material, we compare different metrics for order in P3HT via calorimetry, solid state NMR, and x-ray diffraction. We reconcile the results of our work with those of recent studies on oligomeric (3-hexylthiophenes). One challenge of quantifying low molar mass P3HT samples via DSC is a thermal fractionation effect due to varying chain lengths. We quantify these effects in our molar mass series, and a clear crossover region from extended chain crystals to chain folded crystals is identified through the thermal fractionation process. New values for the enthalpy of fusion of high molar mass P3HT and its equilibrium melting temperature are established through our work. Another result of our research is the validation of high heating rate DSC methods for quantifying crystallinity in P3HT samples with device relevant film thicknesses.
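
    Crystallinity from DSC is conventionally taken as the ratio of the measured enthalpy of fusion to that of a 100% crystalline reference; the sketch below assumes that convention, and the reference enthalpy used is a placeholder rather than the value established in this work.

```python
def dsc_crystallinity(delta_h_sample, delta_h_100pct):
    """Mass-fraction crystallinity X_c = dH_fusion(sample) / dH_fusion(100% crystalline)."""
    return delta_h_sample / delta_h_100pct

# Illustrative numbers only (J/g); the 100%-crystalline reference value is an assumption.
print(f"X_c = {dsc_crystallinity(20.0, 37.0):.2f}")   # ~0.54
```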

  19. Shows 

    E-print Network

    Langley

    2009-01-01

    The present work examines the construction of race on reality television through the use of an exemplar in this genre, MTV's The Real World. By the sheer fact of its popularity and ubiquity, as The Real World is nearly two ...

  20. QUANTIFYING ASSAY VARIATION IN NUTRIENT ANALYSIS OF FEEDSTUFFS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Analytical results from different laboratories have greater variation than those from a single laboratory, and this variation differs by nutrient. Objectives of this presentation are to describe methods for quantifying the analytical reproducibility among and repeatability within laboratories, estim...

  1. A stochastic approach for quantifying immigrant integration: the Spanish test case

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-10-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build.
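
    The diffusive-versus-ballistic distinction drawn above can be read off a scaling exponent: if the typical displacement of a quantifier grows with density roughly as density^alpha, then alpha near 0.5 indicates diffusive dynamics and alpha near 1 indicates ballistic dynamics. The sketch below fits that exponent to synthetic series, not the Spanish data set.

```python
import numpy as np

rng = np.random.default_rng(2)

def scaling_exponent(density, quantifier_spread):
    """Fit spread ~ density**alpha in log-log space; alpha ~ 0.5 suggests diffusive
    behaviour, alpha ~ 1 suggests ballistic behaviour."""
    alpha, _ = np.polyfit(np.log(density), np.log(quantifier_spread), deg=1)
    return alpha

# Synthetic illustration only: one diffusive-like and one ballistic-like series.
density = np.linspace(0.01, 0.20, 25)
social = 0.3 * np.sqrt(density) * np.exp(0.05 * rng.normal(size=density.size))
economic = 1.5 * density * np.exp(0.05 * rng.normal(size=density.size))

print(f"social quantifier exponent:   {scaling_exponent(density, social):.2f}")    # ~0.5
print(f"economic quantifier exponent: {scaling_exponent(density, economic):.2f}")  # ~1.0
```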

  2. Use of the Concept of Equivalent Biologically Effective Dose (BED) to Quantify the Contribution of Hyperthermia to Local Tumor Control in Radiohyperthermia Cervical Cancer Trials, and Comparison With Radiochemotherapy Results

    SciTech Connect

    Plataniotis, George A.; Dale, Roger G.

    2009-04-01

    Purpose: To express the magnitude of contribution of hyperthermia to local tumor control in radiohyperthermia (RT/HT) cervical cancer trials, in terms of the radiation-equivalent biologically effective dose (BED) and to explore the potential of the combined modalities in the treatment of this neoplasm. Materials and Methods: Local control rates of both arms of each study (RT vs. RT+HT) reported from randomized controlled trials (RCT) on concurrent RT/HT for cervical cancer were reviewed. By comparing the two tumor control probabilities (TCPs) from each study, we calculated the HT-related log cell-kill and then expressed it in terms of the number of 2 Gy fraction equivalents, for a range of tumor volumes and radiosensitivities. We have compared the contribution of each modality and made some exploratory calculations on the TCPs that might be expected from a combined trimodality treatment (RT+CT+HT). Results: The HT-equivalent number of 2-Gy fractions ranges from 0.6 to 4.8 depending on radiosensitivity. Opportunities for clinically detectable improvement by the addition of HT are only available in tumors with an alpha value in the approximate range of 0.22-0.28 Gy⁻¹. A combined treatment (RT+CT+HT) is not expected to improve prognosis in radioresistant tumors. Conclusion: The most significant improvements in TCP, which may result from the combination of RT/CT/HT for locally advanced cervical carcinomas, are likely to be limited only to those patients with tumors of relatively low-intermediate radiosensitivity.
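
    The conversion above rests on standard linear-quadratic and Poisson-TCP bookkeeping; the relations below sketch how a BED and an HT-equivalent number of 2-Gy fractions can be obtained from the control rates of the two trial arms (notation assumed here, not quoted from the paper).

```latex
% Linear-quadratic biologically effective dose for n fractions of dose d:
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right)

% Poisson TCP model: TCP = exp(-N_s), so the surviving clonogen number is N_s = -ln(TCP),
% and the hyperthermia-related log cell kill inferred from the two arms is
\Delta_{\mathrm{HT}} = \log_{10}\frac{-\ln \mathrm{TCP}_{\mathrm{RT}}}{-\ln \mathrm{TCP}_{\mathrm{RT+HT}}}

% A single 2-Gy fraction reduces survival by exp[-(2\alpha + 4\beta)], i.e. by
% (2\alpha + 4\beta)/\ln 10 decades, so the HT-equivalent number of 2-Gy fractions is
n_{2\,\mathrm{Gy}} \approx \frac{\Delta_{\mathrm{HT}}\,\ln 10}{2\alpha + 4\beta}
```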

  3. Measuring political polarization: Twitter shows the two sides of Venezuela.

    PubMed

    Morales, A J; Borondo, J; Losada, J C; Benito, R M

    2015-03-01

    We say that a population is perfectly polarized when divided in two groups of the same size and opposite opinions. In this paper, we propose a methodology to study and measure the emergence of polarization from social interactions. We begin by proposing a model to estimate opinions in which a minority of influential individuals propagate their opinions through a social network. The result of the model is an opinion probability density function. Next, we propose an index to quantify the extent to which the resulting distribution is polarized. Finally, we apply the proposed methodology to a Twitter conversation about the late Venezuelan president, Hugo Chávez, finding a good agreement between our results and offline data. Hence, we show that our methodology can detect different degrees of polarization, depending on the structure of the network. PMID:25833436
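
    A simplified index in the spirit of the one described above weights the separation between the centres of the positive and negative opinion groups by how balanced the two groups are; the definition below is an assumption for illustration, not necessarily the exact published index.

```python
import numpy as np

def polarization_index(opinions):
    """Simplified polarization measure for opinions in [-1, 1]:
    (balance of the two poles) x (normalized distance between their centres).
    ~1 for two equal-sized groups at opposite extremes, ~0 for one-sided opinions."""
    opinions = np.asarray(opinions, dtype=float)
    pos, neg = opinions[opinions > 0], opinions[opinions < 0]
    if len(pos) == 0 or len(neg) == 0:
        return 0.0
    balance = 1.0 - abs(len(pos) - len(neg)) / len(opinions)
    distance = (pos.mean() - neg.mean()) / 2.0   # maximum possible separation is 2
    return balance * distance

rng = np.random.default_rng(4)
polarized = np.concatenate([rng.normal(0.8, 0.1, 500), rng.normal(-0.8, 0.1, 500)])
consensus = rng.normal(0.5, 0.2, 1000)
print(polarization_index(polarized))   # high: two balanced groups at opposite extremes
print(polarization_index(consensus))   # near 0: opinions mostly on one side
```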

  4. Measuring Political Polarization: Twitter shows the two sides of Venezuela

    E-print Network

    Morales, A J; Losada, J C; Benito, R M

    2015-01-01

    We say that a population is perfectly polarized when divided in two groups of the same size and opposite opinions. In this paper, we propose a methodology to study and measure the emergence of polarization from social interactions. We begin by proposing a model to estimate opinions in which a minority of influential individuals propagate their opinions through a social network. The result of the model is an opinion probability density function. Next, we propose an index to quantify the extent to which the resulting distribution is polarized. Finally, we apply the proposed methodology to a Twitter conversation about the late Venezuelan president, Hugo Chávez, finding a good agreement between our results and offline data. Hence, we show that our methodology can detect different degrees of polarization, depending on the structure of the network.

  5. Measuring political polarization: Twitter shows the two sides of Venezuela

    NASA Astrophysics Data System (ADS)

    Morales, A. J.; Borondo, J.; Losada, J. C.; Benito, R. M.

    2015-03-01

    We say that a population is perfectly polarized when divided in two groups of the same size and opposite opinions. In this paper, we propose a methodology to study and measure the emergence of polarization from social interactions. We begin by proposing a model to estimate opinions in which a minority of influential individuals propagate their opinions through a social network. The result of the model is an opinion probability density function. Next, we propose an index to quantify the extent to which the resulting distribution is polarized. Finally, we apply the proposed methodology to a Twitter conversation about the late Venezuelan president, Hugo Chávez, finding a good agreement between our results and offline data. Hence, we show that our methodology can detect different degrees of polarization, depending on the structure of the network.

  6. Career and Technical Education: Show Us the Buck, We'll Show You the Bang!

    ERIC Educational Resources Information Center

    Whetstone, Ryan

    2011-01-01

    Adult and CTE programs in California have been cut by about 60 percent over the past three years. A number of school districts have summarily eliminated these programs to preserve funding for other educational endeavors. The author says part of the problem has been the community's inability to communicate quantifiable results. One of the hottest…

  7. Evaluation of two methods for quantifying passeriform lice

    PubMed Central

    Koop, Jennifer A. H.; Clayton, Dale H.

    2013-01-01

    Two methods commonly used to quantify ectoparasites on live birds are visual examination and dust-ruffling. Visual examination provides an estimate of ectoparasite abundance based on an observer’s timed inspection of various body regions on a bird. Dust-ruffling involves application of insecticidal powder to feathers that are then ruffled to dislodge ectoparasites onto a collection surface where they can then be counted. Despite the common use of these methods in the field, the proportion of actual ectoparasites they account for has only been tested with Rock Pigeons (Columba livia), a relatively large-bodied species (238–302 g) with dense plumage. We tested the accuracy of the two methods using European Starlings (Sturnus vulgaris; ~75 g). We first quantified the number of lice (Brueelia nebulosa) on starlings using visual examination, followed immediately by dust-ruffling. Birds were then euthanized and the proportion of lice accounted for by each method was compared to the total number of lice on each bird as determined with a body-washing method. Visual examination and dust-ruffling each accounted for a relatively small proportion of total lice (14% and 16%, respectively), but both were still significant predictors of abundance. The number of lice observed by visual examination accounted for 68% of the variation in total abundance. Similarly, the number of lice recovered by dust-ruffling accounted for 72% of the variation in total abundance. Our results show that both methods can be used to reliably quantify the abundance of lice on European Starlings and other similar-sized passerines. PMID:24039328

  8. National Orange Show Photovoltaic Demonstration

    SciTech Connect

    Dan Jimenez; Sheri Raborn, CPA; Tom Baker

    2008-03-31

    The National Orange Show Photovoltaic Demonstration created a 400 kW photovoltaic self-generation plant at the National Orange Show Events Center (NOS). The NOS owns a 120-acre state fairground where it operates an events center and produces an annual citrus fair known as the Orange Show. The NOS governing board wanted to employ cost-saving programs for annual energy expenses. It is hoped the photovoltaic program will result in overall savings for the NOS, help reduce the State's demand for electrical power, improve quality of life within the affected grid area, and increase the energy efficiency of buildings at the venue. In addition, the potential to reduce operational expenses would have a tremendous effect on the ability of the NOS to serve its community.

  9. PARAMETERS FOR QUANTIFYING BEAM HALO

    SciTech Connect

    C.K. ALLEN; T.P. WANGLER

    2001-06-01

    Two different parameters for the quantitative description of beam halo are introduced, both based on moments of the particle distribution. One parameter is a measure of spatial halo formation and has been defined previously by Wangler and Crandall [3], termed the profile parameter. The second parameter relies on kinematic invariants to quantify halo formation in phase space; we call it the halo parameter. The profile parameter can be computed from experimental beam profile data. The halo parameter provides a theoretically more complete description of halo in phase space, but is difficult to obtain experimentally.

  10. The "Life Potential": a new complex algorithm to assess "Heart Rate Variability" from Holter records for cognitive and diagnostic aims. Preliminary experimental results showing its dependence on age, gender and health conditions

    E-print Network

    Barra, Orazio A

    2013-01-01

    Although HRV (Heart Rate Variability) analyses have been carried out for several decades, several limiting factors still make these analyses useless from a clinical point of view. The present paper aims at overcoming some of these limits by introducing the "Life Potential" (BMP), a new mathematical algorithm which seems to exhibit surprising cognitive and predictive capabilities. BMP is defined as a linear combination of five HRV Non-Linear Variables, in turn derived from the thermodynamic formalism of chaotic dynamic systems. The paper presents experimental measurements of BMP (Average Values and Standard Deviations) derived from 1048 Holter tests, matched in age and gender, including a control group of 356 healthy subjects. The main results are: (a) BMP always decreases when the age increases, and its dependence on age and gender is well established; (b) the shape of the age dependence within "healthy people" is different from that found in the general group: this behavior provides evidence of possible illn...

  11. Quantifying mixing using equilibrium reactions

    SciTech Connect

    Wheat, Philip M.; Posner, Jonathan D.

    2009-03-15

    A method of quantifying equilibrium reactions in a microchannel using a fluorometric reaction of Fluo-4 and Ca²⁺ ions is presented. Under the proper conditions, equilibrium reactions can be used to quantify fluid mixing without the challenges associated with constituent mixing measures such as limited imaging spatial resolution and viewing angle coupled with three-dimensional structure. Quantitative measurements of CaCl and calcium-indicating fluorescent dye Fluo-4 mixing are measured in Y-shaped microchannels. Reactant and product concentration distributions are modeled using Green's function solutions and a numerical solution to the advection-diffusion equation. Equilibrium reactions provide for an unambiguous, quantitative measure of mixing when the reactant concentrations are greater than 100 times their dissociation constant and the diffusivities are equal. At lower concentrations and for dissimilar diffusivities, the area averaged fluorescence signal reaches a maximum before the species have interdiffused, suggesting that reactant concentrations and diffusivities must be carefully selected to provide unambiguous, quantitative mixing measures. Fluorometric equilibrium reactions work over a wide range of pH and background concentrations such that they can be used for a wide variety of fluid mixing measures including industrial or microscale flows.

  12. Towards quantifying fuzzy stream power

    NASA Astrophysics Data System (ADS)

    Schwanghart, W.; Korup, O.

    2012-04-01

    Deterministic flow direction algorithms such as the D8 have wide application in numerical models of landscape evolution. These simple algorithms play a central role in quantifying drainage basin area, and hence approximating—via empirically derived relationships from regional flood frequency and hydraulic geometry—stream power or fluvial erosion potential. Here we explore how alternative algorithms that employ a probabilistic choice of flow direction affect quantitative estimates of stream power. We test a probabilistic multi-flow direction algorithm within the MATLAB TopoToolbox in model and real landscapes of low topographic relief and minute gradients, where potentially fuzzy drainage divides are dictated by, among others, alluvial fan dynamics, playa infill, and groundwater fluxes and seepage. We employ a simplistic numerical landscape evolution model that simulates fluvial incision and hillslope diffusion and explicitly models the existence and capture of endorheic basins that prevail in (semi-)arid, low-relief landscapes. We discuss how using this probabilistic multi-flow direction algorithm helps represent and quantify uncertainty about spatio-temporal drainage divide locations and how this bears on quantitative estimates of downstream stream power and fluvial erosion potential as well as their temporal dynamics.
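
    The "stream power or fluvial erosion potential" referred to above is typically approximated from drainage area (as a discharge proxy) and local slope; the sketch below uses the usual power-law form with placeholder coefficients, so that any uncertainty in upslope area from a probabilistic flow-routing scheme propagates directly into the erosion estimate.

```python
import numpy as np

def stream_power_index(drainage_area, slope, k=1.0, m=0.5, n=1.0):
    """Stream-power-style erosion potential E = k * A**m * S**n, with drainage
    area A standing in for discharge via empirical hydraulic-geometry scaling."""
    return k * drainage_area**m * slope**n

# Illustrative grid cells only: area in m^2, slope as a gradient (m/m).
areas = np.array([1.0e4, 1.0e6, 1.0e8])
slopes = np.array([0.05, 0.01, 0.001])
print(stream_power_index(areas, slopes))
```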

  13. DISSERTATION QUANTIFYING SCALE RELATIONSHIPS IN SNOW DISTRIBUTIONS

    E-print Network

    Anderson, Charles W.

    Dissertation by Jeffrey S. Deems: Quantifying Scale Relationships in Snow Depth Distributions. [Title-page and abstract fragments] Spatial distributions of snow in mountain environments represent the time ...

  14. Quantifying Evaporation in a Permeable Pavement System

    EPA Science Inventory

    Studies quantifying evaporation from permeable pavement systems are limited to a few laboratory studies and one field application. This research quantifies evaporation for a larger-scale field application by measuring the water balance from lined permeable pavement sections. Th...

  15. Quantifying Drosophila food intake: comparative analysis of current methodology.

    PubMed

    Deshpande, Sonali A; Carvalho, Gil B; Amador, Ariadna; Phillips, Angela M; Hoxha, Sany; Lizotte, Keith J; Ja, William W

    2014-05-01

    Food intake is a fundamental parameter in animal studies. Despite the prevalent use of Drosophila in laboratory research, precise measurements of food intake remain challenging in this model organism. Here, we compare several common Drosophila feeding assays: the capillary feeder (CAFE), food labeling with a radioactive tracer or colorimetric dye and observations of proboscis extension (PE). We show that the CAFE and radioisotope labeling provide the most consistent results, have the highest sensitivity and can resolve differences in feeding that dye labeling and PE fail to distinguish. We conclude that performing the radiolabeling and CAFE assays in parallel is currently the best approach for quantifying Drosophila food intake. Understanding the strengths and limitations of methods for measuring food intake will greatly advance Drosophila studies of nutrition, behavior and disease. PMID:24681694

  16. Size Matters Grounding Quantifiers in Spatial Perception

    E-print Network

    Koolen, Marijn

    Size Matters: Grounding Quantifiers in Spatial Perception. Simon Pauw. ILLC Dissertation Series DS-2013-01. [Front-matter fragments] For further information about ILLC ... ISBN ending 94-6182-336-6 ... "Academisch Proefschrift ter verkrijging ..." (Dutch: academic dissertation submitted for obtaining a degree) ...

  17. Quantifying Global Uncertainties in a Simple Microwave Rainfall Algorithm

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Berg, Wesley; Thomas-Stahle, Jody; Masunaga, Hirohiko

    2006-01-01

    While a large number of methods exist in the literature for retrieving rainfall from passive microwave brightness temperatures, little has been written about the quantitative assessment of the expected uncertainties in these rainfall products at various time and space scales. The latter is the result of two factors: sparse validation sites over most of the world's oceans, and algorithm sensitivities to rainfall regimes that cause inconsistencies against validation data collected at different locations. To make progress in this area, a simple probabilistic algorithm is developed. The algorithm uses an a priori database constructed from the Tropical Rainfall Measuring Mission (TRMM) radar data coupled with radiative transfer computations. Unlike efforts designed to improve rainfall products, this algorithm takes a step backward in order to focus on uncertainties. In addition to inversion uncertainties, the construction of the algorithm allows errors resulting from incorrect databases, incomplete databases, and time- and space-varying databases to be examined. These are quantified. Results show that the simple algorithm reduces errors introduced by imperfect knowledge of precipitation radar (PR) rain by a factor of 4 relative to an algorithm that is tuned to the PR rainfall. Database completeness does not introduce any additional uncertainty at the global scale, while climatologically distinct space/time domains add approximately 25% uncertainty that cannot be detected by a radiometer alone. Of this value, 20% is attributed to changes in cloud morphology and microphysics, while 5% is a result of changes in the rain/no-rain thresholds. All but 2%-3% of this variability can be accounted for by considering the implicit assumptions in the algorithm. Additional uncertainties introduced by the details of the algorithm formulation are not quantified in this study because of the need for independent measurements that are beyond the scope of this paper. A validation strategy for these errors is outlined.
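
    The a priori database described above is typically used in a Bayesian retrieval in which an observed brightness-temperature vector is compared against database entries and the retrieved rain rate is a weighted average; the schematic below uses made-up two-channel data, not the TRMM database or the paper's exact algorithm.

```python
import numpy as np

def bayesian_rain_retrieval(tb_obs, db_tb, db_rain, sigma=2.0):
    """Retrieve rain rate as a Gaussian-weighted average over an a priori database:
    each entry is weighted by exp(-0.5 * ||Tb_obs - Tb_entry||^2 / sigma^2)."""
    d2 = np.sum((db_tb - tb_obs) ** 2, axis=1)
    weights = np.exp(-0.5 * d2 / sigma**2)
    return np.sum(weights * db_rain) / np.sum(weights)

rng = np.random.default_rng(5)

# Hypothetical database: 2-channel brightness temperatures (K) and rain rates (mm/h).
db_rain = rng.gamma(shape=1.0, scale=2.0, size=5000)
noise = rng.normal(0.0, 1.5, size=(5000, 2))
db_tb = np.column_stack([260.0 - 3.0 * db_rain, 240.0 + 2.0 * db_rain]) + noise

tb_obs = np.array([250.0, 247.0])   # a single observed pixel
print(f"retrieved rain rate: {bayesian_rain_retrieval(tb_obs, db_tb, db_rain):.2f} mm/h")
```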

  18. Quantifying Resource Use in Computations

    E-print Network

    van Son, R J J H

    2009-01-01

    It is currently not possible to quantify the resources needed to perform a computation. As a consequence, it is not possible to reliably evaluate the hardware resources needed for the application of algorithms or the running of programs. This is apparent in both computer science, for instance, in cryptanalysis, and in neuroscience, for instance, comparative neuro-anatomy. A System versus Environment game formalism is proposed based on Computability Logic that allows to define a computational work function that describes the theoretical and physical resources needed to perform any purely algorithmic computation. Within this formalism, the cost of a computation is defined as the sum of information storage over the steps of the computation. The size of the computational device, eg, the action table of a Universal Turing Machine, the number of transistors in silicon, or the number and complexity of synapses in a neural net, is explicitly included in the computational cost. The proposed cost function leads in a na...

  19. Quantifying entanglement with witness operators

    SciTech Connect

    Brandao, Fernando G.S.L.

    2005-08-15

    We present a unifying approach to the quantification of entanglement based on entanglement witnesses, which includes several already established entanglement measures such as the negativity, the concurrence, and the robustness of entanglement. We then introduce an infinite family of new entanglement quantifiers, having as its limits the best separable approximation measure and the generalized robustness. Gaussian states, states with symmetry, states constrained to super-selection rules, and states composed of indistinguishable particles are studied under the view of the witnessed entanglement. We derive new bounds to the fidelity of teleportation d_min, for the distillable entanglement E_D and for the entanglement of formation. A particular measure, the PPT-generalized robustness, stands out due to its easy calculability and provides sharper bounds to d_min and E_D than the negativity in most of the states. We illustrate our approach studying thermodynamical properties of entanglement in the Heisenberg XXX and dimerized models.

  20. Detecting and Quantifying Topography in Neural Maps

    PubMed Central

    Yarrow, Stuart; Razak, Khaleel A.; Seitz, Aaron R.; Seriès, Peggy

    2014-01-01

    Topographic maps are an often-encountered feature in the brains of many species, yet there are no standard, objective procedures for quantifying topography. Topographic maps are typically identified and described subjectively, but in cases where the scale of the map is close to the resolution limit of the measurement technique, identifying the presence of a topographic map can be a challenging subjective task. In such cases, an objective topography detection test would be advantageous. To address these issues, we assessed seven measures (Pearson distance correlation, Spearman distance correlation, Zrehen's measure, topographic product, topological correlation, path length and wiring length) by quantifying topography in three classes of cortical map model: linear, orientation-like, and clusters. We found that all but one of these measures were effective at detecting statistically significant topography even in weakly-ordered maps, based on simulated noisy measurements of neuronal selectivity and sparse sampling of the maps. We demonstrate the practical applicability of these measures by using them to examine the arrangement of spatial cue selectivity in pallid bat A1. This analysis shows that significantly topographic arrangements of interaural intensity difference and azimuth selectivity exist at the scale of individual binaural clusters. PMID:24505279
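
    The first of the seven measures listed above, the Pearson distance correlation, simply correlates pairwise distances in cortical space with pairwise distances in stimulus-preference space; the sketch below applies it to a synthetic map (not the pallid bat A1 data).

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def pearson_distance_correlation(positions, preferences):
    """Correlate pairwise cortical distances with pairwise stimulus-preference
    distances; values near 1 indicate a smooth topographic arrangement."""
    r, _ = pearsonr(pdist(positions), pdist(preferences))
    return r

rng = np.random.default_rng(6)
positions = rng.uniform(0, 1, size=(100, 2))                      # recording sites on the map
topographic = positions[:, :1] + rng.normal(0, 0.05, (100, 1))    # preference follows x-position
random_map = rng.uniform(0, 1, size=(100, 1))                     # no spatial order

print(pearson_distance_correlation(positions, topographic))  # high
print(pearson_distance_correlation(positions, random_map))   # near 0
```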

  1. Quantifying Emergent Behavior of Autonomous Robots

    NASA Astrophysics Data System (ADS)

    Martius, Georg; Olbrich, Eckehard

    2015-10-01

    Quantifying behaviors of robots that were generated autonomously from task-independent objective functions is an important prerequisite for objective comparisons of algorithms and movements of animals. The temporal sequence of such a behavior can be considered as a time series, and hence complexity measures developed for time series are natural candidates for its quantification. The predictive information and the excess entropy are such complexity measures. They measure the amount of information the past contains about the future and thus quantify the nonrandom structure in the temporal sequence. However, when using these measures for systems with continuous states, one has to deal with the fact that their values will depend on the resolution with which the system's states are observed. For deterministic systems both measures will diverge with increasing resolution. We therefore propose a new decomposition of the excess entropy into resolution-dependent and resolution-independent parts, and discuss how they depend on the dimensionality of the dynamics, correlations and the noise level. For the practical estimation we propose to use estimates based on the correlation integral instead of the direct estimation of the mutual information using the algorithm by Kraskov et al. (2004), which is based on nearest-neighbor statistics, because the latter allows less control of the scale dependencies. Using our algorithm we are able to show how autonomous learning generates behavior of increasing complexity with increasing learning duration.
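
    For a discretized behaviour sequence, the predictive information is the mutual information between past and future blocks; the naive plug-in estimator below illustrates the quantity on symbolic toy data. It is the simple histogram route, not the correlation-integral estimator the authors recommend for continuous states.

```python
import numpy as np
from collections import Counter

def predictive_information(sequence, k=2):
    """Plug-in estimate of I(past_k ; future_k) in bits for a symbolic sequence."""
    pairs = [(tuple(sequence[i - k:i]), tuple(sequence[i:i + k]))
             for i in range(k, len(sequence) - k + 1)]
    n = len(pairs)
    joint = Counter(pairs)
    past_counts = Counter(p for p, _ in pairs)
    future_counts = Counter(f for _, f in pairs)
    mi = 0.0
    for (past, fut), c in joint.items():
        p_joint = c / n
        mi += p_joint * np.log2(c * n / (past_counts[past] * future_counts[fut]))
    return mi

rng = np.random.default_rng(7)
periodic = [0, 1, 2, 3] * 250                      # strongly structured behaviour
noise = rng.integers(0, 4, size=1000).tolist()     # unstructured behaviour
print(predictive_information(periodic))            # ~2 bits
print(predictive_information(noise))               # near 0 (small upward plug-in bias)
```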

  2. Quantifying of bactericide properties of medicinal plants

    PubMed Central

    Ács, András; Gölöncsér, Flóra; Barabás, Anikó

    2011-01-01

    Extended research has been carried out to clarify the ecological role of plant secondary metabolites (SMs). Although their primary ecological function is self-defense, bioactive compounds have long been used in alternative medicine or in biological control of pests. Several members of the family Labiatae are known to have strong antimicrobial capacity. For testing and quantifying antibacterial activity, most often standard microbial protocols are used, assessing inhibitory activity on a selected strain. In this study, the applicability of a microbial ecotoxtest was evaluated to quantify the aggregate bactericide capacity of Labiatae species, based on the bioluminescence inhibition of the bacterium Vibrio fischeri. Striking differences were found amongst herbs, reaching even 10-fold toxicity. Glechoma hederacea L. proved to be the most toxic, with the EC50 of 0.4073 g dried plant/l. LC50 values generated by the standard bioassay seem to be a good indicator of the bactericide property of herbs. Traditional use of the selected herbs shows a good correlation with bioactivity expressed as bioluminescence inhibition, leading to the conclusion that the Vibrio fischeri bioassay can be a good indicator of the overall antibacterial capacity of herbs, at least on a screening level. PMID:21502819

  3. Quantifying capital goods for waste incineration

    SciTech Connect

    Brogaard, L.K.; Riber, C.; Christensen, T.H.

    2013-06-15

    Highlights: • Materials and energy used for the construction of waste incinerators were quantified. • The data was collected from five incineration plants in Scandinavia. • Included were six main materials, electronic systems, cables and all transportation. • The capital goods contributed 2–3% compared to the direct emissions impact on GW. - Abstract: Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000–240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used amounting to 19,000–26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000–5000 MW h. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The material and energy used for the construction corresponded to the emission of 7–14 kg CO2 per tonne of waste combusted throughout the lifetime of the incineration plant. The assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2–3% with respect to kg CO2 per tonne of waste combusted.

  4. New Drug Shows Mixed Results Against Early Alzheimer's

    MedlinePLUS


  5. Quantifying strain variability in modeling growth of Listeria monocytogenes.

    PubMed

    Aryani, D C; den Besten, H M W; Hazeleger, W C; Zwietering, M H

    2015-09-01

    Prediction of microbial growth kinetics can differ from the actual behavior of the target microorganisms. In the present study, the impact of strain variability on maximum specific growth rate (μmax) (h⁻¹) was quantified using twenty Listeria monocytogenes strains. The μmax was determined as a function of four different variables, namely pH, water activity (aw)/NaCl concentration [NaCl], undissociated lactic acid concentration ([HA]), and temperature (T). The strain variability was compared to biological and experimental variabilities to determine their importance. The experiment was done in duplicate at the same time to quantify experimental variability and reproduced at least twice on different experimental days to quantify biological (reproduction) variability. For all variables, experimental variability was clearly lower than biological variability and strain variability; and remarkably, biological variability was similar to strain variability. Strain variability in cardinal growth parameters, namely pHmin, [NaCl]max, [HA]max, and Tmin, was further investigated by fitting secondary growth models to the μmax data, including a modified secondary pH model. The fitting results showed that L. monocytogenes had an average pHmin of 4.5 (5-95% prediction interval (PI) 4.4-4.7), [NaCl]max of 2.0 mM (PI 1.8-2.1), [HA]max of 5.1 mM (PI 4.2-5.9), and Tmin of -2.2°C (PI (-3.3)-(-1.1)). The strain variability in cardinal growth parameters was benchmarked to available literature data, showing that the effect of strain variability explained around 1/3 or less of the variability found in the literature. The cardinal growth parameters and their prediction intervals were used as input to illustrate the effect of strain variability on the growth of L. monocytogenes in food products with various characteristics, resulting in 2-4 logCFU/ml(g) difference in growth prediction between the most and least robust strains, depending on the type of food product. This underlines the importance of obtaining quantitative knowledge on variability factors to realistically predict microbial growth kinetics. PMID:26011600

  6. Towards quantifying complexity with quantum mechanics

    NASA Astrophysics Data System (ADS)

    Tan, Ryan; R. Terno, Daniel; Thompson, Jayne; Vedral, Vlatko; Gu, Mile

    2014-09-01

    While we have intuitive notions of structure and complexity, the formalization of this intuition is non-trivial. The statistical complexity is a popular candidate. It is based on the idea that the complexity of a process can be quantified by the complexity of its simplest mathematical model, i.e., the model that requires the least past information for optimal future prediction. Here we review how such models, known as ε-machines, can be further simplified through quantum logic, and explore the resulting consequences for understanding complexity. In particular, we propose a new measure of complexity based on quantum ε-machines. We apply this to a simple system undergoing constant thermalization. The resulting quantum measure of complexity aligns more closely with our intuition of how complexity should behave.

  7. Quantifying Significance of MHC II Residues.

    PubMed

    Fan, Ying; Lu, Ruoshui; Wang, Lusheng; Andreatta, Massimo; Li, Shuai Cheng

    2014-01-01

    The major histocompatibility complex (MHC), a cell-surface protein mediating immune recognition, plays important roles in the immune response system of all higher vertebrates. MHC molecules are highly polymorphic and they are grouped into serotypes according to the specificity of the response. It is a common belief that a protein sequence determines its three dimensional structure and function. Hence, the protein sequence determines the serotype. Residues play different levels of importance. In this paper, we quantify the residue significance with the available serotype information. Knowing the significance of the residues will deepen our understanding of the MHC molecules and yield us a concise representation of the molecules. In this paper we propose a linear programming-based approach to find significant residue positions as well as quantifying their significance in MHC II DR molecules. Among all the residues in MHC II DR molecules, 18 positions are of particular significance, which is consistent with the literature on MHC binding sites, and succinct pseudo-sequences appear to be adequate to capture the whole sequence features. When the result is used for classification of MHC molecules with serotype assigned by WHO, a 98.4 percent prediction performance is achieved. The methods have been implemented in java (http://code.google.com/p/quassi/). PMID:26355503

  8. Quantifying radionuclide signatures from a γ-γ coincidence system.

    PubMed

    Britton, Richard; Jackson, Mark J; Davies, Ashley V

    2015-11-01

    A method for quantifying gamma coincidence signatures has been developed, and tested in conjunction with a high-efficiency multi-detector system to quickly identify trace amounts of radioactive material. The γ-γ system utilises fully digital electronics and list-mode acquisition to time-stamp each event, allowing coincidence matrices to be easily produced alongside typical 'singles' spectra. To quantify the coincidence signatures a software package has been developed to calculate efficiency and cascade summing corrected branching ratios. This utilises ENSDF records as an input, and can be fully automated, allowing the user to quickly and easily create/update a coincidence library that contains all possible γ and conversion electron cascades, associated cascade emission probabilities, and true-coincidence summing corrected γ cascade detection probabilities. It is also fully searchable by energy, nuclide, coincidence pair, γ multiplicity, cascade probability and half-life of the cascade. The probabilities calculated were tested using measurements performed on the γ-γ system, and found to provide accurate results for the nuclides investigated. Given the flexibility of the method (it only relies on evaluated nuclear data and accurate efficiency characterisations), the software can now be utilised for a variety of systems, quickly and easily calculating coincidence signature probabilities. PMID:26254208

  9. Quantifying Einstein-Podolsky-Rosen steering.

    PubMed

    Skrzypczyk, Paul; Navascués, Miguel; Cavalcanti, Daniel

    2014-05-01

    Einstein-Podolsky-Rosen steering is a form of bipartite quantum correlation that is intermediate between entanglement and Bell nonlocality. It allows for entanglement certification when the measurements performed by one of the parties are not characterized (or are untrusted) and has applications in quantum key distribution. Despite its foundational and applied importance, Einstein-Podolsky-Rosen steering lacks a quantitative assessment. Here we propose a way of quantifying this phenomenon and use it to study the steerability of several quantum states. In particular, we show that every pure entangled state is maximally steerable and the projector onto the antisymmetric subspace is maximally steerable for all dimensions; we provide a new example of one-way steering and give strong support that states with positive-partial transposition are not steerable. PMID:24856679

  10. Dust as interstellar catalyst. I. Quantifying the chemical desorption process

    NASA Astrophysics Data System (ADS)

    Minissale, M.; Dulieu, F.; Cazaux, S.; Hocuk, S.

    2016-01-01

    Context. The presence of dust in the interstellar medium has profound consequences on the chemical composition of regions where stars are forming. Recent observations show that many species formed onto dust are populating the gas phase, especially in cold environments where UV- and cosmic-ray-induced photons do not account for such processes. Aims: The aim of this paper is to understand and quantify the process that releases solid species into the gas phase, the so-called chemical desorption process, so that an explicit formula can be derived that can be included in astrochemical models. Methods: We present a collection of experimental results of more than ten reactive systems. For each reaction, different substrates such as oxidized graphite and compact amorphous water ice were used. We derived a formula for reproducing the efficiencies of the chemical desorption process that considers the equipartition of the energy of newly formed products, followed by classical bounce on the surface. In part II of this study we extend these results to astrophysical conditions. Results: The equipartition of energy correctly describes the chemical desorption process on bare surfaces. On icy surfaces, the chemical desorption process is much less efficient, and a better description of the interaction with the surface is still needed. Conclusions: We show that the mechanism that directly transforms solid species into gas phase species is efficient for many reactions.

  11. Quantifying Uncertainty in Epidemiological Models

    SciTech Connect

    Ramanathan, Arvind; Jha, Sumit Kumar

    2012-01-01

    Modern epidemiology has made use of a number of mathematical models, including ordinary differential equation (ODE) based models and agent based models (ABMs) to describe the dynamics of how a disease may spread within a population and enable the rational design of strategies for intervention that effectively contain the spread of the disease. Although such predictions are of fundamental importance in preventing the next global pandemic, there is a significant gap in trusting the outcomes/predictions solely based on such models. Hence, there is a need to develop approaches such that mathematical models can be calibrated against historical data. In addition, there is a need to develop rigorous uncertainty quantification approaches that can provide insights into when a model will fail and characterize the confidence in the (possibly multiple) model outcomes/predictions, when such retrospective analysis cannot be performed. In this paper, we outline an approach to develop uncertainty quantification approaches for epidemiological models using formal methods and model checking. By specifying the outcomes expected from a model in a suitable spatio-temporal logic, we use probabilistic model checking methods to quantify the probability with which the epidemiological model satisfies the specification. We argue that statistical model checking methods can solve the uncertainty quantification problem for complex epidemiological models.
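
    The workflow described (specify a property, sample the uncertain parameters, estimate the probability that the model satisfies the specification) can be illustrated with a toy SIR model. This is a hedged sketch, not the authors' framework: the model, the parameter ranges, and the peak-infection property are invented for illustration, and a statistical model checker would replace the naive Monte Carlo loop with sequential hypothesis testing.

```python
# Minimal sketch of statistical model checking for an SIR model (illustrative
# parameters and property; not the authors' implementation).
import numpy as np

def simulate_sir(beta, gamma, i0=0.01, days=160, dt=0.1):
    """Forward-Euler integration of the SIR equations; returns infected fraction over time."""
    s, i, r = 1.0 - i0, i0, 0.0
    infected = []
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        infected.append(i)
    return np.array(infected)

rng = np.random.default_rng(0)
n_samples = 1000
satisfied = 0
for _ in range(n_samples):
    beta = rng.uniform(0.2, 0.5)    # uncertain contact rate
    gamma = rng.uniform(0.05, 0.2)  # uncertain recovery rate
    traj = simulate_sir(beta, gamma)
    # Property ("specification"): the infected fraction never exceeds 30%.
    if traj.max() < 0.3:
        satisfied += 1

p_hat = satisfied / n_samples
print(f"estimated probability the model satisfies the property: {p_hat:.3f}")
```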

  12. Quantifying macromolecular conformational transition pathways

    NASA Astrophysics Data System (ADS)

    Seyler, Sean; Kumar, Avishek; Thorpe, Michael; Beckstein, Oliver

    2015-03-01

    Diverse classes of proteins function through large-scale conformational changes that are challenging for computer simulations. A range of fast path-sampling techniques have been used to generate transitions, but it has been difficult to compare paths from (and assess the relative strengths of) different methods. We introduce a comprehensive method (pathway similarity analysis, PSA) for quantitatively characterizing and comparing macromolecular pathways. The Hausdorff and Fréchet metrics (known from computational geometry) are used to quantify the degree of similarity between polygonal curves in configuration space. A strength of PSA is its use of the full information available from the 3N-dimensional configuration space trajectory without requiring additional specific knowledge about the system. We compare a sample of eleven different methods for the closed-to-open transitions of the apo enzyme adenylate kinase (AdK) and also apply PSA to an ensemble of 400 AdK trajectories produced by dynamic importance sampling MD and the Geometrical Pathways algorithm. We discuss the method's potential to enhance our understanding of transition path sampling methods, validate them, and help guide future research toward deeper physical insights into conformational transitions.
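
    The Hausdorff metric at the heart of this kind of comparison is straightforward to compute once two trajectories are treated as point sets in configuration space. A minimal sketch using SciPy follows; the toy 9-dimensional random trajectories are stand-ins, not AdK data.

```python
# Sketch of the core distance computation in a PSA-style comparison: the
# (symmetric) Hausdorff distance between two trajectories treated as point
# sets in configuration space. Trajectories here are random stand-ins.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(1)
path_a = rng.normal(size=(200, 9))   # 200 frames of a toy 9-dimensional trajectory
path_b = path_a + rng.normal(scale=0.1, size=(200, 9))  # a perturbed variant

d_ab = directed_hausdorff(path_a, path_b)[0]
d_ba = directed_hausdorff(path_b, path_a)[0]
hausdorff = max(d_ab, d_ba)
print(f"Hausdorff distance between the two paths: {hausdorff:.3f}")
```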

  13. Quantifying uncertainty from material inhomogeneity.

    SciTech Connect

    Battaile, Corbett Chandler; Emery, John M.; Brewer, Luke N.; Boyce, Brad Lee

    2009-09-01

    Most engineering materials are inherently inhomogeneous in their processing, internal structure, properties, and performance. Their properties are therefore statistical rather than deterministic. These inhomogeneities manifest across multiple length and time scales, leading to variabilities, i.e. statistical distributions, that are necessary to accurately describe each stage in the process-structure-properties hierarchy, and are ultimately the primary source of uncertainty in performance of the material and component. When localized events are responsible for component failure, or when component dimensions are on the order of microstructural features, this uncertainty is particularly important. For ultra-high reliability applications, the uncertainty is compounded by a lack of data describing the extremely rare events. Hands-on testing alone cannot supply sufficient data for this purpose. To date, there is no robust or coherent method to quantify this uncertainty so that it can be used in a predictive manner at the component length scale. The research presented in this report begins to address this lack of capability through a systematic study of the effects of microstructure on the strain concentration at a hole. To achieve the strain concentration, small circular holes (approximately 100 µm in diameter) were machined into brass tensile specimens using a femto-second laser. The brass was annealed at 450 °C, 600 °C, and 800 °C to produce three hole-to-grain size ratios of approximately 7, 1, and 1/7. Electron backscatter diffraction experiments were used to guide the construction of digital microstructures for finite element simulations of uniaxial tension. Digital image correlation experiments were used to qualitatively validate the numerical simulations. The simulations were performed iteratively to generate statistics describing the distribution of plastic strain at the hole in varying microstructural environments. In both the experiments and simulations, the deformation behavior was found to depend strongly on the character of the nearby microstructure.

  14. Quantifying the isotopic ‘continental effect’

    NASA Astrophysics Data System (ADS)

    Winnick, Matthew J.; Chamberlain, C. Page; Caves, Jeremy K.; Welker, Jeffrey M.

    2014-11-01

    Since the establishment of the IAEA-WMO precipitation-monitoring network in 1961, it has been observed that isotope ratios in precipitation (δ2H and δ18O) generally decrease from coastal to inland locations, an observation described as the 'continental effect.' While discussed frequently in the literature, there have been few attempts to quantify the variables controlling this effect despite the fact that isotopic gradients over continents can vary by orders of magnitude. In a number of studies, traditional Rayleigh fractionation has proven inadequate in describing the global variability of isotopic gradients due to its simplified treatment of moisture transport and its lack of moisture recycling processes. In this study, we use a one-dimensional idealized model of water vapor transport along a storm track to investigate the dominant variables controlling isotopic gradients in precipitation across terrestrial environments. We find that the sensitivity of these gradients to progressive rainout is controlled by a combination of the amount of evapotranspiration and the ratio of transport by advection to transport by eddy diffusion, with these variables becoming increasingly important with decreasing length scales of specific humidity. A comparison of modeled gradients with global precipitation isotope data indicates that these variables can account for the majority of variability in observed isotopic gradients between coastal and inland locations. Furthermore, the dependence of the 'continental effect' on moisture recycling allows for the quantification of evapotranspiration fluxes from measured isotopic gradients, with implications for both paleoclimate reconstructions and large-scale monitoring efforts in the context of global warming and a changing hydrologic cycle.
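
    For reference, the classical Rayleigh fractionation that the study finds inadequate relates the isotopic composition of the remaining vapor to the fraction of moisture not yet rained out. A worked sketch follows; the initial δ18O value and the fractionation factor are illustrative choices, not values from the paper.

```python
# Classical Rayleigh fractionation, the baseline the authors go beyond: the
# isotopic composition of the remaining vapor as a fraction (1 - f) of the
# original moisture rains out. Values are illustrative.

def rayleigh_delta(delta0_permil, f, alpha):
    """delta value (per mil) of remaining vapor after a fraction (1 - f) has condensed."""
    return (delta0_permil + 1000.0) * f ** (alpha - 1.0) - 1000.0

delta0 = -12.0   # initial d18O of the vapor, per mil (illustrative)
alpha = 1.0094   # typical liquid-vapor equilibrium fractionation factor for 18O (illustrative)
for f in (1.0, 0.8, 0.6, 0.4):
    print(f"fraction remaining {f:.1f}: d18O = {rayleigh_delta(delta0, f, alpha):.1f} per mil")
```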

  15. Quantifying the Cognitive Extent of Science

    E-print Network

    Milojević, Staša

    2015-01-01

    While modern science is characterized by exponential growth in the scientific literature, the increase in publication volume clearly does not reflect the expansion of the cognitive boundaries of science. Nevertheless, most of the metrics for assessing the vitality of science or for making funding and policy decisions are based on productivity. Similarly, the increasing level of knowledge production by large science teams, whose results often enjoy greater visibility, does not necessarily mean that "big science" leads to cognitive expansion. Here we present a novel, big-data method to quantify the extents of cognitive domains of different bodies of scientific literature independently from publication volume, and apply it to 20 million articles published over 60-130 years in physics, astronomy, and biomedicine. The method is based on the lexical diversity of titles of fixed quotas of research articles. Owing to the large size of the quotas, the method overcomes the inherent stochasticity of article titles to achieve...

  16. Quantifying the magnetic nature of light emission.

    PubMed

    Taminiau, Tim H; Karaveli, Sinan; van Hulst, Niek F; Zia, Rashid

    2012-01-01

    Tremendous advances in the study of magnetic light-matter interactions have recently been achieved using man-made nanostructures that exhibit and exploit an optical magnetic response. However, naturally occurring emitters can also exhibit magnetic resonances in the form of optical-frequency magnetic-dipole transitions. Here we quantify the magnetic nature of light emission using energy- and momentum-resolved spectroscopy, and leverage a pair of spectrally close electric- and magnetic-dipole transitions in trivalent europium to probe vacuum fluctuations in the electric and magnetic fields at the nanometre scale. These results reveal a new tool for nano-optics: an atomic-size quantum emitter that interacts with the magnetic component of light. PMID:22864572

  17. Quantifying the semantics of search behavior before stock market moves

    E-print Network

    Stanley, H. Eugene

    [Fragmentary first-page excerpt] Authors include Chester Curme and Tobias (surname truncated). The study quantifies topics of interest in search behavior before stock market moves, in an analysis of historic data from 2004 until 2012; the surrounding text cites earlier findings on trading volumes in US stock markets (refs. 30, 31) and a study of Internet users from different countries.

  18. Quantifying anatomical shape variations in neurological disorders.

    PubMed

    Singh, Nikhil; Fletcher, P Thomas; Preston, J Samuel; King, Richard D; Marron, J S; Weiner, Michael W; Joshi, Sarang

    2014-04-01

    We develop a multivariate analysis of brain anatomy to identify the relevant shape deformation patterns and quantify the shape changes that explain corresponding variations in clinical neuropsychological measures. We use kernel Partial Least Squares (PLS) and formulate a regression model in the tangent space of the manifold of diffeomorphisms characterized by deformation momenta. The scalar deformation momenta completely encode the diffeomorphic changes in anatomical shape. In this model, the clinical measures are the response variables, while the anatomical variability is treated as the independent variable. To better understand the "shape-clinical response" relationship, we also control for demographic confounders, such as age, gender, and years of education in our regression model. We evaluate the proposed methodology on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database using baseline structural MR imaging data and neuropsychological evaluation test scores. We demonstrate the ability of our model to quantify the anatomical deformations in units of clinical response. Our results also demonstrate that the proposed method is generic and generates reliable shape deformations both in terms of the extracted patterns and the amount of shape changes. We found that while the hippocampus and amygdala emerge as mainly responsible for changes in test scores for global measures of dementia and memory function, they are not a determinant factor for executive function. Another critical finding was the appearance of thalamus and putamen as most important regions that relate to executive function. These resulting anatomical regions were consistent with very high confidence irrespective of the size of the population used in the study. This data-driven global analysis of brain anatomy was able to reach similar conclusions as other studies in Alzheimer's disease based on predefined ROIs, together with the identification of other new patterns of deformation. The proposed methodology thus holds promise for discovering new patterns of shape changes in the human brain that could add to our understanding of disease progression in neurological disorders. PMID:24667299

  19. Quantifying the value of redundant measurements at GRUAN sites

    NASA Astrophysics Data System (ADS)

    Madonna, F.; Rosoldi, M.; Güldner, J.; Haefele, A.; Kivi, R.; Cadeddu, M. P.; Sisterson, D.; Pappalardo, G.

    2014-06-01

    The potential for measurement redundancy to reduce uncertainty in atmospheric variables has not been investigated comprehensively for climate observations. We evaluated the usefulness of entropy and mutual correlation concepts, as defined in information theory, for quantifying random uncertainty and redundancy in time series of atmospheric water vapor provided by five highly instrumented GRUAN (GCOS [Global Climate Observing System] Reference Upper-Air Network) stations in 2010-2012. Results show that the random uncertainties for radiosonde, frost-point hygrometer, Global Positioning System, microwave and infrared radiometers, and Raman lidar measurements differed by less than 8%. Comparisons of time series of the Integrated Water Vapor (IWV) content from ground-based remote sensing instruments with in situ soundings showed that microwave radiometers have the highest redundancy and therefore the highest potential to reduce random uncertainty of IWV time series estimated by radiosondes. Moreover, the random uncertainty of a time series from one instrument can be reduced by about 60% by constraining the measurements with those from another instrument. The best reduction of random uncertainty resulted from conditioning of Raman lidar measurements with microwave radiometer measurements. Specific instruments are recommended for atmospheric water vapor measurements at GRUAN sites. This approach can be applied to the study of redundant measurements for other climate variables.
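
    The information-theoretic quantities involved (the entropy of an IWV time series and the mutual information between two instruments' series) can be estimated from histograms. The sketch below uses synthetic data standing in for GRUAN measurements; the bin counts and noise levels are arbitrary choices, not the paper's settings.

```python
# Illustrative computation of the information-theoretic quantities used to
# assess redundancy: Shannon entropy of each IWV time series and their mutual
# information, estimated from histograms. Data here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
iwv_radiosonde = rng.gamma(shape=4.0, scale=5.0, size=5000)          # synthetic IWV, kg/m^2
iwv_radiometer = iwv_radiosonde + rng.normal(scale=1.5, size=5000)   # redundant, noisier copy

def entropy(x, bins=30):
    p, _ = np.histogram(x, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=30):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

h_sonde = entropy(iwv_radiosonde)
mi = mutual_information(iwv_radiosonde, iwv_radiometer)
print(f"H(radiosonde) = {h_sonde:.2f} bits, I(sonde; radiometer) = {mi:.2f} bits")
print(f"redundant fraction of the radiosonde information: {mi / h_sonde:.2f}")
```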

  20. Rectal Swabs Are Suitable for Quantifying the Carriage Load of KPC-Producing Carbapenem-Resistant Enterobacteriaceae

    PubMed Central

    Lerner, A.; Romano, J.; Chmelnitsky, I.; Navon-Venezia, S.; Edgar, R.

    2013-01-01

    It is more convenient and practical to collect rectal swabs than stool specimens to study carriage of colon pathogens. In this study, we examined the ability to use rectal swabs rather than stool specimens to quantify Klebsiella pneumoniae carbapenemase (KPC)-producing carbapenem-resistant Enterobacteriaceae (CRE). We used a quantitative real-time PCR (qPCR) assay to determine the concentration of the blaKPC gene relative to the concentration of 16S rRNA genes and a quantitative culture-based method to quantify CRE relative to total aerobic bacteria. Our results demonstrated that rectal swabs are suitable for quantifying the concentration of KPC-producing CRE and that qPCR showed higher correlation between rectal swabs and stool specimens than the culture-based method. PMID:23295937
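
    A common way to turn cycle-threshold (Ct) values into a relative load is the 2^-ΔCt calculation against a reference gene. The sketch below illustrates that generic approach with invented Ct values; it assumes comparable amplification efficiencies for the two targets and is not necessarily the exact calibration used in the study.

```python
# Minimal sketch of relative quantification from qPCR cycle-threshold values
# using the 2^-dCt approach (generic method; assumes ~100% amplification
# efficiency for both targets, which may differ from the study's calibration).

def relative_load(ct_target, ct_reference):
    """Abundance of the target gene (e.g. blaKPC) relative to a reference (e.g. 16S rRNA)."""
    return 2.0 ** -(ct_target - ct_reference)

# Illustrative Ct values for a rectal swab and a stool specimen
swab = relative_load(ct_target=27.5, ct_reference=18.0)
stool = relative_load(ct_target=26.8, ct_reference=17.5)
print(f"relative blaKPC load, swab: {swab:.2e}, stool: {stool:.2e}")
```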

  1. Quantifying climatological ranges and anomalies for Pacific coral reef ecosystems.

    PubMed

    Gove, Jamison M; Williams, Gareth J; McManus, Margaret A; Heron, Scott F; Sandin, Stuart A; Vetter, Oliver J; Foley, David G

    2013-01-01

    Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic-biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations (Jarvis, Howland, Baker, Palmyra and Kingman) are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations. These metrics will help identify reef ecosystems most exposed to environmental stress as well as systems that may be more resistant or resilient to future climate change. PMID:23637939
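
    The basic climatology/anomaly decomposition behind such metrics can be sketched in a few lines: long-term means for each month of the year define the climatological range, and departures from those means are the anomalies. The monthly sea surface temperature series below is synthetic and only illustrates the bookkeeping, not the paper's satellite processing.

```python
# Sketch of the climatology/anomaly decomposition applied to a satellite time
# series (synthetic monthly SST here): long-term monthly means define the
# climatological range, and anomalies are departures from that climatology.
import numpy as np

rng = np.random.default_rng(3)
months = np.arange(120)                                    # ten years of monthly data
seasonal = 27.0 + 1.5 * np.sin(2 * np.pi * months / 12.0)  # synthetic SST cycle, deg C
sst = seasonal + rng.normal(scale=0.4, size=months.size)

climatology = np.array([sst[months % 12 == m].mean() for m in range(12)])
anomalies = sst - climatology[months % 12]

print(f"climatological range: {climatology.min():.2f} to {climatology.max():.2f} deg C")
print(f"largest warm anomaly: {anomalies.max():.2f} deg C")
```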

  2. Quantifying Climatological Ranges and Anomalies for Pacific Coral Reef Ecosystems

    PubMed Central

    Gove, Jamison M.; Williams, Gareth J.; McManus, Margaret A.; Heron, Scott F.; Sandin, Stuart A.; Vetter, Oliver J.; Foley, David G.

    2013-01-01

    Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic–biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations (Jarvis, Howland, Baker, Palmyra and Kingman) are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations. These metrics will help identify reef ecosystems most exposed to environmental stress as well as systems that may be more resistant or resilient to future climate change. PMID:23637939

  3. Validity of ambulatory accelerometry to quantify physical activity in heart failure.

    PubMed

    van den Berg-Emons, H J; Bussmann, J B; Balk, A H; Stam, H J

    2000-12-01

    The purpose was to assess the validity of a novel Activity Monitor to quantify physical activity in congestive heart failure. The Activity Monitor is based on long-term ambulatory monitoring of signals from body-fixed accelerometers. Information can be obtained on which mobility-related activity is performed, when, how intense, and for how long. Ten patients performed several functional activities. Continuous registrations of accelerometer signals were made and the output was compared with visual analysis of simultaneously made video recordings (reference method). Overall results showed an agreement between both methods of 90%. Percentages of sensitivity and predictive value were higher than 80% for most activities. Overall number of transitions was determined well (Activity Monitor, 153; video, 149; p = 0.33). It was concluded that the Activity Monitor is a valid instrument to quantify several aspects of everyday physical activity in congestive heart failure. PMID:11201626

  4. Shakespeare and other English Renaissance authors as characterized by Information Theory complexity quantifiers

    NASA Astrophysics Data System (ADS)

    Rosso, Osvaldo A.; Craig, Hugh; Moscato, Pablo

    2009-03-01

    We introduce novel Information Theory quantifiers in a computational linguistic study that involves a large corpus of English Renaissance literature. The 185 texts studied (136 plays and 49 poems in total), with first editions that range from 1580 to 1640, form a representative set of its period. Our data set includes 30 texts unquestionably attributed to Shakespeare; in addition we also included A Lover’s Complaint, a poem which generally appears in Shakespeare collected editions but whose authorship is currently in dispute. Our statistical complexity quantifiers combine the power of Jensen-Shannon’s divergence with the entropy variations as computed from a probability distribution function of the observed word use frequencies. Our results show, among other things, that for a given entropy poems display higher complexity than plays, that Shakespeare’s work falls into two distinct clusters in entropy, and that his work is remarkable for its homogeneity and for its closeness to overall means.
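
    The two ingredients of such quantifiers (a normalized Shannon entropy of the word-frequency distribution and the Jensen-Shannon divergence between two distributions) can be computed directly from word counts. A toy sketch follows, with two short quotations standing in for whole texts; it illustrates the quantities, not the paper's full clustering analysis.

```python
# Illustrative computation of normalized Shannon entropy and Jensen-Shannon
# divergence for word-frequency distributions. The "texts" are tiny stand-ins.
import numpy as np
from collections import Counter

def word_distribution(text, vocab):
    counts = Counter(text.lower().split())
    p = np.array([counts[w] for w in vocab], dtype=float)
    return p / p.sum()

def shannon_entropy(p):
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def jensen_shannon(p, q):
    m = 0.5 * (p + q)
    return shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))

text_a = "to be or not to be that is the question"
text_b = "the lady doth protest too much methinks the lady doth"
vocab = sorted(set(text_a.split()) | set(text_b.split()))

p, q = word_distribution(text_a, vocab), word_distribution(text_b, vocab)
h_norm = shannon_entropy(p) / np.log2(len(vocab))   # normalized entropy in [0, 1]
print(f"normalized entropy of text A: {h_norm:.3f}")
print(f"Jensen-Shannon divergence between A and B: {jensen_shannon(p, q):.3f} bits")
```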

  5. 28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS LINCOLN BOULEVARD, BIG LOST RIVER, AND NAVAL REACTORS FACILITY. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-101-2. DATED OCTOBER 12, 1965. INEL INDEX CODE NUMBER: 075 0101 851 151969. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  6. Quantifying temporal ventriloquism in audiovisual synchrony perception.

    PubMed

    Kuling, Irene A; Kohlrausch, Armin; Juola, James F

    2013-10-01

    The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from rhythm perception. In this method, participants had to align the temporal position of a target in a rhythmic sequence of four markers. In the first experiment, target and markers consisted of a visual flash or an auditory noise burst, and all four combinations of target and marker modalities were tested. In the same-modality conditions, no temporal biases and a high precision of the adjusted temporal position of the target were observed. In the different-modality conditions, we found a systematic temporal bias of 25-30 ms. In the second part of the first and in a second experiment, we tested conditions in which audiovisual markers with different stimulus onset asynchronies (SOAs) between the two components and a visual target were used to quantify temporal ventriloquism. The adjusted target positions varied by up to about 50 ms and depended in a systematic way on the SOA and its proximity to the point of subjective synchrony. These data allowed testing different quantitative models. The most satisfying model, based on work by Maij, Brenner, and Smeets (Journal of Neurophysiology 102, 490-495, 2009), linked temporal ventriloquism and the percept of synchrony and was capable of adequately describing the results from the present study, as well as those of some earlier experiments. PMID:23868564

  7. Portable XRF Technology to Quantify Pb in Bone In Vivo

    PubMed Central

    Specht, Aaron James; Weisskopf, Marc; Nie, Linda Huiling

    2014-01-01

    Lead is a ubiquitous toxicant. Bone lead has been established as an important biomarker for cumulative lead exposures and has been correlated with adverse health effects on many systems in the body. K-shell X-ray fluorescence (KXRF) is the standard method for measuring bone lead, but this approach has many difficulties that have limited the widespread use of this exposure assessment method. With recent advancements in X-ray fluorescence (XRF) technology, we have developed a portable system that can quantify lead in bone in vivo within 3 minutes. Our study investigated improvements to the system, four calibration methods, and system validation for in vivo measurements. Our main results show that the detection limit of the system is 2.9 ppm with 2 mm soft tissue thickness, the best calibration method for in vivo measurement is background subtraction, and there is strong correlation between KXRF and portable LXRF bone lead results. Our results indicate that the technology is ready to be used in large human population studies to investigate adverse health effects of lead exposure. The portability of the system and fast measurement time should allow for this technology to greatly advance the research on lead exposure and public/environmental health. PMID:26317033

  8. Deaf Learners' Knowledge of English Universal Quantifiers

    ERIC Educational Resources Information Center

    Berent, Gerald P.; Kelly, Ronald R.; Porter, Jeffrey E.; Fonzi, Judith

    2008-01-01

    Deaf and hearing students' knowledge of English sentences containing universal quantifiers was compared through their performance on a 50-item, multiple-picture task that required students to decide whether each of five pictures represented a possible meaning of a target sentence. The task assessed fundamental knowledge of quantifier sentences,…

  9. 1. Show the synthesis of prontosil. Show the starting

    E-print Network

    Gates, Kent. S.

    [Fragmentary problem-set excerpt; chemical structure drawings lost in extraction] The exercises ask for syntheses of prontosil and related sulfa-drug derivatives, including sulfamethoxazole (Bactrim/Septra), showing the starting materials and each step in the synthetic route.

  10. Quantifying Uncertainties in Land-Surface Microwave Emissivity Retrievals

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko

    2013-01-01

    Uncertainties in the retrievals of microwave land-surface emissivities are quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including the Special Sensor Microwave Imager, the Tropical Rainfall Measuring Mission Microwave Imager, and the Advanced Microwave Scanning Radiometer for Earth Observing System, are studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land-surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally, these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging 1%-4% (3-12 K) over desert and 1%-7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.
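
    With the true emissivity assumed constant, the split into systematic and random errors reduces to comparing dataset means and dataset spreads. A synthetic-data sketch of that bookkeeping follows; the biases and noise levels are invented, not the retrieval values reported above.

```python
# Sketch of separating systematic and random differences between retrieval
# datasets when the true emissivity is assumed constant: the difference of
# time means gives the systematic part, the spread about each dataset's own
# mean gives the random part. Data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
true_emissivity = 0.93
n_days = 365
retrieval_a = true_emissivity + 0.010 + rng.normal(scale=0.005, size=n_days)
retrieval_b = true_emissivity - 0.015 + rng.normal(scale=0.008, size=n_days)

systematic_diff = retrieval_a.mean() - retrieval_b.mean()
random_err_a = retrieval_a.std(ddof=1)
random_err_b = retrieval_b.std(ddof=1)

print(f"systematic difference A-B: {systematic_diff * 100:.1f}% emissivity")
print(f"random error A: {random_err_a * 100:.2f}%, B: {random_err_b * 100:.2f}%")
```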

  11. Quantifying dynamical spillover in co-evolving multiplex networks

    PubMed Central

    Vijayaraghavan, Vikram S.; Noël, Pierre-André; Maoz, Zeev; D’Souza, Raissa M.

    2015-01-01

    Multiplex networks (a system of multiple networks that have different types of links but share a common set of nodes) arise naturally in a wide spectrum of fields. Theoretical studies show that in such multiplex networks, correlated edge dynamics between the layers can have a profound effect on dynamical processes. However, how to extract the correlations from real-world systems is an outstanding challenge. Here we introduce the Multiplex Markov chain to quantify correlations in edge dynamics found in longitudinal data of multiplex networks. By comparing the results obtained from the multiplex perspective to a null model which assumes layers in a network are independent, we can identify real correlations as distinct from simultaneous changes that occur due to random chance. We use this approach on two different data sets: the network of trade and alliances between nation states, and the email and co-commit networks between developers of open source software. We establish the existence of “dynamical spillover” showing the correlated formation (or deletion) of edges of different types as the system evolves. The details of the dynamics over time provide insight into potential causal pathways. PMID:26459949
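
    The idea of comparing joint edge dynamics against an independent-layers null model can be sketched with synthetic edge histories: estimate a transition matrix over the four joint states of an edge in two layers, then compare it with the Kronecker product of the single-layer transition matrices. This toy version, with an artificial cross-layer correlation built in, omits the estimation details and statistical testing of the paper.

```python
# Toy sketch of the Multiplex Markov chain idea with synthetic edge histories.
import numpy as np

rng = np.random.default_rng(5)
n_edges, n_steps = 500, 40
layer1 = rng.integers(0, 2, size=(n_edges, n_steps))
# Layer 2 copies layer 1's previous state with occasional flips, creating
# a cross-layer correlation ("spillover").
layer2 = (np.roll(layer1, 1, axis=1) ^ (rng.random((n_edges, n_steps)) < 0.1)).astype(int)

def transition_matrix(states, n_states):
    """Row-normalized transition counts between consecutive time steps."""
    counts = np.zeros((n_states, n_states))
    for s_from, s_to in zip(states[:, :-1].ravel(), states[:, 1:].ravel()):
        counts[s_from, s_to] += 1
    return counts / counts.sum(axis=1, keepdims=True)

joint = transition_matrix(2 * layer1 + layer2, 4)   # joint edge state in {0,1,2,3}
null = np.kron(transition_matrix(layer1, 2),        # independent-layers null model:
               transition_matrix(layer2, 2))        # Kronecker product of marginals

print("largest departure of joint dynamics from the independent null:",
      np.abs(joint - null).max().round(3))
```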

  12. Quantifying dynamical spillover in co-evolving multiplex networks

    NASA Astrophysics Data System (ADS)

    Vijayaraghavan, Vikram S.; Noël, Pierre-André; Maoz, Zeev; D'Souza, Raissa M.

    2015-10-01

    Multiplex networks (a system of multiple networks that have different types of links but share a common set of nodes) arise naturally in a wide spectrum of fields. Theoretical studies show that in such multiplex networks, correlated edge dynamics between the layers can have a profound effect on dynamical processes. However, how to extract the correlations from real-world systems is an outstanding challenge. Here we introduce the Multiplex Markov chain to quantify correlations in edge dynamics found in longitudinal data of multiplex networks. By comparing the results obtained from the multiplex perspective to a null model which assumes layers in a network are independent, we can identify real correlations as distinct from simultaneous changes that occur due to random chance. We use this approach on two different data sets: the network of trade and alliances between nation states, and the email and co-commit networks between developers of open source software. We establish the existence of “dynamical spillover” showing the correlated formation (or deletion) of edges of different types as the system evolves. The details of the dynamics over time provide insight into potential causal pathways.

  13. Quantifying nanosheet graphene oxide using electrospray-differential mobility analysis.

    PubMed

    Tai, Jui-Ting; Lai, Yen-Chih; Yang, Jian-He; Ho, Hsin-Chia; Wang, Hsiao-Fang; Ho, Rong-Ming; Tsai, De-Hao

    2015-04-01

    We report a high-resolution, traceable method to quantify number concentrations and dimensional properties of nanosheet graphene oxide (N-GO) colloids using electrospray-differential mobility analysis (ES-DMA). Transmission electron microscopy (TEM) was employed orthogonally to provide complementary data and imagery of N-GOs. Results show that the equivalent mobility sizes, size distributions, and number concentrations of N-GOs were able to be successfully measured by ES-DMA. Colloidal stability and filtration efficiency of N-GOs were shown to be effectively characterized based on the change of size distributions and number concentrations. Through the use of an analytical model, the DMA data were able to be converted into lateral size distributions, showing the average lateral size of N-GOs was ≈32 nm with an estimated thickness of ≈0.8 nm. This prototype study demonstrates the proof of concept of using ES-DMA to quantitatively characterize N-GOs and provides traceability for applications involving the formulation of N-GOs. PMID:25783039

  14. Quantifying Uncertainties in Land Surface Microwave Emissivity Retrievals

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko

    2012-01-01

    Uncertainties in the retrievals of microwave land surface emissivities were quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including SSM/I, TMI and AMSR-E, were studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging 1%-4% (3-12 K) over desert and 1%-7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.

  15. Quantifying uncertainty in the phylogenetics of Australian numeral systems.

    PubMed

    Zhou, Kevin; Bowern, Claire

    2015-09-22

    Researchers have long been interested in the evolution of culture and the ways in which change in cultural systems can be reconstructed and tracked. Within the realm of language, these questions are increasingly investigated with Bayesian phylogenetic methods. However, such work in cultural phylogenetics could be improved by more explicit quantification of reconstruction and transition probabilities. We apply such methods to numerals in the languages of Australia. As a large phylogeny with almost universal 'low-limit' systems, Australian languages are ideal for investigating numeral change over time. We reconstruct the most likely extent of the system at the root and use that information to explore the ways numerals evolve. We show that these systems do not increment serially, but most commonly vary their upper limits between 3 and 5. While there is evidence for rapid system elaboration beyond the lower limits, languages lose numerals as well as gain them. We investigate the ways larger numerals build on smaller bases, and show that there is a general tendency to both gain and replace 4 by combining 2 + 2 (rather than inventing a new unanalysable word 'four'). We develop a series of methods for quantifying and visualizing the results. PMID:26378214

  16. Quantifying Annual Aboveground Net Primary Production in the Intermountain West

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As part of a larger project, methods were developed to quantify current year growth on grasses, forbs, and shrubs. Annual aboveground net primary production (ANPP) data are needed for this project to calibrate results from computer simulation models and remote-sensing data. Measuring annual ANPP of ...

  17. Exposure to air pollution Exposition is quantified as population weighted

    E-print Network

    Menut, Laurent

    [Fragmentary slide excerpt] Exposure to air pollution is quantified as the population-weighted concentration of the relevant pollutants. Under the heading Benefit Analysis, the health benefits brought about by improved air quality as a result of climate policy arise from a collateral reduction of air pollutant emissions, and hence a lower cost of AQ legislation. A Modelling Framework section follows.

  18. Quantifying capital goods for biological treatment of organic waste.

    PubMed

    Brogaard, Line K; Petersen, Per H; Nielsen, Peter D; Christensen, Thomas H

    2015-02-01

    Materials and energy used for construction of anaerobic digestion (AD) and windrow composting plants were quantified in detail. The two technologies were quantified in collaboration with consultants and producers of the parts used to construct the plants. The composting plants were quantified based on the different sizes for the three different types of waste (garden and park waste, food waste and sludge from wastewater treatment) in amounts of 10,000 or 50,000 tonnes per year. The AD plant was quantified for a capacity of 80,000 tonnes per year. Concrete and steel for the tanks were the main materials for the AD plant. For the composting plants, gravel and concrete slabs for the pavement were used in large amounts. To frame the quantification, environmental impact assessments (EIAs) showed that the steel used for tanks at the AD plant and the concrete slabs at the composting plants made the highest contribution to Global Warming. The total impact on Global Warming from the capital goods compared to the operation reported in the literature on the AD plant showed an insignificant contribution of 1-2%. For the composting plants, the capital goods accounted for 10-22% of the total impact on Global Warming from composting. PMID:25595291

  19. Quantifying Coral Reef Ecosystem Services

    EPA Science Inventory

    Coral reefs have been declining during the last four decades as a result of both local and global anthropogenic stresses. Numerous research efforts to elucidate the nature, causes, magnitude, and potential remedies for the decline have led to the widely held belief that the recov...

  20. Quantifying chaos for ecological stoichiometry

    NASA Astrophysics Data System (ADS)

    Duarte, Jorge; Januário, Cristina; Martins, Nuno; Sardanyés, Josep

    2010-09-01

    The theory of ecological stoichiometry considers ecological interactions among species with different chemical compositions. Both experimental and theoretical investigations have shown the importance of species composition in the outcome of the population dynamics. A recent study of a theoretical three-species food chain model considering stoichiometry [B. Deng and I. Loladze, Chaos 17, 033108 (2007)] shows that coexistence between two consumers predating on the same prey is possible via chaos. In this work we study the topological and dynamical measures of the chaotic attractors found in such a model under ecologically relevant parameters. By using the theory of symbolic dynamics, we first compute the topological entropy associated with unimodal Poincaré return maps obtained by Deng and Loladze from a dimension reduction. With this measure we numerically prove chaotic competitive coexistence, which is characterized by positive topological entropy and positive Lyapunov exponents, achieved when the first predator reduces its maximum growth rate, as happens at increasing ?1. However, for higher values of ?1 the dynamics become stable again due to an asymmetric bubble-like bifurcation scenario. We also show that a decrease in the efficiency of the predator sensitive to prey's quality (increasing parameter ?) stabilizes the dynamics. Finally, we estimate the fractal dimension of the chaotic attractors for the stoichiometric ecological model.

  1. Asteroid Geophysics and Quantifying the Impact Hazard

    NASA Technical Reports Server (NTRS)

    Sears, D.; Wooden, D. H.; Korycanksy, D. G.

    2015-01-01

    Probably the major challenge in understanding, quantifying, and mitigating the effects of an impact on Earth is understanding the nature of the impactor. Of the roughly 25 meteorite craters on the Earth that have associated meteorites, all but one were produced by iron meteorites and only one was produced by a stony meteorite. Equally important, even meteorites of a given chemical class produce a wide variety of behavior in the atmosphere. This is because they show considerable diversity in their mechanical properties, which have a profound influence on the behavior of meteorites during atmospheric passage. Some stony meteorites are weak and do not reach the surface or reach the surface as thousands of relatively harmless pieces. Some stony meteorites roll into a maximum drag configuration and are strong enough to remain intact so a large single object reaches the surface. Others have high concentrations of water that may facilitate disruption. However, while meteorite falls and meteorites provide invaluable information on the physical nature of the objects entering the atmosphere, there are many unknowns concerning size and scale that can only be determined from the pre-atmospheric properties of the asteroids. Their internal structure, their thermal properties, and their internal strength and composition will all play a role in determining the behavior of the object as it passes through the atmosphere, whether it produces an airblast and at what height, and the nature of the impact and amount and distribution of ejecta.

  2. The missing metric: quantifying contributions of reviewers

    PubMed Central

    Cantor, Maurício; Gero, Shane

    2015-01-01

    The number of contributing reviewers often outnumbers the authors of publications. This has led to apathy towards reviewing and the conclusion that the peer-review system is broken. Given the trade-offs between submitting and reviewing manuscripts, reviewers and authors naturally want visibility for their efforts. While study after study has called for revolutionizing publication practices, the current paradigm does not recognize reviewers' time and expertise. We propose the R-index as a simple way to quantify scientists' contributions as reviewers. We modelled its performance using simulations based on real data to show that early–mid career scientists, who complete high-quality reviews of longer manuscripts within their field, can perform as well as leading scientists reviewing only for high-impact journals. By giving citeable academic recognition for reviewing, R-index will encourage more participation with better reviews, regardless of the career stage. Moreover, the R-index will allow editors to exploit scores to manage and improve their review team, and for journals to promote high average scores as signals of a practical and efficient service to authors. Peer-review is a pervasive necessity across disciplines and the simple utility of this missing metric will credit a valuable aspect of academic productivity without having to revolutionize the current peer-review system. PMID:26064609

  3. Polymer microlenses for quantifying cell sheet mechanics

    PubMed Central

    Miquelard-Garnier, Guillaume; Zimberlin, Jessica A.; Sikora, Christian B.; Wadsworth, Patricia

    2010-01-01

    Mechanical interactions between individual cells and their substrate have been studied extensively over the past decade; however, understanding how these interactions change as cells interact with neighboring cells in the development of a cell sheet, or early stage tissue, is less developed. We use a recently developed experimental technique for quantifying the mechanics of confluent cell sheets. Living cells are cultured on a thin film of polystyrene [PS], which is attached to a patterned substrate of crosslinked poly(dimethyl siloxane) [PDMS] microwells. As cells attach to the substrate and begin to form a sheet, they apply sufficient contractile force to buckle the PS film over individual microwells to form a microlens array. The curvature for each microlens is measured by confocal microscopy and can be related to the strain and stress applied by the cell sheet using simple mechanical analysis for the buckling of thin films. We demonstrate that this technique can provide insight into the important materials properties and length scales that govern cell sheet responses, especially the role of stiffness of the substrate. We show that intercellular forces can lead to significantly different behaviors than the ones observed for individual cells, where focal adhesion is the relevant parameter. PMID:20445765

  4. Quantifying biomechanical motion using Procrustes motion analysis.

    PubMed

    Adams, Dean C; Cerney, Melinda M

    2007-01-01

    The ability to quantify and compare the movements of organisms is a central focus of many studies in biology, anthropology, biomechanics, and ergonomics. However, while the importance of functional motion analysis has long been acknowledged, quantitative methods for identifying differences in motion have not been widely developed. In this article, we present an approach to the functional analysis of motion and quantification of motion types. Our approach, Procrustes Motion Analysis (PMA), can be used to distinguish differences in cyclical, repeated, or goal-directed motions. PMA exploits the fact that any motion can be represented by an ordered sequence of postures exhibited throughout the course of a motion. Changes in posture from time step to time step form a trajectory through a multivariate data space, representing a specific motion. By evaluating the size, shape, and orientation of these motion trajectories, it is possible to examine variation in motion type within and among groups or even with respect to continuous variables. This represents a significant analytical advance over current approaches. Using simulated and digitized data representing cyclical, repeated and goal-directed motions, we show that PMA correctly identifies distinct motion tasks in these data sets. PMID:16448654
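
    A flavor of the approach can be had by superimposing two motions, each an ordered sequence of postures, with an ordinary Procrustes fit that removes size, translation and rotation before measuring the residual disparity. The 2-D circular trajectories below are toy data, and this is a sketch of the general idea, not the authors' full PMA pipeline.

```python
# Illustrative Procrustes superimposition of two motions represented as
# ordered sequences of postures (toy 2-D trajectories over 50 time steps).
import numpy as np
from scipy.spatial import procrustes

t = np.linspace(0, 2 * np.pi, 50)
motion_a = np.column_stack([np.cos(t), np.sin(t)])          # cyclic motion
motion_b = 1.8 * np.column_stack([np.cos(t + 0.2), np.sin(t + 0.2)]) + 3.0  # scaled, shifted copy

mtx_a, mtx_b, disparity = procrustes(motion_a, motion_b)
print(f"Procrustes disparity after removing size, rotation, translation: {disparity:.4f}")
```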

  5. Diarrheal Disease in Show Swine 

    E-print Network

    Lawhorn, D. Bruce

    2007-02-27

    water is the main source of Giardia spp. Bacterial causes: Swine dysentery, or "bloody dysentery," from infection with Brachyspira (Serpulina) hyodysenteriae is a major cause of diarrheal disease in show pigs. Pigs can be exposed to the organism.... Swine dysentery bacteria are not known to cause disease in humans. Salmonella typhimurium infection is another important cause of diarrheal disease in show pigs. They become infected by exposure to contaminated swine manure on premises, trailers...

  6. Progress toward quantifying landscape-scale movement patterns of the glassy-winged sharpshooter and its natural enemies using a novel mark-capture technique

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Here we present the results of the first year of our research targeted at quantifying the landscape-level movement patterns of GWSS and its natural enemies. We showed that protein markers can be rapidly acquired and retained on insects for several weeks after marking directly in the field. Specifica...

  7. Quantifying avoided emissions from renewable generation

    E-print Network

    Gomez, Gabriel R. (Gabriel Rodriguez)

    2009-01-01

    Quantifying the reduced emissions due to renewable power integration and providing increasingly accurate emissions analysis has become more important for policy makers in the age of renewable portfolio standards (RPS) and ...

  8. Arches showing UV flaring activity

    NASA Technical Reports Server (NTRS)

    Fontenla, J. M.

    1988-01-01

    The UVSP data obtained in the previous maximum activity cycle show the frequent appearance of flaring events in the UV. In many cases these flaring events are characterized by at least two footpoints which show compact, impulsive, non-simultaneous brightenings, and a fainter but clearly observed arch develops between the footpoints. These arches and footpoints are observed in lines corresponding to different temperatures, such as Lyman alpha, N V, and C IV, and when observed above the limb display large Doppler shifts at some stages. The size of the arches can be larger than 20 arcsec.

  9. Cross-Linguistic Relations between Quantifiers and Numerals in Language Acquisition: Evidence from Japanese

    ERIC Educational Resources Information Center

    Barner, David; Libenson, Amanda; Cheung, Pierina; Takasaki, Mayu

    2009-01-01

    A study of 104 Japanese-speaking 2- to 5-year-olds tested the relation between numeral and quantifier acquisition. A first study assessed Japanese children's comprehension of quantifiers, numerals, and classifiers. Relative to English-speaking counterparts, Japanese children were delayed in numeral comprehension at 2 years of age but showed no…

  10. Quantifying low levels of polymorphic impurity in clopidogrel bisulphate by vibrational spectroscopy and chemometrics.

    PubMed

    Német, Zoltán; Demeter, Adám; Pokol, György

    2009-01-15

    Vibrational spectroscopic methods were developed for quantitative analysis of Form II of clopidogrel bisulphate in Form I and Form II polymorphic mixtures. Results show that both IR and Raman spectroscopy combined with chemometrics are suitable to quantify low levels of Form II in Form I, down to 2 and 3%, respectively, with less than 1% limit of detection. Different preprocessing and multivariate methods were applied for spectral processing and were compared to find the best chemometric model. Common problems of quantitative vibrational spectroscopy in the solid phase are discussed; and procedures appropriate to eliminate them are proposed. PMID:19019611
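
    The chemometric step is essentially a multivariate regression from spectra to polymorph content, for which partial least squares is a standard choice. A hedged sketch follows, using simulated Gaussian bands in place of real IR or Raman spectra; the band positions, noise level and 3-component model are arbitrary assumptions, not the paper's settings.

```python
# Sketch of PLS regression mapping (synthetic) vibrational spectra of
# Form I / Form II mixtures to the Form II content.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
wavenumbers = np.linspace(600, 1800, 400)
pure_form1 = np.exp(-((wavenumbers - 1100) / 30.0) ** 2)   # simulated band of Form I
pure_form2 = np.exp(-((wavenumbers - 1140) / 30.0) ** 2)   # simulated band of Form II

fractions = rng.uniform(0.0, 0.15, size=60)                # Form II content, 0-15%
spectra = (np.outer(1 - fractions, pure_form1) + np.outer(fractions, pure_form2)
           + rng.normal(scale=0.01, size=(60, wavenumbers.size)))

model = PLSRegression(n_components=3)
model.fit(spectra[:45], fractions[:45])
pred = model.predict(spectra[45:]).ravel()
rmse = np.sqrt(np.mean((pred - fractions[45:]) ** 2))
print(f"RMSE of predicted Form II fraction on held-out spectra: {rmse:.4f}")
```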

  11. Quantifying dynamics of the financial correlations

    E-print Network

    S. Drozdz; J. Kwapien; F. Gruemmer; F. Ruf; J. Speth

    2001-02-22

    A novel application of the correlation matrix formalism to study the dynamics of financial evolution is presented. This formalism allows one to quantify memory effects as well as some potentially repeatable intradaily structures in financial time series. The present study is based on high-frequency Deutsche Aktienindex (DAX) data over the time period between November 1997 and December 1999 and demonstrates the power of the method. In this way two significant new aspects of the DAX evolution are identified: (i) the memory effects turn out to be sizably shorter than what the standard autocorrelation function analysis seems to indicate and (ii) there exist short-term repeatable structures in fluctuations that are governed by a distinct dynamics. The former of these results may provide an argument in favour of market efficiency, while the latter may indicate the origin of the difficulty in reaching a Gaussian limit, expected from the central limit theorem, in the distribution of returns on longer time horizons.
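
    The standard autocorrelation analysis that the correlation-matrix approach goes beyond can be written in a few lines. The return series below is synthetic, with a weak one-lag memory injected so the decay of the autocorrelation is visible; it is not DAX data.

```python
# Minimal sketch of the autocorrelation function of a return series.
import numpy as np

rng = np.random.default_rng(7)
n = 20000
noise = rng.normal(size=n)
returns = noise.copy()
returns[1:] += 0.2 * noise[:-1]          # inject weak one-lag memory

def autocorrelation(x, max_lag):
    x = x - x.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[:-lag], x[lag:]) / var for lag in range(1, max_lag + 1)])

acf = autocorrelation(returns, max_lag=5)
print("autocorrelation at lags 1-5:", np.round(acf, 3))
```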

  12. Quantifying Wrinkle Features of Thin Membrane Structures

    NASA Technical Reports Server (NTRS)

    Jacobson, Mindy B.; Iwasa, Takashi; Naton, M. C.

    2004-01-01

    For future micro-systems utilizing membrane-based structures, quantified predictions of wrinkling behavior in terms of amplitude, angle and wavelength are needed to optimize the efficiency and integrity of such structures, as well as their associated control systems. For numerical analyses performed in the past, limitations on the accuracy of membrane distortion simulations have often been related to the assumptions made. This work demonstrates that critical assumptions include: effects of gravity, supposed initial or boundary conditions, and the type of element used to model the membrane. In this work, a 0.2 m x 0.2 m membrane is treated as a structural material with non-negligible bending stiffness. Finite element modeling is used to simulate wrinkling behavior due to a constant applied in-plane shear load. Membrane thickness, gravity effects, and initial imperfections with respect to flatness were varied in numerous nonlinear analysis cases. Significant findings include notable variations in wrinkle modes for thickness in the range of 50 microns to 1000 microns, which also depend on the presence of an applied gravity field. However, it is revealed that relationships between overall strain energy density and thickness for cases with differing initial conditions are independent of assumed initial conditions. In addition, analysis results indicate that the relationship between wrinkle amplitude scale (W/t) and structural scale (L/t) is independent of the nonlinear relationship between thickness and stiffness.

  13. Quantifying truncation errors in effective field theory

    NASA Astrophysics Data System (ADS)

    Furnstahl, R. J.; Klco, N.; Phillips, D. R.; Wesolowski, S.

    2015-08-01

    Bayesian procedures designed to quantify truncation errors in perturbative calculations of quantum chromodynamics observables are adapted to expansions in effective field theory (EFT). In the Bayesian approach, such truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. Computation of these intervals requires specification of prior probability distributions ("priors") for the expansion coefficients. By encoding expectations about the naturalness of these coefficients, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. It also permits exploration of the ways in which such error bars are, and are not, sensitive to assumptions about EFT-coefficient naturalness. We first demonstrate the calculation of Bayesian probability distributions for the EFT truncation error in some representative examples and then focus on the application of chiral EFT to neutron-proton scattering. Epelbaum, Krebs, and Meißner recently articulated explicit rules for estimating truncation errors in such EFT calculations of few-nucleon-system properties. We find that their basic procedure emerges generically from one class of naturalness priors considered and that all such priors result in consistent quantitative predictions for 68% DOB intervals. We then explore several methods by which the convergence properties of the EFT for a set of observables may be used to check the statistical consistency of the EFT expansion parameter.
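
    One simple member of the class of naturalness priors discussed can be sketched numerically: the known low-order coefficients set a scale for the first omitted coefficient, and sampling that coefficient yields a degree-of-belief interval for the truncation error. The coefficients, expansion parameter, and Gaussian prior below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a Bayesian truncation-error estimate: the rms of the known
# expansion coefficients sets a naturalness scale for the first omitted
# coefficient, and the 68% degree-of-belief bound follows by Monte Carlo.
import numpy as np

known_coefficients = np.array([1.0, -0.6, 1.3])   # dimensionless c_0..c_2 (illustrative)
Q = 0.33                                          # EFT expansion parameter (illustrative)
order = len(known_coefficients)                   # index of the first omitted order

cbar = np.sqrt(np.mean(known_coefficients ** 2))  # naturalness scale from known orders
rng = np.random.default_rng(8)
c_next = rng.normal(scale=cbar, size=100_000)     # Gaussian prior on the omitted coefficient
truncation_error = np.abs(c_next) * Q ** order

dob68 = np.percentile(truncation_error, 68)
print(f"68% DOB bound on the relative truncation error: {dob68:.4f}")
```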

  14. Quantifying selection in immune receptor repertoires

    PubMed Central

    Elhanati, Yuval; Murugan, Anand; Callan, Curtis G.; Mora, Thierry; Walczak, Aleksandra M.

    2014-01-01

    The efficient recognition of pathogens by the adaptive immune system relies on the diversity of receptors displayed at the surface of immune cells. T-cell receptor diversity results from an initial random DNA editing process, called VDJ recombination, followed by functional selection of cells according to the interaction of their surface receptors with self and foreign antigenic peptides. Using high-throughput sequence data from the β-chain of human T-cell receptors, we infer factors that quantify the overall effect of selection on the elements of receptor sequence composition: the V and J gene choice and the length and amino acid composition of the variable region. We find a significant correlation between biases induced by VDJ recombination and our inferred selection factors together with a reduction of diversity during selection. Both effects suggest that natural selection acting on the recombination process has anticipated the selection pressures experienced during somatic evolution. The inferred selection factors differ little between donors or between naive and memory repertoires. The number of sequences shared between donors is well-predicted by our model, indicating a stochastic origin of such public sequences. Our approach is based on a probabilistic maximum likelihood method, which is necessary to disentangle the effects of selection from biases inherent in the recombination process. PMID:24941953

  15. What Do Blood Tests Show?

    MedlinePLUS

    ... that are a sign of prediabetes or diabetes. Plasma Glucose Results (mg/dL)* Diagnosis 70 to 99 ... the better 60 mg/dL and above Considered protective against heart disease ...

  16. COMPLEXITY & APPROXIMABILITY OF QUANTIFIED & STOCHASTIC CONSTRAINT SATISFACTION PROBLEMS

    SciTech Connect

    H. B. HUNT; M. V. MARATHE; R. E. STEARNS

    2001-06-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be an arbitrary finite set of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SATc(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S) etc., for many different such D, C, S, T. These versions/variants include decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend the earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems, for a number of the problems above, including the problems MAX-Q-SAT(S), and MAX-S-SAT(S). The discovery of such dichotomy theorems, for unquantified formulas, has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97]. Keywords: NP-hardness; Approximation Algorithms; PSPACE-hardness; Quantified and Stochastic Constraint Satisfaction Problems.

  17. Quantifying metastatic inefficiency: rare genotypes versus rare dynamics

    NASA Astrophysics Data System (ADS)

    Cisneros, Luis H.; Newman, Timothy J.

    2014-08-01

    We introduce and solve a ‘null model’ of stochastic metastatic colonization. The model is described by a single parameter θ: the ratio of the rate of cell division to the rate of cell death for a disseminated tumour cell in a given secondary tissue environment. We are primarily interested in the case in which colonizing cells are poorly adapted for proliferation in the local tissue environment, so that cell death is more likely than cell division, i.e. θ < 1. We quantify the rare event statistics for the successful establishment of a metastatic colony of size N. For N ≫ 1, we find that the probability of establishment is exponentially rare, as expected, and yet the mean time for such rare events is of the form ~log(N)/(1-θ) while the standard deviation of colonization times is ~1/(1-θ). Thus, counter to naive expectation, for θ < 1, the average time for establishment of successful metastatic colonies decreases with decreasing cell fitness, and colonies seeded from lower fitness cells show less stochastic variation in their growth. These results indicate that metastatic growth from poorly adapted cells is rare, exponentially explosive and essentially deterministic. These statements are brought into sharper focus by the finding that the temporal statistics of the early stages of metastatic colonization from low-fitness cells (θ < 1) are statistically indistinguishable from those initiated from high-fitness cells (θ > 1), i.e. the statistics show a duality mapping (1-θ) → (θ-1). We conclude our analysis with a study of heterogeneity in the fitness of colonising cells, and describe a phase diagram delineating parameter regions in which metastatic colonization is dominated either by low or high fitness cells, showing that both are plausible given our current knowledge of physiological conditions in human cancer.
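
    The null model lends itself to direct stochastic simulation; the sketch below (our illustration, with arbitrary parameter values, not the authors' code) estimates the establishment probability and the conditional timing statistics for a simple birth-death process with division rate θ and death rate 1.

      import numpy as np

      def colonize(theta, n_target, rng):
          """One realization started from a single cell: division rate theta,
          death rate 1. Returns (established, elapsed_time)."""
          n, t = 1, 0.0
          while 0 < n < n_target:
              t += rng.exponential(1.0 / (n * (theta + 1.0)))       # waiting time to next event
              n += 1 if rng.random() < theta / (theta + 1.0) else -1
          return n >= n_target, t

      rng = np.random.default_rng(42)
      theta, n_target, trials = 0.9, 30, 100_000                    # theta < 1: poorly adapted cells
      times = [t for ok, t in (colonize(theta, n_target, rng) for _ in range(trials)) if ok]
      print("estimated P(establish):", len(times) / trials)
      print("conditional mean and std of colonization time:", np.mean(times), np.std(times))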

  18. Validation of an algorithm to quantify changes in actin cytoskeletal organization.

    PubMed

    Vindin, Howard; Bischof, Leanne; Gunning, Peter; Stehn, Justine

    2014-03-01

    The actin cytoskeleton plays an important role in most, if not all, processes necessary for cell survival. Given the fundamental role that the actin cytoskeleton plays in the progression of cancer, it is an ideal target for chemotherapy. Although it is possible to image the actin cytoskeleton in a high-throughput manner, there is currently no validated method to quantify changes in the cytoskeleton in the same capacity, which makes research into its organization and the development of anticytoskeletal drugs difficult. We have validated the use of a linear feature detection algorithm, allowing us to measure changes in actin filament organization. Its ability to quantify changes associated with cytoskeletal disruption will make it a valuable tool in the development of compounds that target the cytoskeleton in cancer. Our results show that this algorithm can quantify cytoskeletal changes in a cell-based system after addition of both well-established and novel anticytoskeletal agents using either fluorescence microscopy or a high-content imaging approach. This novel method gives us the potential to screen compounds in a high-throughput manner for cancer and other diseases in which the cytoskeleton plays a key role. PMID:24019255

  19. Quantifying and Communicating Uncertainty in Preclinical Human Dose-Prediction

    PubMed Central

    Sundqvist, M; Lundahl, A; Någård, MB; Bredberg, U; Gennemark, P

    2015-01-01

    Human dose-prediction is fundamental for ranking lead-optimization compounds in drug discovery and to inform design of early clinical trials. This tutorial describes how uncertainty in such predictions can be quantified and efficiently communicated to facilitate decision-making. Using three drug-discovery case studies, we show how several uncertain pieces of input information can be integrated into one single uncomplicated plot with key predictions, including their uncertainties, for many compounds or for many scenarios, or both. PMID:26225248

  20. Quantifying high dimensional entanglement with cameras and lenses

    E-print Network

    Paul Erker; Mario Krenn; Marcus Huber

    2015-12-16

    We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two mutually unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized, thus enabling entanglement quantification with minimal assumptions. To demonstrate the experimental feasibility of the approach we develop an experimental proposal with state of the art cameras and optical lenses and show that these methods enable a reliable characterization of sources of entangled photons with minimal effort.

  1. Statistical physics approach to quantifying differences in myelinated nerve fibers

    NASA Astrophysics Data System (ADS)

    Comin, César H.; Santos, João R.; Corradini, Dario; Morrison, Will; Curme, Chester; Rosene, Douglas L.; Gabrielli, Andrea; da F. Costa, Luciano; Stanley, H. Eugene

    2014-03-01

    We present a new method to quantify differences in myelinated nerve fibers. These differences range from morphologic characteristics of individual fibers to differences in macroscopic properties of collections of fibers. Our method uses statistical physics tools to improve on traditional measures, such as fiber size and packing density. As a case study, we analyze cross-sectional electron micrographs from the fornix of young and old rhesus monkeys using a semi-automatic detection algorithm to identify and characterize myelinated axons. We then apply a feature selection approach to identify the features that best distinguish between the young and old age groups, achieving a maximum accuracy of 94% when assigning samples to their age groups. This analysis shows that the best discrimination is obtained using the combination of two features: the fraction of occupied axon area and the effective local density. The latter is a modified calculation of axon density, which reflects how closely axons are packed. Our feature analysis approach can be applied to characterize differences that result from biological processes such as aging, damage from trauma or disease or developmental differences, as well as differences between anatomical regions such as the fornix and the cingulum bundle or corpus callosum.

  2. Statistical physics approach to quantifying differences in myelinated nerve fibers

    PubMed Central

    Comin, César H.; Santos, João R.; Corradini, Dario; Morrison, Will; Curme, Chester; Rosene, Douglas L.; Gabrielli, Andrea; da F. Costa, Luciano; Stanley, H. Eugene

    2014-01-01

    We present a new method to quantify differences in myelinated nerve fibers. These differences range from morphologic characteristics of individual fibers to differences in macroscopic properties of collections of fibers. Our method uses statistical physics tools to improve on traditional measures, such as fiber size and packing density. As a case study, we analyze cross-sectional electron micrographs from the fornix of young and old rhesus monkeys using a semi-automatic detection algorithm to identify and characterize myelinated axons. We then apply a feature selection approach to identify the features that best distinguish between the young and old age groups, achieving a maximum accuracy of 94% when assigning samples to their age groups. This analysis shows that the best discrimination is obtained using the combination of two features: the fraction of occupied axon area and the effective local density. The latter is a modified calculation of axon density, which reflects how closely axons are packed. Our feature analysis approach can be applied to characterize differences that result from biological processes such as aging, damage from trauma or disease or developmental differences, as well as differences between anatomical regions such as the fornix and the cingulum bundle or corpus callosum. PMID:24676146

  3. Graphene Oxides Show Angiogenic Properties.

    PubMed

    Mukherjee, Sudip; Sriram, Pavithra; Barui, Ayan Kumar; Nethi, Susheel Kumar; Veeriah, Vimal; Chatterjee, Suvro; Suresh, Kattimuttathu Ittara; Patra, Chitta Ranjan

    2015-08-01

    Angiogenesis, a process resulting in the formation of new capillaries from the pre-existing vasculature, plays a vital role in the development of therapeutic approaches for cancer, atherosclerosis, wound healing, and cardiovascular diseases. In this report, the synthesis, characterization, and angiogenic properties of graphene oxide (GO) and reduced graphene oxide (rGO) have been demonstrated, observed through several in vitro and in vivo angiogenesis assays. The results here demonstrate that the intracellular formation of reactive oxygen species and reactive nitrogen species as well as activation of phospho-eNOS and phospho-Akt might be the plausible mechanisms for GO- and rGO-induced angiogenesis. The results altogether suggest the possibility of developing an alternative angiogenic therapeutic approach for the treatment of cardiovascular-related diseases where angiogenesis plays a significant role. PMID:26033847

  4. Quantifying Sentiment and Influence in Blogspaces

    SciTech Connect

    Hui, Peter SY; Gregory, Michelle L.

    2010-07-25

    The weblog, or blog, has become a popular form of social media, through which authors can write posts, which can in turn generate feedback in the form of user comments. When considered in totality, a collection of blogs can thus be viewed as a sort of informal collection of mass sentiment and opinion. An obvious topic of interest might be to mine this collection to obtain some gauge of public sentiment over the wide variety of topics contained therein. However, the sheer size of the so-called blogosphere, combined with the fact that the subjects of posts can vary over a practically limitless number of topics, poses some serious challenges when any meaningful analysis is attempted. Namely, the fact that virtually anyone with access to the Internet can author their own blog raises the serious issue of credibility: should some blogs be considered to be more influential than others, and consequently, when gauging sentiment with respect to a topic, should some blogs be weighted more heavily than others? In addition, as new posts and comments can be made on an almost constant basis, any blog analysis algorithm must be able to handle such updates efficiently. In this paper, we give a formalization of the blog model. We give formal methods of quantifying sentiment and influence with respect to a hierarchy of topics, with the specific aim of facilitating the computation of a per-topic, influence-weighted sentiment measure. Finally, as efficiency is a specific end goal, we give upper bounds on the time required to update these values with new posts, showing that our analysis and algorithms are scalable.
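
    A minimal sketch of an influence-weighted, per-topic sentiment aggregate of the kind described (field names and weighting are illustrative assumptions, not the paper's exact formalization):

      from collections import defaultdict

      def topic_sentiment(posts):
          """posts: iterable of dicts with 'topic', 'sentiment' in [-1, 1] and
          'influence' >= 0 (e.g. derived from comment counts or inbound links).
          Returns the influence-weighted mean sentiment per topic."""
          weighted, weights = defaultdict(float), defaultdict(float)
          for p in posts:
              weighted[p["topic"]] += p["influence"] * p["sentiment"]
              weights[p["topic"]] += p["influence"]
          return {t: weighted[t] / weights[t] for t in weighted if weights[t] > 0}

      posts = [
          {"topic": "energy", "sentiment": 0.6, "influence": 10.0},
          {"topic": "energy", "sentiment": -0.2, "influence": 1.0},
          {"topic": "travel", "sentiment": 0.1, "influence": 4.0},
      ]
      print(topic_sentiment(posts))

    Because only two running sums per topic change when a new post or comment arrives, such an aggregate can be updated in constant time per item, which is in the spirit of the scalability bounds mentioned above.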

  5. ShowMe3D

    Energy Science and Technology Software Center (ESTSC)

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  6. Phoenix Scoop Inverted Showing Rasp

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image taken by the Surface Stereo Imager on Sol 49, or the 49th Martian day of the mission (July 14, 2008), shows the silver colored rasp protruding from NASA's Phoenix Mars Lander's Robotic Arm scoop. The scoop is inverted and the rasp is pointing up.

    Shown with its forks pointing toward the ground is the thermal and electrical conductivity probe, at the lower right. The Robotic Arm Camera is pointed toward the ground.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is led by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  7. Interpolating Quantifier-Free Presburger Arithmetic

    NASA Astrophysics Data System (ADS)

    Kroening, Daniel; Leroux, Jérôme; Rümmer, Philipp

    Craig interpolation has become a key ingredient in many symbolic model checkers, serving as an approximative replacement for expensive quantifier elimination. In this paper, we focus on an interpolating decision procedure for the full quantifier-free fragment of Presburger Arithmetic, i.e., linear arithmetic over the integers, a theory which is a good fit for the analysis of software systems. In contrast to earlier procedures based on quantifier elimination and the Omega test, our approach uses integer linear programming techniques: relaxation of interpolation problems to the rationals, and a complete branch-and-bound rule tailored to efficient interpolation. Equations are handled via a dedicated polynomial-time sub-procedure. We have fully implemented our procedure on top of the SMT-solver OpenSMT and present an extensive experimental evaluation.
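
    As a toy illustration of what a Craig interpolant looks like in this setting (our example, not taken from the paper), consider the mutually inconsistent quantifier-free formulas over the integers

      A \;\equiv\; (x \ge 2y) \wedge (y \ge 1), \qquad B \;\equiv\; (x \le 0).

    One interpolant is I \equiv (x \ge 2): it is implied by A (since x \ge 2y \ge 2), it is inconsistent with B, and it mentions only the variable x shared by A and B.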

  8. Casimir experiments showing saturation effects

    SciTech Connect

    Sernelius, Bo E.

    2009-10-15

    We address several different Casimir experiments where theory and experiment disagree. First out is the classical Casimir force measurement between two metal half spaces; here both in the form of the torsion pendulum experiment by Lamoreaux and in the form of the Casimir pressure measurement between a gold sphere and a gold plate as performed by Decca et al.; theory predicts a large negative thermal correction, absent in the high precision experiments. The third experiment is the measurement of the Casimir force between a metal plate and a laser irradiated semiconductor membrane as performed by Chen et al.; the change in force with laser intensity is larger than predicted by theory. The fourth experiment is the measurement of the Casimir force between an atom and a wall in the form of the measurement by Obrecht et al. of the change in oscillation frequency of a {sup 87}Rb Bose-Einstein condensate trapped to a fused silica wall; the change is smaller than predicted by theory. We show that saturation effects can explain the discrepancies between theory and experiment observed in all these cases.

  9. Quantifying plasticity-independent creep compliance and relaxation of viscoelastoplastic materials under contact loading

    E-print Network

    Vandamme, Matthieu

    Here we quantify the time-dependent mechanical properties of a linear viscoelastoplastic material under contact loading. For contact load relaxation, we showed that the relaxation modulus can be measured independently of ...

  10. Quantified Histopathology of the Keratoconic Cornea

    PubMed Central

    Mathew, Jessica H.; Goosey, John D.; Bergmanson, Jan P. G.

    2011-01-01

    Purpose The present study systematically investigated and quantified histopathological changes in a series of keratoconic (Kc) corneas utilizing a physiologically formulated fixative to not further distort the already distorted diseased corneas. Methods Twelve surgically removed Kc corneal buttons were immediately preserved and processed for light and transmission electron microscopy using an established corneal protocol. Measurements were taken from the central cone and peripheral regions of the host button. The sample size examined ranged in length from 390–2608 µm centrally and 439–2242 µm peripherally. Results The average corneal thickness was 437 µm centrally and 559 µm peripherally. Epithelial thickness varied centrally from 14–92 µm and peripherally from 30–91 µm. A marked thickening of the epithelial basement membrane was noted in 58% of corneas. Centrally, anterior limiting lamina (ALL) was thinned or lost over 60% of the area examined, while peripheral cornea was also affected, but to a lesser extent. Histopathologically, posterior cornea remained undisturbed by the disease. Anteriorly in the stroma, an increased number of cells and tissue debris were encountered and some of these cells were clearly not keratocytes. Conclusions It is concluded that Kc pathology, at least initially, has a distinct anterior focus involving the epithelium, ALL and anterior stroma. The epithelium had lost its cellular uniformity and was compromised by the loss or damage to the ALL. The activity of the hitherto unreported recruited stromal cells may be to break down and remove ALL and anterior stromal lamellae leading to the overall thinning that accompanies this disease. PMID:21623252

  11. Toward quantifying the deep Atlantic carbon storage increase during the last glaciation

    NASA Astrophysics Data System (ADS)

    Yu, J.; Menviel, L.; Jin, Z.

    2014-12-01

    Ice core records show that atmospheric CO2 concentrations during peak glacial time were ~30% lower than the levels during interglacial periods. The terrestrial biosphere carbon stock was likely reduced during glacials. Increased carbon storage in the deep ocean is thought to play an important role in lowering glacial atmospheric CO2. However, it has been challenging to quantify carbon storage changes in the deep ocean using existing proxy data. Here, we present deepwater carbonate ion reconstructions for a few locations in the deep Atlantic. These data allow us to estimate the minimum carbon storage increase in the deep Atlantic Ocean during the last glaciation. Our results show that, despite its relative small volume, the deep Atlantic Ocean may contribute significantly to atmospheric CO2 variations at major climate transitions. Furthermore, our results suggest a strong coupling of ocean circulation and carbon cycle in the deep Atlantic during the last glaciation.

  12. DOE: Quantifying the Value of Hydropower in the Electric Grid

    SciTech Connect

    2012-12-31

    The report summarizes research to Quantify the Value of Hydropower in the Electric Grid. This 3-year DOE study focused on defining the value of hydropower assets in a changing electric grid. Methods are described for valuation and planning of pumped storage and conventional hydropower. The project team conducted plant case studies, electric system modeling, market analysis, cost data gathering, and evaluations of operating strategies and constraints. Five other reports detailing these research results are available at the project website, www.epri.com/hydrogrid. With increasing deployment of wind and solar renewable generation, many owners, operators, and developers of hydropower have recognized the opportunity to provide more flexibility and ancillary services to the electric grid. To quantify the value of services, this study focused on the Western Electric Coordinating Council region. A security-constrained, unit commitment and economic dispatch model was used to quantify the role of hydropower for several future energy scenarios up to 2020. This hourly production simulation considered transmission requirements to deliver energy, including future expansion plans. Both energy and ancillary service values were considered. Addressing specifically the quantification of pumped storage value, no single value stream dominated predicted plant contributions in various energy futures. Modeling confirmed that service value depends greatly on location and on competition with other available grid support resources. In this summary, ten different value streams related to hydropower are described. These fell into three categories: operational improvements, new technologies, and electricity market opportunities. Of these ten, the study was able to quantify a monetary value for six by applying both present day and future scenarios for operating the electric grid. This study confirmed that hydropower resources across the United States contribute significantly to operation of the grid in terms of energy, capacity, and ancillary services. Many potential improvements to existing hydropower plants were found to be cost-effective. Pumped storage is the most likely form of large new hydro asset expansion in the U.S.; however, justifying investments in new pumped storage plants remains very challenging with current electricity market economics. Even over a wide range of possible energy futures, up to 2020, no energy future was found to bring quantifiable revenues sufficient to cover estimated costs of plant construction. Value streams not quantified in this study may provide a different cost-benefit balance and an economic tipping point for hydro. Future studies are essential in the quest to quantify the full potential value. Additional research should consider the value of services provided by advanced storage hydropower and pumped storage at smaller time steps for integration of variable renewable resources, and should include all possible value streams, such as capacity value and portfolio benefits, i.e., reducing cycling on traditional generation.

  13. Quantifying the CV: Adapting an Impact Assessment Model to Astronomy

    NASA Astrophysics Data System (ADS)

    Bohémier, K. A.

    2015-04-01

    We present the process and results of applying the Becker Model to the curriculum vitae of a Yale University astronomy professor. As background, in July 2013, the Becker Medical Library at Washington Univ. in St. Louis held a workshop for librarians on the Becker Model, a framework developed by research assessment librarians for quantifying medical researchers' individual and group outputs. Following the workshop, the model was analyzed for content to adapt it to the physical sciences.

  14. Quantifying Nitrogen Loss From Flooded Hawaiian Taro Fields

    NASA Astrophysics Data System (ADS)

    Deenik, J. L.; Penton, C. R.; Bruland, G. L.; Popp, B. N.; Engstrom, P.; Mueller, J. A.; Tiedje, J.

    2010-12-01

    In 2004 a field fertilization experiment showed that approximately 80% of the fertilizer nitrogen (N) added to flooded Hawaiian taro (Colocasia esculenta) fields could not be accounted for using classic N balance calculations. To quantify N loss through denitrification and anaerobic ammonium oxidation (anammox) pathways in these taro systems we utilized a slurry-based isotope pairing technique (IPT). Measured nitrification rates and porewater N profiles were also used to model ammonium and nitrate fluxes through the top 10 cm of soil. Quantitative PCR of nitrogen cycling functional genes was used to correlate porewater N dynamics with potential microbial activity. Rates of denitrification calculated using porewater profiles were compared to those obtained using the slurry method. Potential denitrification rates of surficial sediments obtained with the slurry method were found to drastically overestimate the calculated in-situ rates. The largest discrepancies were present in fields greater than one month after initial fertilization, reflecting a microbial community poised to denitrify the initial N pulse. Potential surficial nitrification rates varied between 1.3% of the slurry-measured denitrification potential in a heavily-fertilized site to 100% in an unfertilized site. Compared to the use of urea, fish bone meal fertilizer use resulted in decreased N loss through denitrification in the surface sediment, according to both porewater modeling and IPT measurements. In addition, sub-surface porewater profiles point to root-mediated coupled nitrification/denitrification as a potential N loss pathway that is not captured in surface-based incubations. Profile-based surface plus subsurface coupled nitrification/denitrification estimates were between 1.1 and 12.7 times denitrification estimates from the surface only. These results suggest that the use of a ‘classic’ isotope pairing technique that employs 15NO3- in fertilized agricultural systems can lead to a drastic overestimation of in-situ denitrification rates and that root-associated subsurface coupled nitrification/denitrification may be a major N loss pathway in these flooded agricultural systems.

  15. Quantifying the sources of error in measurements of urine activity

    SciTech Connect

    Mozley, P.D.; Kim, H.J.; McElgin, W.

    1994-05-01

    Accurate scintigraphic measurements of radioactivity in the bladder and voided urine specimens can be limited by scatter, attenuation, and variations in the volume of urine that a given dose is distributed in. The purpose of this study was to quantify some of the errors that these problems can introduce. Transmission scans and 41 conjugate images of the bladder were sequentially acquired on a dual headed camera over 24 hours in 6 subjects after the intravenous administration of 100-150 MBq (2.7-3.6 mCi) of a novel I-123 labeled benzamide. Renal excretion fractions were calculated by measuring the counts in conjugate images of 41 sequentially voided urine samples. A correction for scatter was estimated by comparing the count rates in images that were acquired with the photopeak centered on 159 keV and images that were made simultaneously with the photopeak centered on 126 keV. The decay- and attenuation-corrected geometric mean activities were compared to images of the net dose injected. Checks of the results were performed by measuring the total volume of each voided urine specimen and determining the activity in a 20 ml aliquot of it with a dose calibrator. Modeling verified the experimental results, which showed that 34% of the counts were attenuated when the bladder had been expanded to a volume of 300 ml. Corrections for attenuation that were based solely on the transmission scans were limited by the volume of non-radioactive urine in the bladder before the activity was administered. The attenuation of activity in images of the voided urine samples was dependent on the geometry of the specimen container. The images of urine in standard, 300 ml laboratory specimen cups had 39±5% fewer counts than images of the same samples laid out in 3 liter bedpans. Scatter through the carbon fiber table substantially increased the number of counts in the images by an average of 14%.
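
    For context, a standard conjugate-view (geometric mean) relation of the kind such measurements rest on is, neglecting source self-attenuation (symbols are the conventional ones, not necessarily those of the paper):

      A \;=\; \frac{1}{C}\,\sqrt{\frac{I_A\, I_P}{e^{-\mu t}}},

    where I_A and I_P are the anterior and posterior count rates, e^{-\mu t} is the transmission factor through the total thickness t along the view (obtainable from a transmission scan), and C is the camera calibration factor (count rate per unit activity). Errors in the transmission factor, in C, or from scatter therefore propagate directly into the activity estimate, which is what the study above quantifies.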

  16. Quantifying thermal modifications on laser welded skin tissue

    NASA Astrophysics Data System (ADS)

    Tabakoglu, Hasim Ö.; Gülsoy, Murat

    2011-02-01

    Laser tissue welding is a potential medical treatment method, especially for closing cuts made during any kind of surgery. The photothermal effects of the laser on tissue should be quantified in order to determine optimal dosimetry parameters. Polarized light and phase contrast techniques reveal information about the extent of thermal change that occurs in tissue during laser welding. Changes in collagen structure can be detected in skin tissue samples stained with hematoxylin and eosin. In this study, three different near infrared laser wavelengths (809 nm, 980 nm and 1070 nm) were compared for skin welding efficiency. 1 cm long cuts were treated with spot-by-spot laser application on Wistar rats' dorsal skin, in vivo. In all laser applications, 0.5 W of optical power was delivered to the tissue for 5 s continuously, resulting in 79.61 J/cm2 energy density (15.92 W/cm2 power density) for each spot. The 1st, 4th, 7th, 14th, and 21st days of the recovery period were designated as control days, and skin samples needed for histology were removed on these particular days. The stained samples were examined under a light microscope. Images were taken with a CCD camera and examined with imaging software. The 809 nm laser was found to be capable of creating strong full-thickness closure, but thermal damage was evident. The thermal damage from 980 nm laser welding was found to be more tolerable. The results showed that 1070 nm laser welding produced noticeably stronger bonds with minimal scar formation.

  17. Comparison of Approaches to Quantify Arterial Damping

    E-print Network

    Chesler, Naomi C.

    Comparison of Approaches to Quantify Arterial Damping Capacity From Pressurization Tests on Mouse Conduit Arteries. Lian Tian (ltian22@wisc.edu); Zhijie Wang (zwang48@wisc.edu); Naomi C. Chesler (chesler@engr.wisc.edu). Large conduit arteries are not purely elastic, but viscoelastic, which affects ...

  18. Quantifying Energy Savings by Improving Boiler Operation 

    E-print Network

    Carpenter, K.; Kissock, J. K.

    2005-01-01

    Kevin Carpenter (Graduate Research Assistant) and Kelly Kissock (Associate Professor), Department of Mechanical and Aerospace Engineering, University of Dayton, Dayton, OH. On/off operation and excess combustion air reduce boiler energy efficiency. This paper presents methods to quantify energy savings from switching to modulation control mode and reducing excess air in natural gas fired boilers. The methods include calculation...

  19. Quantifying the Reuse of Learning Objects

    ERIC Educational Resources Information Center

    Elliott, Kristine; Sweeney, Kevin

    2008-01-01

    This paper reports the findings of one case study from a larger project, which aims to quantify the claimed efficiencies of reusing learning objects to develop e-learning resources. The case study describes how an online inquiry project "Diabetes: A waste of energy" was developed by searching for, evaluating, modifying and then integrating as many…

  20. Quantifying aphid behavioral responses to environmental change

    E-print Network

    Sudderth, Erik

    Quantifying aphid behavioral responses to environmental change. Erika A. Sudderth & Erik B. ... Keywords: chains, entropy, Hemiptera, Aphididae. Aphids are the most common vector of plant viruses ... on aphid performance have been documented, but effects on aphid behavior are not known. We assessed ...

  1. Quantifying the Thermal Fatigue of CPV Modules

    SciTech Connect

    Bosco, N.; Kurtz, S.

    2011-02-01

    A method is presented to quantify thermal fatigue in the CPV die-attach from meteorological data. A comparative study between cities demonstrates a significant difference in the accumulated damage. These differences are most sensitive to the number of larger (ΔT) thermal cycles experienced for a location. High frequency data (<1/min) may be required to most accurately employ this method.
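
    A heavily simplified sketch of a damage sum of this general type (illustrative only; the constants are arbitrary and the paper's actual procedure, which would normally involve rainflow counting of high-frequency cell-temperature data, is not reproduced here):

      import numpy as np

      def thermal_fatigue_damage(delta_T_cycles, A=1.0, n=2.0):
          """Relative damage from a list of temperature cycle ranges (K), using a
          Coffin-Manson-style law: cycles-to-failure ~ A * dT**(-n), so each cycle
          contributes ~ dT**n / A. A and n are placeholder constants."""
          dT = np.asarray(delta_T_cycles, dtype=float)
          return float(np.sum(dT ** n) / A)

      # Toy comparison of two sites from daily max-min temperature swings (K):
      site_1 = [35, 40, 38, 42]      # larger swings
      site_2 = [20, 22, 25, 18]      # milder climate
      print(thermal_fatigue_damage(site_1), thermal_fatigue_damage(site_2))

    The steep dependence on dT is what makes the accumulated damage so sensitive to the number of large thermal cycles, consistent with the observation above.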

  2. Classifying and quantifying basins of attraction.

    PubMed

    Sprott, J C; Xiong, Anda

    2015-08-01

    A scheme is proposed to classify the basins for attractors of dynamical systems in arbitrary dimensions. There are four basic classes depending on their size and extent, and each class can be further quantified to facilitate comparisons. The calculation uses a Monte Carlo method and is applied to numerous common dissipative chaotic maps and flows in various dimensions. PMID:26328552
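
    A minimal sketch of the Monte Carlo idea (our illustration using the familiar Hénon map, not the authors' code): sample initial conditions in a reference box, iterate, and estimate the fraction that remains bounded, i.e. lies in the basin of the attractor.

      import numpy as np

      def henon(x, y, a=1.4, b=0.3):
          return 1.0 - a * x * x + y, b * x

      def basin_fraction(n_samples=5_000, box=2.0, n_iter=200, escape=1e6, seed=0):
          """Monte Carlo estimate of the fraction of the square [-box, box]^2 of
          initial conditions that stays bounded under the Henon map."""
          rng = np.random.default_rng(seed)
          inside = 0
          for _ in range(n_samples):
              x, y = rng.uniform(-box, box, 2)
              for _ in range(n_iter):
                  x, y = henon(x, y)
                  if abs(x) > escape or abs(y) > escape:
                      break                     # escaped: not in the basin
              else:
                  inside += 1                   # still bounded after n_iter steps
          return inside / n_samples

      print(basin_fraction())

    Repeating the estimate for boxes of increasing size is one way to probe the "size and extent" of a basin referred to above.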

  3. COMPLEXITY&APPROXIMABILITY OF QUANTIFIED&STOCHASTIC CONSTRAINT SATISFACTION PROBLEMS

    SciTech Connect

    Hunt, H. B.; Marathe, M. V.; Stearns, R. E.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be an arbitrary finite set of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SATc(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S) etc., for many different such D, C, S, T. These versions/variants include decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend the earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems, for a number of the problems above, including the problems MAX-Q-SAT(S), and MAX-S-SAT(S). The discovery of such dichotomy theorems, for unquantified formulas, has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97].

  4. Entropy generation method to quantify thermal comfort

    NASA Technical Reports Server (NTRS)

    Boregowda, S. C.; Tiwari, S. N.; Chaturvedi, S. K.

    2001-01-01

    This paper presents a thermodynamic approach to assess the quality of human-thermal environment interaction and quantify thermal comfort. The approach involves development of an entropy generation term by applying the second law of thermodynamics to the combined human-environment system. The entropy generation term combines both human thermal physiological responses and thermal environmental variables to provide an objective measure of thermal comfort. The original concepts and definitions form the basis for establishing the mathematical relationship between thermal comfort and entropy generation. As a result of this logical and deterministic approach, an Objective Thermal Comfort Index (OTCI) is defined and established as a function of entropy generation. In order to verify the entropy-based thermal comfort model, human thermal physiological responses due to changes in ambient conditions are simulated using a well established and validated human thermal model developed at the Institute of Environmental Research of Kansas State University (KSU). The finite element based KSU human thermal computer model is utilized as a "Computational Environmental Chamber" to conduct a series of simulations examining human thermal responses to different environmental conditions. The outputs from the simulation, which include human thermal responses, and the input data consisting of environmental conditions are fed into the thermal comfort model. Continuous monitoring of thermal comfort in comfortable and extreme environmental conditions is demonstrated. The Objective Thermal Comfort values obtained from the entropy-based model are validated against regression-based Predicted Mean Vote (PMV) values. The PMV values are generated by using, in the regression equation, the same air temperatures and vapor pressures that were used in the computer simulation. The preliminary results indicate that the OTCI and PMV values correlate well under ideal conditions. However, an experimental study is needed in the future to fully establish the validity of the OTCI formula and the model. One practical application of this index is that it could be integrated into thermal control systems to develop human-centered environmental control systems for potential use in aircraft, mass transit vehicles, intelligent building systems, and space vehicles.

  5. Quantifying population structure on short timescales.

    PubMed

    Raeymaekers, Joost A M; Lens, Luc; Van den Broeck, Frederik; Van Dongen, Stefan; Volckaert, Filip A M

    2012-07-01

    Quantifying the contribution of the various processes that influence population genetic structure is important, but difficult. One of the reasons is that no single measure appropriately quantifies all aspects of genetic structure. An increasing number of studies is analysing population structure using the statistic D, which measures genetic differentiation, next to G(ST), which quantifies the standardized variance in allele frequencies among populations. Few studies have evaluated which statistic is most appropriate in particular situations. In this study, we evaluated which index is more suitable in quantifying postglacial divergence between three-spined stickleback (Gasterosteus aculeatus) populations from Western Europe. Population structure on this short timescale (10 000 generations) is probably shaped by colonization history, followed by migration and drift. Using microsatellite markers and anticipating that D and G(ST) might have different capacities to reveal these processes, we evaluated population structure at two levels: (i) between lowland and upland populations, aiming to infer historical processes; and (ii) among upland populations, aiming to quantify contemporary processes. In the first case, only D revealed clear clusters of populations, putatively indicative of population ancestry. In the second case, only G(ST) was indicative for the balance between migration and drift. Simulations of colonization and subsequent divergence in a hierarchical stepping stone model confirmed this discrepancy, which becomes particularly strong for markers with moderate to high mutation rates. We conclude that on short timescales, and across strong clines in population size and connectivity, D is useful to infer colonization history, whereas G(ST) is sensitive to more recent demographic events. PMID:22646231

  6. Effect of soil structure on the growth of bacteria in soil quantified using CARD-FISH

    NASA Astrophysics Data System (ADS)

    Juyal, Archana; Eickhorst, Thilo; Falconer, Ruth; Otten, Wilfred

    2014-05-01

    It has been reported that compaction of soil due to the use of heavy machinery has resulted in the reduction of crop yield. Compaction affects the physical properties of soil such as bulk density, soil strength and porosity. This causes an alteration in the soil structure which limits the mobility of nutrients, water and air infiltration and root penetration in soil. Several studies have been conducted to explore the effect of soil compaction on plant growth and development. However, there is scant information on the effect of soil compaction on the microbial community and its activities in soil. Understanding the effect of soil compaction on the microbial community is essential as microbial activities are very sensitive to abrupt environmental changes in soil. Therefore, the aim of this work was to investigate the effect of soil structure on the growth of bacteria in soil. The bulk density of soil was used as a soil physical parameter to quantify the effect of soil compaction. To detect and quantify bacteria in soil the method of catalyzed reporter deposition-fluorescence in situ hybridization (CARD-FISH) was used. This technique results in high intensity fluorescent signals which make it easy to quantify bacteria against high levels of autofluorescence emitted by soil particles and organic matter. In this study, the bacterial strains Pseudomonas fluorescens SBW25 and Bacillus subtilis DSM10 were used. Soils of aggregate size 2–1 mm were packed at five different bulk densities in polyethylene rings (4.25 cm3). The soil rings were sampled on four different days. Results showed that the total number of bacteria counts was reduced significantly (P

  7. Quantifiers more or less quantify online: ERP evidence for partial incremental interpretation

    PubMed Central

    Urbach, Thomas P.; Kutas, Marta

    2010-01-01

    Event-related brain potentials were recorded during RSVP reading to test the hypothesis that quantifier expressions are incrementally interpreted fully and immediately. In sentences tapping general knowledge (Farmers grow crops/worms as their primary source of income), Experiment 1 found larger N400s for atypical (worms) than typical objects (crops). Experiment 2 crossed object typicality with non-logical subject-noun phrase quantifiers (most, few). Off-line plausibility ratings exhibited the crossover interaction predicted by full quantifier interpretation: Most farmers grow crops and Few farmers grow worms were rated more plausible than Most farmers grow worms and Few farmers grow crops. Object N400s, although modulated in the expected direction, did not reverse. Experiment 3 replicated these findings with adverbial quantifiers (Farmers often/rarely grow crops/worms). Interpretation of quantifier expressions thus is neither fully immediate nor fully delayed. Furthermore, object atypicality was associated with a frontal slow positivity in few-type/rarely quantifier contexts, suggesting systematic processing differences among quantifier types. PMID:20640044

  8. Quantifying Lead-Time Bias in Risk-Factor Studies of Cancer through Simulation

    PubMed Central

    Jansen, Rick J.; Alexander, Bruce H.; Anderson, Kristin E.; Church, Timothy R.

    2013-01-01

    Purpose Lead-time is inherent in early detection and creates bias in observational studies of screening efficacy, but its potential to bias effect estimates in risk-factor studies is not always recognized. We describe a form of this bias that conventional analyses cannot address and develop a model to quantify it. Methods Surveillance Epidemiology and End Results (SEER) data form the basis for estimates of age-specific preclinical incidence and log-normal distributions describe the preclinical duration distribution. Simulations assume a joint null hypothesis of no effect of either the risk factor or screening on the preclinical incidence of cancer, and then quantify the bias as the risk-factor odds ratio (OR) from this null study. This bias can be used as a factor to adjust observed OR in the actual study. Results Results showed that for this particular study design, as average preclinical duration increased, the bias in the total-physical-activity OR monotonically increased from 1% to 22% above the null, but the smoking OR monotonically decreased from 1% above the null to 5% below the null. Conclusion The finding of nontrivial bias in fixed risk-factor effect estimates demonstrates the importance of quantitatively evaluating it in susceptible studies. PMID:23988688

  9. Wiki surveys: Open and quantifiable social data collection

    E-print Network

    Salganik, Matthew J

    2012-01-01

    Research about attitudes and opinions is central to social science and relies on two common methodological approaches: surveys and interviews. While surveys enable the quantification of large amounts of information quickly and at a reasonable cost, they are routinely criticized for being "top-down" and rigid. In contrast, interviews allow unanticipated information to "bubble up" directly from respondents, but are slow, expensive, and difficult to quantify. Advances in computing technology now enable a hybrid approach that combines the quantifiability of a survey and the openness of an interview; we call this new class of data collection tools wiki surveys. Drawing on principles underlying successful information aggregation projects, such as Wikipedia, we propose three general criteria that wiki surveys should satisfy: they should be greedy, collaborative, and adaptive. We then present results from www.allourideas.org, a free and open-source website we created that enables groups all over the world to deploy w...

  10. Quantifying the Impact of Scenic Environments on Health

    PubMed Central

    Seresinhe, Chanuki Illushka; Preis, Tobias; Moat, Helen Susannah

    2015-01-01

    Few people would deny an intuitive sense of increased wellbeing when spending time in beautiful locations. Here, we ask: can we quantify the relationship between environmental aesthetics and human health? We draw on data from Scenic-Or-Not, a website that crowdsources ratings of “scenicness” for geotagged photographs across Great Britain, in combination with data on citizen-reported health from the Census for England and Wales. We find that inhabitants of more scenic environments report better health, across urban, suburban and rural areas, even when taking core socioeconomic indicators of deprivation into account, such as income, employment and access to services. Our results provide evidence in line with the striking hypothesis that the aesthetics of the environment may have quantifiable consequences for our wellbeing. PMID:26603464

  11. Quantifying the Impact of Scenic Environments on Health.

    PubMed

    Seresinhe, Chanuki Illushka; Preis, Tobias; Moat, Helen Susannah

    2015-01-01

    Few people would deny an intuitive sense of increased wellbeing when spending time in beautiful locations. Here, we ask: can we quantify the relationship between environmental aesthetics and human health? We draw on data from Scenic-Or-Not, a website that crowdsources ratings of "scenicness" for geotagged photographs across Great Britain, in combination with data on citizen-reported health from the Census for England and Wales. We find that inhabitants of more scenic environments report better health, across urban, suburban and rural areas, even when taking core socioeconomic indicators of deprivation into account, such as income, employment and access to services. Our results provide evidence in line with the striking hypothesis that the aesthetics of the environment may have quantifiable consequences for our wellbeing. PMID:26603464

  12. Crisis of Japanese Vascular Flora Shown By Quantifying Extinction Risks for 1618 Taxa

    PubMed Central

    Kadoya, Taku; Takenaka, Akio; Ishihama, Fumiko; Fujita, Taku; Ogawa, Makoto; Katsuyama, Teruo; Kadono, Yasuro; Kawakubo, Nobumitsu; Serizawa, Shunsuke; Takahashi, Hideki; Takamiya, Masayuki; Fujii, Shinji; Matsuda, Hiroyuki; Muneda, Kazuo; Yokota, Masatsugu; Yonekura, Koji; Yahara, Tetsukazu

    2014-01-01

    Although many people have expressed alarm that we are witnessing a mass extinction, few projections have been quantified, owing to limited availability of time-series data on threatened organisms, especially plants. To quantify the risk of extinction, we need to monitor changes in population size over time for as many species as possible. Here, we present the world's first quantitative projection of plant species loss at a national level, with stochastic simulations based on the results of population censuses of 1618 threatened plant taxa in 3574 map cells of ca. 100 km2. More than 500 lay botanists helped monitor those taxa in 1994–1995 and in 2003–2004. We projected that between 370 and 561 vascular plant taxa will go extinct in Japan during the next century if past trends of population decline continue. This extinction rate is approximately two to three times the global rate. Using time-series data, we show that existing national protected areas (PAs) covering ca. 7% of Japan will not adequately prevent population declines: even core PAs can protect at best <60% of local populations from decline. Thus, the Aichi Biodiversity Target to expand PAs to 17% of land (and inland water) areas, as committed to by many national governments, is not enough: only 29.2% of currently threatened species will become non-threatened under the assumption that probability of protection success by PAs is 0.5, which our assessment shows is realistic. In countries where volunteers can be organized to monitor threatened taxa, censuses using our method should be able to quantify how fast we are losing species and to assess how effective current conservation measures such as PAs are in preventing species extinction. PMID:24922311

  13. Quantifying the Determinants of Evolutionary Dynamics Leading to Drug Resistance

    PubMed Central

    Chevereau, Guillaume; Dravecká, Marta; Batur, Tugce; Guvenek, Aysegul; Ayhan, Dilay Hazal; Toprak, Erdal; Bollenbach, Tobias

    2015-01-01

    The emergence of drug resistant pathogens is a serious public health problem. It is a long-standing goal to predict rates of resistance evolution and design optimal treatment strategies accordingly. To this end, it is crucial to reveal the underlying causes of drug-specific differences in the evolutionary dynamics leading to resistance. However, it remains largely unknown why the rates of resistance evolution via spontaneous mutations and the diversity of mutational paths vary substantially between drugs. Here we comprehensively quantify the distribution of fitness effects (DFE) of mutations, a key determinant of evolutionary dynamics, in the presence of eight antibiotics representing the main modes of action. Using precise high-throughput fitness measurements for genome-wide Escherichia coli gene deletion strains, we find that the width of the DFE varies dramatically between antibiotics and, contrary to conventional wisdom, for some drugs the DFE width is lower than in the absence of stress. We show that this previously underappreciated divergence in DFE width among antibiotics is largely caused by their distinct drug-specific dose-response characteristics. Unlike the DFE, the magnitude of the changes in tolerated drug concentration resulting from genome-wide mutations is similar for most drugs but exceptionally small for the antibiotic nitrofurantoin, i.e., mutations generally have considerably smaller resistance effects for nitrofurantoin than for other drugs. A population genetics model predicts that resistance evolution for drugs with this property is severely limited and confined to reproducible mutational paths. We tested this prediction in laboratory evolution experiments using the “morbidostat”, a device for evolving bacteria in well-controlled drug environments. Nitrofurantoin resistance indeed evolved extremely slowly via reproducible mutations—an almost paradoxical behavior since this drug causes DNA damage and increases the mutation rate. Overall, we identified novel quantitative characteristics of the evolutionary landscape that provide the conceptual foundation for predicting the dynamics of drug resistance evolution. PMID:26581035

  14. Quantifying nonverbal communicative behavior in face-to-face human dialogues

    NASA Astrophysics Data System (ADS)

    Skhiri, Mustapha; Cerrato, Loredana

    2002-11-01

    The study referred to here is based on the assumption that understanding how humans use nonverbal behavior in dialogues can be very useful in the design of more natural-looking animated talking heads. The goal of the study is twofold: (1) to explore how people use specific facial expressions and head movements to serve important dialogue functions, and (2) to show evidence that it is possible to measure and quantify the extent of these movements with the Qualisys MacReflex motion tracking system. Naturally elicited dialogues between humans have been analyzed, with attention focused on those nonverbal behaviors that serve the highly relevant functions of regulating the conversational flux (i.e., turn taking) and producing information about the state of communication (i.e., feedback). The results show that eyebrow raising, head nods, and head shakes are typical signals involved during the exchange of speaking turns, as well as in the production and elicitation of feedback. These movements can be easily measured and quantified, and this measure can be implemented in animated talking heads.

  15. A mass-balance model to separate and quantify colloidal and solute redistributions in soil

    USGS Publications Warehouse

    Bern, C.R.; Chadwick, O.A.; Hartshorn, A.S.; Khomo, L.M.; Chorover, J.

    2011-01-01

    Studies of weathering and pedogenesis have long used calculations based upon low solubility index elements to determine mass gains and losses in open systems. One of the questions currently unanswered in these settings is the degree to which mass is transferred in solution (solutes) versus suspension (colloids). Here we show that differential mobility of the low solubility, high field strength (HFS) elements Ti and Zr can trace colloidal redistribution, and we present a model for distinguishing between mass transfer in suspension and solution. The model is tested on a well-differentiated granitic catena located in Kruger National Park, South Africa. Ti and Zr ratios from parent material, soil and colloidal material are substituted into a mixing equation to quantify colloidal movement. The results show zones of both colloid removal and augmentation along the catena. Colloidal losses of 110 kg m-2 (-5% relative to parent material) are calculated for one eluviated soil profile. A downslope illuviated profile has gained 169 kg m-2 (10%) colloidal material. Elemental losses by mobilization in true solution are ubiquitous across the catena, even in zones of colloidal accumulation, and range from 1418 kg m-2 (-46%) for an eluviated profile to 195 kg m-2 (-23%) at the bottom of the catena. Quantification of simultaneous mass transfers in solution and suspension provides greater specificity on processes within soils and across hillslopes. Additionally, because colloids include both HFS and other elements, the ability to quantify their redistribution has implications for standard calculations of soil mass balances using such index elements.
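
    For reference, open-system mass-balance calculations of this kind are conventionally written with a mass-transfer coefficient referenced to an immobile index element i (here Ti or Zr); the notation below is the standard one and may differ in detail from the paper:

      \tau_j \;=\; \frac{C_{j,w}\,C_{i,p}}{C_{j,p}\,C_{i,w}} \;-\; 1,

    where C_{j,w} and C_{j,p} are the concentrations of element j in the weathered soil and in the parent material; \tau_j < 0 indicates net loss and \tau_j > 0 net gain. Because Ti and Zr travel with colloids to different degrees, the soil Ti/Zr ratio can be treated as a two-end-member mixture, for example

      \left(\frac{C_{Ti}}{C_{Zr}}\right)_{soil} \;=\; \frac{f\,C_{Ti,colloid} + (1-f)\,C_{Ti,parent}}{f\,C_{Zr,colloid} + (1-f)\,C_{Zr,parent}},

    where f is the mass fraction of colloid-derived material; solving for f quantifies colloidal addition, with colloid loss handled analogously. This is one simple way to write such a mixing relation, not necessarily the exact form used in the study.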

  16. A new way of quantifying diagnostic information from multilead electrocardiogram for cardiac disease classification

    PubMed Central

    Sharma, L.N.; Dandapat, S.

    2014-01-01

    A new measure for quantifying diagnostic information from a multilead electrocardiogram (MECG) is proposed. This diagnostic measure is based on principal component (PC) multivariate multiscale sample entropy (PMMSE). The PC analysis is used to reduce the dimension of the MECG data matrix. The multivariate multiscale sample entropy is evaluated over the PC matrix. The PMMSE values along each scale are used as a diagnostic feature vector. The performance of the proposed measure is evaluated using a least squares support vector machine classifier for detection and classification of normal (healthy control) and different cardiovascular diseases such as cardiomyopathy, cardiac dysrhythmia, hypertrophy and myocardial infarction. The results show that the cardiac diseases are successfully detected and classified with an average accuracy of 90.34%. Comparison with some of the recently published methods shows improved performance of the proposed measure for cardiac disease classification. PMID:26609392
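
    A simplified sketch of the PMMSE feature construction as described: reduce the MECG matrix with PCA, coarse-grain each retained principal component at several scales, and compute sample entropy at each scale. The sample-entropy implementation and all parameter choices (number of PCs, scales, m, r) are generic assumptions rather than the authors' exact settings.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D series; tolerance r is a fraction of the
    series' standard deviation (generic settings, not the authors')."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x) - m                                    # number of templates
    def matches(length):
        templ = np.array([x[i:i + length] for i in range(n)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return (np.sum(d <= tol) - n) / 2.0           # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def pmmse_features(mecg, n_pcs=3, scales=range(1, 6), m=2, r=0.2):
    """PCA of a (samples x leads) MECG matrix, then sample entropy of each
    retained principal component after coarse-graining at each scale."""
    X = mecg - mecg.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ vt[:n_pcs].T
    feats = []
    for pc in pcs.T:
        for s in scales:
            coarse = pc[:len(pc) // s * s].reshape(-1, s).mean(axis=1)
            feats.append(sample_entropy(coarse, m, r))
    return np.array(feats)

# toy 12-lead record: 1200 samples of correlated noise (illustrative only)
rng = np.random.default_rng(0)
mecg = np.cumsum(rng.standard_normal((1200, 12)), axis=0)
print(pmmse_features(mecg).shape)   # one feature per PC per scale -> (15,)
```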

  17. STAR Trial Shows Lower Toxicities from Raloxifene

    Cancer.gov

    Initial results in 2006 of the NCI-sponsored Study of Tamoxifen and Raloxifene (STAR) showed that a common osteoporosis drug, raloxifene, prevented breast cancer to the same degree as the drug tamoxifen, but with fewer serious side effects.

  18. Quantifying Stock Return Distributions in Financial Markets

    PubMed Central

    Botta, Federico; Moat, Helen Susannah; Stanley, H. Eugene; Preis, Tobias

    2015-01-01

    Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at second-by-second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales. PMID:26327593
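
    A hedged sketch of one standard way to examine tail behaviour of returns at several time scales, using a Hill-type tail-index estimate on absolute log returns. The synthetic price series below is a plain geometric random walk, so it will not reproduce the power-law tails reported for the Dow Jones data; it only shows the mechanics.

```python
import numpy as np

def hill_tail_exponent(returns, tail_fraction=0.05):
    """Hill estimate of the tail index of |log returns|, using the largest
    tail_fraction of observations above the next-largest value."""
    x = np.sort(np.abs(returns))[::-1]            # largest first
    k = max(int(tail_fraction * len(x)), 10)
    tail, threshold = x[:k], x[k]
    return 1.0 / np.mean(np.log(tail / threshold))

def log_returns(prices, dt):
    """Non-overlapping log returns over a horizon of dt samples."""
    return np.diff(np.log(prices[::dt]))

# toy second-resolution price series (geometric random walk), purely
# illustrative -- not the Dow Jones dataset analysed in the paper
rng = np.random.default_rng(1)
prices = 100.0 * np.exp(np.cumsum(1e-4 * rng.standard_normal(500_000)))

for dt in (300, 900, 3600):                       # seconds
    r = log_returns(prices, dt)
    print(dt, round(hill_tail_exponent(r), 2))
```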

  19. Quantifying the ventilatory control contribution to sleep apnoea using polysomnography

    PubMed Central

    Terrill, Philip I.; Edwards, Bradley A.; Nemati, Shamim; Butler, James P.; Owens, Robert L.; Eckert, Danny J.; White, David P.; Malhotra, Atul; Wellman, Andrew; Sands, Scott A.

    2015-01-01

    Elevated loop gain, consequent to hypersensitive ventilatory control, is a primary nonanatomical cause of obstructive sleep apnoea (OSA) but it is not possible to quantify this in the clinic. Here we provide a novel method to estimate loop gain in OSA patients using routine clinical polysomnography alone. We use the concept that spontaneous ventilatory fluctuations due to apnoeas/hypopnoeas (disturbance) result in opposing changes in ventilatory drive (response) as determined by loop gain (response/disturbance). Fitting a simple ventilatory control model (including chemical and arousal contributions to ventilatory drive) to the ventilatory pattern of OSA reveals the underlying loop gain. Following mathematical-model validation, we critically tested our method in patients with OSA by comparison with a standard (continuous positive airway pressure (CPAP) drop method), and by assessing its ability to detect the known reduction in loop gain with oxygen and acetazolamide. Our method quantified loop gain from baseline polysomnography (correlation versus CPAP-estimated loop gain: n=28; r=0.63, p<0.001), detected the known reduction in loop gain with oxygen (n=11; mean±SEM change in loop gain (ΔLG) -0.23±0.08, p=0.02) and acetazolamide (n=11; ΔLG -0.20±0.06, p=0.005), and predicted the OSA response to loop gain-lowering therapy. We validated a means to quantify the ventilatory control contribution to OSA pathogenesis using clinical polysomnography, enabling identification of likely responders to therapies targeting ventilatory control. PMID:25323235

  20. Quantifying Mineralization Utilizing Bone Mineral Density Distribution in the Mandible

    PubMed Central

    Donneys, Alexis; Nelson, Noah S.; Deshpande, Sagar S.; Boguslawski, Matthew J.; Tchanque-Fossuo, Catherine N.; Farberg, Aaron S.; Buchman, Steven R.

    2012-01-01

    Background Microcomputed Tomography (μCT) is an efficient method for quantifying the density and mineralization of mandibular microarchitecture. Conventional radiomorphometrics such as Bone and Tissue Mineral Density are useful in determining the average, overall mineral content of a scanned specimen; however, solely relying on these metrics has limitations. Utilizing Bone Mineral Density Distribution (BMDD), the complex array of mineralization densities within a bone sample can be portrayed. This information is particularly useful as a computational feature reflective of the rate of bone turnover. Here we demonstrate the utility of BMDD analyses in the rat mandible and generate a platform for further exploration of mandibular pathology and treatment. Methods Male Sprague Dawley rats (n=8) underwent μCT and histogram data was generated from a selected volume of interest. A standard curve was derived for each animal and reference criteria were defined. An average histogram was produced for the group and descriptive analyses including the means and standard deviations are reported for each of the normative metrics. Results Mpeak (3444 Hounsfield Units, SD = 138) and Mwidth (2221 Hounsfield Units, SD = 628) are two metrics demonstrating reproducible parameters of BMDD with minimal variance. A total of eight valuable metrics quantifying biologically significant events concerning mineralization are reported. Conclusion Here we quantify the vast wealth of information depicted in the complete spectrum of mineralization established by the BMDD analysis. We demonstrate its potential in delivering mineralization data that encompasses and enhances conventional reporting of radiomorphometrics. Moreover, we explore its role and translational potential in craniofacial experimentation. PMID:22976646
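
    A minimal sketch of how two of the reported BMDD metrics could be extracted from a voxel-intensity histogram: Mpeak as the histogram mode and Mwidth as the full width at half maximum. The synthetic Hounsfield-unit values and bin count are illustrative assumptions, not the study's processing pipeline.

```python
import numpy as np

def bmdd_metrics(values, bins=256):
    """Mpeak (histogram mode) and Mwidth (full width at half maximum)
    of a bone mineral density distribution."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    peak_idx = int(np.argmax(hist))
    m_peak = centers[peak_idx]
    half = hist[peak_idx] / 2.0
    above = np.where(hist >= half)[0]
    m_width = centers[above[-1]] - centers[above[0]]
    return m_peak, m_width

# toy voxel attenuation values in Hounsfield units (illustrative only)
rng = np.random.default_rng(2)
voxels = rng.normal(3400, 900, 50_000)
print(bmdd_metrics(voxels))
```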

  1. Quantifying the surface chemistry of 3D matrices in situ

    NASA Astrophysics Data System (ADS)

    Tzeranis, Dimitrios S.; So, Peter T. C.; Yannas, Ioannis V.

    2014-03-01

    Despite the major role of the matrix (the insoluble environment around cells) in physiology and pathology, there are very few and limited methods that can quantify the surface chemistry of a 3D matrix such as a biomaterial or tissue ECM. This study describes a novel optical-based methodology that can quantify the surface chemistry (density of adhesion ligands for particular cell adhesion receptors) of a matrix in situ. The methodology utilizes fluorescent analogs (markers) of the receptor of interest and a series of binding assays, where the amount of bound markers on the matrix is quantified via spectral multi-photon imaging. The study provides preliminary results for the quantification of the ligands for the two major collagen-binding integrins (α1β1, α2β1) in porous collagen scaffolds that have been shown to be able to induce maximum regeneration in transected peripheral nerves. The developed methodology opens the way for quantitative descriptions of the insoluble microenvironment of cells in physiology and pathology, and for integrating the matrix in quantitative models of cell signaling.

  2. Quantifying Hierarchy Stimuli in Systematic Desensitization Via GSR: A Preliminary Investigation

    ERIC Educational Resources Information Center

    Barabasz, Arreed F.

    1974-01-01

    The aim of the method for quantifying hierarchy stimuli by Galvanic Skin Resistance recordings is to improve the results of systematic desensitization by attenuating the subjective influences in hierarchy construction which are common in traditional procedures. (Author/CS)

  3. The development of a methodology to quantify the impacts of information management strategies on EPC projects 

    E-print Network

    Moreau, Karen Anne

    1997-01-01

    This research develops and demonstrates a methodology to quantify time and cost impacts on Engineering, Procurement, and Construction (EPC) projects resulting from information management driven process changes in design related activities. Many...

  4. Quantifying alosine prey in the diets of marine piscivores in the Gulf of Maine.

    PubMed

    McDermott, S P; Bransome, N C; Sutton, S E; Smith, B E; Link, J S; Miller, T J

    2015-06-01

    The objectives of this work were to quantify the spatial and temporal distribution of the occurrence of anadromous fishes (alewife Alosa pseudoharengus, blueback herring Alosa aestivalis and American shad Alosa sapidissima) in the stomachs of demersal fishes in coastal waters of the north-west Atlantic Ocean. Results show that anadromous fishes were detectable and quantifiable in the diets of common marine piscivores for every season sampled. Even though anadromous fishes were not the most abundant prey, they accounted for c. 5-10% of the diet by mass for several marine piscivores. Statistical comparisons of these data with fish diet data from a broad-scale survey of the north-west Atlantic Ocean indicate that the frequency of this trophic interaction was significantly higher within spatially and temporally focused sampling areas of this study than in the broad-scale survey. Odds ratios of anadromous predation were as much as 460 times higher in the targeted sampling as compared with the broad-scale sampling. Analyses indicate that anadromous prey consumption was more concentrated in the near-coastal waters compared with consumption of a similar, but more widely distributed species, the Atlantic herring Clupea harengus. In the context of ecosystem-based fisheries management, the results suggest that even low-frequency feeding events may be locally important, and should be incorporated into ecosystem models. PMID:25943427

  5. Quantifying the Robustness of the English Sibilant Fricative Contrast in Children

    PubMed Central

    Reidy, Patrick F.; Beckman, Mary E.; Edwards, Jan

    2015-01-01

    Purpose Four measures of children's developing robustness of phonological contrast were compared to see how they correlated with age, vocabulary size, and adult listeners' correctness ratings. Method Word-initial sibilant fricative productions from eighty-one 2- to 5-year-old children and 20 adults were phonetically transcribed and acoustically analyzed. Four measures of robustness of contrast were calculated for each speaker on the basis of the centroid frequency measured from each fricative token. Productions that were transcribed as correct from different children were then used as stimuli in a perception experiment in which adult listeners rated the goodness of each production. Results Results showed that the degree of category overlap, quantified as the percentage of a child's productions whose category could be correctly predicted from the output of a mixed-effects logistic regression model, was the measure that correlated best with listeners' goodness judgments. Conclusions Even when children's productions have been transcribed as correct, adult listeners are sensitive to within-category variation quantified by the child's degree of category overlap. Further research is needed to explore the relationship between the age of a child and adults' sensitivity to different types of within-category variation in children's speech. PMID:25766040
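
    A simplified sketch of the category-overlap idea: predict each token's intended category from its centroid frequency and report the percentage of tokens the classifier gets wrong. The study used a mixed-effects logistic regression across children; the single-level classifier and the synthetic centroid values below are simplifying assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def category_overlap(centroids, labels):
    """Percentage of a child's /s/ and /sh/ tokens whose category is NOT
    recovered from centroid frequency by a logistic classifier
    (higher = more overlap between the two fricative categories)."""
    X = np.asarray(centroids, dtype=float).reshape(-1, 1)
    y = np.asarray(labels)
    model = LogisticRegression().fit(X, y)
    accuracy = model.score(X, y)
    return 100.0 * (1.0 - accuracy)

# toy data: centroid frequencies (Hz) for /s/ vs /sh/ tokens
rng = np.random.default_rng(3)
s_tokens = rng.normal(6500, 700, 40)
sh_tokens = rng.normal(4500, 700, 40)
centroids = np.concatenate([s_tokens, sh_tokens])
labels = ["s"] * 40 + ["sh"] * 40
print(f"category overlap: {category_overlap(centroids, labels):.1f}%")
```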

  6. Quantifying and scaling airplane performance in turbulence

    NASA Astrophysics Data System (ADS)

    Richardson, Johnhenri R.

    This dissertation studies the effects of turbulent wind on airplane airspeed and normal load factor, determining how these effects scale with airplane size and developing envelopes to account for them. The results have applications in design and control of aircraft, especially small scale aircraft, for robustness with respect to turbulence. Using linearized airplane dynamics and the Dryden gust model, this dissertation presents analytical and numerical scaling laws for airplane performance in gusts, safety margins that guarantee, with specified probability, that steady flight can be maintained when stochastic wind gusts act upon an airplane, and envelopes to visualize these safety margins. Presented here for the first time are scaling laws for the phugoid natural frequency, phugoid damping ratio, airspeed variance in turbulence, and flight path angle variance in turbulence. The results show that small aircraft are more susceptible to high frequency gusts, that the phugoid damping ratio does not depend directly on airplane size, that the airspeed and flight path angle variances can be parameterized by the ratio of the phugoid natural frequency to a characteristic turbulence frequency, and that the coefficient of variation of the airspeed decreases with increasing airplane size. Accompanying numerical examples validate the results using eleven different airplane models, focusing on NASA's hypothetical Boeing 757 analog, the Generic Transport Model, and its operational 5.5% scale model, the NASA T2. Also presented here for the first time are stationary flight, where the flight state is a stationary random process, and the stationary flight envelope, an adjusted steady flight envelope to visualize safety margins for stationary flight. The dissertation shows that driving the linearized airplane equations of motion with stationary, stochastic gusts results in stationary flight. It also shows how feedback control can enlarge the stationary flight envelope by alleviating gust loads, though the enlargement is significantly limited by control surface saturation. The results end with a numerical example of a Navion general aviation aircraft performing various steady flight maneuvers in moderate turbulence, showing substantial reductions in the steady flight envelope for some combinations of maneuvers, turbulence, and safety margins.
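
    A minimal sketch of generating the longitudinal Dryden gust component referred to above, modelled here as a first-order Gauss-Markov process with time constant L_u/V; the airspeed, length scale, and turbulence intensity are illustrative values, not those used in the dissertation.

```python
import numpy as np

def dryden_u_gust(V, L_u, sigma_u, dt, n, seed=4):
    """Longitudinal gust time series as a first-order Gauss-Markov process
    with time constant tau = L_u / V and stationary standard deviation
    sigma_u (an exact discretization of a first-order shaping filter)."""
    tau = L_u / V
    a = np.exp(-dt / tau)
    b = sigma_u * np.sqrt(1.0 - a * a)
    rng = np.random.default_rng(seed)
    u = np.empty(n)
    u[0] = sigma_u * rng.standard_normal()
    for k in range(1, n):
        u[k] = a * u[k - 1] + b * rng.standard_normal()
    return np.arange(n) * dt, u

# illustrative parameters: 50 m/s airspeed, 200 m turbulence length scale
t, gust = dryden_u_gust(V=50.0, L_u=200.0, sigma_u=1.5, dt=0.02, n=20000)
print(f"simulated gust std: {gust.std():.2f} m/s (target 1.5 m/s)")
```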

  7. 3D Wind: Quantifying wind speed and turbulence intensity

    NASA Astrophysics Data System (ADS)

    Barthelmie, R. J.; Pryor, S. C.; Wang, H.; Crippa, P.

    2013-12-01

    Integrating measurements and modeling of wind characteristics for wind resource assessment and wind farm control is increasingly challenging as the scale of wind farms increases. Even offshore or in relatively homogeneous landscapes, there are significant gradients of both wind speed and turbulence intensity on scales that typify large wind farms. Our project is, therefore, focused on (i) improving methods to integrate remote sensing and in situ measurements with model simulations to produce a 3-dimensional view of the flow field on wind farm scales and (ii) investigating important controls on the spatiotemporal variability of flow fields within the coastal zone. The instrument suite deployed during the field experiments includes: 3-D sonic and cup anemometers deployed on meteorological masts and buoys, anemometers deployed on tethersondes and an Unmanned Aerial Vehicle, multiple vertically-pointing continuous-wave lidars and scanning Doppler lidars. We also integrate data from satellite-borne instrumentation - specifically synthetic aperture radar and scatterometers - and output from the Weather Research and Forecasting (WRF) model. Spatial wind fields and vertical profiles of wind speed from WRF and from the full in situ observational suite exhibit excellent agreement in a proof-of-principle experiment conducted in north Indiana, particularly during convective conditions, but showed some discrepancies during the breakdown of the nocturnal stable layer. Our second experiment in May 2013 focused on triangulating a volume above an area of coastal water extending from the port in Cleveland out to an offshore water intake crib (about 5 km) and back to the coast, and includes extremely high resolution WRF simulations designed to characterize the coastal zone. Vertically pointing continuous-wave lidars were operated at each apex of the triangle, while the scanning Doppler lidar scanned out across the water over a 90 degree azimuth angle. Preliminary results pertaining to objective (i) indicate there is good agreement between wind and turbulence profiles to 200 m as measured by the vertically pointing lidars, with the expected modification of the offshore profiles relative to the profiles at the coastline. However, these profiles do not always fully agree with wind speed and direction profiles measured by the scanning Doppler lidar. Further investigation is required to elucidate these results and to analyze whether these discrepancies occur during particular atmospheric conditions. Preliminary results regarding controls on flow in the coastal zone (i.e. objective ii) include clear evidence that the wind profile to 200 m was modified due to swell during unstable conditions, even under moderate to high wind speed conditions. The measurement campaigns will be described in detail, with a view to evaluating optimal strategies for offshore measurement campaigns and in the context of quantifying wind and turbulence in a 3D volume.

  8. Quantifying and predicting interpretational uncertainty in cross-sections

    NASA Astrophysics Data System (ADS)

    Randle, Charles; Bond, Clare; Monaghan, Alison; Lark, Murray

    2015-04-01

    Cross-sections are often constructed from data to create a visual impression of the geologist's interpretation of the sub-surface geology. However, as with all interpretations, this vision of the sub-surface geology is uncertain. We have designed and carried out an experiment with the aim of quantifying the uncertainty in geological cross-sections created by experts interpreting borehole data. By analysing different attributes of the data and interpretations we reflect on the main controls on uncertainty. A group of ten expert modellers at the British Geological Survey were asked to interpret an 11.4 km long cross-section from south-east Glasgow, UK. The data provided consisted of map and borehole data of the superficial deposits and shallow bedrock. Each modeller had a unique set of 11 boreholes removed from their dataset, to which their interpretations of the top of the bedrock were compared. This methodology allowed quantification of how far from the 'correct answer' each interpretation is at 11 points along each interpreted cross-section line, through comparison of the interpreted and actual bedrock elevations in the boreholes. This resulted in the collection of 110 measurements of the error to use in further analysis. To determine the potential controls on uncertainty, various attributes relating to the modeller, the interpretation and the data were recorded. Modellers were asked to fill out a questionnaire asking for information such as how much 3D modelling experience they had and how long it took them to complete the interpretation. They were also asked to record their confidence in their interpretations graphically, in the form of a confidence level drawn onto the cross-section. Initial analysis showed that the majority of the experts' interpreted bedrock elevations were within 5 metres of those recorded in the withheld boreholes. Their distribution is peaked and symmetrical about a mean of zero, indicating that there was no tendency for the experts to either under- or over-estimate the elevation of the bedrock. More complex analysis was completed in the form of linear mixed-effects modelling. The modelling was used to determine if there were any correlations between the error and any other parameter recorded in the questionnaire, section or the initial dataset. This has resulted in the determination of both data-based and interpreter-based controls on uncertainty, adding insight into how uncertainty can be predicted, as well as how interpretation workflows can be improved. Our results will inform further experiments across a wide variety of geological situations to build understanding and best-practice workflows for cross-section interpretation to reduce uncertainty.

  9. Quantifying Coherence in Infinite Dimensional Systems

    E-print Network

    Yu-Ran Zhang; Lian-He Shao; Yongming Li; Heng Fan

    2015-05-20

    We study the quantification of coherence in infinite dimensional systems, especially the infinite dimensional bosonic systems in Fock space. We show that, given the energy constraints, the relative entropy of coherence serves as a well-defined quantification of coherence in infinite dimensional systems. Using the relative entropy of coherence, we also generalize the problem to multi-mode Fock space, and special examples are considered. It is shown that with a finite average particle number, increasing the number of modes of light can enhance the relative entropy of coherence. With the mean energy constraint, our results can also be extended to other infinite-dimensional systems.
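
    A small numerical illustration of the relative entropy of coherence in a truncated Fock space: for a pure state it reduces to the Shannon entropy of the Fock-basis populations. The coherent-state example and the cutoff are illustrative choices, not the paper's worked examples.

```python
import numpy as np
from math import lgamma

def coherent_number_probs(mean_n, cutoff):
    """Photon-number (Fock-basis) probabilities of a coherent state with
    mean photon number |alpha|^2 = mean_n, truncated at `cutoff` photons."""
    n = np.arange(cutoff + 1)
    logp = -mean_n + n * np.log(mean_n) - np.array([lgamma(k + 1) for k in n])
    p = np.exp(logp)
    return p / p.sum()          # renormalize after truncation

def relative_entropy_of_coherence_pure(p):
    """For a pure state, C_rel = S(dephased state) - S(rho) = S(diag),
    i.e. the Shannon entropy of the Fock-basis populations (in nats)."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

for mean_n in (0.5, 2.0, 8.0):
    probs = coherent_number_probs(mean_n, cutoff=100)
    print(mean_n, round(relative_entropy_of_coherence_pure(probs), 3))
```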

  10. Quantifying individual performance in Cricket — A network analysis of batsmen and bowlers

    NASA Astrophysics Data System (ADS)

    Mukherjee, Satyam

    2014-01-01

    Quantifying individual performance in the game of Cricket is critical for team selection in International matches. The number of runs scored by batsmen and wickets taken by bowlers serves as a natural way of quantifying the performance of a cricketer. Traditionally, batsmen and bowlers are rated on their batting or bowling average, respectively. However, in a game like Cricket the manner in which one scores runs or claims a wicket is always important. Scoring runs against a strong bowling line-up or delivering a brilliant performance against a team with a strong batting line-up deserves more credit. A player’s average is not able to capture this aspect of the game. In this paper we present a refined method to quantify the ‘quality’ of runs scored by a batsman or wickets taken by a bowler. We explore the application of Social Network Analysis (SNA) to rate the players’ contributions to team performance. We generate a directed and weighted network of batsmen-bowlers using the player-vs-player information available for Test cricket and ODI cricket. Additionally, we generate a network of batsmen and bowlers based on the dismissal record of batsmen in the history of cricket: Test (1877-2011) and ODI (1971-2011). Our results show that M. Muralitharan is the most successful bowler in the history of Cricket. Our approach could potentially be applied in domestic matches to judge a player’s performance, which in turn paves the way for a balanced team selection for International matches.
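
    A toy sketch of the network idea using PageRank-style centrality on a directed, weighted batsman-bowler dismissal network (via networkx); the player names, weights, and the use of plain PageRank rather than the authors' exact rating scheme are all illustrative assumptions.

```python
import networkx as nx

# toy directed, weighted batsman -> bowler "dismissal" network; player
# names and weights are purely illustrative, not the historical data
edges = [
    ("Batsman A", "Bowler X", 5),   # Bowler X dismissed Batsman A 5 times
    ("Batsman B", "Bowler X", 3),
    ("Batsman A", "Bowler Y", 1),
    ("Batsman C", "Bowler Y", 2),
    ("Batsman C", "Bowler Z", 4),
]
G = nx.DiGraph()
for batsman, bowler, times in edges:
    G.add_edge(batsman, bowler, weight=times)

# PageRank-style centrality: bowlers gain credit for dismissing batsmen,
# weighted by how often the dismissals occurred
scores = nx.pagerank(G, alpha=0.85, weight="weight")
for player, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{player:10s} {score:.3f}")
```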

  11. Quantifying magma mixing with the Shannon entropy: Application to simulations and experiments

    NASA Astrophysics Data System (ADS)

    Perugini, D.; De Campos, C. P.; Petrelli, M.; Morgavi, D.; Vetere, F. P.; Dingwell, D. B.

    2015-11-01

    We introduce a new quantity to petrology, the Shannon entropy, as a tool for quantifying mixing as well as the rate of production of hybrid compositions in the mixing system. The Shannon entropy approach is applied to time series numerical simulations and high-temperature experiments performed with natural melts. We note that in both cases the Shannon entropy increases linearly during the initial stages of mixing and then saturates toward constant values. Furthermore, chemical elements with different mobilities display different rates of increase of the Shannon entropy. This indicates that the hybrid composition for the different elements is attained at different times generating a wide range of spatio-compositional domains which further increase the apparent complexity of the mixing process. Results from the application of the Shannon entropy analysis are compared with the concept of Relaxation of Concentration Variance (RCV), a measure recently introduced in petrology to quantify chemical exchanges during magma mixing. We derive a linear expression relating the change of concentration variance during mixing and the Shannon entropy. We show that the combined use of Shannon entropy and RCV provides the most complete information about the space and time complexity of magma mixing. As a consequence, detailed information about this fundamental petrogenetic and volcanic process can be gathered. In particular, the Shannon entropy can be used as complement to the RCV method to quantify the mobility of chemical elements in magma mixing systems, to obtain information about the rate of production of compositional heterogeneities, and to derive empirical relationships linking the rate of chemical exchanges between interacting magmas and mixing time.
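
    A minimal sketch of the Shannon entropy measure applied to a toy one-dimensional diffusion between two melt compositions: the entropy of the binned concentration distribution rises as hybrid compositions are produced. The binning, grid, and diffusion setup are illustrative assumptions, not the chaotic-mixing simulations or experiments analysed in the paper.

```python
import numpy as np

def shannon_entropy(conc, bins=32):
    """Shannon entropy (bits) of the binned concentration distribution."""
    hist, _ = np.histogram(conc, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# toy 1-D "mixing": two compositions (0 and 1) diffusing into each other
n, steps, d = 400, 2000, 0.2
conc = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])
for step in range(steps):
    conc[1:-1] += d * (conc[2:] - 2.0 * conc[1:-1] + conc[:-2])
    if step % 400 == 0:
        print(step, round(shannon_entropy(conc), 3))  # entropy grows as hybrids form
```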

  12. The Physics of Equestrian Show Jumping

    NASA Astrophysics Data System (ADS)

    Stinner, Art

    2014-04-01

    This article discusses the kinematics and dynamics of equestrian show jumping. For some time I have attended a series of show jumping events at Spruce Meadows, an international equestrian center near Calgary, Alberta, often referred to as the "Wimbledon of equestrian jumping." I have always had a desire to write an article such as this one, but when I searched the Internet for information and looked at YouTube presentations, I could only find simplistic references to Newton's laws and the conservation of mechanical energy principle. Nowhere could I find detailed calculations. On the other hand, there were several biomechanical articles with empirical reports of the results of kinetic and dynamic investigations of show jumping using high-speed digital cameras and force plates. They summarize their results in tables that give information about the motion of a horse jumping over high fences (1.40 m) and the magnitudes of the forces encountered when landing. However, they do not describe the physics of these results.

  13. Quantifying Subsidence in the 1999-2000 Arctic Winter Vortex

    NASA Technical Reports Server (NTRS)

    Greenblatt, Jeffery B.; Jost, Hans-juerg; Loewenstein, Max; Podolske, James R.; Bui, T. Paul; Elkins, James W.; Moore, Fred L.; Ray, Eric A.; Sen, Bhaswar; Margitan, James J.; Hipskind, R. Stephen (Technical Monitor)

    2000-01-01

    Quantifying the subsidence of the polar winter stratospheric vortex is essential to the analysis of ozone depletion, as chemical destruction often occurs against a large, altitude-dependent background ozone concentration. Using N2O measurements made during SOLVE on a variety of platforms (ER-2, in-situ balloon and remote balloon), the 1999-2000 Arctic winter subsidence is determined from N2O-potential temperature correlations along several N2O isopleths. The subsidence rates are compared to those determined in other winters, and comparison is also made with results from the SLIMCAT stratospheric chemical transport model.

  14. Quantifying uncertainty in observational rainfall datasets

    NASA Astrophysics Data System (ADS)

    Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen

    2015-04-01

    The Coordinated Regional Downscaling Experiment (CORDEX) has to date seen the publication of at least ten journal papers that examine the African domain during 2012 and 2013. Five of these papers consider Africa generally (Nikulin et al. 2012, Kim et al. 2013, Hernández-Díaz et al. 2013, Laprise et al. 2013, Panitz et al. 2013) and five have regional foci: Tramblay et al. (2013) on Northern Africa, Mariotti et al. (2014) and Gbobaniyi et al. (2013) on West Africa, Endris et al. (2013) on East Africa and Kalognomou et al. (2013) on southern Africa. A further three papers known to the authors are under review. These papers all use observed rainfall and/or temperature data to evaluate/validate the regional model output and often proceed to assess projected changes in these variables due to climate change in the context of these observations. The most popular reference rainfall data used are the CRU, GPCP, GPCC, TRMM and UDEL datasets. However, as Kalognomou et al. (2013) point out, there are many other rainfall datasets available for consideration, for example, CMORPH, FEWS, TAMSAT & RIANNAA, TAMORA and the WATCH & WATCH-DEI data. They, with others (Nikulin et al. 2012, Sylla et al. 2012), show that the observed datasets can have a very wide spread at a particular space-time coordinate. As more ground-, space- and reanalysis-based rainfall products become available, all of which use different methods to produce precipitation data, the selection of reference data is becoming an important factor in model evaluation. A number of factors can contribute to uncertainty in terms of the reliability and validity of the datasets, such as radiance conversion algorithms, the quantity and quality of available station data, interpolation techniques and the blending methods used to combine satellite- and gauge-based products. However, to date no comprehensive study has been performed to evaluate the uncertainty in these observational datasets. We assess 18 gridded rainfall datasets available over Africa on monthly, daily and sub-daily time scales as appropriate to quantify spatial and temporal differences between the datasets. We find regional wet and dry biases between datasets (using the ensemble mean as a reference), with generally larger biases in reanalysis products. Rainfall intensity is poorly represented in some datasets, which demonstrates that some datasets should not be used for rainfall intensity analyses. Using 10 CORDEX models we show in east Africa that the spread between observed datasets is often similar to the spread between models. We recommend that specific observational rainfall datasets be used for specific investigations and also that, where many datasets are applicable to an investigation, a probabilistic view be adopted for rainfall studies over Africa. Endris, H. S., P. Omondi, S. Jain, C. Lennard, B. Hewitson, L. Chang'a, J. L. Awange, A. Dosio, P. Ketiem, G. Nikulin, H-J. Panitz, M. Büchner, F. Stordal, and L. Tazalika (2013) Assessment of the Performance of CORDEX Regional Climate Models in Simulating East African Rainfall. J. Climate, 26, 8453-8475. DOI: 10.1175/JCLI-D-12-00708.1 Gbobaniyi, E., A. Sarr, M. B. Sylla, I. Diallo, C. Lennard, A. Dosio, A. Diedhiou, A. Kamga, N. A. B. Klutse, B. Hewitson, and B. Lamptey (2013) Climatology, annual cycle and interannual variability of precipitation and temperature in CORDEX simulations over West Africa. Int. J. Climatol., DOI: 10.1002/joc.3834 Hernández-Díaz, L., R. Laprise, L. Sushama, A. Martynov, K.
Winger, and B. Dugas (2013) Climate simulation over CORDEX Africa domain using the fifth-generation Canadian Regional Climate Model (CRCM5). Clim. Dyn. 40, 1415-1433. DOI: 10.1007/s00382-012-1387-z Kalognomou, E., C. Lennard, M. Shongwe, I. Pinto, A. Favre, M. Kent, B. Hewitson, A. Dosio, G. Nikulin, H. Panitz, and M. Büchner (2013) A diagnostic evaluation of precipitation in CORDEX models over southern Africa. Journal of Climate, 26, 9477-9506. DOI:10.1175/JCLI-D-12-00703.1 Kim, J., D. E. Waliser, C. A. Mattmann, C. E. Goodale, A. F. Hart

  15. Quantifying Biofilm in Porous Media Using Rock Physics Models

    NASA Astrophysics Data System (ADS)

    Alhadhrami, F. M.; Jaiswal, P.; Atekwana, E. A.

    2012-12-01

    Biofilm formation and growth in porous rocks can change their material properties, such as porosity and permeability, which in turn will impact fluid flow. Finding a non-intrusive method to quantify biofilms and their byproducts in rocks is key to understanding and modeling bioclogging in porous media. Previous geophysical investigations have documented that seismic techniques are sensitive to biofilm growth. These studies pointed to the fact that microbial growth and biofilm formation induce heterogeneity in the seismic properties. Currently there are no rock physics models to explain these observations and to provide quantitative interpretation of the seismic data. Our objective is to develop a new class of rock physics models that incorporate microbial processes and their effect on seismic properties. Using the assumption that biofilms can grow within pore spaces or as a layer coating the mineral grains, P-wave (Vp) and S-wave (Vs) velocity models were constructed using travel-time and waveform tomography techniques. We used generic rock physics schematics to represent our rock system numerically. We simulated the arrival times as well as waveforms by treating biofilms either as fluid (filling pore spaces) or as part of the matrix (coating sand grains). The preliminary results showed that there is a 1% change in Vp and a 3% change in Vs when biofilms are represented as discrete structures in pore spaces. On the other hand, a 30% change in Vp and a 100% change in Vs was observed when biofilm was represented as part of the matrix coating sand grains. Therefore, Vp and Vs changes are more rapid when biofilm grows as a grain-coating phase. The significant change in Vs associated with biofilms suggests that shear velocity can be used as a diagnostic tool for imaging zones of bioclogging in the subsurface. The results obtained from this study have significant implications for the study of the rheological properties of biofilms in geological media. Other applications include assessing biofilms used as barriers in CO2 sequestration studies as well as assisting in evaluating microbial enhanced oil recovery (MEOR) methods, where microorganisms are used to plug highly porous rocks for efficient oil production.

  16. Approach to quantify human dermal skin aging using multiphoton laser scanning microscopy

    NASA Astrophysics Data System (ADS)

    Puschmann, Stefan; Rahn, Christian-Dennis; Wenck, Horst; Gallinat, Stefan; Fischer, Frank

    2012-03-01

    Extracellular skin structures in human skin are impaired during intrinsic and extrinsic aging. Assessment of these dermal changes is conducted by subjective clinical evaluation and histological and molecular analysis. We aimed to develop a new parameter for the noninvasive quantitative determination of dermal skin alterations utilizing the high-resolution three-dimensional multiphoton laser scanning microscopy (MPLSM) technique. To quantify structural differences between chronically sun-exposed and sun-protected human skin, the respective collagen-specific second harmonic generation and the elastin-specific autofluorescence signals were recorded in young and elderly volunteers using the MPLSM technique. After image processing, the elastin-to-collagen ratio (ELCOR) was calculated. Results show that the ELCOR parameter of volar forearm skin significantly increases with age. For elderly volunteers, the ELCOR value calculated for the chronically sun-exposed temple area is significantly augmented compared to the sun-protected upper arm area. Based on the MPLSM technology, we introduce the ELCOR parameter as a new means to quantify accurately age-associated alterations in the extracellular matrix.

  17. Quantifying Land Use Impacts on Biodiversity: Combining Species-Area Models and Vulnerability Indicators.

    PubMed

    Chaudhary, Abhishek; Verones, Francesca; de Baan, Laura; Hellweg, Stefanie

    2015-08-18

    Habitat degradation and subsequent biodiversity damage often take place far from the place of consumption because of globalization and the increasing level of international trade. Informing consumers and policy makers about the biodiversity impacts "hidden" in the life cycle of imported products is an important step toward achieving sustainable consumption patterns. Spatially explicit methods are needed in life cycle assessment to accurately quantify biodiversity impacts of products and processes. We use the Countryside species-area relationship (SAR) to quantify regional species loss due to land occupation and transformation for five taxa and six land use types in 804 terrestrial ecoregions. Further, we calculate vulnerability scores for each ecoregion based on the fraction of each species' geographic range (endemic richness) hosted by the ecoregion and the IUCN assigned threat level of each species. Vulnerability scores are multiplied with SAR-predicted regional species loss to estimate potential global extinctions per unit of land use. As a case study, we assess the land use biodiversity impacts of 1 kg of bioethanol produced using six different feed stocks in different parts of the world. Results show that the regions with highest biodiversity impacts differed markedly when the vulnerability of species was included. PMID:26197362
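
    A hedged sketch of the countryside SAR calculation described above, with a simple vulnerability weighting applied afterwards; the ecoregion areas, habitat affinities, z-exponent, and vulnerability score are all made-up illustrative values.

```python
import numpy as np

def countryside_sar_loss(s_org, a_org, a_natural, land_use_areas,
                         affinities, z=0.25):
    """Regional species loss predicted by the countryside species-area
    relationship: species persist in converted land in proportion to
    their taxon-specific affinity h_j for each land-use type j."""
    effective_area = a_natural + sum(h * a for h, a in
                                     zip(affinities, land_use_areas))
    return s_org * (1.0 - (effective_area / a_org) ** z)

# illustrative ecoregion: 10,000 km2, 1,200 plant species, two land uses
s_lost = countryside_sar_loss(
    s_org=1200.0,
    a_org=10_000.0,
    a_natural=6_000.0,
    land_use_areas=[3_000.0, 1_000.0],     # cropland, urban (km2)
    affinities=[0.3, 0.05],                # assumed habitat affinities
)
vulnerability = 0.02   # assumed ecoregion vulnerability score (0-1)
print(f"regional loss: {s_lost:.0f} species; "
      f"weighted global loss: {s_lost * vulnerability:.1f} species-equivalents")
```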

  18. An integrated method for quantifying root architecture of field-grown maize

    PubMed Central

    Wu, Jie; Guo, Yan

    2014-01-01

    Background and Aims A number of techniques have recently been developed for studying the root system architecture (RSA) of seedlings grown in various media. In contrast, methods for sampling and analysis of the RSA of field-grown plants, particularly for details of the lateral root components, are generally inadequate. Methods An integrated methodology was developed that includes a custom-made root-core sampling system for extracting intact root systems of individual maize plants, a combination of proprietary software and a novel program used for collecting individual RSA information, and software for visualizing the measured individual nodal root architecture. Key Results Example experiments show that large root cores can be sampled, and topological and geometrical structure of field-grown maize root systems can be quantified and reconstructed using this method. Second- and higher order laterals are found to contribute substantially to total root number and length. The length of laterals of distinct orders varies significantly. Abundant higher order laterals can arise from a single first-order lateral, and they concentrate in the proximal axile branching zone. Conclusions The new method allows more meaningful sampling than conventional methods because of its easily opened, wide corer and sampling machinery, and effective analysis of RSA using the software. This provides a novel technique for quantifying RSA of field-grown maize and also provides a unique evaluation of the contribution of lateral roots. The method also offers valuable potential for parameterization of root architectural models. PMID:24532646

  19. Quantifying the Short-Term Costs of Conservation Interventions for Fishers at Lake Alaotra, Madagascar

    PubMed Central

    Wallace, Andrea P. C.; Milner-Gulland, E. J.; Jones, Julia P. G.; Bunnefeld, Nils; Young, Richard; Nicholson, Emily

    2015-01-01

    Artisanal fisheries are a key source of food and income for millions of people, but if poorly managed, fishing can have declining returns as well as impacts on biodiversity. Management interventions such as spatial and temporal closures can improve fishery sustainability and reduce environmental degradation, but may carry substantial short-term costs for fishers. The Lake Alaotra wetland in Madagascar supports a commercially important artisanal fishery and provides habitat for a Critically Endangered primate and other endemic wildlife of conservation importance. Using detailed data from more than 1,600 fisher catches, we used linear mixed effects models to explore and quantify relationships between catch weight, effort, and spatial and temporal restrictions to identify drivers of fisher behaviour and quantify the potential effect of fishing restrictions on catch. We found that restricted area interventions and fishery closures would generate direct short-term costs through reduced catch and income, and these costs vary between groups of fishers using different gear. Our results show that conservation interventions can have uneven impacts on local people with different fishing strategies. This information can be used to formulate management strategies that minimise the adverse impacts of interventions, increase local support and compliance, and therefore maximise conservation effectiveness. PMID:26107284
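
    A minimal sketch of the kind of linear mixed-effects model described, using statsmodels with a random intercept per fisher; the variables, the simulated catch data, and the simple closure flag are illustrative assumptions, not the Lake Alaotra dataset or the authors' exact model structure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# toy catch records; fisher ID is a random effect, gear and a simulated
# "closure" flag are fixed effects -- all values are illustrative
rng = np.random.default_rng(5)
n = 300
df = pd.DataFrame({
    "fisher": rng.integers(0, 30, n),
    "effort_hours": rng.gamma(2.0, 2.0, n),
    "gear": rng.choice(["net", "trap"], n),
    "closure": rng.choice([0, 1], n),
})
fisher_effect = rng.normal(0, 0.5, 30)[df["fisher"]]
df["catch_kg"] = np.exp(0.5 + 0.3 * df["effort_hours"]
                        - 0.4 * df["closure"]
                        + (df["gear"] == "trap") * 0.2
                        + fisher_effect + rng.normal(0, 0.3, n))

# linear mixed-effects model of log catch with a random intercept per fisher
model = smf.mixedlm("np.log(catch_kg) ~ effort_hours + gear + closure",
                    data=df, groups=df["fisher"])
result = model.fit()
print(result.summary())
```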

  20. New measurements quantify atmospheric greenhouse effect

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Atreyee

    2012-10-01

    In spite of a large body of existing measurements of incoming short-wave solar radiation and outgoing long-wave terrestrial radiation at the surface of the Earth and, more recently, in the upper atmosphere, there are few observations documenting how radiation profiles change through the atmosphere—information that is necessary to fully quantify the greenhouse effect of Earth's atmosphere. Through the use of existing technology but employing improvements in observational techniques it may now be possible not only to quantify but also to understand how different components of the atmosphere (e.g., concentration of gases, cloud cover, moisture, and aerosols) contribute to the greenhouse effect. Using weather balloons equipped with radiosondes, Philipona et al. continuously measured radiation fluxes from the surface of Earth up to altitudes of 35 kilometers in the upper stratosphere. Combining data from flights conducted during both day and night with continuous 24-hour measurements made at the surface of the Earth, the researchers created radiation profiles of all four components necessary to fully capture the radiation budget of Earth, namely, the upward and downward short-wave and long-wave radiation as a function of altitude.

  1. Quantifying chemical reactions by using mixing analysis.

    PubMed

    Jurado, Anna; Vázquez-Suñé, Enric; Carrera, Jesús; Tubau, Isabel; Pujades, Estanislao

    2015-01-01

    This work is motivated by the need for a sound understanding of the chemical processes that affect organic pollutants in an urban aquifer. We propose an approach to quantify such processes using mixing calculations. The methodology consists of the following steps: (1) identification of the recharge sources (end-members) and selection of the species (conservative and non-conservative) to be used, (2) identification of the chemical processes and (3) evaluation of mixing ratios including the chemical processes. This methodology has been applied in the Besòs River Delta (NE Barcelona, Spain), where the River Besòs is the main aquifer recharge source. A total of 51 groundwater samples were collected from July 2007 to May 2010 during four field campaigns. Three river end-members were necessary to explain the temporal variability of the River Besòs: one river end-member is from the wet periods (W1) and two are from dry periods (D1 and D2). This methodology has proved useful not only to compute the mixing ratios but also to quantify processes such as calcite and magnesite dissolution, aerobic respiration and denitrification occurring at each observation point. PMID:25280248
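
    A hedged sketch of step (3), computing mixing ratios of the three river end-members from conservative species by least squares with a sum-to-one constraint; the species and concentrations are placeholders, not the Besòs data. Departures of non-conservative species from the mixed prediction would then be attributed to reactions such as calcite dissolution or denitrification.

```python
import numpy as np

# end-member concentrations of conservative species (columns: W1, D1, D2);
# species and values are illustrative placeholders, not the Besos data
species = ["Cl", "SO4", "Na"]
E = np.array([[120.0,  310.0,  450.0],     # Cl  (mg/L)
              [ 60.0,  140.0,  180.0],     # SO4 (mg/L)
              [ 80.0,  190.0,  260.0]])    # Na  (mg/L)
sample = np.array([240.0, 105.0, 150.0])   # one groundwater sample

# least squares for mixing fractions f with the constraint sum(f) = 1
# enforced as a heavily weighted extra equation
w = 1e3
A = np.vstack([E, w * np.ones(3)])
b = np.concatenate([sample, [w]])
f, *_ = np.linalg.lstsq(A, b, rcond=None)
f = np.clip(f, 0.0, None)
f /= f.sum()
print({name: round(frac, 3) for name, frac in zip(["W1", "D1", "D2"], f)})
```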

  2. Precise thermal NDE for quantifying structural damage

    SciTech Connect

    Del Grande, N.K.; Durbin, P.F.

    1995-09-18

    The authors demonstrated a fast, wide-area, precise thermal NDE imaging system to quantify aircraft corrosion damage, such as percent metal loss, above a threshold of 5% with 3% overall uncertainties. The DBIR precise thermal imaging and detection method has been used successfully to characterize defect types, and their respective depths, in aircraft skins and in multi-layered composite materials used for wing patches, doublers and stiffeners. This precise thermal NDE inspection tool has long-term potential benefits for evaluating the structural integrity of airframes, pipelines and waste containers. The authors proved the feasibility of the DBIR thermal NDE imaging system for inspecting concrete and asphalt-concrete bridge decks. As a logical extension of the successful feasibility study, they plan to inspect a concrete bridge deck from a moving vehicle to quantify the volumetric damage within the deck and the percentage of the deck which has subsurface delaminations. Potential near-term benefits include in-service monitoring from a moving vehicle to inspect the structural integrity of the bridge deck. This would help prioritize the repair schedule for a reported 200,000 bridge decks in the US that need substantive repairs. Potential long-term benefits are affordable and reliable rehabilitation of bridge decks.

  3. Quantifying the BICEP2-Planck tension over gravitational waves.

    PubMed

    Smith, Kendrick M; Dvorkin, Cora; Boyle, Latham; Turok, Neil; Halpern, Mark; Hinshaw, Gary; Gold, Ben

    2014-07-18

    The recent BICEP2 measurement of B-mode polarization in the cosmic microwave background (r = 0.2, +0.07/-0.05), a possible indication of primordial gravitational waves, appears to be in tension with the upper limit from WMAP (r < 0.13 at 95% C.L.) and Planck (r < 0.11 at 95% C.L.). We carefully quantify the level of tension and show that it is very significant (around 0.1% unlikely) when the observed deficit of large-scale temperature power is taken into account. We show that measurements of TE and EE power spectra in the near future will discriminate between the hypotheses that this tension is either a statistical fluke or a sign of new physics. We also discuss extensions of the standard cosmological model that relieve the tension and some novel ways to constrain them. PMID:25083631

  4. Using nitrate to quantify quick flow in a karst aquifer

    USGS Publications Warehouse

    Mahler, B.J.; Garner, B.D.

    2009-01-01

    In karst aquifers, contaminated recharge can degrade spring water quality, but quantifying the rapid recharge (quick flow) component of spring flow is challenging because of its temporal variability. Here, we investigate the use of nitrate in a two-endmember mixing model to quantify quick flow in Barton Springs, Austin, Texas. Historical nitrate data from recharging creeks and Barton Springs were evaluated to determine a representative nitrate concentration for the aquifer water endmember (1.5 mg/L) and the quick flow endmember (0.17 mg/L for nonstormflow conditions and 0.25 mg/L for stormflow conditions). Under nonstormflow conditions for 1990 to 2005, model results indicated that quick flow contributed from 0% to 55% of spring flow. The nitrate-based two-endmember model was applied to the response of Barton Springs to a storm and results compared to those produced using the same model with δ18O and specific conductance (SC) as tracers. Additionally, the mixing model was modified to allow endmember quick flow values to vary over time. Of the three tracers, nitrate appears to be the most advantageous because it is conservative and because the difference between the concentrations in the two endmembers is large relative to their variance. The δ18O-based model was very sensitive to variability within the quick flow endmember, and SC was not conservative over the timescale of the storm response. We conclude that a nitrate-based two-endmember mixing model might provide a useful approach for quantifying the temporally variable quick flow component of spring flow in some karst systems.
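
    A minimal sketch of the two-endmember mixing calculation, using the endmember nitrate concentrations quoted in the abstract; the example spring concentration is illustrative.

```python
def quick_flow_fraction(c_spring, c_aquifer=1.5, c_quick=0.17):
    """Two-endmember mixing: fraction of spring flow contributed by rapid
    recharge, from nitrate concentrations (mg/L as N). Endmember values
    follow the abstract (1.5 mg/L aquifer water, 0.17 mg/L quick flow
    under nonstormflow conditions)."""
    f = (c_aquifer - c_spring) / (c_aquifer - c_quick)
    return min(max(f, 0.0), 1.0)

# example: a measured spring nitrate concentration of 1.0 mg/L
print(f"quick flow fraction: {quick_flow_fraction(1.0):.0%}")
```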

  5. Quantifying Uncertainties in Rainfall Maps from Cellular Communication Networks

    NASA Astrophysics Data System (ADS)

    Uijlenhoet, R.; Rios Gaona, M. F.; Overeem, A.; Leijnse, H.

    2014-12-01

    The core idea behind rainfall retrievals from commercial microwave link networks is to measure the decrease in power due to attenuation of the electromagnetic signal by raindrops along the link path. Accurate rainfall measurements are of vital importance in hydrological applications, for instance, flash-flood early-warning systems, agriculture, and climate modeling. Hence, such an alternative technique fulfills the need for measurements with higher resolution in time and space, especially in places where standard rain gauge-networks are scarce or poorly maintained. Rainfall estimation via commercial microwave link networks, at country-wide scales, has recently been demonstrated. Despite their potential applicability in rainfall estimation at higher spatiotemporal resolutions, the uncertainties present in link-based rainfall maps are not yet fully comprehended. Now we attempt to quantify the inherent sources of uncertainty present in interpolated maps computed from commercial microwave link rainfall retrievals. In order to disentangle these sources of uncertainty we identified four main sources of error: 1) microwave link measurements, 2) availability of microwave link measurements, 3) spatial distribution of the network, and 4) interpolation methodology. We computed more than 1000 rainfall fields, for The Netherlands, from real and simulated microwave link data. These rainfall fields were compared to quality-controlled gauge-adjusted radar rainfall maps considered as ground-truth. Thus we were able to quantify the contribution of errors in microwave link measurements to the overall uncertainty. The actual performance of the commercial microwave link network is affected by the intermittent availability of the links, not only in time but also in space. We simulated a fully-operational network in time and space, and thus we quantified the role of the availability of microwave link measurements to the overall uncertainty. This research showed that the largest source of uncertainty is related to the microwave link measurements themselves (~55%). The second largest source of uncertainty (~20%) is attributed to the intermittence in the availability of microwave link data.

  6. Design and Analysis of a Micromechanical Three-Component Force Sensor for Characterizing and Quantifying Surface Roughness

    NASA Astrophysics Data System (ADS)

    Liang, Q.; Wu, W.; Zhang, D.; Wei, B.; Sun, W.; Wang, Y.; Ge, Y.

    2015-10-01

    Roughness, which can represent the trade-off between manufacturing cost and performance of mechanical components, is a critical predictor of cracks, corrosion and fatigue damage. In order to measure polished or super-finished surfaces, a novel touch probe based on a three-component force sensor for characterizing and quantifying surface roughness is proposed, fabricated by silicon micromachining technology. The sensor design is based on a cross-beam structure, which ensures that the system possesses high sensitivity and low coupling. The results show that the proposed sensor possesses high sensitivity, low coupling error, and a temperature compensation function. The proposed system can be used to investigate micromechanical structures with nanometer accuracy.

  7. Automated Reasoning in Quantified Modal and Temporal Logics 

    E-print Network

    Castellini, Claudio

    This thesis is about automated reasoning in quantified modal and temporal logics, with an application to formal methods. Quantified modal and temporal logics are extensions of classical first-order logic in which the notion ...

  8. Interpreting cortical bone adaptation and load history by quantifying osteon morphotypes in circularly polarized light images.

    PubMed

    Skedros, John G; Mendenhall, Shaun D; Kiser, Casey J; Winet, Howard

    2009-03-01

    Birefringence variations in circularly polarized light (CPL) images of thin plane-parallel sections of cortical bone can be used to quantify regional differences in predominant collagen fiber orientation (CFO). Using CPL images of equine third metacarpals (MC3s), R.B. Martin, V.A. Gibson, S.M. Stover, J.C. Gibeling, and L.V. Griffin. (40) described six secondary osteon variants ('morphotypes') and suggested that differences in their regional prevalence affect fatigue resistance and toughness. They devised a numerical osteon morphotype score (MTS) for quantifying regional differences in osteon morphotypes. We have observed that a modification of this score could significantly improve its use for interpreting load history. We hypothesized that our modified osteon MTS would more accurately reveal differences in osteon MTSs between opposing "tension" and "compression" cortices of diaphyses of habitually bent bones. This was tested using CPL images in transverse sections of calcanei from sheep, deer, and horses, and radii from sheep and horses. Equine MC3s and sheep tibiae were examined as controls because they experience comparatively greater load complexity that, because of increased prevalence of torsion/shear, would not require regional mechanical enhancements provided by different osteon morphotypes. Predominant CFO, which can reliably reflect adaptation for a regionally prevalent strain mode, was quantified as mean gray levels from birefringence of entire images (excluding pore spaces) in anterior, posterior, medial, and lateral cortices. Results showed that, in contrast to the original scoring scheme of Martin et al., the modified scheme revealed significant anterior/posterior differences in osteon MTSs in nearly all "tension/compression" bones (p<0.0001), but not in equine MC3s (p=0.30) and sheep tibiae (p=0.35). Among habitually bent bones, sheep radii were the exception; relatively lower osteon populations and the birefringence of the primary bone contributed to this result. Correlations between osteon MTSs using the scoring scheme of Martin et al. with CFO data from all regions of each bone invariably demonstrated weak-to-moderate negative correlations. This contrasts with typically high positive correlations between modified osteon MTSs and regional CFO. These results show that the modified osteon MTS can be a strong correlate of predominant CFO and of the non-uniform strain distribution produced by habitual bending. PMID:19049911

  9. Life cycle assessment of urban wastewater systems: Quantifying the relative contribution of sewer systems.

    PubMed

    Risch, Eva; Gutierrez, Oriol; Roux, Philippe; Boutin, Catherine; Corominas, Lluís

    2015-06-15

    This study aims to propose a holistic, life cycle assessment (LCA) of urban wastewater systems (UWS) based on a comprehensive inventory including detailed construction and operation of sewer systems and wastewater treatment plants (WWTPs). For the first time, the inventory of sewers infrastructure construction includes piping materials and aggregates, manholes, connections, civil works and road rehabilitation. The operation stage comprises energy consumption in pumping stations together with air emissions of methane and hydrogen sulphide, and water emissions from sewer leaks. Using a real case study, this LCA aims to quantify the contributions of sewer systems to the total environmental impacts of the UWS. The results show that the construction of sewer infrastructures has an environmental impact (on half of the 18 studied impact categories) larger than both the construction and operation of the WWTP. This study highlights the importance of including the construction and operation of sewer systems in the environmental assessment of centralised versus decentralised options for UWS. PMID:25839834

  10. Quantifying fiber formation in meat analogs under high moisture extrusion using image processing

    NASA Astrophysics Data System (ADS)

    Ranasinghesagara, J.; Hsieh, F.; Yao, G.

    2005-11-01

    High moisture extrusion using twin-screw extruders shows great promise for producing meat analog products with vegetable proteins. The resulting products have well-defined fiber formations and resemble real meat in both visual appearance and taste sensation. Developing reliable non-destructive techniques to quantify the textural properties of extrudates is important for quality control in the manufacturing process. In this study, we developed an image processing technique to automatically characterize sample fiber formation using digital imaging. The algorithm is based on statistical analysis of the Hough transform. This objective method can be used as a standard method for evaluating other non-invasive methods. We have compared the fiber formation indices measured using this technique and a non-invasive fluorescence polarization method and obtained a high correlation.
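
    A hedged sketch of one way to derive a fiber-alignment index from the Hough accumulator of an edge image (not necessarily the authors' algorithm), using scikit-image; the synthetic striped image and all parameters are illustrative.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_line

def fiber_alignment_index(image):
    """For each orientation keep the strongest line's vote count, then
    measure how concentrated these votes are over orientation
    (1 = strongly oriented, ~0 = isotropic)."""
    edges = canny(image, sigma=1.5)
    thetas = np.linspace(-np.pi / 2, np.pi / 2, 180, endpoint=False)
    accumulator, angles, _ = hough_line(edges, theta=thetas)
    strongest = accumulator.max(axis=0).astype(float)   # best line per angle
    p = strongest / strongest.sum()
    # resultant length of the axial (angle-doubled) orientation distribution
    return float(np.abs(np.sum(p * np.exp(2j * angles))))

# toy "extrudate" image with horizontal fiber-like stripes (illustrative)
img = np.zeros((200, 200))
for r in range(0, 200, 20):
    img[r:r + 3, :] = 1.0
print(f"fiber alignment index: {fiber_alignment_index(img):.2f}")
```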

  11. Quantifying compositional impacts of ambient aerosol on cloud droplet formation

    NASA Astrophysics Data System (ADS)

    Lance, Sara

    It has been historically assumed that most of the uncertainty associated with the aerosol indirect effect on climate can be attributed to the unpredictability of updrafts. In Chapter 1, we analyze the sensitivity of cloud droplet number density, to realistic variations in aerosol chemical properties and to variable updraft velocities using a 1-dimensional cloud parcel model in three important environmental cases (continental, polluted and remote marine). The results suggest that aerosol chemical variability may be as important to the aerosol indirect effect as the effect of unresolved cloud dynamics, especially in polluted environments. We next used a continuous flow streamwise thermal gradient Cloud Condensation Nuclei counter (CCNc) to study the water-uptake properties of the ambient aerosol, by exposing an aerosol sample to a controlled water vapor supersaturation and counting the resulting number of droplets. In Chapter 2, we modeled and experimentally characterized the heat transfer properties and droplet growth within the CCNc. Chapter 3 describes results from the MIRAGE field campaign, in which the CCNc and a Hygroscopicity Tandem Differential Mobility Analyzer (HTDMA) were deployed at a ground-based site during March, 2006. Size-resolved CCN activation spectra and growth factor distributions of the ambient aerosol in Mexico City were obtained, and an analytical technique was developed to quantify a probability distribution of solute volume fractions for the CCN in addition to the aerosol mixing-state. The CCN were shown to be much less CCN active than ammonium sulfate, with water uptake properties more consistent with low molecular weight organic compounds. The pollution outflow from Mexico City was shown to have CCN with an even lower fraction of soluble material. "Chemical Closure" was attained for the CCN, by comparing the inferred solute volume fraction with that from direct chemical measurements. A clear diurnal pattern was observed for the CCN solute volume fraction, showing that measurable aging of the aerosol population occurs during the day, on the timescale of a few hours. The mixing state of the aerosol, also showing a consistent diurnal pattern, clearly correlates with a chemical tracer for local combustion sources. Chapter 4 describes results from the GoMACCS field study, in which the CCNc was subsequently deployed on an airborne field campaign in Houston, Texas during August-September, 2006. GoMACCS tested our ability to predict CCN for highly polluted conditions with limited chemical information. Assuming the particles were composed purely of ammonium sulfate, CCN closure was obtained with a 10% overprediction bias on average for CCN concentrations ranging from less than 100 cm-3 to over 10,000 cm-3, but with on average 50% variability. Assuming measured concentrations of organics to be internally mixed and insoluble tended to reduce the overprediction bias for less polluted conditions, but led to underprediction bias in the most polluted conditions. A likely explanation is that the high organic concentrations in the polluted environments depress the surface tension of the droplets, thereby enabling activation at lower soluble fractions.

  12. Quantifying the Behavioural Relevance of Hippocampal Neurogenesis

    PubMed Central

    Lazic, Stanley E.; Fuss, Johannes; Gass, Peter

    2014-01-01

    Few studies that examine the neurogenesis–behaviour relationship formally establish covariation between neurogenesis and behaviour or rule out competing explanations. The behavioural relevance of neurogenesis might therefore be overestimated if other mechanisms account for some, or even all, of the experimental effects. A systematic review of the literature was conducted and the data reanalysed using causal mediation analysis, which can estimate the behavioural contribution of new hippocampal neurons separately from other mechanisms that might be operating. Results from eleven eligible individual studies were then combined in a meta-analysis to increase precision (representing data from 215 animals) and showed that neurogenesis made a negligible contribution to behaviour (standardised effect = 0.15; 95% CI = −0.04 to 0.34; p = 0.128); other mechanisms accounted for the majority of experimental effects (standardised effect = 1.06; 95% CI = 0.74 to 1.38; p = 1.7×10⁻¹¹). PMID:25426717
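
    For readers unfamiliar with causal mediation analysis, the sketch below illustrates the product-of-coefficients logic that separates the contribution of the mediator (neurogenesis) from other, direct mechanisms; it uses simulated data and a deliberately simplified model, not the review's actual estimation procedure.

```python
# Hedged illustration of simple causal mediation (product of coefficients):
# indirect effect = a * b, direct effect = c'. Simulated data only; the
# meta-analysis in the abstract used a more elaborate model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 215
treatment = rng.integers(0, 2, n)                       # e.g., exercise vs. sedentary
neurogenesis = 0.2 * treatment + rng.normal(0, 1, n)    # mediator
behaviour = 1.0 * treatment + 0.15 * neurogenesis + rng.normal(0, 1, n)

# Path a: treatment -> mediator
a = sm.OLS(neurogenesis, sm.add_constant(treatment)).fit().params[1]
# Paths b (mediator -> outcome) and c' (direct effect), adjusting for each other
X = sm.add_constant(np.column_stack([treatment, neurogenesis]))
fit = sm.OLS(behaviour, X).fit()
c_prime, b = fit.params[1], fit.params[2]

print(f"indirect (via neurogenesis): {a * b:.3f}")
print(f"direct (other mechanisms):   {c_prime:.3f}")
```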

  13. Molecular Marker Approach on Characterizing and Quantifying Charcoal in Environmental Media

    NASA Astrophysics Data System (ADS)

    Kuo, L.; Herbert, B. E.; Louchouarn, P.

    2006-12-01

    Black carbon (BC) is widely distributed in natural environments including soils, sediments, freshwater, seawater and the atmosphere. It is produced mostly from the incomplete combustion of fossil fuels and vegetation. In recent years, increasing attention has been given to BC due to its potential influence in many biogeochemical processes. In the environment, BC exists as a continuum ranging from partly charred plant materials and charcoal residues to highly condensed soot and graphite particles. The heterogeneous nature of black carbon means that BC is always operationally defined, highlighting the need for standard methods that support data comparisons. Unlike soot and graphite, which can be quantified with well-established methods, charcoal is difficult to quantify directly in geologic media due to its chemical and physical heterogeneity. Most of the available charcoal quantification methods detect unknown fractions of the BC continuum. To specifically identify and quantify charcoal in soils and sediments, we adopted and validated an innovative molecular marker approach that quantifies levoglucosan, a pyrogenic derivative of cellulose, as a proxy of charcoal. Levoglucosan is source-specific, stable, and detectable at low concentrations using gas chromatography-mass spectrometry (GC-MS). In the present study, two different plant species, honey mesquite and cordgrass, were selected as the raw materials to synthesize charcoals. The lab-synthesized charcoals were made under controlled conditions to eliminate the high heterogeneity often found in natural charcoals. The effects of two major combustion factors, temperature and duration, on the yield of levoglucosan were characterized in the lab-synthesized charcoals. Our results showed that significant levoglucosan production in the two types of charcoal was restricted to relatively low combustion temperatures (150-350 °C). The combustion duration did not cause significant differences in the yield of levoglucosan in the two charcoals. Interestingly, the low temperature charcoals are undetectable by the acid dichromate oxidation method, a popular soot/charcoal analytical approach. Our study demonstrates that levoglucosan can serve as a proxy of low temperature charcoals that are undetectable using other BC methods. Moreover, our study highlights the limitations of the common BC quantification methods in characterizing the entire BC continuum.

  14. Quantifying transmission by stage of infection in the field: the example of SIV-1 and STLV-1 infecting mandrills.

    PubMed

    Roussel, Marion; Pontier, Dominique; Kazanji, Mirdad; Ngoubangoye, Barthélémy; Mahieux, Renaud; Verrier, Delphine; Fouchet, David

    2015-03-01

    The early stage of viral infection is often followed by an important increase in viral load and is generally considered to be the stage most at risk for pathogen transmission. Most methods quantifying the relative importance of the different stages of infection were developed for studies aimed at measuring HIV transmission in humans. However, they cannot be transposed to animal populations in which less information is available. Here we propose a general method to quantify the importance of the early and late stages of the infection on micro-organism transmission from field studies. The method is based on a state-space dynamical model parameterized using Bayesian inference. It is illustrated with a 28-year dataset on mandrills infected by Simian Immunodeficiency Virus type-1 (SIV-1) and the Simian T-Cell Lymphotropic Virus type-1 (STLV-1). For both viruses we show that transmission is predominant during the early stage of the infection (transmission ratio for SIV-1: 1.16 [0.0009; 18.15] and 9.92 [0.03; 83.8] for STLV-1). However, in terms of the basic reproductive number (R0), which quantifies the weight of both stages in the spread of the virus, the results suggest that the epidemics of SIV-1 and STLV-1 are mainly driven by late transmissions in this population. PMID:25296992

  15. Quantifying Meteorite Impact Craters Individual Volume Data Sheet

    E-print Network

    Polly, David

    Worksheet excerpt (table formatting lost in extraction): individual volume and speed data sheets for the Quantifying Meteorite Impact Craters experiment, recording three trials each at drop heights of 150 cm, 100 cm, and 50 cm.

  16. Quantifying the effect size of changing environmental controls on carbon release from permafrost-affected soils

    NASA Astrophysics Data System (ADS)

    Schaedel, C.; Bader, M. K. F.; Schuur, E. A. G.; Bracho, R. G.; Capek, P.; De Baets, S. L.; Diakova, K.; Ernakovich, J. G.; Hartley, I. P.; Iversen, C. M.; Kane, E. S.; Knoblauch, C.; Lupascu, M.; Natali, S.; Norby, R. J.; O'Donnell, J. A.; Roy Chowdhury, T.; Santruckova, H.; Shaver, G. R.; Sloan, V. L.; Treat, C. C.; Waldrop, M. P.

    2014-12-01

    High-latitude surface air temperatures are rising twice as fast as the global mean, causing permafrost to thaw and thereby exposing large quantities of previously frozen organic carbon (C) to microbial decomposition. Increasing temperatures in high-latitude ecosystems not only increase C emissions from previously frozen C in permafrost but also indirectly affect the C cycle through changes in regional and local hydrology. Warmer temperatures increase thawing of ice-rich permafrost, causing land surface subsidence where soils become waterlogged, anoxic conditions prevail, and C is released in the form of CO2 and CH4. Although substrate quality, physical protection, and nutrient availability affect C decomposition, increasing temperatures and changes in surface and sub-surface hydrology are likely the dominant factors affecting the rate and form of C release from permafrost; however, their effect size on C release is poorly quantified. We have compiled a database of 24 incubation studies with soils from the active layer and permafrost from across the entire permafrost zone to quantify (a) the effect size of increasing temperature and (b) the effect of a change from aerobic to anaerobic soil conditions on C release. Results from two different meta-analyses show that a 10°C increase in temperature increased C release by a factor of two in boreal forest, peatland and tundra ecosystems. Under aerobic incubation conditions, soils released on average three times more C than under anaerobic conditions, with large variation among the different ecosystems. While peatlands showed similar amounts of C release under aerobic and anaerobic soil conditions, tundra and boreal forest ecosystems released up to 8 times more C under oxic conditions. This pan-arctic synthesis shows that boreal forest and tundra soils will have a larger impact on climate change when newly thawed permafrost C decomposes in an aerobic environment compared to an anaerobic environment, even when accounting for the higher heat-trapping capacity of CH4 over a 100-year timescale.
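
    As a rough illustration of the effect-size calculation behind such a meta-analysis, the sketch below pools log response ratios of C release for a warming contrast and converts the pooled value into a multiplicative factor; the numbers and the simple fixed-effect weighting are assumptions for illustration, not the study's data or exact method.

```python
# Hedged sketch: pooled log response ratio (lnRR) for C release under a
# +10 degree C warming contrast. All values are invented for illustration.
import numpy as np

# Per-study mean C release (same units) at low vs. high incubation temperature,
# with a rough variance for each study's lnRR (assumed here).
c_low    = np.array([1.2, 0.8, 2.1, 1.5])
c_high   = np.array([2.5, 1.7, 4.0, 2.9])
var_lnrr = np.array([0.04, 0.06, 0.05, 0.03])

lnrr = np.log(c_high / c_low)              # per-study effect size
w = 1.0 / var_lnrr                         # inverse-variance weights
pooled = np.sum(w * lnrr) / np.sum(w)      # fixed-effect pooled lnRR
se = np.sqrt(1.0 / np.sum(w))

factor = np.exp(pooled)                    # multiplicative increase in C release
ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
print(f"C release increased by a factor of {factor:.2f} "
      f"(95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```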

  17. European gasoline survey shows decreasing lead, MON

    SciTech Connect

    1995-10-02

    Associated Octel Co. Ltd., London, has released the results of its 1994 survey of European gasoline quality. Octel collected and analyzed more than 200 gasoline samples taken from sampling points close to major European refineries. Over the past decade, Octel's surveys have demonstrated reduced use of lead antiknock compounds and increased use of high-octane blending components. Despite increased blending of alkylate and isomerate into gasolines at European refineries, many gasolines tested had MONs close to minimum national requirements. Figures show trends in MON and RON, respectively, in four important European markets: France, Germany, Iberia (defined by Octel as Spain and Portugal), and the U.K.

  18. The processing of polar quantifiers, and numerosity perception.

    PubMed

    Deschamps, Isabelle; Agmon, Galit; Loewenstein, Yonatan; Grodzinsky, Yosef

    2015-10-01

    We investigated the course of language processing in the context of a verification task that required numerical estimation and comparison. Participants listened to sentences with complex quantifiers that contrasted in Polarity, a logical property (e.g., more-than-half, less-than-half), and then performed speeded verification on visual scenarios that displayed a proportion between 2 discrete quantities. We varied systematically not only the sentences, but also the visual materials, in order to study their effect on the verification process. Next, we used the same visual scenarios with analogous non-verbal probes that featured arithmetical inequality symbols (<, >). This manipulation enabled us to measure not only Polarity effects, but also, to compare the effect of different probe types (linguistic, non-linguistic) on processing. Like many previous studies, our results demonstrate that perceptual difficulty affects error rate and reaction time in keeping with Weber's Law. Interestingly, these performance parameters are also affected by the Polarity of the quantifiers used, despite the fact that sentences had the exact same meaning, sentence structure, number of words, syllables, and temporal structure. Moreover, an analogous contrast between the non-linguistic probes (<, >) had no effect on performance. Finally, we observed no interaction between performance parameters governed by Weber's Law and those affected by Polarity. We consider 4 possible accounts of the results (syntactic, semantic, pragmatic, frequency-based), and discuss their relative merit. PMID:26142825

  19. Quantifying uncertainty in LCA-modelling of waste management systems

    SciTech Connect

    Clavreul, Julie; Guyonnet, Dominique; Christensen, Thomas H.

    2012-12-15

    Highlights: ► Uncertainty in LCA-modelling of waste management is significant. ► Model, scenario and parameter uncertainties contribute. ► Sequential procedure for quantifying uncertainty is proposed. ► Application of procedure is illustrated by a case-study. - Abstract: Uncertainty analysis in LCA studies has been subject to major progress over the last years. In the context of waste management, various methods have been implemented but a systematic method for uncertainty analysis of waste-LCA studies is lacking. The objective of this paper is (1) to present the sources of uncertainty specifically inherent to waste-LCA studies, (2) to select and apply several methods for uncertainty analysis and (3) to develop a general framework for quantitative uncertainty assessment of LCA of waste management systems. The suggested method is a sequence of four steps combining the selected methods: (Step 1) a sensitivity analysis evaluating the sensitivities of the results with respect to the input uncertainties, (Step 2) an uncertainty propagation providing appropriate tools for representing uncertainties and calculating the overall uncertainty of the model results, (Step 3) an uncertainty contribution analysis quantifying the contribution of each parameter uncertainty to the final uncertainty and (Step 4) as a new approach, a combined sensitivity analysis providing a visualisation of the shift in the ranking of different options due to variations of selected key parameters. This tiered approach optimises the resources available to LCA practitioners by only propagating the most influential uncertainties.
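
    Steps 2 and 3 of the proposed sequence, uncertainty propagation and uncertainty contribution analysis, can be illustrated with a small Monte Carlo sketch; the toy impact model, the parameter distributions, and the correlation-based contribution measure below are assumptions for illustration rather than the paper's case-study model.

```python
# Hedged sketch of Steps 2-3: propagate parameter uncertainty through a toy
# waste-LCA impact model and estimate each parameter's contribution to the
# output variance. Model and distributions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Toy model: net GWP = transport + incineration - energy recovery credit
transport_km  = rng.normal(50, 10, n)       # km, uncertain
ef_transport  = rng.normal(0.1, 0.02, n)    # kg CO2-eq per km
incineration  = rng.normal(300, 40, n)      # kg CO2-eq per tonne waste
energy_credit = rng.normal(120, 30, n)      # kg CO2-eq avoided per tonne

gwp = transport_km * ef_transport + incineration - energy_credit

print(f"net GWP: mean {gwp.mean():.1f}, 2.5-97.5% range "
      f"{np.percentile(gwp, 2.5):.1f} to {np.percentile(gwp, 97.5):.1f} kg CO2-eq/t")

# Crude contribution analysis: squared correlation of each input with the output
for name, x in [("transport_km", transport_km), ("ef_transport", ef_transport),
                ("incineration", incineration), ("energy_credit", energy_credit)]:
    r2 = np.corrcoef(x, gwp)[0, 1] ** 2
    print(f"{name:>14}: ~{100 * r2:.0f}% of output variance")
```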

  20. Quantifying Power Grid Risk from Geomagnetic Storms

    NASA Astrophysics Data System (ADS)

    Homeier, N.; Wei, L. H.; Gannon, J. L.

    2012-12-01

    We are creating a statistical model of the geophysical environment that can be used to quantify the geomagnetic storm hazard to power grid infrastructure. Our model is developed using a database of surface electric fields for the continental United States during a set of historical geomagnetic storms. These electric fields are derived from the SUPERMAG compilation of worldwide magnetometer data and surface impedances from the United States Geological Survey. These electric field data can be combined with a power grid model to determine GICs per node and reactive MVARs at each minute during a storm. Using publicly available substation locations, we derive location-based relative risk maps by combining magnetic latitude and ground conductivity. We also estimate the surface electric fields during the August 1972 geomagnetic storm that caused a telephone cable outage across the middle of the United States. This event produced the largest surface electric fields in the continental U.S. in at least the past 40 years.

  1. How to quantify conduits in wood?

    PubMed Central

    Scholz, Alexander; Klepsch, Matthias; Karimi, Zohreh; Jansen, Steven

    2013-01-01

    Vessels and tracheids represent the most important xylem cells with respect to long distance water transport in plants. Wood anatomical studies frequently provide several quantitative details of these cells, such as vessel diameter, vessel density, vessel element length, and tracheid length, while important information on the three dimensional structure of the hydraulic network is not considered. This paper aims to provide an overview of various techniques, although there is no standard protocol to quantify conduits due to high anatomical variation and a wide range of techniques available. Despite recent progress in image analysis programs and automated methods for measuring cell dimensions, density, and spatial distribution, various characters remain time-consuming and tedious. Quantification of vessels and tracheids is not only important to better understand functional adaptations of tracheary elements to environment parameters, but will also be essential for linking wood anatomy with other fields such as wood development, xylem physiology, palaeobotany, and dendrochronology. PMID:23507674

  2. Quantifying International Travel Flows Using Flickr

    PubMed Central

    Barchiesi, Daniele; Moat, Helen Susannah; Alis, Christian; Bishop, Steven; Preis, Tobias

    2015-01-01

    Online social media platforms are opening up new opportunities to analyse human behaviour on an unprecedented scale. In some cases, the fast, cheap measurements of human behaviour gained from these platforms may offer an alternative to gathering such measurements using traditional, time-consuming and expensive surveys. Here, we use geotagged photographs uploaded to the photo-sharing website Flickr to quantify international travel flows, by extracting the location of users and inferring trajectories to track their movement across time. We find that Flickr-based estimates of the number of visitors to the United Kingdom significantly correlate with the official estimates released by the UK Office for National Statistics, for 28 countries for which official estimates are calculated. Our findings underline the potential for indicators of key aspects of human behaviour, such as mobility, to be generated from data attached to the vast volumes of photographs posted online. PMID:26147500

  3. Crowdsourcing for quantifying transcripts: An exploratory study.

    PubMed

    Azzam, Tarek; Harman, Elena

    2016-02-01

    This exploratory study attempts to demonstrate the potential utility of crowdsourcing as a supplemental technique for quantifying transcribed interviews. Crowdsourcing is the harnessing of the abilities of many people to complete a specific task or a set of tasks. In this study multiple samples of crowdsourced individuals were asked to rate and select supporting quotes from two different transcripts. The findings indicate that the different crowdsourced samples produced nearly identical ratings of the transcripts, and were able to consistently select the same supporting text from the transcripts. These findings suggest that crowdsourcing, with further development, can potentially be used as a mixed method tool to offer a supplemental perspective on transcribed interviews. PMID:26519690

  4. Quantifying fault recovery in multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw; Harary, Frank

    1990-01-01

    Various aspects of reliable computing are formalized and quantified with emphasis on efficient fault recovery. The mathematical model which proves to be most appropriate is provided by the theory of graphs. New measures for fault recovery are developed, and the values of the elements of the fault recovery vector are observed to depend not only on the computation graph H and the architecture graph G, but also on the specific location of a fault. In the examples, a hypercube is chosen as a representative of parallel computer architecture, and a pipeline as a typical configuration for program execution. Dependability qualities of such a system are defined with or without a fault. These qualities are determined by the resiliency triple defined by three parameters: multiplicity, robustness, and configurability. Parameters for measuring the recovery effectiveness are also introduced in terms of distance, time, and the number of new, used, and moved nodes and edges.

  5. Quantifiers More or Less Quantify On-Line: ERP Evidence for Partial Incremental Interpretation

    ERIC Educational Resources Information Center

    Urbach, Thomas P.; Kutas, Marta

    2010-01-01

    Event-related brain potentials were recorded during RSVP reading to test the hypothesis that quantifier expressions are incrementally interpreted fully and immediately. In sentences tapping general knowledge ("Farmers grow crops/worms as their primary source of income"), Experiment 1 found larger N400s for atypical ("worms") than typical objects…

  6. SANTA: Quantifying the Functional Content of Molecular Networks

    PubMed Central

    Cornish, Alex J.; Markowetz, Florian

    2014-01-01

    Linking networks of molecular interactions to cellular functions and phenotypes is a key goal in systems biology. Here, we adapt concepts of spatial statistics to assess the functional content of molecular networks. Based on the guilt-by-association principle, our approach (called SANTA) quantifies the strength of association between a gene set and a network, and functionally annotates molecular networks like other enrichment methods annotate lists of genes. As a general association measure, SANTA can (i) functionally annotate experimentally derived networks using a collection of curated gene sets and (ii) annotate experimentally derived gene sets using a collection of curated networks, as well as (iii) prioritize genes for follow-up analyses. We exemplify the efficacy of SANTA in several case studies using the S. cerevisiae genetic interaction network and genome-wide RNAi screens in cancer cell lines. Our theory, simulations, and applications show that SANTA provides a principled statistical way to quantify the association between molecular networks and cellular functions and phenotypes. SANTA is available from http://bioconductor.org/packages/release/bioc/html/SANTA.html. PMID:25210953

  7. Quantifying Flow Resistance of Mountain Streams Using the HHT Approach

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Fu, X.

    2014-12-01

    This study quantifies the flow resistance of mountain streams with gravel beds and pronounced bed forms. The motivation is to follow the previous idea (Robert, A. 1990) that the bed surface can be divided into micro-scale and macro-scale roughness. We processed the field data of longitudinal bed profiles of the Longxi River, Sichuan Province, China, using the Hilbert-Huang Transform (HHT). Each longitudinal profile was decomposed into a set of curves with different frequencies of spatial fluctuation. The spectrogram was accordingly obtained. We assumed that certain high- and low-frequency curves correspond to the micro- and macro-scale roughness of the stream bed, respectively. From the spectrogram we specified a characteristic height and length that represent the macro bed forms accounting for bed form roughness. We then estimated the bed form roughness as being proportional to the ratio of the height to the length, multiplied by the height (Yang et al., 2005). We also took the parameter Sp, defined as the sinuosity of the highest-frequency curve, as the measure of micro-scale roughness. We then took into account the effect of bed material size by using the product of d50/R and Sp, where d50 is the median sediment size and R is the hydraulic radius. The macro- and micro-scale roughness parameters were combined nonlinearly to evaluate the flow resistance caused by the interplay of friction and form drag. Validation results show that the coefficient of determination can reach as high as 0.84 in the case of the Longxi River. Future studies will focus on the verification against more field data as well as the combination of skin friction and form drag. Key words: flow resistance; roughness; HHT; spectrogram; form drag Robert, A. (1990), Boundary roughness in coarse-grained channels, Prog. Phys. Geogr., 14(1), 42-69. Yang, S.-Q., S.-K. Tan, and S.-Y. Lim. (2005), Flow resistance and bed form geometry in a wide alluvial channel, Water Resour. Res., 41, W09419.
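
    The extraction of a characteristic bed-form height and length from one fluctuation component, and a sinuosity measure for the finest component, can be sketched with the Hilbert transform as below; a full HHT would first split the profile into intrinsic mode functions, and the toy profile, spacing, and component choices are assumptions.

```python
# Hedged sketch: instantaneous amplitude/wavenumber of one fluctuation
# component of a longitudinal bed profile via the Hilbert transform, plus a
# simple sinuosity measure Sp. A full HHT would first decompose the profile
# into intrinsic mode functions (empirical mode decomposition).
import numpy as np
from scipy.signal import hilbert

dx = 0.5                                   # streamwise spacing of bed elevations, m (assumed)
x = np.arange(0, 200, dx)

component = 0.3 * np.sin(2 * np.pi * x / 20)   # stand-in for one low-frequency component
analytic = hilbert(component)
height = 2 * np.mean(np.abs(analytic))         # characteristic bed-form height (crest to trough)
phase = np.unwrap(np.angle(analytic))
wavenumber = np.mean(np.diff(phase)) / dx
length = 2 * np.pi / wavenumber                # characteristic bed-form length

# Micro-scale roughness: sinuosity of the highest-frequency component
micro = 0.02 * np.sin(2 * np.pi * x / 1.5)
sp = np.sum(np.hypot(np.diff(x), np.diff(micro))) / (x[-1] - x[0])

print(f"height ~ {height:.2f} m, length ~ {length:.1f} m, Sp ~ {sp:.3f}")
```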

  8. Quantifying VOC emissions for the strategic petroleum reserve.

    SciTech Connect

    Knowlton, Robert G.; Lord, David L.

    2013-06-01

    A very important aspect of the Department of Energy's (DOE's) Strategic Petroleum Reserve (SPR) program is regulatory compliance. One of the regulatory compliance issues deals with limiting the amount of volatile organic compounds (VOCs) that are emitted into the atmosphere from brine wastes when they are discharged to brine holding ponds. The US Environmental Protection Agency (USEPA) has set limits on the amount of VOCs that can be discharged to the atmosphere. Several attempts have been made to quantify the VOC emissions associated with the brine ponds, going back to the late 1970s. There are potential issues associated with each of these quantification efforts. Two efforts were made to quantify VOC emissions by analyzing the VOC content of brine samples obtained from wells. Efforts to measure air concentrations were mentioned in historical reports, but no data have been located to confirm these assertions. A modeling effort was also performed to quantify the VOC emissions. More recently, in 2011-2013, additional brine sampling has been performed to update the VOC emissions estimate. An analysis of the statistical confidence in these results is presented here. Arguably, there are uncertainties associated with each of these efforts. The analysis herein indicates that the upper confidence limit in VOC emissions based on recent brine sampling is very close to the 0.42 ton/MMB limit used historically on the project. Refining this estimate would require considerable investment in additional sampling, analysis, and monitoring. An analysis of the VOC emissions at each site suggests that additional discharges could be made and stay within current regulatory limits.
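
    The statistical confidence calculation described here amounts, in its simplest form, to an upper confidence limit on mean emissions; the sketch below computes a one-sided t-based limit from invented sample values, with the 0.42 ton/MMB figure from the abstract used only as a comparison point.

```python
# Hedged sketch: one-sided 95% upper confidence limit (UCL) on mean VOC
# emissions from a small set of brine samples. Sample values are invented.
import numpy as np
from scipy import stats

emissions = np.array([0.31, 0.38, 0.35, 0.41, 0.29, 0.37, 0.40, 0.33])  # ton/MMB, assumed

n = emissions.size
mean = emissions.mean()
se = emissions.std(ddof=1) / np.sqrt(n)
ucl95 = mean + stats.t.ppf(0.95, df=n - 1) * se   # one-sided 95% UCL

print(f"mean = {mean:.3f}, 95% UCL = {ucl95:.3f} ton/MMB "
      f"(regulatory comparison value: 0.42 ton/MMB)")
```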

  9. Quantifying Speech Rhythm Abnormalities in the Dysarthrias

    PubMed Central

    Liss, Julie M.; White, Laurence; Mattys, Sven L.; Lansford, Kaitlin; Lotto, Andrew J.; Spitzer, Stephanie M.; Caviness, John N.

    2013-01-01

    Purpose: In this study, the authors examined whether rhythm metrics capable of distinguishing languages with high and low temporal stress contrast also can distinguish among control and dysarthric speakers of American English with perceptually distinct rhythm patterns. Methods: Acoustic measures of vocalic and consonantal segment durations were obtained for speech samples from 55 speakers across 5 groups (hypokinetic, hyperkinetic, flaccid-spastic, ataxic dysarthrias, and controls). Segment durations were used to calculate standard and new rhythm metrics. Discriminant function analyses (DFAs) were used to determine which sets of predictor variables (rhythm metrics) best discriminated between groups (control vs. dysarthrias; and among the 4 dysarthrias). A cross-validation method was used to test the robustness of each original DFA. Results: The majority of classification functions were more than 80% successful in classifying speakers into their appropriate group. New metrics that combined successive vocalic and consonantal segments emerged as important predictor variables. DFAs pitting each dysarthria group against the combined others resulted in unique constellations of predictor variables that yielded high levels of classification accuracy. Conclusions: This study confirms the ability of rhythm metrics to distinguish control speech from dysarthrias and to discriminate dysarthria subtypes. Rhythm metrics show promise for use as a rational and objective clinical tool. PMID:19717656
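
    A discriminant function analysis with cross-validation of the kind described can be sketched as follows; the feature matrix of rhythm metrics and the group labels are random placeholders, and scikit-learn's linear discriminant analysis stands in for the original DFA software.

```python
# Hedged sketch: classify speaker groups from rhythm metrics with linear
# discriminant analysis and leave-one-out cross-validation (scikit-learn).
# Random features stand in for the real vocalic/consonantal metrics.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
n_speakers, n_metrics = 55, 6
X = rng.normal(size=(n_speakers, n_metrics))   # rhythm metrics per speaker (placeholder)
y = rng.integers(0, 5, size=n_speakers)        # 5 groups: 4 dysarthrias + controls

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print(f"cross-validated classification accuracy: {acc.mean():.2f}")
```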

  10. Land cover change and remote sensing: Examples of quantifying spatiotemporal dynamics in tropical forests

    SciTech Connect

    Krummel, J.R.; Su, Haiping; Fox, J.; Yarnasan, S.; Ekasingh, M.

    1995-06-01

    Research on human impacts or natural processes that operate over broad geographic areas must explicitly address issues of scale and spatial heterogeneity. While the tropical forests of Southeast Asia and Mexico have been occupied and used to meet human needs for thousands of years, traditional forest management systems are currently being transformed by rapid and far-reaching demographic, political, economic, and environmental changes. The dynamics of population growth, migration into the remaining frontiers, and responses to national and international market forces result in a demand for land to produce food and fiber. These results illustrate some of the mechanisms that drive current land use changes, especially in the tropical forest frontiers. By linking the outcome of individual land use decisions and measures of landscape fragmentation and change, the aggregated results show the hierarchy of temporal and spatial events that in summation result in global changes to the most complex and sensitive biome -- tropical forests. By quantifying the spatial and temporal patterns of tropical forest change, researchers can assist policy makers by showing how landscape systems in these tropical forests are controlled by physical, biological, social, and economic parameters.

  11. Adults with Autism Show Increased Sensitivity to Outcomes at Low Error Rates during Decision-Making

    ERIC Educational Resources Information Center

    Minassian, Arpi; Paulus, Martin; Lincoln, Alan; Perry, William

    2007-01-01

    Decision-making is an important function that can be quantified using a two-choice prediction task. Individuals with Autistic Disorder (AD) often show highly restricted and repetitive behavior that may interfere with adaptive decision-making. We assessed whether AD adults showed repetitive behavior on the choice task that was unaffected by…

  12. Cross-linguistic relations between quantifiers and numerals in language acquisition: evidence from Japanese.

    PubMed

    Barner, David; Libenson, Amanda; Cheung, Pierina; Takasaki, Mayu

    2009-08-01

    A study of 104 Japanese-speaking 2- to 5-year-olds tested the relation between numeral and quantifier acquisition. A first study assessed Japanese children's comprehension of quantifiers, numerals, and classifiers. Relative to English-speaking counterparts, Japanese children were delayed in numeral comprehension at 2 years of age but showed no difference at 3 and 4 years of age. Also, Japanese 2-year-olds had better comprehension of quantifiers, indicating that their delay was specific to numerals. A second study examined the speech of Japanese and English caregivers to explore the syntactic cues that might affect integer acquisition. Quantifiers and numerals occurred in similar syntactic positions and overlapped to a greater degree in English than in Japanese. Also, Japanese nouns were often dropped, and both quantifiers and numerals exhibited variable positions relative to the nouns they modified. We conclude that syntactic cues in English facilitate bootstrapping numeral meanings from quantifier meanings and that such cues are weaker in classifier languages such as Japanese. PMID:19162276

  13. Quantifiable outcomes from corporate and higher education learning collaborations

    NASA Astrophysics Data System (ADS)

    Devine, Thomas G.

    The study investigated the existence of measurable learning outcomes that emerged out of the shared strengths of collaborating sponsors. The study identified quantifiable learning outcomes that confirm corporate, academic and learner participation in learning collaborations. Each of the three hypotheses and the synergy indicator quantitatively and qualitatively confirmed learning outcomes benefiting participants. The academic indicator quantitatively confirmed that learning outcomes attract learners to the institution. The corporate indicator confirmed that learning outcomes include knowledge exchange and enhanced workforce talents for careers in the energy-utility industry. The learner indicator confirmed that learning outcomes provide professional development opportunities for employment. The synergy indicator confirmed that best learning practices in learning collaborations emanate out of the sponsors' shared strengths, and that partnerships can be elevated to strategic alliances, going beyond responding to the desires of sponsors to create learner-centered cultures. The synergy indicator also confirmed the value of organizational processes that elevate sponsors' interactions to shared strengths and create a learner-centered culture. The study's series of qualitative questions confirmed prior success factors, while verifying the hypothesis results and providing insight not available from quantitative data. The direct beneficiaries of the study are the energy-utility learning-collaboration participants: the corporations, academic institutions, and learners of the collaboration. The indirect beneficiaries are the stakeholders of future learning collaborations, through improved knowledge of the existence or absence of quantifiable learning outcomes.

  14. Quantifying complexity in translational research: an integrated approach

    PubMed Central

    Munoz, David A.; Nembhard, Harriet Black; Kraschnewski, Jennifer L.

    2014-01-01

    Purpose This article quantifies complexity in translational research. The impact of major operational steps and technical requirements (TR) is calculated with respect to their ability to accelerate moving new discoveries into clinical practice. Design/Methodology/Approach A three-phase integrated Quality Function Deployment (QFD) and Analytic Hierarchy Process (AHP) method was used to quantify complexity in translational research. A case study in obesity was used to test usability. Findings Generally, the evidence generated was valuable for understanding various components in translational research. Particularly, we found that collaboration networks, multidisciplinary team capacity and community engagement are crucial for translating new discoveries into practice. Research limitations/implications As the method is mainly based on subjective opinion, some argue that the results may be biased. However, a consistency ratio is calculated and used as a guide to subjectivity. Alternatively, a larger sample may be incorporated to reduce bias. Practical implications The integrated QFD-AHP framework provides evidence that could be helpful to generate agreement, develop guidelines, allocate resources wisely, identify benchmarks and enhance collaboration among similar projects. Originality/value Current conceptual models in translational research provide little or no guidance for assessing complexity. The proposed method aimed to fill this gap. Additionally, the literature review includes various features that have not been explored in translational research. PMID:25417380
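
    The consistency ratio mentioned as a guard against subjectivity comes from the AHP eigenvector method; a minimal sketch of that calculation for a 3×3 pairwise comparison matrix is shown below, with the comparison judgments themselves invented.

```python
# Hedged sketch: AHP priority weights and consistency ratio (CR) for a small
# pairwise comparison matrix. The comparison judgments are invented.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])            # pairwise comparisons of 3 criteria

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # priority vector

n = A.shape[0]
lam_max = eigvals.real[k]
ci = (lam_max - n) / (n - 1)                # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # random index (Saaty), n = 3
cr = ci / ri                                # CR < 0.10 is conventionally acceptable

print("weights:", np.round(weights, 3), " CR:", round(cr, 3))
```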

  15. Methods for quantifying uncertainty in fast reactor analyses.

    SciTech Connect

    Fanning, T. H.; Fischer, P. F.

    2008-04-07

    Liquid-metal-cooled fast reactors in the form of sodium-cooled fast reactors have been successfully built and tested in the U.S. and throughout the world. However, no fast reactor has operated in the U.S. for nearly fourteen years. More importantly, the U.S. has not constructed a fast reactor in nearly 30 years. In addition to reestablishing the necessary industrial infrastructure, the development, testing, and licensing of a new, advanced fast reactor concept will likely require a significant base technology program that will rely more heavily on modeling and simulation than has been done in the past. The ability to quantify uncertainty in modeling and simulations will be an important part of any experimental program and can provide added confidence that established design limits and safety margins are appropriate. In addition, there is an increasing demand from the nuclear industry for best-estimate analysis methods to provide confidence bounds along with their results. The ability to quantify uncertainty will be an important component of modeling that is used to support design, testing, and experimental programs. Three avenues of UQ investigation are proposed. Two relatively new approaches are described which can be directly coupled to simulation codes currently being developed under the Advanced Simulation and Modeling program within the Reactor Campaign. A third approach, based on robust Monte Carlo methods, can be used in conjunction with existing reactor analysis codes as a means of verification and validation of the more detailed approaches.

  16. Quantifying variability on thermal resistance of Listeria monocytogenes.

    PubMed

    Aryani, D C; den Besten, H M W; Hazeleger, W C; Zwietering, M H

    2015-01-16

    Knowledge of the impact of strain variability and growth history on thermal resistance is needed to provide a realistic prediction and an adequate design of thermal treatments. In the present study, in addition to quantifying the effect of strain variability on the thermal resistance of Listeria monocytogenes, biological variability and experimental variability were determined to prioritize their importance. Experimental variability was defined as the repeatability of parallel experimental replicates, and biological variability was defined as the reproducibility of biologically independent reproductions. Furthermore, the effect of growth history was quantified. The thermal inactivation curves of 20 L. monocytogenes strains were fitted using the modified Weibull model, resulting in a total of 360 D-value estimates. The D-value ranged from 9 to 30 min at 55 °C; from 0.6 to 4 min at 60 °C; and from 0.08 to 0.6 min at 65 °C. The estimated z-values of all strains ranged from 4.4 to 5.7 °C. The strain variability was ten times higher than the experimental variability and four times higher than the biological variability. Furthermore, the effect of growth history on thermal resistance variability was not significantly different from that of strain variability and was mainly determined by the growth phase. PMID:25462932
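
    As a reminder of how D- and z-values are related, the sketch below fits a log-linear survival curve at each temperature to obtain D, then fits log10(D) against temperature to obtain z; the survival counts are invented and simple log-linear kinetics stand in for the modified Weibull model used in the study.

```python
# Hedged sketch: estimate D-values from log-linear survival curves and a
# z-value from log10(D) vs. temperature. Counts are invented; the study
# itself used a modified Weibull model rather than log-linear kinetics.
import numpy as np

def d_value(times_min, log10_counts):
    slope, _ = np.polyfit(times_min, log10_counts, 1)
    return -1.0 / slope                     # minutes for a 1-log reduction

# Invented survival data (time in min, log10 CFU/mL) at three temperatures
data = {
    55: ([0, 10, 20, 30], [7.0, 6.3, 5.6, 4.9]),
    60: ([0, 1, 2, 3],    [7.0, 6.4, 5.9, 5.3]),
    65: ([0, 0.2, 0.4],   [7.0, 6.2, 5.5]),
}

temps = np.array(sorted(data))
d_vals = np.array([d_value(*data[t]) for t in temps])

slope, _ = np.polyfit(temps, np.log10(d_vals), 1)
z_value = -1.0 / slope                      # degrees C for a 10-fold change in D

print("D-values (min):", np.round(d_vals, 2), " z-value (C):", round(z_value, 1))
```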

  17. Risk-Quantified Decision-Making at Rocky Flats

    SciTech Connect

    Myers, Jeffrey C.

    2008-01-15

    Surface soils in the 903 Pad Lip Area of the Rocky Flats Environmental Technology Site (RFETS) were contaminated with ²³⁹/²⁴⁰Pu by site operations. To meet remediation goals, areas where ²³⁹/²⁴⁰Pu activity exceeded the threshold level of 50 pCi/g needed to be accurately delineated from those below 50 pCi/g. In addition, the confidence for remedial decisions needed to be quantified and displayed visually. Remedial objectives needed to achieve a 90 percent certainty that unremediated soils had less than a 10 percent chance of ²³⁹/²⁴⁰Pu activity exceeding 50 pCi/g. Removing areas where the chance of exceedance is greater than 10 percent creates a 90 percent confidence in the remedial effort results. To achieve the stipulated goals, the geostatistical approach of probability kriging (Myers 1997) was implemented. Lessons learnt: Geostatistical techniques provided a risk-quantified approach to remedial decision-making and provided visualizations of the excavation area. Error analysis demonstrated compliance and confirmed that more than sufficient soils were removed. Error analysis also illustrated that any soils above the threshold that were not removed would be of nominal activity. These quantitative approaches were useful from a regulatory, engineering, and stakeholder satisfaction perspective.

  18. Quantifying chaotic dynamics from integrate-and-fire processes

    NASA Astrophysics Data System (ADS)

    Pavlov, A. N.; Pavlova, O. N.; Mohammad, Y. K.; Kurths, J.

    2015-01-01

    Characterizing chaotic dynamics from integrate-and-fire (IF) interspike intervals (ISIs) is relatively easily performed at high firing rates. When the firing rate is low, a correct estimation of Lyapunov exponents (LEs) describing dynamical features of complex oscillations reflected in the IF ISI sequences becomes more complicated. In this work we discuss peculiarities and limitations of quantifying chaotic dynamics from IF point processes. We consider the main factors leading to underestimated LEs and demonstrate a way of improving the numerical determination of LEs from IF ISI sequences. We show that estimations of the two largest LEs can be performed using around 400 mean periods of chaotic oscillations in the regime of phase-coherent chaos. Application to real data is discussed.

  19. Quantifying chaotic dynamics from integrate-and-fire processes

    SciTech Connect

    Pavlov, A. N.; Pavlova, O. N.; Mohammad, Y. K.; Kurths, J.

    2015-01-15

    Characterizing chaotic dynamics from integrate-and-fire (IF) interspike intervals (ISIs) is relatively easily performed at high firing rates. When the firing rate is low, a correct estimation of Lyapunov exponents (LEs) describing dynamical features of complex oscillations reflected in the IF ISI sequences becomes more complicated. In this work we discuss peculiarities and limitations of quantifying chaotic dynamics from IF point processes. We consider the main factors leading to underestimated LEs and demonstrate a way of improving the numerical determination of LEs from IF ISI sequences. We show that estimations of the two largest LEs can be performed using around 400 mean periods of chaotic oscillations in the regime of phase-coherent chaos. Application to real data is discussed.

  20. Quantifying Age-dependent Extinction from Species Phylogenies

    PubMed Central

    Alexander, Helen K.; Lambert, Amaury; Stadler, Tanja

    2016-01-01

    Several ecological factors that could play into species extinction are expected to correlate with species age, i.e., time elapsed since the species arose by speciation. To date, however, statistical tools to incorporate species age into likelihood-based phylogenetic inference have been lacking. We present here a computational framework to quantify age-dependent extinction through maximum likelihood parameter estimation based on phylogenetic trees, assuming species lifetimes are gamma distributed. Testing on simulated trees shows that neglecting age dependence can lead to biased estimates of key macroevolutionary parameters. We then apply this method to two real data sets, namely a complete phylogeny of birds (class Aves) and a clade of self-compatible and -incompatible nightshades (Solanaceae), gaining initial insights into the extent to which age-dependent extinction may help explain macroevolutionary patterns. Our methods have been added to the R package TreePar. PMID:26405218

  1. Quantifying Age-dependent Extinction from Species Phylogenies.

    PubMed

    Alexander, Helen K; Lambert, Amaury; Stadler, Tanja

    2016-01-01

    Several ecological factors that could play into species extinction are expected to correlate with species age, i.e., time elapsed since the species arose by speciation. To date, however, statistical tools to incorporate species age into likelihood-based phylogenetic inference have been lacking. We present here a computational framework to quantify age-dependent extinction through maximum likelihood parameter estimation based on phylogenetic trees, assuming species lifetimes are gamma distributed. Testing on simulated trees shows that neglecting age dependence can lead to biased estimates of key macroevolutionary parameters. We then apply this method to two real data sets, namely a complete phylogeny of birds (class Aves) and a clade of self-compatible and -incompatible nightshades (Solanaceae), gaining initial insights into the extent to which age-dependent extinction may help explain macroevolutionary patterns. Our methods have been added to the R package TreePar. PMID:26405218

  2. Manipulating and quantifying temperature-triggered coalescence with microcentrifugation.

    PubMed

    Feng, Huanhuan; Ershov, Dmitry; Krebs, Thomas; Schroen, Karin; Stuart, Martien A Cohen; van der Gucht, Jasper; Sprakel, Joris

    2015-01-01

    In this paper we describe a new approach to quantify the stability and coalescence kinetics of thermally switchable emulsions using an imaging-based microcentrifugation method. We first show that combining synchronized high-speed imaging with microfluidic centrifugation allows the direct measurement of the thermodynamic stability of emulsions, as expressed by the critical disjoining pressure. We apply this to a thermoresponsive emulsion, allowing us to measure the critical disjoining pressure as a function of temperature. The same method, combined with quantitative image analysis, also gives access to droplet-scale details of the coalescence process. We illustrate this by measuring temperature-dependent coalescence rates and by analysing the temperature-induced switching between two distinct microscopic mechanisms by which dense emulsions can destabilise to form a homogeneous oil phase. PMID:25337820

  3. A LC-MS method to quantify tenofovir urinary concentrations in treated patients.

    PubMed

    Simiele, Marco; Carcieri, Chiara; De Nicolò, Amedeo; Ariaudo, Alessandra; Sciandra, Mauro; Calcagno, Andrea; Bonora, Stefano; Di Perri, Giovanni; D'Avolio, Antonio

    2015-10-10

    Tenofovir disoproxil fumarate is a prodrug of tenofovir used in the treatment of HIV and HBV infections: it is the most used antiretroviral worldwide. Tenofovir is a nucleotide HIV reverse transcriptase inhibitor that has shown excellent long-term efficacy and tolerability. However, renal and bone complications (proximal tubulopathy, hypophosphatemia, decreased bone mineral density, and reduced creatinine clearance) limit its use. Tenofovir renal toxicity has been suggested to be the consequence of drug entrapment in proximal tubular cells: measuring tenofovir urinary concentrations may be a proxy for this event and may be used as a predictor of tenofovir side effects. No method is currently available for quantifying tenofovir in this matrix; the aim of this work was therefore to validate a new LC-MS method for the quantification of urinary tenofovir. Chromatographic separation was achieved with a gradient (acetonitrile and water with formic acid 0.05%) on an Atlantis 5 µm T3, 4.6 mm × 150 mm, reversed phase analytical column. Detection of tenofovir and the internal standard was achieved by electrospray ionization mass spectrometry in the positive ion mode. Calibration ranged from 391 to 100,000 ng/mL. The limit of quantification was 391 ng/mL and the limit of detection was 195 ng/mL. Mean recoveries of tenofovir and the internal standard were consistent and stable, while the matrix effect was low and stable. The method was tested on 35 urine samples from HIV-positive patients treated with tenofovir-based HAARTs and did not show any significant interference with antiretrovirals or other concomitantly administered drugs. All the observed concentrations in real samples fell within the calibration range, confirming the suitability of this method for use in clinical routine. If confirmed in ad hoc studies, this method may be used for quantifying tenofovir urinary concentrations and help in managing HIV-positive patients treated with tenofovir. PMID:25997174
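
    The quantification step behind such an assay is a calibration-curve fit and back-calculation; a minimal sketch with invented peak-area ratios is given below, noting that a validated method over the 391-100,000 ng/mL range would normally also apply appropriate weighting.

```python
# Hedged sketch: linear calibration of analyte/internal-standard peak-area
# ratio vs. concentration, then back-calculation of unknowns. Data invented.
import numpy as np

cal_conc  = np.array([391, 781, 1563, 6250, 25000, 100000])   # ng/mL standards
cal_ratio = np.array([0.05, 0.10, 0.20, 0.80, 3.2, 12.8])     # area ratios (invented)

slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

def back_calc(area_ratio):
    return (area_ratio - intercept) / slope                    # ng/mL

unknown_ratios = np.array([0.45, 2.1, 7.9])
print("estimated urinary tenofovir (ng/mL):",
      np.round(back_calc(unknown_ratios), 0))
```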

  4. Quantifying and Generalizing Hydrologic Responses to Dam Regulation using a Statistical Modeling Approach

    SciTech Connect

    McManamay, Ryan A

    2014-01-01

    Despite the ubiquitous existence of dams within riverscapes, much of our knowledge about dams and their environmental effects remains context-specific. Hydrology, more than any other environmental variable, has been studied in great detail with regard to dam regulation. While much progress has been made in generalizing the hydrologic effects of regulation by large dams, many aspects of hydrology show site-specific fidelity to dam operations, small dams (including diversions), and regional hydrologic regimes. A statistical modeling framework is presented to quantify and generalize hydrologic responses to varying degrees of dam regulation. Specifically, the objectives were to 1) compare the effects of local versus cumulative dam regulation, 2) determine the importance of different regional hydrologic regimes in influencing hydrologic responses to dams, and 3) evaluate how different regulation contexts lead to error in predicting hydrologic responses to dams. Overall, model performance was poor in quantifying the magnitude of hydrologic responses, but performance was sufficient in classifying hydrologic responses as negative or positive. Responses of some hydrologic indices to dam regulation were highly dependent upon hydrologic class membership and the purpose of the dam. The opposing coefficients between local and cumulative-dam predictors suggested that hydrologic responses to cumulative dam regulation are complex, and predicting the hydrology downstream of individual dams, as opposed to multiple dams, may be more easily accomplished using statistical approaches. Results also suggested that particular contexts, including multipurpose dams, high cumulative regulation by multiple dams, diversions, close proximity to dams, and certain hydrologic classes are all sources of increased error when predicting hydrologic responses to dams. Statistical models, such as the ones presented herein, show promise in their ability to model the effects of dam regulation at large spatial scales and to generalize the directionality of hydrologic responses.

  5. New primers for detecting and quantifying denitrifying anaerobic methane oxidation archaea in different ecological niches.

    PubMed

    Ding, Jing; Ding, Zhao-Wei; Fu, Liang; Lu, Yong-Ze; Cheng, Shuk H; Zeng, Raymond J

    2015-11-01

    The significance of ANME-2d as a methane sink in the environment has been overlooked, and no study has evaluated the distribution of ANME-2d in the environment. New primers therefore needed to be designed for subsequent research. In this paper, a pair of primers (DP397F and DP569R) was designed to quantify ANME-2d. The specificity and amplification efficiency of this primer pair were acceptable. PCR amplification with another pair of primers (DP142F and DP779R) generated a single, bright targeted band from the enrichment sample, but yielded faint, multiple bands from the environmental samples. Nested PCR was conducted using the primers DP142F/DP779R in the first round and DP142F/DP569R in the second round, which generated a bright targeted band. Further phylogenetic analysis showed that these targeted bands were ANME-2d-related sequences. Real-time PCR showed that the copies of the 16S ribosomal RNA gene of ANME-2d in these samples ranged from 3.72 × 10⁴ to 2.30 × 10⁵ copies µg⁻¹ DNA, indicating that the percentage of ANME-2d was greatest in a polluted river sample and least in a rice paddy sample. These results demonstrate that the newly developed real-time PCR primers could sufficiently quantify ANME-2d and that nested PCR with an appropriate combination of the new primers could successfully detect ANME-2d in environmental samples; the latter finding suggests that ANME-2d may be widespread in the environment. PMID:26300291
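
    Absolute quantification by real-time PCR of the kind reported rests on a standard curve of Ct versus log copy number; the short sketch below shows the efficiency check and the back-calculation to copies per microgram of DNA, with all Ct values invented.

```python
# Hedged sketch: qPCR standard curve (Ct vs. log10 copies), amplification
# efficiency, and back-calculation of ANME-2d 16S rRNA gene copies in an
# unknown sample. All Ct values are invented.
import numpy as np

log_copies   = np.array([3, 4, 5, 6, 7])                   # log10 copies per reaction
ct_standards = np.array([30.1, 26.8, 23.4, 20.0, 16.7])    # invented Ct values

slope, intercept = np.polyfit(log_copies, ct_standards, 1)
efficiency = 10 ** (-1.0 / slope) - 1                      # ~1.0 means 100% efficiency

def copies_from_ct(ct, dna_ug_per_rxn):
    copies = 10 ** ((ct - intercept) / slope)              # copies per reaction
    return copies / dna_ug_per_rxn                         # copies per microgram DNA

print(f"efficiency ~ {100 * efficiency:.0f}%")
print(f"sample at Ct 24.5 with 0.01 ug DNA per reaction: "
      f"{copies_from_ct(24.5, 0.01):.2e} copies/ug DNA")
```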

  6. A comparison of tracer methods for quantifying CO2 sources in an urban region

    NASA Astrophysics Data System (ADS)

    Djuricin, Sonja; Pataki, Diane E.; Xu, Xiaomei

    2010-06-01

    The relative contribution of anthropogenic (natural gas and gasoline combustion) and biogenic (aboveground and belowground respiration) CO2 sources has previously been quantified with the 13C, 18O, and 14C isotopes of CO2. The unique combination of isotopic signatures of each source allows for top-down attribution of sources using atmospheric measurements. Other tracers of CO2 include carbon monoxide (CO), which is a direct tracer of fossil fuel combustion-derived CO2 as CO and CO2 are evolved at specific ratios (RCO/CO2) during combustion depending on fuel source and combustion efficiency. We used the 13C, 18O, and 14C tracers to partition between natural gas, gasoline, and aboveground and belowground respiration during four sampling events in the Los Angeles basin. Additionally, we compared the effectiveness of the independent CO tracer with the 14C tracer to distinguish between anthropogenic and biogenic CO2. The three isotope tracer results showed that during the sampling period, fossil fuel combustion was not a dominant source of CO2 and aboveground respiration contributed up to approximately 70% of CO2 sources during the spring. However, the percent fossil fuel CO2 calculated by the CO tracer was not entirely consistent with the fossil fuel CO2 calculated by 14C, which predicted up to ~70% of winter CO2 from fossil fuel sources. The CO tracer was useful for showing diurnal patterns of CO2 sources. However, combustion RCO/CO2 values vary significantly, which poses a challenge for accurately identifying CO2 sources. Detailed local information about RCO/CO2 is required to effectively utilize the CO tracer for quantifying sources of CO2.
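
    In their simplest form, the 14C and CO tracer calculations referred to here reduce to two-member mixing; the sketch below shows both a Delta-14C-based fossil fraction and a CO-based fossil-fuel CO2 estimate, with endmember, background, and emission-ratio values that are illustrative assumptions rather than the study's measurements.

```python
# Hedged sketch: fossil-fuel fraction of CO2 from (a) Delta-14C two-member
# mixing and (b) the CO tracer. Endmembers and observations are invented.

# (a) fossil fuel Delta14C = -1000 permil (14C-free); biogenic ~ +50 permil
d14c_obs, d14c_bio, d14c_ff = -150.0, 50.0, -1000.0
f_fossil = (d14c_obs - d14c_bio) / (d14c_ff - d14c_bio)
print(f"14C-based fossil fraction: {100 * f_fossil:.0f}%")

# (b) CO tracer: excess CO above background divided by an assumed emission
# ratio R_CO/CO2 gives fossil-fuel CO2 (ppm); compare with total excess CO2.
co_obs_ppb, co_bg_ppb = 350.0, 120.0
r_co_co2_ppb_per_ppm = 12.0              # assumed combustion emission ratio
co2_obs_ppm, co2_bg_ppm = 420.0, 395.0

co2_ff_ppm = (co_obs_ppb - co_bg_ppb) / r_co_co2_ppb_per_ppm
f_fossil_co = co2_ff_ppm / (co2_obs_ppm - co2_bg_ppm)
print(f"CO-based fossil-fuel CO2: {co2_ff_ppm:.1f} ppm "
      f"({100 * f_fossil_co:.0f}% of the CO2 enhancement)")
```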

  7. Hyperspectral remote sensing tools for quantifying plant litter and invasive species in arid ecosystems

    USGS Publications Warehouse

    Nagler, Pamela L.; Sridhar, B.B. Maruthi; Olsson, Aaryn Dyami; Glenn, Edward P.; van Leeuwen, Willem J.D.

    2012-01-01

    Green vegetation can be distinguished using visible and infrared multi-band and hyperspectral remote sensing methods. The problem has been in identifying the landscape components that do not absorb photosynthetically active radiation (PAR), such as litter and soils, and distinguishing them from green vegetation. Additionally, distinguishing different species of green vegetation is challenging using the relatively few bands available on most satellite sensors. This chapter focuses on hyperspectral remote sensing characteristics that aim to distinguish between green vegetation, soil, and litter (or senescent vegetation). Quantifying litter by remote sensing methods is important in constructing carbon budgets of natural and agricultural ecosystems. Distinguishing between plant types is important in tracking the spread of invasive species. Green leaves of different species usually have similar spectra, making it difficult to distinguish between species. However, in this chapter we show that phenological differences between species can be used to detect some invasive species by their distinct patterns of greening and dormancy over an annual cycle based on hyperspectral data. Both applications require methods to quantify the non-green cellulosic fractions of plant tissues by remote sensing, even in the presence of soil and green plant cover. We explore these methods and offer three case studies. The first concerns distinguishing surface litter from soil using the Cellulose Absorption Index (CAI), as applied to no-till farming practices where plant litter is left on the soil after harvest. The second involves using different band combinations to distinguish invasive saltcedar from agricultural and native riparian plants on the Lower Colorado River. The third illustrates the use of the CAI and NDVI in time-series analyses to distinguish between invasive buffelgrass and native plants in a desert environment in Arizona. Together the results show how hyperspectral imagery can be applied to solve problems that are not amenable to solution by the simple band combinations normally used in remote sensing.
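
    The two indices discussed, NDVI and the Cellulose Absorption Index (CAI), are simple band arithmetic; the sketch below computes both from reflectance values, using the common CAI formulation around 2.0, 2.1, and 2.2 micrometers as an assumption about the exact bands involved.

```python
# Hedged sketch: NDVI and Cellulose Absorption Index (CAI) from reflectance.
# CAI here uses the common 2.0/2.1/2.2 micrometer formulation; the chapter's
# exact band centers may differ. Reflectance values are illustrative.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def cai(r2000, r2100, r2200):
    # Depth of the cellulose-lignin absorption feature near 2.1 um
    return 100 * (0.5 * (r2000 + r2200) - r2100)

# Illustrative surface reflectances: green canopy, dry litter, bare soil
surfaces = {
    "green vegetation": dict(red=0.05, nir=0.45, r2000=0.18, r2100=0.16, r2200=0.15),
    "plant litter":     dict(red=0.25, nir=0.30, r2000=0.35, r2100=0.28, r2200=0.33),
    "bare soil":        dict(red=0.20, nir=0.25, r2000=0.30, r2100=0.29, r2200=0.28),
}

for name, r in surfaces.items():
    print(f"{name:>16}: NDVI = {ndvi(r['nir'], r['red']):.2f}, "
          f"CAI = {cai(r['r2000'], r['r2100'], r['r2200']):.1f}")
```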

  8. Quantifying invasion resistance: the use of recruitment functions to control for propagule pressure.

    PubMed

    Miller, Alice L; Diez, Jeffrey M; Sullivan, Jon J; Wangen, Steven R; Wiser, Susan K; Meffin, Ross; Duncan, Richard P

    2014-04-01

    Invasive species distributions tend to be biased towards some habitats compared to others due to the combined effects of habitat-specific resistance to invasion and non-uniform propagule pressure. These two factors may also interact, with habitat resistance varying as a function of propagule supply rate. Recruitment experiments, in which the number of individuals recruiting into a population is measured under different propagule supply rates, can help us understand these interactions and quantify habitat resistance to invasion while controlling for variation in propagule supply rate. Here, we constructed recruitment functions for the invasive herb Hieracium lepidulum by sowing seeds at five different densities into six different habitat types in New Zealand's Southern Alps, repeated over two successive years, and monitored seedling recruitment and survival over a four-year period. We fitted recruitment functions that allowed us to estimate the total number of safe sites available for plants to occupy, which we used as a measure of invasion resistance, and tested several hypotheses concerning how invasion resistance differed among habitats and over time. We found significant differences in levels of H. lepidulum recruitment among habitats, which did not match the species' current distribution in the landscape. Local biotic and abiotic characteristics helped explain some of the between-habitat variation, with vascular plant species richness, vascular plant cover, and light availability all positively correlated with the number of safe sites for recruitment. Resistance also varied over time, however, with cohorts sown in successive years showing different levels of recruitment in some habitats but not others. These results show that recruitment functions can be used to quantify habitat resistance to invasion and to identify potential mechanisms of invasion resistance. PMID:24933811
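
    Recruitment functions of this kind are typically saturating curves of recruits against seeds sown, with the asymptote interpreted as the number of safe sites; the sketch below fits one such curve, with invented counts and a particular functional form chosen purely for illustration.

```python
# Hedged sketch: fit a saturating recruitment function R = S*(1 - exp(-a*N/S)),
# where N = seeds sown, S = number of safe sites (the invasion-resistance
# measure), a = per-seed establishment rate. Data are invented.
import numpy as np
from scipy.optimize import curve_fit

def recruitment(n_seeds, safe_sites, a):
    return safe_sites * (1.0 - np.exp(-a * n_seeds / safe_sites))

seeds_sown = np.array([10, 50, 100, 500, 1000], dtype=float)
recruits   = np.array([3, 14, 22, 38, 41], dtype=float)      # invented counts

(safe_sites, a), _ = curve_fit(recruitment, seeds_sown, recruits,
                               p0=[50.0, 0.3], maxfev=10_000)
print(f"estimated safe sites ~ {safe_sites:.0f}, establishment rate a ~ {a:.2f}")
```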

  9. Structural property of soybean lunasin and development of a method to quantify lunasin in plasma using an optimized immunoassay protocol.

    PubMed

    Dia, Vermont P; Frankland-Searby, Sarah; del Hierro, Francisco Laso; Garcia, Guadalupe; de Mejia, Elvira Gonzalez

    2013-05-01

    Lunasin is a 43-amino acid naturally occurring chemopreventive peptide with demonstrated anti-cancer and anti-inflammatory properties. The objectives of this study were to determine the effect of temperature on the secondary structure of lunasin, to develop a method of isolating lunasin from human plasma using an ion-exchange microspin column, and to quantify the amount of lunasin using an optimized enzyme-linked immunosorbent assay. Lunasin was purified using a combination of ion-exchange chromatography, ultrafiltration and gel filtration chromatography. Circular dichroism showed that an increase in temperature from 25 to 100 °C resulted in changes in the secondary structure of lunasin and in its capability to interact with rabbit polyclonal antibody. Enzyme-linked immunosorbent assay showed that the lunasin rabbit polyclonal antibody has a titer of 250 and a specific activity of 0.05 mL/µg. A linear response was detected between 16 and 48 ng lunasin per mL (y = 0.03x - 0.38, R² = 0.96). The use of a diethylaminoethyl (DEAE) microspin column to isolate spiked lunasin in human plasma showed that most lunasin (37.8-46.5%) bound to the column eluted with Tris-HCl buffer, pH 7.5, with a yield of up to 76.6%. In conclusion, lunasin can be isolated from human plasma by a simple DEAE microspin column technique and can be quantified using a validated and optimized immunoassay procedure. This method can be used directly to quantify lunasin from plasma in different human and animal studies aiming to determine its bioavailability. PMID:23265496

  10. Lemurs and macaques show similar numerical sensitivity.

    PubMed

    Jones, Sarah M; Pearson, John; DeWind, Nicholas K; Paulsen, David; Tenekedjieva, Ana-Maria; Brannon, Elizabeth M

    2014-05-01

    We investigated the precision of the approximate number system (ANS) in three lemur species (Lemur catta, Eulemur mongoz, and Eulemur macaco flavifrons), one Old World monkey species (Macaca mulatta) and humans (Homo sapiens). In Experiment 1, four individuals of each nonhuman primate species were trained to select the numerically larger of two visual arrays on a touchscreen. We estimated numerical acuity by modeling Weber fractions (w) and found quantitatively equivalent performance among all four nonhuman primate species. In Experiment 2, we tested adult humans in a similar procedure, and they outperformed the four nonhuman species but showed qualitatively similar performance. These results indicate that the ANS is conserved over the primate order. PMID:24068469
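
    A common way to model this kind of numerical-comparison data is a Gaussian ratio model in which accuracy depends only on the Weber fraction w; the sketch below uses that standard form as an assumption and is not necessarily the exact fitting procedure used in the study.

```python
# Sketch of a standard approximate-number-system model: the probability of
# correctly choosing the larger of two arrays given Weber fraction w.
# This Gaussian model is a common choice in the literature; it is an
# assumption, not necessarily the model fit in the study.
import math

def p_correct(n1, n2, w):
    """P(choose larger) for numerosities n1, n2 and Weber fraction w."""
    return 1.0 - 0.5 * math.erfc(abs(n1 - n2) /
                                 (math.sqrt(2.0) * w * math.hypot(n1, n2)))

for w in (0.2, 0.4, 0.8):   # sharper to coarser numerical acuity
    print(w, round(p_correct(12, 8, w), 3))
```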

  11. Prostatic ductal adenocarcinoma showing Bcl-2 expression.

    PubMed

    Tulunay, Ozden; Orhan, Diclehan; Baltaci, Sümer; Gögüş, Cagatay; Müftüoglu, Yusuf Z

    2004-09-01

    Prostatic ductal adenocarcinoma represents a rare histological variant of prostatic carcinoma with features of a papillary lesion at cystoscopy. There are conflicts regarding the existence, origin, staging, grading, treatment and clinical behavior of this tumor. The aim of the present study is to examine the expression of Bcl-2 and p53 in prostatic ductal adenocarcinoma and to evaluate its origin by analyzing prostate specific antigen, prostate specific acid phosphatase, cytokeratins, epithelial membrane antigen and carcinoembryonic antigen expressions. The results confirmed the expression of prostate specific antigen and prostate specific acid phosphatase in prostatic ductal adenocarcinoma. The demonstrated expression of Bcl-2 was predominant in the better-differentiated tumor. Bcl-2 expression appears not to be associated with neuroendocrine differentiation as assessed by chromogranin A reactivity. Thus, the first case of a prostatic ductal adenocarcinoma showing Bcl-2 expression is presented. The tumor was negative for p53. PMID:15379952

  12. Radon as a Natural Partitioning Tracer for Locating and Quantifying DNAPL Saturation in the Subsurface

    NASA Astrophysics Data System (ADS)

    Davis, B. M.; Istok, J.; Semprini, L.

    2002-12-01

    The inability to locate and quantify dense nonaqueous phase liquid (DNAPL) saturation in the subsurface presents obstacles to site characterization and remediation. The objective of this study is to evaluate the use of naturally occurring radon as an in-situ, partitioning tracer to locate and quantify DNAPL saturation. In the saturated zone, radon emanating from aquifer solids occurs as a dissolved gas and, due to its non-polarity, partitions into DNAPL. Partitioning between the DNAPL and aqueous phases results in retarded radon transport during groundwater flow. The radon retardation factor can be determined using single-well 'push-pull' tracer tests, enabling the calculation of the DNAPL saturation. Radon can also be used as a 'static' partitioning tracer, whereby grab samples of radon from monitoring wells in contaminated and non-contaminated portions of an aquifer are collected and compared to calculate the DNAPL saturation and to monitor saturation changes as remediation proceeds. The utility of these methods was investigated in the laboratory using a physical aquifer model (PAM). Static and push-pull tests were performed before and after contamination of a portion of the PAM sediment pack with trichloroethene (TCE). The PAM was then remediated using alcohol cosolvent and tap water flushes, and static and push-pull tests were performed to assess the efficacy of remediation. Numerical simulations were used to estimate the retardation factor for radon in the push-pull tests. Radon partitioning was observed in static and push-pull tests conducted after TCE contamination. Calculated TCE saturations ranged up to 1.4 % (static test) and 14.1 % (push-pull test), based on the numerical method modeling approach used to analyze the results. Post-remediation tests showed decreases in TCE saturations. The results show that radon is sensitive to changes in DNAPL (e.g., TCE) saturation in space and time. Recent advances in numerical modeling of radon in push-pull tests have shown the influence of TCE saturation distribution and initial radon concentrations on radon breakthrough curves and calculated TCE saturations. These advances have led to more accurate predictions of the TCE saturation in the PAM. The push-pull method was applied at a field site at Dover Air Force Base, Delaware. The site consists of an aquifer 'test cell' 27 ft long and 18 ft wide surrounded by steel pilings to a clay confining unit 40 ft below grade. Push-pull tests were performed before and after contamination of the test cell with perchloroethene (PCE). Push-pull tests performed before contamination showed no evidence of radon retardation, while tests performed after contamination showed evidence of retardation and suggested the presence of PCE.
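
    For readers unfamiliar with partitioning tracers, the sketch below applies relationships commonly used in this literature: a retardation factor from a push-pull test, or a radon concentration deficit from a static test, converted to NAPL saturation via the radon NAPL/water partition coefficient. The specific equations and numbers are assumptions for illustration, not values from the study.

```python
# Sketch of partitioning-tracer relationships commonly used with radon;
# the equations and all numbers below are illustrative assumptions.
def saturation_from_retardation(R, K_nw):
    """NAPL saturation from a push-pull retardation factor R and the
    radon NAPL/water partition coefficient K_nw."""
    return (R - 1.0) / ((R - 1.0) + K_nw)

def saturation_from_static_deficit(C_contaminated, C_background, K_nw):
    """NAPL saturation from the radon 'deficit' between contaminated and
    clean monitoring wells (static method)."""
    ratio = C_background / C_contaminated - 1.0
    return ratio / (ratio + K_nw)

K_nw = 50.0   # radon TCE/water partition coefficient (order-of-magnitude value)
print(saturation_from_retardation(R=8.0, K_nw=K_nw))          # ~0.12
print(saturation_from_static_deficit(600.0, 1500.0, K_nw))    # ~0.03
```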

  13. Quantifying and Mapping Global Data Poverty.

    PubMed

    Leidig, Mathias; Teeuw, Richard M

    2015-01-01

    Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. However, access to these technologies, as well as their associated software and training materials, is not evenly distributed: since the 1990s there has been concern about a "Digital Divide" between the data-rich and the data-poor. We present an innovative metric for evaluating international variations in access to digital data: the Data Poverty Index (DPI). The DPI is based on Internet speeds, numbers of computer owners and Internet users, mobile phone ownership and network coverage, as well as provision of higher education. The datasets used to produce the DPI are provided annually for almost all the countries of the world and can be freely downloaded. The index that we present in this 'proof of concept' study is the first to quantify and visualise the problem of global data poverty, using the most recent datasets, for 2013. The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. The DPI highlights countries where support is needed for improving access to the Internet and for the provision of training in geoinfomatics. We conclude that the DPI is of value as a potential metric for monitoring the Sustainable Development Goals of the Sendai Framework for Disaster Risk Reduction. PMID:26560884
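
    The abstract describes a composite index built from several access indicators. The sketch below shows one generic way to assemble such an index (min-max normalisation plus equal weighting); it is not the published DPI weighting scheme, and the indicator values are invented.

```python
# Illustrative composite index from min-max-normalised indicators, equally
# weighted. This is NOT the published DPI scheme, only a generic sketch of
# how such an index can be assembled; indicator values are made up.
import numpy as np

countries  = ["A", "B", "C"]
indicators = np.array([          # rows: countries; columns: indicators such as
    [25.0, 80.0, 95.0, 40.0],    # internet speed, internet users (%),
    [ 5.0, 30.0, 60.0, 10.0],    # mobile coverage (%), tertiary enrolment (%)
    [12.0, 55.0, 85.0, 25.0],
])

norm  = (indicators - indicators.min(axis=0)) / np.ptp(indicators, axis=0)
index = norm.mean(axis=1)        # higher = better data access (less data poverty)
for name, score in zip(countries, index):
    print(f"{name}: {score:.2f}")
```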

  14. Data Used in Quantified Reliability Models

    NASA Technical Reports Server (NTRS)

    DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.

    2014-01-01

    Data are the crux of developing quantitative risk and reliability models; without data there is no quantification. Finding and identifying reliability data or failure numbers to quantify fault tree models during conceptual and design phases is often the quagmire that precludes early decision makers' consideration of potential risk drivers that will influence design. The analyst tasked with addressing system or product reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have data available that are similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment are often the first and only places checked. A common philosophy is "I have a number - that is good enough." But is it? Have you ever considered the difference in reported data from various federal datasets and technical reports when compared to similar sources from national and/or international datasets? Just how well does your data compare? Understanding how the reported data were derived, and interpreting the information and details associated with the data, is as important as the data itself.

  15. Fluorescence imaging to quantify crop residue cover

    NASA Technical Reports Server (NTRS)

    Daughtry, C. S. T.; Mcmurtrey, J. E., III; Chappelle, E. W.

    1994-01-01

    Crop residues, the portion of the crop left in the field after harvest, can be an important management factor in controlling soil erosion. Methods to quantify residue cover are needed that are rapid, accurate, and objective. Scenes with known amounts of crop residue were illuminated with long wave ultraviolet (UV) radiation and fluorescence images were recorded with an intensified video camera fitted with a 453 to 488 nm band pass filter. A light colored soil and a dark colored soil were used as background for the weathered soybean stems. Residue cover was determined by counting the proportion of the pixels in the image with fluorescence values greater than a threshold. Soil pixels had the lowest gray levels in the images. The values of the soybean residue pixels spanned nearly the full range of the 8-bit video data. Classification accuracies typically were within 3(absolute units) of measured cover values. Video imaging can provide an intuitive understanding of the fraction of the soil covered by residue.
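
    The thresholding step described above is simple enough to sketch directly: count the fraction of pixels whose fluorescence exceeds a threshold. The synthetic image and threshold value below are made up for illustration.

```python
# Sketch: estimating residue cover as the fraction of pixels whose fluorescence
# exceeds a threshold, as described in the abstract. The synthetic 8-bit image
# and threshold value are invented.
import numpy as np

rng = np.random.default_rng(0)
soil    = rng.integers(0, 40,  size=(200, 200))     # dark soil background
residue = rng.integers(60, 255, size=(200, 200))    # brighter residue pixels
mask    = rng.random((200, 200)) < 0.3              # ~30% true residue cover
image   = np.where(mask, residue, soil).astype(np.uint8)

threshold = 50
cover_fraction = (image > threshold).mean()
print(f"Estimated residue cover: {cover_fraction:.1%}")
```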

  16. Quantifying Climate Risks for Urban Environments

    NASA Astrophysics Data System (ADS)

    Hayhoe, K.; Stoner, A. K.; Dickson, L.

    2013-12-01

    High-density urban areas are both uniquely vulnerable and uniquely able to adapt to climate change. Enabling this potential requires identifying the vulnerabilities, however, and these depend strongly on location: the challenges climate change poses for a southern coastal city such as Miami, for example, have little in common with those facing a northern inland city such as Chicago. By combining local knowledge with climate science, risk assessment, engineering analysis, and adaptation planning, it is possible to develop relevant climate information that feeds directly into vulnerability assessment and long-term planning. Key steps include: developing climate projections tagged to long-term weather stations within the city itself that reflect local characteristics; mining local knowledge to identify existing vulnerabilities to, and costs of, weather and climate extremes; understanding how future projections can be integrated into the planning process; and identifying ways in which the city may adapt. Using examples from our work in the cities of Boston, Chicago, and Mobile we illustrate the practical application of this approach to quantify the impacts of climate change on these cities and identify robust adaptation options as diverse as reducing the urban heat island effect, protecting essential infrastructure, changing design standards and building codes, developing robust emergency management plans, and rebuilding storm sewer systems.

  17. Choosing appropriate techniques for quantifying groundwater recharge

    USGS Publications Warehouse

    Scanlon, B.R.; Healy, R.W.; Cook, P.G.

    2002-01-01

    Various techniques are available to quantify recharge; however, choosing appropriate techniques is often difficult. Important considerations in choosing a technique include space/time scales, range, and reliability of recharge estimates based on different techniques; other factors may limit the application of particular techniques. The goal of the recharge study is important because it may dictate the required space/time scales of the recharge estimates. Typical study goals include water-resource evaluation, which requires information on recharge over large spatial scales and on decadal time scales; and evaluation of aquifer vulnerability to contamination, which requires detailed information on spatial variability and preferential flow. The range of recharge rates that can be estimated using different approaches should be matched to expected recharge rates at a site. The reliability of recharge estimates using different techniques is variable. Techniques based on surface-water and unsaturated-zone data provide estimates of potential recharge, whereas those based on groundwater data generally provide estimates of actual recharge. Uncertainties in each approach to estimating recharge underscore the need for application of multiple techniques to increase reliability of recharge estimates.

  18. Optical metabolic imaging quantifies heterogeneous cell populations

    PubMed Central

    Walsh, Alex J.; Skala, Melissa C.

    2015-01-01

    The genetic and phenotypic heterogeneity of cancers can contribute to tumor aggressiveness, invasion, and resistance to therapy. Fluorescence imaging occupies a unique niche to investigate tumor heterogeneity due to its high resolution and molecular specificity. Here, heterogeneous populations are identified and quantified by combined optical metabolic imaging and subpopulation analysis (OMI-SPA). OMI probes the fluorescence intensities and lifetimes of metabolic enzymes in cells to provide images of cellular metabolism, and SPA models cell populations as mixed Gaussian distributions to identify cell subpopulations. In this study, OMI-SPA is characterized by simulation experiments and validated with cell experiments. To generate heterogeneous populations, two breast cancer cell lines, SKBr3 and MDA-MB-231, were co-cultured at varying proportions. OMI-SPA correctly identifies two populations with minimal mean and proportion error using the optical redox ratio (fluorescence intensity of NAD(P)H divided by the intensity of FAD), mean NAD(P)H fluorescence lifetime, and OMI index. Simulation experiments characterized the relationships between sample size, data standard deviation, and subpopulation mean separation distance required for OMI-SPA to identify subpopulations. PMID:25780745
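
    The subpopulation-analysis idea can be sketched with an off-the-shelf Gaussian mixture model standing in for SPA: fit a two-component mixture to a simulated distribution of an optical endpoint and recover component means and proportions. The data and the use of scikit-learn are assumptions for illustration.

```python
# Sketch of the subpopulation-analysis idea: model a 1-D distribution of an
# optical metabolic endpoint (e.g., redox ratio) as a mixture of Gaussians and
# recover component means/proportions. Values are simulated, and scikit-learn's
# GaussianMixture stands in for the SPA implementation used by the authors.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Two simulated cell subpopulations with different mean redox ratios.
values = np.concatenate([rng.normal(0.8, 0.10, 300),
                         rng.normal(1.4, 0.15, 700)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(values)
for mean, weight in zip(gmm.means_.ravel(), gmm.weights_):
    print(f"subpopulation mean = {mean:.2f}, proportion = {weight:.2f}")
```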

  19. Quantifying and Mapping Global Data Poverty

    PubMed Central

    2015-01-01

    Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. However, access to these technologies, as well as their associated software and training materials, is not evenly distributed: since the 1990s there has been concern about a "Digital Divide" between the data-rich and the data-poor. We present an innovative metric for evaluating international variations in access to digital data: the Data Poverty Index (DPI). The DPI is based on Internet speeds, numbers of computer owners and Internet users, mobile phone ownership and network coverage, as well as provision of higher education. The datasets used to produce the DPI are provided annually for almost all the countries of the world and can be freely downloaded. The index that we present in this ‘proof of concept’ study is the first to quantify and visualise the problem of global data poverty, using the most recent datasets, for 2013. The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. The DPI highlights countries where support is needed for improving access to the Internet and for the provision of training in geoinfomatics. We conclude that the DPI is of value as a potential metric for monitoring the Sustainable Development Goals of the Sendai Framework for Disaster Risk Reduction. PMID:26560884

  20. Automated Counting of Particles To Quantify Cleanliness

    NASA Technical Reports Server (NTRS)

    Rhode, James

    2005-01-01

    A machine vision system, similar to systems used in microbiological laboratories to count cultured microbes, has been proposed for quantifying the cleanliness of nominally precisely cleaned hardware by counting residual contaminant particles. The system would include a microscope equipped with an electronic camera and circuitry to digitize the camera output, a personal computer programmed with machine-vision and interface software, and digital storage media. A filter pad, through which had been aspirated solvent from rinsing the hardware in question, would be placed on the microscope stage. A high-resolution image of the filter pad would be recorded. The computer would analyze the image and present a histogram of sizes of particles on the filter. On the basis of the histogram and a measure of the desired level of cleanliness, the hardware would be accepted or rejected. If the hardware were accepted, the image would be saved, along with other information, as a quality record. If the hardware were rejected, the histogram and ancillary information would be recorded for analysis of trends. The software would perceive particles that are too large or too numerous to meet a specified particle-distribution profile. Anomalous particles or fibrous material would be flagged for inspection.
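
    The proposed counting step reduces to thresholding, labeling connected components, and histogramming their sizes; the sketch below does this on a synthetic image, with scipy.ndimage standing in for the machine-vision software such a system would actually use.

```python
# Sketch of the counting step: threshold a grayscale image of the filter pad,
# label connected particles, and tabulate their sizes. The image is synthetic;
# a real system would use the microscope camera frame.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
image = rng.normal(10, 3, size=(300, 300))
image[50:55, 60:70] += 100       # a few synthetic "particles"
image[200:210, 200:205] += 100
image[120:122, 30:33] += 100

binary = image > 50
labels, n_particles = ndimage.label(binary)
sizes = ndimage.sum(binary, labels, index=range(1, n_particles + 1))
print(f"{n_particles} particles, sizes (pixels): {sizes.astype(int)}")
```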

  1. Quantifying ternary mixtures of different solid-state forms of indomethacin by Raman and near-infrared spectroscopy.

    PubMed

    Heinz, Andrea; Savolainen, Marja; Rades, Thomas; Strachan, Clare J

    2007-11-01

    This study assessed the ability of vibrational spectroscopy combined with multivariate analysis to quantify ternary mixtures of different solid-state forms, including the amorphous form. Raman and near-infrared spectroscopy were used to quantify mixtures of alpha-, gamma-, and amorphous indomethacin. Partial least squares regression was employed to create quantitative models. To improve the model performance various pre-treatment algorithms and scaling methods were applied to the spectral data and different spectral regions were tested. Standard normal variate transformation and scaling by mean centering proved to be the best approaches to pre-process the data. With four partial least squares factors, root mean square errors of prediction ranging from 5.3% to 6.5% for Raman spectroscopy and 4.0% to 5.9% for near-infrared spectroscopy were calculated. In addition, the effects of potential sources of error were investigated. Sample fluorescence predominantly caused by yellow amorphous indomethacin was observed to have a significant impact on the Raman spectra. Nevertheless, this source of error could be minimized in the quantitative models. Sample inhomogeneity, particularly in conjunction with a small sampling area when stationary sample holders were used, introduced the largest variation into both spectroscopic assays. The overall method errors were found to be very similar, resulting in relative standard deviations up to 12.0% for Raman spectroscopy and up to 13.0% for near-infrared spectroscopy. The results show that both spectroscopic techniques in combination with multivariate modeling are well suited to rapidly quantify ternary mixtures of crystalline and amorphous indomethacin. Furthermore, this study shows that quantitative analysis of powder mixtures using Raman spectroscopy can be performed in the presence of limited fluorescence. PMID:17716878
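
    The chemometric pipeline described above (SNV pre-treatment followed by PLS regression with four factors) can be sketched as follows; the spectra and compositions are simulated, and scikit-learn's PLSRegression stands in for the chemometrics software actually used.

```python
# Sketch of the pipeline from the abstract: standard normal variate (SNV)
# pre-treatment followed by partial least squares regression with 4 factors.
# Spectra and compositions are simulated stand-ins.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

rng = np.random.default_rng(3)
pure = rng.random((3, 200))                      # alpha, gamma, amorphous "spectra"
fractions = rng.dirichlet(np.ones(3), size=40)   # ternary compositions
spectra = fractions @ pure + rng.normal(0, 0.01, (40, 200))

pls = PLSRegression(n_components=4).fit(snv(spectra), fractions)
pred = pls.predict(snv(spectra))
rmse = np.sqrt(((pred - fractions) ** 2).mean(axis=0))
print("calibration RMSE per solid-state form:", np.round(rmse, 3))
```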

  2. Children with Autism Show Reduced Somatosensory Response: An MEG Study

    PubMed Central

    Marco, Elysa J.; Khatibi, Kasra; Hill, Susanna S.; Siegel, Bryna; Arroyo, Monica S.; Dowling, Anne F.; Neuhaus, John M.; Sherr, Elliott H.; Hinkley, Leighton N. B.; Nagarajan, Srikantan S.

    2012-01-01

    Lay Abstract Autism spectrum disorders are reported to affect nearly one out of every one hundred children, with over 90% of these children showing behavioral disturbances related to the processing of basic sensory information. Behavioral sensitivity to light touch, such as profound discomfort with clothing tags and physical contact, is a ubiquitous finding in children on the autism spectrum. In this study, we investigate the strength and timing of brain activity in response to simple, light taps to the fingertip. Our results suggest that children with autism show a diminished early response in the primary somatosensory cortex (S1). This finding is most evident in the left hemisphere. In exploratory analysis, we also show that tactile sensory behavior, as measured by the Sensory Profile, may be a better predictor of the intensity and timing of brain activity related to touch than a clinical autism diagnosis. We report that children with atypical tactile behavior have significantly lower amplitude somatosensory cortical responses in both hemispheres. Thus sensory behavioral phenotype appears to be a more powerful strategy for investigating neural activity in this cohort. This study provides evidence for atypical brain activity during sensory processing in autistic children and suggests that our sensory behavior based methodology may be an important approach to investigating brain activity in people with autism and neurodevelopmental disorders. Scientific Abstract The neural underpinnings of sensory processing differences in autism remain poorly understood. This prospective magnetoencephalography (MEG) study investigates whether children with autism show atypical cortical activity in the primary somatosensory cortex (S1) in comparison to matched controls. Tactile stimuli were clearly detectable, painless taps applied to the distal phalanx of the second (D2) and third (D3) fingers of the right and left hands. Three tactile paradigms were administered: an oddball paradigm (standard taps to D3 at an inter-stimulus interval (ISI) of 0.33 and deviant taps to D2 with ISI ranging from 1.32–1.64s); a slow-rate paradigm (D2) with an ISI matching the deviant taps in the oddball paradigm; and a fast-rate paradigm (D2) with an ISI matching the standard taps in the oddball. Study subjects were boys (age 7–11 years) with and without autism disorder. Sensory behavior was quantified using the Sensory Profile questionnaire. Boys with autism exhibited smaller amplitude left hemisphere S1 response to slow and deviant stimuli during the right hand paradigms. In post-hoc analysis, tactile behavior directly correlated with the amplitude of cortical response. Consequently, the children were re-categorized by degree of parent-report tactile sensitivity. This regrouping created a more robust distinction between the groups with amplitude diminution in the left and right hemispheres and latency prolongation in the right hemisphere in the deviant and slow-rate paradigms for the affected children. This study suggests that children with autism have early differences in somatosensory processing, which likely influence later stages of cortical activity from integration to motor response. PMID:22933354

  3. New England 4-H Horse Show

    E-print Network

    New Hampshire, University of

    New England 4-H Horse Show Rules and Guidelines Basic guide to local, county, and state/regional 4-H Horse shows as well as for those classes in open shows limited to 4-H membership entry. This rules-H Rule Book Page 2 Revised April 2014 All revisions of the New England 4-H Horse Show Rules

  4. Aortic function quantified: the heart's essential cushion.

    PubMed

    Saouti, Nabil; Marcus, J Tim; Vonk Noordegraaf, Anton; Westerhof, Nico

    2012-10-15

    Arterial compliance is mainly determined by the elasticity of proximal large-conduit arteries of which the aorta is the largest contributor. Compliance forms an important part of the cardiac load and plays a role in organ (especially coronary) perfusion. To follow local changes in aortic compliance, as in aging, noninvasive determination of compliance distribution would be of great value. Our goal is to determine regional aortic compliance noninvasively in the human. In seven healthy individuals at six locations, aortic blood flow and systolic/diastolic area (ΔA) were measured with MRI. Simultaneously, brachial pulse pressure (ΔP) was measured with a standard cuff. With a transfer function we derived ΔP at the same aortic locations as the MRI measurements. Regional aortic compliance was calculated with two approaches: the pulse pressure method, and local area compliance (ΔA/ΔP) times segment length, called the area compliance method. For comparison, pulse wave velocity (PWV) from local flows at two locations was determined, and compliance was derived from PWV. Both approaches show that compliance is largest in the proximal aorta and decreases toward the distal aorta. Similar results were found with PWV-derived compliance. Of total arterial compliance, ascending to distal arch (segments 1-3) contributes 40% (of which 15% is in head and arms), descending aorta (segments 4 and 5) 25%, and "hip, pelvic and leg arteries" 20%. The pulse pressure method includes compliance of side branches and is therefore larger than the area compliance method. Regional aortic compliance can be obtained noninvasively. Therefore, this technique allows following changes in local compliance with age and cardiovascular diseases. PMID:22936729
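
    The area compliance method lends itself to a short worked example: local area compliance ΔA/ΔP multiplied by segment length gives the compliance of a regional segment. All numbers below are hypothetical order-of-magnitude values, not measurements from the study.

```python
# Sketch of the area-compliance calculation described in the abstract:
# local area compliance = dA/dP, multiplied by segment length. All numbers
# are hypothetical.
delta_A = 0.9e-4       # systolic-diastolic lumen area change, m^2 (hypothetical)
delta_P = 40 * 133.3   # brachial-derived pulse pressure, 40 mmHg in Pa
segment_length = 0.05  # segment length, m

area_compliance = delta_A / delta_P                     # m^2 / Pa
segment_compliance = area_compliance * segment_length   # m^3 / Pa
print(f"Segment compliance = {segment_compliance * 133.3 * 1e6:.2f} mL/mmHg")
```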

  5. Quantifying Riverscape Connectivity with Graph Theory

    NASA Astrophysics Data System (ADS)

    Carbonneau, P.; Milledge, D.; Sinha, R.; Tandon, S. K.

    2013-12-01

    Fluvial catchments convey fluxes of water, sediment, nutrients and aquatic biota. At continental scales, crustal topography defines the overall path of channels whilst at local scales depositional and/or erosional features generally determine the exact path of a channel. Furthermore, constructions such as dams, for either water abstraction or hydropower, often have a significant impact on channel networks. The concept of 'connectivity' is commonly invoked when conceptualising the structure of a river network. This concept is easy to grasp but there have been uneven efforts across the environmental sciences to actually quantify connectivity. Currently there have only been a few studies reporting quantitative indices of connectivity in river sciences, notably, in the study of avulsion processes. However, the majority of current work describing some form of environmental connectivity in a quantitative manner is in the field of landscape ecology. Driven by the need to quantify habitat fragmentation, landscape ecologists have returned to graph theory. Within this formal setting, landscape ecologists have successfully developed a range of indices which can model connectivity loss. Such formal connectivity metrics are currently needed for a range of applications in fluvial sciences. One of the most urgent needs relates to dam construction. In the developed world, hydropower development has generally slowed and in many countries, dams are actually being removed. However, this is not the case in the developing world where hydropower is seen as a key element to low-emissions power-security. For example, several dam projects are envisaged in Himalayan catchments in the next 2 decades. This region is already under severe pressure from climate change and urbanisation, and a better understanding of the network fragmentation which can be expected in this system is urgently needed. In this paper, we apply and adapt connectivity metrics from landscape ecology. We then examine the connectivity structure of the Gangetic riverscape with fluvial remote sensing. Our study reach extends from the heavily dammed headwaters of the Bhagirathi, Mandakini and Alaknanda rivers which form the source of the Ganga to Allahabad ~900 km downstream on the main stem. We use Landsat-8 imagery as the baseline dataset. Channel width along the Ganga (i.e. Ganges) is often several kilometres. Therefore, the pan-sharpened 15m pixels of Landsat-8 are in fact capable of resolving inner channel features for over 80% of the channel length thus allowing a riverscape approach to be adopted. We examine the following connectivity metrics: size distribution of connected components, betweenness centrality and the integrated index of connectivity. A geographic perspective is added by mapping local (25 km-scale) values for these metrics in order to examine spatial patterns of connectivity. This approach allows us to map impacts of dam construction and has the potential to inform policy decisions in the area as well as open up new avenues of investigation.
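
    The connectivity metrics named above can be sketched on a toy network: the example below uses networkx to recompute connected-component sizes and betweenness centrality after a hypothetical dam removes an edge. The network itself is invented for illustration.

```python
# Sketch of graph-theoretic connectivity analysis on a small hypothetical
# river network: a "dam" removes an edge, and component sizes and betweenness
# centrality are recomputed. The network is invented for illustration.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("headwater_A", "confluence_1"), ("headwater_B", "confluence_1"),
                  ("confluence_1", "mainstem_1"), ("mainstem_1", "mainstem_2"),
                  ("mainstem_2", "delta")])

def summarize(graph, label):
    sizes = sorted((len(c) for c in nx.connected_components(graph)), reverse=True)
    bc = nx.betweenness_centrality(graph)
    print(label, "component sizes:", sizes,
          "| most central node:", max(bc, key=bc.get))

summarize(G, "before dam:")
G.remove_edge("mainstem_1", "mainstem_2")   # the dam fragments the network
summarize(G, "after dam: ")
```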

  6. Quantifying human vitamin kinetics using AMS

    SciTech Connect

    Hillegonds, D; Dueker, S; Ognibene, T; Buchholz, B; Lin, Y; Vogel, J; Clifford, A

    2004-02-19

    Tracing vitamin kinetics at physiologic concentrations has been hampered by a lack of quantitative sensitivity for chemically equivalent tracers that could be used safely in healthy people. Instead, elderly or ill volunteers were sought for studies involving pharmacologic doses with radioisotopic labels. These studies fail to be relevant in two ways: vitamins are inherently micronutrients, whose biochemical paths are saturated and distorted by pharmacological doses; and while vitamins remain important for health in the elderly or ill, their greatest effects may be in preventing slow and cumulative diseases by proper consumption throughout youth and adulthood. Neither the target dose nor the target population are available for nutrient metabolic studies through decay counting of radioisotopes at high levels. Stable isotopic labels are quantified by isotope ratio mass spectrometry at levels that trace physiologic vitamin doses, but the natural background of stable isotopes severely limits the time span over which the tracer is distinguishable. Indeed, study periods seldom ranged over a single biological mean life of the labeled nutrients, failing to provide data on the important final elimination phase of the compound. Kinetic data for the absorption phase is similarly rare in micronutrient research because the phase is rapid, requiring many consecutive plasma samples for accurate representation. However, repeated blood samples of sufficient volume for precise stable or radio-isotope quantitations consume an indefensible amount of the volunteer's blood over a short period. Thus, vitamin pharmacokinetics in humans has often relied on compartmental modeling based upon assumptions and tested only for the short period of maximal blood circulation, a period that poorly reflects absorption or final elimination kinetics except for the most simple models.

  7. Quantifying the mixing due to bars

    NASA Astrophysics Data System (ADS)

    Sanchez-Blazquez, Patricia

    2015-03-01

    We will present star formation histories and the stellar and gaseous metallicity gradients in the disk of a sample of 50 face-on spiral galaxies with and without bars observed with the integral field unit spectrograph PMAS. The final aim is to quantify the redistribution of mass and angular momentum in the galactic disks due to bars by comparing both the gas-phase and star-phase metallicity gradients on the disk of barred and non-barred galaxies. Numerical simulations have shown that strong gravitational torque by non-axisymmetric components induce evolutionary processes such as redistribution of mass and angular momentum in the galactic disks (Sellwood & Binney 2002) and consequent change of chemical abundance profiles. If we hope to understand chemical evolution gradients and their evolution we must understand the secular processes and re-arrangement of material by non-axisymmetric components and vice-versa. Furthermore, the re-arrangement of stellar disk material influences the interpretation of various critical observed metrics of Galaxy evolution, including the age-metallicity relation in the solar neighborhood and the local G-dwarf metallicity distribution. Perhaps the most obvious of these aforementioned non-axisymmetric components are bars - at least 2/3 of spiral galaxies host a bar, and possibly all disk galaxies have hosted a bar at some point in their evolution. While observationally it has been found that barred galaxies have shallower gas-phase metallicity gradients than non-barred galaxies, a complementary analysis of the stellar abundance profiles has not yet been undertaken. This is unfortunate because the study of both gas and stars is important in providing a complete picture, as the two components undergo (and suffer from) very different evolutionary processes.

  8. Quantifying collective attention from tweet stream.

    PubMed

    Sasahara, Kazutoshi; Hirata, Yoshito; Toyoda, Masashi; Kitsuregawa, Masaru; Aihara, Kazuyuki

    2013-01-01

    Online social media are increasingly facilitating our social interactions, thereby making available a massive "digital fossil" of human behavior. Discovering and quantifying distinct patterns using these data is important for studying social behavior, although the rapid time-variant nature and large volumes of these data make this task difficult and challenging. In this study, we focused on the emergence of "collective attention" on Twitter, a popular social networking service. We propose a simple method for detecting and measuring the collective attention evoked by various types of events. This method exploits the fact that tweeting activity exhibits a burst-like increase and an irregular oscillation when a particular real-world event occurs; otherwise, it follows regular circadian rhythms. The difference between regular and irregular states in the tweet stream was measured using the Jensen-Shannon divergence, which corresponds to the intensity of collective attention. We then associated irregular incidents with their corresponding events that attracted the attention and elicited responses from large numbers of people, based on the popularity and the enhancement of key terms in posted messages or "tweets." Next, we demonstrate the effectiveness of this method using a large dataset that contained approximately 490 million Japanese tweets by over 400,000 users, in which we identified 60 cases of collective attentions, including one related to the Tohoku-oki earthquake. "Retweet" networks were also investigated to understand collective attention in terms of social interactions. This simple method provides a retrospective summary of collective attention, thereby contributing to the fundamental understanding of social behavior in the digital era. PMID:23637913
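
    The core measurement, the Jensen-Shannon divergence between a baseline (circadian) activity distribution and an observed one, can be sketched directly; the hourly count vectors below are invented.

```python
# Sketch of the core measurement: Jensen-Shannon divergence between a
# "regular" (circadian baseline) and an "irregular" distribution of tweet
# activity. The hourly count vectors are invented for illustration.
import numpy as np
from scipy.spatial.distance import jensenshannon

baseline = np.array([2, 1, 1, 1, 2, 4, 8, 12, 14, 15, 14, 12,
                     11, 12, 13, 14, 15, 16, 18, 20, 18, 12, 6, 3], float)
event = baseline.copy()
event[20:23] *= 6            # burst of attention in the late evening

p, q = baseline / baseline.sum(), event / event.sum()
jsd = jensenshannon(p, q) ** 2   # scipy returns the square root of the divergence
print(f"Jensen-Shannon divergence = {jsd:.4f}")
```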

  9. Quantifying Collective Attention from Tweet Stream

    PubMed Central

    Sasahara, Kazutoshi; Hirata, Yoshito; Toyoda, Masashi; Kitsuregawa, Masaru; Aihara, Kazuyuki

    2013-01-01

    Online social media are increasingly facilitating our social interactions, thereby making available a massive “digital fossil” of human behavior. Discovering and quantifying distinct patterns using these data is important for studying social behavior, although the rapid time-variant nature and large volumes of these data make this task difficult and challenging. In this study, we focused on the emergence of “collective attention” on Twitter, a popular social networking service. We propose a simple method for detecting and measuring the collective attention evoked by various types of events. This method exploits the fact that tweeting activity exhibits a burst-like increase and an irregular oscillation when a particular real-world event occurs; otherwise, it follows regular circadian rhythms. The difference between regular and irregular states in the tweet stream was measured using the Jensen-Shannon divergence, which corresponds to the intensity of collective attention. We then associated irregular incidents with their corresponding events that attracted the attention and elicited responses from large numbers of people, based on the popularity and the enhancement of key terms in posted messages or “tweets.” Next, we demonstrate the effectiveness of this method using a large dataset that contained approximately 490 million Japanese tweets by over 400,000 users, in which we identified 60 cases of collective attentions, including one related to the Tohoku-oki earthquake. “Retweet” networks were also investigated to understand collective attention in terms of social interactions. This simple method provides a retrospective summary of collective attention, thereby contributing to the fundamental understanding of social behavior in the digital era. PMID:23637913

  10. Path Similarity Analysis: A Method for Quantifying Macromolecular Pathways

    PubMed Central

    Seyler, Sean L.; Kumar, Avishek; Thorpe, M. F.; Beckstein, Oliver

    2015-01-01

    Diverse classes of proteins function through large-scale conformational changes and various sophisticated computational algorithms have been proposed to enhance sampling of these macromolecular transition paths. Because such paths are curves in a high-dimensional space, it has been difficult to quantitatively compare multiple paths, a necessary prerequisite to, for instance, assess the quality of different algorithms. We introduce a method named Path Similarity Analysis (PSA) that enables us to quantify the similarity between two arbitrary paths and extract the atomic-scale determinants responsible for their differences. PSA utilizes the full information available in 3N-dimensional configuration space trajectories by employing the Hausdorff or Fréchet metrics (adopted from computational geometry) to quantify the degree of similarity between piecewise-linear curves. It thus completely avoids relying on projections into low dimensional spaces, as used in traditional approaches. To elucidate the principles of PSA, we quantified the effect of path roughness induced by thermal fluctuations using a toy model system. Using, as an example, the closed-to-open transitions of the enzyme adenylate kinase (AdK) in its substrate-free form, we compared a range of protein transition path-generating algorithms. Molecular dynamics-based dynamic importance sampling (DIMS) MD and targeted MD (TMD) and the purely geometric FRODA (Framework Rigidity Optimized Dynamics Algorithm) were tested along with seven other methods publicly available on servers, including several based on the popular elastic network model (ENM). PSA with clustering revealed that paths produced by a given method are more similar to each other than to those from another method and, for instance, that the ENM-based methods produced relatively similar paths. PSA applied to ensembles of DIMS MD and FRODA trajectories of the conformational transition of diphtheria toxin, a particularly challenging example, showed that the geometry-based FRODA occasionally sampled the pathway space of force field-based DIMS MD. For the AdK transition, the new concept of a Hausdorff-pair map enabled us to extract the molecular structural determinants responsible for differences in pathways, namely a set of conserved salt bridges whose charge-charge interactions are fully modelled in DIMS MD but not in FRODA. PSA has the potential to enhance our understanding of transition path sampling methods, validate them, and to provide a new approach to analyzing conformational transitions. PMID:26488417
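
    The distance underlying PSA can be illustrated on toy data: the sketch below computes the symmetric Hausdorff distance between two short 2-D paths. Real usage would operate on full 3N-dimensional configuration-space trajectories; the toy paths here are invented.

```python
# Sketch of comparing two trajectories with the (symmetric) Hausdorff metric,
# one of the distances underlying PSA, using two short 2-D toy paths.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

path_a = np.array([[0, 0], [1, 0.1], [2, 0.0], [3, 0.2], [4, 0.0]], float)
path_b = np.array([[0, 0], [1, 0.8], [2, 1.0], [3, 0.7], [4, 0.1]], float)

hausdorff = max(directed_hausdorff(path_a, path_b)[0],
                directed_hausdorff(path_b, path_a)[0])
print(f"Hausdorff distance between paths = {hausdorff:.2f}")
```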

  11. Quantifying the Relationship Between Financial News and the Stock Market

    PubMed Central

    Alanyali, Merve; Moat, Helen Susannah; Preis, Tobias

    2013-01-01

    The complex behavior of financial markets emerges from decisions made by many traders. Here, we exploit a large corpus of daily print issues of the Financial Times from 2nd January 2007 until 31st December 2012 to quantify the relationship between decisions taken in financial markets and developments in financial news. We find a positive correlation between the daily number of mentions of a company in the Financial Times and the daily transaction volume of a company's stock both on the day before the news is released, and on the same day as the news is released. Our results provide quantitative support for the suggestion that movements in financial markets and movements in financial news are intrinsically interlinked. PMID:24356666
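
    At its core the analysis is a correlation between two daily time series; the sketch below computes a Pearson correlation between simulated mention counts and trading volumes, as a stand-in for the Financial Times and market data actually used.

```python
# Sketch: Pearson correlation between daily news-mention counts and daily
# trading volume. Both series are simulated stand-ins for the real data.
import numpy as np

rng = np.random.default_rng(7)
mentions = rng.poisson(5, 250).astype(float)                 # daily article mentions
volume = 1e6 + 2e5 * mentions + rng.normal(0, 3e5, 250)      # correlated volume

r = np.corrcoef(mentions, volume)[0, 1]
print(f"Pearson correlation between mentions and volume: r = {r:.2f}")
```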

  12. Quantifying the Impact of Unavailability in Cyber-Physical Environments

    SciTech Connect

    Aissa, Anis Ben; Abercrombie, Robert K; Sheldon, Frederick T.; Mili, Ali

    2014-01-01

    The Supervisory Control and Data Acquisition (SCADA) system discussed in this work manages a distributed control network for the Tunisian Electric & Gas Utility. The network is dispersed over a large geographic area that monitors and controls the flow of electricity/gas from both remote and centralized locations. The availability of the SCADA system in this context is critical to ensuring the uninterrupted delivery of energy, including safety, security, continuity of operations and revenue. Such SCADA systems are the backbone of national critical cyber-physical infrastructures. Herein, we propose adapting the Mean Failure Cost (MFC) metric for quantifying the cost of unavailability. This new metric combines the classic availability formulation with MFC. The resulting metric, so-called Econometric Availability (EA), offers a computational basis to evaluate a system in terms of the gain/loss ($/hour of operation) that affects each stakeholder due to unavailability.

  13. Quantifying the Behavior of Stock Correlations Under Market Stress

    PubMed Central

    Preis, Tobias; Kenett, Dror Y.; Stanley, H. Eugene; Helbing, Dirk; Ben-Jacob, Eshel

    2012-01-01

    Understanding correlations in complex systems is crucial in the face of turbulence, such as the ongoing financial crisis. However, in complex systems, such as financial systems, correlations are not constant but instead vary in time. Here we address the question of quantifying state-dependent correlations in stock markets. Reliable estimates of correlations are absolutely necessary to protect a portfolio. We analyze 72 years of daily closing prices of the 30 stocks forming the Dow Jones Industrial Average (DJIA). We find the striking result that the average correlation among these stocks scales linearly with market stress reflected by normalized DJIA index returns on various time scales. Consequently, the diversification effect which should protect a portfolio melts away in times of market losses, just when it would most urgently be needed. Our empirical analysis is consistent with the interesting possibility that one could anticipate diversification breakdowns, guiding the design of protected portfolios. PMID:23082242
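
    The bookkeeping behind the reported scaling can be sketched as follows: compute the mean pairwise correlation of stock returns in a sliding window alongside a normalized index-return proxy for market stress. The simulated returns will not reproduce the empirical relationship; the sketch only shows the computation.

```python
# Sketch of the computation: mean pairwise correlation of stock returns in a
# sliding window, alongside a normalized index-return "stress" proxy for the
# same window. Returns are simulated, so the toy data will not reproduce the
# empirical scaling reported in the paper.
import numpy as np

rng = np.random.default_rng(4)
n_days, n_stocks, window = 1000, 10, 60
market = rng.normal(0, 0.01, n_days)
returns = 0.6 * market[:, None] + rng.normal(0, 0.01, (n_days, n_stocks))

mean_corr, stress = [], []
for t in range(window, n_days):
    r = returns[t - window:t]
    corr = np.corrcoef(r, rowvar=False)
    mean_corr.append(corr[np.triu_indices(n_stocks, k=1)].mean())
    stress.append(market[t - window:t].mean() / market.std())  # normalized index return

print("windows:", len(mean_corr),
      "| average pairwise correlation:", round(float(np.mean(mean_corr)), 2))
```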

  14. Quantifying the Behavior of Stock Correlations Under Market Stress

    NASA Astrophysics Data System (ADS)

    Preis, Tobias; Kenett, Dror Y.; Stanley, H. Eugene; Helbing, Dirk; Ben-Jacob, Eshel

    2012-10-01

    Understanding correlations in complex systems is crucial in the face of turbulence, such as the ongoing financial crisis. However, in complex systems, such as financial systems, correlations are not constant but instead vary in time. Here we address the question of quantifying state-dependent correlations in stock markets. Reliable estimates of correlations are absolutely necessary to protect a portfolio. We analyze 72 years of daily closing prices of the 30 stocks forming the Dow Jones Industrial Average (DJIA). We find the striking result that the average correlation among these stocks scales linearly with market stress reflected by normalized DJIA index returns on various time scales. Consequently, the diversification effect which should protect a portfolio melts away in times of market losses, just when it would most urgently be needed. Our empirical analysis is consistent with the interesting possibility that one could anticipate diversification breakdowns, guiding the design of protected portfolios.

  15. Quantifying the Relationship Between Financial News and the Stock Market

    NASA Astrophysics Data System (ADS)

    Alanyali, Merve; Moat, Helen Susannah; Preis, Tobias

    2013-12-01

    The complex behavior of financial markets emerges from decisions made by many traders. Here, we exploit a large corpus of daily print issues of the Financial Times from 2nd January 2007 until 31st December 2012 to quantify the relationship between decisions taken in financial markets and developments in financial news. We find a positive correlation between the daily number of mentions of a company in the Financial Times and the daily transaction volume of a company's stock both on the day before the news is released, and on the same day as the news is released. Our results provide quantitative support for the suggestion that movements in financial markets and movements in financial news are intrinsically interlinked.

  16. Toward quantifying uncertainty in travel time tomography using the null-space shuttle

    E-print Network

    Utrecht, Universiteit

    Toward quantifying uncertainty in travel time tomography using the null-space shuttle R. W. L. de the null-space of the forward operator. We show that with the null-space shuttle it is possible to assess in travel time tomography using the null-space shuttle, J. Geophys. Res., 117, B03301, doi:10.1029/2011JB

  17. THE VARIABILITY INDEX: A NEW AND NOVEL METRIC FOR QUANTIFYING IRRADIANCE AND PV OUTPUT VARIABILITY

    E-print Network

    between different sites and climates. This paper proposes a metric for quantifying irradiance variability are changed to mitigate these effects. This paper develops and evaluates a simple yet novel approach measurement intervals. By evaluating the variability index at several sites, we show how annual and monthly

  18. gene encoding enhanced green fluorescent protein to the repressor gene, and quantify

    E-print Network

    Weeks, Eric R.

    gene encoding enhanced green fluorescent protein to the repressor gene, and quantify of gene expression in the feedback network, compared with the control networks. They also show concentrations of anhydrotetracycline--a chemical inhibitor of TetR. In past theoretical studies of gene

  19. Quantifying the Nonlinear, Anisotropic Material Response of Spinal Ligaments

    NASA Astrophysics Data System (ADS)

    Robertson, Daniel J.

    Spinal ligaments may be a significant source of chronic back pain, yet they are often disregarded by the clinical community due to a lack of information with regards to their material response, and innervation characteristics. The purpose of this dissertation was to characterize the material response of spinal ligaments and to review their innervation characteristics. Review of relevant literature revealed that all of the major spinal ligaments are innervated. They cause painful sensations when irritated and provide reflexive control of the deep spinal musculature. As such, including the neurologic implications of iatrogenic ligament damage in the evaluation of surgical procedures aimed at relieving back pain will likely result in more effective long-term solutions. The material response of spinal ligaments has not previously been fully quantified due to limitations associated with standard soft tissue testing techniques. The present work presents and validates a novel testing methodology capable of overcoming these limitations. In particular, the anisotropic, inhomogeneous material constitutive properties of the human supraspinous ligament are quantified and methods for determining the response of the other spinal ligaments are presented. In addition, a method for determining the anisotropic, inhomogeneous pre-strain distribution of the spinal ligaments is presented. The multi-axial pre-strain distributions of the human anterior longitudinal ligament, ligamentum flavum and supraspinous ligament were determined using this methodology. Results from this work clearly demonstrate that spinal ligaments are not uniaxial structures, and that finite element models which account for pre-strain and incorporate ligament's complex material properties may provide increased fidelity to the in vivo condition.

  20. Quantifying radial diffusion coefficients of radiation belt electrons based on global MHD simulation and spacecraft measurements

    NASA Astrophysics Data System (ADS)

    Tu, Weichao; Elkington, Scot R.; Li, Xinlin; Liu, Wenlong; Bonnell, J.

    2012-10-01

    Radial diffusion is one of the most important acceleration mechanisms for radiation belt electrons, which can be enhanced by drift-resonant interactions with large-scale fluctuations of the magnetosphere's magnetic and electric fields (Pc5 range of ULF waves). In order to physically quantify the radial diffusion coefficient, DLL, we run the global Lyon-Fedder-Mobarry (LFM) MHD simulations to obtain the mode structure and power spectrum of the ULF waves and validate the simulation results with available satellite measurements. The calculated diffusion coefficients, directly from the MHD fields over a Corotating Interaction Region (CIR) storm in March 2008, are generally higher when solar wind dynamic pressure is enhanced or the AE index is high. Contrary to conventional understanding, our results show that inside geosynchronous orbit the total diffusion coefficient from MHD fields is dominated by the contribution from electric field perturbations, rather than from the magnetic field perturbations. The calculated diffusion coefficient has a physical dependence on μ (or electron energy) and L, which is missing in the empirical diffusion coefficient DLL(Kp), specified as a function of the Kp index; DLL(Kp) is generally greater than our calculated DLL during the storm event. Validation of the MHD ULF waves by spacecraft field data shows that for this event the LFM code reasonably well reproduces the Bz wave power observed by GOES and THEMIS satellites, while the Eφ power observed by THEMIS probes is generally underestimated by the LFM fields, on average by about a factor of ten.

  1. Quantifying the leakage of quantum protocols for classical two-party cryptography

    NASA Astrophysics Data System (ADS)

    Salvail, Louis; Schaffner, Christian; Sotáková, Miroslava

    2015-12-01

    We study quantum protocols among two distrustful parties. By adopting a rather strict definition of correctness — guaranteeing that honest players obtain their correct outcomes only — we can show that every strictly correct quantum protocol implementing a non-trivial classical primitive necessarily leaks information to a dishonest player. This extends known impossibility results to all non-trivial primitives. We provide a framework for quantifying this leakage and argue that leakage is a good measure for the privacy provided to the players by a given protocol. Our framework also covers the case where the two players are helped by a trusted third party. We show that despite the help of a trusted third party, the players cannot amplify the cryptographic power of any primitive. All our results hold even against quantum honest-but-curious adversaries who honestly follow the protocol but purify their actions and apply a different measurement at the end of the protocol. As concrete examples, we establish lower bounds on the leakage of standard universal two-party primitives such as oblivious transfer.

  2. The Local Dimension: a method to quantify the Cosmic Web

    NASA Astrophysics Data System (ADS)

    Sarkar, Prakash; Bharadwaj, Somnath

    2009-03-01

    It is now well accepted that the galaxies are distributed in filaments, sheets and clusters, all of which form an interconnected network known as the Cosmic Web. It is a big challenge to quantify the shapes of the interconnected structural elements that form this network. Tools like the Minkowski functionals which use global properties, though well-suited for an isolated object like a single sheet or filament, are not suited for an interconnected network of such objects. We consider the Local Dimension D, defined through N(R) = A R^D, where N(R) is the galaxy number count within a sphere of comoving radius R centred on a particular galaxy, as a tool to locally quantify the shape in the neighbourhood of different galaxies along the Cosmic Web. We expect D ~ 1, 2 and 3 for a galaxy located in a filament, sheet and cluster, respectively. Using LCDM N-body simulations, we find that it is possible to determine D through a power-law fit to N(R) across the length-scales 2 to 10 Mpc for ~33 per cent of the galaxies. We have visually identified the filaments and sheets corresponding to many of the galaxies with D ~ 1 and 2, respectively. In several other situations, the structure responsible for the D value could not be visually identified, either due to it being tenuous or due to other dominating structures in the vicinity. We also show that the global distribution of the D values can be used to visualize and interpret how the different structural elements are woven into the Cosmic Web.
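
    Because the Local Dimension is defined by N(R) = A R^D, it can be estimated with a straight-line fit in log-log space; the sketch below does this on synthetic counts generated with D = 2 (a sheet-like environment).

```python
# Sketch of estimating the Local Dimension D by a power-law fit N(R) = A * R^D
# over 2-10 Mpc, i.e. a straight-line fit in log-log space. The counts are
# synthetic, generated with D = 2 plus multiplicative noise.
import numpy as np

rng = np.random.default_rng(5)
R = np.linspace(2.0, 10.0, 9)                       # comoving radius in Mpc
N_true = 3.0 * R**2.0                               # A = 3, D = 2
N_obs = N_true * rng.lognormal(0.0, 0.05, size=R.size)

D_hat, logA_hat = np.polyfit(np.log(R), np.log(N_obs), deg=1)
print(f"Estimated Local Dimension D = {D_hat:.2f} (A = {np.exp(logA_hat):.1f})")
```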

  3. Quantifying induced effects of subsurface renewable energy storage

    NASA Astrophysics Data System (ADS)

    Bauer, Sebastian; Beyer, Christof; Pfeiffer, Tilmann; Boockmeyer, Anke; Popp, Steffi; Delfs, Jens-Olaf; Wang, Bo; Li, Dedong; Dethlefsen, Frank; Dahmke, Andreas

    2015-04-01

    New methods and technologies for energy storage are required for the transition to renewable energy sources. Subsurface energy storage systems such as salt caverns or porous formations offer the possibility of hosting large amounts of energy or substance. When employing these systems, an adequate system and process understanding is required in order to assess the feasibility of the individual storage option at the respective site and to predict the complex and interacting effects induced. This understanding is the basis for assessing the potential as well as the risks connected with a sustainable usage of these storage options, especially when considering possible mutual influences. For achieving this aim, in this work synthetic scenarios for the use of the geological underground as an energy storage system are developed and parameterized. The scenarios are designed to represent typical conditions in North Germany. The types of subsurface use investigated here include gas storage and heat storage in porous formations. The scenarios are numerically simulated and interpreted with regard to risk analysis and effect forecasting. For this, the numerical simulators Eclipse and OpenGeoSys are used. The latter is enhanced to include the required coupled hydraulic, thermal, geomechanical and geochemical processes. Using the simulated and interpreted scenarios, the induced effects are quantified individually and monitoring concepts for observing these effects are derived. This presentation will detail the general investigation concept used and analyze the parameter availability for this type of model applications. Then the process implementation and numerical methods required and applied for simulating the induced effects of subsurface storage are detailed and explained. Application examples show the developed methods and quantify induced effects and storage sizes for the typical settings parameterized. This work is part of the ANGUS+ project, funded by the German Ministry of Education and Research (BMBF).

  4. Walljet Electrochemistry: Quantifying Molecular Transport through Metallopolymeric and Zirconium

    E-print Network

    Walljet Electrochemistry: Quantifying Molecular Transport through Metallopolymeric and Zirconium electrochemistry to the study of molecular transport through model metallopolymeric films on indium tin oxide

  5. UV Photography Shows Hidden Sun Damage

    MedlinePLUS

    UV photography shows hidden sun damage. A UV photograph gives ... developing skin cancer and prematurely aged skin. Comparison images: normal photography vs. UV photography. 18 months of age: This boy's ...

  6. New Hampshire Guide 4H Dog Shows

    E-print Network

    New Hampshire, University of

    New Hampshire Guide to 4H Dog Shows, UNH Cooperative Extension 4H Youth. Contents: Purpose of the 4H Dog Project; 4H Dog Show General Information ...

  7. Quantifying the impacts of global disasters

    NASA Astrophysics Data System (ADS)

    Jones, L. M.; Ross, S.; Wilson, R. I.; Borrero, J. C.; Brosnan, D.; Bwarie, J. T.; Geist, E. L.; Hansen, R. A.; Johnson, L. A.; Kirby, S. H.; Long, K.; Lynett, P. J.; Miller, K. M.; Mortensen, C. E.; Perry, S. C.; Porter, K. A.; Real, C. R.; Ryan, K. J.; Thio, H. K.; Wein, A. M.; Whitmore, P.; Wood, N. J.

    2012-12-01

    The US Geological Survey, National Oceanic and Atmospheric Administration, California Geological Survey, and other entities are developing a Tsunami Scenario, depicting a realistic outcome of a hypothetical but plausible large tsunami originating in the eastern Aleutian Arc, affecting the west coast of the United States, including Alaska and Hawaii. The scenario includes earth-science effects, damage and restoration of the built environment, and social and economic impacts. Like the earlier ShakeOut and ARkStorm disaster scenarios, the purpose of the Tsunami Scenario is to apply science to quantify the impacts of natural disasters in a way that can be used by decision makers in the affected sectors to reduce the potential for loss. Most natural disasters are local. A major hurricane can destroy a city or damage a long swath of coastline while mostly sparing inland areas. The largest earthquake on record caused strong shaking along 1500 km of Chile, but left the capital relatively unscathed. Previous scenarios have used the local nature of disasters to focus interaction with the user community. However, the capacity for global disasters is growing with the interdependency of the global economy. Earthquakes have disrupted global computer chip manufacturing and caused stock market downturns. Tsunamis, however, can be global in their extent and direct impact. Moreover, the vulnerability of seaports to tsunami damage can increase the global consequences. The Tsunami Scenario is trying to capture the widespread effects while maintaining the close interaction with users that has been one of the most successful features of the previous scenarios. The scenario tsunami occurs in the eastern Aleutians with a source similar to the 2011 Tohoku event. Geologic similarities support the argument that a Tohoku-like source is plausible in Alaska. It creates a major nearfield tsunami in the Aleutian arc and peninsula, a moderate tsunami in the US Pacific Northwest, large but not the maximum in Hawaii, and the largest plausible tsunami in southern California. To support the analysis of global impacts, we begin with the Ports of Los Angeles and Long Beach which account for >40% of the imports to the United States. We expand from there throughout California for the first level economic analysis. We are looking to work with Alaska and Hawaii, especially on similar economic issues in ports, over the next year and to expand the analysis to consideration of economic interactions between the regions.

  8. Should Air Bubble Detectors Be Used to Quantify Microbubble Activity during Cardiopulmonary Bypass?

    PubMed Central

    Newland, Richard F.; Baker, Robert A.; Mazzone, Annette L.; Valiyapurayil, Vijaykumar N.

    2015-01-01

    Abstract: Air bubble detectors (ABDs) are utilized during cardiopulmonary bypass (CPB) to protect against massive air embolism. Stockert (Munich, Germany) ABDs quantify microbubbles >300 μm; however, their reliability has not been reported. The aim of this study was to assess the reliability of the microbubble data from the ABD with the SIII and S5 heart–lung machines. Microbubble counts from the ABD with the SIII (SIII ABD) and S5 (S5 ABD) were measured simultaneously with the emboli detection and classification (EDAC) quantifier in 12 CPB procedures using two EDAC detectors and two ABDs in series in the arterial line. Reliability was assessed by the Spearman correlation coefficient (r) between measurements for each detector type, and between each ABD and EDAC detector for counts >300 μm. No correlation was found between the SIII ABDs (r = .008, p = .793). A weak negative correlation was found with the S5 ABDs (r = −.16, p < .001). A strong correlation was found between the EDAC detectors (SIII: r = .958, p < .001; S5: r = .908, p < .001). With counts >300 μm, the SIII ABDs showed a correlation of small–medium effect size between EDAC detectors and ABD1 (r = .286, p < .001 [EDAC1]; r = .347, p < .001 [EDAC2]). No correlation was found between ABD2 and either EDAC detector (r = .003, p = .925 [EDAC1]; r = .003, p = .929 [EDAC2]). A correlation between EDAC and the S5 ABD was not able to be determined due to the low bubble count detected by the EDAC >300 μm. Both SIII ABD and S5 ABD were found to be unreliable for quantification of microbubble activity during CPB in comparison with the EDAC. These results highlight the importance of ensuring that data included in the CPB report are accurate and clinically relevant, and suggest that microbubble counts from devices such as the SIII ABD and S5 ABD should not be reported. PMID:26543252
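
    The reliability check described above reduces to a rank correlation between simultaneous counts from two detectors. A minimal sketch, assuming made-up per-interval counts rather than the study's data:

```python
# Spearman rank correlation between paired microbubble counts from two detectors.
# The arrays are placeholder data, not values from the study.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
counts_abd = rng.poisson(lam=5, size=200)    # hypothetical ABD counts per interval
counts_edac = rng.poisson(lam=50, size=200)  # hypothetical EDAC counts per interval

r, p = spearmanr(counts_abd, counts_edac)
print(f"Spearman r = {r:.3f}, p = {p:.3g}")
```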

  9. Signal enhancement ratio (SER) quantified from breast DCE-MRI and breast cancer risk

    NASA Astrophysics Data System (ADS)

    Wu, Shandong; Kurland, Brenda F.; Berg, Wendie A.; Zuley, Margarita L.; Jankowitz, Rachel C.; Sumkin, Jules; Gur, David

    2015-03-01

    Breast magnetic resonance imaging (MRI) is recommended as an adjunct to mammography for women who are considered at elevated risk of developing breast cancer. As a key component of breast MRI, dynamic contrast-enhanced MRI (DCE-MRI) uses a contrast agent to provide high intensity contrast between breast tissues, making it sensitive to tissue composition and vascularity. Breast DCE-MRI characterizes certain physiologic properties of breast tissue that are potentially related to breast cancer risk. Studies have shown that increased background parenchymal enhancement (BPE), which is the contrast enhancement occurring in normal cancer-unaffected breast tissues in post-contrast sequences, predicts increased breast cancer risk. Signal enhancement ratio (SER) computed from pre-contrast and post-contrast sequences in DCE-MRI measures change in signal intensity due to contrast uptake over time and is a measure of contrast enhancement kinetics. SER quantified in breast tumors has shown potential as a biomarker for characterizing tumor response to treatments. In this work we investigated the relationship between quantitative measures of SER and breast cancer risk. A pilot retrospective case-control study was performed using a cohort of 102 women, consisting of 51 women who had been diagnosed with unilateral breast cancer and 51 matched controls (by age and MRI date) with a unilateral biopsy-proven benign lesion. SER was quantified using fully-automated computerized algorithms and three SER-derived quantitative volume measures were compared between the cancer cases and controls using logistic regression analysis. Our preliminary results showed that SER is associated with breast cancer risk, after adjustment for the Breast Imaging Reporting and Data System (BI-RADS)-based mammographic breast density measures. This pilot study indicated that SER has potential for use as a risk factor for breast cancer risk assessment in women at elevated risk of developing breast cancer.
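
    For readers unfamiliar with the metric, SER is conventionally computed voxelwise from three DCE-MRI timepoints as SER = (S1 - S0)/(S2 - S0), with S0 the pre-contrast, S1 the early post-contrast and S2 the delayed post-contrast signal. A minimal sketch of that computation; the array names and the masking step are illustrative choices, not the authors' pipeline:

```python
# Voxelwise signal enhancement ratio (SER): (S1 - S0) / (S2 - S0).
import numpy as np

def compute_ser(s0, s1, s2, eps=1e-6):
    """Return a voxelwise SER map; voxels with negligible denominator are zeroed."""
    denom = s2 - s0
    return np.where(np.abs(denom) > eps, (s1 - s0) / denom, 0.0)

# Hypothetical 3-timepoint DCE-MRI volumes (in practice loaded from DICOM/NIfTI).
s0 = np.random.rand(64, 64, 32) * 100        # pre-contrast
s1 = s0 + np.random.rand(64, 64, 32) * 80    # early post-contrast
s2 = s0 + np.random.rand(64, 64, 32) * 60    # delayed post-contrast
ser_map = compute_ser(s0, s1, s2)
print(ser_map.shape, float(ser_map.mean()))
```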

  10. Quantifying the kinetic stability of hyperstable proteins via time-dependent SDS trapping.

    PubMed

    Xia, Ke; Zhang, Songjie; Bathrick, Brendan; Liu, Shuangqi; Garcia, Yeidaliz; Colón, Wilfredo

    2012-01-10

    Globular proteins are usually in equilibrium with unfolded conformations, whereas kinetically stable proteins (KSPs) are conformationally trapped by their high unfolding transition state energy. Kinetic stability (KS) could allow proteins to maintain their activity under harsh conditions, increase a protein's half-life, or protect against misfolding-aggregation. Here we show the development of a simple method for quantifying a protein's KS that involves incubating a protein in SDS at high temperature as a function of time, running the unheated samples on SDS-PAGE, and quantifying the bands to determine the time-dependent loss of a protein's SDS resistance. Six diverse proteins, including two monomers, two dimers, and two tetramers, were studied by this method, and the kinetics of the loss of SDS resistance correlated linearly with their unfolding rate determined by circular dichroism. These results imply that the mechanism by which SDS denatures proteins involves conformational trapping, with a trapping rate that is determined and limited by the rate of protein unfolding. We applied the SDS trapping of proteins (S-TraP) method to superoxide dismutase (SOD) and transthyretin (TTR), which are highly kinetically stable proteins with native unfolding rates that are difficult to measure by conventional spectroscopic methods. A combination of S-TraP experiments between 75 and 90 °C combined with Eyring plot analysis yielded an unfolding half-life of 70 ± 37 and 18 ± 6 days at 37 °C for SOD and TTR, respectively. The S-TraP method shown here is extremely accessible, sample-efficient, cost-effective, compatible with impure or complex samples, and will be useful for exploring the biological and pathological roles of kinetic stability. PMID:22106876
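
    The extrapolation step mentioned above (Eyring plot analysis of high-temperature unfolding rates to estimate a 37 °C half-life) amounts to a linear fit of ln(k/T) against 1/T. The rate constants below are invented for illustration, not the SOD or TTR data:

```python
# Eyring-plot extrapolation: ln(k/T) = ln(kB/h) + dS/R - dH/(R*T) is linear in 1/T,
# so a fit at high temperature can be extrapolated to 37 degrees C.
import numpy as np

T = np.array([348.15, 353.15, 358.15, 363.15])   # 75-90 degrees C, in kelvin
k = np.array([2e-5, 6e-5, 1.8e-4, 5e-4])         # hypothetical unfolding rates, s^-1

slope, intercept = np.polyfit(1.0 / T, np.log(k / T), 1)

T37 = 310.15
k37 = T37 * np.exp(intercept + slope / T37)      # extrapolated unfolding rate at 37 degrees C
half_life_days = np.log(2) / k37 / 86400
print(f"Extrapolated unfolding half-life at 37 C: {half_life_days:.0f} days")
```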

  11. Understanding and Quantifying Controls of Arsenic Mobility during Deepwell Re-injection of CSG Waters

    NASA Astrophysics Data System (ADS)

    Davis, J. A.; Rathi, B.; Prommer, H.; Donn, M.; Siade, A. J.; Berg, M.

    2014-12-01

    In Australia, the injection of reverse-osmosis treated production water from coal seams into the surrounding, deep aquifers may provide the most viable method to dispose of large quantities of production water. The geochemical disequilibrium between the injectant water composition and the target aquifer can potentially drive a range of water-sediment interactions that must be clearly understood and quantified in order to anticipate and manage future water quality changes at both the local and regional scale. In this study, we use a multi-scale geochemical characterisation of a proposed reinjection site in combination with geochemical/reactive transport modeling to understand and predict the long-term fate of arsenic, and explore means for suitably mitigating an undesired increase of naturally occurring arsenic concentrations. We use a series of arsenic sorption experiments with the aquifer material from an injection trial site in Queensland, Australia to quantify As sorption/desorption from mineral surfaces in response to changes in site-specific geochemical conditions. Batch experiments with arsenite were performed under anoxic conditions to replicate the highly reducing in-situ conditions. The results showed significant arsenic mobility at pH >8. Competitive sorption effects with phosphate and the impact of varying temperatures were also tested in batch mode. A site-specific general composite (GC) surface complexation model (SCM) was derived through inverse geochemical modeling, i.e., selection of appropriate surface complexation reactions and optimization of sorption constants. The SCM was subsequently tested and further improved during the interpretation of data from column flow-through experiments and from a field injection trial. Finally, the uncertainty associated with estimates of sorption constants was addressed and the effects of this uncertainty on field-scale model predictions were analyzed.
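
    As a much-simplified stand-in for the parameter-estimation step, the sketch below fits a Langmuir isotherm to hypothetical batch sorption data; the actual study optimised surface complexation constants within a geochemical/reactive transport code, which this does not reproduce:

```python
# Langmuir isotherm fit to hypothetical batch sorption data (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k_l):
    """Sorbed concentration as a function of equilibrium solution concentration."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

c_eq = np.array([0.5, 1, 2, 5, 10, 20])            # hypothetical As(III) in solution, umol/L
q_obs = np.array([0.8, 1.4, 2.2, 3.4, 4.1, 4.6])   # hypothetical sorbed As, umol/g

(q_max, k_l), _ = curve_fit(langmuir, c_eq, q_obs, p0=[5.0, 0.5])
print(f"Fitted q_max = {q_max:.2f} umol/g, K_L = {k_l:.2f} L/umol")
```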

  12. Quantifying fluvial topography using UAS imagery and SfM photogrammetry

    NASA Astrophysics Data System (ADS)

    Woodget, Amy; Carbonneau, Patrice; Visser, Fleur; Maddock, Ian; Habit, Evelyn

    2014-05-01

    The measurement and monitoring of fluvial topography at high spatial and temporal resolutions is in increasing demand for a range of river science and management applications, including change detection, hydraulic models, habitat assessments, river restorations and sediment budgets. Existing approaches are yet to provide a single technique for rapidly quantifying fluvial topography in both exposed and submerged areas, with high spatial resolution, reach-scale continuous coverage, high accuracy and reasonable cost. In this paper, we explore the potential of using imagery acquired from a small unmanned aerial system (UAS) and processed using Structure-from-Motion (SfM) photogrammetry for filling this gap. We use a rotary winged hexacopter known as the Draganflyer X6, a consumer grade digital camera (Panasonic Lumix DMC-LX3) and the commercially available PhotoScan Pro SfM software (Agisoft LLC). We test the approach on three contrasting river systems; a shallow margin of the San Pedro River in the Valdivia region of south-central Chile, the lowland River Arrow in Warwickshire, UK, and the upland Coledale Beck in Cumbria, UK. Digital elevation models (DEMs) and orthophotos of hyperspatial resolution (0.01-0.02m) are produced. Mean elevation errors are found to vary somewhat between sites, dependent on vegetation coverage and the spatial arrangement of ground control points (GCPs) used to georeference the data. Mean errors are in the range 4-44mm for exposed areas and 17-89mm for submerged areas. Errors in submerged areas can be improved to 4-56mm with the application of a simple refraction correction procedure. Multiple surveys of the River Arrow site show consistently high quality results, indicating the repeatability of the approach. This work therefore demonstrates the potential of a UAS-SfM approach for quantifying fluvial topography.
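
    The "simple refraction correction procedure" for submerged areas is commonly implemented by multiplying apparent (SfM-derived) water depths by the refractive index of clear water (~1.34); whether this matches the authors' exact procedure is an assumption. A minimal sketch:

```python
# Constant refraction correction for SfM bathymetry: true depth ~ apparent depth * 1.34.
import numpy as np

REFRACTIVE_INDEX = 1.34

def correct_submerged_dem(dem, water_surface):
    """Lower submerged bed elevations using a constant refraction correction."""
    apparent_depth = water_surface - dem
    submerged = apparent_depth > 0
    corrected = dem.copy()
    corrected[submerged] = (water_surface[submerged]
                            - apparent_depth[submerged] * REFRACTIVE_INDEX)
    return corrected

dem = np.array([[99.8, 99.6], [100.2, 100.4]])   # hypothetical SfM elevations (m)
water_surface = np.full_like(dem, 100.0)         # hypothetical water surface elevation (m)
print(correct_submerged_dem(dem, water_surface))
```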

  13. Learned control over spinal nociception reduces supraspinal nociception as quantified by late somatosensory evoked potentials.

    PubMed

    Ruscheweyh, Ruth; Bäumler, Maximilian; Feller, Moritz; Krafft, Stefanie; Sommer, Jens; Straube, Andreas

    2015-12-01

    We have recently shown that subjects can learn to use cognitive-emotional strategies to suppress their spinal nociceptive flexor reflex (RIII reflex) under visual RIII feedback and proposed that this reflects learned activation of descending pain inhibition. Here, we investigated whether learned RIII suppression also affects supraspinal nociception and whether previous relaxation training increases success. Subjects were trained over 3 sessions to reduce their RIII size by self-selected cognitive-emotional strategies. Two groups received true RIII feedback (with or without previous relaxation training) and a sham group received false feedback (15 subjects per group). RIII reflexes, late somatosensory evoked potentials (SEPs), and F-waves were recorded and pain intensity ratings collected. Both true feedback groups achieved significant (P < 0.01) but similar RIII suppression (to 79% ± 21% and 70% ± 17% of control). Somatosensory evoked potential amplitude (100-150 milliseconds after stimulation) was reduced in parallel with the RIII size (r = 0.57, P < 0.01). In the sham group, neither RIII size nor SEP amplitude was significantly reduced during feedback training. Pain intensity was significantly reduced in all 3 groups and also correlated with RIII reduction (r = 0.44, P < 0.01). F-wave parameters were not affected during RIII suppression. The present results show that learned RIII suppression also affects supraspinal nociception as quantified by SEPs, although effects on pain ratings were less clear. Lower motor neuron excitability as quantified by F-waves was not affected. Previous relaxation training did not significantly improve RIII feedback training success. PMID:26270584

  14. Quantifying forearm muscle activity during wrist and finger movements by means of multi-channel electromyography.

    PubMed

    Gazzoni, Marco; Celadon, Nicolò; Mastrapasqua, Davide; Paleari, Marco; Margaria, Valentina; Ariano, Paolo

    2014-01-01

    The study of hand and finger movement is an important topic with applications in prosthetics, rehabilitation, and ergonomics. Surface electromyography (sEMG) is the gold standard for the analysis of muscle activation. Previous studies investigated the optimal electrode number and positioning on the forearm to obtain information representative of muscle activation and robust to movements. However, the sEMG spatial distribution on the forearm during hand and finger movements and its changes due to different hand positions has never been quantified. The aim of this work is to quantify 1) the spatial localization of surface EMG activity of distinct forearm muscles during dynamic free movements of wrist and single fingers and 2) the effect of hand position on sEMG activity distribution. The subjects performed cyclic dynamic tasks involving the wrist and the fingers. The wrist tasks and the hand opening/closing task were performed with the hand in prone and neutral positions. A sensorized glove was used for kinematics recording. sEMG signals were acquired from the forearm muscles using a grid of 112 electrodes integrated into a stretchable textile sleeve. The areas of sEMG activity have been identified by a segmentation technique after a data dimensionality reduction step based on Non Negative Matrix Factorization applied to the EMG envelopes. The results show that 1) it is possible to identify distinct areas of sEMG activity on the forearm for different fingers; 2) hand position influences sEMG activity level and spatial distribution. This work gives new quantitative information about sEMG activity distribution on the forearm in healthy subjects and provides a basis for future works on the identification of optimal electrode configuration for sEMG based control of prostheses, exoskeletons, or orthoses. An example of use of this information for the optimization of the detection system for the estimation of joint kinematics from sEMG is reported. PMID:25289669
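
    The dimensionality-reduction step described above (Non-Negative Matrix Factorization of the EMG envelopes prior to segmentation) can be sketched as follows; the synthetic data, channel count and number of components are illustrative assumptions, not the study's settings:

```python
# NMF applied to multi-channel sEMG envelopes: spatial weights x activation time courses.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
n_channels, n_samples = 112, 2000
envelopes = np.abs(rng.normal(size=(n_channels, n_samples)))  # stand-in for rectified, low-pass filtered EMG

model = NMF(n_components=8, init="nndsvda", max_iter=500)
W = model.fit_transform(envelopes)   # spatial weights: channels x components
H = model.components_                # activation time courses: components x samples
# Areas of activity could then be segmented from the columns of W mapped onto the electrode grid.
print(W.shape, H.shape)
```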

  15. The inset shows the result of a calculation of the angular distribution of radiation of a single ... (ChemPhysChem 2003, 4, 792–808; DOI: 10.1002/cphc.200200565)

    E-print Network

    Enderlein, Jörg

    ... microscope. Every molecule shows up as a bilaterally symmetric figure, which is directly connected with its ... reflects the emission dipole orientations of the molecules ... in liquids and on surfaces under ambient conditions. The various techniques of SMS, such as confocal ...

  16. Quantifying higher-order correlations in a neuronal pool

    NASA Astrophysics Data System (ADS)

    Montangie, Lisandro; Montani, Fernando

    2015-03-01

    Recent experiments involving a relatively large population of neurons have shown a very significant amount of higher-order correlations. However, little is known of how these affect the integration and firing behavior of a population of neurons beyond the second order statistics. To investigate how higher-order input statistics can shape beyond-pairwise spike correlations and affect information coding in the brain, we consider a neuronal pool where each neuron fires stochastically. We develop a simple mathematically tractable model that makes it feasible to account for higher-order spike correlations in a neuronal pool with highly interconnected common inputs beyond second order statistics. In our model, correlations between neurons appear from q-Gaussian inputs into threshold neurons. The approach constitutes the natural extension of the Dichotomized Gaussian model, where the inputs to the model are just Gaussian distributed and therefore have no input interactions beyond second order. We obtain an exact analytical expression for the joint distribution of firing, quantifying the degree of higher-order spike correlations, truly emphasizing the functional aspects of higher-order statistics, as we account for beyond-second-order input correlations seen by each neuron within the pool. We determine how higher-order correlations depend on the interaction structure of the input, showing that the joint distribution of firing is skewed as the parameter q increases, inducing larger excursions of synchronized spikes. We show how input nonlinearities can shape higher-order correlations and enhance coding performance by neural populations.
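
    A minimal sketch of the baseline Dichotomized Gaussian model that the paper generalises: correlated Gaussian common inputs are thresholded to produce correlated binary spikes. Pool size, input correlation and threshold are illustrative choices; the paper's extension replaces the Gaussian inputs with q-Gaussian ones and derives the joint firing distribution analytically:

```python
# Dichotomized Gaussian model: threshold correlated Gaussian inputs to get binary spikes.
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_trials, rho, theta = 50, 10000, 0.2, 1.0

# Input covariance: 1 on the diagonal, rho off-diagonal (shared common input).
cov = np.full((n_neurons, n_neurons), rho) + (1 - rho) * np.eye(n_neurons)
inputs = rng.multivariate_normal(np.zeros(n_neurons), cov, size=n_trials)
spikes = (inputs > theta).astype(int)

pop_counts = spikes.sum(axis=1)   # synchronized-spike excursions show up in this distribution
print("mean firing rate:", spikes.mean(), "population count variance:", pop_counts.var())
```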

  17. Monitoring Microemboli During Cardiopulmonary Bypass with the EDAC® Quantifier

    PubMed Central

    Lynch, John E.; Wells, Christopher; Akers, Tom; Frantz, Paul; Garrett, Donna; Scott, M. Lance; Williamson, Lisa; Agnew, Barbara; Lynch, John K.

    2010-01-01

    Abstract: Gaseous emboli may be introduced into the bypass circuit both from the surgical field and during perfusionist interventions. While circuits provide good protection against massive air embolism, they do not remove gaseous microemboli (GME) from the bypass circuit. The purpose of this preliminary study is to assess the incidence of GME during bypass surgery and determine if increased GME counts were associated with specific events during bypass surgery. In 30 cases divided between 15 coronary artery bypass grafts and 15 valve repairs, GME were counted and sized at the three locations on the bypass circuit using the EDAC® Quantifier (Luna Innovations, Roanoke, VA). A mean of 45,276 GME were detected after the arterial line filter during these 30 cases, with significantly more detected (p = .04) post filter during valve cases (mean = 72,137 ± 22,113) than coronary artery bypass graft cases (mean = 18,416 ± 7831). GME detected post filter were significantly correlated in time with counts detected in the venous line (p < .001). Specific events associated with high counts included the initiation of cardiopulmonary bypass, heart manipulations, insertion and removal of clamps, and the administration of drugs. Global factors associated with increased counts post filter included higher venous line counts and higher post reservoir/bubble trap counts. The mean number of microemboli detected during bypass surgery was much higher than reported in other studies of emboli incidence, most likely due to the increased sensitivity of the EDAC® Quantifier compared to other detection modalities. The results furthermore suggest the need for further study of the clinical significance of these microemboli and what practices may be used to reduce GME incidence. Increased in vitro testing of the air handling capability of different circuit designs, along with more clinical studies assessing best clinical practices for reducing GME activity, is recommended. PMID:21114224

  18. Quantifying Selective Pressures Driving Bacterial Evolution Using Lineage Analysis

    NASA Astrophysics Data System (ADS)

    Lambert, Guillaume; Kussell, Edo

    2015-01-01

    Organisms use a variety of strategies to adapt to their environments and maximize long-term growth potential, but quantitative characterization of the benefits conferred by the use of such strategies, as well as their impact on the whole population's rate of growth, remains challenging. Here, we use a path-integral framework that describes how selection acts on lineages—i.e., the life histories of individuals and their ancestors—to demonstrate that lineage-based measurements can be used to quantify the selective pressures acting on a population. We apply this analysis to Escherichia coli bacteria exposed to cyclical treatments of carbenicillin, an antibiotic that interferes with cell-wall synthesis and affects cells in an age-dependent manner. While the extensive characterization of the life history of thousands of cells is necessary to accurately extract the age-dependent selective pressures caused by carbenicillin, the same measurement can be recapitulated using lineage-based statistics of a single surviving cell. Population-wide evolutionary pressures can be extracted from the properties of the surviving lineages within a population, providing an alternative and efficient procedure to quantify the evolutionary forces acting on a population. Importantly, this approach is not limited to age-dependent selection, and the framework can be generalized to detect signatures of other trait-specific selection using lineage-based measurements. Our results establish a powerful way to study the evolutionary dynamics of life under selection and may be broadly useful in elucidating selective pressures driving the emergence of antibiotic resistance and the evolution of survival strategies in biological systems.

  19. Quantifying selective pressures driving bacterial evolution using lineage analysis

    PubMed Central

    Lambert, Guillaume; Kussell, Edo

    2015-01-01

    Organisms use a variety of strategies to adapt to their environments and maximize long-term growth potential, but quantitative characterization of the benefits conferred by the use of such strategies, as well as their impact on the whole population’s rate of growth, remains challenging. Here, we use a path-integral framework that describes how selection acts on lineages –i.e. the life-histories of individuals and their ancestors– to demonstrate that lineage-based measurements can be used to quantify the selective pressures acting on a population. We apply this analysis to E. coli bacteria exposed to cyclical treatments of carbenicillin, an antibiotic that interferes with cell-wall synthesis and affects cells in an age-dependent manner. While the extensive characterization of the life-history of thousands of cells is necessary to accurately extract the age-dependent selective pressures caused by carbenicillin, the same measurement can be recapitulated using lineage-based statistics of a single surviving cell. Population-wide evolutionary pressures can be extracted from the properties of the surviving lineages within a population, providing an alternative and efficient procedure to quantify the evolutionary forces acting on a population. Importantly, this approach is not limited to age-dependent selection, and the framework can be generalized to detect signatures of other trait-specific selection using lineage-based measurements. Our results establish a powerful way to study the evolutionary dynamics of life under selection, and may be broadly useful in elucidating selective pressures driving the emergence of antibiotic resistance and the evolution of survival strategies in biological systems. PMID:26213639

  20. Quantifying the prevalence of frailty in English hospitals

    PubMed Central

    Soong, J; Poots, AJ; Scott, S; Donald, K; Woodcock, T; Lovett, D; Bell, D

    2015-01-01

    Objectives Population ageing has been associated with an increase in comorbid chronic disease, functional dependence, disability and associated higher health care costs. Frailty Syndromes have been proposed as a way to define this group within older persons. We explore whether frailty syndromes are a reliable methodology to quantify clinically significant frailty within hospital settings, and measure trends and geospatial variation using the English secondary care data set Hospital Episode Statistics (HES). Setting National English Secondary Care Administrative Data HES. Participants All 50,540,141 patient spells for patients over 65 years admitted to acute provider hospitals in England (January 2005-March 2013) within HES. Primary and secondary outcome measures We explore the prevalence of Frailty Syndromes as coded by the International Statistical Classification of Diseases, Injuries and Causes of Death (ICD-10) over time, and their geographic distribution across England. We examine national trends for admission spells, inpatient mortality and 30-day readmission. Results A rising trend of admission spells was noted from January 2005 to March 2013 (daily average admissions for the month rising from over 2000 to over 4000). The overall prevalence of coded frailty is increasing (64,559 spells in January 2005 to 150,085 spells by January 2013). The majority of patients had a single frailty syndrome coded (10.2% vs a total burden of 13.9%). Cognitive impairment and falls (including significant fracture) are the most common frailty syndromes coded within HES. Geographic variation in frailty burden was in keeping with the known distribution of prevalence of the English elderly population and the location of National Health Service (NHS) acute provider sites. Over time, in-hospital mortality has decreased (>65 years) whereas readmission rates have increased (especially >85 years). Conclusions This study provides a novel methodology to reliably quantify clinically significant frailty. Applications include evaluation of health service improvement over time, risk stratification and optimisation of services. PMID:26490097

  1. Shortcuts to Quantifier Interpretation in Children and Adults

    ERIC Educational Resources Information Center

    Brooks, Patricia J.; Sekerina, Irina

    2006-01-01

    Errors involving universal quantification are common in contexts depicting sets of individuals in partial, one-to-one correspondence. In this article, we explore whether quantifier-spreading errors are more common with distributive quantifiers each and every than with all. In Experiments 1 and 2, 96 children (5- to 9-year-olds) viewed pairs of…

  2. Visual Attention and Quantifier-Spreading in Heritage Russian Bilinguals

    ERIC Educational Resources Information Center

    Sekerina, Irina A.; Sauermann, Antje

    2015-01-01

    It is well established in language acquisition research that monolingual children and adult second language learners misinterpret sentences with the universal quantifier "every" and make quantifier-spreading errors that are attributed to a preference for a match in number between two sets of objects. The present Visual World eye-tracking…

  3. QUANTIFYING DEMOCRACY OF WAVELET BASES IN LORENTZ SPACES

    E-print Network

    Martell, José María

    QUANTIFYING DEMOCRACY OF WAVELET BASES IN LORENTZ SPACES. EUGENIO HERNÁNDEZ, JOSÉ MARÍA MARTELL. It is interesting to ask how far wavelet bases are from being democratic in L^{p,q}(R^d), p ≠ q. To quantify democracy

  4. Separation Logic with One Quantified Variable

    E-print Network

    Doyen, Laurent

    Separation Logic with One Quantified Variable. S. Demri, D. Galmiche, and D. Larchey-Wendling investigate first-order separation logic with one record field restricted to a unique quantified variable (1SL ... of fragments of first-order separation logic that can specify properties about the memory heap of programs

  5. Children's Knowledge of the Quantifier "Dou" in Mandarin Chinese

    ERIC Educational Resources Information Center

    Zhou, Peng; Crain, Stephen

    2011-01-01

    The quantifier "dou" (roughly corresponding to English "all") in Mandarin Chinese has been the topic of much discussion in the theoretical literature. This study investigated children's knowledge of this quantifier using a new methodological technique, which we dubbed the Question-Statement Task. Three questions were addressed: (i) whether young…

  6. LTL with the Freeze Quantifier and Register Automata

    E-print Network

    Doyen, Laurent

    LTL with the Freeze Quantifier and Register Automata. Stéphane Demri, LSV, CNRS & ENS Cachan. Temporal logics are extended with the freeze quantifier, first-order logics with predicates over the data ... of standard decision problems for LTL with the freeze quantifier (LTL ...), 2-variable first-order logic (FO2 ...

  7. FixBag: A Fixpoint Calculator for Quantified Bag Constraints

    E-print Network

    Chin, Wei Ngan

    FixBag: A Fixpoint Calculator for Quantified Bag Constraints. Tuan-Hung Pham, Minh-Thai Trinh ... range of programs, we have developed a tool to compute symbolic fixpoints for quantified bag domain invariants and method pre/post conditions via fixpoint analysis of recursive bag constraints. To support

  8. Quantifying the contributions to stratospheric ozone changes from ozone depleting substances

    E-print Network

    Wirosoetisno, Djoko

    ..., with the exception of the lower polar stratosphere where recovery of ozone in the second half of the 21st century ... Quantifying the contributions to stratospheric ozone changes from ozone depleting substances ... Reader, M. C. and Jonsson, A. I. (2010) Quantifying the contributions to stratospheric ozone changes from ...

  9. New methods to quantify the cracking performance of cementitious systems made with internal curing

    NASA Astrophysics Data System (ADS)

    Schlitter, John L.

    The use of high performance concretes that utilize low water-cement ratios has been promoted for use in infrastructure based on their potential to increase durability and service life because they are stronger and less porous. Unfortunately, these benefits are not always realized due to the susceptibility of high performance concrete to undergo early age cracking caused by shrinkage. This problem is widespread and affects federal, state, and local budgets that must pay to maintain or replace infrastructure deteriorated by cracking. As a result, methods to reduce or eliminate early age shrinkage cracking have been investigated. Internal curing is one such method in which a prewetted lightweight sand is incorporated into the concrete mixture to provide internal water as the concrete cures. This action can significantly reduce or eliminate shrinkage and in some cases causes a beneficial early age expansion. Standard laboratory tests have been developed to quantify the shrinkage cracking potential of concrete. Unfortunately, many of these tests may not be appropriate for use with internally cured mixtures and only provide limited amounts of information. Most standard tests are not designed to capture the expansive behavior of internally cured mixtures. This thesis describes the design and implementation of two new testing devices that overcome the limitations of current standards. The first device discussed in this thesis is called the dual ring. The dual ring is a testing device that quantifies the early age restrained shrinkage performance of cementitious mixtures. The design of the dual ring is based on the current ASTM C 1581-04 standard test which utilizes one steel ring to restrain a cementitious specimen. The dual ring overcomes two important limitations of the standard test. First, the standard single ring test cannot restrain the expansion that takes place at early ages, which is not representative of field conditions. The dual ring incorporates a second restraining ring which is located outside of the sample to provide restraint against expansion. Second, the standard ring test is a passive test that only relies on the autogenous and drying shrinkage of the mixture to induce cracking. The dual ring test can be an active test because it has the ability to vary the temperature of the specimen in order to induce thermal stress and produce cracking. This ability enables the study of the restrained cracking capacity as the mixture ages in order to quantify crack-sensitive periods of time. Measurements made with the dual ring quantify the benefits from using larger amounts of internal curing. Mixtures that resupplied internal curing water to match that of chemical shrinkage could sustain three times the magnitude of thermal change before cracking. The second device discussed in this thesis is a large scale slab testing device. This device tests the cracking potential of 15' long by 4" thick by 24" wide slab specimens in an environmentally controlled chamber. The current standard testing devices can be considered small scale and encounter problems when linking their results to the field due to size effects. Therefore, the large scale slab testing device was developed in order to calibrate the results of smaller scale tests to real world field conditions such as a pavement or bridge deck. Measurements made with the large scale testing device showed that the cracking propensity of the internally cured mixtures was reduced and that a significant benefit could be realized.

  10. Complexity and approximability of quantified and stochastic constraint satisfaction problems

    SciTech Connect

    Hunt, H. B.; Stearns, R. L.; Marathe, M. V.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S be an arbitrary finite set of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SAT_c(S)). Here, we study simultaneously the complexity of and the existence of efficient approximation algorithms for a number of variants of the problems SAT(S) and SAT_c(S), and for many different D, C, and S. These problem variants include decision and optimization problems for formulas, quantified formulas, and stochastically-quantified formulas. We denote these problems by Q-SAT(S), MAX-Q-SAT(S), S-SAT(S), MAX-S-SAT(S), MAX-NSF-Q-SAT(S) and MAX-NSF-S-SAT(S). The main contribution is the development of a unified predictive theory for characterizing the complexity of these problems. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Let k ≥ 2. Let S be a finite set of finite-arity relations on Σ_k with the following condition on S: all finite-arity relations on Σ_k can be represented as finite existentially-quantified conjunctions of relations in S applied to variables (to variables and constant symbols in C). Then we prove the following new results: (1) The problems SAT(S) and SAT_c(S) are both NQL-complete and ≤_{log n}^{bw}-complete for NP. (2) The problems Q-SAT(S) and Q-SAT_c(S) are PSPACE-complete. Letting k = 2, the problems S-SAT(S) and S-SAT_c(S) are PSPACE-complete. (3) There exists ε > 0 for which approximating the problem MAX-Q-SAT(S) within ε times optimum is PSPACE-hard. Letting k = 2, there exists ε > 0 for which approximating the problem MAX-S-SAT(S) within ε times optimum is PSPACE-hard. (4) For all ε > 0, the problems MAX-NSF-Q-SAT(S) and MAX-NSF-S-SAT(S) are PSPACE-hard to approximate within a factor of n^ε times optimum. These results significantly extend the earlier results by (i) Papadimitriou [Pa85] on the complexity of stochastic satisfiability and (ii) Condon, Feigenbaum, Lund and Shor [CF+93, CF+94], by identifying natural classes of PSPACE-hard optimization problems with provably PSPACE-hard ε-approximation problems. Moreover, most of our results hold not just for Boolean relations; most previous results were done only in the context of Boolean domains. The results also constitute a significant step towards obtaining dichotomy theorems for the problems MAX-S-SAT(S) and MAX-Q-SAT(S): a research area of recent interest [CF+93, CF+94, Cr95, KSW97, LMP99].

  11. Differential GPS measurements as a tool to quantify Late Cenozoic crustal deformation (Oman, Arabian Peninsula)

    NASA Astrophysics Data System (ADS)

    Rupprechter, M.; Roepert, A.; Hoffmann, G.

    2012-04-01

    The Sultanate of Oman is situated in the north-eastern part of the Arabian Plate. It therefore represents the leading edge as the plate is drifting north relative to the Eurasian Plate. The movement results in continent-continent collision in the northwest (Zagros fold and thrust belt) and ocean-continent collision in the northeast (Makran subduction zone). We follow the hypothesis that this plate tectonic setting results in an internal deformation of the Arabian Plate. The study presented here is part of a larger project that aims at quantifying the forcing factors of coastal evolution (Hoffmann et al. 2012). The sea level development, climate - and associated rates of weathering and sediment supply - and differential land movement (neotectonics) are identified as key factors during the Late Cenozoic. Recent vertical land movement is obvious and expressed in differences of the coastal morphology. Parts of the coastline are subsiding: these areas show drowned wadi mouths. Other parts are characterised by a straightened coastline, and raised wave-cut terraces are evident well above present mean sea-level. In addition to these erosional terraces, depositional terraces on alluvial fans are also encountered in close vicinity to the mountain chain. Detailed topographic profile measurements are carried out using a LEICA Viva GNSS-GS15 differential GPS. The instrument yields data with an accuracy of 1-2 cm relative to the base station. The profile measurements are orientated perpendicular to the coastline and therefore perpendicular to the raised wave-cut terraces. Up to 6 terraces are encountered at elevations up to 400 m above present sea level, with the older ones being the highest. The data allow calculating the scarp height, tread length and tread angle of the terraces. The results indicate that the terraces show an increased seaward tilting with age. This observation is interpreted as reflecting ongoing uplift. A coast-parallel deformation pattern becomes obvious when comparing parallel profiles. Profiles measured along depositional fluvial terraces also indicate a direct correlation between the age of the deposits and the dip angle of the surface. Further evidence for ongoing uplift is seen as the older fluvial terraces are situated further inland. Additional dating evidence is needed to quantify the uplift and to resolve the differential land movement in time and space.
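
    The terrace metrics mentioned above (tread length, relief and dip angle) follow from simple geometry on the surveyed profile points. A minimal sketch with invented coordinates:

```python
# Tread length, relief and dip angle from along-profile dGPS points (illustrative values).
import numpy as np

x = np.array([0.0, 12.0, 25.0, 38.0])          # hypothetical along-profile distance, m
z = np.array([212.40, 212.62, 212.85, 213.10]) # hypothetical tread elevations, m

tread_length = x[-1] - x[0]
relief = z[-1] - z[0]                          # elevation change across the tread
dip_deg = np.degrees(np.arctan2(relief, tread_length))
print(f"tread length = {tread_length:.1f} m, relief = {relief:.2f} m, dip = {dip_deg:.2f} deg")
```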

  12. Quantifying Product Line Benefits

    E-print Network

    Leite, Julio Cesar Sampaio do Prado

    ... for product-line engineering. From a business point of view, we should be able to identify at least five convincing arguments, based on economic analyses, for the use of product line engineering. A convincing business case should have the property that it shows how the use of product line engineering leads ...

  13. Quantifying Qualitative Data Using Cognitive Maps

    ERIC Educational Resources Information Center

    Scherp, Hans-Ake

    2013-01-01

    The aim of the article is to show how substantial qualitative material consisting of graphic cognitive maps can be analysed by using digital CmapTools, Excel and SPSS. Evidence is provided of how qualitative and quantitative methods can be combined in educational research by transforming qualitative data into quantitative data to facilitate…

  14. Advanced image processing methods as a tool to map and quantify different types of biological soil crust

    NASA Astrophysics Data System (ADS)

    Rodríguez-Caballero, Emilio; Escribano, Paula; Cantón, Yolanda

    2014-04-01

    Biological soil crusts (BSCs) modify numerous soil surface properties and affect many key ecosystem processes. As BSCs are considered one of the most important components of semiarid ecosystems, accurate characterisation of their spatial distribution is increasingly in demand. This paper describes a novel methodology for identifying the areas dominated by different types of BSCs and quantifying their relative cover at subpixel scale in a semiarid ecosystem of SE Spain. The approach consists of two consecutive steps: (i) First, Support Vector Machine (SVM) classification to identify the main ground units, dominated by homogenous surface cover (bare soil, cyanobacteria BSC, lichen BSC, green and dry vegetation), which are of strong ecological relevance. (ii) Spectral mixture analysis (SMA) of the ground units to quantify the proportion of each type of surface cover within each pixel, to correctly characterize the complex spatial heterogeneity inherent to semiarid ecosystems. SVM classification showed very good results, with a Kappa coefficient of 0.93, discriminating among areas dominated by bare soil, cyanobacteria BSC, lichen BSC, green and dry vegetation. Subpixel relative abundance images achieved relatively high accuracy for both types of BSCs (about 80%), whereas a general overestimation of vegetation was observed. Our results open the possibility of introducing the effect of presence and of relative cover of BSCs in spatially distributed hydrological and ecological models, and assessment and monitoring aimed at reducing degradation in these areas.
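
    The subpixel step (ii) amounts to linear spectral unmixing: each pixel spectrum is modelled as a non-negative, sum-to-one combination of endmember spectra. A minimal sketch with invented endmembers and a four-band pixel; the SVM step and the real imagery are not reproduced:

```python
# Linear spectral mixture analysis with non-negative least squares.
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (rows = bands, columns = endmembers):
# bare soil, cyanobacteria BSC, lichen BSC, vegetation.
endmembers = np.array([[0.30, 0.18, 0.22, 0.05],
                       [0.35, 0.20, 0.25, 0.08],
                       [0.40, 0.24, 0.30, 0.35],
                       [0.45, 0.28, 0.33, 0.40]])

pixel = np.array([0.27, 0.31, 0.36, 0.40])   # hypothetical mixed-pixel spectrum

abundances, _ = nnls(endmembers, pixel)
abundances /= abundances.sum()               # enforce the sum-to-one constraint
print(dict(zip(["soil", "cyano_bsc", "lichen_bsc", "veg"], abundances.round(3))))
```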

  15. Quantifying human response capabilities towards tsunami threats at community level

    NASA Astrophysics Data System (ADS)

    Post, J.; Mück, M.; Zosseder, K.; Wegscheider, S.; Taubenböck, H.; Strunz, G.; Muhari, A.; Anwar, H. Z.; Birkmann, J.; Gebert, N.

    2009-04-01

    Decision makers at the community level need detailed information on tsunami risks in their area. Knowledge on potential hazard impact, exposed elements such as people, critical facilities and lifelines, people's coping capacity and recovery potential are crucial to plan precautionary measures for adaptation and to mitigate potential impacts of tsunamis on society and the environment. A crucial point within a people-centred tsunami risk assessment is to quantify the human response capabilities towards tsunami threats. Based on this quantification and its spatial representation in maps, tsunami affected and safe areas, difficult-to-evacuate areas, evacuation target points and evacuation routes can be assigned and used as an important contribution to e.g. community level evacuation planning. A major component in the quantification of human response capabilities towards tsunami impacts is the factor time. The human response capabilities depend on the estimated time of arrival (ETA) of a tsunami, the time until technical or natural warning signs (ToNW) can be received, the reaction time (RT) of the population (human understanding of a tsunami warning and the decision to take appropriate action), the evacuation time (ET, the time people need to reach a safe area) and the actual available response time (RsT = ETA - ToNW - RT). If RsT is larger than ET, people in the respective areas are able to reach a safe area and rescue themselves. Critical areas possess RsT values equal to or even smaller than ET, and hence people within these areas will be directly affected by a tsunami. Quantifying the factor time is challenging, and an attempt to do so is presented here. The ETA can be derived by analyzing pre-computed tsunami scenarios for a respective area. For ToNW we assume that the early warning center is able to fulfil the Indonesian presidential decree to issue a warning within 5 minutes. RT is difficult to quantify, as intrinsic human factors such as educational level, beliefs, tsunami knowledge and experience, among others, play a role. An attempt to quantify this variable under high uncertainty is also presented. Quantifying ET is based on GIS modelling using a Cost Weighted Distance approach. The basic principle is to define the best evacuation path from a given point to the next safe area (shelter location). Here the fastest path from that point to the shelter location has to be found. Thereby the impact of land cover, slope, population density, and population age and gender distribution is taken into account, as literature studies show these factors to be highly important. Knowing the fastest path and the distance to the next safe area, together with a spatially distributed pattern of evacuation speed, delivers the time needed from each location to a safe area. By then considering the obtained value of RsT, the coverage area of an evacuation target point (safe area) can be assigned. Incorporating knowledge of the capacity of an evacuation target point, the respective coverage area is refined. Hence areas with weak, moderate and good human response capabilities can be detected. This allows calculation of the potential number of people affected (dead or injured) and the number of people dislocated. First results for Kuta (Bali) for a worst-case tsunami event give approximately 25,000 people affected when RT = 0 minutes (direct evacuation on receiving a tsunami warning) and 120,000 when RT > ETA (no evacuation action until the tsunami hits the land). Additionally, the fastest evacuation routes to the evacuation target points can be assigned. Areas with weak response capabilities can be assigned as priority areas in which to install, e.g., additional evacuation target points or to increase tsunami knowledge and awareness to promote a faster reaction time. In particular, analyzing the underlying socio-economic properties causing deficiencies in responding to a tsunami threat can yield valuable information and direct the planning of adaptation measures. Keywords: Community level, Risk and vulnerability assessment, Early warning, Disaster management, Tsunami, Indonesia
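
    The time-budget logic above (compare RsT = ETA - ToNW - RT with the modelled evacuation time ET) can be sketched directly; all numbers below are illustrative assumptions, not values from the Kuta case study:

```python
# Flag areas where the available response time is too short for evacuation.
import numpy as np

eta  = np.array([35.0, 25.0, 20.0])   # estimated tsunami arrival time per area, minutes
tonw = 5.0                             # time until a warning can be received, minutes
rt   = 10.0                            # assumed human reaction time, minutes
et   = np.array([12.0, 15.0, 18.0])   # modelled evacuation time to a safe area, minutes

rst = eta - tonw - rt                  # available response time per area
critical = rst <= et                   # True where people cannot reach a safe area in time
for i, flag in enumerate(critical):
    print(f"area {i}: RsT = {rst[i]:.0f} min, ET = {et[i]:.0f} min ->",
          "critical" if flag else "evacuable")
```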

  16. The Language of Show Biz: A Dictionary.

    ERIC Educational Resources Information Center

    Sergel, Sherman Louis, Ed.

    This dictionary of the language of show biz provides the layman with definitions and essays on terms and expressions often used in show business. The overall pattern of selection was intended to be more rather than less inclusive, though radio, television, and film terms were deliberately omitted. Lengthy explanations are sometimes used to express…

  17. Incident Response Planning for Selected Livestock Shows 

    E-print Network

    Tomascik, Chelsea Roxanne

    2012-02-14

    Incident Response Planning for Selected Livestock Shows. A thesis by Chelsea Roxanne Tomascik, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, December 2011. Major subject: Agricultural Leadership, Education and Communications. Copyright 2011 Chelsea Roxanne Tomascik.

  18. Salton Sea Satellite Image Showing Fault Slip

    USGS Multimedia Gallery

    Landsat satellite image (LE70390372003084EDC00) showing location of surface slip triggered along faults in the greater Salton Trough area. Red bars show the generalized location of 2010 surface slip along faults in the central Salton Trough and many additional faults in the southwestern section of t...

  19. 2015 4-H State Food Show Guidelines

    E-print Network

    2015 4-H State Food Show Guidelines: Bringing Texas to the Table. Educational programs of the Texas A&M AgriLife Extension Service are open to all people without regard to race, color, sex, religion ... Shawnte Clawson, MS. Subject: 2015 4-H State Food Show Guidelines, being transmitted to you this year via e...

  20. The Physics of Equestrian Show Jumping

    ERIC Educational Resources Information Center

    Stinner, Art

    2014-01-01

    This article discusses the kinematics and dynamics of equestrian show jumping. For some time I have attended a series of show jumping events at Spruce Meadows, an international equestrian center near Calgary, Alberta, often referred to as the "Wimbledon of equestrian jumping." I have always had a desire to write an article such as this…

  1. 47 CFR 90.505 - Showing required.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Showing required. 90.505 Section 90.505 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES PRIVATE LAND MOBILE RADIO SERVICES Developmental Operation § 90.505 Showing required. (a) Except as provided in paragraph (b) of this section, each...

  2. 47 CFR 90.505 - Showing required.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false Showing required. 90.505 Section 90.505 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES PRIVATE LAND MOBILE RADIO SERVICES Developmental Operation § 90.505 Showing required. (a) Except as provided in paragraph (b) of this section, each...

  3. Quantifying uncertainty in material damage from vibrational data

    NASA Astrophysics Data System (ADS)

    Butler, T.; Huhtala, A.; Juntunen, M.

    2015-02-01

    The response of a vibrating beam to a force depends on many physical parameters including those determined by material properties. Damage caused by fatigue or cracks results in local reductions in stiffness parameters and may drastically alter the response of the beam. Data obtained from the vibrating beam are often subject to uncertainties and/or errors typically modeled using probability densities. The goal of this paper is to estimate and quantify the uncertainty in damage modeled as a local reduction in stiffness using uncertain data. We present various frameworks and methods for solving this parameter determination problem. We also describe a mathematical analysis to determine and compute useful output data for each method. We apply the various methods in a specified sequence that allows us to interface the various inputs and outputs of these methods in order to enhance the inferences drawn from the numerical results obtained from each method. Numerical results are presented using both simulated and experimentally obtained data from physically damaged beams.

  4. Quantifying Spatial Genetic Structuring in Mesophotic Populations of the Precious Coral Corallium rubrum

    PubMed Central

    Costantini, Federica; Carlesi, Lorenzo; Abbiati, Marco

    2013-01-01

    While shallow water red coral populations have been overharvested in the past, nowadays commercial harvesting has shifted its pressure onto mesophotic organisms. An understanding of red coral population structure, particularly larval dispersal patterns and connectivity among harvested populations, is paramount to the viability of the species. In order to determine patterns of genetic spatial structuring of deep water Corallium rubrum populations, for the first time, colonies found between 58–118 m depth within the Tyrrhenian Sea were collected and analyzed. Ten microsatellite loci and two regions of mitochondrial DNA (mtMSH and mtC) were used to quantify patterns of genetic diversity within populations and to define population structuring at spatial scales from tens of metres to hundreds of kilometres. Microsatellites showed heterozygote deficiencies in all populations. Significant levels of genetic differentiation were observed at all investigated spatial scales, suggesting that populations are likely to be isolated. This differentiation may be the result of biological interactions occurring within a small spatial scale and/or abiotic factors acting at a larger scale. Mitochondrial markers revealed significant genetic structuring at spatial scales greater than 100 km, showing the occurrence of a barrier to gene flow between northern and southern Tyrrhenian populations. These findings provide support for the establishment of marine protected areas in the deep sea and off-shore reefs, in order to effectively maintain genetic diversity of mesophotic red coral populations. PMID:23646109

  5. Quantifying spatial genetic structuring in mesophotic populations of the precious coral Corallium rubrum.

    PubMed

    Costantini, Federica; Carlesi, Lorenzo; Abbiati, Marco

    2013-01-01

    While shallow water red coral populations have been overharvested in the past, nowadays commercial harvesting has shifted its pressure onto mesophotic organisms. An understanding of red coral population structure, particularly larval dispersal patterns and connectivity among harvested populations, is paramount to the viability of the species. In order to determine patterns of genetic spatial structuring of deep water Corallium rubrum populations, for the first time, colonies found between 58-118 m depth within the Tyrrhenian Sea were collected and analyzed. Ten microsatellite loci and two regions of mitochondrial DNA (mtMSH and mtC) were used to quantify patterns of genetic diversity within populations and to define population structuring at spatial scales from tens of metres to hundreds of kilometres. Microsatellites showed heterozygote deficiencies in all populations. Significant levels of genetic differentiation were observed at all investigated spatial scales, suggesting that populations are likely to be isolated. This differentiation may be the result of biological interactions occurring within a small spatial scale and/or abiotic factors acting at a larger scale. Mitochondrial markers revealed significant genetic structuring at spatial scales greater than 100 km, showing the occurrence of a barrier to gene flow between northern and southern Tyrrhenian populations. These findings provide support for the establishment of marine protected areas in the deep sea and off-shore reefs, in order to effectively maintain genetic diversity of mesophotic red coral populations. PMID:23646109

  6. Identifying and quantifying the stromal fibrosis in muscularis propria of colorectal carcinoma by multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Chen, Sijia; Yang, Yinghong; Jiang, Weizhong; Feng, Changyin; Chen, Zhifen; Zhuo, Shuangmu; Zhu, Xiaoqin; Guan, Guoxian; Chen, Jianxin

    2014-10-01

    The examination of stromal fibrosis within colorectal cancer is overlooked, not only because routine pathological examinations tend to focus more on tumour staging and precise surgical margins, but also because of the lack of efficient diagnostic methods. Multiphoton microscopy (MPM) can be used to study the muscularis stroma of normal and colorectal carcinoma tissue at the molecular level. In this work, we attempt to show the feasibility of MPM for discerning the microstructure of the normal human rectal muscle layer and of fibrotic colorectal carcinoma tissue. Three types of muscularis propria stromal fibrosis beneath the colorectal cancer infiltration were first observed through the MPM imaging system, which provides intercellular microstructural details in fresh, unstained tissue samples. Our approach also presents the capability of quantifying the extent of stromal fibrosis from both the amount and the orientation of collagen, which may further characterize the severity of fibrosis. Comparison with the pathology analysis shows that MPM has potential advantages as a histological tool for detecting stromal fibrosis and collecting prognostic evidence, which may guide subsequent therapy procedures towards a good prognosis for patients.

  7. QUANTIFYING DIFFUSIVE MASS TRANSFER IN FRACTURED SHALE BEDROCK

    EPA Science Inventory

    A significant limitation in defining remediation needs at contaminated sites often results from an insufficient understanding of the transport processes that control contaminant migration. The objectives of this research were to help resolve this dilemma by providing an improved...

  8. Incorporating both physical and kinetic limitations in quantifying dissolved oxygen flux to aquatic sediments

    USGS Publications Warehouse

    O'Connor, B.L.; Hondzo, Miki; Harvey, J.W.

    2009-01-01

    Traditionally, dissolved oxygen (DO) fluxes have been calculated using the thin-film theory with DO microstructure data in systems characterized by fine sediments and low velocities. However, recent experimental evidence of fluctuating DO concentrations near the sediment-water interface suggests that turbulence and coherent motions control the mass transfer, and the surface renewal theory gives a more mechanistic model for quantifying fluxes. Both models involve quantifying the mass transfer coefficient (k) and the relevant concentration difference (ΔC). This study compared several empirical models for quantifying k based on both thin-film and surface renewal theories, as well as presents a new method for quantifying ΔC (dynamic approach) that is consistent with the observed DO concentration fluctuations near the interface. Data were used from a series of flume experiments that includes both physical and kinetic uptake limitations of the flux. Results indicated that methods for quantifying k and ΔC using the surface renewal theory better estimated the DO flux across a range of fluid-flow conditions. © 2009 ASCE.
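
    Both model families reduce to a flux of the form J = k · ΔC; they differ in how k is estimated. The sketch below uses a Higbie-type penetration/surface-renewal scaling, k ≈ 2·sqrt(D/(π·t_r)), with an assumed renewal time; the values are illustrative, not the flume data:

```python
# Dissolved-oxygen flux as mass transfer coefficient times concentration difference.
import math

D   = 2.1e-9   # molecular diffusivity of O2 in water, m^2/s
t_r = 5.0      # assumed surface renewal time, s
k   = 2.0 * math.sqrt(D / (math.pi * t_r))     # mass transfer coefficient, m/s

c_bulk, c_interface = 8.0, 2.0                 # dissolved oxygen, g/m^3 (i.e., mg/L)
flux = k * (c_bulk - c_interface)              # g O2 per m^2 per s
print(f"k = {k:.2e} m/s, flux = {flux * 86400:.2f} g O2 m^-2 d^-1")
```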

  9. The likelihood of achieving quantified road safety targets: a binary logistic regression model for possible factors.

    PubMed

    Sze, N N; Wong, S C; Lee, C Y

    2014-12-01

    In the past several decades, many countries have set quantified road safety targets to motivate transport authorities to develop systematic road safety strategies and measures and to facilitate the achievement of continuous road safety improvement. Studies have been conducted to evaluate the association between the setting of quantified road safety targets and road fatality reduction, in both the short and long run, by comparing road fatalities before and after the implementation of a quantified road safety target. However, not much work has been done to evaluate whether the quantified road safety targets are actually achieved. In this study, we used a binary logistic regression model to examine the factors - including vehicle ownership, fatality rate, and national income, in addition to level of ambition and duration of target - that contribute to a target's success. We analyzed 55 quantified road safety targets set by 29 countries from 1981 to 2009, and the results indicate that targets that are still in progress and those with lower levels of ambition had a higher likelihood of eventually being achieved. Moreover, possible interaction effects on the association between level of ambition and the likelihood of success are also revealed. PMID:25255417
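
    A minimal sketch of the kind of model described, a binary logistic regression of target success on candidate predictors. The predictor names follow the abstract, but the data values and the use of scikit-learn are our own illustrative choices, not the study's data or code.

```python
# Hedged sketch: binary logistic regression of whether a quantified road
# safety target was achieved (1) or not (0) on candidate factors named in
# the abstract. The data below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: level_of_ambition, duration_years, fatality_rate_per_100k
X = np.array([
    [0.2,  5,  7.1], [0.6, 10,  9.4], [0.5,  8,  6.2], [0.1,  5,  5.8],
    [0.7, 12, 11.0], [0.3,  6,  8.3], [0.6,  9,  6.9], [0.8, 15, 12.5],
    [0.3,  7,  7.7], [0.4, 10,  9.1],
])
y = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # 1 = target achieved

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_[0])          # sign indicates direction of effect
print("P(achieved):", model.predict_proba(X)[:, 1].round(2))
```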

  10. "The Cosby Show": The View from the Black Middle Class.

    ERIC Educational Resources Information Center

    Inniss, Leslie B.; Feagin, Joe R.

    1995-01-01

    Examines the black middle-class response to "The Cosby Show." The study asked about the portrayal of blacks in the media, but did not specifically ask about "The Cosby Show." Results from 100 respondents revealed two significant aspects: that the show renders black problems as irrelevant and that it fosters hope and optimism that the black…

  11. Quantifying commuter exposures to volatile organic compounds

    NASA Astrophysics Data System (ADS)

    Kayne, Ashleigh

    Motor vehicles can be a predominant source of air pollution in cities. Traffic-related air pollution is often unavoidable for people who live in populous areas. Commuters may have high exposures to traffic-related air pollution as they are close to vehicle tailpipes. Volatile organic compounds (VOCs) are one class of air pollutants of concern because exposure to VOCs carries risk for adverse health effects. Specific VOCs of interest for this work include benzene, toluene, ethylbenzene, and xylenes (BTEX), which are often found in gasoline and combustion products. Although methods exist to measure time-integrated personal exposures to BTEX, there are few practical methods to measure a commuter's time-resolved BTEX exposure, which could identify peak exposures that would be concealed by a time-integrated measurement. This study evaluated the ability of a photoionization detector (PID) to measure commuters' exposure to BTEX using Tenax TA samples as a reference, and quantified the difference in BTEX exposure between cyclists and drivers with windows open and closed. To determine the suitability of the two measurement methods (PID and Tenax TA) for use in this study, the precision, linearity, and limits of detection (LODs) of both methods were determined in the laboratory with standard BTEX calibration gases. Volunteers commuted from their homes to their workplaces by cycling or driving while wearing a personal exposure backpack containing a collocated PID and Tenax TA sampler. Volunteers completed a survey and indicated whether the windows in their vehicle were open or closed. Comparing pairs of exposure data from the Tenax TA and PID sampling methods determined the suitability of the PID to measure the BTEX exposures of commuters. The difference between BTEX exposures of cyclists and drivers with windows open and closed in Fort Collins was determined. Both the PID and Tenax TA measurement methods were precise and linear when evaluated in the laboratory using standard BTEX gases. The LODs for the Tenax TA sampling tubes (determined with a sample volume of 1,000 standard cubic centimeters, which is close to the approximate commuter sample volumes collected) were orders of magnitude lower (0.04 to 0.7 parts per billion (ppb) for individual compounds of BTEX) than the PIDs' LODs (9.3 to 15 ppb of a BTEX mixture), which makes the Tenax TA sampling method more suitable for measuring BTEX concentrations in the sub-ppb range. PID and Tenax TA data for commuter exposures were inversely related. The concentrations of VOCs measured by the PID were substantially higher than the BTEX concentrations measured by collocated Tenax TA samplers. The inverse trend and the large difference in magnitude between PID responses and Tenax TA BTEX measurements indicate that the two methods may have been measuring different air pollutants that are negatively correlated. Drivers in Fort Collins, Colorado with closed windows experienced greater time-weighted average BTEX exposures than cyclists (p = 0.04). Commuter BTEX exposures measured in Fort Collins were lower than commuter exposures measured in prior studies that occurred in larger cities (Boston and Copenhagen). Although route and intake may affect a commuter's BTEX dose, these variables are outside the scope of this study.
Within the limitations of this study (including the small sample size, the small representative area of Fort Collins, and respiration rates not being taken into account), it appears that health risks associated with traffic-induced BTEX exposures may be reduced by commuting by bicycle instead of driving with windows closed, and by living in a less populous area with less vehicle traffic. Although the PID did not reliably measure low-level commuter BTEX exposures, the Tenax TA sampling method did. The PID measured BTEX concentrations reliably in a controlled environment, at high concentrations (300-800 ppb), and in the absence of other air pollutants. In environments where there could be multiple chemicals present that may produce a PID signal (such a

  12. Quantifying the Chemical Weathering Efficiency of Basaltic Catchments

    NASA Astrophysics Data System (ADS)

    Ibarra, D. E.; Caves, J. K.; Thomas, D.; Chamberlain, C. P.; Maher, K.

    2014-12-01

    The geographic distribution and areal extent of rock type, along with the hydrologic cycle, influence the efficiency of global silicate weathering. Here we define weathering efficiency as the production of HCO3- for a given land surface area. Modern basaltic catchments located on volcanic arcs and continental flood basalts are particularly efficient, as they account for <5% of sub-aerial bedrock but produce ~30% of the modern global weathering flux. Indeed, changes in this weathering efficiency are thought to play an important role in modulating Earth's past climate via changes in the areal extent and paleo-latitude of basaltic catchments (e.g., Deccan and Ethiopian Traps, southeast Asia basaltic terranes). We analyze paired river discharge and solute concentration data for basaltic catchments from both literature studies and the USGS NWIS database to mechanistically understand geographic and climatic influences on weathering efficiency. To quantify the chemical weathering efficiency of modern basalt catchments we use solute production equations and compare the results to global river datasets. The weathering efficiency, quantified via the Damköhler coefficient (Dw [m/yr]), is calculated from fitting concentration-discharge relationships for catchments with paired solute and discharge measurements. Most basalt catchments do not demonstrate 'chemostatic' behavior. The distribution of basalt catchment Dw values (0.194 ± 0.176, 1σ), derived using SiO2(aq) concentrations, is significantly higher than global river Dw values (mean Dw of 0.036), indicating a greater chemical weathering efficiency. Despite high Dw values and total weathering fluxes per unit area, many basaltic catchments are producing near their predicted weathering flux limit. Thus, weathering fluxes from basaltic catchments are proportionally less responsive to increases in runoff than other lithologies. The results of other solute species (Mg2+ and Ca2+) are comparable, but are influenced both by the stoichiometry of local primary minerals and secondary clays. Our results provide a framework to interpret how small changes in the areal extent or geographic distribution of basaltic catchments may markedly influence the silicate weathering feedback.
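
    A minimal sketch of the fitting step described above, assuming a simple hyperbolic concentration-discharge form C(q) = C_eq / (1 + q/Dw); the functional form is our assumption for illustration, and the data values are invented rather than taken from the compiled catchments.

```python
# Hedged sketch: fit a hyperbolic concentration-discharge relation to paired
# solute and runoff data to recover a Damkohler coefficient Dw [m/yr].
# Functional form and numbers are illustrative assumptions only.
import numpy as np
from scipy.optimize import curve_fit

def c_of_q(q, c_eq, dw):
    """Concentration as a function of runoff q for equilibrium conc. c_eq."""
    return c_eq / (1.0 + q / dw)

q_obs = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])    # runoff, m/yr (placeholder)
c_obs = np.array([11.8, 11.2, 10.5, 8.6, 6.9, 4.8])  # SiO2(aq), mg/L (placeholder)

(c_eq_fit, dw_fit), _ = curve_fit(c_of_q, q_obs, c_obs, p0=[12.0, 0.5])
print(f"C_eq = {c_eq_fit:.1f} mg/L, Dw = {dw_fit:.2f} m/yr")
```

    In this form, a larger fitted Dw keeps concentrations near C_eq over a wider range of runoff, which is the sense in which a higher Dw indicates greater weathering efficiency.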

  13. Quantifying the Connectivity of a Semantic Warehouse

    E-print Network

    Tzitzikas, Yannis

    ... such a semantic warehouse. We focus on the aspects of quality and value of the warehouse, and for this reason we ... warehouse. The results are very promising: the proposed metrics-based matrixes allow someone to get ...

  14. Quantifying supercoiling-induced denaturation bubbles in DNA

    E-print Network

    Potsdam, Universität

    ... single DNA plasmid imaging. We demonstrate that long-lived single-stranded denaturation bubbles exist ... and temperature conditions. The results presented herein underline the important role of denaturation bubbles ...

  15. Parkinson's Drug Shows Promise Against Macular Degeneration

    MedlinePLUS

    THURSDAY, Nov. 12, 2015 (HealthDay News) -- A common Parkinson's disease medication might hold potential for preventing or ...

  16. Map showing predicted habitat potential for tortoise

    USGS Multimedia Gallery

    This map shows the spatial representation of the predicted habitat potential index values for desert tortoise in the Mojave and parts of the Sonoran Deserts of California, Nevada, Utah, and Arizona. Map: USGS. ...

  17. Glial fibrillary acidic protein (GFAP) shows circadian oscillations in crayfish Procambarus clarkii putative pacemakers.

    PubMed

    Rodríguez-Muñoz, María de la Paz; Escamilla-Chimal, Elsa G

    2015-10-01

    Although several studies of glia have examined glial fibrillary acidic protein (GFAP) and its relationship to the circadian rhythms of different organisms, they have not explored the daily GFAP oscillations in the putative pacemakers of the crayfish Procambarus clarkii or in other crustaceans. In this study we investigated the daily variations in GFAP concentrations in the eyestalk and brain, which are considered to be putative pacemakers in adult P. clarkii. In both structures, GFAP was quantified using an indirect enzyme-linked immunosorbent assay (ELISA), and double labeling immunofluorescence was used to detect it and its co-localization with the protein Period (PER), an important component of the circadian clock, in various regions of both structures. The ELISA results were analyzed using Cosinor and one-way ANOVA with Bonferroni and Scheffé's post hoc tests. This analysis showed that GFAP levels present circadian oscillations in both structures. Moreover, GFAP was localized in different structures of the eyestalk and brain; however, co-localization with PER occurred only in the lamina ganglionaris, specifically in the cartridges of the eyestalk and in some of the cluster 9 brain cells. These results suggest that, as in other invertebrates and vertebrates, glial cells could be involved in the circadian system of P. clarkii; however, it is not yet known whether the glial cells are only effectors, participate in afferent pathways, or are part of the circadian clock. PMID:26362020

  18. Quantifiable effectiveness of experimental scaling of river- and delta morphodynamics and stratigraphy

    NASA Astrophysics Data System (ADS)

    Kleinhans, Maarten G.; van Dijk, Wout M.; van de Lageweg, Wietse I.; Hoyal, David C. J. D.; Markies, Henk; van Maarseveen, Marcel; Roosendaal, Chris; van Weesep, Wendell; van Breemen, Dimitri; Hoendervoogt, Remko; Cheshier, Nathan

    2014-06-01

    Laboratory experiments to simulate landscapes and stratigraphy often suffer from scale effects, because reducing length- and time scales leads to different behaviour of water and sediment. Classically, scaling proceeded from dimensional analysis of the equations of motion and sediment transport, and minor concessions, such as vertical length scale distortion, led to acceptable results. In the past decade many experiments were done that seriously violated these scaling rules, but nevertheless produced significant and insightful results that resemble the real world in quantifiable ways. Here we focus on self-formed fluvial channels and channel patterns in experiments. The objectives of this paper are 1) to identify what aspects of scaling considerations are most important for experiments that simulate morphodynamics and stratigraphy of rivers and deltas, and 2) to establish a design strategy for experiments based on a combination of relaxed classical scale rules, theory of bars and meanders, and small-scale experiments focused on specific processes. We present a number of small laboratory setups and protocols that we use to rapidly quantify erosional and depositional types of forms and dynamics that develop in the landscape experiments as a function of detailed properties, such as effective material strength, and to assess potential scale effects. Most importantly, the width-to-depth ratio of channels determines the bar pattern and meandering tendency. The strength of floodplain material determines these channel dimensions, and theory predicts that laboratory rivers should have 1.5 times larger width-to-depth ratios for the same bar pattern. We show how floodplain formation can be controlled by adding silt-sized silica flour, bentonite, Medicago sativa (alfalfa) or Partially Hydrolyzed PolyAcrylamide (a synthetic polymer) to poorly sorted sediment. The experiments demonstrate that there is a narrow range of conditions between no mobility of bed or banks and too much mobility. The density of vegetation and the volume proportion of silt allow well-controllable channel dimensions, whereas the polymer proved difficult to control. The theory, detailed methods of quantification, and experimental setups presented here show that the rivers and deltas created in the laboratory seem to behave as natural rivers when the experimental conditions adhere to the relaxed scaling rules identified herein, and that the required types of fluvio-deltaic morphodynamics can be reproduced based on conditions and sediments selected on the basis of a series of small-scale experiments.

  19. Spreadsheet software to assess locomotor disability to quantify permanent physical impairment

    PubMed Central

    Ellur, Sunderraj

    2012-01-01

    Context: Assessment of physical disability is an important duty of a plastic surgeon, especially for those of us who are in institutional practice. Aim: The Gazette of India notification gives a guideline regarding the assessment of disability. However, the calculations as per the guidelines are time-consuming. In this article, a spreadsheet program based on the notification is presented. The aim of this article is to design a spreadsheet program which is simple, reproducible, user-friendly, less time-consuming and accurate. Materials and Methods: The spreadsheet program was designed using Microsoft Excel, on the basis of the guidelines in the Gazette of India Notification regarding the assessment of Locomotor Disability to Quantify Permanent Physical Impairment. Two representative examples are presented to help understand the application of this program. Results: Two spreadsheet programs, one for the upper limb and another for the lower limb, are presented. The representative examples show that the program's results match those of the traditional method of calculation. Conclusion: A simple spreadsheet program can be designed to assess disability as per the Gazette of India Notification. This program is easy to use and accurate. PMID:23450138

  20. Quantifying movement intentions with multimodal neuroimaging for functional electrical stimulation-based rehabilitation.

    PubMed

    Lee, Min-Ho; Kim, Bum-Joo; Lee, Seong-Whan

    2016-01-20

    Functional electrical stimulation (FES) is a common rehabilitation method for recovering paralyzed muscle by means of sequential electrical stimulation. Reports indicate that active participation by the patient, as opposed to simple stimulation, leads to improved recovery when using FES and other rehabilitation techniques. In this paper, we investigate the neurophysiological effect of the participant's intention during an FES rehabilitation task. To observe the difference in brain signals between intentional and involuntary movement during FES, electroencephalography and near-infrared spectroscopy were simultaneously measured over the motor cortex area. The results showed that the presence of intention significantly affects brain activation in both hemodynamic responses (near-infrared spectroscopy) and electrical patterns (electroencephalography), and the accuracy of classification between passive and active mental states during FES was 85.3%. Our results imply that motivation, or active participation, could be quantified during rehabilitation, which has not previously been considered a measurable quantity in the rehabilitation field. PMID:26656935

  1. Quantifying the Impact of Dust on Heterogeneous Ice Generation in Midlevel Supercooled Stratiform Clouds

    SciTech Connect

    Zhang, Damao; Wang, Zhien; Heymsfield, Andrew J.; Fan, Jiwen; Liu, Dong; Zhao, Ming

    2012-09-26

    Dust aerosols have been regarded as effective ice nuclei (IN), but large uncertainties regarding their efficiencies remain. Here, four years of collocated CALIPSO and CloudSat measurements are used to quantify the impact of dust on heterogeneous ice generation in midlevel supercooled stratiform clouds (MSSCs) over the ‘dust belt’. The results show that the dusty MSSCs have an up to 20% higher mixed-phase cloud occurrence, up to 8 dBZ higher mean maximum Ze (Ze_max), and up to 11.5 g/m2 higher ice water path (IWP) than similar MSSCs under background aerosol conditions. Assuming similar ice growth and fallout history in similar MSSCs, the significant differences in Ze_max between dusty and non-dusty MSSCs reflect ice particle number concentration differences. Therefore, observed Ze_max differences indicate that dust could enhance ice particle concentration in MSSCs by a factor of 2 to 6 at temperatures colder than −12°C. The enhancements are strongly dependent on cloud top temperature, large dust particle concentration and chemical composition. These results imply an important role of dust particles in modifying mixed-phase cloud properties globally.
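
    A back-of-the-envelope check on the quoted enhancement factor, under the abstract's own assumption that similar MSSCs share similar ice growth and fallout histories so that reflectivity scales roughly linearly with ice number concentration: since dBZ is 10 log10 of reflectivity, a difference of ΔZe gives N_dusty / N_background ≈ 10^(ΔZe/10), and the stated 8 dBZ difference corresponds to 10^0.8 ≈ 6.3, consistent with the reported factor of 2 to 6.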

  2. Technical Note: Mesocosm approach to quantify dissolved inorganic carbon percolation fluxes

    NASA Astrophysics Data System (ADS)

    Thaysen, E. M.; Jessen, S.; Ambus, P.; Beier, C.; Postma, D.; Jakobsen, I.

    2014-02-01

    Dissolved inorganic carbon (DIC) fluxes across the vadose zone are influenced by a complex interplay of biological, chemical and physical factors. A novel soil mesocosm system was evaluated as a tool for providing information on the mechanisms behind DIC percolation to the groundwater from unplanted soil. Carbon dioxide partial pressure (pCO2), alkalinity, soil moisture and temperature were measured with depth and time, and DIC in the percolate was quantified using a sodium hydroxide trap. Results showed good reproducibility between two replicate mesocosms. The pCO2 varied between 0.2 and 1.1%, and the alkalinity was 0.1-0.6 meq L-1. The measured cumulative effluent DIC flux over the 78-day experimental period was 185-196 mg L-1 m-2 and in the same range as estimates derived from pCO2 and alkalinity in samples extracted from the side of the mesocosm column and the drainage flux. Our results indicate that the mesocosm system is a promising tool for studying DIC percolation fluxes and other biogeochemical transport processes in unsaturated environments.

  3. Quantifying complexity of financial short-term time series by composite multiscale entropy measure

    NASA Astrophysics Data System (ADS)

    Niu, Hongli; Wang, Jun

    2015-05-01

    It is significant to study the complexity of financial time series since the financial market is a complex evolving dynamic system. Multiscale entropy is a prevailing method used to quantify the complexity of a time series. Because its entropy estimation is less reliable for short-term time series at large time scales, a modified method, the composite multiscale entropy (CMSE), is applied to the financial market. To verify its effectiveness, its application to synthetic white noise and 1/f noise with different data lengths is reproduced first in the present paper. The method is then introduced for the first time in a reliability test with two Chinese stock indices. When applied to short-term return series, the CMSE method shows advantages in reducing deviations of entropy estimation and demonstrates more stable and reliable results than the conventional MSE algorithm. Finally, the composite multiscale entropy of six important stock indices from the world financial markets is investigated, and some useful and interesting empirical results are obtained.
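
    A compact sketch of the composite multiscale entropy idea referenced above: at each scale the sample entropy is computed for every possible coarse-graining offset and the results are averaged, which is what reduces estimation deviations for short series. The parameter defaults m = 2 and r = 0.15 times the standard deviation of the original series are common choices assumed here for illustration, not necessarily the paper's settings.

```python
# Hedged sketch of composite multiscale entropy (CMSE). Parameters m and r
# are common defaults assumed for illustration, not taken from the paper.
import numpy as np

def sample_entropy(x, m, tol):
    """SampEn(m, tol) with the Chebyshev distance; nan if no matches found."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def count_pairs(length):
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= tol))
        return count

    b = count_pairs(m)        # template matches of length m
    a = count_pairs(m + 1)    # template matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def composite_mse(x, scales=range(1, 11), m=2, r=0.15):
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)                    # tolerance fixed from the original series
    curve = []
    for tau in scales:
        ents = []
        for offset in range(tau):          # average over all coarse-graining offsets
            n_win = (len(x) - offset) // tau
            cg = x[offset:offset + n_win * tau].reshape(n_win, tau).mean(axis=1)
            ents.append(sample_entropy(cg, m, tol))
        curve.append(np.nanmean(ents))
    return np.array(curve)

rng = np.random.default_rng(0)
print(composite_mse(rng.standard_normal(1000), scales=range(1, 6)).round(3))
```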

  4. A Supervised Approach to Quantifying Sentence Similarity: With Application to Evidence Based Medicine

    PubMed Central

    Hassanzadeh, Hamed; Groza, Tudor; Nguyen, Anthony; Hunter, Jane

    2015-01-01

    Following the Evidence Based Medicine (EBM) practice, practitioners make use of the existing evidence to make therapeutic decisions. This evidence, in the form of scientific statements, is usually found in scholarly publications such as randomised control trials and systematic reviews. However, finding such information in the overwhelming amount of published material is particularly challenging. Approaches have been proposed to automatically extract scientific artefacts in EBM using standardised schemas. Our work takes this stream a step forward and looks into consolidating extracted artefacts—i.e., quantifying their degree of similarity based on the assumption that they carry the same rhetorical role. By semantically connecting key statements in the literature of EBM, practitioners are not only able to find available evidence more easily, but can also track the effects of different treatments/outcomes across a number of related studies. We devise a regression model based on a varied set of features and evaluate it both on a general English corpus (the SICK corpus) and on an EBM corpus (the NICTA-PIBOSO corpus). Experimental results show that our approach performs on par with the state of the art on general English and achieves encouraging results on biomedical text when compared against human judgement. PMID:26039310

  5. Quantifying Security Threats and Their Impact

    SciTech Connect

    Aissa, Anis Ben; Abercrombie, Robert K; Sheldon, Frederick T; Mili, Ali

    2009-01-01

    In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain as a result of security breakdowns. In this paper we illustrate this infrastructure by means of an example involving an e-commerce application.

  6. Quantifying Effects Of Water Stress On Sunflowers

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This poster presentation describes the data collection and analysis procedures and results for 2009 from a research grant funded by the National Sunflower Association. The primary objective was to evaluate the use of crop canopy temperature measured with infrared temperature sensors, as a more time ...

  7. Quantifying MCMC exploration of phylogenetic tree space.

    PubMed

    Whidden, Chris; Matsen, Frederick A

    2015-05-01

    In order to gain an understanding of the effectiveness of phylogenetic Markov chain Monte Carlo (MCMC), it is important to understand how quickly the empirical distribution of the MCMC converges to the posterior distribution. In this article, we investigate this problem on phylogenetic tree topologies with a metric that is especially well suited to the task: the subtree prune-and-regraft (SPR) metric. This metric directly corresponds to the minimum number of MCMC rearrangements required to move between trees in common phylogenetic MCMC implementations. We develop a novel graph-based approach to analyze tree posteriors and find that the SPR metric is much more informative than simpler metrics that are unrelated to MCMC moves. In doing so, we show conclusively that topological peaks do occur in Bayesian phylogenetic posteriors from real data sets as sampled with standard MCMC approaches, investigate the efficiency of Metropolis-coupled MCMC (MCMCMC) in traversing the valleys between peaks, and show that conditional clade distribution (CCD) can have systematic problems when there are multiple peaks. PMID:25631175

  8. Quantifying MCMC Exploration of Phylogenetic Tree Space

    PubMed Central

    Whidden, Chris; Matsen, Frederick A.

    2015-01-01

    In order to gain an understanding of the effectiveness of phylogenetic Markov chain Monte Carlo (MCMC), it is important to understand how quickly the empirical distribution of the MCMC converges to the posterior distribution. In this article, we investigate this problem on phylogenetic tree topologies with a metric that is especially well suited to the task: the subtree prune-and-regraft (SPR) metric. This metric directly corresponds to the minimum number of MCMC rearrangements required to move between trees in common phylogenetic MCMC implementations. We develop a novel graph-based approach to analyze tree posteriors and find that the SPR metric is much more informative than simpler metrics that are unrelated to MCMC moves. In doing so, we show conclusively that topological peaks do occur in Bayesian phylogenetic posteriors from real data sets as sampled with standard MCMC approaches, investigate the efficiency of Metropolis-coupled MCMC (MCMCMC) in traversing the valleys between peaks, and show that conditional clade distribution (CCD) can have systematic problems when there are multiple peaks. PMID:25631175

  9. Quantifying tissue mechanical properties using photoplethysmography

    PubMed Central

    Akl, Tony J.; Wilson, Mark A.; Ericson, M. Nance; Coté, Gerard L.

    2014-01-01

    Photoplethysmography (PPG) is a non-invasive optical method that can be used to detect blood volume changes in the microvascular bed of tissue. The PPG signal comprises two components; a pulsatile waveform (AC) attributed to changes in the interrogated blood volume with each heartbeat, and a slowly varying baseline (DC) combining low frequency fluctuations mainly due to respiration and sympathetic nervous system activity. In this report, we investigate the AC pulsatile waveform of the PPG pulse for ultimate use in extracting information regarding the biomechanical properties of tissue and vasculature. By analyzing the rise time of the pulse in the diastole period, we show that PPG is capable of measuring changes in the Young’s Modulus of tissue mimicking phantoms with a resolution of 4 KPa in the range of 12 to 61 KPa. In addition, the shape of the pulse can potentially be used to diagnose vascular complications by differentiating upstream from downstream complications. A Windkessel model was used to model changes in the biomechanical properties of the circulation and to test the proposed concept. The modeling data confirmed the response seen in vitro and showed the same trends in the PPG rise and fall times with changes in compliance and vascular resistance. PMID:25071970

  10. Quantifying tissue mechanical properties using photoplethysmography

    SciTech Connect

    Akl, Tony; Wilson, Mark A.; Ericson, Milton Nance; Cote, Gerard L.

    2014-01-01

    Photoplethysmography (PPG) is a non-invasive optical method that can be used to detect blood volume changes in the microvascular bed of tissue. The PPG signal comprises two components; a pulsatile waveform (AC) attributed to changes in the interrogated blood volume with each heartbeat, and a slowly varying baseline (DC) combining low frequency fluctuations mainly due to respiration and sympathetic nervous system activity. In this report, we investigate the AC pulsatile waveform of the PPG pulse for ultimate use in extracting information regarding the biomechanical properties of tissue and vasculature. By analyzing the rise time of the pulse in the diastole period, we show that PPG is capable of measuring changes in the Young's Modulus of tissue mimicking phantoms with a resolution of 4 KPa in the range of 12 to 61 KPa. In addition, the shape of the pulse can potentially be used to diagnose vascular complications by differentiating upstream from downstream complications. A Windkessel model was used to model changes in the biomechanical properties of the circulation and to test the proposed concept. The modeling data confirmed the response seen in vitro and showed the same trends in the PPG rise and fall times with changes in compliance and vascular resistance.
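
    A minimal sketch of a two-element Windkessel model of the kind mentioned above, showing how the simulated pulse responds as compliance changes. The model order, the half-sine inflow, and all parameter values are our assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: two-element Windkessel (peripheral resistance R, compliance C)
# driven by a half-sine inflow pulse; illustrates how pulse shape and rise time
# shift with compliance. All values are placeholders.
import numpy as np

def windkessel_pulse(R=1.0, C=1.5, heart_period=0.8, systole=0.3, dt=1e-3):
    t = np.arange(0.0, heart_period, dt)
    q_in = np.where(t < systole, np.sin(np.pi * t / systole), 0.0)  # inflow waveform
    p = np.zeros_like(t)
    for i in range(1, len(t)):
        dpdt = (q_in[i - 1] - p[i - 1] / R) / C     # dP/dt = (Q_in - P/R) / C
        p[i] = p[i - 1] + dt * dpdt                 # forward Euler step
    return t, p

for C in (0.5, 1.5, 3.0):                           # stiffer vasculature -> lower C
    t, p = windkessel_pulse(C=C)
    rise_time = t[np.argmax(p)]                     # time from pulse foot to peak
    print(f"C = {C:.1f}: peak pressure {p.max():.2f}, rise time {rise_time * 1e3:.0f} ms")
```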

  11. Quantifying dose to the reconstructed breast: Can we adequately treat?

    SciTech Connect

    Chung, Eugene; Marsh, Robin B.; Griffith, Kent A.; Moran, Jean M.; Pierce, Lori J.

    2013-04-01

    To evaluate how immediate reconstruction (IR) impacts postmastectomy radiotherapy (PMRT) dose distributions to the reconstructed breast (RB), internal mammary nodes (IMN), heart, and lungs using quantifiable dosimetric end points. 3D conformal plans were developed for 20 IR patients, 10 autologous reconstruction (AR), and 10 expander-implant (EI) reconstruction. For each reconstruction type, 5 right- and 5 left-sided reconstructions were selected. Two plans were created for each patient, 1 with RB coverage alone and 1 with RB + IMN coverage. Left-sided EI plans without IMN coverage had higher heart Dmean than left-sided AR plans (2.97 and 0.84 Gy, p = 0.03). Otherwise, results did not vary by reconstruction type and all remaining metrics were evaluated using a combined AR and EI dataset. RB coverage was adequate regardless of laterality or IMN coverage (Dmean 50.61 Gy, D95 45.76 Gy). When included, IMN Dmean and D95 were 49.57 and 40.96 Gy, respectively. Mean heart doses increased with left-sided treatment plans and IMN inclusion. Right-sided treatment plans and IMN inclusion increased mean lung V20. Using standard field arrangements and 3D planning, we observed excellent coverage of the RB and IMN, regardless of laterality or reconstruction type. Our results demonstrate that adequate doses can be delivered to the RB with or without IMN coverage.

  12. Method for quantifying optical properties of the human lens

    DOEpatents

    Loree, deceased, Thomas R. (late of Albuquerque, NM); Bigio, Irving J. (Los Alamos, NM); Zuclich, Joseph A. (San Antonio, TX); Shimada, Tsutomu (Los Alamos, NM); Strobl, Karlheinz (Fiskdale, MA)

    1999-01-01

    Method for quantifying optical properties of the human lens. The present invention includes the application of fiberoptic, OMA-based instrumentation as an in vivo diagnostic tool for the human ocular lens. Rapid, noninvasive and comprehensive assessment of the optical characteristics of a lens using very modest levels of exciting light is described. Typically, the backscatter and fluorescence spectra (from about 300- to 900-nm) elicited by each of several exciting wavelengths (from about 300- to 600-nm) are collected within a few seconds. The resulting optical signature of individual lenses is then used to assess the overall optical quality of the lens by comparing the results with a database of similar measurements obtained from a reference set of normal human lenses of various ages. Several metrics have been identified which gauge the optical quality of a given lens relative to the norm for the subject's chronological age. These metrics may also serve to document accelerated optical aging and/or as early indicators of cataract or other disease processes.

  13. Quantifying the direct use value of Condor seamount

    NASA Astrophysics Data System (ADS)

    Ressurreição, Adriana; Giacomello, Eva

    2013-12-01

    Seamounts often satisfy numerous uses and interests. Multiple uses can generate multiple benefits but also conflicts and impacts, calling, therefore, for integrated and sustainable management. To assist in developing comprehensive management strategies, policymakers recognise the need to include measures of socioeconomic analysis alongside ecological data so that practical compromises can be made. This study assessed the direct output impact (DOI) of the relevant marine activities operating at Condor seamount (Azores, central northeast Atlantic) as proxies of the direct use values provided by the resource system. Results demonstrated that Condor seamount supported a wide range of uses yielding distinct economic outputs. Demersal fisheries, scientific research and shark diving were the top-three activities generating the highest revenues, while tuna fisheries, whale watching and scuba-diving had marginal economic significance. Results also indicated that the economic importance of non-extractive uses of Condor is considerable, highlighting the importance of these uses as alternative income-generating opportunities for local communities. It is hoped that quantifying the direct use values provided by Condor seamount will contribute to the decision making process towards its long-term conservation and sustainable use.

  14. Method for quantifying optical properties of the human lens

    DOEpatents

    Loree, T.R.; Bigio, I.J.; Zuclich, J.A.; Shimada, Tsutomu; Strobl, K.

    1999-04-13

    A method is disclosed for quantifying optical properties of the human lens. The present invention includes the application of fiberoptic, OMA-based instrumentation as an in vivo diagnostic tool for the human ocular lens. Rapid, noninvasive and comprehensive assessment of the optical characteristics of a lens using very modest levels of exciting light is described. Typically, the backscatter and fluorescence spectra (from about 300- to 900-nm) elicited by each of several exciting wavelengths (from about 300- to 600-nm) are collected within a few seconds. The resulting optical signature of individual lenses is then used to assess the overall optical quality of the lens by comparing the results with a database of similar measurements obtained from a reference set of normal human lenses of various ages. Several metrics have been identified which gauge the optical quality of a given lens relative to the norm for the subject's chronological age. These metrics may also serve to document accelerated optical aging and/or as early indicators of cataract or other disease processes. 8 figs.

  15. QUANTIFYING THE EVOLVING MAGNETIC STRUCTURE OF ACTIVE REGIONS

    SciTech Connect

    Conlon, Paul A.; McAteer, R.T. James; Gallagher, Peter T.; Fennell, Linda

    2010-10-10

    The topical and controversial issue of parameterizing the magnetic structure of solar active regions has vital implications in the understanding of how these structures form, evolve, produce solar flares, and decay. This interdisciplinary and ill-constrained problem of quantifying complexity is addressed by using a two-dimensional wavelet transform modulus maxima (WTMM) method to study the multifractal properties of active region photospheric magnetic fields. The WTMM method provides an adaptive space-scale partition of a fractal distribution, from which one can extract the multifractal spectra. The use of a novel segmentation procedure allows us to remove the quiet Sun component and reliably study the evolution of active region multifractal parameters. It is shown that prior to the onset of solar flares, the magnetic field undergoes restructuring as Dirac-like features (with a Hölder exponent, h = -1) coalesce to form step functions (where h = 0). The resulting configuration has a higher concentration of gradients along neutral line features. We propose that when sufficient flux is present in an active region for a period of time, it must be structured with a fractal dimension greater than 1.2, and a Hölder exponent greater than -0.7, in order to produce M- and X-class flares. This result has immediate applications in the study of the underlying physics of active region evolution and space weather forecasting.

  16. Computational Protein Design Quantifies Structural Constraints on Amino Acid Covariation

    PubMed Central

    Ollikainen, Noah; Kortemme, Tanja

    2013-01-01

    Amino acid covariation, where the identities of amino acids at different sequence positions are correlated, is a hallmark of naturally occurring proteins. This covariation can arise from multiple factors, including selective pressures for maintaining protein structure, requirements imposed by a specific function, or from phylogenetic sampling bias. Here we employed flexible backbone computational protein design to quantify the extent to which protein structure has constrained amino acid covariation for 40 diverse protein domains. We find significant similarities between the amino acid covariation in alignments of natural protein sequences and sequences optimized for their structures by computational protein design methods. These results indicate that the structural constraints imposed by protein architecture play a dominant role in shaping amino acid covariation and that computational protein design methods can capture these effects. We also find that the similarity between natural and designed covariation is sensitive to the magnitude and mechanism of backbone flexibility used in computational protein design. Our results thus highlight the necessity of including backbone flexibility to correctly model precise details of correlated amino acid changes and give insights into the pressures underlying these correlations. PMID:24244128

  17. AN INDUSTRIAL HYGIENE SAMPLING STRATEGY TO QUANTIFY EMPLOYEE EXPOSURE

    SciTech Connect

    Thompson, Aaron L.; Hylko, James M.

    2003-02-27

    Depending on the invasive nature of the waste management activities being performed, excessive concentrations of mists, vapors, gases, dusts or fumes may be present, thus creating hazards to the employee from either inhalation into the lungs or absorption through the skin. To address these hazards, similar exposure groups and an exposure profile result consisting of: (1) a hazard index (concentration); (2) an exposure rating (monitoring results or exposure probabilities); and (3) a frequency rating (hours of potential exposure per week) are used to assign an exposure risk rating (ERR). The ERR determines whether the potential hazards pose significant risks to employees, linking potential exposure to breathing zone (BZ) monitoring requirements. Three case studies, consisting of: (1) a hazard-task approach; (2) a hazard-job classification-task approach; and (3) a hazard approach, demonstrate how to conduct exposure assessments using this methodology. Environment, safety and health professionals can then categorize levels of risk and evaluate the need for BZ monitoring, thereby quantifying employee exposure levels accurately.
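
    A hypothetical illustration of the bookkeeping described above. The abstract names the three components of the exposure profile but does not spell out the rating scales or the rule for combining them, so the 1-4 scales, the multiplicative rule, and the monitoring threshold below are assumptions made purely for illustration.

```python
# Hypothetical sketch: combine a hazard index, an exposure rating and a
# frequency rating into an exposure risk rating (ERR) for a similar exposure
# group, then use the ERR to flag breathing-zone (BZ) monitoring needs.
# The scales, the product rule and the threshold are assumed, not the paper's.
from dataclasses import dataclass

@dataclass
class SimilarExposureGroup:
    name: str
    hazard_index: int      # 1 (low concentration) .. 4 (high); assumed scale
    exposure_rating: int   # 1 .. 4, from monitoring results or exposure probabilities
    frequency_rating: int  # 1 .. 4, from hours of potential exposure per week

    def err(self) -> int:
        return self.hazard_index * self.exposure_rating * self.frequency_rating

    def needs_bz_monitoring(self, threshold: int = 16) -> bool:
        return self.err() >= threshold

groups = [
    SimilarExposureGroup("drum handling", 3, 3, 2),
    SimilarExposureGroup("records review", 1, 1, 4),
]
for g in groups:
    print(g.name, g.err(), g.needs_bz_monitoring())
```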

  18. Quantifying the benefits of vehicle pooling with shareability networks

    PubMed Central

    Santi, Paolo; Resta, Giovanni; Szell, Michael; Sobolevsky, Stanislav; Strogatz, Steven H.; Ratti, Carlo

    2014-01-01

    Taxi services are a vital part of urban transportation, and a considerable contributor to traffic congestion and air pollution causing substantial adverse effects on human health. Sharing taxi trips is a possible way of reducing the negative impact of taxi services on cities, but this comes at the expense of passenger discomfort quantifiable in terms of a longer travel time. Due to computational challenges, taxi sharing has traditionally been approached on small scales, such as within airport perimeters, or with dynamical ad hoc heuristics. However, a mathematical framework for the systematic understanding of the tradeoff between collective benefits of sharing and individual passenger discomfort is lacking. Here we introduce the notion of shareability network, which allows us to model the collective benefits of sharing as a function of passenger inconvenience, and to efficiently compute optimal sharing strategies on massive datasets. We apply this framework to a dataset of millions of taxi trips taken in New York City, showing that with increasing but still relatively low passenger discomfort, cumulative trip length can be cut by 40% or more. This benefit comes with reductions in service cost, emissions, and with split fares, hinting toward a wide passenger acceptance of such a shared service. Simulation of a realistic online system demonstrates the feasibility of a shareable taxi service in New York City. Shareability as a function of trip density saturates fast, suggesting effectiveness of the taxi sharing system also in cities with much sparser taxi fleets or when willingness to share is low. PMID:25197046
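
    A minimal sketch of the shareability-network idea as we read it from the abstract: trips become nodes, an edge connects two trips that can be combined within a passenger-delay budget, and a maximum-weight matching selects pairwise shares that maximise the trip length saved. The trip data, the savings values, and the use of networkx are illustrative assumptions, not the paper's dataset or code.

```python
# Hedged sketch: pairwise trip sharing as maximum-weight matching on a
# shareability network. Trips, savings and the delay budget are invented.
import networkx as nx

trip_length_km = {"t1": 5.0, "t2": 4.0, "t3": 6.0, "t4": 3.0}

# Hypothetical km of driving saved if a pair of trips is served together,
# already filtered to pairs that respect the passenger-delay budget.
savings = {("t1", "t2"): 3.2, ("t1", "t3"): 1.1, ("t2", "t4"): 2.5, ("t3", "t4"): 0.7}

G = nx.Graph()
G.add_nodes_from(trip_length_km)
for (a, b), s in savings.items():
    G.add_edge(a, b, weight=s)

matching = nx.max_weight_matching(G)           # set of matched trip pairs
saved = sum(G[a][b]["weight"] for a, b in matching)
total = sum(trip_length_km.values())
print(f"shared pairs: {sorted(matching)}, saving {saved:.1f} of {total:.1f} km "
      f"({100 * saved / total:.0f}%)")
```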

  19. Quantifying global dust devil occurrence from meteorological analyses

    PubMed Central

    Jemmett-Smith, Bradley C; Marsham, John H; Knippertz, Peter; Gilkeson, Carl A

    2015-01-01

    Dust devils and nonrotating dusty plumes are effective uplift mechanisms for fine particles, but their contribution to the global dust budget is uncertain. By applying known bulk thermodynamic criteria to European Centre for Medium-Range Weather Forecasts (ECMWF) operational analyses, we provide the first global hourly climatology of potential dust devil and dusty plume (PDDP) occurrence. In agreement with observations, activity is highest from late morning into the afternoon. Combining PDDP frequencies with dust source maps and typical emission values gives the best estimate of global contributions of 3.4% (uncertainty 0.9–31%), 1 order of magnitude lower than the only estimate previously published. Total global hours of dust uplift by dry convection are ~0.002% of the dust-lifting winds resolved by ECMWF, consistent with dry convection making a small contribution to global uplift. Reducing uncertainty requires better knowledge of factors controlling PDDP occurrence, source regions, and dust fluxes induced by dry convection. Key Points: global potential dust devil occurrence quantified from meteorological analyses; climatology shows realistic diurnal cycle and geographical distribution; best estimate of global contribution of 3.4% is 10 times smaller than the previous estimate. PMID:26681815

  20. Parkinson's Law quantified: three investigations on bureaucratic inefficiency

    NASA Astrophysics Data System (ADS)

    Klimek, Peter; Hanel, Rudolf; Thurner, Stefan

    2009-03-01

    We formulate three famous, descriptive essays of Parkinson on bureaucratic inefficiency in a quantifiable and dynamical socio-physical framework. In the first model we show how recent opinion formation models for small groups can be used to understand Parkinson's observation that decision-making bodies such as cabinets or boards become highly inefficient once their size exceeds a critical 'Coefficient of Inefficiency', typically around 20. A second observation of Parkinson—which is sometimes referred to as Parkinson's Law—is that the growth of bureaucratic or administrative bodies usually goes hand in hand with a drastic decrease of their overall efficiency. In our second model we view a bureaucratic body as a system with a flow of workers, who enter, are promoted to various internal levels within the system over time, and leave the system after having served for a certain time. Promotion is usually associated with an increase in subordinates. Within the proposed model it becomes possible to work out the phase diagram showing under which conditions bureaucratic growth can be confined. In our last model we assign individual efficiency curves to workers throughout their life in administration, and compute the optimum time to grant them the old age pension, in order to ensure a maximum of efficiency within the body—in Parkinson's words, we compute the 'Pension Point'.

  1. Quantifying Stochastic Effects in Biochemical Reaction Networks using Partitioned Leaping

    E-print Network

    Leonard A. Harris; Aaron M. Piccirilli; Emily R. Majusiak; Paulette Clancy

    2009-07-06

    "Leaping" methods show great promise for significantly accelerating stochastic simulations of complex biochemical reaction networks. However, few practical applications of leaping have appeared in the literature to date. Here, we address this issue using the "partitioned leaping algorithm" (PLA) [L.A. Harris and P. Clancy, J. Chem. Phys. 125, 144107 (2006)], a recently-introduced multiscale leaping approach. We use the PLA to investigate stochastic effects in two model biochemical reaction networks. The networks that we consider are simple enough so as to be accessible to our intuition but sufficiently complex so as to be generally representative of real biological systems. We demonstrate how the PLA allows us to quantify subtle effects of stochasticity in these systems that would be difficult to ascertain otherwise as well as not-so-subtle behaviors that would strain commonly-used "exact" stochastic methods. We also illustrate bottlenecks that can hinder the approach and exemplify and discuss possible strategies for overcoming them. Overall, our aim is to aid and motivate future applications of leaping by providing stark illustrations of the benefits of the method while at the same time elucidating obstacles that are often encountered in practice.

  2. Quantifying the benefits of vehicle pooling with shareability networks.

    PubMed

    Santi, Paolo; Resta, Giovanni; Szell, Michael; Sobolevsky, Stanislav; Strogatz, Steven H; Ratti, Carlo

    2014-09-16

    Taxi services are a vital part of urban transportation, and a considerable contributor to traffic congestion and air pollution causing substantial adverse effects on human health. Sharing taxi trips is a possible way of reducing the negative impact of taxi services on cities, but this comes at the expense of passenger discomfort quantifiable in terms of a longer travel time. Due to computational challenges, taxi sharing has traditionally been approached on small scales, such as within airport perimeters, or with dynamical ad hoc heuristics. However, a mathematical framework for the systematic understanding of the tradeoff between collective benefits of sharing and individual passenger discomfort is lacking. Here we introduce the notion of shareability network, which allows us to model the collective benefits of sharing as a function of passenger inconvenience, and to efficiently compute optimal sharing strategies on massive datasets. We apply this framework to a dataset of millions of taxi trips taken in New York City, showing that with increasing but still relatively low passenger discomfort, cumulative trip length can be cut by 40% or more. This benefit comes with reductions in service cost, emissions, and with split fares, hinting toward a wide passenger acceptance of such a shared service. Simulation of a realistic online system demonstrates the feasibility of a shareable taxi service in New York City. Shareability as a function of trip density saturates fast, suggesting effectiveness of the taxi sharing system also in cities with much sparser taxi fleets or when willingness to share is low. PMID:25197046

  3. Quantifying photometric observing conditions on Paranal using an IR camera

    NASA Astrophysics Data System (ADS)

    Kerber, Florian; Querel, Richard R.; Hanuschik, Reinhard

    2014-08-01

    A Low Humidity and Temperature Profiling (LHATPRO) microwave radiometer, manufactured by Radiometer Physics GmbH (RPG), is used to monitor sky conditions over ESO's Paranal observatory in support of VLT science operations. In addition to measuring precipitable water vapour (PWV) the instrument also contains an IR camera measuring sky brightness temperature at 10.5 µm. Due to its extended operating range down to -100 °C it is capable of detecting very cold and very thin, even sub-visual, cirrus clouds. We present a set of instrument flux calibration values compared with a detrended fluctuation analysis (DFA) of the IR camera's zenith-looking sky brightness data measured above Paranal over the past two years. We show that it is possible to quantify photometric observing conditions and that the method is highly sensitive to the presence of even very thin clouds but robust against variations of sky brightness caused by effects other than clouds, such as variations of precipitable water vapour. Hence it can be used to determine photometric conditions for science operations. About 60% of nights are free of clouds on Paranal. More work will be required to classify the clouds using this technique. In the future this approach might become part of VLT science operations for evaluating nightly sky conditions.

  4. Quantifying capture efficiency of gas collection wells with gas tracers.

    PubMed

    Yazdani, Ramin; Imhoff, Paul; Han, Byunghyun; Mei, Changen; Augenstein, Don

    2015-09-01

    A new in situ method for directly measuring the gas collection efficiency in the region around a gas extraction well was developed. Thirteen tests were conducted by injecting a small volume of gas tracer sequentially at different locations in the landfill cell, and the gas tracer mass collected from each test was used to assess the collection efficiency at each injection point. For 11 tests the gas collection was excellent, always exceeding 70% with seven tests showing a collection efficiency exceeding 90%. For one test the gas collection efficiency was 8±6%. Here, the poor efficiency was associated with a water-laden refuse or remnant daily cover soil located between the point of tracer injection and the extraction well. The utility of in situ gas tracer tests for quantifying landfill gas capture at particular locations within a landfill cell was demonstrated. While there are certainly limitations to this technology, this method may be a valuable tool to help answer questions related to landfill gas collection efficiency and gas flow within landfills. Quantitative data from tracer tests may help assess the utility and cost-effectiveness of alternative cover systems, well designs and landfill gas collection management practices. PMID:26148643

  5. Quantifying the Climate-Scale Accuracy of Satellite Cloud Retrievals

    NASA Astrophysics Data System (ADS)

    Roberts, Y.; Wielicki, B. A.; Sun-Mack, S.; Minnis, P.; Liang, L.; Di Girolamo, L.

    2014-12-01

    Instrument calibration and cloud retrieval algorithms have been developed to minimize retrieval errors on small scales. However, measurement uncertainties and assumptions within retrieval algorithms at the pixel level may alias into decadal-scale trends of cloud properties. We first, therefore, quantify how instrument calibration changes could alias into cloud property trends. For a perfect observing system the climate trend accuracy is limited only by the natural variability of the climate variable. Alternatively, for an actual observing system, the climate trend accuracy is additionally limited by the measurement uncertainty. Drifts in calibration over time may therefore be disguised as a true climate trend. We impose absolute calibration changes to MODIS spectral reflectance used as input to the CERES Cloud Property Retrieval System (CPRS) and run the modified MODIS reflectance through the CPRS to determine the sensitivity of cloud properties to calibration changes. We then use these changes to determine the impact of instrument calibration changes on trend uncertainty in reflected solar cloud properties. Secondly, we quantify how much cloud retrieval algorithm assumptions alias into cloud optical retrieval trends by starting with the largest of these biases: the plane-parallel assumption in cloud optical thickness (τC) retrievals. First, we collect liquid water cloud fields obtained from Multi-angle Imaging Spectroradiometer (MISR) measurements to construct realistic probability distribution functions (PDFs) of 3D cloud anisotropy (a measure of the degree to which clouds depart from plane-parallel) for different ISCCP cloud types. Next, we will conduct a theoretical study with dynamically simulated cloud fields and a 3D radiative transfer model to determine the relationship between 3D cloud anisotropy and 3D τC bias for each cloud type. Combining these results provides distributions of 3D τC bias by cloud type. Finally, we will estimate the change in frequency of occurrence of cloud types between two decades and will have the information needed to calculate the total change in 3D optical thickness bias between two decades. If we uncover aliases in this study, the results will motivate the development and rigorous testing of climate-specific cloud retrieval algorithms.

  6. Quantifying the Restorable Water Volume of California's Sierra Nevada Meadows

    NASA Astrophysics Data System (ADS)

    Emmons, J. D.; Yarnell, S. M.; Fryjoff-Hung, A.; Viers, J.

    2013-12-01

    The Sierra Nevada is estimated to provide over 66% of California's water supply, which is largely derived from snowmelt. Global climate warming is expected to result in a decrease in snowpack and an increase in melting rate, making the attenuation of snowmelt, by any means, an important ecosystem service for ensuring water availability. Montane meadows are dispersed throughout the mountain range and can act like natural reservoirs, and also provide wildlife habitat, water filtration, and water storage. Despite the important role of meadows in the Sierra Nevada, a large proportion is degraded by stream incision, which increases volume outflows and reduces overbank flooding, thus reducing infiltration and potential water storage. Restoration of meadow stream channels would therefore improve hydrological functioning, including increased water storage. The potential water holding capacity of restored meadows has yet to be quantified, so this research seeks to address this knowledge gap by estimating the restorable water volume lost to stream incision. More than 17,000 meadows were analyzed by categorizing their erosion potential using channel slope and soil texture, ultimately resulting in six general erodibility types. Field measurements of over 100 meadows, stratified by latitude, elevation, and geologic substrate, were then taken and analyzed for each erodibility type to determine the average depth of incision. Restorable water volume was then quantified as a function of the water holding capacity of the soil, meadow area and incised depth. Total restorable water volume was found to be 120 x 10^6 m3, or approximately 97,000 acre-feet. Using 95% confidence intervals for incised depth, the upper and lower bounds of the total restorable water volume were found to be 107 - 140 x 10^6 m3. Though this estimate of restorable water volume is small relative to the storage capacity of typical California reservoirs, restoration of Sierra Nevada meadows remains an important objective. Storage of water in meadows benefits California wildlife, potentially attenuates floods, and elevates base flows, which can ease the effects on the spring recession curve of the expected decline in Sierran snowpack with atmospheric warming.
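
    A minimal sketch of the volume calculation described above, with restorable water volume taken as the product of meadow area, incised depth, and the water holding capacity of the soil. The per-meadow numbers and the capacity value are placeholders, not measurements from the study.

```python
# Hedged sketch: restorable water volume per meadow as
# area x incised depth x soil water holding capacity. Values are placeholders.
def restorable_volume_m3(area_m2, incised_depth_m, water_holding_capacity=0.4):
    """Additional water (m^3) storable if the incised channel were restored."""
    return area_m2 * incised_depth_m * water_holding_capacity

meadows = [
    {"area_m2": 2.0e5, "incised_depth_m": 0.8},   # hypothetical meadow A
    {"area_m2": 5.0e4, "incised_depth_m": 1.2},   # hypothetical meadow B
]
total = sum(restorable_volume_m3(m["area_m2"], m["incised_depth_m"]) for m in meadows)
print(f"total restorable volume: {total:.2e} m^3")
```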

  7. Educational Outreach: The Space Science Road Show

    NASA Astrophysics Data System (ADS)

    Cox, N. L. J.

    2002-01-01

    The poster presented will give an overview of a study towards a "Space Road Show". The topic of this show is space science. The target group is adolescents, aged 12 to 15, at Dutch high schools. The show and its accompanying experiments would be supported with suitable educational material. Science teachers at schools can decide for themselves if they want to use this material in advance, afterwards or not at all. The aims of this outreach effort are: to motivate students for space science and engineering, to help them understand the importance of (space) research, to give them a positive feeling about the possibilities offered by space and in the process give them useful knowledge on space basics. The show revolves around three main themes: applications, science and society. First the students will get some historical background on the importance of space/astronomy to civilization. Secondly they will learn more about novel uses of space. On the one hand they will learn of "Views on Earth" involving technologies like Remote Sensing (or Spying), Communication, Broadcasting, GPS and Telemedicine. On the other hand they will experience "Views on Space" illustrated by past, present and future space research missions, like the space exploration missions (Cassini/Huygens, Mars Express and Rosetta) and the astronomy missions (Soho and XMM). Meanwhile, the students will learn more about the technology of launchers and satellites needed to accomplish these space missions. Throughout the show and especially towards the end attention will be paid to the third theme "Why go to space"? Other reasons for people to get into space will be explored. An important question in this is the commercial (manned) exploration of space. Thus, the questions of benefit of space to society are integrated in the entire show. It raises some fundamental questions about the effects of space travel on our environment, poverty and other moral issues. The show attempts to connect scientific with community thought. The difficulty with a show this elaborate and intricate is communicating on a level understandable for teenagers, whilst not treating them like children. Professional space scientists know how easy it is to lose oneself in technical specifics. This would, of course, only confuse young people. The author would like to discuss the ideas for this show with a knowledgeable audience and hopefully get some (constructive) feedback.

  8. Quantifying protein diffusion and capture on filaments

    E-print Network

    Emanuel Reithmann; Louis Reese; Erwin Frey

    2015-03-03

    The functional relevance of regulating proteins is often limited to specific binding sites such as the ends of microtubules or actin-filaments. A localization of proteins on these functional sites is of great importance. We present a quantitative theory for a diffusion and capture process, where proteins diffuse on a filament and stop diffusing when reaching the filament's end. It is found that end-association after one-dimensional diffusion is the main source for tip-localization of such proteins. As a consequence, diffusion and capture is highly efficient in enhancing the reaction velocity of enzymatic reactions, where proteins and filament ends are to each other as enzyme and substrate. We show that the reaction velocity can effectively be described within a Michaelis-Menten framework. Together one-dimensional diffusion and capture beats the (three-dimensional) Smoluchowski diffusion limit for the rate of protein association to filament ends.

  9. Quantifying systematic uncertainties in supernova cosmology

    SciTech Connect

    Nordin, Jakob; Goobar, Ariel; Joensson, Jakob E-mail: ariel@physto.se

    2008-02-15

    Observations of Type Ia supernovae used to map the expansion history of the Universe suffer from systematic uncertainties that need to be propagated into the estimates of cosmological parameters. We propose an iterative Monte Carlo simulation and cosmology fitting technique (SMOCK) to investigate the impact of sources of error upon fits of the dark energy equation of state. This approach is especially useful to track the impact of non-Gaussian, correlated effects, e.g. reddening correction errors, brightness evolution of the supernovae, K-corrections, gravitational lensing, etc. While the tool is primarily aimed at studies and optimization of future instruments, we use the Gold data-set in Riess et al (2007 Astrophys. J. 659 98) to show examples of potential systematic uncertainties that could exceed the quoted statistical uncertainties.

  10. Visual modeling shows that avian host parents use multiple visual cues in rejecting parasitic eggs

    PubMed Central

    Spottiswoode, Claire N.; Stevens, Martin

    2010-01-01

    One of the most striking outcomes of coevolution between species is egg mimicry by brood parasitic birds, resulting from rejection behavior by discriminating host parents. Yet, how exactly does a host detect a parasitic egg? Brood parasitism and egg rejection behavior provide a model system for exploring the relative importance of different visual cues used in a behavioral task. Although hosts are discriminating, we do not know exactly what cues they use, and to answer this it is crucial to account for the receiver's visual perception. Color, luminance (“perceived lightness”) and pattern information have never been simultaneously quantified and experimentally tested through a bird's eye. The cuckoo finch Anomalospiza imberbis and its hosts show spectacular polymorphisms in egg appearance, providing a good opportunity for investigating visual discrimination owing to the large range of patterns and colors involved. Here we combine field experiments in Africa with modeling of avian color vision and pattern discrimination to identify the specific visual cues used by hosts in making rejection decisions. We found that disparity between host and foreign eggs in both color and several aspects of pattern (dispersion, principal marking size, and variability in marking size) were important predictors of rejection, especially color. These cues correspond exactly to the principal differences between host and parasitic eggs, showing that hosts use the most reliable available cues in making rejection decisions, and select for parasitic eggs that are increasingly mimetic in a range of visual attributes. PMID:20421497

  11. Quantifying determinants of cash crop expansion and their relative effects using logistic regression modeling and variance partitioning

    NASA Astrophysics Data System (ADS)

    Xiao, Rui; Su, Shiliang; Mai, Gengchen; Zhang, Zhonghao; Yang, Chenxue

    2015-02-01

    Cash crop expansion has been a major land use change in tropical and subtropical regions worldwide. Quantifying the determinants of cash crop expansion should provide deeper spatial insights into the dynamics and ecological consequences of cash crop expansion. This paper investigated the process of cash crop expansion in Hangzhou region (China) from 1985 to 2009 using remotely sensed data. The corresponding determinants (neighborhood, physical, and proximity) and their relative effects during three periods (1985-1994, 1994-2003, and 2003-2009) were quantified by logistic regression modeling and variance partitioning. Results showed that the total area of cash crops increased from 58,874.1 ha in 1985 to 90,375.1 ha in 2009, with a net growth of 53.5%. Cash crops were more likely to grow in loam soils. Steep areas with higher elevation would experience less likelihood of cash crop expansion. A consistently higher probability of cash crop expansion was found on places with abundant farmland and forest cover in the three periods. Besides, distance to river and lake, distance to county center, and distance to provincial road were decisive determinants for farmers' choice of cash crop plantation. Different categories of determinants and their combinations exerted different influences on cash crop expansion. The joint effects of neighborhood and proximity determinants were the strongest, and the unique effect of physical determinants decreased with time. Our study contributed to understanding of the proximate drivers of cash crop expansion in subtropical regions.

  12. Revelle revisited: Buffer factors that quantify the response of ocean chemistry to changes in DIC and alkalinity

    NASA Astrophysics Data System (ADS)

    Egleston, Eric S.; Sabine, Christopher L.; Morel, FrançOis M. M.

    2010-03-01

    We derive explicit expressions of the Revelle factor and several other buffer factors of interest to climate change scientists and those studying ocean acidification. These buffer factors quantify the sensitivity of CO2 and H+ concentrations ([CO2] and [H+]) and CaCO3 saturation (?) to changes in dissolved inorganic carbon concentration (DIC) and alkalinity (Alk). The explicit expressions of these buffer factors provide a convenient means to compare the degree of buffering of [CO2], [H+], and ? in different regions of the oceans and at different times in the future and to gain insight into the buffering mechanisms. All six buffer factors have roughly similar values, and all reach an absolute minimum when DIC = Alk (pH ˜ 7.5). Surface maps of the buffer factors generally show stronger buffering capacity in the subtropical gyres relative to the polar regions. As the dissolution of anthropogenic CO2 increases the DIC of surface seawater over the next century, all the buffer factors will decrease, resulting in a much greater sensitivity to local variations in DIC and Alk. For example, diurnal and seasonal variations in pH and ? caused by photosynthesis and respiration will be greatly amplified. Buffer factors provide convenient means to quantify the effect that changes in DIC and Alk have on seawater chemistry. They should also help illuminate the role that various physical and biological processes have in determining the oceanic response to an increase in atmospheric CO2.

  13. Comparing 3D Gyrification Index and area-independent curvature-based measures in quantifying neonatal brain folding

    NASA Astrophysics Data System (ADS)

    Rodriguez-Carranza, Claudia E.; Mukherjee, P.; Vigneron, Daniel; Barkovich, James; Studholme, Colin

    2007-03-01

    In this work we compare 3D Gyrification Index and our recently proposed area-independent curvature-based surface measures [26] for the in-vivo quantification of brain surface folding in clinically acquired neonatal MR image data. A meaningful comparison of gyrification across brains of different sizes and their subregions will only be possible through the quantification of folding with measures that are independent of the area of the region of analysis. This work uses a 3D implementation of the classical Gyrification Index, a 2D measure that quantifies folding based on the ratio of the inner and outer contours of the brain and which has been used to study gyral patterns in adults with schizophrenia, among other conditions. The new surface curvature-based measures and the 3D Gyrification Index were calculated on twelve premature infants (age 28-37 weeks) from which surfaces of cerebrospinal fluid/gray matter (CSF/GM) interface and gray matter/white matter (GM/WM) interface were extracted. Experimental results show that our measures better quantify folding on the CSF/GM interface than Gyrification Index, and perform similarly on the GM/WM interface.

  14. Quantifying oil filtration effects on bearing life

    NASA Technical Reports Server (NTRS)

    Needelman, William M.; Zaretsky, Erwin V.

    1991-01-01

    Rolling-element bearing life is influenced by the number, size, and material properties of particles entering the Hertzian contact of the rolling element and raceway. In general, rolling-element bearing life increases with increasing level of oil filtration. Based upon test results, two equations are presented which allow for the adjustment of bearing L(sub 10) or catalog life based upon oil filter rating. It is recommended that where no oil filtration is used catalog life be reduced by 50 percent.

  15. Show Them You Really Want the Job

    ERIC Educational Resources Information Center

    Perlmutter, David D.

    2012-01-01

    Showing that one really "wants" the job entails more than just really wanting the job. An interview is part Broadway casting call, part intellectual dating game, part personality test, and part, well, job interview. When there are 300 applicants for a position, many of them will "fit" the required (and even the preferred) skills listed in the job…

  16. Tilapia show immunization response against Ich

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study compares the immune response of Nile tilapia and red tilapia against parasite Ichthyophthirius multifiliis (Ich) using a cohabitation challenge model. Both Nile and red tilapia showed strong immune response post immunization with live Ich theronts by IP injection or immersion. Blood serum...

  17. George Arcement Shows Locations of USGS Streamgages

    USGS Multimedia Gallery

    USGS Louisiana Water Science Center Director George Arcement shows the locations of USGS' streamgage network to WAFB Meteorologist Jay Grymes.  USGS maintains more than 30 real-time streamgages throughout the area affected by the 2011 Flood. In addition, more than 50 non-real-time gages were...

  18. The Morning Show at WLES-TV.

    ERIC Educational Resources Information Center

    Blondell, Beverley

    1979-01-01

    Describes the production and programing of daily quarter-hour television shows by different groups of students at Laurel (Maryland) Elementary School, guided by the library media specialist who started them five years ago. The video experience has improved students' reading, writing, and math skills, as well as behavior. (MF)

  19. Martinkus docos show reality of Afghanistan war

    E-print Network

    Wapstra, Erik

    Martinkus docos show reality of Afghanistan war By ShAron Webb Journalist John Martinkus reels off the date he was kidnapped in the Iraq war as if it's perma- nently scratched on his brain. "It happened. It was terrible. I thought: Here we go." It's experiences like this that gave credibility to the television war

  20. What Pain Asymbolia Really Shows Colin Klein

    E-print Network

    Klein, Colin

    What Pain Asymbolia Really Shows Colin Klein Macquarie University cvklein@gmail.com Abstract Pain asymbolics feel pain, but act as if they are indifferent to it. Nikola Grahek argues that such patients present a clear counterexample to motivationalism about pain. I argue that Grahek has mischaracterised

  1. Laser entertainment and light shows in education

    NASA Astrophysics Data System (ADS)

    Sabaratnam, Andrew T.; Symons, Charles

    2002-05-01

    Laser shows and beam effects have been a source of entertainment since its first public performance May 9, 1969, at Mills College in Oakland, California. Since 1997, the Photonics Center, NgeeAnn Polytechnic, Singapore, has been using laser shows as a teaching tool. Students are able to exhibit their creative skills and learn at the same time how lasers are used in the entertainment industry. Students will acquire a number of skills including handling three- phase power supply, operation of cooling system, and laser alignment. Students also acquire an appreciation of the arts, learning about shapes and contours as they develop graphics for the shows. After holography, laser show animation provides a combination of the arts and technology. This paper aims to briefly describe how a krypton-argon laser, galvanometer scanners, a polychromatic acousto-optic modulator and related electronics are put together to develop a laser projector. The paper also describes how students are trained to make their own laser animation and beam effects with music, and at the same time have an appreciation of the operation of a Class IV laser and the handling of optical components.

  2. Showing Enantiomorphous Crystals of Tartaric Acid

    ERIC Educational Resources Information Center

    Andrade-Gamboa, Julio

    2007-01-01

    Most of the articles and textbooks that show drawings of enantiomorphous crystals use an inadequate view to appreciate the fact that they are non-superimposable mirror images of one another. If a graphical presentation of crystal chirality is not evident, the main attribute of crystal enantiomorphism can not be recognized by students. The classic…

  3. Quantifying emissions reductions from New England offshore wind energy resources

    E-print Network

    Berlinski, Michael Peter

    2006-01-01

    Access to straightforward yet robust tools to quantify the impact of renewable energy resources on air emissions from fossil fuel power plants is important to governments aiming to improve air quality and reduce greenhouse ...

  4. Quantifying the Uncertainty in Estimates of World Conventional Oil Resources 

    E-print Network

    Tien, Chih-Ming

    2010-07-14

    and sometimes personal. A big reason for the large divide between the two groups is the failure of both to acknowledge the significant uncertainty in their estimates. Although some authors attempt to quantify uncertainty, most use deterministic methods...

  5. Quantifying Robustness Metrics in Parameterized Static Timing Analysis

    E-print Network

    Najm, Farid N.

    circuits, the control over process and environmental parameters has become increasingly difficultQuantifying Robustness Metrics in Parameterized Static Timing Analysis Khaled R. Heloue ECE Abstract--Process and environmental variations continue to present significant challenges to designers

  6. Beyond Quantifier-Free Interpolation in Extensions of Presburger Arithmetic

    NASA Astrophysics Data System (ADS)

    Brillout, Angelo; Kroening, Daniel; Rümmer, Philipp; Wahl, Thomas

    Craig interpolation has emerged as an effective means of generating candidate program invariants. We present interpolation procedures for the theories of Presburger arithmetic combined with (i) uninterpreted predicates (QPA+UP), (ii) uninterpreted functions (QPA+UF) and (iii) extensional arrays (QPA+AR). We prove that none of these combinations can be effectively interpolated without the use of quantifiers, even if the input formulae are quantifier-free. We go on to identify fragments of QPA+UP and QPA+UF with restricted forms of guarded quantification that are closed under interpolation. Formulae in these fragments can easily be mapped to quantifier-free expressions with integer division. For QPA+AR, we formulate a sound interpolation procedure that potentially produces interpolants with unrestricted quantifiers.

  7. Study Quantifies Physical Demands of Yoga in Seniors

    MedlinePLUS

    ... links Read our disclaimer about external links Menu Study Quantifies Physical Demands of Yoga in Seniors A recent NCCAM-funded study measured the physical demands associated with seven commonly ...

  8. A NEW METHOD TO QUANTIFY CORE TEMPERATURE INSTABILITY IN RODENTS.

    EPA Science Inventory

    Methods to quantify instability of autonomic systems such as temperature regulation should be important in toxicant and drug safety studies. Stability of core temperature (Tc) in laboratory rodents is susceptible to a variety of stimuli. Calculating the temperature differential o...

  9. 2005 Special issue Quantifying information and performance for flash detection

    E-print Network

    Maryland at College Park, University of

    2005 Special issue Quantifying information and performance for flash detection in the blowfly reserved. Keywords: Blowfly photoreceptor; Biophysical model; Ideal observer analysis; Flash detection, the performance levels are near fundamental physical limits (Bialek, 1987). Nowhere is evolutionary pressure

  10. Quantifying Environmental Limiting Factors on Tree Cover Using Geospatial Data

    PubMed Central

    Greenberg, Jonathan A.; Santos, Maria J.; Dobrowski, Solomon Z.; Vanderbilt, Vern C.; Ustin, Susan L.

    2015-01-01

    Environmental limiting factors (ELFs) are the thresholds that determine the maximum or minimum biological response for a given suite of environmental conditions. We asked the following questions: 1) Can we detect ELFs on percent tree cover across the eastern slopes of the Lake Tahoe Basin, NV? 2) How are the ELFs distributed spatially? 3) To what extent are unmeasured environmental factors limiting tree cover? ELFs are difficult to quantify as they require significant sample sizes. We addressed this by using geospatial data over a relatively large spatial extent, where the wall-to-wall sampling ensures the inclusion of rare data points which define the minimum or maximum response to environmental factors. We tested mean temperature, minimum temperature, potential evapotranspiration (PET) and PET minus precipitation (PET-P) as potential limiting factors on percent tree cover. We found that the study area showed system-wide limitations on tree cover, and each of the factors showed evidence of being limiting on tree cover. However, only 1.2% of the total area appeared to be limited by the four (4) environmental factors, suggesting other unmeasured factors are limiting much of the tree cover in the study area. Where sites were near their theoretical maximum, non-forest sites (tree cover < 25%) were primarily limited by cold mean temperatures, open-canopy forest sites (tree cover between 25% and 60%) were primarily limited by evaporative demand, and closed-canopy forests were not limited by any particular environmental factor. The detection of ELFs is necessary in order to fully understand the width of limitations that species experience within their geographic range. PMID:25692604

  11. Quantifying peculiarity of cluster galaxies and their kinematic features

    NASA Astrophysics Data System (ADS)

    Oh, Sree; Jeong, Hyunjin; Sheen, Yun-Kyeong; Yi, Sukyoung

    2016-01-01

    Galaxy morphology involves complex effects from both secular and non-secular evolution of galaxies. Although it is a final product of galaxy evolution, it gives a clue to the processes that the galaxy suffer. Galaxy clusters are the sites where the most massive galaxies are found, and so the most dramatic merger histories are embedded. Our extra-ordinary deep (?r ~ 28 mag/''2) imaging of Abell 119 at z = 0.044 using a Blanco 4-m telescope at CTIO enable us to detect low surface brightness features, and we found post-merger signatures for 25% of red-sequence galaxies in the clusters suggesting that so many galaxies even in clusters have gone through galaxy mergers at recent epochs. We quantified the degree of peculiarity of morphology utilizing residual lights from model subtracted images to pin down the merger frequency in cluster environments more objectively. With our technique we measured the degree of features which in turn allow us to extract the details of the merger properties, such as the galaxy mass ratios and the merger frequency. We went further to understand the impact of galaxy mergers in cluster environment using the SAMI Integral Field Unit on the galaxies of Abell 119 and found that half of galaxies related to mergers show misalignment in the angle between the photometric major and the rotation axes, and most of them show complex kinematic features. Our research on quantification of merger features through deep imaging help us to understand the merger history of cluster galaxies, and we present our understanding of galaxy mergers in cluster environment from the perspective of kinematics.

  12. Quantifying Differential Rotation Across the Main Sequence

    NASA Astrophysics Data System (ADS)

    Ule, Nicholas M.

    We have constructed a sample of eight stars from the Kepler field covering a broad range of spectral types, from F7 to K3. These stars have well defined rotation rates and show evidence of differential rotation in their lightcurves. In order to robustly determine differential rotation the inclination of a star must first be known. Thus, we have obtained moderate resolution spectra of these targets and obtained their radial velocities (v sin i), which is then used to determine inclinations. The photometric variations often seen in stars are created by star spots which we model in order to determine differential rotation. We have adapted the starspotz model developed by Croll (2006) with an asexual genetic algorithm to measure the strength of differential rotation (described with the parameter k). The photometric data was broken into 167 segments which were modeled for 6--8 values of k, with each model producing 50,000+ solutions. The value of k with a solution which produced the closest fit to the data was determined to be the most correct value of k for that lightcurve segment. With this data we also performed signal analysis which indicated the presence of long lived, latitudinally dependant active regions on stars. For our eight targets we successfully determined differential rotation rates and evaluated those values in relation to stellar temperature and rotational period. Coupled with previously published values for nine additional targets we find no temperature relation with differential rotation, but we do find a strong trend with rotation rates.

  13. Quantifying the Consistency of Scientific Databases

    PubMed Central

    Šubelj, Lovro; Bajec, Marko; Mileva Boshkoska, Biljana; Kastrin, Andrej; Levnaji?, Zoran

    2015-01-01

    Science is a social process with far-reaching impact on our modern society. In recent years, for the first time we are able to scientifically study the science itself. This is enabled by massive amounts of data on scientific publications that is increasingly becoming available. The data is contained in several databases such as Web of Science or PubMed, maintained by various public and private entities. Unfortunately, these databases are not always consistent, which considerably hinders this study. Relying on the powerful framework of complex networks, we conduct a systematic analysis of the consistency among six major scientific databases. We found that identifying a single "best" database is far from easy. Nevertheless, our results indicate appreciable differences in mutual consistency of different databases, which we interpret as recipes for future bibliometric studies. PMID:25984946

  14. Quantifying viruses and bacteria in wastewater - results, quality control, and interpretation methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Membrane bioreactors (MBR), used for wastewater treatment in Ohio and elsewhere in the United States, have pore sizes large enough to theoretically reduce concentrations of protozoa and bacteria, but not viruses. Sampling for viruses in wastewater is seldom done and not required. Instead, the bac...

  15. Quantifying uncertainties in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Patlakas, Platon; Galanis, George; Kallos, George

    2015-04-01

    The constant rise of wind energy production and the subsequent penetration in global energy markets during the last decades resulted in new sites selection with various types of problems. Such problems arise due to the variability and the uncertainty of wind speed. The study of the wind speed distribution lower and upper tail may support the quantification of these uncertainties. Such approaches focused on extreme wind conditions or periods below the energy production threshold are necessary for a better management of operations. Towards this direction, different methodologies are presented for the credible evaluation of potential non-frequent/extreme values for these environmental conditions. The approaches used, take into consideration the structural design of the wind turbines according to their lifespan, the turbine failures, the time needed for repairing as well as the energy production distribution. In this work, a multi-parametric approach for studying extreme wind speed values will be discussed based on tools of Extreme Value Theory. In particular, the study is focused on extreme wind speed return periods and the persistence of no energy production based on a weather modeling system/hind cast/10-year dataset. More specifically, two methods (Annual Maxima and Peaks Over Threshold) were used for the estimation of extreme wind speeds and their recurrence intervals. Additionally, two different methodologies (intensity given duration and duration given intensity, both based on Annual Maxima method) were implied to calculate the extreme events duration, combined with their intensity as well as the event frequency. The obtained results prove that the proposed approaches converge, at least on the main findings, for each case. It is also remarkable that, despite the moderate wind speed climate of the area, several consequent days of no energy production are observed.

  16. Quantifying selective linear erosion in Antarctica

    NASA Astrophysics Data System (ADS)

    Balco, G.; Shuster, D. L.

    2012-12-01

    David Sugden (1978) coined the term 'selective linear erosion' to describe landscapes, characteristic of high-latitude glaciated areas, that are distinguished by deep glacially excavated troughs separated by low-relief upland surfaces that show no evidence of glacial erosion. Sugden (and later researchers) proposed that this landscape form owed its existence to the thermal distribution within polar ice sheets: ice at high elevations is thin, frozen to its bed, and therefore protects rather than erodes the landscape; thicker ice in topographic depressions can sustain basal melting with consequent erosion by hydraulic and thermodynamic processes. This contrast in basal thermal regime implies an extreme contrast in erosion rates, which amplifies preexisting relief and gives rise to landscapes of selective linear erosion. These landscapes are currently exposed in formerly glaciated high-latitude regions of the northern continents. They also exist beneath the Antarctic ice sheets, where presumably the processes responsible for their formation are currently active. Here we argue that understanding how and when these landscapes form is important to understanding how ice sheets mediate climate-landscape interactions. However, the facts that: i) the processes in question occur beneath the modern Antarctic ice sheet, and ii) currently unglaciated portions of glacier troughs in Arctic and Antarctic landscapes are nearly universally submerged, present several challenges to attaining this understanding. Here we summarize geochemical and geochronological means of addressing these challenges. These include: first, cosmogenic-nuclide measurements that establish the Plio-Pleistocene erosion history of high-elevation plateau surfaces; second, thermochronometric observations on debris shed by glaciers occupying major troughs that provide information about when and how fast these troughs formed.

  17. Worldwide trends show oropharyngeal cancer rate increasing

    Cancer.gov

    DCEG scientists report that the incidence of oropharyngeal cancer significantly increased in countries that are economically developed, during the period 1983-2002. The results of this study appeared online in the Journal of Clinical Oncology, on November 18, 2013.

  18. Virtual environment to quantify the influence of colour stimuli on the performance of tasks requiring attention

    PubMed Central

    2011-01-01

    Background Recent studies indicate that the blue-yellow colour discrimination is impaired in ADHD individuals. However, the relationship between colour and performance has not been investigated. This paper describes the development and the testing of a virtual environment that is capable to quantify the influence of red-green versus blue-yellow colour stimuli on the performance of people in a fun and interactive way, being appropriate for the target audience. Methods An interactive computer game based on virtual reality was developed to evaluate the performance of the players. The game's storyline was based on the story of an old pirate who runs across islands and dangerous seas in search of a lost treasure. Within the game, the player must find and interpret the hints scattered in different scenarios. Two versions of this game were implemented. In the first, hints and information boards were painted using red and green colours. In the second version, these objects were painted using blue and yellow colours. For modelling, texturing, and animating virtual characters and objects the three-dimensional computer graphics tool Blender 3D was used. The textures were created with the GIMP editor to provide visual effects increasing the realism and immersion of the players. The games were tested on 20 non-ADHD volunteers who were divided into two subgroups (A1 and A2) and 20 volunteers with ADHD who were divided into subgroups B1 and B2. Subgroups A1 and B1 used the first version of the game with the hints painted in green-red colors, and subgroups A2 and B2 the second version using the same hints now painted in blue-yellow. The time spent to complete each task of the game was measured. Results Data analyzed with ANOVA two-way and posthoc TUKEY LSD showed that the use of blue/yellow instead of green/red colors decreased the game performance of all participants. However, a greater decrease in performance could be observed with ADHD participants where tasks, that require attention, were most affected. Conclusions The game proved to be a user-friendly tool capable to detect and quantify the influence of color on the performance of people executing tasks that require attention and showed to be attractive for people with ADHD. PMID:21854630

  19. Can satellite-derived aerosol optical depth quantify the surface aerosol radiative forcing?

    NASA Astrophysics Data System (ADS)

    Xu, Hui; Ceamanos, Xavier; Roujean, Jean-Louis; Carrer, Dominique; Xue, Yong

    2014-12-01

    Aerosols play an important role in the climate of the Earth through aerosol radiative forcing (ARF). Nowadays, aerosol particles are detected, quantified and monitored by remote sensing techniques using low Earth orbit (LEO) and geostationary (GEO) satellites. In the present article, the use of satellite-derived AOD (aerosol optical depth) products is investigated in order to quantify on a daily basis the ARF at the surface level (SARF). By daily basis we mean that an average SARF value is computed every day based upon the available AOD satellite measurements for each station. In the first part of the study, the performance of four state-of-art different AOD products (MODIS-DT, MODIS-DB, MISR, and SEVIRI) is assessed through comparison against ground-based AOD measurements from 24 AERONET stations located in Europe and Africa during a 6-month period. While all AOD products are found to be comparable in terms of measured value (RMSE of 0.1 for low and average AOD values), a higher number of AOD estimates is made available by GEO satellites due to their enhanced frequency of scan. Experiments show a general lower agreement of AOD estimates over the African sites (RMSE of 0.2), which show the highest aerosol concentrations along with the occurrence of dust aerosols, coarse particles, and bright surfaces. In the second part of this study, the lessons learned about the confidence in aerosol burden derived from satellites are used to estimate SARF under clear sky conditions. While the use of AOD products issued from GEO observations like SEVIRI brings improvement in the SARF estimates with regard to LEO-based AOD products, the resulting absolute bias (13 W/m2 in average when AERONET AOD is used as reference) is judged to be still high in comparison with the average values of SARF found in this study (from - 25 W/m2 to - 43 W/m2) and also in the literature (from - 10 W/m2 to - 47 W/m2).

  20. Rapidly quantifying the relative distention of a human bladder

    NASA Technical Reports Server (NTRS)

    Companion, John A. (inventor); Heyman, Joseph S. (inventor); Mineo, Beth A. (inventor); Cavalier, Albert R. (inventor); Blalock, Travis N. (inventor)

    1989-01-01

    A device and method of rapidly quantifying the relative distention of the bladder in a human subject are disclosed. The ultrasonic transducer which is positioned on the subject in proximity to the bladder is excited by a pulser under the command of a microprocessor to launch an acoustic wave into the patient. This wave interacts with the bladder walls and is reflected back to the ultrasonic transducer, when it is received, amplified and processed by the receiver. The resulting signal is digitized by an analog-to-digital converter under the command of the microprocessor and is stored in the data memory. The software in the microprocessor determines the relative distention of the bladder as a function of the propagated ultrasonic energy; and based on programmed scientific measurements and individual, anatomical, and behavioral characterists of the specific subject as contained in the program memory, sends out a signal to turn on any or all of the audible alarm, the visible alarm, the tactile alarm, and the remote wireless alarm.

  1. Quantum Process Tomography Quantifies Coherence Transfer Dynamics in Vibrational Exciton

    PubMed Central

    Chuntonov, Lev; Ma, Jianqiang

    2013-01-01

    Quantum coherence has been a subject of great interest in many scientific disciplines. However, detailed characterization of the quantum coherence in molecular systems, especially its transfer and relaxation mechanisms, still remains a major challenge. The difficulties arise in part because the spectroscopic signatures of the coherence transfer are typically overwhelmed by other excitation relaxation processes. We use quantum process tomography (QPT) via two-dimensional infrared spectroscopy to quantify the rate of the elusive coherence transfer between two vibrational exciton states. QPT retrieves the dynamics of the dissipative quantum system directly from the experimental observables. It thus serves as an experimental alternative to theoretical models of the system-bath interaction, and can be used to validate these theories. Our results for coupled carbonyl groups of a diketone molecule in chloroform, used as a benchmark system, reveal the non-secular nature of the interaction between the exciton and the Markovian bath and open the door for the systematic studies of the dissipative quantum systems dynamics in detail. PMID:24079417

  2. Quantifying Snow Volume Uncertainty from Repeat Terrestrial Laser Scanning Observations

    NASA Astrophysics Data System (ADS)

    Gadomski, P. J.; Hartzell, P. J.; Finnegan, D. C.; Glennie, C. L.; Deems, J. S.

    2014-12-01

    Terrestrial laser scanning (TLS) systems are capable of providing rapid, high density, 3D topographic measurements of snow surfaces from increasing standoff distances. By differencing snow surface with snow free measurements within a common scene, snow depths and volumes can be estimated. These data can support operational water management decision-making when combined with measured or modeled snow densities to estimate basin water content, evaluate in-situ data, or drive operational hydrologic models. In addition, change maps from differential TLS scans can also be used to support avalanche control operations to quantify loading patterns for both pre-control planning and post-control assessment. However, while methods for computing volume from TLS point cloud data are well documented, a rigorous quantification of the volumetric uncertainty has yet to be presented. Using repeat TLS data collected at the Arapahoe Basin Ski Area in Summit County, Colorado, we demonstrate the propagation of TLS point measurement and cloud registration uncertainties into 3D covariance matrices at the point level. The point covariances are then propagated through a volume computation to arrive at a single volume uncertainty value. Results from two volume computation methods are compared and the influence of data voids produced by occlusions examined.

  3. Rapidly quantifying the relative distention of a human bladder

    NASA Technical Reports Server (NTRS)

    Companion, John A. (inventor); Heyman, Joseph S. (inventor); Mineo, Beth A. (inventor); Cavalier, Albert R. (inventor); Blalock, Travis N. (inventor)

    1991-01-01

    A device and method was developed to rapidly quantify the relative distention of the bladder of a human subject. An ultrasonic transducer is positioned on the human subject near the bladder. A microprocessor controlled pulser excites the transducer by sending an acoustic wave into the human subject. This wave interacts with the bladder walls and is reflected back to the ultrasonic transducer where it is received, amplified, and processed by the receiver. The resulting signal is digitized by an analog to digital converter, controlled by the microprocessor again, and is stored in data memory. The software in the microprocessor determines the relative distention of the bladder as a function of the propagated ultrasonic energy. Based on programmed scientific measurements and the human subject's past history as contained in program memory, the microprocessor sends out a signal to turn on any or all of the available alarms. The alarm system includes and audible alarm, the visible alarm, the tactile alarm, and the remote wireless alarm.

  4. Quantifying hurricane-induced coastal changes using topographic lidar

    USGS Publications Warehouse

    Sallenger, Asbury H., Jr.; Krabill, William; Swift, Robert; Brock, John

    2001-01-01

    USGS and NASA are investigating the impacts of hurricanes on the United States East and Gulf of Mexico coasts with the ultimate objective of improving predictive capabilities. The cornerstone of our effort is to use topographic lidar to acquire pre- and post-storm topography to quantify changes to beaches and dunes. With its rapidity of acquisition and very high density, lidar is revolutionizing the. quantification of storm-induced coastal change. Lidar surveys have been acquired for the East and Gulf coasts to serve as pre-storm baselines. Within a few days of a hurricane landfall anywhere within the study area, the impacted area will be resurveyed to detect changes. For example, during 1999, Hurricane Dennis impacted the northern North Carolina coast. Along a 70-km length of coast between Cape Hatteras and Oregon Inlet, there was large variability in the types of impacts including overwash, dune erosion, dune stability, and even accretion at the base of dunes. These types of impacts were arranged in coherent patterns that repeated along the coast over scales of tens of kilometers. Preliminary results suggest the variability is related to the influence of offshore shoals that induce longshore gradients in wave energy by wave refraction.

  5. Cardiovascular regulation during sleep quantified by symbolic coupling traces

    NASA Astrophysics Data System (ADS)

    Suhrbier, A.; Riedl, M.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.

    2010-12-01

    Sleep is a complex regulated process with short periods of wakefulness and different sleep stages. These sleep stages modulate autonomous functions such as blood pressure and heart rate. The method of symbolic coupling traces (SCT) is used to analyze and quantify time-delayed coupling of these measurements during different sleep stages. The symbolic coupling traces, defined as the symmetric and diametric traces of the bivariate word distribution matrix, allow the quantification of time-delayed coupling. In this paper, the method is applied to heart rate and systolic blood pressure time series during different sleep stages for healthy controls as well as for normotensive and hypertensive patients with sleep apneas. Using the SCT, significant different cardiovascular mechanisms not only between the deep sleep and the other sleep stages but also between healthy subjects and patients can be revealed. The SCT method is applied to model systems, compared with established methods, such as cross correlation, mutual information, and cross recurrence analysis and demonstrates its advantages especially for nonstationary physiological data. As a result, SCT proves to be more specific in detecting delays of directional interactions than standard coupling analysis methods and yields additional information which cannot be measured by standard parameters of heart rate and blood pressure variability. The proposed method may help to indicate the pathological changes in cardiovascular regulation and also the effects of continuous positive airway pressure therapy on the cardiovascular system.

  6. A diagnostic for quantifying heat flux from a thermite spray

    SciTech Connect

    E. P. Nixon; M. L. Pantoya; D. J. Prentice; E. D. Steffler; M. A. Daniels; S. P. D'Arche

    2010-02-01

    Characterizing the combustion behaviors of energetic materials requires diagnostic tools that are often not readily or commercially available. For example, a jet of thermite spray provides a high temperature and pressure reaction that can also be highly corrosive and promote undesirable conditions for the survivability of any sensor. Developing a diagnostic to quantify heat flux from a thermite spray is the objective of this study. Quick response sensors such as thin film heat flux sensors cannot survive the harsh conditions of the spray, but more rugged sensors lack the response time for the resolution desired. A sensor that will allow for adequate response time while surviving the entire test duration was constructed. The sensor outputs interior temperatures of the probes at known locations and utilizes an inverse heat conduction code to calculate heat flux values. The details of this device are discussed and illustrated. Temperature and heat flux measurements of various thermite sprays are reported. Results indicate that this newly designed heat flux sensor provides quantitative data with good repeatability suitable for characterizing energetic material combustion.

  7. Quantifying the relationship between visual salience and visual importance

    NASA Astrophysics Data System (ADS)

    Wang, Junle; Chandler, Damon M.; Le Callet, Patrick

    2010-02-01

    This paper presents the results of two psychophysical experiments and an associated computational analysis designed to quantify the relationship between visual salience and visual importance. In the first experiment, importance maps were collected by asking human subjects to rate the relative visual importance of each object within a database of hand-segmented images. In the second experiment, experimental saliency maps were computed from visual gaze patterns measured for these same images by using an eye-tracker and task-free viewing. By comparing the importance maps with the saliency maps, we found that the maps are related, but perhaps less than one might expect. When coupled with the segmentation information, the saliency maps were shown to be effective at predicting the main subjects. However, the saliency maps were less effective at predicting the objects of secondary importance and the unimportant objects. We also found that the vast majority of early gaze position samples (0-2000 ms) were made on the main subjects, suggesting that a possible strategy of early visual coding might be to quickly locate the main subject(s) in the scene.

  8. Quantifying characters: polygenist anthropologists and the hardening of heredity.

    PubMed

    Hume, Brad D

    2008-01-01

    Scholars studying the history of heredity suggest that during the 19th-century biologists and anthropologists viewed characteristics as a collection of blended qualities passed on from the parents. Many argued that those characteristics could be very much affected by environmental circumstances, which scholars call the inheritance of acquired characteristics or "soft" heredity. According to these accounts, Gregor Mendel reconceived heredity--seeing distinct hereditary units that remain unchanged by the environment. This resulted in particular traits that breed true in succeeding generations, or "hard" heredity. The author argues that polygenist anthropology (an argument that humanity consisted of many species) and anthropometry in general should be seen as a hardening of heredity. Using a debate between Philadelphia anthropologist and physician, Samuel G. Morton, and Charleston naturalist and reverend, John Bachman, as a springboard, the author contends that polygenist anthropologists hardened heredity by conceiving of durable traits that might reappear even after a race has been eliminated. Polygenists saw anthropometry (the measurement of humans) as one method of quantifying hereditary qualities. These statistical ranges were ostensibly characteristics that bred true and that defined racial groups. Further, Morton's interest in hybridity and racial mixing demonstrates that the polygenists focused as much on the transmission and recognition of "amalgamations" of characters as they did on racial categories themselves. The author suggests that seeing race science as the study of heritable, statistical characteristics rather than broad categories helps explain why "race" is such a persistent cultural phenomenon. PMID:19048797

  9. Quantifying molecular oxygen isotope variations during a Heinrich stadial

    NASA Astrophysics Data System (ADS)

    Reutenauer, C.; Landais, A.; Blunier, T.; Bréant, C.; Kageyama, M.; Woillez, M.-N.; Risi, C.; Mariotti, V.; Braconnot, P.

    2015-11-01

    ?18O of atmospheric oxygen (?18Oatm) undergoes millennial-scale variations during the last glacial period, and systematically increases during Heinrich stadials (HSs). Changes in ?18Oatm combine variations in biospheric and water cycle processes. The identification of the main driver of the millennial variability in ?18Oatm is thus not straightforward. Here, we quantify the response of ?18Oatm to such millennial events using a freshwater hosing simulation performed under glacial boundary conditions. Our global approach takes into account the latest estimates of isotope fractionation factor for respiratory and photosynthetic processes and make use of atmospheric water isotope and vegetation changes. Our modeling approach allows to reproduce the main observed features of a HS in terms of climatic conditions, vegetation distribution and ?18O of precipitation. We use it to decipher the relative importance of the different processes behind the observed changes in ?18Oatm. The results highlight the dominant role of hydrology on ?18Oatm and confirm that ?18Oatm can be seen as a global integrator of hydrological changes over vegetated areas.

  10. Quantifying interictal metabolic activity in human temporal lobe epilepsy

    SciTech Connect

    Henry, T.R.; Mazziotta, J.C.; Engel, J. Jr.; Christenson, P.D.; Zhang, J.X.; Phelps, M.E.; Kuhl, D.E. )

    1990-09-01

    The majority of patients with complex partial seizures of unilateral temporal lobe origin have interictal temporal hypometabolism on (18F)fluorodeoxyglucose positron emission tomography (FDG PET) studies. Often, this hypometabolism extends to ipsilateral extratemporal sites. The use of accurately quantified metabolic data has been limited by the absence of an equally reliable method of anatomical analysis of PET images. We developed a standardized method for visual placement of anatomically configured regions of interest on FDG PET studies, which is particularly adapted to the widespread, asymmetric, and often severe interictal metabolic alterations of temporal lobe epilepsy. This method was applied by a single investigator, who was blind to the identity of subjects, to 10 normal control and 25 interictal temporal lobe epilepsy studies. All subjects had normal brain anatomical volumes on structural neuroimaging studies. The results demonstrate ipsilateral thalamic and temporal lobe involvement in the interictal hypometabolism of unilateral temporal lobe epilepsy. Ipsilateral frontal, parietal, and basal ganglial metabolism is also reduced, although not as markedly as is temporal and thalamic metabolism.

  11. QUANTIFYING KINEMATIC SUBSTRUCTURE IN THE MILKY WAY'S STELLAR HALO

    SciTech Connect

    Xue Xiangxiang; Zhao Gang; Luo Ali; Rix, Hans-Walter; Bell, Eric F.; Koposov, Sergey E.; Kang, Xi; Liu, Chao; Yanny, Brian; Beers, Timothy C.; Lee, Young Sun; Bullock, James S.; Johnston, Kathryn V.; Morrison, Heather; Rockosi, Constance

    2011-09-01

    We present and analyze the positions, distances, and radial velocities for over 4000 blue horizontal-branch (BHB) stars in the Milky Way's halo, drawn from SDSS DR8. We search for position-velocity substructure in these data, a signature of the hierarchical assembly of the stellar halo. Using a cumulative 'close pair distribution' as a statistic in the four-dimensional space of sky position, distance, and velocity, we quantify the presence of position-velocity substructure at high statistical significance among the BHB stars: pairs of BHB stars that are close in position on the sky tend to have more similar distances and radial velocities compared to a random sampling of these overall distributions. We make analogous mock observations of 11 numerical halo formation simulations, in which the stellar halo is entirely composed of disrupted satellite debris, and find a level of substructure comparable to that seen in the actually observed BHB star sample. This result quantitatively confirms the hierarchical build-up of the stellar halo through a signature in phase (position-velocity) space. In detail, the structure present in the BHB stars is somewhat less prominent than that seen in most simulated halos, quite possibly because BHB stars represent an older sub-population. BHB stars located beyond 20 kpc from the Galactic center exhibit stronger substructure than at r{sub gc} < 20 kpc.

  12. Quantifying the abnormal hemodynamics of sickle cell anemia

    NASA Astrophysics Data System (ADS)

    Lei, Huan; Karniadakis, George

    2012-02-01

    Sickle red blood cells (SS-RBC) exhibit heterogeneous morphologies and abnormal hemodynamics in deoxygenated states. A multi-scale model for SS-RBC is developed based on the Dissipative Particle Dynamics (DPD) method. Different cell morphologies (sickle, granular, elongated shapes) typically observed in deoxygenated states are constructed and quantified by the Asphericity and Elliptical shape factors. The hemodynamics of SS-RBC suspensions is studied in both shear and pipe flow systems. The flow resistance obtained from both systems exhibits a larger value than the healthy blood flow due to the abnormal cell properties. Moreover, SS-RBCs exhibit abnormal adhesive interactions with both the vessel endothelium cells and the leukocytes. The effect of the abnormal adhesive interactions on the hemodynamics of sickle blood is investigated using the current model. It is found that both the SS-RBC - endothelium and the SS-RBC - leukocytes interactions, can potentially trigger the vicious ``sickling and entrapment'' cycles, resulting in vaso-occlusion phenomena widely observed in micro-circulation experiments.

  13. Map showing depth to bedrock, Anchorage, Alaska

    USGS Publications Warehouse

    Glass, R.L.

    1988-01-01

    Knowledge of the physical and hydrologic characteristics of geologic materials is useful in determining the availability of groundwater for public and domestic supply and the suitability of areas for on-site septic systems. A generalized map of the Anchorage area shows the approximate distance from land surface to the top of the bedrock surface. Four depth zones are shown. The depths were determined from lithologic data contained in drillers ' logs. (USGS)

  14. Malignant melanoma showing smooth muscle differentiation.

    PubMed Central

    Banerjee, S S; Bishop, P W; Nicholson, C M; Eyden, B P

    1996-01-01

    A unique case of a metastatic non-desmoplastic sarcomatoid malignant melanoma in an axillary lymph node showing smooth muscle differentiation in a 54 year old woman is described. The tumour cells exhibited alpha-smooth muscle actin, HHF-35 and desmin positivity but were negative for S100 protein and HMB-45. Ultrastructural examination revealed smooth muscle phenotype and there was no evidence of myofibroblastic differentiation, a feature described previously in desmoplastic melanomas. Images PMID:8944620

  15. Jackson County Fed Cattle Show Rules

    E-print Network

    Jawitz, James W.

    . The 1219-pound steer was the winner of Class 4; Case was awarded the W. H. Neel Award and won the Senior. #12;37th Annual Jackson County Market Steer Show & Sale PROGRAM Monday, February 15, 2016 4 P.M. ­ 6 Steer 4:30 ­ 6:30 P.M. Barbeque Dinner 6:15 P.M. All persons receiving awards meet at the Sale Ring 6

  16. New map of Io shows an otherworldly

    E-print Network

    Rhoads, James

    AnEtArium shoWs Wednesday evenings @ 7 p.m. Astronomy PubliC lECturEs James Webb Telescope (Prof. Windhorst), April 13 @ 7 p.m. [Cover]. NASA's Spitzer Space Telescope has detected the solid form of buckyballs consisting of stacked buckyballs around a pair of stars called "XX Ophiuchi" SESE BuckyBAlls in spAce New

  17. Mercury's Core Molten, Radar Study Shows

    NASA Astrophysics Data System (ADS)

    2007-05-01

    Scientists using a high-precision planetary radar technique for the first time have discovered that the innermost planet Mercury probably has a molten core, resolving a mystery of more than three decades. The discovery, which used the National Science Foundation's Robert C. Byrd Green Bank Telescope in West Virginia and Arecibo Observatory in Puerto Rico, and NASA/Jet Propulsion Laboratory antennas in California, is an important step toward a better understanding of how planets form and evolve. Planetary Radar High-precision planetary radar technique sent signal to Mercury, received reflection. CREDIT: Bill Saxton, NRAO/AUI/NSF Click on image for high-resolution file (447 KB) "For a long time it was thought we'd have to land spacecraft on Mercury to learn if its core is solid or molten. Now we've answered that question using ground-based telescopes," said Jean-Luc Margot, of Cornell University, leader of the research team, which published its results in the May 4 issue of the journal Science. Mercury is one of the least-understood of the planets in our Solar System. Its distance from the Sun is just over one-third that of the Earth, and it contains a mass just 5½ percent that of Earth. Only about half of Mercury's surface has been photographed by a spacecraft, Mariner 10, back in 1974. Mariner 10 also discovered that Mercury has a weak magnetic field, about one percent as strong as Earth's. That discovery spurred a scientific debate about the planet's core. Scientists normally expect a rocky planet's magnetic field to be caused by an electromagnetic dynamo in a molten core. However, Mercury is so small that most scientists expected its core to have cooled and solidified long ago. Those scientists speculated that the magnetic field seen today may have been "frozen" into the planet when the core cooled. "Whether the core is molten or solid today depends greatly on the chemical composition of the core. That chemical composition can provide important clues about the processes involved in planet formation," Margot said. To answer the question, the scientists implemented an ingenious, high-precision technique in which they sent a powerful beam of radio waves to bounce off Mercury, then received and analyzed the reflected signal using pairs of ground-based radio telescopes. While similar radar systems have been used in the past to map planetary surfaces, this technique instead measured the rate at which Mercury spins on its axis, and did so with an unprecedented precision of one part in 100,000. By making 21 separate observations, the research team was able to measure minute variations in the planet's spin rate. This was the key to learning whether Mercury's core is solid or molten. Using an understanding of the Sun's gravitational effect on the planet, they realized that the tiny variations in its spin rate would be twice as large if the core is liquid than they would be if Mercury has a solid core. "The variations in Mercury's spin rate that we measured are best explained by a core that is at least partially molten. We have a 95 percent confidence level in this conclusion," Margot said. For most of their observations, carried out from 2002-2006, the scientists transmitted a powerful radar beam from the NASA/JPL 70-meter antenna at Goldstone, California, and received the reflected signal with the Green Bank Telescope and the Goldstone antenna. For some observations, they transmitted from the Arecibo Observatory in Puerto Rico and received at Arecibo and two Goldstone antennas. 
They used radar signals at frequencies of 8.5 and 2.4 GHz. To make the precision measurements of Mercury's spin rate, the geometry between the planet and the receiving antennas had to match a specific alignment. Such an alignment only occurs for about 20 seconds a day. In addition to measuring Mercury's spin rate, their technique also made the best measurement ever of the alignment of the planet's axis of rotation. "We improved the accuracy of this measurement by 100 times, and showed that Mercury's spin axis

  18. Quantifying protein–protein interactions in high throughput using protein domain microarrays

    PubMed Central

    Kaushansky, Alexis; Allen, John E; Gordus, Andrew; Stiffler, Michael A; Karp, Ethan S; Chang, Bryan H; MacBeath, Gavin

    2011-01-01

    Protein microarrays provide an efficient way to identify and quantify protein–protein interactions in high throughput. One drawback of this technique is that proteins show a broad range of physicochemical properties and are often difficult to produce recombinantly. To circumvent these problems, we have focused on families of protein interaction domains. Here we provide protocols for constructing microarrays of protein interaction domains in individual wells of 96-well microtiter plates, and for quantifying domain–peptide interactions in high throughput using fluorescently labeled synthetic peptides. As specific examples, we will describe the construction of microarrays of virtually every human Src homology 2 (SH2) and phosphotyrosine binding (PTB) domain, as well as microarrays of mouse PDZ domains, all produced recombinantly in Escherichia coli. For domains that mediate high-affinity interactions, such as SH2 and PTB domains, equilibrium dissociation constants (KDs) for their peptide ligands can be measured directly on arrays by obtaining saturation binding curves. For weaker binding domains, such as PDZ domains, arrays are best used to identify candidate interactions, which are then retested and quantified by fluorescence polarization. Overall, protein domain microarrays provide the ability to rapidly identify and quantify protein–ligand interactions with minimal sample consumption. Because entire domain families can be interrogated simultaneously, they provide a powerful way to assess binding selectivity on a proteome-wide scale and provide an unbiased perspective on the connectivity of protein–protein interaction networks. PMID:20360771
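
    As a companion to the saturation-binding measurements described above, the sketch below fits a one-site binding model, F = Fmax·[L]/(KD + [L]), to hypothetical fluorescence readings from a single array spot titrated with a labeled peptide. The concentrations, intensities, and the use of SciPy's curve_fit are assumptions for illustration only, not the authors' pipeline.

```python
# Hedged sketch: fitting a one-site saturation binding curve, F = Fmax * [L] / (Kd + [L]),
# to hypothetical fluorescence readings from a domain-microarray titration.
# The concentrations and intensities below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def one_site_binding(conc_nM, f_max, kd_nM):
    """Single-site binding model: fluorescence versus free ligand concentration."""
    return f_max * conc_nM / (kd_nM + conc_nM)

# Hypothetical labeled-peptide concentrations (nM) and background-subtracted spot intensities.
conc = np.array([5, 10, 25, 50, 100, 250, 500, 1000, 2500, 5000], dtype=float)
intensity = np.array([310, 600, 1250, 2100, 3200, 4900, 5900, 6600, 7100, 7300], dtype=float)

# Fit Fmax and Kd; initial guesses come from the data range.
popt, pcov = curve_fit(one_site_binding, conc, intensity, p0=[intensity.max(), 100.0])
f_max_fit, kd_fit = popt
kd_err = np.sqrt(np.diag(pcov))[1]

print(f"Fitted Kd = {kd_fit:.0f} +/- {kd_err:.0f} nM (Fmax = {f_max_fit:.0f} a.u.)")
```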

  19. Infrared techniques for quantifying protein structural stability.

    PubMed

    Vrettos, John S; Meuse, Curtis W

    2009-07-01

    Biopharmaceutical and biotechnology companies and regulatory agencies require novel methods to determine the structural stabilities of proteins and the integrity of protein-protein, protein-ligand, and protein-membrane interactions that can be applied to a variety of sample states and environments. Infrared spectroscopy is a favorable method for a number of reasons: it is adequately sensitive to minimal sample amounts and is not limited by the molecular weight of the sample; yields spectra that are simple to evaluate; does not require protein modifications, a special supporting matrix, or internal standard; and is applicable to soluble and membrane proteins. In this paper, we investigate the application of infrared spectroscopy to the quantification of protein structural stability by measuring the extent of amide hydrogen/deuterium exchange in buffers containing D(2)O for proteins in solution and interacting with ligands and lipid membranes. We report the thermodynamic stability of several protein preparations, including chick egg-white lysozyme, trypsin bound by benzamidine inhibitors, and cytochrome c interacting with lipid membranes of varying net-negative surface charge density. The results demonstrate that infrared spectroscopy can be used to compare protein stability as determined by amide hydrogen/deuterium exchange for a variety of cases. PMID:19327337

  20. Quantifying uncertainty in chemical systems modeling.

    SciTech Connect

    Reagan, Matthew T.; Knio, Omar M.; Najm, Habib N.; Ghanem, Roger Georges; Pebay, Philippe Pierre

    2004-09-01

    This study compares two techniques for uncertainty quantification in chemistry computations, one based on sensitivity analysis and error propagation, and the other on stochastic analysis using polynomial chaos techniques. The two constructions are studied in the context of H{sub 2}-O{sub 2} ignition under supercritical-water conditions. They are compared in terms of their prediction of uncertainty in species concentrations and the sensitivity of selected species concentrations to given parameters. The formulation is extended to one-dimensional reacting-flow simulations. The computations are used to study sensitivities to both reaction rate pre-exponentials and enthalpies, and to examine how this information must be evaluated in light of known, inherent parametric uncertainties in simulation parameters. The results indicate that polynomial chaos methods provide similar first-order information to conventional sensitivity analysis, while preserving higher-order information that is needed for accurate uncertainty quantification and for assigning confidence intervals on sensitivity coefficients. These higher-order effects can be significant, as the analysis reveals substantial uncertainties in the sensitivity coefficients themselves.
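
    A minimal sketch of the two propagation strategies compared above, applied to a deliberately simple stand-in model with one Gaussian-uncertain parameter: a non-intrusive polynomial chaos expansion built with Gauss-Hermite quadrature versus plain Monte Carlo sampling. The toy model, parameter values, and expansion order are invented; the study's chemistry and solvers are not reproduced here.

```python
# Hedged sketch: non-intrusive polynomial chaos (probabilists' Hermite basis) versus Monte Carlo
# for propagating a Gaussian uncertainty in one parameter through a toy model.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def model(k):
    """Toy nonlinear response standing in for a species concentration."""
    return np.exp(-0.5 * k) + 0.1 * k**2

mu, sigma = 2.0, 0.3          # uncertain parameter k ~ N(mu, sigma^2), hypothetical values
order = 6                     # PCE order

# Gauss-HermiteE quadrature nodes/weights; normalize weights to the standard normal measure.
nodes, weights = hermegauss(order + 1)
weights = weights / sqrt(2.0 * pi)

# Spectral projection: c_n = E[model(mu + sigma*X) * He_n(X)] / n!
coeffs = []
for n_deg in range(order + 1):
    basis_n = hermeval(nodes, [0] * n_deg + [1])      # He_n evaluated at the quadrature nodes
    c_n = np.sum(weights * model(mu + sigma * nodes) * basis_n) / factorial(n_deg)
    coeffs.append(c_n)

pce_mean = coeffs[0]
pce_var = sum(factorial(n_deg) * coeffs[n_deg] ** 2 for n_deg in range(1, order + 1))

# Monte Carlo reference for the same uncertain input.
samples = model(np.random.default_rng(0).normal(mu, sigma, 200_000))
print(f"PCE  mean = {pce_mean:.4f}  var = {pce_var:.5f}")
print(f"MC   mean = {samples.mean():.4f}  var = {samples.var():.5f}")
```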

  1. Quantifying 'normal' shoulder muscle activity during abduction.

    PubMed

    Wickham, James; Pizzari, Tania; Stansfeld, Katie; Burnside, Amanda; Watson, Lyn

    2010-04-01

    The purpose of this experiment was to obtain electromyographic (EMG) activity from a sample of healthy shoulders to allow a reference database to be developed and used for comparison with pathological shoulders. Temporal and intensity shoulder muscle activation characteristics during a coronal plane abduction/adduction movement were evaluated in the dominant healthy shoulder of 24 subjects. Surface and intramuscular fine wire electrodes recorded EMG activity from 15 shoulder muscles (deltoid x 3, trapezius x 3, subscapularis x 2, latissimus dorsi, pectoralis major, pectoralis minor, supraspinatus, infraspinatus, serratus anterior and rhomboids) at 2000 Hz for 10 s whilst each subject performed 10 dynamic coronal plane abduction/adduction movements from 0 degrees to 166 degrees to 0 degrees with a light dumbbell. Results revealed that supraspinatus (−0.102 s before movement onset) initiated the movement, with middle trapezius (−0.019 s) and middle deltoid (−0.014 s) also activating before movement onset. Similar patterns were also found in the time of peak amplitude and %MVC, with a pattern emerging where the prime movers (supraspinatus and middle deltoid) were among the first to reach peak amplitude or display the highest %MVC values. In conclusion, the most reproducible patterns of activation arose from the more prime mover muscle sites in all EMG variables analysed and, although variability was present, there emerged 'invariant characteristics' that were considered 'normal' for this group of non-pathological shoulders. The authors believe that the methodology and certain parts of the analysis in this study can be duplicated and used by future researchers who require a reference database of muscle activity for use as a control group in comparisons to their respective pathological shoulder group. PMID:19625195
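
    The sketch below illustrates one common way to extract an onset time like those reported above: rectify the EMG, form a low-pass envelope, and flag the first sample exceeding the baseline mean plus three standard deviations. The synthetic signal, the 10 Hz envelope cut-off, and the 3-SD rule are assumptions for illustration and may differ from the study's actual criteria.

```python
# Hedged sketch: baseline-threshold EMG onset detection relative to movement onset.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2000.0                                    # sampling rate (Hz), as in the study
t = np.arange(-1.0, 1.0, 1.0 / fs)             # time relative to movement onset (s)

rng = np.random.default_rng(1)
emg = 0.02 * rng.standard_normal(t.size)       # synthetic baseline noise
emg[t >= -0.10] += 0.3 * rng.standard_normal((t >= -0.10).sum())   # burst starting near -0.10 s

# Rectify and smooth with a 4th-order, 10 Hz low-pass Butterworth envelope.
b, a = butter(4, 10.0 / (fs / 2.0), btype="low")
envelope = filtfilt(b, a, np.abs(emg))

# Baseline statistics from the earliest part of the record, well before movement onset.
baseline = envelope[t < -0.5]
threshold = baseline.mean() + 3.0 * baseline.std()

onset_idx = np.argmax(envelope > threshold)    # first sample above threshold
print(f"Estimated EMG onset: {t[onset_idx]:.3f} s relative to movement onset")
```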

  2. A model for quantifying uncertainty in the estimation of noise-contaminated measurements of transmissibility

    NASA Astrophysics Data System (ADS)

    Mao, Zhu; Todd, Michael

    2012-04-01

    System identification in the frequency domain is a very important process in many aspects of engineering. Among many forms of frequency domain system identification such as frequency response function analysis and modal decomposition, transmissibility (output-to-output relationship) estimation has been regarded as one of the most practical tools for its clear physical interpretation, its compatibility with output-only data, and its sensitivity to local changes of structural parameters. Due to operational and environmental variability in any real system, quantization and estimation error, and extraneous measurement noise, the computation of transmissibility may contain a significant level of uncertainty and variability, and these sources propagate to degrade system identification quality and in some cases to system mischaracterization. In this paper, the uncertainty of the magnitude of a transmissibility estimator via output auto-power density spectra is quantified, an exact probability density function for the estimates is derived analytically via a Chi-square bivariate approach, and it is validated with Monte Carlo simulation. Validation shows very consistent results between the observed histogram and predicted distribution for different estimation and noise conditions.
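
    A hedged sketch of the quantity analysed above: the transmissibility magnitude estimated from output auto-power spectra, |T(f)| = sqrt(Gy2y2(f)/Gy1y1(f)), recomputed over many noisy realizations to expose its spread. The two-output system, noise level, and Welch settings are invented for illustration; the paper's analytical Chi-square treatment is not reproduced.

```python
# Hedged sketch: Monte Carlo spread of a transmissibility-magnitude estimate under added noise.
import numpy as np
from scipy.signal import welch, lfilter

fs, n = 1024, 8192
rng = np.random.default_rng(0)

def transmissibility_magnitude(noise_std):
    """One noisy realization of |T(f)| = sqrt(G_y2y2 / G_y1y1) from two measured outputs."""
    x = rng.standard_normal(n)                           # common broadband excitation
    y1 = lfilter([1.0], [1.0, -0.6], x)                  # response at sensor 1
    y2 = lfilter([0.5, 0.3], [1.0, -0.6], x)             # response at sensor 2
    y1 = y1 + noise_std * rng.standard_normal(n)         # extraneous measurement noise
    y2 = y2 + noise_std * rng.standard_normal(n)
    f, g11 = welch(y1, fs=fs, nperseg=512)
    _, g22 = welch(y2, fs=fs, nperseg=512)
    return f, np.sqrt(g22 / g11)

f, _ = transmissibility_magnitude(0.2)
trials = np.array([transmissibility_magnitude(0.2)[1] for _ in range(500)])

bin_100hz = np.argmin(np.abs(f - 100.0))                 # inspect a single frequency bin
values = trials[:, bin_100hz]
print(f"|T| near {f[bin_100hz]:.0f} Hz over 500 noisy realizations: "
      f"mean = {values.mean():.3f}, std = {values.std():.3f}")
```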

  3. Quantifying the Economic and Cultural Biases of Social Media through Trending Topics

    PubMed Central

    Carrascosa, Juan Miguel; Cuevas, Ruben; Gonzalez, Roberto; Azcorra, Arturo; Garcia, David

    2015-01-01

    Online social media has recently irrupted as the last major venue for the propagation of news and cultural content, competing with traditional mass media and allowing citizens to access new sources of information. In this paper, we study collectively filtered news and popular content in Twitter, known as Trending Topics (TTs), to quantify the extent to which they show similar biases known for mass media. We use two datasets collected in 2013 and 2014, including more than 300,000 TTs from 62 countries. The existing patterns of leader-follower relationships among countries reveal systemic biases known for mass media: Countries concentrate their attention on small groups of other countries, generating a pattern of centralization in which TTs follow the gradient of wealth across countries. At the same time, we find subjective biases within language communities linked to the cultural similarity of countries, in which countries with closer cultures and shared languages tend to follow each other’s TTs. Moreover, using a novel methodology based on the Google News service, we study the influence of mass media in TTs for four countries. We find that roughly half of the TTs in Twitter overlap with news reported by mass media, and that the rest of TTs are more likely to spread internationally within Twitter. Our results confirm that online social media have the power to independently spread content beyond mass media, but at the same time social media content follows economic incentives and is subject to cultural factors and language barriers. PMID:26230656

  4. Quantifying solar spectral irradiance in aquatic habitats for the assessment of photoenhanced toxicity

    SciTech Connect

    Barron, M.G.; Little, E.E.; Calfee, R.; Diamond, S.

    2000-04-01

    The spectra and intensity of solar radiation (solar spectral irradiance [SSI]) were quantified in selected aquatic habitats in the vicinity of an oil field on the California coast. Solar spectral irradiance measurements consisted of spectral scans and radiometric measurements of ultraviolet (UV): UVB and UVA. Solar spectral irradiance measurements were taken at the surface and at various depths in two marsh ponds, a shallow wetland, an estuary lagoon, and the intertidal area of a high-energy sandy beach. Daily fluctuation in SSI showed a general parabolic relationship with time; maximum SSI was observed at approximate solar noon. Solar spectral irradiance measurements taken at 10-cm depth at approximate solar noon in multiple aquatic habitats exhibited only a twofold variation in visible light and UVA and a 4.5-fold variation in UVB. Visible light ranged from 11,000 to 19,000 {micro}W/cm{sup 2}, UVA ranged from 460 to 1,100 {micro}W/cm{sup 2}, and UVB ranged from 8.4 to 38 {micro}W/cm{sup 2}. In each habitat, the attenuation of light intensity with increasing water depth was differentially affected over specific wavelengths of SSI. The study results allowed the development of environmentally realistic light regimes necessary for photoenhanced toxicity studies.

  5. Quantifying solar spectral irradiance in aquatic habitats for the assessment of photoenhanced toxicity

    USGS Publications Warehouse

    Barron, M.G.; Little, E.E.; Calfee, R.; Diamond, S.

    2000-01-01

    The spectra and intensity of solar radiation (solar spectral irradiance [SSI]) were quantified in selected aquatic habitats in the vicinity of an oil field on the California coast. Solar spectral irradiance measurements consisted of spectral scans (280-700 nm) and radiometric measurements of ultraviolet (UV): UVB (280-320 nm) and UVA (320-400 nm). Solar spectral irradiance measurements were taken at the surface and at various depths in two marsh ponds, a shallow wetland, an estuary lagoon, and the intertidal area of a high-energy sandy beach. Daily fluctuation in SSI showed a general parabolic relationship with time; maximum SSI was observed at approximate solar noon. Solar spectral irradiance measurements taken at 10-cm depth at approximate solar noon in multiple aquatic habitats exhibited only a twofold variation in visible light and UVA and a 4.5-fold variation in UVB. Visible light ranged from 11,000 to 19,000 µW/cm2, UVA ranged from 460 to 1,100 µW/cm2, and UVB ranged from 8.4 to 38 µW/cm2. In each habitat, the attenuation of light intensity with increasing water depth was differentially affected over specific wavelengths of SSI. The study results allowed the development of environmentally realistic light regimes necessary for photoenhanced toxicity studies.
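
    The depth attenuation discussed above is often summarized with a diffuse attenuation coefficient Kd under an exponential-decay model, E(z) = E0·exp(−Kd·z). The sketch below fits Kd to hypothetical UVB readings at several depths; the numbers are invented and are not the study's measurements.

```python
# Hedged sketch: estimating a diffuse attenuation coefficient Kd from irradiance at depth,
# assuming E(z) = E0 * exp(-Kd * z). The UVB readings below are hypothetical.
import numpy as np

depth_m = np.array([0.0, 0.05, 0.10, 0.20, 0.40])            # measurement depths (m)
uvb_uW_cm2 = np.array([38.0, 27.0, 20.0, 10.5, 3.0])         # hypothetical UVB irradiance

# Log-linear least-squares fit: ln(E) = ln(E0) - Kd * z
slope, intercept = np.polyfit(depth_m, np.log(uvb_uW_cm2), 1)
kd = -slope
print(f"Estimated Kd(UVB) = {kd:.1f} per m; surface irradiance = {np.exp(intercept):.0f} uW/cm^2")

# Depth at which UVB falls to 1% of the surface value under this model.
print(f"1% depth = {np.log(100.0) / kd:.2f} m")
```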

  6. Quantifying variable erosion rates to understand the coupling of surface processes in the Teton Range, Wyoming

    NASA Astrophysics Data System (ADS)

    Tranel, Lisa M.; Spotila, James A.; Binnie, Steven A.; Freeman, Stewart P. H. T.

    2015-01-01

    Short-term geomorphic processes (fluvial, glacial, and hillslope erosion) and long-term exhumation control transient alpine landscapes. Long-term measurements of exhumation are not sufficient to capture the processes driving transient responses associated with short-term climatic oscillations, because of high variability of individual processes across space and time. This study compares the efficacy of different erosional agents to assess the importance of variability in tectonically active landscapes responding to fluctuations in Quaternary climate. We focus on the Teton Range, where erosional mechanisms include hillslope, glacial, and fluvial processes. Erosion rates were quantified using sediment accumulation and cosmogenic dating (bedrock and stream sediments). Results show that rates of erosion are highly variable, with average short-term rockfall rates (0.8 mm/y) occurring faster than either apparent basin-averaged erosion rates (0.2 mm/y) or long-term ridge erosion rates (0.02 mm/y). Examining erosion rates separately also demonstrates the coupling between glacial, fluvial, and hillslope processes. Apparent basin-averaged erosion rates amalgamate valley wall and ridge erosion with stream and glacial rates. Climate oscillations drive the short-term response of a single erosional process (e.g., rockfalls or other mass wasting) that may enhance or limit the erosional efficiency of other processes (glacial or fluvial). While the Teton landscape may approach long-term equilibrium, stochastic processes and rapid response to short-term climate change actively perpetuate the transient ruggedness of the topography.

  7. H-ATLAS/GAMA: Quantifying the Morphological Evolution of the Galaxy Population Using Cosmic Calorimetry

    E-print Network

    Eales, Stephen; Allen, Matthew; Smith, M W L; Baldry, Ivan; Bourne, Nathan; Clark, C J R; Driver, Simon; Dunne, Loretta; Dye, Simon; Graham, Alister W; Ibar, Edo; Hopkins, Andrew; Ivison, Rob; Kelvin, Lee S; Maddox, Steve; Maraston, Claudia; Robotham, Aaron S G; Smith, Dan; Taylor, Edward N; Valiante, Elisabetta; van der Werf, Paul; Baes, Maarten; Brough, Sarah; Clements, David; Cooray, Asantha; Gomez, Haley; Loveday, Jon; Phillipps, Steven; Scott, Douglas; Serjeant, Steve

    2015-01-01

    Using results from the Herschel Astrophysical Terahertz Large-Area Survey and the Galaxy and Mass Assembly project, we show that, for galaxy masses above approximately 1.0e8 solar masses, 51% of the stellar mass-density in the local Universe is in early-type galaxies (ETGs: Sersic n > 2.5) while 89% of the rate of production of stellar mass-density is occurring in late-type galaxies (LTGs: Sersic n < 2.5). From this zero-redshift benchmark, we have used a calorimetric technique to quantify the importance of the morphological transformation of galaxies over the history of the Universe. The extragalactic background radiation contains all the energy generated by nuclear fusion in stars since the Big Bang. By resolving this background radiation into individual galaxies using the deepest far-infrared survey with the Herschel Space Observatory and a deep near-infrared/optical survey with the Hubble Space Telescope (HST), and using measurements of the Sersic index of these galaxies derived from the HST images, w...

  8. Quantifying Uncertainty of Pedotransfer Functions on Soil Water Retention and Hydrologic Model Output

    NASA Astrophysics Data System (ADS)

    Göhler, Maren; Mai, Juliane; Zacharias, Steffen; Cuntz, Matthias

    2015-04-01

    Pedotransfer Functions are often used to estimate soil water retention, an important physical property of soils, so quantifying their uncertainty is of high interest. Three independent sources of uncertainty in Pedotransfer Functions are analysed using a probabilistic approach: (1) uncertainty resulting from the limited data base available for Pedotransfer Function calibration, (2) uncertainty arising from unknown errors in the measurements used to develop the Pedotransfer Functions, and (3) uncertainty arising from applying the Pedotransfer Functions in a modeling procedure using soil maps with textural classifications. The third uncertainty, arising through the application of the functions to random textural compositions, appears to be the most influential uncertainty in water retention estimates, especially for soil classes where sparse data were available for calibration. The bulk density strongly influences the variability in the saturated water content and spatial variations in soil moisture. In addition, the propagation of the uncertainty arising from random sampling of the calibration data set has a large effect on soil moisture computed with a mesoscale hydrologic model. Evapotranspiration is the most affected hydrologic model output, whereas discharge shows only minor variation. The analysis of the measurement error remains difficult due to the high correlation between the Pedotransfer Function coefficients.
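
    A minimal sketch of the probabilistic approach described above: uncertain soil properties for one mapped soil class are sampled and pushed through a pedotransfer function by Monte Carlo, and the induced spread in saturated water content is summarized. The property distributions and the linear coefficients below are hypothetical placeholders, not the study's calibrated functions.

```python
# Hedged sketch: Monte Carlo propagation of input uncertainty through a placeholder
# pedotransfer function for saturated water content.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical uncertain inputs for one soil-map class.
clay = rng.normal(25.0, 5.0, n)            # clay content (%)
bulk_density = rng.normal(1.40, 0.10, n)   # bulk density (g/cm^3)

# Hypothetical (placeholder) pedotransfer function for saturated water content (cm^3/cm^3).
theta_s = 0.81 - 0.283 * bulk_density + 0.001 * clay

# Summarize the uncertainty induced in theta_s by the uncertain inputs.
p5, p50, p95 = np.percentile(theta_s, [5, 50, 95])
print(f"theta_s: median = {p50:.3f}, 90% interval = [{p5:.3f}, {p95:.3f}], sd = {theta_s.std():.3f}")
```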

  9. Quantifying downstream impacts of impoundment on flow regime and channel planform, lower Trinity River, Texas

    NASA Astrophysics Data System (ADS)

    Wellmeyer, Jessica L.; Slattery, Michael C.; Phillips, Jonathan D.

    2005-07-01

    As human population worldwide has grown, so has interest in harnessing and manipulating the flow of water for the benefit of humans. The Trinity River of eastern Texas is one such watershed greatly impacted by engineering and urbanization. Draining the Dallas-Fort Worth metroplex, just under 30 reservoirs are in operation in the basin, regulating flow while containing public supplies, supporting recreation, and providing flood control. Lake Livingston is the lowest, as well as largest, reservoir in the basin, a mere 95 km above the Trinity's outlet near Galveston Bay. This study seeks to describe and quantify channel activity and flow regime, identifying effects of the 1968 closure of Livingston dam. Using historic daily and peak discharge data from USGS gauging stations, flow duration curves are constructed, identifying pre- and post-dam flow conditions. A digital historic photo archive was also constructed using six sets of aerial photographs spanning from 1938 to 1995, and three measures of channel activity applied using a GIS. Results show no changes in high flow conditions following impoundment, while low flows are elevated. However, the entire post-dam period is characterized by significantly higher rainfall, which may be obscuring the full impact of flow regulation. Channel activity rates do not indicate a more stabilized planform following dam closure; rather they suggest that the Trinity River is adjusting itself to the stress of Livingston dam in a slow, gradual process that may not be apparent in a modern time scale.
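
    A brief sketch of the flow-duration-curve construction used above: daily discharges are ranked, each is assigned an exceedance probability, and low-flow and high-flow percentiles are compared between periods. The synthetic pre- and post-dam series are illustrative only and bear no relation to the Trinity River gauge records.

```python
# Hedged sketch: flow duration curves from daily discharge, comparing two periods.
import numpy as np

rng = np.random.default_rng(7)
pre_dam = rng.lognormal(mean=5.0, sigma=1.0, size=30 * 365)    # hypothetical pre-impoundment daily Q
post_dam = rng.lognormal(mean=5.2, sigma=0.8, size=30 * 365)   # hypothetical post-impoundment daily Q

def flow_duration_curve(q):
    """Return exceedance probabilities and the ranked discharges they correspond to."""
    q_sorted = np.sort(q)[::-1]                                # descending
    exceedance = np.arange(1, q_sorted.size + 1) / (q_sorted.size + 1)
    return exceedance, q_sorted

for label, q in [("pre-dam", pre_dam), ("post-dam", post_dam)]:
    exc, q_sorted = flow_duration_curve(q)
    q90 = np.interp(0.90, exc, q_sorted)                       # flow exceeded 90% of the time (low flow)
    q10 = np.interp(0.10, exc, q_sorted)                       # flow exceeded 10% of the time (high flow)
    print(f"{label}: Q90 = {q90:.0f}, Q10 = {q10:.0f}")
```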

  10. Quantifying ammonia emissions from a cattle feedlot using a dispersion model.

    PubMed

    McGinn, S M; Flesch, T K; Crenna, B P; Beauchemin, K A; Coates, T

    2007-01-01

    Livestock manure is a significant source of ammonia (NH3) emissions. In the atmosphere, NH3 is a precursor to the formation of fine aerosols that contribute to poor air quality associated with human health. Other environmental issues result when NH3 is deposited to land and water. Our study documented the quantity of NH3 emitted from a feedlot housing growing beef cattle. The study was conducted between June and October 2006 at a feedlot with a one-time capacity of 22,500 cattle located in southern Alberta, Canada. A backward Lagrangian stochastic (bLS) inverse-dispersion technique was used to calculate NH3 emissions, based on measurements of NH3 concentration (open-path laser) and wind (sonic anemometer) taken above the interior of the feedlot. There was an average of 3146 kg NH3 d(-1) lost from the entire feedlot, equivalent to 84 microg NH3 m(-2) s(-1) or 140 g NH3 head(-1) d(-1). The NH3 emissions correlated with sensible heat flux (r2 = 0.84) and to a lesser extent the wind speed (r2 = 0.56). There was also evidence that rain suppressed the NH3 emission. Quantifying NH3 emission and dispersion from farms is essential to show the impact of farm management on reducing NH3-related environmental issues. PMID:17940257
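
    The core arithmetic of the inverse-dispersion approach described above can be written in a few lines: the emission rate is the measured concentration rise divided by the model-simulated ratio of concentration to emission, (C/Q)sim. Every number below is hypothetical and chosen only to illustrate the unit handling, not to reproduce the study's measurements.

```python
# Hedged sketch of backward-Lagrangian-stochastic (bLS) inverse-dispersion arithmetic.
c_measured = 120.0      # line-averaged NH3 concentration over the source (ug m^-3), hypothetical
c_background = 15.0     # upwind background concentration (ug m^-3), hypothetical
c_over_q_sim = 1.25     # bLS-simulated (C/Q) for this run ((ug m^-3) per (ug m^-2 s^-1)), hypothetical

flux = (c_measured - c_background) / c_over_q_sim       # surface flux in ug NH3 m^-2 s^-1
feedlot_area_m2 = 3.5e5                                 # hypothetical feedlot footprint

emission_kg_per_day = flux * feedlot_area_m2 * 86_400 / 1e9
print(f"flux = {flux:.0f} ug NH3 m^-2 s^-1; whole-feedlot emission = {emission_kg_per_day:.0f} kg NH3 d^-1")
```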

  11. Using Eddy Covariance Sensors to Quantify Carbon Metabolism of Peatlands: A Case Study in Turkey

    PubMed Central

    Evrendilek, Fatih; Karakaya, Nusret; Aslan, Guler; Ertekin, Can

    2011-01-01

    Net ecosystem exchange (NEE) of carbon dioxide (CO2) was measured in a cool temperate peatland in northwestern Turkey on a continuous basis using eddy covariance (EC) sensors and multiple (non-)linear regression-M(N)LR-models. Our results showed that hourly NEE varied between −1.26 and 1.06 mg CO2 m−2 s−1, with a mean value of 0.11 mg CO2 m−2 s−1. Nighttime ecosystem respiration (RE) was on average measured as 0.23 ± 0.09 mg CO2 m−2 s−1. Two best-fit M(N)LR models estimated daytime RE as 0.64 ± 0.31 and 0.24 ± 0.05 mg CO2 m−2 s−1. Total RE as the sum of nighttime and daytime RE ranged from 0.47 to 0.87 mg CO2 m−2 s−1, thus yielding estimates of gross primary productivity (GPP) at −0.35 ± 0.18 and −0.74 ± 0.43 mg CO2 m−2 s−1. Use of EC sensors and M(N)LR models is one of the most direct ways to quantify turbulent CO2 exchanges among the soil, vegetation and atmosphere within the atmospheric boundary layer, as well as source and sink behaviors of ecosystems. PMID:22346588
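
    The eddy covariance flux itself reduces to the time-averaged covariance of vertical wind and CO2 density fluctuations over an averaging block. The sketch below computes that covariance on synthetic 10 Hz signals; the sampling rate, signal statistics, and the omission of the usual corrections (coordinate rotation, density corrections, despiking) are all simplifying assumptions.

```python
# Hedged sketch of the eddy-covariance flux calculation: NEE ~ mean(w' * c') for one block.
import numpy as np

fs = 10                      # assumed 10 Hz sonic/IRGA sampling
n = fs * 60 * 30             # one 30-minute averaging block
rng = np.random.default_rng(3)

w = 0.25 * rng.standard_normal(n)                      # vertical wind fluctuations (m s^-1), synthetic
c = 700.0 + 0.4 * w + 0.8 * rng.standard_normal(n)     # CO2 density (mg m^-3), correlated with w

w_prime = w - w.mean()
c_prime = c - c.mean()

nee = np.mean(w_prime * c_prime)                       # mg CO2 m^-2 s^-1
print(f"NEE for this block = {nee:.3f} mg CO2 m^-2 s^-1")
```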

  12. Quantifying the Entropy of Binding for Water Molecules in Protein Cavities by Computing Correlations

    PubMed Central

    Huggins, David J.

    2015-01-01

    Protein structural analysis demonstrates that water molecules are commonly found in the internal cavities of proteins. Analysis of experimental data on the entropies of inorganic crystals suggests that the entropic cost of transferring such a water molecule to a protein cavity will not typically be greater than 7.0 cal/mol/K per water molecule, corresponding to a contribution of approximately +2.0 kcal/mol to the free energy. In this study, we employ the statistical mechanical method of inhomogeneous fluid solvation theory to quantify the enthalpic and entropic contributions of individual water molecules in 19 protein cavities across five different proteins. We utilize information theory to develop a rigorous estimate of the total two-particle entropy, yielding a complete framework to calculate hydration free energies. We show that predictions from inhomogeneous fluid solvation theory are in excellent agreement with predictions from free energy perturbation (FEP) and that these predictions are consistent with experimental estimates. However, the results suggest that water molecules in protein cavities containing charged residues may be subject to entropy changes that contribute more than +2.0 kcal/mol to the free energy. In all cases, these unfavorable entropy changes are predicted to be dominated by highly favorable enthalpy changes. These findings are relevant to the study of bridging water molecules at protein-protein interfaces as well as in complexes with cognate ligands and small-molecule inhibitors. PMID:25692597

  13. Quantifying Diurnal Cloud Radiative Effects by Cloud Type in the Tropical Western Pacific

    SciTech Connect

    Burleyson, Casey D.; Long, Charles N.; Comstock, Jennifer M.

    2015-06-01

    Cloud radiative effects are examined using long-term datasets collected at the three Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facilities in the tropical western Pacific. We quantify the surface radiation budget, cloud populations, and cloud radiative effects by partitioning the data by cloud type, time of day, and as a function of large scale modes of variability such as El Niño Southern Oscillation (ENSO) phase and wet/dry seasons at Darwin. The novel facet of our analysis is that we break aggregate cloud radiative effects down by cloud type across the diurnal cycle. The Nauru cloud populations and subsequently the surface radiation budget are strongly impacted by ENSO variability whereas the cloud populations over Manus only shift slightly in response to changes in ENSO phase. The Darwin site exhibits large seasonal monsoon related variations. We show that while deeper convective clouds have a strong conditional influence on the radiation reaching the surface, their limited frequency reduces their aggregate radiative impact. The largest source of shortwave cloud radiative effects at all three sites comes from low clouds. We use the observations to demonstrate that potential model biases in the amplitude of the diurnal cycle and mean cloud frequency would lead to larger errors in the surface energy budget compared to biases in the timing of the diurnal cycle of cloud frequency. Our results provide solid benchmarks to evaluate model simulations of cloud radiative effects in the tropics.

  14. Quantifying sub-pixel urban impervious surface through fusion of optical and inSAR imagery

    USGS Publications Warehouse

    Yang, L.; Jiang, L.; Lin, H.; Liao, M.

    2009-01-01

    In this study, we explored the potential to improve urban impervious surface modeling and mapping with the synergistic use of optical and Interferometric Synthetic Aperture Radar (InSAR) imagery. We used a Classification and Regression Tree (CART)-based approach to test the feasibility and accuracy of quantifying Impervious Surface Percentage (ISP) using four spectral bands of SPOT 5 high-resolution geometric (HRG) imagery and three parameters derived from the European Remote Sensing (ERS)-2 Single Look Complex (SLC) SAR image pair. Validated by an independent ISP reference dataset derived from the 33 cm-resolution digital aerial photographs, results show that the addition of InSAR data reduced the ISP modeling error rate from 15.5% to 12.9% and increased the correlation coefficient from 0.71 to 0.77. Spatially, the improvement is especially noted in areas of vacant land and bare ground, which were incorrectly mapped as urban impervious surfaces when using the optical remote sensing data. In addition, the accuracy of ISP prediction using InSAR images alone is only marginally less than that obtained by using SPOT imagery. The finding indicates the potential of using InSAR data for frequent monitoring of urban settings located in cloud-prone areas. Copyright © 2009 by Bellwether Publishing, Ltd. All rights reserved.
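
    The sketch below mimics the CART-based regression step described above with scikit-learn's DecisionTreeRegressor on synthetic per-pixel predictors standing in for four SPOT 5 bands and three InSAR parameters. The features, the data-generating rule, and the tree settings are invented for illustration.

```python
# Hedged sketch: CART-style regression of impervious surface percentage (ISP) on synthetic
# optical + InSAR predictors.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 5000

# Hypothetical per-pixel predictors: four optical bands plus three InSAR-derived parameters.
X = rng.uniform(0, 1, size=(n, 7))
# Invented "truth": ISP rises with band 3 and with InSAR coherence (column 5), bounded to 0-100%.
isp = np.clip(60 * X[:, 2] + 35 * X[:, 5] + 5 * rng.standard_normal(n), 0, 100)

X_train, X_test, y_train, y_test = train_test_split(X, isp, test_size=0.3, random_state=1)
tree = DecisionTreeRegressor(max_depth=8, min_samples_leaf=20).fit(X_train, y_train)

pred = tree.predict(X_test)
print(f"MAE = {mean_absolute_error(y_test, pred):.1f} ISP points; "
      f"r = {np.corrcoef(y_test, pred)[0, 1]:.2f}")
```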

  15. Quantifying the Economic and Cultural Biases of Social Media through Trending Topics.

    PubMed

    Carrascosa, Juan Miguel; Cuevas, Ruben; Gonzalez, Roberto; Azcorra, Arturo; Garcia, David

    2015-01-01

    Online social media has recently irrupted as the last major venue for the propagation of news and cultural content, competing with traditional mass media and allowing citizens to access new sources of information. In this paper, we study collectively filtered news and popular content in Twitter, known as Trending Topics (TTs), to quantify the extent to which they show similar biases known for mass media. We use two datasets collected in 2013 and 2014, including more than 300,000 TTs from 62 countries. The existing patterns of leader-follower relationships among countries reveal systemic biases known for mass media: Countries concentrate their attention on small groups of other countries, generating a pattern of centralization in which TTs follow the gradient of wealth across countries. At the same time, we find subjective biases within language communities linked to the cultural similarity of countries, in which countries with closer cultures and shared languages tend to follow each other's TTs. Moreover, using a novel methodology based on the Google News service, we study the influence of mass media in TTs for four countries. We find that roughly half of the TTs in Twitter overlap with news reported by mass media, and that the rest of TTs are more likely to spread internationally within Twitter. Our results confirm that online social media have the power to independently spread content beyond mass media, but at the same time social media content follows economic incentives and is subject to cultural factors and language barriers. PMID:26230656

  16. Using nonlinear methods to quantify changes in infant limb movements and vocalizations

    PubMed Central

    Abney, Drew H.; Warlaumont, Anne S.; Haussman, Anna; Ross, Jessica M.; Wallot, Sebastian

    2014-01-01

    The pairing of dynamical systems theory and complexity science brings novel concepts and methods to the study of infant motor development. Accordingly, this longitudinal case study presents a new approach to characterizing the dynamics of infant limb and vocalization behaviors. A single infant's vocalizations and limb movements were recorded from 51-days to 305-days of age. On each recording day, accelerometers were placed on all four of the infant's limbs and an audio recorder was worn on the child's chest. Using nonlinear time series analysis methods, such as recurrence quantification analysis and Allan factor, we quantified changes in the stability and multiscale properties of the infant's behaviors across age as well as how these dynamics relate across modalities and effectors. We observed that particular changes in these dynamics preceded or coincided with the onset of various developmental milestones. For example, the largest changes in vocalization dynamics preceded the onset of canonical babbling. The results show that nonlinear analyses can help to understand the functional co-development of different aspects of infant behavior. PMID:25161629
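
    One of the multiscale measures named above, the Allan factor, compares the variance of count differences in adjacent windows to the mean count, AF(T) = <(N(i+1) − N(i))^2> / (2<N>), as a function of window size T. The sketch below computes it for a synthetic event-onset series; for a Poisson-like series the Allan factor stays near 1 across scales. The event times and window sizes are illustrative assumptions.

```python
# Hedged sketch: Allan factor of an event series (e.g., vocalization onsets) versus window size.
import numpy as np

rng = np.random.default_rng(5)
duration_s = 3600.0
event_times = np.sort(rng.uniform(0, duration_s, 2000))      # hypothetical event onsets (s)

def allan_factor(times, window_s, total_s):
    """AF(T) = mean((N_{i+1} - N_i)^2) / (2 * mean(N_i)) for counts in successive windows."""
    edges = np.arange(0.0, total_s + window_s, window_s)
    counts, _ = np.histogram(times, bins=edges)
    diffs = np.diff(counts)
    return np.mean(diffs**2) / (2.0 * np.mean(counts))

for window in [1, 5, 20, 60, 300]:
    print(f"T = {window:4d} s -> Allan factor = {allan_factor(event_times, window, duration_s):.2f}")
```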

  17. Assessment of machine learning reliability methods for quantifying the applicability domain of QSAR regression models.

    PubMed

    Toplak, Marko; Močnik, Rok; Polajnar, Matija; Bosnić, Zoran; Carlsson, Lars; Hasselgren, Catrin; Demšar, Janez; Boyer, Scott; Zupan, Blaž; Stålring, Jonna

    2014-02-24

    The vastness of chemical space and the relatively small coverage by experimental data recording molecular properties require us to identify subspaces, or domains, for which we can confidently apply QSAR models. The prediction of QSAR models in these domains is reliable, and potential subsequent investigations of such compounds would find that the predictions closely match the experimental values. Standard approaches in QSAR assume that predictions are more reliable for compounds that are "similar" to those in subspaces with denser experimental data. Here, we report on a study of an alternative set of techniques recently proposed in the machine learning community. These methods quantify prediction confidence through estimation of the prediction error at the point of interest. Our study includes 20 public QSAR data sets with continuous response and assesses the quality of 10 reliability scoring methods by observing their correlation with prediction error. We show that these new alternative approaches can outperform standard reliability scores that rely only on similarity to compounds in the training set. The results also indicate that the quality of reliability scoring methods is sensitive to data set characteristics and to the regression method used in QSAR. We demonstrate that at the cost of increased computational complexity these dependencies can be leveraged by integration of scores from various reliability estimation approaches. The reliability estimation techniques described in this paper have been implemented in an open source add-on package ( https://bitbucket.org/biolab/orange-reliability ) to the Orange data mining suite. PMID:24490838

  18. H-ATLAS/GAMA: quantifying the morphological evolution of the galaxy population using cosmic calorimetry

    NASA Astrophysics Data System (ADS)

    Eales, Stephen; Fullard, Andrew; Allen, Matthew; Smith, M. W. L.; Baldry, Ivan; Bourne, Nathan; Clark, C. J. R.; Driver, Simon; Dunne, Loretta; Dye, Simon; Graham, Alister W.; Ibar, Edo; Hopkins, Andrew; Ivison, Rob; Kelvin, Lee S.; Maddox, Steve; Maraston, Claudia; Robotham, Aaron S. G.; Smith, Dan; Taylor, Edward N.; Valiante, Elisabetta; Werf, Paul van der; Baes, Maarten; Brough, Sarah; Clements, David; Cooray, Asantha; Gomez, Haley; Loveday, Jon; Phillipps, Steven; Scott, Douglas; Serjeant, Steve

    2015-10-01

    Using results from the Herschel Astrophysical Terahertz Large-Area Survey (H-ATLAS) and the Galaxy and Mass Assembly (GAMA) project, we show that, for galaxy masses above ≈ 10^8 M⊙, 51 per cent of the stellar mass-density in the local Universe is in early-type galaxies (ETGs; Sérsic n > 2.5) while 89 per cent of the rate of production of stellar mass-density is occurring in late-type galaxies (LTGs; Sérsic n < 2.5). From this zero-redshift benchmark, we have used a calorimetric technique to quantify the importance of the morphological transformation of galaxies over the history of the Universe. The extragalactic background radiation contains all the energy generated by nuclear fusion in stars since the big bang. By resolving this background radiation into individual galaxies using the deepest far-infrared survey with the Herschel Space Observatory and a deep near-infrared/optical survey with the Hubble Space Telescope (HST), and using measurements of the Sérsic index of these galaxies derived from the HST images, we estimate that ≈83 per cent of the stellar mass-density formed over the history of the Universe occurred in LTGs. The difference between this value and the fraction of the stellar mass-density that is in LTGs today implies there must have been a major transformation of LTGs into ETGs after the formation of most of the stars.

  19. Quantifying the energy dissipation of overriding plate deformation in three-dimensional subduction models

    NASA Astrophysics Data System (ADS)

    Chen, Zhihao; Schellart, Wouter P.; Duarte, João. C.

    2015-01-01

    In a subduction system the force and the energy required to deform the overriding plate are generally thought to come from the negative buoyancy of the subducted slab and its potential energy, respectively. Such deformation might involve extension and back-arc basin formation or shortening and mountain building. How much of the slab's potential energy is consumed during overriding plate deformation remains unknown. In this work, we present dynamic three-dimensional laboratory experiments of progressive subduction with an overriding plate to quantify the force (FOPD) that drives overriding plate deformation and the associated energy dissipation rate (ΦOPD), and we compare them with the negative buoyancy (FBU) of the subducted slab and its total potential energy release rate (ΦBU), respectively. We varied the viscosity ratio between the plates and the sublithospheric upper mantle with ηSP/ηUM = 157-560 and the thickness of the overriding plate with TOP = 0.5-2.5 cm (scaling to 25-125 km in nature). The results show that FOPD/FBU has average values of 0.5-2.0%, with a maximum of 5.3%, and ΦOPD/ΦBU has average values of 0.05-0.30%, with a maximum of 0.41%. The results indicate that only a small portion of the negative buoyancy of the slab and its potential energy are used to deform the overriding plate. Our models also suggest that the force required to deform the overriding plate is of comparable magnitude as the ridge push force. Furthermore, we show that in subduction models with an overriding plate bending dissipation at the subduction zone hinge remains low (3-15% during steady state subduction).

  20. Infrared imaging to quantify the effects of nicotine-induced vasoconstriction in humans

    NASA Astrophysics Data System (ADS)

    Brunner, Siegfried; Kargel, Christian

    2009-05-01

    Smoking is the most significant source of preventable morbidity and premature mortality worldwide (WHO-2008). One of the many effects of nicotine is vasoconstriction which is triggered by the autonomic nervous system. The constriction of blood vessels e.g. of the skin's vascular bed is responsible for a decrease of the supply with oxygen and nutrients and a lowering of the skin temperature. We used infrared imaging to quantify temperature decreases caused by cigarette smoking in the extremities of smokers and also monitored heart rate as well as blood pressure. The results - including thermograms showing "temporary amputations" of the fingertips due to a significant temperature drop - can help increase the awareness of the dangers of smoking and the success of withdrawal programs. Surprisingly, in our control persons (3 brave non-smoking volunteers who smoked a cigarette) we also found temperature increases suggesting that vasodilation (widening of blood vessels) was provoked by cigarettes. To verify this unexpected finding and eliminate effects from the 4000 chemical compounds in the smoke, we repeated the experiment following a stringent protocol ruling out physiological and psychological influences with 9 habitual smokers and 17 nonsmokers who all chew gums with 2 mg of nicotine. Task-optimized digital image processing techniques (target detection, image-registration and -segmentation) were applied to the acquired infrared image sequences to automatically yield temperature plots of the fingers and palm. In this paper we present the results of our study in detail and show that smokers and non-smokers respond differently to the administration of nicotine.

  1. Can recurrence networks show small world property?

    E-print Network

    Rinku Jacob; K. P. Harikrishnan; R. Misra; G. Ambika

    2015-09-15

    Recurrence networks are important statistical tools used for the analysis of time series data with several practical applications. Though these networks are complex and characterize objects with structural scale invariance, their properties from a complex network perspective have not been fully understood. In this Letter, we argue, with numerical support, that the recurrence networks from chaotic attractors with continuous measure can show neither scale-free topology nor the small-world property. However, if the critical threshold is increased from its optimum value, the recurrence network initially crosses over to a complex network with the small-world property and finally to the classical random graph as the threshold approaches the size of the strange attractor.
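
    A small sketch of the object under discussion: a recurrence network built from a delay-embedded scalar series, with an epsilon-recurrence adjacency matrix handed to networkx so that clustering and average path length (ingredients of small-world diagnostics) can be inspected. The logistic-map series, embedding, and threshold are illustrative choices, not the Letter's exact construction.

```python
# Hedged sketch: epsilon-recurrence network from a delay-embedded chaotic time series.
import numpy as np
import networkx as nx

# Generate a chaotic logistic-map series.
n, r = 1200, 3.9
x = np.empty(n)
x[0] = 0.4
for i in range(1, n):
    x[i] = r * x[i - 1] * (1.0 - x[i - 1])

# Two-dimensional delay embedding (delay = 1) and epsilon-recurrence adjacency.
pts = np.column_stack([x[:-1], x[1:]])
eps = 0.05
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
adj = (dists < eps) & ~np.eye(len(pts), dtype=bool)

G = nx.from_numpy_array(adj.astype(int))
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()   # largest connected component

print(f"nodes = {G.number_of_nodes()}, mean degree = {2 * G.number_of_edges() / G.number_of_nodes():.1f}")
print(f"average clustering  C = {nx.average_clustering(G):.3f}")
print(f"average path length L = {nx.average_shortest_path_length(G):.2f}")
```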

  2. Quantifying Regional Measurement Requirements for ASCENDS

    NASA Astrophysics Data System (ADS)

    Mountain, M. E.; Eluszkiewicz, J.; Nehrkorn, T.; Hegarty, J. D.; Aschbrenner, R.; Henderson, J.; Zaccheo, S.

    2011-12-01

    Quantification of greenhouse gas fluxes at regional and local scales is required by the Kyoto protocol and potential follow-up agreements, and their accompanying implementation mechanisms (e.g., cap-and-trade schemes and treaty verification protocols). Dedicated satellite observations, such as those provided by the Greenhouse gases Observing Satellite (GOSAT), the upcoming Orbiting Carbon Observatory (OCO-2), and future active missions, particularly Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and Advanced Space Carbon and Climate Observation of Planet Earth (A-SCOPE), are poised to play a central role in this endeavor. In order to prepare for the ASCENDS mission, we are applying the Stochastic Time-Inverted Lagrangian Transport (STILT) model driven by meteorological fields from a customized version of the Weather Research and Forecasting (WRF) model to generate surface influence functions for ASCENDS observations. These "footprints" (or adjoint) express the sensitivity of observations to surface fluxes in the upwind source regions and thus enable the computation of a posteriori flux error reductions resulting from the inclusion of satellite observations (taking into account the vertical sensitivity and error characteristics of the latter). The overarching objective of this project is the specification of the measurement requirements for the ASCENDS mission, with a focus on policy-relevant regional scales. Several features make WRF-STILT an attractive tool for regional analysis of satellite observations: 1) WRF meteorology is available at higher resolution than for global models and is thus more realistic, 2) The Lagrangian approach minimizes numerical diffusion present in Eulerian models, 3) The WRF-STILT coupling has been specifically designed to achieve good mass conservation characteristics, and 4) The receptor-oriented approach offers a relatively straightforward way to compute the adjoint of the transport model. These aspects allow the model to compute surface influences for satellite observations at high spatiotemporal resolution and to generate realistic flux error and flux estimates at policy-relevant scales. The main drawbacks of the Lagrangian approach to satellite simulations are inefficiency and storage requirements, but these obstacles can be overcome by taking advantage of modern computing resources (the current runs are being performed on the NASA Pleiades supercomputer). We gratefully acknowledge funding by the NASA Atmospheric CO2 Observations from Space Program (grant NNX10AT87G).

  3. Global climate change: the quantifiable sustainability challenge.

    PubMed

    Princiotta, Frank T; Loughlin, Daniel H

    2014-09-01

    Population growth and the pressures spawned by increasing demands for energy and resource-intensive goods, foods, and services are driving unsustainable growth in greenhouse gas (GHG) emissions. Recent GHG emission trends are consistent with worst-case scenarios of the previous decade. Dramatic and near-term emission reductions likely will be needed to ameliorate the potential deleterious impacts of climate change. To achieve such reductions, fundamental changes are required in the way that energy is generated and used. New technologies must be developed and deployed at a rapid rate. Advances in carbon capture and storage, renewable, nuclear and transportation technologies are particularly important; however, global research and development efforts related to these technologies currently appear to fall short relative to needs. Even with a proactive and international mitigation effort, humanity will need to adapt to climate change, but the adaptation needs and damages will be far greater if mitigation activities are not pursued in earnest. In this review, research is highlighted that indicates increasing global and regional temperatures and ties climate changes to increasing GHG emissions. GHG mitigation targets necessary for limiting future global temperature increases are discussed, including how factors such as population growth and the growing energy intensity of the developing world will make these reduction targets more challenging. Potential technological pathways for meeting emission reduction targets are examined, barriers are discussed, and global and U.S. modeling results are presented that suggest that the necessary pathways will require radically transformed electric and mobile sectors. While geoengineering options have been proposed to allow more time for serious emission reductions, these measures are at the conceptual stage with many unanswered cost, environmental, and political issues. Implications: This paper lays out the case that mitigating the potential for catastrophic climate change will be a monumental challenge, requiring the global community to transform its energy system in an aggressive, coordinated, and timely manner. If this challenge is to be met, new technologies will have to be developed and deployed at a rapid rate. Advances in carbon capture and storage, renewable, nuclear, and transportation technologies are particularly important. Even with an aggressive international mitigation effort, humanity will still need to adapt to significant climate change. PMID:25282995

  4. Conservation Project Shows Substantial Reduction in Home Water Use

    ERIC Educational Resources Information Center

    Sharpe, William E.; Smith, Donald

    1978-01-01

    Describes a water use study-conservation project conducted by the Washington Suburban Sanitary Commission in Maryland. Results show a significant decrease in the amount of water used by home customers over a ten-year period. (Author/MA)

  5. Ancient bacteria show evidence of DNA repair

    E-print Network

    Nielsen, Rasmus

    Recent claims of cultivable ancient bacteria within sealed environments highlight [...] long-term survival of bacteria sealed in frozen conditions for up to one million years. Our results show evidence [...]

  6. Show Me the Invisible: Visualizing Hidden Content

    PubMed Central

    Geymayer, Thomas; Steinberger, Markus; Lex, Alexander; Streit, Marc; Schmalstieg, Dieter

    2014-01-01

    Content on computer screens is often inaccessible to users because it is hidden, e.g., occluded by other windows, outside the viewport, or overlooked. In search tasks, the efficient retrieval of sought content is important. Current software, however, only provides limited support to visualize hidden occurrences and rarely supports search synchronization crossing application boundaries. To remedy this situation, we introduce two novel visualization methods to guide users to hidden content. Our first method generates awareness for occluded or out-of-viewport content using see-through visualization. For content that is either outside the screen’s viewport or for data sources not opened at all, our second method shows off-screen indicators and an on-demand smart preview. To reduce the chances of overlooking content, we use visual links, i.e., visible edges, to connect the visible content or the visible representations of the hidden content. We show the validity of our methods in a user study, which demonstrates that our technique enables a faster localization of hidden content compared to traditional search functionality and thereby assists users in information retrieval tasks. PMID:25325078

  7. Show Me the Invisible: Visualizing Hidden Content.

    PubMed

    Geymayer, Thomas; Steinberger, Markus; Lex, Alexander; Streit, Marc; Schmalstieg, Dieter

    2014-01-01

    Content on computer screens is often inaccessible to users because it is hidden, e.g., occluded by other windows, outside the viewport, or overlooked. In search tasks, the efficient retrieval of sought content is important. Current software, however, only provides limited support to visualize hidden occurrences and rarely supports search synchronization crossing application boundaries. To remedy this situation, we introduce two novel visualization methods to guide users to hidden content. Our first method generates awareness for occluded or out-of-viewport content using see-through visualization. For content that is either outside the screen's viewport or for data sources not opened at all, our second method shows off-screen indicators and an on-demand smart preview. To reduce the chances of overlooking content, we use visual links, i.e., visible edges, to connect the visible content or the visible representations of the hidden content. We show the validity of our methods in a user study, which demonstrates that our technique enables a faster localization of hidden content compared to traditional search functionality and thereby assists users in information retrieval tasks. PMID:25325078

  8. Quantifying the Carbon Intensity of Biomass Energy

    NASA Astrophysics Data System (ADS)

    Hodson, E. L.; Wise, M.; Clarke, L.; McJeon, H.; Mignone, B.

    2012-12-01

    Regulatory agencies at the national and regional level have recognized the importance of quantitative information about greenhouse gas emissions from biomass used in transportation fuels or in electricity generation. For example, in the recently enacted California Low-Carbon Fuel Standard, the California Air Resources Board conducted a comprehensive study to determine an appropriate methodology for setting carbon intensities for biomass-derived transportation fuels. Furthermore, the U.S. Environmental Protection Agency is currently conducting a multi-year review to develop a methodology for estimating biogenic carbon dioxide (CO2) emissions from stationary sources. Our study develops and explores a methodology to compute carbon emission intensities (CIs) per unit of biomass energy, which is a metric that could be used to inform future policy development exercises. To compute CIs for biomass, we use the Global Change Assessment Model (GCAM), which is an integrated assessment model that represents global energy, agriculture, land and physical climate systems with regional, sectoral, and technological detail. The GCAM land use and land cover component includes both managed and unmanaged land cover categories such as food crop production, forest products, and various non-commercial land uses, and it is subdivided into 151 global land regions (wiki.umd.edu/gcam), ten of which are located in the U.S. To illustrate a range of values for different biomass resources, we use GCAM to compute CIs for a variety of biomass crops grown in different land regions of the U.S. We investigate differences in emissions for biomass crops such as switchgrass, miscanthus and willow. Specifically, we use GCAM to compute global carbon emissions from the land use change caused by a marginal increase in the amount of biomass crop grown in a specific model region. Thus, we are able to explore how land use change emissions vary by the type and location of biomass crop grown in the U.S. Direct emissions occur when biomass production used for energy displaces land used for food crops, forest products, pasture, or other arable land in the same region. Indirect emissions occur when increased food crop production, compensating for displaced food crop production in the biomass production region, displaces land in regions outside of the region of biomass production. Initial results from this study suggest that indirect land use emissions, mainly from converting unmanaged forest land, are likely to be as important as direct land use emissions in determining the carbon intensity of biomass energy. Finally, we value the emissions of a marginal unit of biomass production for a given carbon price path and a range of assumed social discount rates. We also compare the cost of bioenergy emissions as valued by a hypothetical private actor to the relevant cost of emissions from conventional fossil fuels, such as coal or natural gas.

  9. Quantifying retro-foreland evolution in the Eastern Pyrenees.

    NASA Astrophysics Data System (ADS)

    Grool, Arjan R.; Ford, Mary; Huismans, Ritske S.

    2015-04-01

    The northern Pyrenees form the retro-foreland of the Pyrenean orogen. Modelling studies show that retro-forelands have several contrasting characteristics compared to pro-forelands: They tend to show a constant tectonic subsidence during the growth phase of an orogen, and no tectonic subsidence during the steady-state phase. Retro-forelands are also not displaced into the core of the orogen once the steady state phase is achieved. This means they tend to preserve the subsidence history from the growth phase of the orogen, but little or no history from the steady state phase. The northeastern Pyrenees (Carcassonne high) are a good location to test these characteristics against real-world data, because syn-orogenic sediments are preserved and the lack of post-rift thermal subsidence and Triassic salt reduces complicating factors. In order to test the model, quantification of the following parameters is needed: Timing, amount and distribution of deformation, subsidence and sedimentation. We use subsurface, field, map and literature data to construct 2 balanced and restored cross sections through the eastern north Pyrenean foreland, stretching from the Montagne Noire in the north, to the Axial Zone in the south. We will link this to published thermochronology data to further constrain the evolution of the retro-foreland and investigate the link with the Axial Zone towards the south. We will quantify subsidence, deformation and sedimentation and link them to exhumation phases in the North Pyrenean Zone (NPZ) and the Axial Zone. The north Pyrenean retro-foreland is divided into two parts: the external foreland basin (Aquitaine basin) to the north and the North Pyrenean Zone to the south, separated by the North Pyrenean Frontal Thrust (NPFT). South of the NPZ lies the Axial Zone, separated from the retro-foreland by the North Pyrenean Fault, which is believed to be the suture between Iberia and Europe. The NPFT was the breakaway fault on the European continent during the Apto-Albian rifting phase and was strongly inverted during the Pyrenean orogeny. South of the NPFT we find Lower Cretaceous and older sediments, including Triassic salt. These sediments are completely absent north of the NPFT (on the Carcassonne high), indicating its significance during the extensional phase. The retro-foreland is deformed by fault-propagation folds above basement-involving thrusts. A slow northward propagation of deformation and sedimentation is clearly visible. The preserved thickness of Upper Cretaceous sediments corresponds with the retro-foreland model's prediction that early subsidence records are preserved. Two distinct deformation phases are recognized, but not the latest Oligocene phase that is found in the pro-foreland (southern Pyrenees). This could indicate a steady state during the late Oligocene. We quantify and constrain the evolution of the eastern Pyrenean retro-foreland basin, investigate the link with the axial zone and investigate the pre-orogenic configuration of the region that currently constitutes the eastern Pyrenean retro-foreland.

  10. Quantifying Interannual Variability for Photovoltaic Systems in PVWatts

    SciTech Connect

    Ryberg, David Severin; Freeman, Janine; Blair, Nate

    2015-10-01

    The National Renewable Energy Laboratory's (NREL's) PVWatts is a relatively simple tool used by industry and individuals alike to easily estimate the amount of energy a photovoltaic (PV) system will produce throughout the course of a typical year. PVWatts Version 5 has previously been shown to be able to reasonably represent an operating system's output when provided with concurrent weather data; however, this type of data is not available when estimating system output during future time frames. For this purpose PVWatts uses weather data from typical meteorological year (TMY) datasets which are available on the NREL website. The TMY files represent a statistically 'typical' year which by definition excludes anomalous weather patterns and as a result may not provide sufficient quantification of project risk to the financial community. It was therefore desired to quantify the interannual variability associated with TMY files in order to improve the understanding of risk associated with these projects. To begin to understand the interannual variability of a PV project, we simulated two archetypal PV system designs, which are common in the PV industry, in PVWatts using the NSRDB's 1961-1990 historical dataset. This dataset contains measured hourly weather data and spans the thirty years from 1961-1990 for 239 locations in the United States. Of note, this historical dataset was used to compose the TMY2 dataset. Using the results of these simulations we computed several statistical metrics which may be of interest to the financial community and normalized the results with respect to the TMY energy prediction at each location, so that these results could be easily translated to similar systems. This report briefly describes the simulation process used and the statistical methodology employed for this project, but otherwise focuses mainly on a sample of our results. A short discussion of these results is also provided. It is our hope that this quantification of the interannual variability of PV systems will provide a starting point for variability considerations in future PV system designs and investigations.
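
    The normalization and summary statistics described above can be sketched in a few lines: each single-year simulated energy is divided by the TMY-based estimate, and percentile-type exceedance metrics are reported. The annual energies below are random placeholders, not PVWatts output.

```python
# Hedged sketch: interannual variability of annual PV energy, normalized by a TMY estimate.
import numpy as np

rng = np.random.default_rng(11)
tmy_energy_kwh = 8000.0                                    # hypothetical TMY prediction
annual_energy_kwh = rng.normal(8000.0, 320.0, 30)          # hypothetical 30 single-year results

normalized = annual_energy_kwh / tmy_energy_kwh
# P90/P99 in the exceedance sense: the value exceeded in 90%/99% of years.
p50, p90, p99 = np.percentile(normalized, [50, 10, 1])

print(f"coefficient of variation = {normalized.std(ddof=1) / normalized.mean():.1%}")
print(f"P50 = {p50:.3f} x TMY, P90 = {p90:.3f} x TMY, P99 = {p99:.3f} x TMY")
```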

  11. Quantifying anti-vascular effects of monoclonal antibodies to VEGF

    PubMed Central

    O’Connor, James P B; Carano, Richard A D; Clamp, Andrew R; Ross, Jed; Ho, Calvin C K; Jackson, Alan; Parker, Geoff J M; Rose, Chris J; Peale, Franklin V; Friesenhahn, Michel; Mitchell, Claire L; Watson, Yvonne; Roberts, Caleb; Hope, Lynn; Cheung, Sue; Reslan, Hani Bou; Go, Mary Ann T; Pacheco, Glenn J; Wu, Xiumin; Cao, Tim C; Ross, Sarajane; Buonaccorsi, Giovanni A; Davies, Karen; Hasan, Jurjees; Thornton, Paula; del Puerto, Olivia; Ferrara, Napoleone; van Bruggen, Nicholas; Jayson, Gordon C

    2009-01-01

    Purpose Little is known concerning the onset, duration and magnitude of direct therapeutic effects of anti-VEGF therapies. Such knowledge would help guide the rational development of targeted therapeutics from bench to bedside and optimize use of imaging technologies that quantify tumor function in early phase clinical trials. Experimental design Pre-clinical studies were performed using ex vivo micro-CT and in vivo ultrasound imaging to characterize tumor vasculature in a human HM-7 colorectal xenograft model treated with the anti-VEGF antibody G6-31. Clinical evaluation was by quantitative MRI in ten patients with metastatic CRC treated with bevacizumab. Results Micro-CT experiments demonstrated reduction in perfused vessels within 24-48 hours of G6-31 drug administration (p ≤ 0.005). Ultrasound imaging confirmed reduced tumor blood volume within the same time frame (p = 0.048). Consistent with the pre-clinical results, reductions in enhancing fraction and fractional plasma volume were detected in patient CRC metastases within 48 hours after a single dose of bevacizumab that persisted throughout one cycle of therapy. These effects were followed by resolution of edema (p = 0.0023) and tumor shrinkage in 9/26 tumors at day 12. Conclusion These data suggest that VEGF-specific inhibition induces rapid structural and functional effects with downstream significant anti-tumor activity within one cycle of therapy. This finding has important implications for the design of early phase clinical trials that incorporate physiological imaging. The study demonstrates how animal data help interpret clinical imaging data, an important step towards the validation of image biomarkers of tumor structure and function. PMID:19861458

  12. Quantifying brain shift during neurosurgery using spatially tracked ultrasound

    NASA Astrophysics Data System (ADS)

    Blumenthal, Tico; Hartov, Alex; Lunn, Karen; Kennedy, Francis E.; Roberts, David W.; Paulsen, Keith D.

    2005-04-01

    Brain shift during neurosurgery currently limits the effectiveness of stereotactic guidance systems that rely on preoperative image modalities like magnetic resonance (MR). The authors propose a process for quantifying intraoperative brain shift using spatially tracked freehand intraoperative ultrasound (iUS). First, one segments a distinct feature from the preoperative MR (tumor, ventricle, cyst, or falx) and extracts a faceted surface using the marching cubes algorithm. Planar contours are then semi-automatically segmented from two sets of iUS b-planes obtained (a) prior to the dural opening and (b) after the dural opening. These two sets of contours are reconstructed in the reference frame of the MR, composing two distinct sparsely sampled surface descriptions of the same feature segmented from MR. Using the Iterative Closest Point (ICP) algorithm, one obtains discrete estimates of the feature deformation by performing point-to-surface matching. Vector subtraction of the matched points can then be used as sparse deformation data inputs for inverse biomechanical brain tissue models. The results of these simulations are then used to modify the preoperative MR to account for intraoperative changes. The proposed process has undergone preliminary evaluations in a phantom study and was applied to data from two clinical cases. In the phantom study, the process recovered controlled deformations with an RMS error of 1.1 mm. These results also suggest that clinical accuracy would be on the order of 1-2 mm. This finding is consistent with prior work by the Dartmouth Image-Guided Neurosurgery (IGNS) group. In the clinical cases, the deformations obtained were used to produce qualitatively reasonable updated guidance volumes.
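
    The matching-and-subtraction step can be illustrated with a small sketch of the correspondence stage behind an ICP-style point-to-surface match. This is not the authors' implementation: a full ICP also iterates a rigid alignment and matches against the faceted surface rather than its vertices, and all arrays and function names below are hypothetical.

```python
import numpy as np

# Sketch: pair each intraoperative contour point with the closest vertex of the preoperative
# MR surface, and take the vector difference as a sparse displacement estimate.
def sparse_displacements(contour_points, mr_surface_vertices):
    """Return (matched_vertices, displacement_vectors) for Nx3 contour points
    against Mx3 MR surface vertices using nearest-vertex correspondences."""
    diffs = contour_points[:, None, :] - mr_surface_vertices[None, :, :]  # N x M x 3
    dists = np.linalg.norm(diffs, axis=2)                                 # N x M
    nearest = np.argmin(dists, axis=1)                                    # index of closest vertex per point
    matched = mr_surface_vertices[nearest]
    return matched, contour_points - matched

# Hypothetical data: a coarse MR surface point cloud and a handful of iUS contour points (mm).
rng = np.random.default_rng(0)
mr_vertices = rng.uniform(0, 50, size=(500, 3))
ius_points = rng.uniform(0, 50, size=(20, 3))

matched, displacements = sparse_displacements(ius_points, mr_vertices)
print("mean displacement magnitude (mm):", np.linalg.norm(displacements, axis=1).mean())
```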

  13. Quantified trends in the history of verbal behavior research

    PubMed Central

    Eshleman, John W.

    1991-01-01

    The history of scientific research on verbal behavior, especially research based on Verbal Behavior (Skinner, 1957), can be assessed on the basis of a frequency and celeration analysis of the published and presented literature. In order to discover these quantified trends, a comprehensive bibliographic database was developed. Based on several literature searches, the database included papers pertaining to verbal behavior that were published in the Journal of the Experimental Analysis of Behavior, the Journal of Applied Behavior Analysis, Behaviorism, The Behavior Analyst, and The Analysis of Verbal Behavior. A nonbehavioral journal, the Journal of Verbal Learning and Verbal Behavior, was assessed as a nonexample comparison. The database also included a listing of verbal behavior papers presented at the meetings of the Association for Behavior Analysis. Papers were added to the database if they (a) were about verbal behavior, (b) referenced B. F. Skinner's (1957) book Verbal Behavior, or (c) did both. Because each reference indicated its year of publication or presentation, a count per year could be measured. These yearly frequencies were plotted on Standard Celeration Charts. Once plotted, various celeration trends in the literature became visible, not the least of which was a greater quantity of verbal behavior research than is generally acknowledged. The data clearly show an acceleration of research across the past decade. The data also question the notion that a “paucity” of research based on Verbal Behavior currently exists. Explanations for the acceleration of verbal behavior research are suggested, and plausible reasons are offered as to why a relative lack of verbal behavior research extended from the mid-1960s to the late 1970s. PMID:22477630
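
    The counting-and-charting step can be sketched briefly: tally references by year and plot the yearly frequencies on a semi-logarithmic axis, which is the basic idea behind a celeration chart (counts on a log scale against linear calendar time). The publication years below are hypothetical and the plot is only an approximation of a Standard Celeration Chart, not a reproduction of the study's analysis.

```python
from collections import Counter
import matplotlib.pyplot as plt

# Hypothetical publication years for illustration only.
publication_years = [1972, 1975, 1975, 1981, 1984, 1984, 1984, 1988, 1989, 1990, 1990, 1990, 1990]

counts = Counter(publication_years)          # papers per year
years = sorted(counts)
freqs = [counts[y] for y in years]

plt.semilogy(years, freqs, "o-")             # log count axis, linear calendar axis
plt.xlabel("Year")
plt.ylabel("Papers per year (log scale)")
plt.title("Yearly frequency of verbal behavior papers (illustrative data)")
plt.show()
```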

  14. A FRAMEWORK FOR QUANTIFYING THE DEGENERACIES OF EXOPLANET INTERIOR COMPOSITIONS

    SciTech Connect

    Rogers, L. A.; Seager, S.

    2010-04-01

    Several transiting super-Earths are expected to be discovered in the coming few years. While tools to model the interior structure of transiting planets exist, inferences about the composition are fraught with ambiguities. We present a framework to quantify how much we can robustly infer about super-Earth and Neptune-size exoplanet interiors from radius and mass measurements. We introduce quaternary diagrams to illustrate the range of possible interior compositions for planets with four layers (iron core, silicate mantles, water layers, and H/He envelopes). We apply our model to CoRoT-7b, GJ 436b, and HAT-P-11b. Interpretation of planets with H/He envelopes is limited by the model uncertainty in the interior temperature, while for CoRoT-7b observational uncertainties dominate. We further find that our planet interior model sharpens the observational constraints on CoRoT-7b's mass and radius, assuming the planet does not contain significant amounts of water or gas. We show that the strength of the limits that can be placed on a super-Earth's composition depends on the planet's density; for similar observational uncertainties, high-density super-Mercuries allow the tightest composition constraints. Finally, we describe how techniques from Bayesian statistics can be used to take into account in a formal way the combined contributions of both theoretical and observational uncertainties to ambiguities in a planet's interior composition. On the whole, with only a mass and radius measurement an exact interior composition cannot be inferred for an exoplanet because the problem is highly underconstrained. Detailed quantitative ranges of plausible compositions, however, can be found.
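
    The way observational uncertainty maps into composition ambiguity can be caricatured with a deliberately oversimplified toy, shown below. It is not the paper's interior-structure model: the planet is treated as an incompressible mixture of only two components (iron and silicate), compression with depth is ignored, and the mass, radius, and uncertainties are hypothetical. The sketch only illustrates the general Monte Carlo idea of propagating mass and radius errors into a composition distribution.

```python
import numpy as np

# Toy two-component, incompressible composition inference (pedagogical stand-in only).
RHO_IRON = 8000.0      # kg/m^3, uncompressed toy density
RHO_SILICATE = 3300.0  # kg/m^3, uncompressed toy density
M_EARTH, R_EARTH = 5.972e24, 6.371e6

rng = np.random.default_rng(1)
mass = rng.normal(5.0, 0.8, 100_000) * M_EARTH      # hypothetical 5.0 +/- 0.8 Earth masses
radius = rng.normal(1.6, 0.1, 100_000) * R_EARTH    # hypothetical 1.6 +/- 0.1 Earth radii

# Bulk density from each (mass, radius) draw, then invert the two-component mixing rule
# 1/rho_bulk = x/rho_iron + (1 - x)/rho_silicate for the iron mass fraction x.
rho_bulk = mass / (4.0 / 3.0 * np.pi * radius**3)
iron_fraction = (1.0 / rho_bulk - 1.0 / RHO_SILICATE) / (1.0 / RHO_IRON - 1.0 / RHO_SILICATE)
iron_fraction = np.clip(iron_fraction, 0.0, 1.0)    # discard unphysical draws at the edges

lo, mid, hi = np.percentile(iron_fraction, [16, 50, 84])
print(f"toy iron mass fraction: {mid:.2f} (+{hi - mid:.2f} / -{mid - lo:.2f})")
```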

  15. Evaluation of airborne topographic lidar for quantifying beach changes

    USGS Publications Warehouse

    Sallenger, A.H., Jr.; Krabill, W.B.; Swift, R.N.; Brock, J.; List, J.; Hansen, M.; Holman, R.A.; Manizade, S.; Sontag, J.; Meredith, A.; Morgan, K.; Yunkel, J.K.; Frederick, E.B.; Stockdon, H.

    2003-01-01

    A scanning airborne topographic lidar was evaluated for its ability to quantify beach topography and changes during the Sandy Duck experiment in 1997 along the North Carolina coast. Elevation estimates, acquired with NASA's Airborne Topographic Mapper (ATM), were compared to elevations measured with three types of ground-based measurements - 1) differential GPS equipped all-terrain vehicle (ATV) that surveyed a 3-km reach of beach from the shoreline to the dune, 2) GPS antenna mounted on a stadia rod used to intensely survey a different 100 m reach of beach, and 3) a second GPS-equipped ATV that surveyed a 70-km-long transect along the coast. Over 40,000 individual intercomparisons between ATM and ground surveys were calculated. RMS vertical differences associated with the ATM when compared to ground measurements ranged from 13 to 19 cm. Considering all of the intercomparisons together, RMS ≈ 15 cm. This RMS error represents a total error for individual elevation estimates including uncertainties associated with random and mean errors. The latter was the largest source of error and was attributed to drift in differential GPS. The ≈ 15 cm vertical accuracy of the ATM is adequate to resolve beach-change signals typical of the impact of storms. For example, ATM surveys of Assateague Island (spanning the border of MD and VA) prior to and immediately following a severe northeaster showed vertical beach changes in places greater than 2 m, much greater than expected errors associated with the ATM. A major asset of airborne lidar is the high spatial data density. Measurements of elevation are acquired every few m² over regional scales of hundreds of kilometers. Hence, many scales of beach morphology and change can be resolved, from beach cusps tens of meters in wavelength to entire coastal cells comprising tens to hundreds of kilometers of coast. Topographic lidars similar to the ATM are becoming increasingly available from commercial vendors and should, in the future, be widely used in beach surveying.
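
    The intercomparison statistics quoted above boil down to the RMS of the vertical differences at matched points, with the mean difference (bias) reported separately. The sketch below shows that computation on a handful of hypothetical lidar/ground elevation pairs; it is not the study's processing chain.

```python
import math

# Hypothetical matched elevation pairs in metres: (ATM lidar elevation, ground-survey elevation).
pairs = [(2.31, 2.18), (2.95, 3.07), (1.42, 1.39), (4.10, 4.28), (3.57, 3.44)]

diffs = [atm - ground for atm, ground in pairs]
bias = sum(diffs) / len(diffs)                          # mean (systematic) difference
rms = math.sqrt(sum(d * d for d in diffs) / len(diffs)) # total RMS difference, includes the bias

print(f"mean difference (bias): {bias:+.3f} m")
print(f"RMS difference:         {rms:.3f} m")
```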

  16. Nature's Late-Night Light Shows

    NASA Astrophysics Data System (ADS)

    Peterson, Carolyn Collins

    2002-09-01

    In addition to stars and planets, there are other interesting lights to be seen in the night sky. The northern and southern lights, called the aurora borealis and aurora australis, are created by charged particles from the Sun interacting with Earth's magnetic field and upper atmosphere. Night-shining clouds, or noctilucent clouds, appear at evening twilight as a result of water vapor in the polar mesosphere. Zodiacal light can be seen stretching up from the horizon after sunset or before sunrise.

  17. Color Voyager 2 Image Showing Crescent Uranus

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This image shows a crescent Uranus, a view that Earthlings never witnessed until Voyager 2 flew near and then beyond Uranus on January 24, 1986. This planet's natural blue-green color is due to the absorption of redder wavelengths in the atmosphere by traces of methane gas. Uranus' diameter is 32,500 miles, a little over four times that of Earth. The hazy blue-green atmosphere probably extends to a depth of around 5,400 miles, where it rests above what is believed to be an icy or liquid mixture (an 'ocean') of water, ammonia, methane, and other volatiles, which in turn surrounds a rocky core perhaps a little smaller than Earth.

  18. Microbiological and environmental issues in show caves.

    PubMed

    Saiz-Jimenez, Cesareo

    2012-07-01

    Cultural tourism expanded in the last half of the twentieth century, and the interest of visitors has come to include caves containing archaeological remains. Some show caves attracted mass tourism, and economic interests prevailed over conservation, which led to a deterioration of the subterranean environment and the rock art. The presence and role of microorganisms in caves is a topic that is often ignored in cave management. Knowledge of the colonisation patterns and dispersion mechanisms of these microorganisms, and of their effects on human health and, where present, on rock art paintings, is of the utmost importance. In this review the most recent advances in the study of microorganisms in caves are presented, together with the environmental implications of the findings. PMID:22806150

  19. Simulation shows hospitals that cooperate on infection control obtain better results than hospitals acting alone.

    PubMed

    Lee, Bruce Y; Bartsch, Sarah M; Wong, Kim F; Yilmaz, S Levent; Avery, Taliser R; Singh, Ashima; Song, Yeohan; Kim, Diane S; Brown, Shawn T; Potter, Margaret A; Platt, Richard; Huang, Susan S

    2012-10-01

    Efforts to control life-threatening infections, such as with methicillin-resistant Staphylococcus aureus (MRSA), can be complicated when patients are transferred from one hospital to another. Using a detailed computer simulation model of all hospitals in Orange County, California, we explored the effects when combinations of hospitals tested all patients at admission for MRSA and adopted procedures to limit transmission among patients who tested positive. Called "contact isolation," these procedures specify precautions for health care workers interacting with an infected patient, such as wearing gloves and gowns. Our simulation demonstrated that each hospital's decision to test for MRSA and implement contact isolation procedures could affect the MRSA prevalence in all other hospitals. Thus, our study makes the case that further cooperation among hospitals--which is already reflected in a few limited collaborative infection control efforts under way--could help individual hospitals achieve better infection control than they could achieve on their own. PMID:23048111
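
    The coupling effect described above can be caricatured with a toy sketch: prevalence mixes between hospitals through patient transfers, and one hospital's testing plus contact isolation is represented as a simple reduction factor on its within-hospital transmission. This is not the detailed agent-based simulation used in the study; the transfer matrix, growth rate, and isolation effect below are all hypothetical.

```python
import numpy as np

# Toy inter-hospital MRSA prevalence model (illustrative only).
transfer = np.array([[0.90, 0.06, 0.04],     # row i: fraction of hospital i's admissions from each hospital
                     [0.05, 0.92, 0.03],
                     [0.04, 0.05, 0.91]])
growth = 1.02                                # hypothetical within-hospital amplification per step
isolation_effect = np.array([0.85, 1.0, 1.0])  # hospital 0 adopts admission testing + contact isolation

def run(prevalence, steps=100, control=None):
    """Iterate prevalence mixing (transfers), growth, and per-hospital control."""
    control = np.ones_like(prevalence) if control is None else control
    for _ in range(steps):
        prevalence = np.clip(transfer @ prevalence * growth * control, 0.0, 1.0)
    return prevalence

p0 = np.array([0.10, 0.10, 0.10])
print("no control:          ", run(p0).round(3))
print("hospital 0 controls: ", run(p0, control=isolation_effect).round(3))
```

    Even in this crude setting, suppressing transmission at one hospital lowers the long-run prevalence at the hospitals it exchanges patients with, which is the qualitative point the simulation study makes with far more realism.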

  20. Simulation Shows Hospitals That Cooperate On Infection Control Obtain Better Results Than Hospitals Acting Alone

    PubMed Central

    Lee, Bruce Y.; Bartsch, Sarah M.; Wong, Kim F.; Yilmaz, S. Levent; Avery, Taliser R.; Singh, Ashima; Song, Yeohan; Kim, Diane S.; Brown, Shawn T.; Potter, Margaret A.; Platt, Richard; Huang, Susan S.

    2013-01-01

    Efforts to control life-threatening infections, such as with methicillin-resistant Staphylococcus aureus (MRSA), can be complicated when patients are transferred from one hospital to another. Using a detailed computer simulation model of all hospitals in Orange County, California, we explored the effects when combinations of hospitals tested all patients at admission for MRSA and adopted procedures to limit transmission among patients who tested positive. Called “contact isolation,” these procedures specify precautions for health care workers interacting with an infected patient, such as wearing gloves and gowns. Our simulation demonstrated that each hospital’s decision to test for MRSA and implement contact isolation procedures could affect the MRSA prevalence in all other hospitals. Thus, our study makes the case that further cooperation among hospitals—which is already reflected in a few limited collaborative infection control efforts under way—could help individual hospitals achieve better infection control than they could achieve on their own. PMID:23048111