Sample records for initial estimates suggest

  1. Comparative assessment of techniques for initial pose estimation using monocular vision

    NASA Astrophysics Data System (ADS)

    Sharma, Sumant; D'Amico, Simone

    2016-06-01

    This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements, and without any a priori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrast. This paper focuses on the performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
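
    A minimal sketch of the general technique the abstract describes, i.e. recovering a coarse pose from a handful of 2D-3D feature correspondences with a perspective-n-point solver. It is not one of the specific algorithms compared in the paper, and the model points, image detections, and camera intrinsics below are invented for illustration.

    ```python
    # Coarse initial pose from a single image and a known 3-D model (PnP).
    # Illustrative only: the 3-D model points, 2-D detections, and camera
    # intrinsics are hypothetical, not data from the paper.
    import numpy as np
    import cv2

    # Known 3-D feature locations on the target model (metres).
    model_pts = np.array([[0.0, 0.0, 0.0],
                          [1.2, 0.0, 0.0],
                          [0.0, 0.8, 0.0],
                          [1.2, 0.8, 0.0],
                          [0.6, 0.4, 0.5],
                          [0.3, 0.2, 0.2]], dtype=np.float64)

    # Matching 2-D detections in the single monocular image (pixels).
    image_pts = np.array([[320.0, 240.0],
                          [410.0, 238.0],
                          [322.0, 180.0],
                          [413.0, 178.0],
                          [368.0, 205.0],
                          [341.0, 222.0]], dtype=np.float64)

    # Pinhole camera intrinsics (focal length in pixels, principal point).
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    dist = np.zeros(5)  # assume negligible lens distortion

    ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, dist,
                                  flags=cv2.SOLVEPNP_EPNP)
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix (attitude of model in camera frame)
    print("converged:", ok)
    print("attitude R:\n", R)
    print("position t (m):", tvec.ravel())
    ```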

  2. Parent-Child Communication and Marijuana Initiation: Evidence Using Discrete-Time Survival Analysis

    PubMed Central

    Nonnemaker, James M.; Silber-Ashley, Olivia; Farrelly, Matthew C.; Dench, Daniel

    2012-01-01

    This study supplements existing literature on the relationship between parent-child communication and adolescent drug use by exploring whether parental and/or adolescent recall of specific drug-related conversations differentially impact youth's likelihood of initiating marijuana use. Using discrete-time survival analysis, we estimated the hazard of marijuana initiation using a logit model to obtain an estimate of the relative risk of initiation. Our results suggest that parent-child communication about drug use is either not protective (no effect) or—in the case of youth reports of communication—potentially harmful (leading to increased likelihood of marijuana initiation). PMID:22958867
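
    A compact sketch of the discrete-time survival setup described here: each youth contributes one row per time period until initiation or censoring, and a logit model of the initiation indicator yields period-specific hazards and a relative-risk-style odds ratio. The data and covariate name are synthetic, not the study's.

    ```python
    # Discrete-time survival analysis via a person-period logit model.
    # Synthetic data for illustration; variable names are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_youth, n_periods = 500, 6

    rows = []
    for i in range(n_youth):
        talked = rng.integers(0, 2)            # parent-child drug conversation (0/1)
        for t in range(1, n_periods + 1):
            # true hazard rises with age (period) and, here, slightly with "talked"
            p = 1 / (1 + np.exp(-(-3.0 + 0.25 * t + 0.2 * talked)))
            event = rng.random() < p
            rows.append({"id": i, "period": t, "talked": talked, "init": int(event)})
            if event:
                break                          # no person-period rows after initiation

    pp = pd.DataFrame(rows)                    # person-period data set
    X = sm.add_constant(pp[["period", "talked"]])
    fit = sm.Logit(pp["init"], X).fit(disp=0)
    print(fit.params)                          # log-odds of initiation per period
    print("odds ratio for communication:", np.exp(fit.params["talked"]))
    ```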

  3. Parent-child communication and marijuana initiation: evidence using discrete-time survival analysis.

    PubMed

    Nonnemaker, James M; Silber-Ashley, Olivia; Farrelly, Matthew C; Dench, Daniel

    2012-12-01

    This study supplements existing literature on the relationship between parent-child communication and adolescent drug use by exploring whether parental and/or adolescent recall of specific drug-related conversations differentially impact youth's likelihood of initiating marijuana use. Using discrete-time survival analysis, we estimated the hazard of marijuana initiation using a logit model to obtain an estimate of the relative risk of initiation. Our results suggest that parent-child communication about drug use is either not protective (no effect) or - in the case of youth reports of communication - potentially harmful (leading to increased likelihood of marijuana initiation). Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. NREL Screens Universities for Solar and Battery Storage Potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    In support of the U.S. Department of Energy's SunShot initiative, NREL provided solar photovoltaic (PV) screenings in 2016 for eight universities seeking to go solar. NREL conducted an initial technoeconomic assessment of PV and storage feasibility at the selected universities using the REopt model, an energy planning platform that can be used to evaluate RE options, estimate costs, and suggest a mix of RE technologies to meet defined assumptions and constraints. NREL provided each university with customized results, including the cost-effectiveness of PV and storage, recommended system size, estimated capital cost to implement the technology, and estimated life cycle cost savings.

  5. Reef fish communities are spooked by scuba surveys and may take hours to recover

    PubMed Central

    Cheal, Alistair J.; Miller, Ian R.

    2018-01-01

    Ecological monitoring programs typically aim to detect changes in the abundance of species of conservation concern or which reflect system status. Coral reef fish assemblages are functionally important for reef health and these are most commonly monitored using underwater visual surveys (UVS) by divers. In addition to estimating numbers, most programs also collect estimates of fish lengths to allow calculation of biomass, an important determinant of a fish’s functional impact. However, diver surveys may be biased because fishes may either avoid or be attracted to divers, and the process of estimating fish length could result in fish counts that differ from those made without length estimations. Here we investigated whether (1) general diver disturbance and (2) the additional task of estimating fish lengths affected estimates of reef fish abundance and species richness during UVS, and for how long. Initial estimates of abundance and species richness were significantly higher than those made on the same section of reef after diver disturbance. However, there was no evidence that estimating fish lengths at the same time as abundance resulted in counts different from those made when estimating abundance alone. Similarly, there was little consistent bias among observers. Estimates of the time for fish taxa that avoided divers after initial contact to return to initial levels of abundance varied from three to 17 h, with one group of exploited fishes showing initial attraction to divers that declined over the study period. Our finding that many reef fishes may disperse for such long periods after initial contact with divers suggests that monitoring programs should take great care to minimise diver disturbance prior to surveys. PMID:29844998

  6. Maritime Military Decision Making in Environments of Extreme Information Ambiguity: An Initial Exploration

    DTIC Science & Technology

    2005-09-01

  7. Inter-Method Reliability of School Effectiveness Measures: A Comparison of Value-Added and Regression Discontinuity Estimates

    ERIC Educational Resources Information Center

    Perry, Thomas

    2017-01-01

    Value-added (VA) measures are currently the predominant approach used to compare the effectiveness of schools. Recent educational effectiveness research, however, has developed alternative approaches including the regression discontinuity (RD) design, which also allows estimation of absolute school effects. Initial research suggests RD is a viable…

  8. Quantifying potential health impacts of cadmium in cigarettes on smoker risk of lung cancer: a portfolio-of-mechanisms approach.

    PubMed

    Cox, Louis Anthony Tony

    2006-12-01

    This article introduces an approach to estimating the uncertain potential effects on lung cancer risk of removing a particular constituent, cadmium (Cd), from cigarette smoke, given the useful but incomplete scientific information available about its modes of action. The approach considers normal cell proliferation; DNA repair inhibition in normal cells affected by initiating events; proliferation, promotion, and progression of initiated cells; and death or sparing of initiated and malignant cells as they are further transformed to become fully tumorigenic. Rather than estimating unmeasured model parameters by curve fitting to epidemiological or animal experimental tumor data, we attempt rough estimates of parameters based on their biological interpretations and comparison to corresponding genetic polymorphism data. The resulting parameter estimates are admittedly uncertain and approximate, but they suggest a portfolio approach to estimating impacts of removing Cd that gives usefully robust conclusions. This approach views Cd as creating a portfolio of uncertain health impacts that can be expressed as biologically independent relative risk factors having clear mechanistic interpretations. Because Cd can act through many distinct biological mechanisms, it appears likely (subjective probability greater than 40%) that removing Cd from cigarette smoke would reduce smoker risks of lung cancer by at least 10%, although it is possible (consistent with what is known) that the true effect could be much larger or smaller. Conservative estimates and assumptions made in this calculation suggest that the true impact could be greater for some smokers. This conclusion appears to be robust to many scientific uncertainties about Cd and smoking effects.
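
    The portfolio-of-mechanisms idea lends itself to a short numerical sketch: treat each mechanism as an independent relative-risk factor, sample each factor from an uncertainty range, and ask how often the combined effect implies at least a 10% risk reduction if Cd were removed. The per-mechanism ranges below are invented for illustration, not the article's estimates.

    ```python
    # Portfolio-of-mechanisms sketch: combine independent relative-risk factors
    # multiplicatively and propagate uncertainty by Monte Carlo.
    # The per-mechanism ranges are hypothetical, not the article's values.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Relative-risk factor attributable to Cd for each mechanism (1.0 = no effect),
    # sampled from loose uniform uncertainty ranges.
    proliferation = rng.uniform(1.00, 1.10, n)
    repair_inhibition = rng.uniform(1.00, 1.15, n)
    promotion = rng.uniform(1.00, 1.12, n)
    apoptosis_sparing = rng.uniform(1.00, 1.08, n)

    rr_with_cd = proliferation * repair_inhibition * promotion * apoptosis_sparing
    risk_reduction = 1.0 - 1.0 / rr_with_cd      # fractional reduction if Cd removed

    print("median reduction: %.1f%%" % (100 * np.median(risk_reduction)))
    print("P(reduction >= 10%%): %.2f" % np.mean(risk_reduction >= 0.10))
    ```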

  9. Adsorption of the Martian regolith: Specific surface area and missing CO₂

    NASA Technical Reports Server (NTRS)

    Zent, A. P.; Fanale, F. P.; Postawko, S. E.

    1987-01-01

    For most estimates of available regolith and initial degassed CO₂ inventories, it appears that any initial inventory must have been lost to space or incorporated into carbonates. Most estimates of the total available degassed CO₂ inventory are only marginally sufficient to allow for a major early greenhouse effect. It is suggested that the requirements for greenhouse warming to produce old desiccated terrain would be greatly lessened if groundwater brines rather than rainfall were involved and if a higher internal gradient were involved to raise the water (brine) table, leading to more frequent sapping.

  10. Linking resource selection and mortality modeling for population estimation of mountain lions in Montana

    USGS Publications Warehouse

    Robinson, Hugh S.; Ruth, Toni K.; Gude, Justin A.; Choate, David; DeSimone, Rich; Hebblewhite, Mark; Matchett, Marc R.; Mitchell, Michael S.; Murphy, Kerry; Williams, Jim

    2015-01-01

    To be most effective, the scale of wildlife management practices should match the range of a particular species’ movements. For this reason, combined with our inability to rigorously or regularly census mountain lion populations, several authors have suggested that mountain lions be managed in a source-sink or metapopulation framework. We used a combination of resource selection functions, mortality estimation, and dispersal modeling to estimate cougar population levels in Montana statewide and potential population-level effects of planned harvest levels. Between 1980 and 2012, 236 independent mountain lions were collared and monitored for research in Montana. From these data we used 18,695 GPS locations collected during winter from 85 animals to develop a resource selection function (RSF), and 11,726 VHF and GPS locations from 142 animals along with the locations of 6343 mountain lions harvested from 1988–2011 to validate the RSF model. Our RSF model validated well in all portions of the State, although it appeared to perform better in Montana Fish, Wildlife and Parks (MFWP) Regions 1, 2, 4 and 6, than in Regions 3, 5, and 7. Our mean RSF-based population estimate for the total population (kittens, juveniles, and adults) of mountain lions in Montana in 2005 was 3926, with almost 25% of the entire population in MFWP Region 1. Estimates based on high and low reference population estimates produce a possible range of 2784 to 5156 mountain lions statewide. Based on a range of possible survival rates we estimated the mountain lion population in Montana to be stable to slightly increasing between 2005 and 2010, with λ ranging from 0.999 (SD = 0.05) to 1.02 (SD = 0.03). We believe these population growth rates to be a conservative estimate of true population growth. Our model suggests that proposed changes to female harvest quotas for 2013–2015 will result in an annual statewide population decline of 3% and shows that, due to reduced dispersal, changes to harvest in one management unit may affect population growth in neighboring units where smaller or even no changes were made. Uncertainty regarding dispersal levels and initial population density may have a significant effect on predictions at a management unit scale (i.e. 2000 km²), while at a regional scale (i.e. 50,000 km²) large differences in initial population density result in relatively small changes in population growth rate, and uncertainty about dispersal may not be as influential. Doubling the presumed initial density from a low estimate of 2.19 total animals per 100 km² resulted in a difference in annual population growth rate of only 2.6% statewide when compared to a high density of 4.04 total animals per 100 km² (low initial population estimate λ = 0.99, while high initial population estimate λ = 1.03). We suggest modeling tools such as this may be useful in harvest planning at a regional and statewide level.
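
    One common way an RSF is scaled to abundance is sketched below, under the assumption that expected density is proportional to each cell's RSF value and anchored by an independent reference-area estimate. This is a generic illustration, not the authors' exact model, and all numbers are invented.

    ```python
    # Scaling a resource selection function (RSF) to a population estimate:
    # assume expected density is proportional to the RSF value of each cell and
    # anchor the proportionality constant with a reference-area estimate.
    # All values are hypothetical.
    import numpy as np

    rng = np.random.default_rng(2)
    rsf = rng.random(10_000)           # relative selection value per 1-km2 cell, statewide
    ref_cells = slice(0, 1000)         # cells inside the intensively studied reference area
    N_ref = 250                        # independent population estimate for the reference area

    density_per_rsf = N_ref / rsf[ref_cells].sum()   # animals per unit of summed RSF
    N_statewide = density_per_rsf * rsf.sum()

    print("implied statewide estimate: %.0f mountain lions" % N_statewide)
    ```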

  11. Have we left some behind? Trends in socio-economic inequalities in breastfeeding initiation: a population-based epidemiological surveillance study.

    PubMed

    Nickel, Nathan C; Martens, Patricia J; Chateau, Dan; Brownell, Marni D; Sarkar, Joykrishna; Goh, Chun Yan; Burland, Elaine; Taylor, Carole; Katz, Alan

    2014-07-31

    Breastfeeding is associated with improved health. Surveillance data show that breastfeeding initiation rates have increased; however, limited work has examined trends in socio-economic inequalities in initiation. The study's research question was whether socio-economic inequalities in breastfeeding initiation have changed over the past 20 years. This population-based study is a project within PATHS Equity for Children. Analyses used hospital discharge data for Manitoba mother-infant dyads with live births, 1988-2011 (n=316,027). Income quintiles were created, each with ~20% of dyads. Three-year, overall and by-quintile breastfeeding initiation rates were estimated for Manitoba and two hospitals. Age-adjusted rates were estimated for Manitoba. Rates were modelled using generalized linear models. Three measures, rate ratios (RRs), rate differences (RDs) and concentration indices, assessed inequality at each time point. We also compared concentration indices with Gini coefficients to assess breastfeeding inequality vis-à-vis income inequality. Trend analyses tested for changes over time. Manitoba and Hospital A initiation rates increased; Hospital B rates did not change. Significant inequalities existed in nearly every period, across all three measures: RRs, RDs and concentration indices. RRs and concentration indices suggested little to no change in inequality from 1988 to 2011. RDs for Manitoba (comparing initiation in the highest to lowest income quintiles) did not change significantly over time. RDs decreased for Hospital A, suggesting decreasing socio-economic inequalities in breastfeeding; RDs increased for Hospital B. Income inequality increased significantly in Manitoba during the study period. Overall breastfeeding initiation rates can improve while inequality persists or worsens.
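
    The three inequality measures named here are straightforward to compute from quintile-level initiation rates; the sketch below uses made-up rates and population shares together with the grouped-data form of the concentration index.

    ```python
    # Rate ratio, rate difference, and concentration index from grouped
    # (income-quintile) breastfeeding-initiation data. Rates are hypothetical.
    import numpy as np

    # Quintiles ordered poorest (Q1) to richest (Q5).
    pop_share = np.array([0.2, 0.2, 0.2, 0.2, 0.2])
    init_rate = np.array([0.68, 0.74, 0.80, 0.85, 0.90])

    rate_ratio = init_rate[-1] / init_rate[0]        # richest vs poorest
    rate_diff = init_rate[-1] - init_rate[0]

    # Concentration index for grouped data (Kakwani/Wagstaff form):
    # C = (2/mu) * sum_i f_i * y_i * R_i - 1, with R_i the fractional rank midpoint.
    mu = np.sum(pop_share * init_rate)
    cum = np.cumsum(pop_share)
    R = cum - pop_share / 2.0                        # midpoint fractional ranks
    C = 2.0 / mu * np.sum(pop_share * init_rate * R) - 1.0

    print(f"RR = {rate_ratio:.2f}, RD = {rate_diff:.2f}, concentration index = {C:.3f}")
    ```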

  12. Estimating initial contaminant mass based on fitting mass-depletion functions to contaminant mass discharge data: Testing method efficacy with SVE operations data

    NASA Astrophysics Data System (ADS)

    Mainhagu, J.; Brusseau, M. L.

    2016-09-01

    The mass of contaminant present at a site, particularly in the source zones, is one of the key parameters for assessing the risk posed by contaminated sites, and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial 1/3 of the data available). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed for the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in a minimally higher variation for the exponential function (17%) but a much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
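
    A sketch of the fitting exercise described in the abstract: fit exponential and power mass-depletion functions to an early portion of a contaminant mass discharge series, then integrate the fitted function to estimate initial mass. Synthetic data stand in for the SVE records.

    ```python
    # Fit exponential and power mass-depletion functions to contaminant mass
    # discharge (CMD) data and estimate the initial mass. Data are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def cmd_exponential(t, q0, k):
        return q0 * np.exp(-k * t)              # discharge rate (e.g. kg/month)

    def cmd_power(t, a, b):
        return a * np.power(t, -b)              # power-law decline (t > 0)

    rng = np.random.default_rng(3)
    t = np.arange(1.0, 121.0)                   # months of SVE operation
    cmd = cmd_exponential(t, 50.0, 0.03) * rng.lognormal(0.0, 0.1, t.size)

    # Fit to early-time data only (first third), mimicking use early in remediation.
    t_early, cmd_early = t[:40], cmd[:40]
    p_exp, _ = curve_fit(cmd_exponential, t_early, cmd_early, p0=[cmd_early[0], 0.01])
    p_pow, _ = curve_fit(cmd_power, t_early, cmd_early, p0=[cmd_early[0], 0.5])

    # Initial mass = integral of the fitted discharge function over time.
    m0_exp = p_exp[0] / p_exp[1]                              # closed form for exponential
    m0_pow = cmd_power(np.arange(1.0, 2001.0), *p_pow).sum()  # numeric, long horizon

    measured_total = cmd.sum()                  # proxy for the mass actually removed
    print(f"removed ~{measured_total:.0f}, exponential fit {m0_exp:.0f}, power fit {m0_pow:.0f}")
    ```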

  13. I've Fallen and I Can't Get up: Can High-Ability Students Recover from Early Mistakes in CAT?

    ERIC Educational Resources Information Center

    Rulison, Kelly L.; Loken, Eric

    2009-01-01

    A difficult result to interpret in Computerized Adaptive Tests (CATs) occurs when an ability estimate initially drops and then ascends continuously until the test ends, suggesting that the true ability may be higher than implied by the final estimate. This study explains why this asymmetry occurs and shows that early mistakes by high-ability…

  14. A cumulative shear mechanism for tissue damage initiation in shock-wave lithotripsy.

    PubMed

    Freund, Jonathan B; Colonius, Tim; Evan, Andrew P

    2007-09-01

    Evidence suggests that inertial cavitation plays an important role in the renal injury incurred during shock-wave lithotripsy. However, it is unclear how tissue damage is initiated, and significant injury typically occurs only after a sufficient dose of shock waves. Although it has been suggested that shock-induced shearing might initiate injury, estimates indicate that individual shocks do not produce sufficient shear to do so. In this paper, we hypothesize that the cumulative shear of the many shocks is damaging. This mechanism depends on whether there is sufficient time between shocks for tissue to relax to its unstrained state. We investigate the mechanism with a physics-based simulation model, wherein the basement membranes that define the tubules and vessels in the inner medulla are represented as elastic shells surrounded by viscous fluid. Material properties are estimated from in-vitro tests of renal basement membranes and documented mechanical properties of cells and extracellular gels. Estimates for the net shear deformation from a typical lithotripter shock (approximately 0.1%) are found from a separate dynamic shock simulation. The results suggest that the larger interstitial volume (approximately 40%) near the papilla tip gives the tissue there a relaxation time comparable to clinical shock delivery rates (approximately 1 Hz), thus allowing shear to accumulate. Away from the papilla tip, where the interstitial volume is smaller (approximately 20%), the model tissue relaxes completely before the next shock would be delivered. Implications of the model are that slower delivery rates and broader focal zones should both decrease injury, consistent with some recent observations.

  15. Disruption of State Estimation in the Human Lateral Cerebellum

    PubMed Central

    Miall, R. Chris; Christensen, Lars O. D; Cain, Owen; Stanley, James

    2007-01-01

    The cerebellum has been proposed to be a crucial component in the state estimation process that combines information from motor efferent and sensory afferent signals to produce a representation of the current state of the motor system. Such a state estimate of the moving human arm would be expected to be used when the arm is rapidly and skillfully reaching to a target. We now report the effects of transcranial magnetic stimulation (TMS) over the ipsilateral cerebellum as healthy humans were made to interrupt a slow voluntary movement to rapidly reach towards a visually defined target. Errors in the initial direction and in the final finger position of this reach-to-target movement were significantly higher for cerebellar stimulation than they were in control conditions. The average directional errors in the cerebellar TMS condition were consistent with the reaching movements being planned and initiated from an estimated hand position that was 138 ms out of date. We suggest that these results demonstrate that the cerebellum is responsible for estimating the hand position over this time interval and that TMS disrupts this state estimate. PMID:18044990

  16. Recent Surface Reflectance Measurement Campaigns with Emphasis on Best Practices, SI Traceability and Uncertainty Estimation

    NASA Technical Reports Server (NTRS)

    Helder, Dennis; Thome, Kurtis John; Aaron, Dave; Leigh, Larry; Czapla-Myers, Jeff; Leisso, Nathan; Biggar, Stuart; Anderson, Nik

    2012-01-01

    A significant problem facing the optical satellite calibration community is limited knowledge of the uncertainties associated with fundamental measurements, such as surface reflectance, used to derive satellite radiometric calibration estimates. In addition, it is difficult to compare the capabilities of calibration teams around the globe, which leads to differences in the estimated calibration of optical satellite sensors. This paper reports on two recent field campaigns that were designed to isolate common uncertainties within and across calibration groups, particularly with respect to ground-based surface reflectance measurements. Initial results from these efforts suggest the uncertainties can be as low as 1.5% to 2.5%. In addition, methods for improving the cross-comparison of calibration teams are suggested that can potentially reduce the differences in the calibration estimates of optical satellite sensors.

  17. Experimental evaluation of rigor mortis. VIII. Estimation of time since death by repeated measurements of the intensity of rigor mortis on rats.

    PubMed

    Krompecher, T

    1994-10-21

    The development of the intensity of rigor mortis was monitored in nine groups of rats. The measurements were initiated after 2, 4, 5, 6, 8, 12, 15, 24, and 48 h post mortem (p.m.) and lasted 5-9 h, which ideally should correspond to the usual procedure after the discovery of a corpse. The experiments were carried out at an ambient temperature of 24 degrees C. Measurements initiated early after death resulted in curves with a rising portion, a plateau, and a descending slope. Delaying the initial measurement translated into shorter rising portions, and curves initiated 8 h p.m. or later are comprised of a plateau and/or a downward slope only. Three different phases were observed suggesting simple rules that can help estimate the time since death: (1) if an increase in intensity was found, the initial measurements were conducted not later than 5 h p.m.; (2) if only a decrease in intensity was observed, the initial measurements were conducted not earlier than 7 h p.m.; and (3) at 24 h p.m., the resolution is complete, and no further changes in intensity should occur. Our results clearly demonstrate that repeated measurements of the intensity of rigor mortis allow a more accurate estimation of the time since death of the experimental animals than the single measurement method used earlier. A critical review of the literature on the estimation of time since death on the basis of objective measurements of the intensity of rigor mortis is also presented.
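
    The three rules stated in the abstract translate directly into a small decision helper; the function below simply encodes them and is not a validated forensic tool.

    ```python
    # Encode the three empirical rules from repeated rigor mortis measurements
    # (rat data, 24 degrees C ambient). Illustrative only, not a forensic tool.
    def interpret_rigor_series(intensities):
        """intensities: repeated rigor intensity measurements, earliest first."""
        increased = any(b > a for a, b in zip(intensities, intensities[1:]))
        decreased = any(b < a for a, b in zip(intensities, intensities[1:]))
        if increased:
            return "first measurement made no later than ~5 h post mortem"
        if decreased:
            return "first measurement made no earlier than ~7 h post mortem"
        return "no change: resolution complete, likely >= 24 h post mortem"

    print(interpret_rigor_series([2.0, 3.5, 3.6, 3.0]))   # rising then falling
    print(interpret_rigor_series([4.0, 3.1, 2.2]))        # falling only
    ```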

  18. Promising Variants of Initiation of Martensitic γ - α Transformation in Iron Alloys by a Couple of Elastic Waves

    NASA Astrophysics Data System (ADS)

    Kashchenko, M. P.; Chashchina, V. G.

    2016-01-01

    Variants of initiation of growth of crystals of α-martensite by couples of elastic waves propagating in directions <001>γ and <110>γ in single crystals of Fe-31Ni are suggested. The dynamic theory is used to show that the expected orientations of habit planes {110}γ, {001}γ and {559}γ differ from the typical {3 10 15}γ. Possible features of tetragonality of martensite crystals are discussed. The power of the sources of ultrasound required for initiation of the γ - α martensitic transformation is estimated.

  19. Woodpecker densities in the big woods of Arkansas

    USGS Publications Warehouse

    Luscier, J.D.; Krementz, David G.

    2010-01-01

    Sightings of the now-feared-extinct ivory-billed woodpecker Campephilus principalis in 2004 in the Big Woods of Arkansas initiated a series of studies on how to best manage habitat for this endangered species as well as all woodpeckers in the area. Previous work suggested that densities of other woodpeckers, particularly pileated Dryocopus pileatus and red-bellied Melanerpes carolinus woodpeckers, might be useful in characterizing habitat use by the ivory-billed woodpecker. We estimated densities of six woodpecker species in the Big Woods during the breeding seasons of 2006 and 2007 and also during the winter season of 2007. Our estimated densities were as high as or higher than previously published woodpecker density estimates for the Southeastern United States. Density estimates ranged from 9.1 to 161.3 individuals/km² across six woodpecker species. Our data suggest that the Big Woods of Arkansas is attractive to all woodpeckers using the region, including ivory-billed woodpeckers.

  20. Using Population Dose to Evaluate Community-level Health Initiatives.

    PubMed

    Harner, Lisa T; Kuo, Elena S; Cheadle, Allen; Rauzon, Suzanne; Schwartz, Pamela M; Parnell, Barbara; Kelly, Cheryl; Solomon, Loel

    2018-05-01

    Successful community-level health initiatives require implementing an effective portfolio of strategies and understanding their impact on population health. These factors are complicated by the heterogeneity of overlapping multicomponent strategies and availability of population-level data that align with the initiatives. To address these complexities, the population dose methodology was developed for planning and evaluating multicomponent community initiatives. Building on the population dose methodology previously developed, this paper operationalizes dose estimates of one initiative targeting youth physical activity as part of the Kaiser Permanente Community Health Initiative, a multicomponent community-level obesity prevention initiative. The technical details needed to operationalize the population dose method are explained, and the use of population dose as an interim proxy for population-level survey data is introduced. The alignment of the estimated impact from strategy-level data analysis using the dose methodology and the data from the population-level survey suggest that dose is useful for conducting real-time evaluation of multiple heterogeneous strategies, and as a viable proxy for existing population-level surveys when robust strategy-level evaluation data are collected. This article is part of a supplement entitled Building Thriving Communities Through Comprehensive Community Health Initiatives, which is sponsored by Kaiser Permanente, Community Health. Copyright © 2018 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
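
    A back-of-the-envelope sketch of the dose idea, using the commonly cited definition dose = reach × strength summed over strategies; the strategies and numbers below are invented, and the paper's exact operationalization may differ.

    ```python
    # Population dose sketch: dose = reach x strength per strategy, summed across
    # the initiative's portfolio. Strategy names and numbers are hypothetical.
    strategies = [
        # (name, reach = fraction of target youth exposed,
        #        strength = expected relative change in physical activity per exposed youth)
        ("school PE policy change",       0.60, 0.05),
        ("after-school activity program", 0.15, 0.20),
        ("new park programming",          0.30, 0.08),
    ]

    total_dose = sum(reach * strength for _, reach, strength in strategies)
    print(f"estimated population dose: {100 * total_dose:.1f}% expected population-level change")
    for name, reach, strength in strategies:
        print(f"  {name}: {100 * reach * strength:.1f} percentage points")
    ```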

  1. Seismic Oceanography's Failure to Flourish: A Possible Solution

    NASA Astrophysics Data System (ADS)

    Ruddick, B. R.

    2018-01-01

    A recent paper in Journal of Geophysical Research: Oceans used multichannel seismic observations to map estimates of internal wave mixing in the Gulf of Mexico, finding greatly enhanced mixing over the slope region. These results suggest that the ocean margins may supply the mixing required to close the global thermohaline circulation, and the techniques demonstrated here might be used to map mixing over much of the world's continental shelves. The use of multichannel seismics to image ocean phenomena is nearly 15 years old, and despite the initial promise, the techniques have not become as broadly used as initially expected. We discuss possible reasons for this, and suggest an alternative approach that might gain broader success.

  2. Investigating causal associations between use of nicotine, alcohol, caffeine and cannabis: a two-sample bidirectional Mendelian randomization study.

    PubMed

    Verweij, Karin J H; Treur, Jorien L; Vink, Jacqueline M

    2018-07-01

    Epidemiological studies consistently show co-occurrence of use of different addictive substances. Whether these associations are causal or due to overlapping underlying influences remains an important question in addiction research. Methodological advances have made it possible to use published genetic associations to infer causal relationships between phenotypes. In this exploratory study, we used Mendelian randomization (MR) to examine the causality of well-established associations between nicotine, alcohol, caffeine and cannabis use. Two-sample MR was employed to estimate bidirectional causal effects between four addictive substances: nicotine (smoking initiation and cigarettes smoked per day), caffeine (cups of coffee per day), alcohol (units per week) and cannabis (initiation). Based on existing genome-wide association results we selected genetic variants associated with the exposure measure as an instrument to estimate causal effects. Where possible we applied sensitivity analyses (MR-Egger and weighted median) more robust to horizontal pleiotropy. Most MR tests did not reveal causal associations. There was some weak evidence for a causal positive effect of genetically instrumented alcohol use on smoking initiation and of cigarettes per day on caffeine use, but these were not supported by the sensitivity analyses. There was also some suggestive evidence for a positive effect of alcohol use on caffeine use (only with MR-Egger) and smoking initiation on cannabis initiation (only with weighted median). None of the suggestive causal associations survived corrections for multiple testing. Two-sample Mendelian randomization analyses found little evidence for causal relationships between nicotine, alcohol, caffeine and cannabis use. © 2018 Society for the Study of Addiction.
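
    A minimal sketch of the two-sample MR estimators named in the abstract: the inverse-variance-weighted (IVW) estimate and MR-Egger regression computed from per-variant exposure and outcome associations. The summary statistics here are simulated, not those used in the study.

    ```python
    # Two-sample Mendelian randomization from summary statistics:
    # IVW estimate and MR-Egger regression. SNP-level effects are simulated.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n_snps = 30
    beta_exp = rng.uniform(0.02, 0.10, n_snps)          # SNP -> exposure effects
    se_out = rng.uniform(0.005, 0.02, n_snps)           # SEs of SNP -> outcome effects
    true_causal = 0.3
    beta_out = true_causal * beta_exp + rng.normal(0, se_out)

    w = 1.0 / se_out**2                                  # inverse-variance weights

    # IVW: weighted regression through the origin (equivalent to the ratio formula).
    ivw = np.sum(w * beta_exp * beta_out) / np.sum(w * beta_exp**2)

    # MR-Egger: weighted regression with an intercept; a non-zero intercept
    # signals directional pleiotropy, the slope is the causal estimate.
    X = sm.add_constant(beta_exp)
    egger = sm.WLS(beta_out, X, weights=w).fit()

    print(f"IVW causal estimate: {ivw:.3f}")
    print(f"MR-Egger intercept {egger.params[0]:.4f}, slope {egger.params[1]:.3f}")
    ```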

  3. Initial planetary base construction techniques and machine implementation

    NASA Technical Reports Server (NTRS)

    Crockford, William W.

    1987-01-01

    Conceptual designs of (1) initial planetary base structures, and (2) an unmanned machine to perform the construction of these structures using materials local to the planet are presented. Rock melting is suggested as a possible technique to be used by the machine in fabricating roads, platforms, and interlocking bricks. Identification of problem areas in machine design and materials processing is accomplished. The feasibility of the designs is contingent upon favorable results of an analysis of the engineering behavior of the product materials. The analysis requires knowledge of several parameters for solution of the constitutive equations of the theory of elasticity. An initial collection of these parameters is presented which helps to define research needed to perform a realistic feasibility study. A qualitative approach to estimating power and mass lift requirements for the proposed machine is used which employs specifications of currently available equipment. An initial, unmanned mission scenario is discussed with emphasis on identifying uncompleted tasks and suggesting design considerations for vehicles and primitive structures which use the products of the machine processing.

  4. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation

    PubMed Central

    Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts’ selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies that are implemented in just a few locations. PMID:26173108
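
    A stripped-down sketch of the synthetic control step: choose non-negative donor weights summing to one that minimize the pre-intervention gap between the treated unit and the weighted donor pool, then compare post-intervention outcomes against that weighted combination. The deforestation series here are simulated, not the Paragominas data.

    ```python
    # Synthetic control method (SCM), minimal version: fit donor weights on
    # pre-intervention outcomes only. Deforestation series are simulated.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    T_pre, T_post, n_donors = 8, 4, 20

    donors = rng.normal(100, 15, (T_pre + T_post, n_donors)).cumsum(axis=0) / 10
    treated_pre = donors[:T_pre, :3].mean(axis=1) + rng.normal(0, 0.5, T_pre)
    treated_post = donors[T_pre:, :3].mean(axis=1) - 3.0   # policy lowers deforestation

    def pre_gap(w):
        return np.sum((treated_pre - donors[:T_pre] @ w) ** 2)

    w0 = np.full(n_donors, 1.0 / n_donors)
    res = minimize(pre_gap, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * n_donors,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    w = res.x

    synthetic_post = donors[T_pre:] @ w
    print("estimated effect per post year:", np.round(treated_post - synthetic_post, 2))
    ```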

  5. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation.

    PubMed

    Sills, Erin O; Herrera, Diego; Kirkpatrick, A Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies that are implemented in just a few locations.

  6. Imbibition of hydraulic fracturing fluids into partially saturated shale

    NASA Astrophysics Data System (ADS)

    Birdsell, Daniel T.; Rajaram, Harihar; Lackey, Greg

    2015-08-01

    Recent studies suggest that imbibition of hydraulic fracturing fluids into partially saturated shale is an important mechanism that restricts their migration, thus reducing the risk of groundwater contamination. We present computations of imbibition based on an exact semianalytical solution for spontaneous imbibition. These computations lead to quantitative estimates of an imbibition rate parameter (A) with units of L T^(-1/2) for shale, which is related to porous medium and fluid properties, and the initial water saturation. Our calculations suggest that significant fractions of injected fluid volumes (15-95%) can be imbibed in shale gas systems, whereas imbibition volumes in shale oil systems are much lower (3-27%). We present a nondimensionalization of A, which provides insights into the critical factors controlling imbibition, and facilitates the estimation of A based on readily measured porous medium and fluid properties. For a given set of medium and fluid properties, A varies by less than factors of ~1.8 (gas nonwetting phase) and ~3.4 (oil nonwetting phase) over the range of initial water saturations reported for the Marcellus shale (0.05-0.6). However, for higher initial water saturations, A decreases significantly. The intrinsic permeability of the shale and the viscosity of the fluids are the most important properties controlling the imbibition rate.
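
    Because cumulative imbibition per unit area grows roughly as A√t, the imbibed fraction of the injected volume is easy to sketch; the values of A, contact area, and injected volume below are placeholders, not the paper's.

    ```python
    # Cumulative imbibition scales as A * sqrt(t) per unit fracture area.
    # Parameter values are hypothetical placeholders.
    import numpy as np

    A = 2.0e-7            # imbibition rate parameter, m/s^0.5 (units of L T^-1/2)
    area = 5.0e5          # fracture surface area in contact with shale, m^2
    V_injected = 1.5e4    # injected fracturing fluid volume, m^3

    t = np.array([1, 7, 30, 90, 365]) * 86400.0          # shut-in times, s
    V_imbibed = A * np.sqrt(t) * area                    # m^3
    print(np.round(100 * V_imbibed / V_injected, 1), "% of injected volume imbibed")
    ```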

  7. Thickness distribution of a cooling pyroclastic flow deposit on Augustine Volcano, Alaska: Optimization using InSAR, FEMs, and an adaptive mesh algorithm

    USGS Publications Warehouse

    Masterlark, Timothy; Lu, Zhong; Rykhus, Russell P.

    2006-01-01

    Interferometric synthetic aperture radar (InSAR) imagery documents the consistent subsidence, during the interval 1992–1999, of a pyroclastic flow deposit (PFD) emplaced during the 1986 eruption of Augustine Volcano, Alaska. We construct finite element models (FEMs) that simulate thermoelastic contraction of the PFD to account for the observed subsidence. Three-dimensional problem domains of the FEMs include a thermoelastic PFD embedded in an elastic substrate. The thickness of the PFD is initially determined from the difference between post- and pre-eruption digital elevation models (DEMs). The initial excess temperature of the PFD at the time of deposition, 640 °C, is estimated from FEM predictions and an InSAR image via standard least-squares inverse methods. Although the FEM predicts the major features of the observed transient deformation, systematic prediction errors (RMSE = 2.2 cm) are most likely associated with errors in the a priori PFD thickness distribution estimated from the DEM differences. We combine an InSAR image, FEMs, and an adaptive mesh algorithm to iteratively optimize the geometry of the PFD with respect to a minimized misfit between the predicted thermoelastic deformation and observed deformation. Prediction errors from an FEM, which includes an optimized PFD geometry and the initial excess PFD temperature estimated from the least-squares analysis, are sub-millimeter (RMSE = 0.3 mm). The average thickness (9.3 m), maximum thickness (126 m), and volume (2.1 × 10⁷ m³) of the PFD, estimated using the adaptive mesh algorithm, are about twice as large as the respective estimations for the a priori PFD geometry. Sensitivity analyses suggest unrealistic PFD thickness distributions are required for initial excess PFD temperatures outside of the range 500–800 °C.

  8. Extended Kalman Doppler tracking and model determination for multi-sensor short-range radar

    NASA Astrophysics Data System (ADS)

    Mittermaier, Thomas J.; Siart, Uwe; Eibert, Thomas F.; Bonerz, Stefan

    2016-09-01

    A tracking solution for collision avoidance in industrial machine tools based on short-range millimeter-wave radar Doppler observations is presented. At the core of the tracking algorithm there is an Extended Kalman Filter (EKF) that provides dynamic estimation and localization in real-time. The underlying sensor platform consists of several homodyne continuous wave (CW) radar modules. Based on In-phase-Quadrature (IQ) processing and down-conversion, they provide only Doppler shift information about the observed target. Localization with Doppler shift estimates is a nonlinear problem that needs to be linearized before the linear KF can be applied. The accuracy of state estimation depends highly on the introduced linearization errors, the initialization and the models that represent the true physics as well as the stochastic properties. The important issue of filter consistency is addressed and an initialization procedure based on data fitting and maximum likelihood estimation is suggested. Models for both, measurement and process noise are developed. Tracking results from typical three-dimensional courses of movement at short distances in front of a multi-sensor radar platform are presented.
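
    A compact sketch of the core EKF step for a Doppler-only sensor: the measurement is radial velocity, which is nonlinear in the target state, so the update linearizes it with a Jacobian. The motion and noise models below are generic constant-velocity placeholders, not the tuned models of the paper.

    ```python
    # Extended Kalman filter update with a Doppler (radial-velocity) measurement.
    # State: [px, py, vx, vy]; models and noise levels are generic placeholders.
    import numpy as np

    dt = 0.01
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)      # constant-velocity motion model
    Q = 1e-4 * np.eye(4)                           # process noise
    R = np.array([[0.05**2]])                      # Doppler measurement noise, (m/s)^2

    def h(x):
        p, v = x[:2], x[2:]
        return np.array([p @ v / np.linalg.norm(p)])   # radial velocity seen by the radar

    def H_jac(x):
        px, py, vx, vy = x
        r = np.hypot(px, py)
        pv = px * vx + py * vy
        return np.array([[vx / r - pv * px / r**3,
                          vy / r - pv * py / r**3,
                          px / r, py / r]])

    def ekf_step(x, P, z):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with measured radial velocity z (m/s)
        H = H_jac(x)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (z - h(x))).ravel()
        P = (np.eye(4) - K @ H) @ P
        return x, P

    x = np.array([0.5, 0.1, -0.2, 0.0])            # initial state guess
    P = np.eye(4) * 0.1
    x, P = ekf_step(x, P, np.array([-0.19]))       # one Doppler observation
    print("updated state:", np.round(x, 3))
    ```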

  9. Raising household saving: does financial education work?

    PubMed

    Gale, William G; Harris, Benjamin H; Levine, Ruth

    2012-01-01

    This article highlights the prevalence and economic outcomes of financial illiteracy among American households, and reviews previous research that examines how improving financial literacy affects household saving. Analysis of the research literature suggests that previous financial literacy efforts have yielded mixed results. Evidence suggests that interventions provided for employees in the workplace have helped increase household saving, but estimates of the magnitude of the impact vary widely. For financial education initiatives targeted to other groups, the evidence is much more ambiguous, suggesting a need for more econometrically rigorous evaluations.

  10. Receiver function stacks: initial steps for seismic imaging of Cotopaxi volcano, Ecuador

    NASA Astrophysics Data System (ADS)

    Bishop, J. W.; Lees, J. M.; Ruiz, M. C.

    2017-12-01

    Cotopaxi volcano is a large, andesitic stratovolcano located within 50 km of the Ecuadorean capital of Quito. In August 2015, Cotopaxi erupted for the first time in 73 years. This eruptive cycle (VEI = 1) featured phreatic explosions and ejection of an ash column 9 km above the volcano edifice. Following this event, ash covered approximately 500 km² of the surrounding area. Analysis of Multi-GAS data suggests that this eruption was fed from a shallow source. However, stratigraphic evidence spanning the last 800 years of Cotopaxi's activity suggests that there may be a deep magmatic source. To establish a geophysical framework for Cotopaxi's activity, receiver functions were calculated from well-recorded earthquakes detected from April 2015 to December 2015 at 9 permanent broadband seismic stations around the volcano. These events were located, and phase arrivals were manually picked. Radial teleseismic receiver functions were then calculated using an iterative deconvolution technique with a Gaussian width of 2.5. A maximum of 200 iterations was allowed in each deconvolution. Iterations were stopped when either the maximum iteration number was reached or the percent change fell beneath a pre-determined tolerance. Receiver functions were then visually inspected, and traces with anomalous pulses before the initial P arrival, or with later peaks larger than the initial P-wave correlated pulse, were discarded. Using these data, initial crustal thickness and slab depth estimates beneath the volcano were obtained. Estimates of the crustal Vp/Vs ratio for the region were also calculated.

  11. Decreasing initial telomere length in humans intergenerationally understates age-associated telomere shortening

    PubMed Central

    Holohan, Brody; De Meyer, Tim; Batten, Kimberly; Mangino, Massimo; Hunt, Steven C; Bekaert, Sofie; De Buyzere, Marc L; Rietzschel, Ernst R; Spector, Tim D; Wright, Woodring E; Shay, Jerry W

    2015-01-01

    Telomere length shortens with aging, and short telomeres have been linked to a wide variety of pathologies. Previous studies suggested a discrepancy in age-associated telomere shortening rate estimated by cross-sectional studies versus the rate measured in longitudinal studies, indicating a potential bias in cross-sectional estimates. Intergenerational changes in initial telomere length, such as that predicted by the previously described effect of a father’s age at birth of his offspring (FAB), could explain the discrepancy in shortening rate measurements. We evaluated whether changes occur in initial telomere length over multiple generations in three large datasets and identified paternal birth year (PBY) as a variable that reconciles the difference between longitudinal and cross-sectional measurements. We also clarify the association between FAB and offspring telomere length, demonstrating that this effect is substantially larger than reported in the past. These results indicate the presence of a downward secular trend in telomere length at birth over generational time with potential public health implications. PMID:25952108

  12. Waterspout, Gust Fronts and Associated Cloud Systems

    NASA Technical Reports Server (NTRS)

    Simpson, J.

    1983-01-01

    Nine waterspouts observed on five experimental days during the GATE period of observations are discussed. Primary data used are from 2 aircraft flying in different patterns, one above the other between 30 and 300 m. There is strong evidence associating whirl initiation with cumulus outflow. Computations based on estimates of convergence within the region suggest the possibility of vortex generation within 4 minutes. This analysis supports (1) the importance cumulus outflows may have in waterspout initiation and (2) the possibility that sea surface temperature gradients may be important in enabling waterspout development from modest-size cumuli.

  13. No Risk of Myocardial Infarction Associated With Initial Antiretroviral Treatment Containing Abacavir: Short and Long-Term Results from ACTG A5001/ALLRT

    PubMed Central

    Benson, Constance A.; Zheng, Yu; Koletar, Susan L.; Collier, Ann C.; Lok, Judith J.; Smurzynski, Marlene; Bosch, Ronald J.; Bastow, Barbara; Schouten, Jeffrey T.

    2011-01-01

    Background. Observational and retrospective clinical trial cohorts have reported conflicting results for the association of abacavir use with risk of myocardial infarction (MI), possibly related to issues that may bias estimation of treatment effects, such as time-varying confounders, informative dropout, and cohort loss due to competing events. Methods. We analyzed data from 5056 individuals initiating randomized antiretroviral treatment (ART) in AIDS Clinical Trials Group studies; 1704 started abacavir therapy. An intent-to-treat analysis adjusted for pretreatment covariates and weighting for informative censoring was used to estimate the hazard ratio (HR) of MIs after initiation of a regimen with or without abacavir. Results. Through 6 years after ART initiation, 36 MI events were observed in 17,404 person-years of follow-up. No evidence of an increased hazard of MI in subjects using abacavir versus no abacavir was seen (over a 1-year period: P = .50; HR, 0.7 [95% confidence interval (CI), 0.2-2.4]; over a 6-year period: P = .24; HR, 0.6 [95% CI, 0.3-1.4]); these results were robust over as-treated and sensitivity analyses. Although the risk of MI decreased over time, there was no evidence to suggest a time-dependent abacavir effect. Classic cardiovascular disease (CVD) risk factors were the strongest predictors of MI. Conclusion. We find no evidence to suggest that initial ART containing abacavir increases MI risk over short-term and long-term periods in this population with relatively low MI risk. Traditional CVD risk factors should be the main focus in assessing CVD risk in individuals with human immunodeficiency virus infection. PMID:21427402

  14. No risk of myocardial infarction associated with initial antiretroviral treatment containing abacavir: short and long-term results from ACTG A5001/ALLRT.

    PubMed

    Ribaudo, Heather J; Benson, Constance A; Zheng, Yu; Koletar, Susan L; Collier, Ann C; Lok, Judith J; Smurzynski, Marlene; Bosch, Ronald J; Bastow, Barbara; Schouten, Jeffrey T

    2011-04-01

    Observational and retrospective clinical trial cohorts have reported conflicting results for the association of abacavir use with risk of myocardial infarction (MI), possibly related to issues that may bias estimation of treatment effects, such as time-varying confounders, informative dropout, and cohort loss due to competing events. We analyzed data from 5056 individuals initiating randomized antiretroviral treatment (ART) in AIDS Clinical Trials Group studies; 1704 started abacavir therapy. An intent-to-treat analysis adjusted for pretreatment covariates and weighting for informative censoring was used to estimate the hazard ratio (HR) of MIs after initiation of a regimen with or without abacavir. Through 6 years after ART initiation, 36 MI events were observed in 17,404 person-years of follow-up. No evidence of an increased hazard of MI in subjects using abacavir versus no abacavir was seen (over a 1-year period: P=.50; HR, 0.7 [95% confidence interval (CI), 0.2-2.4]; over a 6-year period: P=.24; HR, 0.6 [95% CI, 0.3-1.4]); these results were robust over as-treated and sensitivity analyses. Although the risk of MI decreased over time, there was no evidence to suggest a time-dependent abacavir effect. Classic cardiovascular disease (CVD) risk factors were the strongest predictors of MI. We find no evidence to suggest that initial ART containing abacavir increases MI risk over short-term and long-term periods in this population with relatively low MI risk. Traditional CVD risk factors should be the main focus in assessing CVD risk in individuals with human immunodeficiency virus infection. © The Author 2011. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved.

  15. Smoking Initiation and the Iron Law of Demand *

    PubMed Central

    Lillard, Dean R.; Molloy, Eamon; Sfekas, Andrew

    2012-01-01

    We show, with three longitudinal datasets, that cigarette taxes and prices affect smoking initiation decisions. Previous longitudinal studies have found somewhat mixed results, but generally have not found initiation to be sensitive to increases in price or tax. We show that the lack of statistical significance in previous studies may be at least partially attributed to a lack of policy variation in the time periods studied, truncated behavioral windows, or mis-assignment of price and tax rates in retrospective data (which occurs when one has no information about respondents’ prior state or region of residence in retrospective data). We show how each factor may affect the estimation of initiation models. Our findings suggest several problems that are applicable to initiation behavior generally, particularly those for which individuals’ responses to policy changes may be noisy or small in magnitude. PMID:23220458

  16. Genetic Divergence Disclosing a Rapid Prehistorical Dispersion of Native Americans in Central and South America

    PubMed Central

    He, Yungang; Wang, Wei R.; Li, Ran; Wang, Sijia; Jin, Li

    2012-01-01

    An accurate estimate of the divergence time between Native Americans is important for understanding the initial entry and early dispersion of human beings in the New World. Current methods for estimating the genetic divergence time of populations could seriously depart from a linear relationship with the true divergence for multiple populations of different population sizes and significant population expansion. Here, to address this problem, we propose a novel measure to estimate the genetic divergence time of populations. Computer simulation revealed that the new measure maintained an excellent linear correlation with the population divergence time in complicated multi-population scenarios with population expansion. Utilizing the new measure and microsatellite data of 21 Native American populations, we investigated the genetic divergences of the Native American populations. The results indicated that genetic divergences between North American populations are greater than those between Central and South American populations. None of the divergences, however, were large enough to constitute convincing evidence supporting the two-wave or multi-wave migration model for the initial entry of human beings into America. The genetic affinity of the Native American populations was further explored using Neighbor-Net, and the genetic divergences suggested that these populations could be categorized into four genetic groups living in four different ecologic zones. The divergence of the population groups suggests that the early dispersion of human beings in America was a multi-step procedure. Further, the divergences suggest the rapid dispersion of Native Americans in Central and South America after a long standstill period in North America. PMID:22970308

  17. Neural Mechanisms of Cortical Motion Computation Based on a Neuromorphic Sensory System

    PubMed Central

    Abdul-Kreem, Luma Issa; Neumann, Heiko

    2015-01-01

    The visual cortex analyzes motion information along hierarchically arranged visual areas that interact through bidirectional interconnections. This work suggests a bio-inspired visual model focusing on the interactions of the cortical areas, in which a new mechanism of feedforward and feedback processing is introduced. The model uses a neuromorphic vision sensor (silicon retina) that simulates the spike-generation functionality of the biological retina. Our model takes into account two main model visual areas, namely V1 and MT, with different feature selectivities. The initial motion is estimated in model area V1 using spatiotemporal filters to locally detect the direction of motion. Here, we adapt the filtering scheme originally suggested by Adelson and Bergen to make it consistent with the spike representation of the DVS. The responses of area V1 are weighted and pooled by area MT cells which are selective to different velocities, i.e. direction and speed. Such feature selectivity is here derived from compositions of activities in the spatio-temporal domain and from integrating over larger space-time regions (receptive fields). In order to account for the bidirectional coupling of cortical areas we match properties of the feature selectivity in both areas for feedback processing. For such linkage we integrate the responses over different speeds along a particular preferred direction. Normalization of activities is carried out over the spatial as well as the feature domains to balance the activities of individual neurons in model areas V1 and MT. Our model was tested using different stimuli that moved in different directions. The results reveal that the error margin between the estimated motion and synthetic ground truth is decreased in area MT compared with the initial estimation of area V1. In addition, the modulated V1 cell activations show an enhancement of the initial motion estimation that is steered by feedback signals from MT cells. PMID:26554589

  18. Estimating monthly streamflow values by cokriging

    USGS Publications Warehouse

    Solow, A.R.; Gorelick, S.M.

    1986-01-01

    Cokriging is applied to estimation of missing monthly streamflow values in three records from gaging stations in west central Virginia. Missing values are estimated from optimal consideration of the pattern of auto- and cross-correlation among standardized residual log-flow records. Investigation of the sensitivity of estimation to data configuration showed that when observations are available within two months of a missing value, estimation is improved by accounting for correlation. Concurrent and lag-one observations tend to screen the influence of other available observations. Three models of covariance structure in residual log-flow records are compared using cross-validation. Models differ in how much monthly variation they allow in covariance. Precision of estimation, reflected in mean squared error (MSE), proved to be insensitive to this choice. Cross-validation is suggested as a tool for choosing an inverse transformation when an initial nonlinear transformation is applied to flow values. ?? 1986 Plenum Publishing Corporation.
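
    A small sketch of the underlying idea: estimate a missing standardized residual log-flow as a covariance-weighted combination of concurrent values at the other gauges and the lag-one value at the same gauge, with weights C⁻¹c from an empirically estimated covariance matrix. The flow records are simulated stand-ins for the three Virginia gauges, and this is a generic (co)kriging-style estimator, not the authors' exact model.

    ```python
    # Simple (co)kriging-style estimate of a missing monthly value: a linear
    # combination of nearby observations with weights C^-1 c, using an
    # empirically estimated covariance among standardized residual log-flows.
    import numpy as np

    rng = np.random.default_rng(6)
    n = 240                                     # months of record
    common = rng.normal(0, 1, n)
    flows = np.column_stack([common + rng.normal(0, 0.4, n) for _ in range(3)])

    target_month, target_station = 100, 0       # pretend this value is missing
    # predictors: concurrent values at the other two stations and the
    # lag-one value at the target station
    preds = np.array([flows[target_month, 1],
                      flows[target_month, 2],
                      flows[target_month - 1, 0]])

    # estimate covariances among (predictors, target) from the rest of the record
    stack = np.column_stack([flows[1:, 1], flows[1:, 2], flows[:-1, 0], flows[1:, 0]])
    cov = np.cov(stack, rowvar=False)
    C, c = cov[:3, :3], cov[:3, 3]
    weights = np.linalg.solve(C, c)

    estimate = weights @ preds
    print("kriging-style estimate:", round(estimate, 3),
          "actual:", round(flows[target_month, target_station], 3))
    ```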

  19. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
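
    A minimal sketch of entropy-maximization density shaping in the spirit described above, assuming a logistic sigmoid nonlinearity and a square weight matrix; the stochastic update and toy data are illustrative, not the authors' exact algorithm.

      import numpy as np

      rng = np.random.default_rng(0)

      def sigmoid(u):
          return 1.0 / (1.0 + np.exp(-u))

      def fit_infomax(X, n_iter=5000, lr=0.01):
          """Maximize the output entropy of y = sigmoid(W x) over a square, full-rank W.

          For an invertible map the output entropy is H(x) + E[log |det J|], so the
          gradient ascent below uses the classic update dW = W^-T + (1 - 2 y) x^T.
          """
          d = X.shape[1]
          W = np.eye(d)
          for _ in range(n_iter):
              x = X[rng.integers(len(X))]            # stochastic (one-sample) update
              y = sigmoid(W @ x)
              W += lr * (np.linalg.inv(W).T + np.outer(1.0 - 2.0 * y, x))
          return W

      def density_estimate(W, x):
          """p_hat(x) = |det W| * prod_i g'(w_i . x), with g the logistic sigmoid."""
          u = W @ x
          g_prime = sigmoid(u) * (1.0 - sigmoid(u))
          return abs(np.linalg.det(W)) * np.prod(g_prime)

      # Toy example: 2-D Laplacian-like data
      X = rng.laplace(size=(5000, 2))
      W = fit_infomax(X)
      print(density_estimate(W, np.array([0.0, 0.0])))   # larger value near the mode
      print(density_estimate(W, np.array([4.0, 4.0])))   # smaller value in the tail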

  20. Characterization of turbulence stability through the identification of multifractional Brownian motions

    NASA Astrophysics Data System (ADS)

    Lee, K. C.

    2013-02-01

    Multifractional Brownian motions have become popular as flexible models in describing real-life signals of high-frequency features in geoscience, microeconomics, and turbulence, to name a few. The time-changing Hurst exponent, which describes regularity levels depending on time measurements, and variance, which relates to an energy level, are two parameters that characterize multifractional Brownian motions. This research suggests a combined method of estimating the time-changing Hurst exponent and variance using the local variation of sampled paths of signals. The method consists of two phases: initially estimating global variance and then accurately estimating the time-changing Hurst exponent. A simulation study shows its performance in estimation of the parameters. The proposed method is applied to characterization of atmospheric stability in which descriptive statistics from the estimated time-changing Hurst exponent and variance classify stable atmosphere flows from unstable ones.
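
    A minimal sketch of one way to estimate a time-changing Hurst exponent and a local scale from the local variation of a sampled path, assuming a sliding window and a log-log regression of mean squared increments on lag; the window length and lags are illustrative choices, not those of the paper.

      import numpy as np

      def local_hurst(x, dt=1.0, window=256, lags=(1, 2, 4, 8)):
          """Sliding-window estimate of a time-varying Hurst exponent and local scale.

          Within each window, the mean squared increment at lag k is regressed
          (in log-log) against the lag: slope/2 estimates H, the intercept gives a scale.
          """
          n = len(x)
          half = window // 2
          H = np.full(n, np.nan)
          scale = np.full(n, np.nan)
          log_lags = np.log(np.array(lags) * dt)
          for i in range(half, n - half):
              seg = x[i - half:i + half]
              log_msq = []
              for k in lags:
                  inc = seg[k:] - seg[:-k]
                  log_msq.append(np.log(np.mean(inc ** 2)))
              slope, intercept = np.polyfit(log_lags, np.array(log_msq), 1)
              H[i] = 0.5 * slope
              scale[i] = np.exp(0.5 * intercept)     # crude local "variance" scale
          return H, scale

      # Sanity check on ordinary Brownian motion (H = 0.5 everywhere)
      rng = np.random.default_rng(1)
      bm = np.cumsum(rng.normal(size=5000))
      H_hat, _ = local_hurst(bm)
      print(np.nanmean(H_hat))   # should be close to 0.5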

  1. Urinary cadmium and estimated dietary cadmium in the Women's Health Initiative.

    PubMed

    Quraishi, Sabah M; Adams, Scott V; Shafer, Martin; Meliker, Jaymie R; Li, Wenjun; Luo, Juhua; Neuhouser, Marian L; Newcomb, Polly A

    2016-01-01

    Cadmium, a heavy metal dispersed in the environment as a result of industrial and agricultural applications, has been implicated in several human diseases including renal disease, cancers, and compromised bone health. In the general population, the predominant sources of cadmium exposure are tobacco and diet. Urinary cadmium (uCd) reflects long-term exposure and has been frequently used to assess cadmium exposure in epidemiological studies; estimated dietary intake of cadmium (dCd) has also been used in several studies. The validity of dCd in comparison with uCd is unclear. This study aimed to compare dCd, estimated from food frequency questionnaires, to uCd measured in spot urine samples from 1,002 participants of the Women's Health Initiative. Using linear regression, we found that dCd was not statistically significantly associated with uCd (β=0.006, P-value=0.14). When stratified by smoking status, dCd was not significantly associated with uCd both in never smokers (β=0.006, P-value=0.09) and in ever smokers (β=0.003, P-value=0.67). Our results suggest that because of the lack of association between estimated dCd and measured uCd, dietary estimation of cadmium exposure should be used with caution in epidemiologic studies.
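
    A minimal sketch of the kind of linear model reported above (regression of urinary on estimated dietary cadmium, overall and stratified by smoking), using simulated stand-in data; all variable names and coefficients are hypothetical.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 1002

      # Simulated stand-ins for the study variables (illustrative only)
      dCd = rng.gamma(shape=2.0, scale=5.0, size=n)             # estimated dietary cadmium
      smoker = rng.binomial(1, 0.45, size=n)                    # ever-smoker indicator
      uCd = 0.4 + 0.3 * smoker + 0.006 * dCd + rng.normal(0, 0.3, size=n)  # urinary cadmium

      X = sm.add_constant(np.column_stack([dCd, smoker]))
      fit = sm.OLS(uCd, X).fit()
      print(fit.params)      # [intercept, beta_dCd, beta_smoker]
      print(fit.pvalues)

      # Stratified by smoking status, as in the abstract
      for label, mask in (("never", smoker == 0), ("ever", smoker == 1)):
          f = sm.OLS(uCd[mask], sm.add_constant(dCd[mask])).fit()
          print(label, f.params[1], f.pvalues[1])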

  2. Urinary Cadmium and Estimated Dietary Cadmium in the Women’s Health Initiative

    PubMed Central

    Quraishi, Sabah M.; Adams, Scott V.; Shafer, Martin; Meliker, Jaymie R.; Li, Wenjun; Luo, Juhua; Neuhouser, Marian L.; Newcomb, Polly A.

    2016-01-01

    Cadmium, a heavy metal dispersed in the environment as a result of industrial and agricultural applications, has been implicated in several human diseases including renal disease, cancers, and compromised bone health. In the general population, the predominant sources of cadmium exposure are tobacco and diet. Urinary cadmium (uCd) reflects long-term exposure and has been frequently used to assess cadmium exposure in epidemiological studies; estimated dietary intake of cadmium (dCd) has also been used in several studies. The validity of dCd in comparison to uCd is unclear. This study aimed to compare dCd, estimated from food frequency questionnaires (FFQs), to uCd measured in spot urine samples from 1,002 participants of the Women’s Health Initiative. Using linear regression, we found that dCd was not statistically significantly associated with uCd (β=0.006, p-value=0.14). When stratified by smoking status, dCd was not significantly associated with uCd both in never smokers (β=0.006, p-value=0.09) and in ever smokers (β=0.003, p-value=0.67). Our results suggest that because of the lack of association between estimated dietary cadmium and measured urinary cadmium exposure, dietary estimation of cadmium exposure should be used with caution in epidemiologic studies. PMID:26015077

  3. Net anthropogenic nitrogen inputs and nitrogen fluxes from Indian watersheds: An initial assessment

    NASA Astrophysics Data System (ADS)

    Swaney, D. P.; Hong, B.; Paneer Selvam, A.; Howarth, R. W.; Ramesh, R.; Purvaja, R.

    2015-01-01

    In this paper, we apply an established methodology for estimating Net Anthropogenic Nitrogen Inputs (NANI) to India and its major watersheds. Our primary goal is to provide initial estimates of the major nitrogen inputs comprising NANI for India, at the country level and for major Indian watersheds, including data sources and parameter estimates, making assumptions as needed where data availability is limited. Despite data limitations, we believe it is clear that the main anthropogenic N source is agricultural fertilizer, which is being produced and applied at a growing rate, followed by N fixation associated with rice, leguminous crops, and sugar cane. While India appears to be a net exporter of N in food and feed, as reported elsewhere (Lassaletta et al., 2013b), the balance of N associated with exports and imports of protein in food and feedstuffs is sensitive to assumed protein content and remains somewhat uncertain. Although correlating watershed N inputs with riverine N fluxes is problematic, due in part to the limited riverine data available, we have assembled some data for comparative purposes. We also suggest possible improvements in methods for future studies, and discuss the potential for estimating riverine N fluxes to coastal waters.
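
    As an illustration of the NANI accounting used here, the sketch below sums the major input terms (fertilizer, agricultural N fixation, atmospheric deposition, and net N in food/feed trade) for a hypothetical watershed; all numbers are placeholders, not estimates from this study.

      # Illustrative NANI accounting for a single watershed; all figures are
      # placeholders (kg N per km^2 per year), not values from the study.
      def nani(fertilizer, agricultural_fixation, atmospheric_deposition,
               food_feed_import, food_feed_export):
          """Net Anthropogenic Nitrogen Inputs = fertilizer + fixation + deposition
          + net N in food/feed trade (imports minus exports; negative for net exporters)."""
          net_trade = food_feed_import - food_feed_export
          return fertilizer + agricultural_fixation + atmospheric_deposition + net_trade

      example = nani(fertilizer=5500.0,
                     agricultural_fixation=1800.0,
                     atmospheric_deposition=900.0,
                     food_feed_import=300.0,
                     food_feed_export=650.0)
      print(f"NANI = {example:.0f} kg N km^-2 yr^-1")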

  4. Estimation of the Standardized Risk Difference and Ratio in a Competing Risks Framework: Application to Injection Drug Use and Progression to AIDS After Initiation of Antiretroviral Therapy

    PubMed Central

    Cole, Stephen R.; Lau, Bryan; Eron, Joseph J.; Brookhart, M. Alan; Kitahata, Mari M.; Martin, Jeffrey N.; Mathews, William C.; Mugavero, Michael J.; Cole, Stephen R.; Brookhart, M. Alan; Lau, Bryan; Eron, Joseph J.; Kitahata, Mari M.; Martin, Jeffrey N.; Mathews, William C.; Mugavero, Michael J.

    2015-01-01

    There are few published examples of absolute risk estimated from epidemiologic data subject to censoring and competing risks with adjustment for multiple confounders. We present an example estimating the effect of injection drug use on 6-year risk of acquired immunodeficiency syndrome (AIDS) after initiation of combination antiretroviral therapy between 1998 and 2012 in an 8-site US cohort study with death before AIDS as a competing risk. We estimate the risk standardized to the total study sample by combining inverse probability weights with the cumulative incidence function; estimates of precision are obtained by bootstrap. In 7,182 patients (83% male, 33% African American, median age of 38 years), we observed 6-year standardized AIDS risks of 16.75% among 1,143 injection drug users and 12.08% among 6,039 nonusers, yielding a standardized risk difference of 4.68 (95% confidence interval: 1.27, 8.08) and a standardized risk ratio of 1.39 (95% confidence interval: 1.12, 1.72). Results may be sensitive to the assumptions of exposure-version irrelevance, no measurement bias, and no unmeasured confounding. These limitations suggest that results be replicated with refined measurements of injection drug use. Nevertheless, estimating the standardized risk difference and ratio is straightforward, and injection drug use appears to increase the risk of AIDS. PMID:24966220

  5. Uncertainty Estimation in Tsunami Initial Condition From Rapid Bayesian Finite Fault Modeling

    NASA Astrophysics Data System (ADS)

    Benavente, R. F.; Dettmer, J.; Cummins, P. R.; Urrutia, A.; Cienfuegos, R.

    2017-12-01

    It is well known that kinematic rupture models for a given earthquake can present discrepancies even when similar datasets are employed in the inversion process. Although quantifying this variability can be critical when making early estimates of an earthquake and the tsunami it triggers, "most likely models" are normally used for this purpose. In this work, we quantify the uncertainty of the tsunami initial condition for the great Illapel earthquake (Mw = 8.3, 2015, Chile). We focus on data and inversion methods that are suitable for rapid source characterization yet provide meaningful and robust results. Rupture models from teleseismic body and surface waves as well as the W-phase are derived and accompanied by Bayesian uncertainty estimates from linearized inversion under positivity constraints. We show that robust and consistent features of the rupture kinematics appear when working within this probabilistic framework. Moreover, using static dislocation theory, we translate the probabilistic slip distributions into seafloor deformation, which we interpret as a tsunami initial condition. After considering uncertainty, our probabilistic seafloor deformation models obtained from different data types appear consistent with each other, providing meaningful results. We also show that selecting just a single "representative" solution from the ensemble of initial conditions for tsunami propagation may lead to overestimating the information content in the data. Our results suggest that rapid, probabilistic rupture models can play a significant role during emergency response by providing robust information about the extent of the disaster.
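
    A minimal sketch of a positivity-constrained linearized slip inversion of the kind referred to above, using non-negative least squares on a hypothetical linear forward problem; the resampling ensemble at the end merely stands in for the posterior spread that a full Bayesian treatment would provide.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)

      # Hypothetical linear forward problem: d = G m + noise, with m the slip on
      # fault patches (non-negative) and G Green's functions mapping slip to data.
      n_data, n_patches = 120, 30
      G = rng.normal(size=(n_data, n_patches))
      m_true = np.clip(rng.normal(1.0, 0.5, size=n_patches), 0.0, None)
      d = G @ m_true + 0.05 * rng.normal(size=n_data)

      # Positivity-constrained least-squares slip estimate
      m_hat, residual_norm = nnls(G, d)

      # A crude ensemble of solutions from perturbed data, standing in for the
      # posterior spread a full Bayesian treatment would provide.
      ensemble = np.array([nnls(G, d + 0.05 * rng.normal(size=n_data))[0]
                           for _ in range(200)])
      print("mean slip:", ensemble.mean(axis=0)[:5])
      print("slip std :", ensemble.std(axis=0)[:5])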

  6. Audio-visual speech cue combination.

    PubMed

    Arnold, Derek H; Tear, Morgan; Schindel, Ryan; Roseboom, Warrick

    2010-04-16

    Different sources of sensory information can interact, often shaping what we think we have seen or heard. This can enhance the precision of perceptual decisions relative to those made on the basis of a single source of information. From a computational perspective, there are multiple reasons why this might happen, and each predicts a different degree of enhanced precision. Relatively slight improvements can arise when perceptual decisions are made on the basis of multiple independent sensory estimates, as opposed to just one. These improvements can arise as a consequence of probability summation. Greater improvements can occur if two initially independent estimates are summated to form a single integrated code, especially if the summation is weighted in accordance with the variance associated with each independent estimate. This form of combination is often described as a Bayesian maximum likelihood estimate. Still greater improvements are possible if the two sources of information are encoded via a common physiological process. Here we show that the provision of simultaneous audio and visual speech cues can result in substantial sensitivity improvements, relative to single sensory modality based decisions. The magnitude of the improvements is greater than can be predicted on the basis of either a Bayesian maximum likelihood estimate or a probability summation. Our data suggest that primary estimates of speech content are determined by a physiological process that takes input from both visual and auditory processing, resulting in greater sensitivity than would be possible if initially independent audio and visual estimates were formed and then subsequently combined.
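
    For reference, the variance-weighted (maximum likelihood) combination rule mentioned above can be written in a few lines of code; the cue values and variances below are arbitrary.

      import numpy as np

      def mle_combine(est_a, var_a, est_v, var_v):
          """Variance-weighted (maximum likelihood) combination of two independent cues.

          Weights are inversely proportional to each cue's variance; the combined
          variance is smaller than either single-cue variance.
          """
          w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
          w_v = 1.0 - w_a
          combined = w_a * est_a + w_v * est_v
          combined_var = 1.0 / (1.0 / var_a + 1.0 / var_v)
          return combined, combined_var

      # Example: auditory cue (variance 4) and visual cue (variance 1)
      est, var = mle_combine(est_a=10.0, var_a=4.0, est_v=12.0, var_v=1.0)
      print(est, var)   # estimate pulled toward the more reliable (visual) cue; var = 0.8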

  7. Technology Estimating 2: A Process to Determine the Cost and Schedule of Space Technology Research and Development

    NASA Technical Reports Server (NTRS)

    Cole, Stuart K.; Wallace, Jon; Schaffer, Mark; May, M. Scott; Greenberg, Marc W.

    2014-01-01

    As a leader in space technology research and development, NASA is continuing to develop the Technology Estimating process, initiated in 2012, for estimating the cost and schedule of low-maturity technology research and development, where the Technology Readiness Level (TRL) is less than TRL 6. NASA's Technology Roadmap comprises 14 technology areas. The focus of this continuing Technology Estimating effort included four Technology Areas (TA): TA3 Space Power and Energy Storage, TA4 Robotics, TA8 Instruments, and TA12 Materials, to confine the research to the most abundant data pool. This report continues the technology estimating work completed during 2013-2014 and addresses the refinement of the parameters selected and recommended for use in the estimating process, where the parameters developed are applicable to the Cost Estimating Relationships (CERs) used in parametric cost estimating analysis. This research addresses the architecture for administration of the Technology Cost and Schedule Estimating tool, the parameters suggested for computer software adjunct to any technology area, and the identification of gaps in the Technology Estimating process.

  8. LiDAR-derived site index in the U.S. Pacific Northwest--challenges and opportunities

    Treesearch

    Demetrios Gatziolis

    2007-01-01

    Site Index (SI), a key inventory parameter, is traditionally estimated by using costly and laborious field assessments of tree height and age. The increasing availability of reliable information on stand initiation timing and extent of planted, even-aged stands maintained in digital databases suggests that information on the height of dominant trees suffices for...

  9. Validation of the Adolescent Concerns Measure (ACM): evidence from exploratory and confirmatory factor analysis.

    PubMed

    Ang, Rebecca P; Chong, Wan Har; Huan, Vivien S; Yeo, Lay See

    2007-01-01

    This article reports the development and initial validation of scores obtained from the Adolescent Concerns Measure (ACM), a scale which assesses concerns of Asian adolescent students. In Study 1, findings from exploratory factor analysis using 619 adolescents suggested a 24-item scale with four correlated factors--Family Concerns (9 items), Peer Concerns (5 items), Personal Concerns (6 items), and School Concerns (4 items). Initial estimates of convergent validity for ACM scores were also reported. The four-factor structure of ACM scores derived from Study 1 was confirmed via confirmatory factor analysis in Study 2 using a two-fold cross-validation procedure with a separate sample of 811 adolescents. Support was found for both the multidimensional and hierarchical models of adolescent concerns using the ACM. Internal consistency and test-retest reliability estimates were adequate for research purposes. ACM scores show promise as a reliable and potentially valid measure of Asian adolescents' concerns.

  10. Does finance affect environmental degradation: evidence from One Belt and One Road Initiative region?

    PubMed

    Hafeez, Muhammad; Chunhui, Yuan; Strohmaier, David; Ahmed, Manzoor; Jie, Liu

    2018-04-01

    This paper explores the effects of finance on environmental degradation and investigates the environmental Kuznets curve (EKC) for each of the 52 countries participating in the One Belt and One Road Initiative (OBORI), using a long panel data span (1980-2016). We utilized panel long-run econometric models (fully modified ordinary least squares and dynamic ordinary least squares) to obtain long-run estimates for the full panel and at the country level. Moreover, the Dumitrescu and Hurlin (2012) causality test is applied to examine short-run causalities among the variables considered. The empirical findings validate the EKC hypothesis; the long-run estimates indicate that finance significantly increases environmental degradation (with negative effects in a few cases). The short-run heterogeneous causality confirms bi-directional causality between finance and environmental degradation. The empirical outcomes suggest that policymakers should consider the environmental degradation caused by financial development in the One Belt and One Road region.

  11. Spatial resolution in visual memory.

    PubMed

    Ben-Shalom, Asaf; Ganel, Tzvi

    2015-04-01

    Representations in visual short-term memory are considered to contain relatively elaborated information on object structure. Conversely, representations in earlier stages of the visual hierarchy are thought to be dominated by a sensory-based, feed-forward buildup of information. In four experiments, we compared the spatial resolution of different object properties between two points in time along the processing hierarchy in visual short-term memory. Subjects were asked either to estimate the distance between objects or to estimate the size of one of the objects' features under two experimental conditions, of either a short or a long delay period between the presentation of the target stimulus and the probe. When different objects were referred to, similar spatial resolution was found for the two delay periods, suggesting that initial processing stages are sensitive to object-based properties. Conversely, superior resolution was found for the short, as compared with the long, delay when features were referred to. These findings suggest that initial representations in visual memory are hybrid in that they allow fine-grained resolution for object features alongside normal visual sensitivity to the segregation between objects. The findings are also discussed in reference to the distinction made in earlier studies between visual short-term memory and iconic memory.

  12. Application of biological simulation models in estimating feed efficiency of finishing steers.

    PubMed

    Williams, C B

    2010-07-01

    Data on individual daily feed intake, BW at 28-d intervals, and carcass composition were obtained on 1,212 crossbred steers. Within-animal regressions of cumulative feed intake and BW on linear and quadratic days on feed were used to quantify initial and ending BW, average daily observed feed intake (OFI), and ADG over a 120-d finishing period. Feed intake was predicted (PFI) with 3 biological simulation models (BSM): a) Decision Evaluator for the Cattle Industry, b) Cornell Value Discovery System, and c) NRC update 2000, using observed growth and carcass data as input. Residual feed intake (RFI) was estimated using OFI (RFI(EL)) in a linear statistical model (LSM), and feed conversion ratio (FCR) was estimated as OFI/ADG (FCR(E)). Output from the BSM was used to estimate RFI by using PFI in place of OFI with the same LSM, and FCR was estimated as PFI/ADG. These estimates were evaluated against RFI(EL) and FCR(E). In a second analysis, estimates of RFI were obtained for the 3 BSM as the difference between OFI and PFI, and these estimates were evaluated against RFI(EL). The residual variation was extremely small when PFI was used in the LSM to estimate RFI, and this was mainly due to the fact that the same input variables (initial BW, days on feed, and ADG) were used in the BSM and LSM. Hence, the use of PFI obtained with BSM as a replacement for OFI in a LSM to characterize individual animals for RFI was not feasible. This conclusion was also supported by weak correlations (<0.4) between RFI(EL) and RFI obtained with PFI in the LSM, and very weak correlations (<0.13) between RFI(EL) and FCR obtained with PFI. In the second analysis, correlations (>0.89) for RFI(EL) with the other RFI estimates suggest little difference between RFI(EL) and any of these RFI estimates. In addition, results suggest that the RFI estimates calculated with PFI would be better able to identify animals with low OFI and small ADG as inefficient compared with RFI(EL). These results may be due to the fact that computer models predict performance on an individual-animal basis in contrast to a LSM, which estimates a fixed relationship for all animals; hence, the BSM may provide RFI estimates that are closer to the true biological efficiency of animals. In addition, BSM may facilitate comparisons across different data sets and provide more accurate estimates of efficiency in small data sets where errors would be greater with a LSM.
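
    A minimal sketch of how residual feed intake and feed conversion ratio can be computed from performance traits with a linear statistical model, using simulated stand-in values; the study's within-animal regressions over the 120-d finishing period are assumed to have already produced the intake and gain values.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n = 1212

      # Simulated stand-ins for the production traits (illustrative values)
      initial_bw = rng.normal(380, 35, size=n)          # kg
      adg = rng.normal(1.6, 0.2, size=n)                # kg/d
      ofi = 2.0 + 0.015 * initial_bw + 4.5 * adg + rng.normal(0, 0.6, size=n)  # kg DM/d

      # Residual feed intake: residual of observed intake regressed on performance traits
      X = sm.add_constant(np.column_stack([initial_bw, adg]))
      rfi = sm.OLS(ofi, X).fit().resid

      # Feed conversion ratio over the finishing period
      fcr = ofi / adg

      print("RFI mean (should be ~0):", rfi.mean())
      print("corr(RFI, FCR):", np.corrcoef(rfi, fcr)[0, 1])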

  13. Parameter estimation for a cohesive sediment transport model by assimilating satellite observations in the Hangzhou Bay: Temporal variations and spatial distributions

    NASA Astrophysics Data System (ADS)

    Wang, Daosheng; Zhang, Jicai; He, Xianqiang; Chu, Dongdong; Lv, Xianqing; Wang, Ya Ping; Yang, Yang; Fan, Daidu; Gao, Shu

    2018-01-01

    Model parameters in the suspended cohesive sediment transport models are critical for the accurate simulation of suspended sediment concentrations (SSCs). Difficulties in estimating the model parameters still prevent numerical modeling of the sediment transport from achieving a high level of predictability. Based on a three-dimensional cohesive sediment transport model and its adjoint model, the satellite remote sensing data of SSCs during both spring tide and neap tide, retrieved from Geostationary Ocean Color Imager (GOCI), are assimilated to synchronously estimate four spatially and temporally varying parameters in the Hangzhou Bay in China, including settling velocity, resuspension rate, inflow open boundary conditions and initial conditions. After data assimilation, the model performance is significantly improved. Through several sensitivity experiments, the spatial and temporal variation tendencies of the estimated model parameters are verified to be robust and not affected by model settings. The pattern for the variations of the estimated parameters is analyzed and summarized. The temporal variations and spatial distributions of the estimated settling velocity are negatively correlated with current speed, which can be explained using the combination of flocculation process and Stokes' law. The temporal variations and spatial distributions of the estimated resuspension rate are also negatively correlated with current speed, which are related to the grain size of the seabed sediments under different current velocities. Besides, the estimated inflow open boundary conditions reach the local maximum values near the low water slack conditions and the estimated initial conditions are negatively correlated with water depth, which is consistent with the general understanding. The relationships between the estimated parameters and the hydrodynamic fields can be suggestive for improving the parameterization in cohesive sediment transport models.

  14. Risk stratification and staging in prostate cancer with prostatic specific membrane antigen PET/CT: A one-stop-shop.

    PubMed

    Gupta, Manoj; Choudhury, Partha Sarathi; Rawal, Sudhir; Goel, Harish Chandra; Singh, Amitabh; Talwar, Vineet; Sahoo, Saroj Kumar

    2017-01-01

    Current imaging modalities for prostate cancer (PC) have limitations for risk stratification and staging. Magnetic resonance imaging (MRI) frequently underestimates lymphatic metastasis, while bone scintigraphy often presents diagnostic dilemmas. Prostatic specific membrane antigen (PSMA) positron emission tomography-computed tomography (PET/CT) has been remarkable in diagnosing PC recurrence and staging. We hypothesized that it can become a one-stop-shop for initial risk stratification and staging. Ninety-seven PSMA PET/CT studies were reanalysed for tumor, node, metastasis (TNM) staging and for risk stratification of the proportions of lymphatic and distant metastases. Histopathology was available as the gold standard for 23/97 patients. The chi-square test was used for comparison of proportions. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), over-estimation, under-estimation and correct estimation of T and N stages were calculated. Cohen's kappa coefficient (κ) was derived for inter-rater agreement. Detection of lymphatic or distant metastases on PSMA PET/CT increased significantly with increasing risk category. PSMA PET/CT sensitivity, specificity, PPV and NPV for extraprostatic extension (EPE), seminal vesicle invasion (SVI) and lymphatic metastases were 63.16%, 100%, 100%, 36.36%; 55%, 100%, 100%, 25%; and 65.62%, 99.31%, 87.50%, 97.53%, respectively. Cohen's kappa coefficient showed substantial agreement between PSMA PET/CT and histopathological lymphatic metastases (κ = 0.734); however, it was only in fair agreement (κ = 0.277) with T stage. PSMA PET/CT over-estimated, under-estimated and correctly estimated T and N stages in 8.71%, 39.13%, 52.17% and 8.71%, 4.35%, 86.96% of cases, respectively. We found that PSMA PET/CT has potential for initial risk stratification, with reasonably correct estimation of N stage. However, it can underestimate T stage. Hence, we suggest that PSMA PET/CT be used for staging and initial risk stratification of PC as a one-stop-shop, together with regional MRI in surgically resectable cases.
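
    For reference, the diagnostic summary statistics and Cohen's kappa reported above can be computed from a 2x2 table as in the sketch below; the counts used are hypothetical, not those of the study.

      import numpy as np

      def diagnostic_metrics(tp, fp, fn, tn):
          """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          ppv = tp / (tp + fp)
          npv = tn / (tn + fn)
          return sens, spec, ppv, npv

      def cohens_kappa(tp, fp, fn, tn):
          """Cohen's kappa for agreement between the imaging call and histopathology."""
          n = tp + fp + fn + tn
          observed = (tp + tn) / n
          expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
          return (observed - expected) / (1.0 - expected)

      # Hypothetical 2x2 table (imaging vs. histopathology); counts are illustrative.
      tp, fp, fn, tn = 21, 3, 11, 142
      print(diagnostic_metrics(tp, fp, fn, tn))
      print(cohens_kappa(tp, fp, fn, tn))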

  15. Genetic diversity of a newly established population of golden eagles on the Channel Islands, California

    USGS Publications Warehouse

    Sonsthagen, Sarah A.; Coonan, Timothy J.; Latta, Brian C.; Sage, George K.; Talbot, Sandra L.

    2012-01-01

    Gene flow can have profound effects on the genetic diversity of a founding population, depending on the number of and relationships among colonizers and the duration of the colonization event. Here we used data from nuclear microsatellite and mitochondrial DNA control region loci to assess genetic diversity in golden eagles of the recently colonized Channel Islands, California. Genetic diversity in the Channel Island population was low, similar to signatures observed for other recently colonized island populations. Differences in levels of genetic diversity and structure observed between mainland California and the islands suggest that few individuals, possibly comprising a family group, were involved in the initial founding event. The spatial genetic structure observed between Channel Island and mainland California golden eagle populations across marker types, and the genetic signature of population decline observed for the Channel Island population, suggest a single or relatively quick colonization event. Polarity in gene flow estimates based on mtDNA confirms an initial colonization of the Channel Islands by mainland golden eagles, but estimates from microsatellite data suggest that golden eagles on the islands have more recently been dispersing back to the mainland, possibly after reaching the carrying capacity of the island system. These results illustrate the strength of founding events in shaping the genetic diversity of a population, and confirm that changes to genetic diversity can occur within just a few generations.

  16. Detection of water vapor on Jupiter

    NASA Technical Reports Server (NTRS)

    Larson, H. P.; Fink, U.; Treffers, R.; Gautier, T. N., III

    1975-01-01

    High-altitude (12.4 km) spectroscopic observations of Jupiter at 5 microns from the NASA 91.5 cm airborne infrared telescope have revealed 14 absorptions assigned to the rotation-vibration spectrum of water vapor. Preliminary analysis indicates a mixing ratio of about one part per million for the vapor phase of water. Estimates of temperature (greater than about 300 K) and pressure (less than 20 atm) suggest that the observations probe water deep in Jupiter's hot spots, which are responsible for its 5 micron flux. Model-atmosphere calculations based on radiative-transfer theory may change these initial estimates and provide a better physical picture of Jupiter's atmosphere below the visible cloud tops.

  17. Comparison of Smoking History Patterns Among African American and White Cohorts in the United States Born 1890 to 1990.

    PubMed

    Holford, Theodore R; Levy, David T; Meza, Rafael

    2016-04-01

    Characterizing smoking history patterns summarizes life-course exposure for birth cohorts, which is essential for evaluating the impact of tobacco control on health. Limited attention has been given to patterns among African Americans. Life-course smoking histories of African Americans and whites were estimated beginning with the 1890 birth cohort. Estimates of smoking initiation and cessation probabilities and of smoking intensity can be used as a baseline for studying smoking intervention strategies that target smoking exposure. US National Health Interview Surveys conducted from 1965 to 2012 yielded cross-sectional information on current smoking behavior among African Americans and whites. Additional detail for smokers, including age at initiation, age at cessation and smoking intensity, was available in some surveys, and these data were used to construct smoking histories for participants up to the date they were interviewed. Age-period-cohort models with constrained natural splines provided estimates of current-, former- and never-smoker prevalence in cohorts beginning in 1890. This approach yielded yearly estimates of initiation, cessation and smoking intensity by age for each birth cohort. Smoking initiation probabilities tend to be lower among African Americans than among whites, and cessation probabilities are also generally lower. Higher initiation leads to higher smoking prevalence among whites at younger ages, but lower cessation leads to higher prevalence among African Americans at older ages, when the adverse health effects of smoking become most apparent. These estimates provide a summary that can be used to better understand the effects of changes in smoking behavior following publication of the Surgeon General's Report in 1964. A novel method of estimating smoking histories was applied to data from the National Health Interview Surveys, providing an extensive summary of the smoking history of this population following publication of the Surgeon General's Report in 1964. The results suggest that some of the existing disparities in smoking-related disease may be due to lower cessation rates in African Americans compared with whites. However, the number of cigarettes smoked is also lower among African Americans. Further work is needed to determine the mechanisms by which smoking duration and intensity account for racial disparities in smoking-related diseases. © The Author 2016. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System

    PubMed Central

    Chen, Jing; Zhou, Zixiang; Leng, Zhen; Fan, Lei

    2018-01-01

    The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper aims to propose a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, then estimate the accelerometer bias separately, which is difficult to be distinguished under small rotation. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods can achieve good initial state estimation, the gravity refinement approach is able to efficiently speed up the convergence process of the estimated gravity vector, and the termination criterion performs well. PMID:29419751
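
    A minimal sketch of the idea of refining a gravity estimate by optimizing a two-dimensional perturbation in the tangent space of the sphere of fixed gravity magnitude; the residual function, step rule and numbers below are illustrative assumptions, not the paper's actual visual-inertial cost.

      import numpy as np

      GRAVITY_MAGNITUDE = 9.81  # known magnitude constraint

      def tangent_basis(g_hat):
          """Two orthonormal vectors spanning the tangent plane of the sphere at g_hat."""
          tmp = np.array([1.0, 0.0, 0.0])
          if abs(g_hat @ tmp) > 0.9:             # avoid a near-parallel helper vector
              tmp = np.array([0.0, 1.0, 0.0])
          b1 = np.cross(g_hat, tmp); b1 /= np.linalg.norm(b1)
          b2 = np.cross(g_hat, b1)
          return b1, b2

      def refine_gravity(g_init, residual_fn, n_iter=10, eps=1e-5, step=1.0):
          """Refine a gravity estimate via a 2-D perturbation on the fixed-magnitude sphere.

          residual_fn(g) -> residual vector; a numerical Gauss-Newton step is used here
          purely to illustrate the 2-D tangent-space parameterization.
          """
          g = GRAVITY_MAGNITUDE * g_init / np.linalg.norm(g_init)
          for _ in range(n_iter):
              g_hat = g / np.linalg.norm(g)
              b1, b2 = tangent_basis(g_hat)
              def retract(delta):
                  v = g + GRAVITY_MAGNITUDE * (delta[0] * b1 + delta[1] * b2)
                  return GRAVITY_MAGNITUDE * v / np.linalg.norm(v)
              r0 = residual_fn(g)
              J = np.column_stack([(residual_fn(retract(np.array([eps, 0.0]))) - r0) / eps,
                                   (residual_fn(retract(np.array([0.0, eps]))) - r0) / eps])
              delta = -step * np.linalg.lstsq(J, r0, rcond=None)[0]
              g = retract(delta)
          return g

      # Toy residual: distance of the estimate from a "true" gravity vector
      g_true = GRAVITY_MAGNITUDE * np.array([0.1, -0.05, -0.99]) / np.linalg.norm([0.1, -0.05, -0.99])
      g0 = np.array([0.0, 0.0, -1.0])
      g_ref = refine_gravity(g0, lambda g: g - g_true)
      print(np.linalg.norm(g_ref - g_true))   # should shrink toward zero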

  19. Analysis of Electrocardiograms Associated with Pediatric Electrical Burns.

    PubMed

    McLeod, Jennifer S; Maringo, Alison E; Doyle, Patrick J; Vitale, Lisa; Klein, Justin D; Shanti, Christina M

    2017-05-26

    The purpose of this study was to examine the utility of electrocardiograms (EKGs) for low-risk, low-voltage pediatric electrical burn victims. A retrospective chart review was conducted on 86 pediatric patients who presented to the children's hospital between 2000 and 2015 after sustaining electrical burns. Variables included source and estimated voltage, extent of injuries, length of stay, high-risk factors, and EKG results. High-risk factors included estimated voltage > 1000 V, lightning, tetany, symptoms, loss of consciousness, or seizures. Statistical analyses were conducted. Average age was 5 years. Of those who sustained burns, 84.5% (n = 71/84) had second-degree burns of ≤ 1% TBSA. Eleven patients (12.9%; n = 11/85) had high-risk factors, and most had a length of stay < 3 days (91.8%; n = 78/85). The majority sustained burns from low-voltage (< 300 V) household electrical outlets, cords, or light bulb sockets (90.4%; n = 75/83). Among patients with available EKGs, 12 (20.7%; n = 12/58) had arrhythmias on the initial EKG (i.e., low right atrial rhythm, t-wave inversions, sinus tachycardia, bundle branch block). All were transient and nonfatal. The data suggest that low estimated voltage (< 300 V) electrical injuries were associated with negative EKGs; however, owing to the low rate of arrhythmias, a Fisher's exact test did not show significance, P = 0.09 (P > 0.05). Preliminary data suggest that most pediatric electrical burns are due to low-voltage (< 300 V) household sources. Few patients have high-risk factors, and the arrhythmias observed were transient and nonfatal. These data suggest that low-risk, asymptomatic, low-voltage pediatric electrical burns may not require an initial screening EKG.

  20. The initial spin period of magnetar-like pulsar PSR J1846-0258 in Kes 75

    NASA Astrophysics Data System (ADS)

    Gelfand, Joseph; Slane, Patrick

    2012-07-01

    While the origin of the ultra-strong surface magnetic fields believed to be present in magnetars is unknown, one of the leading theories is that magnetars are born spinning very rapidly, with initial spin periods on the order of 2 ms. Unfortunately, it has not been possible to directly measure the initial spin-period due to the lack of detected pulsar wind nebulae around these neutron stars. The recent detection of magnetar-like X-ray flares from PSR J1846-0258 in SNR Kes 75 suggests this neutron star, which powers a well-studied pulsar wind nebula, is a magnetar. I will present an estimate of the initial spin period of this neutron star from a detailed study of its pulsar wind nebula, and discuss its implications for the formation of magnetars.

  1. Uranium/Thorium Dating and Growth Laminae Counting of Stalagmites Reveal a Record of Major Earthquakes in the Midwestern US

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Lundstrom, C.; Panno, S.; Hackley, K. C.; Fouke, B. W.; Curry, B.

    2009-12-01

    The recurrence interval of large New Madrid Seismic Zone (NMSZ) earthquakes is uncertain because of the limited number and likely incomplete nature of the record of dated seismic events. Data on paleoseismicity in this area are necessary for refining estimates of a recurrence interval for these earthquakes and for characterizing the geophysical nature of the NMSZ. Studies of the paleoseismic history of the NMSZ have previously used liquefaction features and flood plain deposits along the Mississippi River to estimate recurrence intervals, with considerable uncertainties. More precise estimates of the number and ages of paleoseismic events would enhance the ability of federal, state, and local agencies to make critical preparedness decisions. Initiation of new speleothems (cave deposits) has been shown in several localities to record large earthquake events. Our ongoing work in caves of southwestern Illinois, Missouri, Indiana and Arkansas has used both U/Th age dating techniques and growth laminae counting of actively growing stalagmites to determine the age of initiation of stalagmites in caves across the Midwestern U.S. These initiation ages cluster around two known events, the great NMSZ earthquakes of 1811-1812 and the Missouri earthquake of 1917, suggesting that cave deposits in this region constitute a unique record of the paleoseismic history of the NMSZ. Furthermore, the U-Th disequilibria and growth laminae ages of young, white stalagmites and of the older stalagmites on which they grew, plus published Holocene stalagmite ages of initiation and regrowth from Missouri caves, are all coincident with suspected NMSZ earthquakes identified from liquefaction and other paleoseismic techniques. We hypothesize that these speleothems were initiated by earthquake-induced opening or closing of fracture-controlled flowpaths in the ceilings of cave passages.

  2. An economic analysis of conservative management versus active treatment for men with localized prostate cancer.

    PubMed

    Perlroth, Daniella J; Bhattacharya, Jay; Goldman, Dana P; Garber, Alan M

    2012-12-01

    Comparative effectiveness research suggests that conservative management (CM) strategies are no less effective than active initial treatment for many men with localized prostate cancer. We estimate longer-term costs of initial management strategies and potential US health expenditure savings from increased use of conservative management for men with localized prostate cancer. Five-year total health expenditures attributed to initial management strategies for localized prostate cancer were calculated using commercial claims data from 1998 to 2006, and savings were estimated from a US population health-care expenditure model. Our analysis finds that patients receiving combinations of active treatments have the highest additional costs over conservative management at $63 500, followed by $48 550 for intensity-modulated radiation therapy, $37 500 for primary androgen deprivation therapy, and $28 600 for brachytherapy. Radical prostatectomy ($15 200) and external beam radiation therapy ($18 900) were associated with the lowest costs. The population model estimated that US health expenditures could be lowered by 1) use of initial CM over all active treatment ($2.9-3.25 billion annual savings), 2) shifting patients receiving intensity-modulated radiation therapy to CM ($680-930 million), 3) foregoing primary androgen deprivation therapy ($555 million), 4) reducing the use of adjuvant androgen deprivation in addition to local therapies ($630 million), and 5) using single treatments rather than combination local treatment ($620-655 million). In conclusion, we find that all active treatments are associated with higher longer-term costs than CM. Substantial savings, representing up to 30% of total costs, could be realized by adopting CM strategies, including active surveillance, for initial management of men with localized prostate cancer.

  3. An instrumental variable approach finds no associated harm or benefit from early dialysis initiation in the United States

    PubMed Central

    Scialla, Julia J.; Liu, Jiannong; Crews, Deidra C.; Guo, Haifeng; Bandeen-Roche, Karen; Ephraim, Patti L.; Tangri, Navdeep; Sozio, Stephen M.; Shafi, Tariq; Miskulin, Dana C.; Michels, Wieneke M.; Jaar, Bernard G.; Wu, Albert W.; Powe, Neil R.; Boulware, L. Ebony

    2014-01-01

    The estimated glomerular filtration rate (eGFR) at dialysis initiation has been rising. Observational studies suggest harm, but may be confounded by unmeasured factors. As instrumental variable methods may be less biased, we performed a retrospective cohort study of 310,932 patients starting dialysis between 2006 and 2008 and registered in the United States Renal Data System in order to describe geographic variation in eGFR at dialysis initiation and determine its association with mortality. Patients were grouped into 804 health service areas by zip code. Individual eGFR at dialysis initiation averaged 10.8 ml/min/1.73 m2 but varied geographically. Only 11% of the variation in mean health service area-level eGFR at dialysis initiation was accounted for by patient characteristics. We calculated the demographic-adjusted mean eGFR at dialysis initiation in each health service area using the 2006 and 2007 incident cohorts as our instrument and estimated the association between individual eGFR at dialysis initiation and mortality in the 2008 incident cohort using the two-stage residual inclusion method. Among 89,547 patients starting dialysis in 2008 with eGFR of 5 to 20 ml/min/1.73 m2, eGFR at initiation was not associated with mortality over a median of 15.5 months [hazard ratio 1.025 per 1 ml/min/1.73 m2 for eGFR 5 to 14 ml/min/1.73 m2; and 0.973 per 1 ml/min/1.73 m2 for eGFR 14 to 20 ml/min/1.73 m2]. Thus, there was no associated harm or benefit from early dialysis initiation in the United States. PMID:24786707
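
    A minimal sketch of the two-stage residual inclusion approach described above, on simulated data with an area-level instrument and an unmeasured confounder; a logistic second stage stands in for the survival model used in the study, and all variable names and coefficients are hypothetical.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 5000

      # Simulated stand-ins: Z is the instrument (demographic-adjusted area-level mean
      # eGFR at initiation), X a measured confounder, U an unmeasured confounder.
      Z = rng.normal(10.8, 1.0, size=n)
      X = rng.normal(size=n)
      U = rng.normal(size=n)
      egfr_init = 0.8 * Z + 0.5 * X + 0.7 * U + rng.normal(size=n)      # exposure
      death = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 0.0 * egfr_init + 0.6 * U + 0.3 * X))))

      # Stage 1: exposure on instrument + covariates; keep the residual
      stage1 = sm.OLS(egfr_init, sm.add_constant(np.column_stack([Z, X]))).fit()
      resid = stage1.resid

      # Stage 2: outcome on exposure, covariates and the stage-1 residual (2SRI).
      # A logistic model stands in here for the survival model used in the study.
      X2 = sm.add_constant(np.column_stack([egfr_init, X, resid]))
      stage2 = sm.Logit(death, X2).fit(disp=0)
      print(stage2.params[1])   # 2SRI estimate of the exposure effect (true effect is 0 here)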

  4. Cost effectiveness of the Oregon quitline "free patch initiative".

    PubMed

    Fellows, Jeffrey L; Bush, Terry; McAfee, Tim; Dickerson, John

    2007-12-01

    We estimated the cost effectiveness of the Oregon tobacco quitline's "free patch initiative" compared to the pre-initiative programme. Using quitline utilisation and cost data from the state, intervention providers and patients, we estimated annual programme use and costs for media promotions and intervention services. We also estimated annual quitline registration calls and the number of quitters and life years saved for the pre-initiative and free patch initiative programmes. Service utilisation and 30-day abstinence at six months were obtained from 959 quitline callers. We compared the cost effectiveness of the free patch initiative (media and intervention costs) to the pre-initiative service offered to insured and uninsured callers. We conducted sensitivity analyses on key programme costs and outcomes by estimating a best case and worst case scenario for each intervention strategy. Compared to the pre-intervention programme, the free patch initiative doubled registered calls, increased quitting fourfold and reduced total costs per quit by $2688. We estimated annual paid media costs were $215 per registered tobacco user for the pre-initiative programme and less than $4 per caller during the free patch initiative. Compared to the pre-initiative programme, incremental quitline promotion and intervention costs for the free patch initiative were $86 (range $22-$353) per life year saved. Compared to the pre-initiative programme, the free patch initiative was a highly cost effective strategy for increasing quitting in the population.

  5. Cost-benefit analysis of biopsy methods for suspicious mammographic lesions; discussion 994-5.

    PubMed

    Fahy, B N; Bold, R J; Schneider, P D; Khatri, V; Goodnight, J E

    2001-09-01

    We hypothesized that stereotactic core biopsy (SCB) is more cost-effective than needle-localized biopsy (NLB) for the evaluation and treatment of mammographic lesions. A computer-generated mathematical model based on clinical outcome modeling was developed to estimate the costs accrued during evaluation and treatment of suspicious mammographic lesions. Total costs were determined for evaluation and subsequent treatment of cancer when either SCB or NLB was used as the initial biopsy method. Cost was estimated by the cumulative work relative value units (RVUs) accrued. The risk of malignancy based on the Breast Imaging Reporting and Data System (BIRADS) score and the mammographic suspicion of ductal carcinoma in situ were varied to simulate common clinical scenarios. The main outcome measure was the total cost accumulated during evaluation and subsequent surgical therapy (if required). Evaluation of BIRADS 5 lesions (highly suggestive; risk of malignancy = 90%) resulted in equivalent RVUs for both techniques (SCB, 15.54; NLB, 15.47). Evaluation of lesions highly suspicious for ductal carcinoma in situ yielded similar total treatment RVUs (SCB, 11.49; NLB, 10.17). Only for evaluation of BIRADS 4 lesions (suspicious abnormality; risk of malignancy = 34%) was SCB more cost-effective than NLB (SCB, 7.65 vs. NLB, 15.66). No difference in cost-benefit was found when lesions highly suggestive of malignancy (BIRADS 5) or those suspicious for ductal carcinoma in situ were evaluated initially with SCB rather than NLB, thereby disproving the hypothesis. Only for intermediate-risk lesions (BIRADS 4) did initial evaluation with SCB yield a greater cost savings than NLB.

  6. Estimating evolutionary rates in giant viruses using ancient genomes

    PubMed Central

    Duchêne, Sebastián

    2018-01-01

    Pithovirus sibericum is a giant (610 kbp) double-stranded DNA virus discovered in a purportedly 30,000-year-old permafrost sample. A closely related virus, Pithovirus massiliensis, was recently isolated from a sewer in southern France. An initial comparison of these two virus genomes assumed that P. sibericum was directly ancestral to P. massiliensis and gave a maximum evolutionary rate of 2.60 × 10−5 nucleotide substitutions per site per year (subs/site/year). If correct, this would make pithoviruses among the fastest-evolving DNA viruses, with rates close to those seen in some RNA viruses. To help determine whether this unusually high rate is accurate, we utilized the well-known negative association between evolutionary rate and genome size in DNA microbes. This revealed that a more plausible rate estimate for Pithovirus evolution is ∼2.23 × 10−6 subs/site/year, with even lower estimates obtained if evolutionary rates are assumed to be time-dependent. Hence, we estimate that Pithovirus has evolved at least an order of magnitude more slowly than previously suggested. We then used our new rate estimates to infer a time-scale for Pithovirus evolution. Strikingly, this suggests that these viruses could have diverged at least hundreds of thousands of years ago, and hence have evolved over longer time-scales than previously suggested. We propose that the evolutionary rate and time-scale of pithovirus evolution should be reconsidered in light of these observations and that future estimates of the rate of giant virus evolution should be carefully examined in the context of their biological plausibility. PMID:29511572

  7. A latent transition model of the effects of a teen dating violence prevention initiative.

    PubMed

    Williams, Jason; Miller, Shari; Cutbush, Stacey; Gibbs, Deborah; Clinton-Sherrod, Monique; Jones, Sarah

    2015-02-01

    Patterns of physical and psychological teen dating violence (TDV) perpetration, victimization, and related behaviors were examined with data from the evaluation of the Start Strong: Building Healthy Teen Relationships initiative, a dating violence primary prevention program targeting middle school students. Latent class and latent transition models were used to estimate distinct patterns of TDV and related behaviors of bullying and sexual harassment in seventh grade students at baseline and to estimate transition probabilities from one pattern of behavior to another at the 1-year follow-up. Intervention effects were estimated by conditioning transitions on exposure to Start Strong. Latent class analyses suggested four classes best captured patterns of these interrelated behaviors. Classes were characterized by elevated perpetration and victimization on most behaviors (the multiproblem class), bullying perpetration/victimization and sexual harassment victimization (the bully-harassment victimization class), bullying perpetration/victimization and psychological TDV victimization (bully-psychological victimization), and experience of bully victimization (bully victimization). Latent transition models indicated greater stability of class membership in the comparison group. Intervention students were less likely to transition to the most problematic pattern and more likely to transition to the least problem class. Although Start Strong has not been found to significantly change TDV, alternative evaluation models may find important differences. Latent transition analysis models suggest positive intervention impact, especially for the transitions at the most and the least positive end of the spectrum. Copyright © 2015. Published by Elsevier Inc.

  8. Valuing Quiet: An Economic Assessment of U.S. Environmental Noise as a Cardiovascular Health Hazard.

    PubMed

    Swinburn, Tracy K; Hammer, Monica S; Neitzel, Richard L

    2015-09-01

    Environmental noise pollution increases the risk for hearing loss, stress, sleep disruption, annoyance, and cardiovascular disease and has other adverse health impacts. Recent (2013) estimates suggest that more than 100 million Americans are exposed to unhealthy levels of noise. Given the pervasive nature and significant health effects of environmental noise pollution, the corresponding economic impacts may be substantial. This 2014 economic assessment developed a new approach to estimate the impact of environmental noise on the prevalence and cost of key components of hypertension and cardiovascular disease in the U.S. By placing environmental noise in context with comparable environmental pollutants, this approach can inform public health law, planning, and policy. The effects of hypothetical national-scale changes in environmental noise levels on the prevalence and corresponding costs of hypertension and coronary heart disease were estimated, with the caveat that the national-level U.S. noise data our exposure estimates were derived from are >30 years old. The analyses suggested that a 5-dB noise reduction scenario would reduce the prevalence of hypertension by 1.4% and coronary heart disease by 1.8%. The annual economic benefit was estimated at $3.9 billion. These findings suggest significant economic impacts from environmental noise-related cardiovascular disease. Given these initial findings, noise may deserve increased priority and research as an environmental health hazard. Copyright © 2015 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  9. Parameter estimation in plasmonic QED

    NASA Astrophysics Data System (ADS)

    Jahromi, H. Rangani

    2018-03-01

    We address the problem of parameter estimation in the presence of plasmonic modes that manipulate emitted light via localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, which measures the fraction of emitted energy captured by waveguide surface plasmons. The best strategy for obtaining the most accurate estimate of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product state. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), and therefore the vanishing of the QFI, which measures the precision of the estimation, is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. The one-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe, at any arbitrary time, considerably enhances the precision of estimation in comparison with one-qubit estimation.

  10. Joint reconstruction of the initial pressure and speed of sound distributions from combined photoacoustic and ultrasound tomography measurements

    NASA Astrophysics Data System (ADS)

    Matthews, Thomas P.; Anastasio, Mark A.

    2017-12-01

    The initial pressure and speed of sound (SOS) distributions cannot both be stably recovered from photoacoustic computed tomography (PACT) measurements alone. Adjunct ultrasound computed tomography (USCT) measurements can be employed to estimate the SOS distribution. Under the conventional image reconstruction approach for combined PACT/USCT systems, the SOS is estimated from the USCT measurements alone and the initial pressure is estimated from the PACT measurements by use of the previously estimated SOS. This approach ignores the acoustic information in the PACT measurements and may require many USCT measurements to accurately reconstruct the SOS. In this work, a joint reconstruction method where the SOS and initial pressure distributions are simultaneously estimated from combined PACT/USCT measurements is proposed. This approach allows accurate estimation of both the initial pressure distribution and the SOS distribution while requiring few USCT measurements.

  11. Timing of dialysis initiation in transplant-naive and failed transplant patients

    PubMed Central

    Molnar, Miklos Z.; Ojo, Akinlolu O.; Bunnapradist, Suphamai; Kovesdy, Csaba P.; Kalantar-Zadeh, Kamyar

    2017-01-01

    Over the past two decades, most guidelines have advocated early dialysis initiation on the basis of studies showing improved survival in patients starting dialysis early. These recommendations led to an increase in the proportion of patients initiating dialysis with an estimated glomerular filtration rate (eGFR) >10 ml/min/1.73 m2, from 20% in 1996 to 52% in 2008. During this period, patients starting dialysis with an eGFR ≥15 ml/min/1.73 m2 increased from 4% to 17%. However, recent studies have failed to substantiate a benefit of early dialysis initiation and some data have suggested worse outcomes in patients starting dialysis with a higher eGFR. Several reasons for this seemingly paradoxical observation have been suggested, including the fact that patients requiring early dialysis are likely to have more severe symptoms and comorbidities, leading to confounding by indication, as well as biological mechanisms that causally relate early dialysis therapy to adverse outcomes. Dialysis reinitiation in patients with a failing renal allograft encounters similar problems. However, unique factors associated with a failed allograft means that the optimal timing of dialysis initiation in failed transplant patients might differ from that in transplant-naive patients. In this Review, we will discuss studies of dialysis initiation and compare risks and benefits of early versus late dialysis therapy. PMID:22371250

  12. The Hadley circulation: assessing NCEP/NCAR reanalysis and sparse in-situ estimates

    NASA Astrophysics Data System (ADS)

    Waliser, D. E.; Shi, Zhixiong; Lanzante, J. R.; Oort, A. H.

    We present a comparison of the zonal mean meridional circulations derived from monthly in situ data (i.e. radiosondes and ship reports) and from the NCEP/NCAR reanalysis product. To facilitate the interpretation of the results, a third estimate of the mean meridional circulation is produced by subsampling the reanalysis at the locations where radiosonde and surface ship data are available for the in situ calculation. This third estimate, known as the subsampled estimate, is compared to the complete reanalysis estimate to assess biases in conventional, in situ estimates of the Hadley circulation associated with the sparseness of the data sources (i.e., radiosonde network). The subsampled estimate is also compared to the in situ estimate to assess the biases introduced into the reanalysis product by the numerical model, initialization process and/or indirect data sources such as satellite retrievals. The comparisons suggest that a number of qualitative differences between the in situ and reanalysis estimates are mainly associated with the sparse sampling and simplified interpolation schemes associated with in situ estimates. These differences include: (1) a southern Hadley cell that consistently extends up to 200 hPa in the reanalysis, whereas the bulk of the circulation for the in situ and subsampled estimates tends to be confined to the lower half of the troposphere, (2) more well-defined and consistent poleward limits of the Hadley cells in the reanalysis compared to the in-situ and subsampled estimates, and (3) considerably less variability in magnitude and latitudinal extent of the Ferrel cells and southern polar cell exhibited in the reanalysis estimate compared to the in situ and subsampled estimates. Quantitative comparison shows that the subsampled estimate, relative to the reanalysis estimate, produces a stronger northern Hadley cell (~20%), a weaker southern Hadley cell (~20-60%), and weaker Ferrel cells in both hemispheres. These differences stem from poorly measured oceanic regions which necessitate significant interpolation over broad regions. Moreover, they help to pinpoint specific shortcomings in the present and previous in situ estimates of the Hadley circulation. Comparisons between the subsampled and in situ estimates suggest that the subsampled estimate produces a slightly stronger Hadley circulation in both hemispheres, with the relative differences in some seasons as large as 20-30%. These differences suggest that the mean meridional circulation associated with the NCEP/NCAR reanalysis is more energetic than observations suggest. Examination of ENSO-related changes to the Hadley circulation suggests that the in situ and subsampled estimates significantly overestimate the effects of ENSO on the Hadley circulation due to the reliance on sparsely distributed data. While all three estimates capture the large-scale region of low-level equatorial convergence near the dateline that occurs during El Niño, the in situ and subsampled estimates fail to effectively reproduce the large-scale areas of equatorial mass divergence to the west and east of this convergence area, leading to an overestimate of the effects of ENSO on the zonal mean circulation.
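
    For reference, the zonal mean meridional (mass) streamfunction that underlies such Hadley circulation estimates can be computed from zonal-mean meridional wind on pressure levels as in the sketch below, assuming hydrostatic balance; the grid and toy wind field are illustrative.

      import numpy as np

      A_EARTH = 6.371e6      # Earth radius, m
      G = 9.81               # gravitational acceleration, m s^-2

      def meridional_streamfunction(v_zonal_mean, lats_deg, p_levels_pa):
          """Mass streamfunction psi(lat, p) = 2*pi*a*cos(lat)/g * integral_0^p [v] dp.

          v_zonal_mean: array (n_lat, n_lev) of zonal-mean meridional wind (m/s),
          with pressure levels ordered top (small p) to bottom (large p).
          """
          coslat = np.cos(np.deg2rad(lats_deg))[:, None]
          # cumulative trapezoidal integral of [v] in pressure from the top down
          dp = np.diff(p_levels_pa)
          v_mid = 0.5 * (v_zonal_mean[:, 1:] + v_zonal_mean[:, :-1])
          integral = np.concatenate([np.zeros((len(lats_deg), 1)),
                                     np.cumsum(v_mid * dp, axis=1)], axis=1)
          return 2.0 * np.pi * A_EARTH * coslat / G * integral   # kg s^-1

      # Toy single-cell example: poleward flow aloft, equatorward flow below
      lats = np.linspace(-30, 30, 31)
      p = np.linspace(10000.0, 100000.0, 19)                     # Pa, top to bottom
      v = np.outer(np.sin(np.deg2rad(3 * lats)), -np.cos(np.pi * (p - 10000) / 90000))
      psi = meridional_streamfunction(v, lats, p)
      print(psi.max(), "kg/s")                                   # maximum overturning of the toy cell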

  13. Spatio-temporal Variations in Slow Earthquakes along the Mexican Subduction Zone

    NASA Astrophysics Data System (ADS)

    Ide, S.; Maury, J.; Cruz-Atienza, V. M.; Kostoglodov, V.

    2017-12-01

    Slow earthquakes in Mexico have been investigated independently in different areas. Here, we review differences in tremor behavior and slow slip events along the entire subduction zone to improve our understanding of its segmentation. Some similarities are observed between the Guerrero and Oaxaca areas. By combining our improved tremor detection capabilities with previous results, we suggest that there is no gap in tremor between Guerrero and Oaxaca. However, some differences between Michoacan and Guerrero are seen (e.g., SSE magnitude, tremor zone width, tremor rate), suggesting that these two areas behave differently. Tremor initiation shows clear tidal sensitivity along the entire subduction zone. Tremor in Guerrero is sensitive to small tidal normal stress as well as shear stress, suggesting that the subduction plane may include local variations in dip. Estimation of the energy rate shows similar values along the subduction zone interface. The scaled tremor energy estimates are similar to those calculated in Nankai and Cascadia, suggesting a common mechanism. Along-strike differences in slow deformation may be related to variations in the subduction interface that yield different geometrical and temperature profiles.

  14. Spatiotemporal Variations in Slow Earthquakes Along the Mexican Subduction Zone

    NASA Astrophysics Data System (ADS)

    Maury, J.; Ide, S.; Cruz-Atienza, V. M.; Kostoglodov, V.

    2018-02-01

    Slow earthquakes in Mexico have been investigated independently in different areas. Here we review differences in tremor behavior and slow slip events along the entire subduction zone to improve our understanding of its segmentation. Some similarities are observed between the Guerrero and Oaxaca areas. By combining our improved tremor detection capabilities with previous results, we suggest that there is no gap in tremor between Guerrero and Oaxaca. However, some differences between Michoacan and Guerrero are seen (e.g., SSE magnitude, tremor zone width, and tremor rate), suggesting that these two areas behave differently. Tremor initiation shows clear tidal sensitivity along the entire subduction zone. Tremor in Guerrero is sensitive to small tidal normal stress as well as shear stress, suggesting that the subduction plane may include local variations in dip. Estimation of the energy rate shows similar values along the subduction zone interface. The scaled tremor energy estimates are similar to those calculated in Nankai and Cascadia, suggesting a common mechanism. Along-strike differences in slow deformation may be related to variations in the subduction interface that yield different geometrical and temperature profiles.

  15. Rapid decrement in the effects of the Ponzo display dissociates action and perception.

    PubMed

    Whitwell, Robert L; Buckingham, Gavin; Enns, James T; Chouinard, Philippe A; Goodale, Melvyn A

    2016-08-01

    It has been demonstrated that pictorial illusions have a smaller influence on grasping than they do on perceptual judgments. Yet to date this work has not considered the reduced influence of an illusion as it is measured repeatedly. Here we studied this decrement in the context of a Ponzo illusion to further characterize the dissociation between vision for perception and for action. Participants first manually estimated the lengths of single targets in a Ponzo display with their thumb and index finger, then actually grasped these targets in another series of trials, and then manually estimated the target lengths again in a final set of trials. The results showed that although the perceptual estimates and grasp apertures were equally sensitive to real differences in target length on the initial trials, only the perceptual estimates remained biased by the illusion over repeated measurements. In contrast, the illusion's effect on the grasps decreased rapidly, vanishing entirely after only a few trials. Interestingly, a closer examination of the grasp data revealed that this initial effect was driven largely by undersizing the grip aperture for the display configuration in which the target was positioned between the diverging background lines (i.e., when the targets appeared to be shorter than they really were). This asymmetry between grasping apparently shorter and longer targets suggests that the sensorimotor system may initially treat the edges of the configuration as obstacles to be avoided. This finding highlights the sensorimotor system's ability to rapidly update motor programs through error feedback, manifesting as an immunity to the effects of illusion displays even after only a few trials.

  16. Uplift in the Fiordland region, New Zealand: implications for incipient subduction.

    PubMed

    House, M A; Gurnis, M; Kamp, P J J; Sutherland, R

    2002-09-20

    Low-temperature thermochronometry reveals regional Late Cenozoic denudation in Fiordland, New Zealand, consistent with geodynamic models showing uplift of the overriding plate during incipient subduction. The data show a northward progression of exhumation in response to northward migration of the initiation of subduction. The locus of most recent uplift coincides with a large positive Bouguer gravity anomaly within Fiordland. Thermochronometrically deduced crustal thinning, anomalous gravity, and estimates of surface uplift are all consistent with approximately 2 kilometers of dynamic support. This amount of dynamic support is in accord with geodynamic predictions, suggesting that we have dated the initiation of subduction adjacent to Fiordland.

  17. Hybrid diversity method utilizing adaptive diversity function for recovering unknown aberrations in an optical system

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H. (Inventor)

    2009-01-01

    A method of recovering unknown aberrations in an optical system includes collecting intensity data produced by the optical system, generating an initial estimate of a phase of the optical system, iteratively performing a phase retrieval on the intensity data to generate a phase estimate using an initial diversity function corresponding to the intensity data, generating a phase map from the phase retrieval phase estimate, decomposing the phase map to generate a decomposition vector, generating an updated diversity function by combining the initial diversity function with the decomposition vector, and generating an updated estimate of the phase of the optical system by removing the initial diversity function from the phase map. The method may further include repeating the process beginning with iteratively performing a phase retrieval on the intensity data, using the updated estimate of the phase of the optical system in place of the initial estimate and the updated diversity function in place of the initial diversity function, until a predetermined convergence is achieved.
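    The claimed steps map naturally onto an outer correction loop. The sketch below is a structural illustration only, not the patented implementation: `phase_retrieval` is a trivial stand-in for the iterative-transform retrieval step, and the modal basis, array sizes, and inputs are hypothetical.

```python
import numpy as np

# Structural sketch of the hybrid-diversity loop described above (not the
# patented implementation). A real phase_retrieval would iterate between the
# pupil and image planes using the intensity data.

def phase_retrieval(intensity, diversity, true_aberration):
    """Trivial stand-in for the iterative-transform phase-retrieval step."""
    return true_aberration + diversity            # placeholder with the correct shape

def decompose(phase_map, basis):
    """Least-squares projection of a phase map onto a modal basis."""
    A = basis.reshape(basis.shape[0], -1).T       # (npix, nmodes)
    coeffs, *_ = np.linalg.lstsq(A, phase_map.ravel(), rcond=None)
    return coeffs

def reconstruct(coeffs, basis):
    return np.tensordot(coeffs, basis, axes=1)

rng = np.random.default_rng(1)
n, nmodes = 64, 6
basis = rng.normal(size=(nmodes, n, n))           # hypothetical modal basis (e.g., Zernike-like)
true_aberration = reconstruct(rng.normal(size=nmodes), basis)
intensity = rng.random((n, n))                    # collected intensity data (unused by the stub)
diversity = np.zeros((n, n))                      # initial diversity function
phase_estimate = np.zeros((n, n))                 # initial estimate of the phase

for outer in range(5):                            # repeat until convergence in practice
    phase_map = phase_retrieval(intensity, diversity, true_aberration)
    coeffs = decompose(phase_map, basis)          # decomposition vector
    updated_diversity = diversity + reconstruct(coeffs, basis)
    phase_estimate = phase_map - diversity        # remove the diversity from the phase map
    diversity = updated_diversity

print("residual aberration error:", np.abs(phase_estimate - true_aberration).max())
```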

  18. Quantifying the Transition from Active Surveillance to Watchful Waiting Among Men with Very Low-risk Prostate Cancer.

    PubMed

    Van Hemelrijck, Mieke; Garmo, Hans; Lindhagen, Lars; Bratt, Ola; Stattin, Pär; Adolfsson, Jan

    2017-10-01

    Active surveillance (AS) is commonly used for men with low-risk prostate cancer (PCa). When life expectancy becomes too short for curative treatment to be beneficial, a change from AS to watchful waiting (WW) follows. Little is known about this change since it is rarely documented in medical records. To model transition from AS to WW and how this is affected by age and comorbidity among men with very low-risk PCa. National population-based healthcare registers were used for analysis. Using data on PCa characteristics, age, and comorbidity, a state transition model was created to estimate the probability of changes between predefined treatments to estimate transition from AS to WW. Our estimates indicate that 48% of men with very low-risk PCa starting AS eventually changed to WW over a life course. This proportion increased with age at time of AS initiation. Within 10 yr from start of AS, 10% of men aged 55 yr and 50% of men aged 70 yr with no comorbidity at initiation changed to WW. Our prevalence simulation suggests that the number of men on WW who were previously on AS will eventually stabilise after 30 yr. A limitation is the limited information from clinical follow-up visits (eg, repeat biopsies). We estimated that changes from AS to WW become common among men with very low-risk PCa who are elderly. This potential change to WW should be discussed with men starting on AS. Moreover, our estimates may help in planning health care resources allocated to men on AS, as the transition to WW is associated with lower demands on outpatient resources. Changes from active surveillance to watchful waiting will become more common among men with very low-risk prostate cancer. These observations suggest that patients need to be informed about this potential change before they start on active surveillance. Copyright © 2016 European Association of Urology. Published by Elsevier B.V. All rights reserved.
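    A bare-bones discrete-time state-transition sketch of the kind described (AS, WW, and an absorbing state) is shown below. The yearly transition probabilities are hypothetical placeholders, not the registry-derived estimates, so the printed fraction only illustrates how a figure like the reported 48% would be produced.

```python
import numpy as np

# Minimal, illustrative state-transition sketch (not the authors' registry model).
# States: AS, WW, and an absorbing state (curative treatment or death).

P = np.array([
    [0.88, 0.07, 0.05],   # AS -> AS / WW / absorbing (hypothetical yearly probabilities)
    [0.00, 0.93, 0.07],   # WW -> WW / absorbing
    [0.00, 0.00, 1.00],   # absorbing state
])

dist = np.array([1.0, 0.0, 0.0])       # everyone starts on AS
ever_ww = 0.0
for year in range(40):                 # follow the cohort over a life course
    ever_ww += dist[0] * P[0, 1]       # probability mass newly moving AS -> WW this year
    dist = dist @ P

print(f"Fraction of AS starters ever changing to WW: {ever_ww:.2f}")
```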

  19. Monitoring vegetation conditions from LANDSAT for use in range management

    NASA Technical Reports Server (NTRS)

    Haas, R. H.; Deering, D. W.; Rouse, J. W., Jr.; Schell, J. A.

    1975-01-01

    A summary of the LANDSAT Great Plains Corridor projects and the principal results are presented. Emphasis is given to the use of satellite acquired phenological data for range management and agri-business activities. A convenient method of reducing LANDSAT MSS data to provide quantitative estimates of green biomass on rangelands in the Great Plains is explained. Suggestions for the use of this approach for evaluating range feed conditions are presented. A LANDSAT Follow-on project has been initiated which will employ the green biomass estimation method in a quasi-operational monitoring of range readiness and range feed conditions on a regional scale.
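    The band-ratio reduction referred to here is of the normalized-difference type these authors developed for MSS data. A hedged sketch follows, with made-up reflectances and purely illustrative regression coefficients, showing how an index value would be turned into a green-biomass estimate.

```python
import numpy as np

# Hedged illustration of a normalized-difference vegetation index computed
# from red and near-infrared bands; all values below are fabricated.

red = np.array([0.12, 0.10, 0.08])    # hypothetical red-band reflectances
nir = np.array([0.35, 0.42, 0.55])    # hypothetical near-infrared reflectances

ndvi = (nir - red) / (nir + red)

# Green biomass is then estimated from an empirical regression on the index,
# e.g. biomass = a + b * ndvi, with coefficients fit to clipped-plot data.
a, b = 100.0, 2500.0                   # purely illustrative coefficients (kg/ha)
biomass = a + b * ndvi
print(np.round(ndvi, 2), np.round(biomass, 0))
```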

  20. GROWTH AND INEQUALITY: MODEL EVALUATION BASED ON AN ESTIMATION-CALIBRATION STRATEGY

    PubMed Central

    Jeong, Hyeok; Townsend, Robert

    2010-01-01

    This paper evaluates two well-known models of growth with inequality that have explicit micro underpinnings related to household choice. With incomplete markets or transactions costs, wealth can constrain investment in business and the choice of occupation and also constrain the timing of entry into the formal financial sector. Using the Thai Socio-Economic Survey (SES), we estimate the distribution of wealth and the key parameters that best fit cross-sectional data on household choices and wealth. We then simulate the model economies for two decades at the estimated initial wealth distribution and analyze whether the model economies at those micro-fit parameter estimates can explain the observed macro and sectoral aspects of income growth and inequality change. Both models capture important features of Thai reality. Anomalies and comparisons across the two distinct models yield specific suggestions for improved research on the micro foundations of growth and inequality. PMID:20448833

  1. Climate Projections and Uncertainty Communication.

    PubMed

    Joslyn, Susan L; LeClerc, Jared E

    2016-01-01

    Lingering skepticism about climate change might be due in part to the way climate projections are perceived by members of the public. Variability between scientists' estimates might give the impression that scientists disagree about the fact of climate change rather than about details concerning the extent or timing. Providing uncertainty estimates might clarify that the variability is due in part to quantifiable uncertainty inherent in the prediction process, thereby increasing people's trust in climate projections. This hypothesis was tested in two experiments. Results suggest that including uncertainty estimates along with climate projections leads to an increase in participants' trust in the information. Analyses explored the roles of time, place, demographic differences (e.g., age, gender, education level, political party affiliation), and initial belief in climate change. Implications are discussed in terms of the potential benefit of adding uncertainty estimates to public climate projections. Copyright © 2015 Cognitive Science Society, Inc.

  2. Can even-order laser harmonics exhibited by Bohmian trajectories in symmetric potentials be observed?

    PubMed

    Peatross, J; Johansen, J

    2014-01-13

    Strong-field laser-atom interactions provide extreme conditions that may be useful for investigating the de Broglie-Bohm quantum interpretation. Bohmian trajectories representing bound electrons in individual atoms exhibit both even and odd harmonic motion when subjected to a strong external laser field. The phases of the even harmonics depend on the random initial positions of the trajectories within the wave function, making the even harmonics incoherent. In contrast, the phases of odd harmonics remain for the most part coherent regardless of initial position. Under the conjecture that a Bohmian point particle plays the role of emitter, this suggests an experiment to determine whether both even and odd harmonics are produced at the atomic level. Estimates suggest that incoherent emission of even harmonics may be detectable out the side of an intense laser focus interacting with a large number of atoms.

  3. Overuse Injury Assessment Model

    DTIC Science & Technology

    2003-06-01

    initial physical fitness level foot type lower extremity alignment altered gait pretest anthropometry diet and nutrition genetics endocrine status and...using published anthropometry values. Assuming that these forces are the primary loads that cause the tibia to undergo shear and bending, the maximal...both the model and in vivo results suggest that the ratio of walking to running bone stress is 0.54. Table 3-3 Estimated walk/march and run tensile

  4. Developmental trends in alcohol use initiation and escalation from early- to middle-adolescence: Prediction by urgency and trait affect

    PubMed Central

    Spillane, Nichea S.; Merrill, Jennifer E.; Jackson, Kristina M.

    2016-01-01

    Studies on adolescent drinking have not always been able to distinguish between initiation and escalation of drinking, because many studies include samples in which initiation has already occurred; hence initiation and escalation are often confounded. The present study draws from a dual-process theoretical framework to investigate whether changes in the likelihood of drinking initiation and escalation are predicted by a tendency towards rash action when experiencing positive and negative emotions (positive and negative urgency), and whether trait positive and negative affect moderate such effects. Alcohol-naïve adolescents (n=944; age: M=12.16, SD=.96; 52% female) completed 6 semi-annual assessments of trait urgency and affect (wave 1) and alcohol use (waves 2–6). A two-part random-effects model was used to estimate changes in the likelihood of any alcohol use vs. escalation in the volume of use amongst initiators. Main effects suggest a significant association between positive affect and change in level of alcohol use amongst initiators, such that lower positive affect predicted increased alcohol involvement. This main effect was qualified by a significant interaction between positive urgency and positive affect predicting changes in the escalation of drinking, such that the effect of positive urgency was augmented for those high on trait positive affect, though only at extremely high levels of positive affect. Results suggest that risk factors in the development of drinking depend on whether initiation or escalation is investigated. A more nuanced understanding of the early developmental phases of alcohol involvement can inform prevention and intervention efforts. PMID:27031086
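    A simplified two-part ("hurdle") sketch of this modeling strategy follows: a logistic part for any use versus none, and a linear part for volume among initiators. Random effects and the repeated-measures structure are omitted, and the simulated data and variable names are hypothetical, so this is only a sketch of the approach, not the study's model.

```python
import numpy as np
import statsmodels.api as sm

# Two-part ("hurdle") sketch: part 1 models whether any drinking occurs,
# part 2 models the level of use among initiators. Data are simulated.

rng = np.random.default_rng(7)
n = 500
pos_urgency = rng.normal(size=n)
pos_affect = rng.normal(size=n)
X = sm.add_constant(np.column_stack([pos_urgency, pos_affect,
                                     pos_urgency * pos_affect]))

# Simulated outcomes: initiation (0/1) and volume of use among initiators.
initiated = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-0.5 + 0.3 * pos_urgency))))
volume = 1.0 + 0.4 * pos_urgency * pos_affect + rng.normal(0, 1, n)

part1 = sm.Logit(initiated, X).fit(disp=False)       # any use vs. none
mask = initiated == 1
part2 = sm.OLS(volume[mask], X[mask]).fit()          # escalation among initiators

print("part 1 (logit) coefficients:", part1.params.round(2))
print("part 2 (OLS) coefficients:  ", part2.params.round(2))
```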

  5. How to perform measurements in a hovering animal's wake: physical modelling of the vortex wake of the hawkmoth, Manduca sexta.

    PubMed Central

    Tytell, Eric D; Ellington, Charles P

    2003-01-01

    The vortex wake structure of the hawkmoth, Manduca sexta, was investigated using a vortex ring generator. Based on existing kinematic and morphological data, a piston and tube apparatus was constructed to produce circular vortex rings with the same size and disc loading as a hovering hawkmoth. Results show that the artificial rings were initially laminar, but developed turbulence owing to azimuthal wave instability. The initial impulse and circulation were accurately estimated for laminar rings using particle image velocimetry; after the transition to turbulence, initial circulation was generally underestimated. The underestimate for turbulent rings can be corrected if the transition time and velocity profile are accurately known, but this correction will not be feasible for experiments on real animals. It is therefore crucial that the circulation and impulse be estimated while the wake vortices are still laminar. The scaling of the ring Reynolds number suggests that flying animals of about the size of hawkmoths may be the largest animals whose wakes stay laminar for long enough to perform such measurements during hovering. Thus, at low advance ratios, they may be the largest animals for which wake circulation and impulse can be accurately measured. PMID:14561347

  6. Testing for handling bias in survival estimation for black brant

    USGS Publications Warehouse

    Sedinger, J.S.; Lindberg, M.S.; Rexstad, E.A.; Chelgren, N.D.; Ward, D.H.

    1997-01-01

    We used an ultrastructure approach in program SURVIV to test for, and remove, bias in survival estimates for the year following mass banding of female black brant (Branta bernicla nigricans). We used relative banding-drive size as the independent variable to control for handling effects in our ultrastructure models, which took the form S = S0(1 - αD), where α was the handling effect and D was the ratio of banding-drive size to the largest banding drive. Brant were divided into 3 classes: goslings, initial captures, and recaptures, based on their state at the time of banding, because we anticipated the potential for heterogeneity in model parameters among classes of brant. Among models examined, for which α was not constrained, a model with α constant across classes of brant and years, constant survival rates among years for initially captured brant but year-specific survival rates for goslings and recaptures, and year- and class-specific detection probabilities had the lowest Akaike Information Criterion (AIC). The handling effect, α, was -0.47 ± 0.13 (SE), -0.14 ± 0.057, and -0.12 ± 0.049 for goslings, initially released adults, and recaptured adults, respectively. Gosling annual survival in the first year ranged from 0.738 ± 0.072 for the 1986 cohort to 0.260 ± 0.025 for the 1991 cohort. Inclusion of winter observations increased estimates of first-year survival rates by an average of 30%, suggesting that permanent emigration had an important influence on apparent survival, especially for later cohorts. We estimated annual survival for initially captured brant as 0.782 ± 0.013, while that for recaptures varied from 0.726 ± 0.034 to 0.900 ± 0.062. Our analyses failed to detect a negative effect of handling on survival of brant, which is consistent with a hypothesis of substantial inherent heterogeneity in post-fledging survival rates, such that individuals most likely to die as a result of handling also have lower inherent survival probabilities.
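    A quick numerical illustration of the ultrastructure model, using the handling-effect values reported above (with the parameter written as α, as in the cleaned equation) and a purely hypothetical baseline survival S0, shows why a negative α implies no detectable survival cost of handling.

```python
import numpy as np

# Evaluate S = S0 * (1 - alpha * D) across relative drive sizes. The alpha
# values are those reported in the abstract; S0 is a hypothetical round number
# chosen only for illustration.

S0 = 0.5                                    # hypothetical baseline annual survival
D = np.linspace(0.0, 1.0, 5)                # drive size relative to the largest drive
for label, alpha in [("goslings", -0.47),
                     ("initial captures", -0.14),
                     ("recaptures", -0.12)]:
    S = S0 * (1 - alpha * D)
    print(f"{label:17s}", np.round(S, 3))

# A negative alpha makes S non-decreasing in drive size, which is why the
# analysis detects no adverse handling effect.
```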

  7. Empirical evidence for resource-rational anchoring and adjustment.

    PubMed

    Lieder, Falk; Griffiths, Thomas L; M Huys, Quentin J; Goodman, Noah D

    2018-04-01

    People's estimates of numerical quantities are systematically biased towards their initial guess. This anchoring bias is usually interpreted as sign of human irrationality, but it has recently been suggested that the anchoring bias instead results from people's rational use of their finite time and limited cognitive resources. If this were true, then adjustment should decrease with the relative cost of time. To test this hypothesis, we designed a new numerical estimation paradigm that controls people's knowledge and varies the cost of time and error independently while allowing people to invest as much or as little time and effort into refining their estimate as they wish. Two experiments confirmed the prediction that adjustment decreases with time cost but increases with error cost regardless of whether the anchor was self-generated or provided. These results support the hypothesis that people rationally adapt their number of adjustments to achieve a near-optimal speed-accuracy tradeoff. This suggests that the anchoring bias might be a signature of the rational use of finite time and limited cognitive resources rather than a sign of human irrationality.

  8. The MAP Spacecraft Angular State Estimation After Sensor Failure

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2003-01-01

    This work describes two algorithms for computing the angular rate and attitude in case of a gyro and a Star Tracker failure in the Microwave Anisotropy Probe (MAP) satellite, which was placed in the L2 parking point from where it collects data to determine the origin of the universe. The nature of the problem is described, two algorithms are suggested, an observability study is carried out and real MAP data are used to determine the merit of the algorithms. It is shown that one of the algorithms yields a good estimate of the rates but not of the attitude whereas the other algorithm yields a good estimate of the rate as well as two of the three attitude angles. The estimation of the third angle depends on the initial state estimate. There is a contradiction between this result and the outcome of the observability analysis. An explanation of this contradiction is given in the paper. Although this work treats a particular spacecraft, the conclusions have a far reaching consequence.

  9. The Effect of Sensor Failure on the Attitude and Rate Estimation of MAP Spacecraft

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2003-01-01

    This work describes two algorithms for computing the angular rate and attitude in case of a gyro and a Star Tracker failure in the Microwave Anisotropy Probe (MAP) satellite, which was placed in the L2 parking point from where it collects data to determine the origin of the universe. The nature of the problem is described, two algorithms are suggested, an observability study is carried out and real MAP data are used to determine the merit of the algorithms. It is shown that one of the algorithms yields a good estimate of the rates but not of the attitude whereas the other algorithm yields a good estimate of the rate as well as two of the three attitude angles. The estimation of the third angle depends on the initial state estimate. There is a contradiction between this result and the outcome of the observability analysis. An explanation of this contradiction is given in the paper. Although this work treats a particular spacecraft, its conclusions are more general.

  10. Early adolescent adversity inflates threat estimation in females and promotes alcohol use initiation in both sexes.

    PubMed

    Walker, Rachel A; Andreansky, Christopher; Ray, Madelyn H; McDannald, Michael A

    2018-06-01

    Childhood adversity is associated with exaggerated threat processing and earlier alcohol use initiation. Conclusive links remain elusive, as childhood adversity typically co-occurs with detrimental socioeconomic factors, and its impact is likely moderated by biological sex. To unravel the complex relationships among childhood adversity, sex, threat estimation, and alcohol use initiation, we exposed female and male Long-Evans rats to early adolescent adversity (EAA). In adulthood, >50 days following the last adverse experience, threat estimation was assessed using a novel fear discrimination procedure in which cues predict a unique probability of footshock: danger (p = 1.00), uncertainty (p = .25), and safety (p = .00). Alcohol use initiation was assessed using voluntary access to 20% ethanol, >90 days following the last adverse experience. During development, EAA slowed body weight gain in both females and males. In adulthood, EAA selectively inflated female threat estimation, exaggerating fear to uncertainty and safety, but promoted alcohol use initiation across sexes. Meaningful relationships between threat estimation and alcohol use initiation were not observed, underscoring the independent effects of EAA. Results isolate the contribution of EAA to adult threat estimation, alcohol use initiation, and reveal moderation by biological sex. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  11. Length of stay and fat content of migrant semipalmated sandpipers in eastern Maine

    USGS Publications Warehouse

    Dunn, P.O.; May, T.A.; McCollough, M.A.; Howe, M.A.

    1988-01-01

    Semipalmated Sandpipers (Calidris pusilla) stop at coastal staging areas in the Canadian maritime provinces and northeastern United States to replenish fat reserves before initiating a nonstop transoceanic flight of at least 3,200 km to wintering areas in South America. The relationship between estimated fat content at capture and length of stay (days between marking and last observation) of Semipalmated Sandpipers at one of these staging areas in eastern Maine was studied during 1980-1982. Total body mass and wing chord length were used to estimate fat content. When data were analyzed by week of initial capture, mean length of stay of both adults and juveniles decreased with increasing fat content. This supports the assumption that resumption of migration is affected by fat content at staging areas for long-distance nonstop flights. However, fat content at capture was a poor predictor of length of stay, which suggests that other factors are more important in determining length of stay.

  12. Too Much of a Good Thing? Exploring the Impact of Wealth on Weight.

    PubMed

    Au, Nicole; Johnston, David W

    2015-11-01

    Obesity, like many health conditions, is more prevalent among the socioeconomically disadvantaged. In our data, very poor women are three times more likely to be obese and five times more likely to be severely obese than rich women. Despite this strong correlation, it remains unclear whether higher wealth causes lower obesity. In this paper, we use nationally representative panel data and exogenous wealth shocks (primarily inheritances and lottery wins) to shed light on this issue. Our estimates show that wealth improvements increase weight for women, but not men. This effect differs by initial wealth and weight: an average-sized wealth shock received by initially poor and obese women is estimated to increase weight by almost 10 lb. Importantly, for some females, the effects appear permanent. We also find that a change in diet is the most likely explanation for the weight gain. Overall, the results suggest that additional wealth may exacerbate rather than alleviate weight problems. Copyright © 2014 John Wiley & Sons, Ltd.

  13. Using Lidar and Radar measurements to constrain predictions of forest ecosystem structure and function.

    PubMed

    Antonarakis, Alexander S; Saatchi, Sassan S; Chazdon, Robin L; Moorcroft, Paul R

    2011-06-01

    Insights into vegetation and aboveground biomass dynamics within terrestrial ecosystems have come almost exclusively from ground-based forest inventories that are limited in their spatial extent. Lidar and synthetic-aperture Radar are promising remote-sensing-based techniques for obtaining comprehensive measurements of forest structure at regional to global scales. In this study we investigate how Lidar-derived forest heights and Radar-derived aboveground biomass can be used to constrain the dynamics of the ED2 terrestrial biosphere model. Four-year simulations initialized with Lidar and Radar structure variables were compared against simulations initialized from forest-inventory data and output from a long-term potential-vegetation simulation. Both height and biomass initializations from Lidar and Radar measurements significantly improved the representation of forest structure within the model, eliminating the bias of too many large trees that arose in the potential-vegetation-initialized simulation. The Lidar and Radar initializations decreased the proportion of larger trees estimated in the potential-vegetation simulation by approximately 20-30%, matching the forest inventory. This resulted in improved predictions of ecosystem-scale carbon fluxes and structural dynamics compared to predictions from the potential-vegetation simulation. The Radar initialization produced biomass values that were 75% closer to the forest inventory, with Lidar initializations producing canopy height values closest to the forest inventory. Net primary production values for the Radar and Lidar initializations were around 6-8% closer to the forest inventory. Correcting the Lidar and Radar initializations for forest composition resulted in improved biomass and basal-area dynamics as well as leaf-area index. Correcting the Lidar and Radar initializations for forest composition and fine-scale structure by combining the remote-sensing measurements with ground-based inventory data further improved predictions, suggesting that further improvements of structural and carbon-flux metrics will also depend on obtaining reliable estimates of forest composition and accurate representation of the fine-scale vertical and horizontal structure of plant canopies.

  14. Estimation of the standardized risk difference and ratio in a competing risks framework: application to injection drug use and progression to AIDS after initiation of antiretroviral therapy.

    PubMed

    Cole, Stephen R; Lau, Bryan; Eron, Joseph J; Brookhart, M Alan; Kitahata, Mari M; Martin, Jeffrey N; Mathews, William C; Mugavero, Michael J

    2015-02-15

    There are few published examples of absolute risk estimated from epidemiologic data subject to censoring and competing risks with adjustment for multiple confounders. We present an example estimating the effect of injection drug use on 6-year risk of acquired immunodeficiency syndrome (AIDS) after initiation of combination antiretroviral therapy between 1998 and 2012 in an 8-site US cohort study with death before AIDS as a competing risk. We estimate the risk standardized to the total study sample by combining inverse probability weights with the cumulative incidence function; estimates of precision are obtained by bootstrap. In 7,182 patients (83% male, 33% African American, median age of 38 years), we observed 6-year standardized AIDS risks of 16.75% among 1,143 injection drug users and 12.08% among 6,039 nonusers, yielding a standardized risk difference of 4.68 (95% confidence interval: 1.27, 8.08) and a standardized risk ratio of 1.39 (95% confidence interval: 1.12, 1.72). Results may be sensitive to the assumptions of exposure-version irrelevance, no measurement bias, and no unmeasured confounding. These limitations suggest that results be replicated with refined measurements of injection drug use. Nevertheless, estimating the standardized risk difference and ratio is straightforward, and injection drug use appears to increase the risk of AIDS. © The Author 2014. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
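    The estimator described (inverse-probability weights combined with a cumulative incidence function under competing risks) can be sketched in a few lines. The data, weights, and six-year horizon below are simulated stand-ins, not the cohort data; in the study, the weights would come from a confounder model for exposure and precision would come from the bootstrap.

```python
import numpy as np

# Sketch of a weighted cumulative incidence function for a competing-risks
# setting like the one above: event 1 = AIDS, event 2 = death before AIDS,
# 0 = censored. Times, event types, and IP weights are simulated here.

rng = np.random.default_rng(42)
n = 2000
t = rng.exponential(4.0, n)                      # follow-up times in years
event = rng.choice([0, 1, 2], size=n, p=[0.5, 0.3, 0.2])
w = rng.uniform(0.5, 2.0, n)                     # stand-in inverse-probability weights

def weighted_cif(t, event, w, cause=1, horizon=6.0):
    """Weighted Aalen-Johansen-style cumulative incidence for one cause."""
    order = np.argsort(t)
    t, event, w = t[order], event[order], w[order]
    at_risk = np.cumsum(w[::-1])[::-1]           # weighted number still at risk
    surv = 1.0                                   # weighted all-cause survival S(t-)
    cif = 0.0
    for ti, ei, wi, ri in zip(t, event, w, at_risk):
        if ti > horizon:
            break
        if ei != 0:
            if ei == cause:
                cif += surv * wi / ri            # S(t-) times cause-specific hazard
            surv *= 1.0 - wi / ri                # update all-cause survival
    return cif

print(f"6-year weighted risk of cause 1: {weighted_cif(t, event, w):.3f}")
```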

  15. Estimated reductions in provider-initiated preterm births and hospital length of stay under a universal acetylsalicylic acid prophylaxis strategy: a retrospective cohort study

    PubMed Central

    Ray, Joel G.; Bartsch, Emily; Park, Alison L.; Shah, Prakesh S.; Dzakpasu, Susie

    2017-01-01

    Background: Hypertensive disorders, especially preeclampsia, are the leading reason for provider-initiated preterm birth. We estimated how universal acetylsalicylic acid (ASA) prophylaxis might reduce rates of provider-initiated preterm birth associated with preeclampsia and intrauterine growth restriction, which are related conditions. Methods: We performed a cohort study of singleton hospital births in 2013 in Canada, excluding Quebec. We estimated the proportion of term births and provider-initiated preterm births affected by preeclampsia and/or intrauterine growth restriction, and the corresponding mean maternal and newborn hospital length of stay. We projected the potential number of cases reduced and corresponding hospital length of stay if ASA prophylaxis lowered cases of preeclampsia and intrauterine growth restriction by a relative risk reduction (RRR) of 10% (lowest) or 53% (highest), as suggested by randomized clinical trials. Results: Of the 269 303 singleton live births and stillbirths in our cohort, 4495 (1.7%) were provider-initiated preterm births. Of the 4495, 1512 (33.6%) had a diagnosis of preeclampsia and/or intrauterine growth restriction. The mean maternal length of stay was 2.0 (95% confidence interval [CI] 2.0-2.0) days among term births unaffected by either condition and 7.3 (95% CI 6.1-8.6) days among provider-initiated preterm births with both conditions. The corresponding values for mean newborn length of stay were 1.9 (95% CI 1.8-1.9) days and 21.8 (95% CI 17.4-26.2) days. If ASA conferred a 53% RRR against preeclampsia and/or intrauterine growth restriction, 3365 maternal and 11 591 newborn days in hospital would be averted. If ASA conferred a 10% RRR, 635 maternal and 2187 newborn days in hospital would be averted. Interpretation: A universal ASA prophylaxis strategy could substantially reduce the burden of long maternal and newborn hospital stays associated with provider-initiated preterm birth. However, until there is compelling evidence that administration of ASA to all, or most, pregnant women reduces the risk of preeclampsia and/or intrauterine growth restriction, clinicians should continue to follow current clinical practice guidelines. PMID:28646095

  16. Adolescent cortical thickness pre- and post marijuana and alcohol initiation.

    PubMed

    Jacobus, Joanna; Castro, Norma; Squeglia, Lindsay M; Meloy, M J; Brumback, Ty; Huestis, Marilyn A; Tapert, Susan F

    Cortical thickness abnormalities have been identified in youth using both alcohol and marijuana. However, limited studies have followed individuals pre- and post initiation of alcohol and marijuana use to help identify to what extent discrepancies in structural brain integrity are pre-existing or substance-related. Adolescents (N=69) were followed from ages 13 (pre-initiation of substance use, baseline) to ages 19 (post-initiation, follow-up). Three subgroups were identified, participants that initiated alcohol use (ALC, n=23, >20 alcohol use episodes), those that initiated both alcohol and marijuana use (ALC+MJ, n=23, >50 marijuana use episodes) and individuals that did not initiate either substance regularly by follow-up (CON, n=23, <3 alcohol use episodes, no marijuana use episodes). All adolescents underwent neurocognitive testing, neuroimaging, and substance use and mental health interviews. Significant group by time interactions and main effects on cortical thickness estimates were identified for 18 cortical regions spanning the left and right hemisphere (ps<0.05). The vast majority of findings suggest a more substantial decrease, or within-subjects effect, in cortical thickness by follow-up for individuals who have not initiated regular substance use or alcohol use only by age 19; modest between-group differences were identified at baseline in several cortical regions (ALC and CON>ALC+MJ). Minimal neurocognitive differences were observed in this sample. Findings suggest pre-existing neural differences prior to marijuana use may contribute to initiation of use and observed neural outcomes. Marijuana use may also interfere with thinning trajectories that contribute to morphological differences in young adulthood that are often observed in cross-sectional studies of heavy marijuana users. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Adolescent Cortical Thickness Pre- and Post Marijuana and Alcohol Initiation

    PubMed Central

    Jacobus, Joanna; Castro, Norma; Squeglia, Lindsay M.; Meloy, M.J.; Brumback, Ty; Huestis, Marilyn; Tapert, Susan F.

    2016-01-01

    Cortical thickness abnormalities have been identified in youth using both alcohol and marijuana. However, limited studies have followed individuals pre- and post initiation of alcohol and marijuana use to help identify to what extent discrepancies in structural brain integrity are pre-existing or substance-related. Adolescents (N=69) were followed from ages 13 (pre-initiation of substance use, baseline) to ages 19 (post-initiation, follow-up). Three subgroups were identified, participants that initiated alcohol use (ALC, n=23, >20 alcohol use episodes), those that initiated both alcohol and marijuana use (ALC+MJ, n=23, >50 marijuana use episodes) and individuals that did not initiate either substance regularly by follow-up (CON, n=23, <3 alcohol use episodes, no marijuana use episodes). All adolescents underwent neurocognitive testing, neuroimaging, and substance use and mental health interviews. Significant group by time interactions and main effects on cortical thickness estimates were identified for 18 cortical regions spanning the left and right hemisphere (ps<.05). The vast majority of findings suggest a more substantial decrease, or within-subjects effect, in cortical thickness by follow-up for individuals who have not initiated regular substance use or alcohol use only by age 19; modest between-group differences were identified at baseline in several cortical regions (ALC and CON>ALC+MJ). Minimal neurocognitive differences were observed in this sample. Findings suggest pre-existing neural differences prior to marijuana use may contribute to initiation of use and observed neural outcomes. Marijuana use may also interfere with thinning trajectories that contribute to morphological differences in young adulthood that are often observed in cross-sectional studies of heavy marijuana users. PMID:27687470

  18. Measuring the willingness to pay user fees for interpretive services at a national forest

    NASA Astrophysics Data System (ADS)

    Goldhor-Wilcock, Barbara Ashley

    An understanding of willingness to pay (WTP) for nonmarket environmental goods is useful for planning and policy, but difficult to determine. WTP for interpretive services was investigated using interviews with 361 participants in guided nature tours. Immediately after the tour, participants were asked to state their WTP for the tour. Responses were predominantly $5 (42%), $2 (14%), and $10 (13%). A predetermined amount was added to the open-ended (OE) WTP offer and respondents were asked if they were willing to pay a larger amount. Acceptance of the larger amount depended strongly on the relative increase over the initial WTP. If the increase was smaller than the initial offer, most respondents agreed, whereas if the increment was larger, most did not agree, suggesting that the initial offer was approximately half of the true WTP. The two WTP questions were used to define lower and upper bounds for each respondent's true WTP. A censored interval regression was used to estimate a WTP distribution with mean $11.30 and median $10.00. The median is twice that of the OE WTP, further suggesting that the OE response understated value by 50 percent. The estimated true WTP distribution and the OE WTP distribution have a weak, but statistically significant, dependence on some demographic, travel, and benefit variables, although these relations have negligible practical significance over the observed range of the variables. To evaluate whether the WTP amounts were based on a true economic tradeoff, respondents were asked to explain their WTP responses. For the initial OE question, 38% gave explanations that could be interpreted as an economic tradeoff, whereas 33% gave reasons that were clearly irrelevant. For the second, dichotomous choice (DC), question, 59% gave reasons suggesting a relevant economic judgement. A DC question may provoke apparently relevant answers, regardless of the underlying reasoning (a majority simply said "it was (not) worth it"). The DC reasoning may also be influenced by the preceding OE question, which provides a comparative base. Combining OE and DC questions in a single survey may encourage relevant reasoning, while also helping to identify the true WTP and consumer surplus.
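    A minimal sketch of a censored interval regression of this kind, assuming a normal latent WTP and fabricated lower/upper bounds, is given below; it simply maximizes the probability that each respondent's true WTP lies inside the bracket defined by their two answers.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Interval-censored ("censored interval") regression sketch: each respondent's
# true WTP is only known to lie between a lower and an upper bound. The bounds
# below are fabricated and the latent distribution is assumed normal.

lower = np.array([5, 5, 2, 10, 5, 2, 10, 5], dtype=float)    # e.g., initial OE offer
upper = np.array([10, 8, 4, 20, 7, 5, 15, 12], dtype=float)  # e.g., rejected DC amount

def negloglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    # Probability that the latent WTP falls inside each respondent's interval.
    p = norm.cdf((upper - mu) / sigma) - norm.cdf((lower - mu) / sigma)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

fit = minimize(negloglik, x0=[8.0, np.log(3.0)], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"estimated mean WTP ~ ${mu_hat:.2f}, sd ~ ${sigma_hat:.2f}")
```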

  19. Initial mass functions from ultraviolet stellar photometry: A comparison of Lucke and Hodge OB associations near 30 Doradus with the nearby field

    NASA Technical Reports Server (NTRS)

    Hill, Jesse K.; Isensee, Joan E.; Cornett, Robert H.; Bohlin, Ralph C.; O'Connell, Robert W.; Roberts, Morton S.; Smith, Andrew M.; Stecher, Theodore P.

    1994-01-01

    UV stellar photometry is presented for 1563 stars within a 40 arcmin circular field in the Large Magellanic Cloud (LMC), excluding the 10 arcmin x 10 arcmin field centered on R136 investigated earlier by Hill et al. (1993). Magnitudes are computed from images obtained by the Ultraviolet Imaging Telescope (UIT) in bands centered at 1615 A and 2558 A. Stellar masses and extinctions are estimated for the stars in associations using the evolutionary models of Schaerer et al. (1993), assuming the age is 4 Myr and that the local LMC extinction follows the Fitzpatrick (1985) 30 Dor extinction curve. The estimated slope of the initial mass function (IMF) for massive stars (greater than 15 solar masses) within the Lucke and Hodge (LH) associations is Gamma = -1.08 ± 0.2. Initial masses and extinctions for stars not within LH associations are estimated assuming that the stellar age is either 4 Myr or half the stellar lifetime, whichever is larger. The estimated slope of the IMF for massive stars not within LH associations is Gamma = -1.74 ± 0.3 (assuming continuous star formation), compared with Gamma = -1.35 and Gamma = -1.7 ± 0.5 obtained for the Galaxy by Salpeter (1955) and Scalo (1986), respectively, and Gamma = -1.6 obtained for massive stars in the Galaxy by Garmany, Conti, & Chiosi (1982). The shallower slope of the association IMF suggests that not only is the star formation rate higher in associations, but that the local conditions favor the formation of higher mass stars there. We make no corrections for binaries or incompleteness.

  20. Committed sea-level rise for the next century from Greenland ice sheet dynamics during the past decade

    PubMed Central

    Price, Stephen F.; Payne, Antony J.; Howat, Ian M.; Smith, Benjamin E.

    2011-01-01

    We use a three-dimensional, higher-order ice flow model and a realistic initial condition to simulate dynamic perturbations to the Greenland ice sheet during the last decade and to assess their contribution to sea level by 2100. Starting from our initial condition, we apply a time series of observationally constrained dynamic perturbations at the marine termini of Greenland’s three largest outlet glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier. The initial and long-term diffusive thinning within each glacier catchment is then integrated spatially and temporally to calculate a minimum sea-level contribution of approximately 1 ± 0.4 mm from these three glaciers by 2100. Based on scaling arguments, we extend our modeling to all of Greenland and estimate a minimum dynamic sea-level contribution of approximately 6 ± 2 mm by 2100. This estimate of committed sea-level rise is a minimum because it ignores mass loss due to future changes in ice sheet dynamics or surface mass balance. Importantly, > 75% of this value is from the long-term, diffusive response of the ice sheet, suggesting that the majority of sea-level rise from Greenland dynamics during the past decade is yet to come. Assuming similar and recurring forcing in future decades and a self-similar ice dynamical response, we estimate an upper bound of 45 mm of sea-level rise from Greenland dynamics by 2100. These estimates are constrained by recent observations of dynamic mass loss in Greenland and by realistic model behavior that accounts for both the long-term cumulative mass loss and its decay following episodic boundary forcing. PMID:21576500

  1. Melting in Superheated Silicon Films Under Pulsed-Laser Irradiation

    NASA Astrophysics Data System (ADS)

    Wang, Jin Jimmy

    This thesis examines melting in superheated silicon films in contact with SiO2 under pulsed laser irradiation. An excimer-laser pulse was employed to induce heating of the film by irradiating the film through the transparent fused-quartz substrate such that most of the beam energy was deposited near the bottom Si-SiO2 interface. Melting dynamics were probed via in situ transient reflectance measurements. The temperature profile was estimated computationally by incorporating temperature- and phase-dependent physical parameters and the time-dependent intensity profile of the incident excimer-laser beam obtained from the experiments. The results indicate that a significant degree of superheating occurred in the subsurface region of the film. Surface-initiated melting was observed in spite of the internal heating scheme, which resulted in the film being substantially hotter at and near the bottom Si-SiO2 interface. By considering that the surface melts at the equilibrium melting point, the solid-phase-only heat-flow analysis estimates that the bottom Si-SiO2 interface can be superheated by at least 220 K during excimer-laser irradiation. It was found that at higher laser fluences (i.e., at higher temperatures), melting can be triggered internally. At heating rates of 10^10 K/s, melting was observed to initiate at or near the (100)-oriented Si-SiO2 interface at temperatures estimated to be over 300 K above the equilibrium melting point. Based on theoretical considerations, it was deduced that melting in the superheated solid initiated via a nucleation and growth process. Nucleation rates were estimated from the experimental data using Johnson-Mehl-Avrami-Kolmogorov (JMAK) analysis. Interpretation of the results using classical nucleation theory suggests that nucleation of the liquid phase occurred via the heterogeneous mechanism along the Si-SiO2 interface.
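    An illustrative JMAK (Avrami) fit of the sort used for the nucleation-rate estimates is sketched below. The molten-fraction data are synthetic, generated from the model itself with a hypothetical rate constant, so the fit only demonstrates the linearization, not the thesis results.

```python
import numpy as np

# Illustrative JMAK (Avrami) analysis: extract the Avrami exponent and rate
# constant from an evolving transformed (molten) fraction X(t). The data are
# synthetic, generated from the model with hypothetical parameters.

rng = np.random.default_rng(0)
t = np.linspace(2e-9, 12e-9, 6)                  # times after melt onset (s)
k_true, n_true = 1.5e8, 4.0                      # hypothetical rate constant and exponent
X = 1.0 - np.exp(-(k_true * t) ** n_true)
X = np.clip(X + rng.normal(0, 0.005, t.size), 1e-4, 1 - 1e-4)

# Linearized JMAK: X = 1 - exp(-(k t)^n)  =>  ln(-ln(1 - X)) = n ln t + n ln k
y = np.log(-np.log(1.0 - X))
n_fit, intercept = np.polyfit(np.log(t), y, 1)
k_fit = np.exp(intercept / n_fit)
print(f"recovered n ~ {n_fit:.2f}, k ~ {k_fit:.2e} 1/s")

# For 3-D growth at a constant nucleation rate I and interface speed G,
# X = 1 - exp(-(pi/3) * I * G**3 * t**4); given an independent estimate of G,
# the fitted prefactor then yields the nucleation rate I.
```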

  2. Committed sea-level rise for the next century from Greenland ice sheet dynamics during the past decade.

    PubMed

    Price, Stephen F; Payne, Antony J; Howat, Ian M; Smith, Benjamin E

    2011-05-31

    We use a three-dimensional, higher-order ice flow model and a realistic initial condition to simulate dynamic perturbations to the Greenland ice sheet during the last decade and to assess their contribution to sea level by 2100. Starting from our initial condition, we apply a time series of observationally constrained dynamic perturbations at the marine termini of Greenland's three largest outlet glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier. The initial and long-term diffusive thinning within each glacier catchment is then integrated spatially and temporally to calculate a minimum sea-level contribution of approximately 1 ± 0.4 mm from these three glaciers by 2100. Based on scaling arguments, we extend our modeling to all of Greenland and estimate a minimum dynamic sea-level contribution of approximately 6 ± 2 mm by 2100. This estimate of committed sea-level rise is a minimum because it ignores mass loss due to future changes in ice sheet dynamics or surface mass balance. Importantly, > 75% of this value is from the long-term, diffusive response of the ice sheet, suggesting that the majority of sea-level rise from Greenland dynamics during the past decade is yet to come. Assuming similar and recurring forcing in future decades and a self-similar ice dynamical response, we estimate an upper bound of 45 mm of sea-level rise from Greenland dynamics by 2100. These estimates are constrained by recent observations of dynamic mass loss in Greenland and by realistic model behavior that accounts for both the long-term cumulative mass loss and its decay following episodic boundary forcing.

  3. Evaluating outcomes of management targeting the recovery of a migratory songbird of conservation concern.

    PubMed

    Streby, Henry M; Kramer, Gunnar R; Peterson, Sean M; Andersen, David E

    2018-01-01

    Assessing outcomes of habitat management is critical for informing and adapting conservation plans. From 2013-2019, a multi-stage management initiative, led by the American Bird Conservancy (ABC), aims to create >25,000 ha of shrubland and early-successional vegetation to benefit Golden-winged Warblers (Vermivora chrysoptera) in managed forested landscapes of the western Great Lakes region. We studied a dense breeding population of Golden-winged Warblers at Rice Lake National Wildlife Refuge (NWR) in Minnesota, USA, where ABC initiative management was implemented to benefit the species. We monitored abundance before (2011-2014) and after (2015-2016) management, and we estimated full-season productivity (i.e., young recruited into the fall population) from predictive, spatially explicit models, informed by nest and fledgling survival data collected at sites in the western Great Lakes region, including Rice Lake NWR, during 2011 and 2012. Then, using biologically informed models of bird response to observed and predicted vegetation succession, we estimated the cumulative change in population recruitment over various scenarios of vegetation succession and demographic response. We observed a 32% decline in abundance of breeding pairs and estimated a 27% decline in per-pair full-season productivity following management, compared to no change in a nearby control site. In models that ranged from highly optimistic to progressively more realistic scenarios, we estimated a net loss of 72-460 juvenile Golden-winged Warblers produced from the managed site in the 10-20 years following management. Even if our well-informed and locally validated productivity models produced erroneous estimates and the management resulted in only a temporary reduction in abundance (i.e., no change in productivity), our forecast models still predicted a net loss of 137-260 juvenile Golden-winged Warblers from the managed area over the same time frame. Our study site represents only a small portion of a massive management initiative; however, the management at our site was conducted in accordance with the initiative's management plans, the resulting vegetation structure is consistent with that of other areas managed under the initiative, and those responsible for the initiative have described the management at our study site as successful Golden-winged Warbler management. Our assessment demonstrates that, at least for the only site for which pre- and post-management data on Golden-winged Warblers exist, the ABC management initiative is having a substantial and likely enduring negative impact on the species it purports to benefit. We suggest that incorporating region-specific, empirical information about Golden-winged Warbler-habitat relations into habitat management efforts would increase the likelihood of a positive response by Golden-winged Warblers.

  4. Possible role of electric forces in bromine activation during polar boundary layer ozone depletion and aerosol formation events

    NASA Astrophysics Data System (ADS)

    Tkachenko, Ekaterina

    2017-11-01

    This work presents a hypothesis about the mechanism of bromine activation during polar boundary layer ozone depletion events (ODEs) as well as the mechanism of aerosol formation from frost flowers. The author suggests that ODEs may be initiated by the electric-field gradients created at the sharp tips of ice formations as a result of the combined effect of various environmental conditions. According to the author's estimates, these electric-field gradients may be sufficient for the onset of point or corona discharges followed by generation of high local concentrations of reactive oxygen species and initiation of free-radical and redox reactions. This process may be responsible for the formation of seed bromine, which then undergoes further amplification by HOBr-driven bromine explosion. The proposed hypothesis may explain a variety of environmental conditions and substrates as well as the poor reproducibility of ODE initiation observed by researchers in the field. According to the author's estimates, high wind can generate sufficient conditions for overcoming the Rayleigh limit and thus can initiate "spraying" of charged aerosol nanoparticles. These charged aerosol nanoparticles can provoke formation of free radicals, turning the ODE on. One can also envision a possible emission of halogen ions as a result of the "electrospray" process, analogous to that of electrospray ionization mass spectrometry.

  5. GFR at Initiation of Dialysis and Mortality in CKD: A Meta-analysis

    PubMed Central

    Susantitaphong, Paweena; Altamimi, Sarah; Ashkar, Motaz; Balk, Ethan M.; Stel, Vianda S.; Wright, Seth; Jaber, Bertrand L.

    2012-01-01

    Background: The proportion of patients with advanced chronic kidney disease (CKD) initiating dialysis at higher glomerular filtration rate (GFR) has increased over the past decade. Recent data suggest that higher GFR may be associated with increased mortality. Study Design: A meta-analysis of cohort studies and trials. Setting & Population: Patients with advanced CKD. Selection Criteria for Studies: We performed a systematic literature search in MEDLINE, Cochrane Central Register of Controlled Trials, ClinicalTrials.gov, American Society of Nephrology abstracts, and bibliographies of retrieved articles to identify studies reporting on GFR at dialysis initiation and mortality. Predictor: Estimated or calculated GFR at dialysis initiation. Outcome: Pooled adjusted hazard ratio (HR) of continuous GFR for all-cause mortality. Results: Sixteen cohort studies and one randomized controlled trial were identified (n=1,081,116). By meta-analysis, restricted to the 15 cohorts (n=1,079,917), higher GFR at dialysis initiation was associated with a higher pooled adjusted HR for all-cause mortality (1.04; 95% CI, 1.03–1.05; P<0.001). However, there was significant heterogeneity (I²=97%; P<0.001). The association persisted among the 9 cohorts that adjusted analytically for nutritional covariates (HR 1.03; 95% CI 1.02, 1.04; P<0.001; residual I²=97%). The highest mortality risk was observed in hemodialysis cohorts (HR 1.05; 95% CI 1.02, 1.08; P<0.001) whereas there was no association between GFR and mortality in peritoneal dialysis cohorts (HR 1.04; 95% CI 0.99, 1.08, P=0.11; residual I²=98%). Finally, higher GFR was associated with a lower mortality risk in cohorts that calculated GFR (HR 0.80; 95% CI 0.71, 0.91; P=0.003), contrasting with a higher mortality risk in cohorts that estimated GFR (HR 1.04; 95% CI 1.03, 1.05; P<0.001; residual I²=97%). Limitations: Paucity of randomized controlled trials; different methods for determining GFR; and substantial heterogeneity. Conclusions: Higher estimated rather than calculated GFR at dialysis initiation is associated with a higher mortality risk among patients with advanced CKD, independent of nutritional status. Although there was substantial heterogeneity of effect size estimates across studies, this observation requires further study. PMID:22465328
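    For readers unfamiliar with the pooling step, a minimal DerSimonian-Laird random-effects sketch is shown below; the hazard ratios and confidence intervals are hypothetical stand-ins for the study-level estimates, not the cohorts analyzed above.

```python
import numpy as np

# Random-effects pooling of log hazard ratios (DerSimonian-Laird), the kind of
# calculation behind a pooled HR and I^2. Inputs below are hypothetical.

hr = np.array([1.05, 1.02, 1.08, 0.98, 1.04])
ci_lo = np.array([1.01, 0.99, 1.03, 0.93, 1.02])
ci_hi = np.array([1.09, 1.05, 1.13, 1.03, 1.06])

y = np.log(hr)                                    # study effects on the log scale
se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96) # back out standard errors from the CIs
w = 1.0 / se**2                                   # fixed-effect (inverse-variance) weights

# Heterogeneity: Q statistic, I^2, and the DerSimonian-Laird tau^2.
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
df = len(y) - 1
I2 = max(0.0, (Q - df) / Q) * 100
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"pooled HR = {np.exp(y_re):.3f} "
      f"(95% CI {np.exp(y_re - 1.96*se_re):.3f}, {np.exp(y_re + 1.96*se_re):.3f}); "
      f"I^2 = {I2:.0f}%")
```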

  6. Training to estimate blood glucose and to form associations with initial hunger

    PubMed Central

    Ciampolini, Mario; Bianchi, Riccardo

    2006-01-01

    Background: The will to eat is a decision associated with conditioned responses and with unconditioned body sensations that reflect changes in metabolic biomarkers. Here, we investigate whether this decision can be delayed until blood glucose is allowed to fall to low levels, when presumably feeding behavior is mostly unconditioned. Following such an eating pattern might avoid some of the metabolic risk factors that are associated with high glycemia. Results: In this 7-week study, patients were trained to estimate their blood glucose at meal times by associating feelings of hunger with glycemic levels determined by standard blood glucose monitors and to eat only when glycemia was < 85 mg/dL. At the end of the 7-week training period, estimated and measured glycemic values were found to be linearly correlated in the trained group (r = 0.82; p = 0.0001) but not in the control (untrained) group (r = 0.10; p = 0.40). Fewer subjects in the trained group were hungry than those in the control group (p = 0.001). The 18 hungry subjects of the trained group had significantly lower glucose levels (80.1 ± 6.3 mg/dL) than the 42 hungry control subjects (89.2 ± 10.2 mg/dL; p = 0.01). Moreover, the trained hungry subjects estimated their glycemia (78.1 ± 6.7 mg/dL; estimation error: 3.2 ± 2.4% of the measured glycemia) more accurately than the control hungry subjects (75.9 ± 9.8 mg/dL; estimation error: 16.7 ± 11.0%; p = 0.0001). Also the estimation error of the entire trained group (4.7 ± 3.6%) was significantly lower than that of the control group (17.1 ± 11.5%; p = 0.0001). A value of glycemia at initial feelings of hunger was provisionally identified as 87 mg/dL. Below this level, estimation showed lower error in both trained (p = 0.04) and control subjects (p = 0.001). Conclusion: Subjects could be trained to accurately estimate their blood glucose and to recognize their sensations of initial hunger at low glucose concentrations. These results suggest that it is possible to make a behavioral distinction between unconditioned and conditioned hunger, and to achieve a cognitive will to eat by training. PMID:17156448

  7. Gambling with stimulus payments: feeding gaming machines with federal dollars.

    PubMed

    Lye, Jenny; Hirschberg, Joe

    2014-09-01

    In late 2008 and early 2009 the Australian Federal Government introduced a series of economic stimulus packages designed to maintain consumer spending in the early days of the Great Recession. When these packages were initiated, the media suggested that the widespread availability of electronic gaming machines (EGMs, e.g., slot machines, poker machines, video lottery terminals) in Australia would result in much of the stimulus being fed into the EGMs. Using state-level monthly data, we estimate that the stimulus packages led to an increase of 26% in EGM revenues. This also resulted in over $60 million in additional tax revenue for State Governments. We also estimate the short-run aggregate income elasticity of demand for EGMs to be approximately 2.
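
    For orientation, the short-run income elasticity of demand reported above is the ratio of the proportional change in EGM expenditure to the proportional change in income; the interpretation below is standard and not a restatement of the authors' estimating equation:

      \eta_Y = \frac{\partial Q / Q}{\partial Y / Y} \approx \frac{\%\Delta Q}{\%\Delta Y}

    An elasticity of roughly 2 therefore implies that a 1% rise in income is associated with about a 2% rise in EGM revenue in the short run.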

  8. Co-seismic Static Stress Drops for Earthquake Ruptures Nucleated on Faults After Progressive Strain Localization

    NASA Astrophysics Data System (ADS)

    Griffith, W. A.; Nielsen, S.; di Toro, G.; Pollard, D. D.; Pennacchioni, G.

    2007-12-01

    We estimate the coseismic static stress drop on small exhumed strike-slip faults in the Mt. Abbot quadrangle of the central Sierra Nevada (California). The sub-vertical strike-slip faults cut ~85 Ma granodiorite, were exhumed from 7-10 km depth, and were chosen because they are exposed along their entire lengths, ranging from 8 to 13 m. Net slip is estimated using offset aplite dikes and shallowly plunging slickenlines on the fault surfaces. The faults show a record of progressive strain localization: slip initially nucleated on joints and accumulated from ductile shearing (quartz-bearing mylonites) to brittle slipping (epidote-bearing cataclasites). Thin (< 1 mm) pseudotachylytes associated with the cataclasites have been identified along some faults, suggesting that brittle slip may have been seismic. The brittle contribution to slip may be distinguished from the ductile shearing because epidote-filled, rhombohedral dilational jogs opened at bends and step-overs during brittle slip, are distributed periodically along the length of the faults. We argue that brittle slip occurred along the measured fault lengths in single slip events based on several pieces of evidence. 1) Epidote crystals are randomly oriented and undeformed within dilational jogs, indicating they did not grow during aseismic slip and were not broken after initial opening and precipitation. 2) Opening-mode splay cracks are concentrated near fault tips rather than the fault center, suggesting that the reactivated faults ruptured all at once rather than in smaller slip patches. 3) The fact that the opening lengths of the dilational jogs vary systematically along the fault traces suggests that brittle reactivation occurred in a single slip event along the entire fault rather than in multiple slip events. This unique combination of factors distinguishes this study from previous attempts to estimate stress drop from exhumed faults because we can constrain the coseismic rupture length and slip. The static stress drop is calculated for a circular fault using the length of the mapped faults and their slip distributions as well as the shear modulus of the host granodiorite measured in the laboratory. Calculations yield stress drops on the order of 100-200 MPa, one to two orders of magnitude larger than typical seismological estimates. The studied seismic ruptures occurred along small, deep-seated faults (10 km depth), and, given the fault mineral filling (quartz-bearing mylonites) these were "strong" faults. Our estimates are consistent with static stress drops estimated by Nadeau and Johnson (1998) for small repeated earthquakes.
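
    The static stress drop for a circular fault referred to above is conventionally obtained from the Eshelby crack solution; assuming the authors used this standard relation (the abstract does not state the exact formula), with μ the shear modulus, D̄ the mean slip, and a the rupture radius:

      \Delta\sigma = \frac{7\pi}{16}\,\mu\,\frac{\bar{D}}{a}

    As an illustrative (hypothetical) order-of-magnitude check, μ ≈ 30 GPa, D̄ ≈ 3 cm, and a ≈ 6 m give Δσ ≈ 1.37 × 3×10¹⁰ × 0.005 ≈ 2×10⁸ Pa ≈ 200 MPa, the order reported above.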

  9. Inferring the source of evaporated waters using stable H and O isotopes

    NASA Astrophysics Data System (ADS)

    Bowen, G. J.; Putman, A.; Brooks, J. R.; Bowling, D. R.; Oerter, E.; Good, S. P.

    2017-12-01

    Stable isotope ratios of H and O are widely used to identify the source of water, e.g., in aquifers, river runoff, soils, plant xylem, and plant-based beverages. In situations where the sampled water is partially evaporated, its isotope values will have evolved along an evaporation line (EL) in δ2H/δ18O space, and back-correction along the EL to its intersection with a meteoric water line (MWL) has been used to estimate the source water's isotope ratios. Several challenges and potential pitfalls exist with traditional approaches to this problem, including potential for bias from a commonly used regression-based approach for EL slope estimation and incomplete estimation of uncertainty in most studies. We suggest the value of a model-based approach to EL estimation, and introduce a mathematical framework that eliminates the need to explicitly estimate the EL-MWL intersection, simplifying analysis and facilitating more rigorous uncertainty estimation. We apply this analysis framework to data from 1,000 lakes sampled in EPA's 2007 National Lakes Assessment. We find that data for most lakes are consistent with a water source similar to annual runoff, estimated from monthly precipitation and evaporation within the lake basin. Strong evidence for both summer- and winter-biased sources exists, however, with winter bias pervasive in most snow-prone regions. The new analytical framework should improve the rigor of source-water inference from evaporated samples in ecohydrology and related sciences, and our initial results from U.S. lakes suggest that previous interpretations of lakes as unbiased isotope integrators may only be valid in certain climate regimes.
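
    The traditional back-correction the authors refer to amounts to intersecting the evaporation line with the meteoric water line in δ²H/δ¹⁸O space. With S and b denoting slopes and intercepts (the global meteoric water line values S = 8, b = 10 are the conventional defaults, not necessarily those used in the study), the source composition is recovered as:

      \delta^{18}O_{source} = \frac{b_{MWL} - b_{EL}}{S_{EL} - S_{MWL}}, \qquad \delta^{2}H_{source} = S_{MWL}\,\delta^{18}O_{source} + b_{MWL}

    The framework introduced in the paper avoids computing this intersection explicitly, which is part of what simplifies the uncertainty analysis.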

  10. Source properties of earthquakes near the Salton Sea triggered by the 16 October 1999 M 7.1 Hector Mine, California, earthquake

    USGS Publications Warehouse

    Hough, S.E.; Kanamori, H.

    2002-01-01

    We analyze the source properties of a sequence of triggered earthquakes that occurred near the Salton Sea in southern California in the immediate aftermath of the M 7.1 Hector Mine earthquake of 16 October 1999. The sequence produced a number of early events that were not initially located by the regional network, including two moderate earthquakes: the first within 30 sec of the P-wave arrival and a second approximately 10 minutes after the mainshock. We use available amplitude and waveform data from these events to estimate magnitudes to be approximately 4.7 and 4.4, respectively, and to obtain crude estimates of their locations. The sequence of small events following the initial M 4.7 earthquake is clustered and suggestive of a local aftershock sequence. Using both broadband TriNet data and analog data from the Southern California Seismic Network (SCSN), we also investigate the spectral characteristics of the M 4.4 event and other triggered earthquakes using empirical Green's function (EGF) analysis. We find that the source spectra of the events are consistent with expectations for tectonic (brittle shear failure) earthquakes, and infer stress drop values of 0.1 to 6 MPa for six M 2.1 to M 4.4 events. The estimated stress drop values are within the range observed for tectonic earthquakes elsewhere. They are relatively low compared to typically observed stress drop values, which is consistent with expectations for faulting in an extensional, high heat flow regime. The results therefore suggest that, at least in this case, triggered earthquakes are associated with a brittle shear failure mechanism. This further suggests that triggered earthquakes may tend to occur in geothermal-volcanic regions because shear failure occurs at, and can be triggered by, relatively low stresses in extensional regimes.

  11. Estimation of Soil-Water Characteristic Curves in Multiple-Cycles Using Membrane and TDR System

    PubMed Central

    Hong, Won-Taek; Jung, Young-Seok; Kang, Seonghun; Lee, Jong-Sub

    2016-01-01

    The objective of this study is to estimate multiple-cycles of the soil-water characteristic curve (SWCC) using an innovative volumetric pressure plate extractor (VPPE), which is incorporated with a membrane and time domain reflectometry (TDR). The pressure cell includes the membrane to reduce the experimental time and the TDR probe to automatically estimate the volumetric water content. For the estimation of SWCC using the VPPE system, four specimens with different grain size and void ratio are prepared. The volumetric water contents of the specimens according to the matric suction are measured by the burette system and are estimated in the TDR system during five cycles of SWCC tests. The volumetric water contents estimated by the TDR system are almost identical to those determined by the burette system. The experimental time significantly decreases with the new VPPE. The hysteresis in the SWCC is largest in the first cycle and is nearly identical after 1.5 cycles. As the initial void ratio decreases, the air entry value increases. This study suggests that the new VPPE may effectively estimate multiple-cycles of the SWCC of unsaturated soils. PMID:28774139
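
    TDR systems estimate volumetric water content from the measured apparent dielectric permittivity K_a. As an illustration of how that conversion is typically done, the widely used Topp et al. (1980) empirical calibration is shown below; whether this study used the Topp calibration or a soil-specific one is an assumption here:

      \theta = -5.3\times10^{-2} + 2.92\times10^{-2}\,K_a - 5.5\times10^{-4}\,K_a^{2} + 4.3\times10^{-6}\,K_a^{3}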

  12. Estimation of fish biomass using environmental DNA.

    PubMed

    Takahara, Teruhiko; Minamoto, Toshifumi; Yamanaka, Hiroki; Doi, Hideyuki; Kawabata, Zen'ichiro

    2012-01-01

    Environmental DNA (eDNA) from aquatic vertebrates has recently been used to estimate the presence of a species. We hypothesized that fish release DNA into the water at a rate commensurate with their biomass. Thus, the concentration of eDNA of a target species may be used to estimate the species biomass. We developed an eDNA method to estimate the biomass of common carp (Cyprinus carpio L.) using laboratory and field experiments. In the aquarium, the concentration of eDNA changed initially, but reached an equilibrium after 6 days. Temperature had no effect on eDNA concentrations in aquaria. The concentration of eDNA was positively correlated with carp biomass in both aquaria and experimental ponds. We used this method to estimate the biomass and distribution of carp in a natural freshwater lagoon. We demonstrated that the distribution of carp eDNA concentration was explained by water temperature. Our results suggest that biomass data estimated from eDNA concentration reflects the potential distribution of common carp in the natural environment. Measuring eDNA concentration offers a non-invasive, simple, and rapid method for estimating biomass. This method could inform management plans for the conservation of ecosystems.
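
    A minimal sketch of the calibration-and-inversion workflow the abstract implies — fit eDNA concentration against known biomass in aquaria or ponds, then invert the fit to predict biomass from a field eDNA measurement. The numbers below are placeholders, not data from the study.

      import numpy as np

      # Hypothetical calibration data: carp biomass (g) vs eDNA concentration (copies/L)
      biomass = np.array([100.0, 250.0, 500.0, 1000.0, 2000.0])
      edna = np.array([1.2e3, 3.1e3, 6.0e3, 1.3e4, 2.4e4])

      # Ordinary least-squares fit: eDNA = a * biomass + b
      a, b = np.polyfit(biomass, edna, 1)

      def biomass_from_edna(concentration):
          """Invert the linear calibration to estimate biomass from an eDNA measurement."""
          return (concentration - b) / a

      print(round(biomass_from_edna(8.0e3), 1))  # estimated biomass for a field sample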

  13. Estimation of Fish Biomass Using Environmental DNA

    PubMed Central

    Takahara, Teruhiko; Minamoto, Toshifumi; Yamanaka, Hiroki; Doi, Hideyuki; Kawabata, Zen'ichiro

    2012-01-01

    Environmental DNA (eDNA) from aquatic vertebrates has recently been used to estimate the presence of a species. We hypothesized that fish release DNA into the water at a rate commensurate with their biomass. Thus, the concentration of eDNA of a target species may be used to estimate the species biomass. We developed an eDNA method to estimate the biomass of common carp (Cyprinus carpio L.) using laboratory and field experiments. In the aquarium, the concentration of eDNA changed initially, but reached an equilibrium after 6 days. Temperature had no effect on eDNA concentrations in aquaria. The concentration of eDNA was positively correlated with carp biomass in both aquaria and experimental ponds. We used this method to estimate the biomass and distribution of carp in a natural freshwater lagoon. We demonstrated that the distribution of carp eDNA concentration was explained by water temperature. Our results suggest that biomass data estimated from eDNA concentration reflects the potential distribution of common carp in the natural environment. Measuring eDNA concentration offers a non-invasive, simple, and rapid method for estimating biomass. This method could inform management plans for the conservation of ecosystems. PMID:22563411

  14. Maximum current density and beam brightness achievable by laser-driven electron sources

    NASA Astrophysics Data System (ADS)

    Filippetto, D.; Musumeci, P.; Zolotorev, M.; Stupakov, G.

    2014-02-01

    This paper discusses the extension of the Child-Langmuir law for the maximum achievable current density in electron guns to different electron beam aspect ratios. Using a simple model, we derive quantitative formulas in good agreement with simulation codes. The new scaling laws for the peak current density of temporally long and transversely narrow initial beam distributions can be used to estimate the maximum beam brightness and suggest new paths for injector optimization.
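
    For reference, the classical one-dimensional Child-Langmuir law that the paper generalizes gives the space-charge-limited current density across a planar gap of width d with applied voltage V:

      J_{CL} = \frac{4\,\epsilon_0}{9}\sqrt{\frac{2e}{m_e}}\;\frac{V^{3/2}}{d^{2}}

    The aspect-ratio-dependent scaling laws derived in the paper adapt this limit to the temporally long, transversely narrow initial distributions discussed above.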

  15. Use of the superpopulation approach to estimate breeding population size: An example in asynchronously breeding birds

    USGS Publications Warehouse

    Williams, K.A.; Frederick, P.C.; Nichols, J.D.

    2011-01-01

    Many populations of animals are fluid in both space and time, making estimation of numbers difficult. Much attention has been devoted to estimation of bias in detection of animals that are present at the time of survey. However, an equally important problem is estimation of population size when all animals are not present on all survey occasions. Here, we showcase use of the superpopulation approach to capture-recapture modeling for estimating populations where group membership is asynchronous, and where considerable overlap in group membership among sampling occasions may occur. We estimate total population size of long-legged wading bird (Great Egret and White Ibis) breeding colonies from aerial observations of individually identifiable nests at various times in the nesting season. Initiation and termination of nests were analogous to entry and departure from a population. Estimates using the superpopulation approach were 47-382% larger than peak aerial counts of the same colonies. Our results indicate that the use of the superpopulation approach to model nesting asynchrony provides a considerably less biased and more efficient estimate of nesting activity than traditional methods. We suggest that this approach may also be used to derive population estimates in a variety of situations where group membership is fluid. © 2011 by the Ecological Society of America.

  16. CH-47F Improved Cargo Helicopter (CH-47F)

    DTIC Science & Technology

    2015-12-01

    Confidence Level of the cost estimate for the current Acquisition Program Baseline (APB): 50%. The remainder of this extract consists of fragmentary Selected Acquisition Report table excerpts tracking changes from the initial development estimate to the current production estimate in Program Acquisition Unit Cost (PAUC) and Average Procurement Unit Cost (APUC), broken down by the standard economic, quantity, schedule, engineering, estimating, other, and support variance categories; the underlying figures are truncated and not fully recoverable.

  17. Conformational plasticity of RepB, the replication initiator protein of promiscuous streptococcal plasmid pMV158

    PubMed Central

    Boer, D. Roeland; Ruiz-Masó, José Angel; Rueda, Manuel; Petoukhov, Maxim V.; Machón, Cristina; Svergun, Dmitri I.; Orozco, Modesto; del Solar, Gloria; Coll, Miquel

    2016-01-01

    DNA replication initiation is a vital and tightly regulated step in all replicons and requires an initiator factor that specifically recognizes the DNA replication origin and starts replication. RepB from the promiscuous streptococcal plasmid pMV158 is a hexameric ring protein evolutionary related to viral initiators. Here we explore the conformational plasticity of the RepB hexamer by i) SAXS, ii) sedimentation experiments, iii) molecular simulations and iv) X-ray crystallography. Combining these techniques, we derive an estimate of the conformational ensemble in solution showing that the C-terminal oligomerisation domains of the protein form a rigid cylindrical scaffold to which the N-terminal DNA-binding/catalytic domains are attached as highly flexible appendages, featuring multiple orientations. In addition, we show that the hinge region connecting both domains plays a pivotal role in the observed plasticity. Sequence comparisons and a literature survey show that this hinge region could exists in other initiators, suggesting that it is a common, crucial structural element for DNA binding and manipulation. PMID:26875695

  18. Initial rupture of earthquakes in the 1995 Ridgecrest, California sequence

    USGS Publications Warehouse

    Mori, J.; Kanamori, H.

    1996-01-01

    Close examination of the P waves from earthquakes ranging in size across several orders of magnitude shows that the shape of the initiation of the velocity waveforms is independent of the magnitude of the earthquake. A model in which earthquakes of all sizes have similar rupture initiation can explain the data. This suggests that it is difficult to estimate the eventual size of an earthquake from the initial portion of the waveform. Previously reported curvature seen in the beginning of some velocity waveforms can be largely explained as the effect of anelastic attenuation; thus there is little evidence for a departure from models of simple rupture initiation that grow dynamically from a small region. The results of this study indicate that any "precursory" radiation at seismic frequencies must emanate from a source region no larger than the equivalent of a M0.5 event (i.e. a characteristic length of ≈10 m). The size of the nucleation region for magnitude 0 to 5 earthquakes thus is not resolvable with the standard seismic instrumentation deployed in California. Copyright 1996 by the American Geophysical Union.
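
    The ≈10 m characteristic length quoted for a M 0.5 event can be reproduced with standard scaling relations, assuming a nominal stress drop (the paper's exact assumption is not restated here): the moment magnitude relation M_0 = 10^{1.5 M_w + 9.1} N·m gives M_0 ≈ 7×10⁹ N·m for M_w = 0.5, and the circular-crack relation

      a = \left(\frac{7\,M_0}{16\,\Delta\sigma}\right)^{1/3}

    with Δσ ≈ 3 MPa yields a ≈ 10 m.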

  19. Experimental Estimation of Mutation Rates in a Wheat Population With a Gene Genealogy Approach

    PubMed Central

    Raquin, Anne-Laure; Depaulis, Frantz; Lambert, Amaury; Galic, Nathalie; Brabant, Philippe; Goldringer, Isabelle

    2008-01-01

    Microsatellite markers are extensively used to evaluate genetic diversity in natural or experimental evolving populations. Their high degree of polymorphism reflects their high mutation rates. Estimates of the mutation rates are therefore necessary when characterizing diversity in populations. As a complement to the classical experimental designs, we propose to use experimental populations, where the initial state is entirely known and some intermediate states have been thoroughly surveyed, thus providing a short timescale estimation together with a large number of cumulated meioses. In this article, we derived four original gene genealogy-based methods to assess mutation rates with limited bias due to relevant model assumptions incorporating the initial state, the number of new alleles, and the genetic effective population size. We studied the evolution of genetic diversity at 21 microsatellite markers, after 15 generations in an experimental wheat population. Compared to the parents, 23 new alleles were found in generation 15 at 9 of the 21 loci studied. We provide evidence that they arose by mutation. Corresponding estimates of the mutation rates ranged from 0 to 4.97 × 10−3 per generation (i.e., year). Sequences of several alleles revealed that length polymorphism was only due to variation in the core of the microsatellite. Among different microsatellite characteristics, both the motif repeat number and an independent estimation of the Nei diversity were correlated with the novel diversity. Despite a reduced genetic effective size, global diversity at microsatellite markers increased in this population, suggesting that microsatellite diversity should be used with caution as an indicator in biodiversity conservation issues. PMID:18689900

  20. Experimental estimation of mutation rates in a wheat population with a gene genealogy approach.

    PubMed

    Raquin, Anne-Laure; Depaulis, Frantz; Lambert, Amaury; Galic, Nathalie; Brabant, Philippe; Goldringer, Isabelle

    2008-08-01

    Microsatellite markers are extensively used to evaluate genetic diversity in natural or experimental evolving populations. Their high degree of polymorphism reflects their high mutation rates. Estimates of the mutation rates are therefore necessary when characterizing diversity in populations. As a complement to the classical experimental designs, we propose to use experimental populations, where the initial state is entirely known and some intermediate states have been thoroughly surveyed, thus providing a short timescale estimation together with a large number of cumulated meioses. In this article, we derived four original gene genealogy-based methods to assess mutation rates with limited bias due to relevant model assumptions incorporating the initial state, the number of new alleles, and the genetic effective population size. We studied the evolution of genetic diversity at 21 microsatellite markers, after 15 generations in an experimental wheat population. Compared to the parents, 23 new alleles were found in generation 15 at 9 of the 21 loci studied. We provide evidence that they arose by mutation. Corresponding estimates of the mutation rates ranged from 0 to 4.97 x 10(-3) per generation (i.e., year). Sequences of several alleles revealed that length polymorphism was only due to variation in the core of the microsatellite. Among different microsatellite characteristics, both the motif repeat number and an independent estimation of the Nei diversity were correlated with the novel diversity. Despite a reduced genetic effective size, global diversity at microsatellite markers increased in this population, suggesting that microsatellite diversity should be used with caution as an indicator in biodiversity conservation issues.

  1. Estimation of the Arrival Time and Duration of a Radio Signal with Unknown Amplitude and Initial Phase

    NASA Astrophysics Data System (ADS)

    Trifonov, A. P.; Korchagin, Yu. E.; Korol'kov, S. V.

    2018-05-01

    We synthesize the quasi-likelihood, maximum-likelihood, and quasioptimal algorithms for estimating the arrival time and duration of a radio signal with unknown amplitude and initial phase. The discrepancies between the hardware and software realizations of the estimation algorithm are shown. The characteristics of the synthesized-algorithm operation efficiency are obtained. Asymptotic expressions for the biases, variances, and the correlation coefficient of the arrival-time and duration estimates, which hold true for large signal-to-noise ratios, are derived. The accuracy losses of the estimates of the radio-signal arrival time and duration because of the a priori ignorance of the amplitude and initial phase are determined.

  2. Improving Children’s Knowledge of Fraction Magnitudes

    PubMed Central

    Fazio, Lisa K.; Kennedy, Casey A.; Siegler, Robert S.

    2016-01-01

    We examined whether playing a computerized fraction game, based on the integrated theory of numerical development and on the Common Core State Standards’ suggestions for teaching fractions, would improve children’s fraction magnitude understanding. Fourth and fifth-graders were given brief instruction about unit fractions and played Catch the Monster with Fractions, a game in which they estimated fraction locations on a number line and received feedback on the accuracy of their estimates. The intervention lasted less than 15 minutes. In our initial study, children showed large gains from pretest to posttest in their fraction number line estimates, magnitude comparisons, and recall accuracy. In a more rigorous second study, the experimental group showed similarly large improvements, whereas a control group showed no improvement from practicing fraction number line estimates without feedback. The results provide evidence for the effectiveness of interventions emphasizing fraction magnitudes and indicate how psychological theories and research can be used to evaluate specific recommendations of the Common Core State Standards. PMID:27768756

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuncarayakti, Hanindyo; Maeda, Keiichi; Doi, Mamoru

    Integral field spectroscopy of 11 Type Ib/Ic supernova (SN Ib/Ic) explosion sites in nearby galaxies has been obtained using UH88/SNIFS and Gemini-N/GMOS. The use of integral field spectroscopy enables us to obtain both spatial and spectral information about the explosion site, enabling the identification of the parent stellar population of the SN progenitor star. The spectrum of the parent population provides metallicity determination via strong-line method and age estimation obtained via comparison with simple stellar population models. We adopt this information as the metallicity and age of the SN progenitor, under the assumption that it was coeval with the parent stellar population. The age of the star corresponds to its lifetime, which in turn gives the estimate of its initial mass. With this method we were able to determine both the metallicity and initial (zero-age main sequence) mass of the progenitor stars of SNe Ib and Ic. We found that on average SN Ic explosion sites are more metal-rich and younger than SN Ib sites. The initial mass of the progenitors derived from parent stellar population age suggests that SN Ic has more massive progenitors than SN Ib. In addition, we also found indication that some of our SN progenitors are less massive than ≈25 M☉, indicating that they may have been stars in a close binary system that have lost their outer envelope via binary interactions to produce SNe Ib/Ic, instead of single Wolf-Rayet stars. These findings support the current suggestions that both binary and single progenitor channels are in effect in producing SNe Ib/Ic. This work also demonstrates the power of integral field spectroscopy in investigating SN environments and active star-forming regions.

  4. Do family dinners reduce the risk for early adolescent substance use? A propensity score analysis.

    PubMed

    Hoffmann, John P; Warnick, Elizabeth

    2013-01-01

    The risks of early adolescent substance use on health and well-being are well documented. In recent years, several experts have claimed that a simple preventive measure for these behaviors is for families to share evening meals. In this study, we use data from the 1997 National Longitudinal Study of Youth (n = 5,419) to estimate propensity score models designed to match on a set of covariates and predict early adolescent substance use frequency and initiation. The results indicate that family dinners are not generally associated with alcohol or cigarette use or with drug use initiation. However, a continuous measure of family dinners is modestly associated with marijuana frequency, thus suggesting a potential causal impact. These results show that family dinners may help prevent one form of substance use in the short term but do not generally affect substance use initiation or alcohol and cigarette use.

  5. Identification and characterization of kidney transplants with good glomerular filtration rate at 1 year but subsequent progressive loss of renal function.

    PubMed

    Park, Walter D; Larson, Timothy S; Griffin, Matthew D; Stegall, Mark D

    2012-11-15

    After the first year after kidney transplantation, 3% to 5% of grafts fail each year but detailed studies of how grafts progress to failure are lacking. This study aimed to analyze the functional stability of kidney transplants between 1 and 5 years after transplantation and to identify initially well-functioning grafts with progressive decline in allograft function. The study included 788 adult conventional kidney transplants performed at the Mayo Clinic Rochester between January 2000 and December 2005 with a minimum graft survival and follow-up of 2.6 years. The Modification of Diet in Renal Disease equation for estimating glomerular filtration rate (eGFR(MDRD)) was used to calculate the slope of renal function over time using all available serum creatinine values between 1 and 5 years after transplantation. Most transplants demonstrated good function (eGFR(MDRD) ≥40 mL/min) at 1 year with positive eGFR(MDRD) slope between 1 and 5 years after transplantation. However, a subset of grafts with 1-year eGFR(MDRD) ≥40 mL/min exhibited strongly negative eGFR(MDRD) slope between 1 and 5 years suggestive of progressive loss of graft function. Forty-one percent of this subset reached graft failure during follow-up, accounting for 69% of allograft failures occurring after 2.5 years after transplantation. This pattern of progressive decline in estimated glomerular filtration rate despite good early function was associated with but not fully attributable to factors suggestive of enhanced antidonor immunity. Longitudinal analysis of serial estimated glomerular filtration rate measurements identifies initially well-functioning kidney transplants at high risk for subsequent graft loss. For this subset, further studies are needed to identify modifiable causes of functional decline.
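
    The eGFR(MDRD) values above come from the four-variable MDRD study equation; the IDMS-traceable form is reproduced here for orientation (whether the 175 or the older 186 coefficient was used is not stated in the abstract):

      \text{eGFR} = 175 \times S_{cr}^{-1.154} \times \text{age}^{-0.203} \times 0.742\ [\text{if female}] \times 1.212\ [\text{if Black}]

    with serum creatinine S_cr in mg/dL and eGFR in mL/min/1.73 m²; the slope analysis then fits these serial eGFR values against time since transplantation for each graft.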

  6. Comparison Between Individually and Group-Based Insulin Pump Initiation by Time-Driven Activity-Based Costing

    PubMed Central

    Ridderstråle, Martin

    2017-01-01

    Background: Depending on available resources, competencies, and pedagogic preference, initiation of insulin pump therapy can be performed on either an individual or a group basis. Here we compared the two models with respect to resources used. Methods: Time-driven activity-based costing (TDABC) was used to compare initiating insulin pump treatment in groups (GT) to individual treatment (IT). Activities and cost drivers were identified, timed, or estimated at location. Medical quality and patient satisfaction were assumed to be noninferior and were not measured. Results: GT was about 30% less time-consuming and 17% less cost driving per patient and activity compared to IT. As a batch driver (16 patients in one group) GT produced an upward jigsaw-shaped accumulative cost curve compared to the incremental increase incurred by IT. Taking the alternate cost for those not attending into account, and realizing the cost of opportunity gained, suggested that GT was cost neutral already when 5 of 16 patients attended, and that a second group could be initiated at no additional cost as the attendance rate reached 15:1. Conclusions: We found TDABC to be effective in comparing treatment alternatives, improving cost control and decision making. Everything else being equal, if the setup is available, our data suggest that initiating insulin pump treatment in groups is far more cost effective than on an individual basis and that TDABC may be used to find the balance point. PMID:28366085
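
    In time-driven activity-based costing, the cost attributed to a patient pathway is built up from timed activities multiplied by the capacity cost rate of the resource performing them; a compact statement of the calculation (notation mine, not taken from the paper):

      \text{cost per patient} = \sum_i t_i \, c_i, \qquad c_i = \frac{\text{total cost of resource } i}{\text{practical capacity of resource } i\ (\text{minutes})}

    Group initiation lowers the per-patient figure mainly because the staff time t_i for shared activities is divided across the attendees.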

  7. The economics of improving medication adherence in osteoporosis: validation and application of a simulation model.

    PubMed

    Patrick, Amanda R; Schousboe, John T; Losina, Elena; Solomon, Daniel H

    2011-09-01

    Adherence to osteoporosis treatment is low. Although new therapies and behavioral interventions may improve medication adherence, questions are likely to arise regarding their cost-effectiveness. Our objectives were to develop and validate a model to simulate the clinical outcomes and costs arising from various osteoporosis medication adherence patterns among women initiating bisphosphonate treatment and to estimate the cost-effectiveness of a hypothetical intervention to improve medication adherence. We constructed a computer simulation using estimates of fracture rates, bisphosphonate treatment effects, costs, and utilities for health states drawn from the published literature. Probabilities of transitioning on and off treatment were estimated from administrative claims data. Patients were women initiating bisphosphonate therapy from the general community. We evaluated a hypothetical behavioral intervention to improve medication adherence. Changes in 10-yr fracture rates and incremental cost-effectiveness ratios were evaluated. A hypothetical intervention with a one-time cost of $250 that reduced bisphosphonate discontinuation by 30% had an incremental cost-effectiveness ratio (ICER) of $29,571 per quality-adjusted life year in 65-yr-old women initiating bisphosphonates. Although the ICER depended on patient age, intervention effectiveness, and intervention cost, the ICERs were less than $50,000 per quality-adjusted life year for the majority of intervention cost and effectiveness scenarios evaluated. Results were sensitive to bisphosphonate cost and effectiveness and assumptions about the rate at which intervention and treatment effects decline over time. Our results suggest that behavioral interventions to improve osteoporosis medication adherence will likely have favorable ICERs if their efficacy can be sustained.
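
    The ICER reported above is the extra cost per extra quality-adjusted life year gained relative to usual care. A quick illustrative calculation with made-up numbers (not the model's internal values): if the $250 intervention adds $320 in total discounted costs and 0.0108 QALYs per woman, then

      \text{ICER} = \frac{\Delta C}{\Delta E} = \frac{320}{0.0108} \approx \$29{,}600\ \text{per QALY},

    of the same order as the $29,571 figure quoted for 65-yr-old initiators.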

  8. Applications of ASFCM(Assessment System of Flood Control Measurement) in Typhoon Committee Members

    NASA Astrophysics Data System (ADS)

    Kim, C.

    2013-12-01

    Because of extreme weather driven by global warming and the greenhouse effect, the risk of flood damage has increased and the scale of damage has grown, so existing flood-control assessment systems need to account for climate change and for the changing magnitude of flood damage. A comprehensive, integrated system is therefore needed to identify optimal flood control measures while reducing uncertainty about their socio-economic impacts. The Assessment System of Structural Flood Control Measures (ASFCM) was developed to prioritize investment in flood control measures and to support planning of social infrastructure projects. ASFCM consists of three modules: 1) an initial setup and input module, 2) a flood and damage estimation module, and 3) a socio-economic analysis module. The first step is to build the database for flood damage estimation, containing the input data on estimation units, property, historical flood damages, and the topographic and hydrological characteristics of the study area. Flood damage data are then classified by local characteristics; five classes (large city, medium-sized city, small city, farming area, and mountain area) are distinguished using population density as the criterion. The next step is floodplain simulation, for which HEC-RAS is used to simulate inundation. Combining the database with the damage estimation yields the total (direct) damage, defined as the cost of restoring socio-economic activity to its pre-flood level. The final module produces the economic analysis index (B/C ratio) using Multidimensional Flood Damage Analysis. ASFCM thus provides a reference index for designing structural flood control measures and planning non-structural systems to reduce water-related damage, and encourages flood control planners and managers to consider and apply socio-economic analysis results. ASFCM was applied in the Republic of Korea, Thailand, and the Philippines to assess its efficiency and applicability. Figure 1. ASFCM application (An-yang Stream, Republic of Korea).

  9. The effects of a flexible visual acuity-driven ranibizumab treatment regimen in age-related macular degeneration: outcomes of a drug and disease model.

    PubMed

    Holz, Frank G; Korobelnik, Jean-François; Lanzetta, Paolo; Mitchell, Paul; Schmidt-Erfurth, Ursula; Wolf, Sebastian; Markabi, Sabri; Schmidli, Heinz; Weichselberger, Andreas

    2010-01-01

    Differences in treatment responses to ranibizumab injections observed within trials involving monthly (MARINA and ANCHOR studies) and quarterly (PIER study) treatment suggest that an individualized treatment regimen may be effective in neovascular age-related macular degeneration. In the present study, a drug and disease model was used to evaluate the impact of an individualized, flexible treatment regimen on disease progression. For visual acuity (VA), a model was developed on the 12-month data from ANCHOR, MARINA, and PIER. Data from untreated patients were used to model patient-specific disease progression in terms of VA loss. Data from treated patients from the period after the three initial injections were used to model the effect of predicted ranibizumab vitreous concentration on VA loss. The model was checked by comparing simulations of VA outcomes after monthly and quarterly injections during this period with trial data. A flexible VA-guided regimen (after the three initial injections) in which treatment is initiated by loss of >5 letters from best previously observed VA scores was simulated. Simulated monthly and quarterly VA-guided regimens showed good agreement with trial data. Simulation of VA-driven individualized treatment suggests that this regimen, on average, sustains the initial gains in VA seen in clinical trials at month 3. The model predicted that, on average, to maintain initial VA gains, an estimated 5.1 ranibizumab injections are needed during the 9 months after the three initial monthly injections, which amounts to a total of 8.1 injections during the first year. A flexible, individualized VA-guided regimen after the three initial injections may sustain vision improvement with ranibizumab and could improve cost-effectiveness and convenience and reduce drug administration-associated risks.

  10. Continuous-variable quantum probes for structured environments

    NASA Astrophysics Data System (ADS)

    Bina, Matteo; Grasselli, Federico; Paris, Matteo G. A.

    2018-01-01

    We address parameter estimation for structured environments and suggest an effective estimation scheme based on continuous-variable quantum probes. In particular, we investigate the use of a single bosonic mode as a probe for Ohmic reservoirs, and obtain the ultimate quantum limits to the precise estimation of their cutoff frequency. We assume the probe is prepared in a Gaussian state and determine the optimal working regime, i.e., the conditions for the maximization of the quantum Fisher information in terms of the initial preparation, the reservoir temperature, and the interaction time. Upon investigating the Fisher information of feasible measurements, we arrive at a remarkably simple result: homodyne detection of canonical variables allows one to achieve the ultimate quantum limit to precision under suitable, mild conditions. Finally, upon exploiting a perturbative approach, we find the invariant sweet spots of the (tunable) characteristic frequency of the probe, which drive the probe towards the optimal working regime.

  11. Preliminary Assessment of Spatial Competition in the Market for E85

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clinton, Bentley

    Anecdotal evidence suggests retail E85 prices may track retail gasoline prices rather than wholesale costs. This indicates E85 prices may be higher than they would be if priced on a cost basis, hence limiting adoption by some price-sensitive consumers. Using publicly available and proprietary E85 and regular gasoline price data, we examine pricing behavior in the market for E85. Specifically, we assess the extent to which local retail competition in E85 markets decreases E85 retail prices. Results of econometric analysis suggest that higher levels of retail competition (measured in terms of station density) are associated with lower E85 prices at the pump. While more precise causal estimates may be produced from more comprehensive data, this study is the first to our knowledge that estimates the spatial competition dimension of E85 pricing behavior by firms. This is an initial presentation; a related technical report is also available.

  12. Rate and yield relationships in the production of xanthan gum by batch fermentations using complex and chemically defined growth media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinches, A.; Pallent, L.J.

    1986-10-01

    Rate and yield information relating to biomass and product formation and to nitrogen, glucose and oxygen consumption is described for xanthan gum batch fermentations in which both chemically defined (glutamate nitrogen) and complex (peptone nitrogen) media are employed. Simple growth and product models are used for data interpretation. For both nitrogen sources, rate and yield parameter estimates are shown to be independent of initial nitrogen concentrations. For stationary phases, specific rates of gum production are shown to be independent of nitrogen source but dependent on initial nitrogen concentration. The latter is modeled empirically and suggests caution in applying simple product models to xanthan gum fermentations. 13 references.

  13. Estimating the Impact of Earlier ART Initiation and Increased Testing Coverage on HIV Transmission among Men Who Have Sex with Men in Mexico using a Mathematical Model.

    PubMed

    Caro-Vega, Yanink; del Rio, Carlos; Lima, Viviane Dias; Lopez-Cervantes, Malaquias; Crabtree-Ramirez, Brenda; Bautista-Arredondo, Sergio; Colchero, M Arantxa; Sierra-Madero, Juan

    2015-01-01

    To estimate the impact of late ART initiation on HIV transmission among men who have sex with men (MSM) in Mexico. An HIV transmission model was built to estimate the number of infections transmitted by HIV-infected men who have sex with men (MSM-HIV+) in the short and long term. Sexual risk behavior data were estimated from a nationwide study of MSM. CD4+ counts at ART initiation from a representative national cohort were used to estimate time since infection. The numbers of MSM-HIV+ on treatment and virally suppressed were estimated from surveillance and government reports. A status quo scenario (SQ) and scenarios of early ART initiation and increased HIV testing were modeled. We estimated 14,239 new HIV infections per year from MSM-HIV+ in Mexico. In SQ, MSM take an average of 7.4 years since infection to initiate treatment, with a median CD4+ count of 148 cells/mm3 (25th-75th percentiles, 52-266). In SQ, 68% of MSM-HIV+ are not aware of their HIV status and transmit 78% of new infections. Increasing the CD4+ count at ART initiation to 350 cells/mm3 shortened the time since infection to 2.8 years. Increasing HIV testing to cover 80% of undiagnosed MSM resulted in a reduction of 70% in new infections in 20 years. With ART initiation at 500 cells/mm3 combined with increased HIV testing, the reduction would be 75% in 20 years. A substantial number of new HIV infections in Mexico are transmitted by undiagnosed and untreated MSM-HIV+. An aggressive increase in HIV testing coverage and initiating ART at a CD4 count of 500 cells/mm3 in this population would significantly benefit individuals and decrease the number of new HIV infections in Mexico.

  14. Critical Parameters of the Initiation Zone for Spontaneous Dynamic Rupture Propagation

    NASA Astrophysics Data System (ADS)

    Galis, M.; Pelties, C.; Kristek, J.; Moczo, P.; Ampuero, J. P.; Mai, P. M.

    2014-12-01

    Numerical simulations of rupture propagation are used to study both earthquake source physics and earthquake ground motion. Under linear slip-weakening friction, artificial procedures are needed to initiate a self-sustained rupture. The concept of an overstressed asperity is often applied, in which the asperity is characterized by its size, shape and overstress. The physical properties of the initiation zone may have significant impact on the resulting dynamic rupture propagation. A trial-and-error approach is often necessary for successful initiation because 2D and 3D theoretical criteria for estimating the critical size of the initiation zone do not provide general rules for designing 3D numerical simulations. Therefore, it is desirable to define guidelines for efficient initiation with minimal artificial effects on rupture propagation. We perform an extensive parameter study using numerical simulations of 3D dynamic rupture propagation assuming a planar fault to examine the critical size of square, circular and elliptical initiation zones as a function of asperity overstress and background stress. For a fixed overstress, we discover that the area of the initiation zone is more important for the nucleation process than its shape. Comparing our numerical results with published theoretical estimates, we find that the estimates by Uenishi & Rice (2004) are applicable to configurations with low background stress and small overstress. None of the published estimates are consistent with numerical results for configurations with high background stress. We therefore derive new equations to estimate the initiation zone size in environments with high background stress. Our results provide guidelines for defining the size of the initiation zone and overstress with minimal effects on the subsequent spontaneous rupture propagation.
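
    The Uenishi & Rice estimate referred to above gives, for linear slip-weakening friction, a critical nucleation length that is independent of the shape of the loading-stress distribution; it is shown here in its commonly quoted two-dimensional form as background (the three-dimensional asperity sizes in the study are compared against estimates of this type):

      L_c \approx 1.158\,\frac{\mu^{*}}{W}, \qquad W = \frac{\tau_s - \tau_d}{D_c}

    where μ* is the effective shear modulus (μ for mode III, μ/(1−ν) for mode II), τ_s and τ_d are the static and dynamic strengths, and D_c is the slip-weakening distance.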

  15. Super- and sub-critical regions in shocks driven by radio-loud and radio-quiet CMEs

    PubMed Central

    Bemporad, Alessandro; Mancuso, Salvatore

    2012-01-01

    White-light coronagraphic images of Coronal Mass Ejections (CMEs) observed by SOHO/LASCO C2 have been used to estimate the density jump along the whole front of two CME-driven shocks. The two events are different in that the first one was a “radio-loud” fast CME, while the second one was a “radio quiet” slow CME. From the compression ratios inferred along the shock fronts, we estimated the Alfvén Mach numbers for the general case of an oblique shock. It turns out that the “radio-loud” CME shock is initially super-critical around the shock center, while later on the whole shock becomes sub-critical. On the contrary, the shock associated with the “radio-quiet” CME is sub-critical at all times. This suggests that CME-driven shocks could be efficient particle accelerators at the shock nose only at the initiation phases of the event, if and when the shock is super-critical, while at later times they lose their energy and the capability to accelerate high energetic particles. PMID:25685431

  16. Homage to Linnaeus: How many parasites? How many hosts?

    USGS Publications Warehouse

    Dobson, Andy; Lafferty, Kevin D.; Kuris, Armand M.; Hechinger, Ryan F.; Jetz, Walter

    2008-01-01

    Estimates of the total number of species that inhabit the Earth have increased significantly since Linnaeus's initial catalog of 20,000 species. The best recent estimates suggest that there are ≈6 million species. More emphasis has been placed on counts of free-living species than on parasitic species. We rectify this by quantifying the numbers and proportion of parasitic species. We estimate that there are between 75,000 and 300,000 helminth species parasitizing the vertebrates. We have no credible way of estimating how many parasitic protozoa, fungi, bacteria, and viruses exist. We estimate that between 3% and 5% of parasitic helminths are threatened with extinction in the next 50 to 100 years. Because patterns of parasite diversity do not clearly map onto patterns of host diversity, we can make very little prediction about geographical patterns of threat to parasites. If the threats reflect those experienced by avian hosts, then we expect climate change to be a major threat to the relatively small proportion of parasite diversity that lives in the polar and temperate regions, whereas habitat destruction will be the major threat to tropical parasite diversity. Recent studies of food webs suggest that ≈75% of the links in food webs involve a parasitic species; these links are vital for regulation of host abundance and potentially for reducing the impact of toxic pollutants. This implies that parasite extinctions may have unforeseen costs that impact the health and abundance of a large number of free-living species.

  17. Is talk "cheap"? An initial investigation of the equivalence of alcohol purchase task performance for hypothetical and actual rewards.

    PubMed

    Amlung, Michael T; Acker, John; Stojek, Monika K; Murphy, James G; MacKillop, James

    2012-04-01

    Behavioral economic alcohol purchase tasks (APTs) are self-report measures of alcohol demand that assess estimated consumption at escalating levels of price. However, the relationship between estimated performance for hypothetical outcomes and choices for actual outcomes has not been determined. The present study examined both the correspondence between choices for hypothetical and actual outcomes, and the correspondence between estimated alcohol consumption and actual drinking behavior. A collateral goal of the study was to examine the effects of alcohol cues on APT performance. Forty-one heavy-drinking adults (56% men) participated in a human laboratory protocol comprising APTs for hypothetical and actual alcohol and money, an alcohol cue reactivity paradigm, an alcohol self-administration period, and a recovery period. Pearson correlations revealed very high correspondence between APT performance for hypothetical and actual alcohol (ps < 0.001). Estimated consumption on the APT was similarly strongly associated with actual consumption during the self-administration period (r = 0.87, p < 0.001). Exposure to alcohol cues significantly increased subjective craving and arousal and had a trend-level effect on intensity of demand, in spite of notable ceiling effects. Associations among motivational indices were highly variable, suggesting multidimensionality. These results suggest there may be close correspondence both between value preferences for hypothetical alcohol and actual alcohol, and between estimated consumption and actual consumption. Methodological considerations and priorities for future studies are discussed. Copyright © 2011 by the Research Society on Alcoholism.

  18. Violence exposure among children with disabilities.

    PubMed

    Sullivan, Patricia M

    2009-06-01

    The focus of this paper is children with disabilities exposed to a broad range of violence types including child maltreatment, domestic violence, community violence, and war and terrorism. Because disability research must be interpreted on the basis of the definitional paradigm employed, definitions of disability status and current prevalence estimates as a function of a given paradigm are initially considered. These disability paradigms include those used in federal, education, juvenile justice, and health care arenas. Current prevalence estimates of childhood disability in the U.S. are presented within the frameworks of these varying definitions of disability status in childhood. Summaries of research from 2000 to 2008 on the four types of violence victimization addressed among children with disabilities are presented and directions for future research suggested.

  19. Existential Risk and Cost-Effective Biosecurity

    PubMed Central

    Snyder-Beattie, Andrew

    2017-01-01

    In the decades to come, advanced bioweapons could threaten human existence. Although the probability of human extinction from bioweapons may be low, the expected value of reducing the risk could still be large, since such risks jeopardize the existence of all future generations. We provide an overview of biotechnological extinction risk, make some rough initial estimates for how severe the risks might be, and compare the cost-effectiveness of reducing these extinction-level risks with existing biosecurity work. We find that reducing human extinction risk can be more cost-effective than reducing smaller-scale risks, even when using conservative estimates. This suggests that the risks are not low enough to ignore and that more ought to be done to prevent the worst-case scenarios. PMID:28806130

  20. A History of Ashes: An 80 Year Comparative Portrait of Smoking Initiation in American Indians and Non-Hispanic Whites—the Strong Heart Study

    PubMed Central

    Orr, Raymond; Calhoun, Darren; Noonan, Carolyn; Whitener, Ron; Henderson, Jeff; Goldberg, Jack; Henderson, Patrica Nez

    2013-01-01

    The consequences of starting smoking by age 18 are significant. Early smoking initiation is associated with higher tobacco dependence, increased difficulty in smoking cessation, and more negative health outcomes. The purpose of this study is to examine how closely smoking initiation in a well-defined population of American Indians (AI) resembles that in a non-Hispanic white (NHW) population born over the same 80-year period. We obtained data on age of smoking initiation among 7,073 AIs who were members of 13 tribes in Arizona, Oklahoma, and North and South Dakota from the 1988 Strong Heart Study (SHS) and the 2001 Strong Heart Family Study (SHFS), and 19,747 NHW participants in the 2003 National Health Interview Survey. The participants were born as early as 1904 and as late as 1985. We classified participants according to birth cohort by decade, sex, and, for AIs, location. We estimated the cumulative incidence of smoking initiation by age 18 in each sex and birth cohort group in both AIs and NHWs and used Cox regression to estimate hazard ratios for the association of birth cohort, sex, and region with the age at smoking initiation. We found that the cumulative incidence of smoking initiation by age 18 was higher in males than females in all SHS regions and in NHWs (p < 0.001). Our results show significant regional variation in age of initiation within the SHS (p < 0.001). Our data showed that not all AIs in this sample followed the trend toward earlier smoking initiation. For instance, Oklahoma SHS male participants born in the 1980s initiated smoking before age 18 less often than those born before 1920, by a ratio of 0.7. The results showed significant variation in age of initiation across sex, birth cohort, and location. Our preliminary analyses suggest that AI smoking trends are not uniform across region or gender but are likely shaped by local context. If tobacco prevention and control programs depend in part on addressing the origins of AI smoking, it may be helpful to increase awareness of regional differences. PMID:23644825

  1. Method to monitor HC-SCR catalyst NOx reduction performance for lean exhaust applications

    DOEpatents

    Viola, Michael B [Macomb Township, MI; Schmieg, Steven J [Troy, MI; Sloane, Thompson M [Oxford, MI; Hilden, David L [Shelby Township, MI; Mulawa, Patricia A [Clinton Township, MI; Lee, Jong H [Rochester Hills, MI; Cheng, Shi-Wai S [Troy, MI

    2012-05-29

    A method for initiating a regeneration mode in selective catalytic reduction device utilizing hydrocarbons as a reductant includes monitoring a temperature within the aftertreatment system, monitoring a fuel dosing rate to the selective catalytic reduction device, monitoring an initial conversion efficiency, selecting a determined equation to estimate changes in a conversion efficiency of the selective catalytic reduction device based upon the monitored temperature and the monitored fuel dosing rate, estimating changes in the conversion efficiency based upon the determined equation and the initial conversion efficiency, and initiating a regeneration mode for the selective catalytic reduction device based upon the estimated changes in conversion efficiency.
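
    A schematic of the decision flow the abstract describes — select an efficiency-decay relation from the monitored temperature and fuel dosing rate, project the conversion efficiency forward from its initial value, and trigger regeneration when the projection crosses a threshold. This is purely illustrative, not the patented implementation; every function name, decay rate, and threshold below is hypothetical.

      def project_efficiency(eff_initial, temperature_c, dosing_rate, hours):
          """Project NOx conversion efficiency after `hours` of operation.

          The decay-rate table is a hypothetical placeholder for the calibrated
          equations that would be selected by temperature and dosing rate.
          """
          if temperature_c < 250:
              decay_per_hour = 0.010 if dosing_rate < 1.0 else 0.015
          else:
              decay_per_hour = 0.005 if dosing_rate < 1.0 else 0.008
          return max(0.0, eff_initial - decay_per_hour * hours)

      def needs_regeneration(eff_initial, temperature_c, dosing_rate, hours, threshold=0.60):
          """Return True when projected conversion efficiency drops below threshold."""
          return project_efficiency(eff_initial, temperature_c, dosing_rate, hours) < threshold

      print(needs_regeneration(eff_initial=0.85, temperature_c=230, dosing_rate=1.2, hours=20))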

  2. Quantifying human decomposition in an indoor setting and implications for postmortem interval estimation.

    PubMed

    Ceciliason, Ann-Sofie; Andersson, M Gunnar; Lindström, Anders; Sandler, Håkan

    2018-02-01

    This study's objective is to obtain accuracy and precision in estimating the postmortem interval (PMI) for decomposing human remains discovered in indoor settings. Data were collected prospectively from 140 forensic cases with a known date of death, scored according to the Total Body Score (TBS) scale at the post-mortem examination. In our model setting, it is estimated that, in cases with or without the presence of blowfly larvae, approximately 45% or 66% respectively, of the variance in TBS can be derived from Accumulated Degree-Days (ADD). The precision in estimating ADD/PMI from TBS is, in our setting, moderate to low. However, dividing the cases into defined subgroups suggests the possibility to increase the precision of the model. Our findings also suggest a significant seasonal difference with concomitant influence on TBS in the complete data set, possibly initiated by the presence of insect activity mainly during summer. PMI may be underestimated in cases with presence of desiccation. Likewise, there is a need for evaluating the effect of insect activity, to avoid overestimating the PMI. Our data sample indicates that the scoring method might need to be slightly modified to better reflect indoor decomposition, especially in cases with insect infestations or/and extensive desiccation. When applying TBS in an indoor setting, the model requires distinct inclusion criteria and a defined population. Copyright © 2017 Elsevier B.V. All rights reserved.
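
    The ADD-based workflow the study relies on can be summarized as: regress accumulated degree-days (ADD) on TBS from cases with known PMI, then convert a predicted ADD into days using the ambient temperature record. A toy sketch under those assumptions follows; the training values and the log-linear form are invented for illustration, not taken from the paper.

      import numpy as np

      # Hypothetical training data: Total Body Score vs known accumulated degree-days
      tbs = np.array([6.0, 10.0, 14.0, 18.0, 22.0, 26.0])
      add = np.array([40.0, 120.0, 300.0, 700.0, 1400.0, 2600.0])

      # Fit log10(ADD) as a linear function of TBS
      slope, intercept = np.polyfit(tbs, np.log10(add), 1)

      def estimate_pmi_days(tbs_observed, daily_mean_temps_c):
          """Predict ADD from TBS, then count backwards from discovery until the
          temperature record (0 C base) has accumulated that many degree-days."""
          target_add = 10 ** (intercept + slope * tbs_observed)
          backwards = np.clip(np.asarray(daily_mean_temps_c)[::-1], 0.0, None)
          days = int(np.searchsorted(np.cumsum(backwards), target_add)) + 1
          return target_add, days

      temps = [21.0] * 60  # hypothetical indoor record for the 60 days before discovery
      print(estimate_pmi_days(tbs_observed=16, daily_mean_temps_c=temps))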

  3. Economic Analysis of Veterans Affairs Initiative to Prevent Methicillin-Resistant Staphylococcus aureus Infections.

    PubMed

    Nelson, Richard E; Stevens, Vanessa W; Khader, Karim; Jones, Makoto; Samore, Matthew H; Evans, Martin E; Douglas Scott, R; Slayton, Rachel B; Schweizer, Marin L; Perencevich, Eli L; Rubin, Michael A

    2016-05-01

    In an effort to reduce methicillin-resistant Staphylococcus aureus (MRSA) transmission through universal screening and isolation, the Department of Veterans Affairs (VA) launched the National MRSA Prevention Initiative in October 2007. The objective of this analysis was to quantify the budget impact and cost-effectiveness of this initiative. An economic model was developed using published data on MRSA hospital-acquired infection (HAI) rates in the VA from October 2007 to September 2010; estimates of the costs of MRSA HAIs in the VA; and estimates of the intervention costs, including salaries of staff members hired to support the initiative at each VA facility. To estimate the rate of MRSA HAIs that would have occurred if the initiative had not been implemented, two different assumptions were made: no change and a downward temporal trend. Effectiveness was measured in life-years gained. The initiative resulted in an estimated 1,466-2,176 fewer MRSA HAIs. The initiative itself was estimated to cost $207 million during this 3-year period, while the cost savings from prevented MRSA HAIs ranged from $27 million to $75 million. The incremental cost-effectiveness ratios ranged from $28,048 to $56,944 per life-year gained. The overall impact on the VA's budget was $131-$179 million. Wide-scale implementation of a national MRSA surveillance and prevention strategy in VA inpatient settings may have prevented a substantial number of MRSA HAIs. Although the savings associated with prevented infections helped offset some but not all of the cost of the initiative, this model indicated that the initiative would be considered cost-effective. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
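
    The core arithmetic behind the incremental cost-effectiveness ratio reported above is the net programme cost divided by the life-years gained. The sketch below illustrates this with the cost and savings figures quoted in the abstract; the life-years input is a hypothetical placeholder, not a value from the study.

```python
def icer(intervention_cost, cost_savings, life_years_gained):
    """Incremental cost-effectiveness ratio: net cost per life-year gained."""
    return (intervention_cost - cost_savings) / life_years_gained

# Reported figures: ~$207M programme cost and $27M-$75M in averted-infection savings.
# The life-years value below is a hypothetical placeholder, not a number from the study.
low = icer(207e6, 75e6, life_years_gained=4000.0)
high = icer(207e6, 27e6, life_years_gained=4000.0)
print(f"ICER range (toy life-years input): ${low:,.0f} - ${high:,.0f} per life-year")
```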

  4. Social foraging with partial (public) information.

    PubMed

    Mann, Ofri; Kiflawi, Moshe

    2014-10-21

    Group foragers can utilize public information to better estimate patch quality and arrive at more efficient patch-departure rules. However, acquiring such information may come at a cost, e.g. reduced search efficiency. We present a Bayesian group-foraging model in which social foragers do not require full awareness of their companions' foraging success, only of their number. In our model, patch departure is based on direct estimates of the number of remaining items. This is achieved by considering all likely combinations of initial patch-quality and group foraging-success, given the individual forager's experience within the patch. Slower rates of information-acquisition by our 'partially-aware' foragers lead them to over-utilize poor patches more than fully-aware foragers do. However, our model suggests that the ensuing loss in long-term intake-rates can be matched by a relatively low cost to the acquisition of full public information. In other words, we suggest that group-size offers sufficient information for optimal patch utilization by social foragers. We also suggest that our model is applicable to other situations where resources undergo 'background depletion', which is coincident with but independent of the consumer's own utilization. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Full-field and anomaly initialization using a low-order climate model: a comparison and proposals for advanced formulations

    NASA Astrophysics Data System (ADS)

    Carrassi, A.; Weber, R. J. T.; Guemas, V.; Doblas-Reyes, F. J.; Asif, M.; Volpi, D.

    2014-04-01

    Initialization techniques for seasonal-to-decadal climate predictions fall into two main categories, namely full-field initialization (FFI) and anomaly initialization (AI). In the FFI case the initial model state is replaced by the best possible available estimate of the real state. By doing so the initial error is efficiently reduced but, due to the unavoidable presence of model deficiencies, once the model is left free to run a prediction, its trajectory drifts away from the observations no matter how small the initial error is. This problem is partly overcome with AI, where the aim is to forecast future anomalies by assimilating observed anomalies on an estimate of the model climate. The large variety of experimental setups, models and observational networks adopted worldwide makes it difficult to draw firm conclusions on the respective advantages and drawbacks of FFI and AI, or to identify distinctive lines for improvement. The lack of a unified mathematical framework adds a further difficulty to the design of adequate initialization strategies that fit the desired forecast horizon, observational network and model at hand. Here we compare FFI and AI using a low-order climate model of nine ordinary differential equations and use the notation and concepts of data assimilation theory to highlight their error scaling properties. This analysis suggests better performance with FFI when a good observational network is available and reveals the direct relation of its skill with the observational accuracy. The skill of AI appears, however, mostly related to the model quality, and clear increases in skill can only be expected in coincidence with model upgrades. We have compared FFI and AI in experiments in which either the full system or the atmosphere and ocean were independently initialized. In the former case FFI shows better and longer-lasting improvements, with skillful predictions until month 30. In the initialization of single compartments, the best performance is obtained when the more stable component of the model (the ocean) is initialized, but with FFI it is possible to have some predictive skill even when the most unstable compartment (the extratropical atmosphere) is observed. Two advanced formulations, least-square initialization (LSI) and exploring parameter uncertainty (EPU), are introduced. Using LSI, the initialization makes use of model statistics to propagate information from observation locations to the entire model domain. Numerical results show that LSI improves the performance of FFI in all the situations when only a portion of the system's state is observed. EPU is an online drift correction method in which the drift caused by the parametric error is estimated using a short-time evolution law and is then removed during the forecast run. Its implementation in conjunction with FFI allows us to improve the prediction skill within the first forecast year. Finally, the application of these results in the context of realistic climate models is discussed.
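
    The operational difference between the two families of schemes can be stated in one line each: FFI replaces the model state with the best available estimate of the real state, while AI adds the observed anomaly to the model's own climatology. The sketch below is a minimal, single-variable illustration of that distinction; the variable names and numbers are illustrative and do not correspond to the nine-variable model used in the paper.

```python
import numpy as np

def full_field_init(obs_state):
    """FFI: start the forecast from the best available estimate of the real state."""
    return np.array(obs_state, dtype=float)

def anomaly_init(obs_state, obs_climatology, model_climatology):
    """AI: add the observed anomaly to the model's own climate, so the model
    starts near its attractor and drifts less."""
    obs_anomaly = np.asarray(obs_state, dtype=float) - np.asarray(obs_climatology, dtype=float)
    return np.asarray(model_climatology, dtype=float) + obs_anomaly

# Toy one-variable example (all values are illustrative).
obs, obs_clim, model_clim = 15.2, 14.0, 13.5
print(full_field_init([obs]))                          # [15.2] -> small initial error, larger drift
print(anomaly_init([obs], [obs_clim], [model_clim]))   # [14.7] -> anomaly preserved, reduced shock
```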

  6. Forest extent and deforestation in tropical Africa since 1900.

    PubMed

    Aleman, Julie C; Jarzyna, Marta A; Staver, A Carla

    2018-01-01

    Accurate estimates of historical forest extent and associated deforestation rates are crucial for quantifying tropical carbon cycles and formulating conservation policy. In Africa, data-driven estimates of historical closed-canopy forest extent and deforestation at the continental scale are lacking, and existing modelled estimates diverge substantially. Here, we synthesize available palaeo-proxies and historical maps to reconstruct forest extent in tropical Africa around 1900, when European colonization accelerated markedly, and compare these historical estimates with modern forest extent to estimate deforestation. We find that forests were less extensive in 1900 than bioclimatic models predict. As a result, across tropical Africa, ~21.7% of forests have been deforested, yielding substantially slower deforestation than previous estimates (35-55%). However, deforestation was heterogeneous: West and East African forests have undergone almost complete decline (~83.3 and 93.0%, respectively), while Central African forests have expanded at the expense of savannahs (~1.4% net forest expansion, with ~135,270 km² of savannahs encroached). These results suggest that climate alone does not determine savannah and forest distributions and that many savannahs hitherto considered to be degraded forests are instead relatively old. These data-driven reconstructions of historical biome distributions will inform tropical carbon cycle estimates, carbon mitigation initiatives and conservation planning in both forest and savannah systems.

  7. Combined statistical analyses for long-term stability data with multiple storage conditions: a simulation study.

    PubMed

    Almalik, Osama; Nijhuis, Michiel B; van den Heuvel, Edwin R

    2014-01-01

    Shelf-life estimation usually requires that at least three registration batches are tested for stability at multiple storage conditions. The shelf-life estimates are often obtained by linear regression analysis per storage condition, an approach implicitly suggested by ICH guideline Q1E. A linear regression analysis combining all data from multiple storage conditions was recently proposed in the literature when variances are homogeneous across storage conditions. The combined analysis is expected to perform better than the separate analysis per storage condition, since pooling data would lead to an improved estimate of the variation and a higher number of degrees of freedom, but this is not evident for shelf-life estimation. Indeed, the two approaches treat the observed initial batch results, the intercepts in the model, and poolability of batches differently, which may eliminate or reduce the expected advantage of the combined approach with respect to the separate approach. Therefore, a simulation study was performed to compare, on several characteristics, the distributions of simulated shelf-life estimates between the two approaches and to quantify the difference in shelf-life estimates. In general, the combined statistical analysis estimated the true shelf life more consistently and precisely than the analysis per storage condition, but it did not outperform the separate analysis in all circumstances.
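
    As a concrete anchor for the per-storage-condition approach discussed above, the sketch below fits an ordinary least-squares line to stability data from a single condition and solves for the time at which the fitted mean reaches the specification limit. A full ICH Q1E analysis would instead use the 95% confidence bound on the mean and formal batch-poolability tests, which this simplified sketch omits; the data and specification limit are hypothetical.

```python
import numpy as np

def shelf_life_estimate(months, assay, spec_limit):
    """Fit assay = a + b*months by least squares and return the time at which the
    fitted mean reaches `spec_limit` (simplified: ICH Q1E uses the 95% CI bound)."""
    b, a = np.polyfit(months, assay, 1)        # slope, intercept
    if b >= 0:
        return np.inf                          # no degradation trend detected
    return (spec_limit - a) / b

# Hypothetical stability data for one storage condition (% of label claim).
months = np.array([0, 3, 6, 9, 12, 18], dtype=float)
assay = np.array([100.1, 99.6, 99.0, 98.7, 98.1, 97.2])
print(f"Estimated shelf life: {shelf_life_estimate(months, assay, spec_limit=95.0):.1f} months")
```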

  8. Levonorgestrel release rates over 5 years with the Liletta® 52-mg intrauterine system.

    PubMed

    Creinin, Mitchell D; Jansen, Rolf; Starr, Robert M; Gobburu, Joga; Gopalakrishnan, Mathangi; Olariu, Andrea

    2016-10-01

    To understand the potential duration of action for Liletta®, we conducted this study to estimate levonorgestrel (LNG) release rates over approximately 5½ years of product use. Clinical sites in the U.S. Phase 3 study of Liletta collected the LNG intrauterine systems (IUSs) from women who discontinued the study. We randomly selected samples within 90-day intervals after discontinuation of IUS use through 900 days (approximately 2.5 years) and 180-day intervals for the remaining duration through 5.4 years (1980 days) to evaluate residual LNG content. We also performed an initial LNG content analysis using 10 randomly selected samples from a single lot. We calculated the average ex vivo release rate using the residual LNG content over the duration of the analysis. We analyzed 64 samples within 90-day intervals (range 6-10 samples per interval) through 900 days and 36 samples within 180-day intervals (6 samples per interval) for the remaining duration. The initial content analysis averaged 52.0 ± 1.8 mg. We calculated an average initial release rate of 19.5 mcg/day that decreased to 17.0, 14.8, 12.9, 11.3 and 9.8 mcg/day after 1, 2, 3, 4 and 5 years, respectively. The 5-year average release rate is 14.7 mcg/day. The estimated initial LNG release rate and gradual decay of the estimated release rate are consistent with the target design and function of the product. The calculated LNG content and release rate curves support the continued evaluation of Liletta as a contraceptive for 5 or more years of use. Liletta LNG content and release rates are comparable to published data for another LNG 52-mg IUS. The release rate at 5 years is more than double the published release rate at 3 years with an LNG 13.5-mg IUS, suggesting continued efficacy of Liletta beyond 5 years. Copyright © 2016 Elsevier Inc. All rights reserved.
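
    The average ex vivo release rate reported above follows directly from the residual-content measurements: the LNG released (initial minus residual content) divided by the days the device was in place. The sketch below shows that arithmetic; the 52.0 mg initial content is taken from the abstract, while the residual content and duration in the example are hypothetical.

```python
def avg_release_rate_mcg_per_day(initial_mg, residual_mg, days_of_use):
    """Average ex vivo release rate: LNG released divided by days the IUS was in place."""
    return (initial_mg - residual_mg) * 1000.0 / days_of_use

# Hypothetical residual content for a device explanted after roughly 3 years of use.
rate = avg_release_rate_mcg_per_day(initial_mg=52.0, residual_mg=33.5, days_of_use=1095)
print(f"Average release rate: {rate:.1f} mcg/day")
```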

  9. Effects of visual cues of object density on perception and anticipatory control of dexterous manipulation.

    PubMed

    Crajé, Céline; Santello, Marco; Gordon, Andrew M

    2013-01-01

    Anticipatory force planning during grasping is based on visual cues about the object's physical properties and sensorimotor memories of previous actions with grasped objects. Vision can be used to estimate object mass based on the object size to identify and recall sensorimotor memories of previously manipulated objects. It is not known whether subjects can use density cues to identify the object's center of mass (CM) and create compensatory moments in an anticipatory fashion during initial object lifts to prevent tilt. We asked subjects (n = 8) to estimate CM location of visually symmetric objects of uniform densities (plastic or brass, symmetric CM) and non-uniform densities (mixture of plastic and brass, asymmetric CM). We then asked whether subjects can use density cues to scale fingertip forces when lifting the visually symmetric objects of uniform and non-uniform densities. Subjects were able to accurately estimate an object's center of mass based on visual density cues. When the mass distribution was uniform, subjects could scale their fingertip forces in an anticipatory fashion based on the estimation. However, despite their ability to explicitly estimate CM location when object density was non-uniform, subjects were unable to scale their fingertip forces to create a compensatory moment and prevent tilt on initial lifts. Hefting object parts in the hand before the experiment did not affect this ability. This suggests a dichotomy between the ability to accurately identify the object's CM location for objects with non-uniform density cues and the ability to utilize this information to correctly scale their fingertip forces. These results are discussed in the context of possible neural mechanisms underlying sensorimotor integration linking visual cues and anticipatory control of grasping.

  10. Infiltration and runoff generation processes in fire-affected soils

    USGS Publications Warehouse

    Moody, John A.; Ebel, Brian A.

    2014-01-01

    Post-wildfire runoff was investigated by combining field measurements and modelling of infiltration into fire-affected soils to predict time-to-start of runoff and peak runoff rate at the plot scale (1 m²). Time series of soil-water content, rainfall and runoff were measured on a hillslope burned by the 2010 Fourmile Canyon Fire west of Boulder, Colorado during cyclonic and convective rainstorms in the spring and summer of 2011. Some of the field measurements and measured soil physical properties were used to calibrate a one-dimensional post-wildfire numerical model, which was then used as a ‘virtual instrument’ to provide estimates of the saturated hydraulic conductivity and high-resolution (1 mm) estimates of the soil-water profile and water fluxes within the unsaturated zone. Field and model estimates of the wetting-front depth indicated that post-wildfire infiltration was on average confined to shallow depths less than 30 mm. Model estimates of the effective saturated hydraulic conductivity, Ks, near the soil surface ranged from 0.1 to 5.2 mm h−1. Because of the relatively small values of Ks, the time-to-start of runoff (measured from the start of rainfall), tp, was found to depend only on the initial soil-water saturation deficit (predicted by the model) and a measured characteristic of the rainfall profile (referred to as the average rainfall acceleration, equal to the initial rate of change in rainfall intensity). An analytical model was developed from the combined results and explained 92–97% of the variance of tp, and the numerical infiltration model explained 74–91% of the variance of the peak runoff rates. These results are from one burned site, but they strongly suggest that tp in fire-affected soils (which often have low values of Ks) is probably controlled more by the storm profile and the initial soil-water saturation deficit than by soil hydraulic properties.
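
    As an illustration of why the time-to-start of runoff can depend only on the saturation deficit and the average rainfall acceleration, consider a storm whose intensity ramps linearly, i(t) = a·t, with essentially all rain infiltrating until the shallow wettable layer's deficit Sd is filled; then a·t²/2 = Sd gives tp = sqrt(2·Sd/a). This is an illustrative derivation under those assumptions, not the paper's fitted analytical model, and the numbers below are hypothetical.

```python
import math

def time_to_runoff_hours(sat_deficit_mm, rain_accel_mm_per_h2):
    """Illustrative estimate: runoff starts once a linearly ramping rainstorm
    (intensity = a*t) has delivered enough water to fill the near-surface
    saturation deficit, i.e. a*t**2/2 = Sd  ->  t = sqrt(2*Sd/a)."""
    return math.sqrt(2.0 * sat_deficit_mm / rain_accel_mm_per_h2)

# Hypothetical values: 6 mm of near-surface storage, intensity ramping at 30 mm/h per hour.
print(f"{time_to_runoff_hours(6.0, 30.0):.2f} h to start of runoff")
```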

  11. Urinary β2 microglobulin can predict tenofovir disoproxil fumarate-related renal dysfunction in HIV-1-infected patients who initiate tenofovir disoproxil fumarate-containing antiretroviral therapy.

    PubMed

    Nishijima, Takeshi; Kurosawa, Takuma; Tanaka, Noriko; Kawasaki, Yohei; Kikuchi, Yoshimi; Oka, Shinichi; Gatanaga, Hiroyuki

    2016-06-19

    In nephrotoxicity induced by tenofovir disoproxil fumarate (TDF), tubular dysfunction precedes the decline in GFR, suggesting that tubular markers are more sensitive than estimated glomerular filtration rate (eGFR). The hypothesis that urinary β2 microglobulin (β2M), a tubular function marker, can predict TDF-related renal dysfunction in HIV-1-infected patients was tested. A single-center observational study. The inclusion criterion was HIV-1-infected patients who started TDF-containing antiretroviral therapy from 2004 to 2013; urinary β2M after and closest to the day of TDF initiation within 180 days (termed 'β2M after TDF') was measured. The associations between 'β2M after TDF' and four renal end points (>10 ml/min per 1.73 m² decrement in eGFR relative to baseline, >20 decrement, >25% decrement, and eGFR < 60) were estimated with a logistic regression model. The association between 'β2M after TDF' and longitudinal changes in eGFR after initiation of TDF was estimated with a mixed model. A total of 655 study patients were analyzed (96% men, median age 38, median CD4 238 cells/μl, 63% treatment naïve). The median baseline eGFR was 117 ml/min per 1.73 m² (IQR 110-125), and the median duration of TDF use was 3.32 years (IQR 2.02-5.31). 'β2M after TDF' was significantly associated with more than 20 decrement in eGFR (P = 0.024) and more than 25% decrement (P = 0.014), and was marginally associated with eGFR less than 60 (P = 0.076). It was also significantly associated with the longitudinal eGFR after initiation of TDF (P < 0.0001). 'β2M after TDF' of 1700 μg/l was identified as the optimal cutoff value for the prediction of longitudinal eGFR. Urinary β2M measured within 180 days after initiation of TDF predicts renal dysfunction related to long-term TDF use.

  12. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the initial mixing distance, is estimated by: Cp=25(Wi)/(T^0.7 Q) where Cp is the peak concentration... equation: Tp=9.25×10^6 Wi/(QCp) where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp=C(q)/(Q+ where Cp and Q are...
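
    The excerpt above is truncated by the index, but the two intact relationships can be read as Cp = 25·Wi/(T^0.7·Q) and Tp = 9.25×10^6·Wi/(Q·Cp), with the superscripts lost in extraction. The sketch below simply evaluates those two expressions; the reading of the exponents is an assumption, the input numbers are illustrative, and the variable units are as defined in the appendix rather than restated here.

```python
def peak_concentration(Wi, T, Q):
    """Cp = 25*Wi / (T**0.7 * Q): peak concentration past the initial mixing
    distance, reading the garbled 'T0.7' in the excerpt as T raised to 0.7.
    Wi, T and Q are as defined in the appendix (units not restated here)."""
    return 25.0 * Wi / (T ** 0.7 * Q)

def time_of_peak(Wi, Q, Cp):
    """Tp = 9.25e6 * Wi / (Q * Cp): time estimate, in hours, per the excerpt."""
    return 9.25e6 * Wi / (Q * Cp)

# Illustrative inputs only; outputs depend on the unit conventions of the appendix.
print(peak_concentration(Wi=50.0, T=10.0, Q=100.0))
print(time_of_peak(Wi=50.0, Q=100.0, Cp=250.0))
```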

  13. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the initial mixing distance, is estimated by: Cp=25(Wi)/(T0.7 Q) where Cp is the peak concentration... equation: Tp=9.25×106 Wi/(QCp) where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp=C(q)/(Q+ where Cp and Q are...

  14. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the initial mixing distance, is estimated by: Cp=25(Wi)/(T0.7 Q) where Cp is the peak concentration... equation: Tp=9.25×106 Wi/(QCp) where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp=C(q)/(Q+ where Cp and Q are...

  15. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the initial mixing distance, is estimated by: Cp=25(Wi)/(T0.7 Q) where Cp is the peak concentration... equation: Tp=9.25×106 Wi/(QCp) where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp=C(q)/(Q+ where Cp and Q are...

  16. 43 CFR Appendix I to Part 11 - Methods for Estimating the Areas of Ground Water and Surface Water Exposure During the...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... the initial mixing distance, is estimated by: Cp=25(Wi)/(T0.7 Q) where Cp is the peak concentration... equation: Tp=9.25×106 Wi/(QCp) where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp=C(q)/(Q+ where Cp and Q are...

  17. The Robustness of Designs for Trials with Nested Data against Incorrect Initial Intracluster Correlation Coefficient Estimates

    ERIC Educational Resources Information Center

    Korendijk, Elly J. H.; Moerbeek, Mirjam; Maas, Cora J. M.

    2010-01-01

    In the case of trials with nested data, the optimal allocation of units depends on the budget, the costs, and the intracluster correlation coefficient. In general, the intracluster correlation coefficient is unknown in advance and an initial guess has to be made based on published values or subject matter knowledge. This initial estimate is likely…
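
    The design sensitivity described above comes from the classic optimal-allocation result for two-level trials: with cluster-level cost c_cluster and subject-level cost c_subject, the variance-minimising cluster size is n = sqrt(c_cluster·(1 − ρ)/(c_subject·ρ)), and the budget then fixes the number of clusters. The sketch below shows how an incorrect initial ICC guess shifts that allocation; the costs and budget are illustrative, and this is a textbook form rather than the authors' own derivation.

```python
import math

def optimal_allocation(budget, cost_per_cluster, cost_per_subject, icc):
    """Classic optimal-design result for two-level trials: cluster size that
    minimises the treatment-effect variance for a fixed budget is
        n = sqrt(cost_per_cluster * (1 - icc) / (cost_per_subject * icc)),
    and the budget then determines the number of clusters."""
    n = math.sqrt(cost_per_cluster * (1.0 - icc) / (cost_per_subject * icc))
    k = budget / (cost_per_cluster + n * cost_per_subject)
    return n, k

# An incorrect initial ICC guess shifts the design noticeably (illustrative costs).
for guess in (0.05, 0.10, 0.20):
    n, k = optimal_allocation(budget=100_000, cost_per_cluster=500, cost_per_subject=20, icc=guess)
    print(f"ICC guess {guess:.2f}: ~{n:.0f} subjects/cluster, ~{k:.0f} clusters")
```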

  18. Timing of first sex before marriage and its correlates: evidence from India.

    PubMed

    Santhya, K G; Acharya, Rajib; Jejeebhoy, Shireen J; Ram, Usha

    2011-03-01

    While several studies have documented the extent of pre-marital sexual experience among young people in India, little work has been done to explore the factors that are correlated with the timing of pre-marital sexual initiation. This paper examines age at initiation of pre-marital sex, circumstances in which first sex was experienced, nature of first sexual experience and correlates of age at initiation of pre-marital sex. Life table estimates suggest that pre-marital sexual initiation occurred in adolescence for 1 in 20 young women and 1 in 10 young men. For the majority of these young people, their first sex was with an opposite-sex romantic partner. First sex, moreover, was unprotected for the majority and forced for a sizeable proportion of young women. A number of individual-, family-, peer- and community-level factors were correlated with age at first pre-marital sex. Moreover, considerable gender differences were apparent in the correlates of age at first pre-marital sex, with peer- and parent-level factors found more often to be significant for young women than men.

  19. Polarization Resistance Measurement in Tap Water: The Influence of Rust Electrochemical Activity

    NASA Astrophysics Data System (ADS)

    Vasyliev, Georgii

    2017-08-01

    The corrosion rate of mild steel in tap water over 4300 h was estimated by LPR and weight-loss methods coupled with OCP measurements. The LPR results were found to be overestimated compared to the weight-loss data within the initial 2000 h of exposure. The electrochemical activity of the rust separated from the metal surface was studied by cyclic voltammetry using a home-built powder graphite electrode. High redox currents corresponding to the initial 2000 h of exposure were detected. Rust composition was characterized with IR and XRD, and the highest amounts of electrochemically active β- and γ-FeOOH were again detected for the initial 2000 h. Current consumption by rust transformation processes during LPR measurement in the galvanostatic mode accounts for the overestimation of the corrosion rate. The time dependence of rust electrochemical activity correlates with the OCP variation over time. During the initial 2000 h, OCP values are shifted by 50 mV to the cathodic side. For the period of higher rust electrochemical activity, the use of a reduced B is suggested to increase the accuracy of the LPR technique in tap water.
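
    The role of the coefficient B mentioned above comes from the Stern-Geary relation that underlies LPR: i_corr = B/Rp, with B = βa·βc/(2.303·(βa + βc)). The sketch below evaluates that relation and shows how a reduced B directly lowers the inferred corrosion current; the Tafel slopes and polarization resistance are illustrative values, not measurements from this study.

```python
def stern_geary_B(beta_a_mV, beta_c_mV):
    """Stern-Geary coefficient B = ba*bc / (2.303*(ba + bc)), in mV."""
    return beta_a_mV * beta_c_mV / (2.303 * (beta_a_mV + beta_c_mV))

def corrosion_current_density(B_mV, Rp_ohm_cm2):
    """LPR estimate of corrosion current density, i_corr = B / Rp (A/cm^2)."""
    return (B_mV / 1000.0) / Rp_ohm_cm2

B = stern_geary_B(120.0, 120.0)          # illustrative Tafel slopes
for B_used in (B, 0.5 * B):              # a reduced B proportionally lowers the inferred rate
    i_corr = corrosion_current_density(B_used, Rp_ohm_cm2=5000.0)
    print(f"B = {B_used:.1f} mV -> i_corr = {i_corr:.2e} A/cm^2")
```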

  20. Principles of proportional recovery after stroke generalize to neglect and aphasia.

    PubMed

    Marchi, N A; Ptak, R; Di Pietro, M; Schnider, A; Guggisberg, A G

    2017-08-01

    Motor recovery after stroke falls into two different patterns. A majority of patients recover about 70% of initial impairment, whereas some patients with severe initial deficits show little or no improvement. Here, we investigated whether recovery from visuospatial neglect and aphasia is also separated into two different groups and whether similar proportions of recovery can be expected for the two cognitive functions. We assessed 35 patients with neglect and 14 patients with aphasia at 3 weeks and 3 months after stroke using standardized tests. Recovery patterns were classified with hierarchical clustering and the proportion of recovery was estimated from initial impairment using a linear regression analysis. Patients were reliably clustered into two different groups. For patients in the first cluster (n = 40), recovery followed a linear model where improvement was proportional to initial impairment and achieved 71% of maximal possible recovery for both cognitive deficits. Patients in the second cluster (n = 9) exhibited poor recovery (<25% of initial impairment). Our findings indicate that improvement from neglect or aphasia after stroke shows the same dichotomy and proportionality as observed in motor recovery. This is suggestive of common underlying principles of plasticity, which apply to motor and cognitive functions. © 2017 EAN.

  1. Double Scaling in the Relaxation Time in the β-Fermi-Pasta-Ulam-Tsingou Model

    NASA Astrophysics Data System (ADS)

    Lvov, Yuri V.; Onorato, Miguel

    2018-04-01

    We consider the original β-Fermi-Pasta-Ulam-Tsingou system; numerical simulations and theoretical arguments suggest that, for a finite number of masses, a statistical equilibrium state is reached independently of the initial energy of the system. Using ensemble averages over initial conditions characterized by different Fourier random phases, we numerically estimate the time scale of equipartition and we find that for very small nonlinearity it matches the prediction based on exact wave-wave resonant interaction theory. We derive a simple formula for the nonlinear frequency broadening and show that when the phenomenon of overlap of frequencies takes place, a different scaling for the thermalization time scale is observed. Our result supports the idea that the Chirikov overlap criterion identifies a transition region between two different relaxation time scalings.

  2. Maximum magnitude estimations of induced earthquakes at Paradox Valley, Colorado, from cumulative injection volume and geometry of seismicity clusters

    NASA Astrophysics Data System (ADS)

    Yeck, William L.; Block, Lisa V.; Wood, Christopher K.; King, Vanessa M.

    2015-01-01

    The Paradox Valley Unit (PVU), a salinity control project in southwest Colorado, disposes of brine in a single deep injection well. Since the initiation of injection at the PVU in 1991, earthquakes have been repeatedly induced. PVU closely monitors all seismicity in the Paradox Valley region with a dense surface seismic network. A key factor for understanding the seismic hazard from PVU injection is the maximum magnitude earthquake that can be induced. The estimate of maximum magnitude of induced earthquakes is difficult to constrain as, unlike naturally occurring earthquakes, the maximum magnitude of induced earthquakes changes over time and is affected by injection parameters. We investigate temporal variations in maximum magnitudes of induced earthquakes at the PVU using two methods. First, we consider the relationship between the total cumulative injected volume and the history of observed largest earthquakes at the PVU. Second, we explore the relationship between maximum magnitude and the geometry of individual seismicity clusters. Under the assumptions that: (i) elevated pore pressures must be distributed over an entire fault surface to initiate rupture and (ii) the location of induced events delineates volumes of sufficiently high pore-pressure to induce rupture, we calculate the largest allowable vertical penny-shaped faults, and investigate the potential earthquake magnitudes represented by their rupture. Results from both the injection volume and geometrical methods suggest that the PVU has the potential to induce events up to roughly MW 5 in the region directly surrounding the well; however, the largest observed earthquake to date has been about a magnitude unit smaller than this predicted maximum. In the seismicity cluster surrounding the injection well, the maximum potential earthquake size estimated by these methods and the observed maximum magnitudes have remained steady since the mid-2000s. These observations suggest that either these methods overpredict maximum magnitude for this area or that long time delays are required for sufficient pore-pressure diffusion to occur to cause rupture along an entire fault segment. We note that earthquake clusters can initiate and grow rapidly over the course of 1 or 2 yr, thus making it difficult to predict maximum earthquake magnitudes far into the future. The abrupt onset of seismicity with injection indicates that pore-pressure increases near the well have been sufficient to trigger earthquakes under pre-existing tectonic stresses. However, we do not observe remote triggering from large teleseismic earthquakes, which suggests that the stress perturbations generated from those events are too small to trigger rupture, even with the increased pore pressures.
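
    Two standard scalings capture the spirit of the volume-based and geometry-based approaches described above, though they are not the authors' exact formulation: a McGarr-type bound relating maximum seismic moment to cumulative injected volume, M0_max ≈ G·ΔV, and the moment of a circular (penny-shaped) rupture, M0 = (16/7)·Δσ·r³, converted to moment magnitude via Mw = (2/3)·(log10 M0 − 9.1) for M0 in N·m. The sketch below evaluates both with illustrative inputs rather than PVU data.

```python
import math

def mw_from_moment(M0_newton_m):
    """Moment magnitude from seismic moment (N*m): Mw = (2/3)*(log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(M0_newton_m) - 9.1)

def mw_from_injected_volume(delta_V_m3, shear_modulus_Pa=3e10):
    """McGarr-type upper bound: M0_max ~ G * cumulative injected volume."""
    return mw_from_moment(shear_modulus_Pa * delta_V_m3)

def mw_from_penny_rupture(radius_m, stress_drop_Pa=3e6):
    """Moment of a circular rupture: M0 = (16/7) * stress_drop * r**3."""
    return mw_from_moment(16.0 / 7.0 * stress_drop_Pa * radius_m ** 3)

# Illustrative inputs only (not PVU data): 8e6 m^3 injected; a 2 km radius cluster.
print(f"Volume-based bound:   Mw ~ {mw_from_injected_volume(8.0e6):.1f}")
print(f"Geometry-based bound: Mw ~ {mw_from_penny_rupture(2000.0):.1f}")
```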

  3. Exploration of Extended-Area Treatment Effects in FACE-2 Using Satellite Imagery.

    NASA Astrophysics Data System (ADS)

    Meití, José G.; Woodley, William L.; Flueck, John A.

    1984-01-01

    The second phase of the Florida Area Cumulus Experiment (FACE-2) has been completed and an exploratory analysis has been conducted to investigate the possibility that cloud seeding may have affected the rainfall outside the intended target. Rainfall was estimated over a 3.5×10⁵ km² area centered on the target using geosynchronous, infrared satellite imagery and the Griffith-Woodley rain estimation technique. This technique was derived in South Florida by calibrating infrared images using raingage and radar observations to produce an empirical, diagnostic (a posteriori), satellite rain estimation technique. The satellite rain estimates for the extended area were adjusted based on comparisons of raingage and satellite rainfall estimates for the entire FACE target (1.3×10⁴ km²). All daily rainfall estimates were composited in two ways: 1) in the original coordinate system and 2) in a relative coordinate system that rotates the research area as a function of wind direction. After compositing, seeding effects were sought as a function of space and time.The results show more rainfall (in the mean) on seed than no seed days both in and downwind of the target but lesser rainfall upwind. All differences (averaging 20% downwind and 10% upwind) are confined in space to within 200 km of the center of the FACE target and in time to the 8 h period after initial treatment. In addition, the positive correlation between untreated upwind rainfall and target rainfall is degraded on seed days, suggesting possible intermittent negative effects of seeding upwind. Although the development of these differences in space and time suggests that seeding may have been partially responsible for their generation, the results do not have strong inferential (P-value) support.

  4. Multiple populations within globular clusters in Early-type galaxies: Exploring their effect on stellar initial mass function estimates

    NASA Astrophysics Data System (ADS)

    Chantereau, W.; Usher, C.; Bastian, N.

    2018-05-01

    It is now well-established that most (if not all) ancient globular clusters host multiple populations, that are characterised by distinct chemical features such as helium abundance variations along with N-C and Na-O anti-correlations, at fixed [Fe/H]. These very distinct chemical features are similar to what is found in the centres of the massive early-type galaxies and may influence measurements of the global properties of the galaxies. Additionally, recent results have suggested that M/L variations found in the centres of massive early-type galaxies might be due to a bottom-heavy stellar initial mass function. We present an analysis of the effects of globular cluster-like multiple populations on the integrated properties of early-type galaxies. In particular, we focus on spectral features in the integrated optical spectrum and the global mass-to-light ratio that have been used to infer variations in the stellar initial mass function. To achieve this we develop appropriate stellar population synthesis models and take into account, for the first time, an initial-final mass relation which takes into consideration a varying He abundance. We conclude that while the multiple populations may be present in massive early-type galaxies, they are likely not responsible for the observed variations in the mass-to-light ratio and IMF sensitive line strengths. Finally, we estimate the fraction of stars with multiple populations chemistry that come from disrupted globular clusters within massive ellipticals and find that they may explain some of the observed chemical patterns in the centres of these galaxies.

  5. Leadership training design, delivery, and implementation: A meta-analysis.

    PubMed

    Lacerenza, Christina N; Reyes, Denise L; Marlow, Shannon L; Joseph, Dana L; Salas, Eduardo

    2017-12-01

    Recent estimates suggest that although a majority of funds in organizational training budgets tend to be allocated to leadership training (Ho, 2016; O'Leonard, 2014), only a small minority of organizations believe their leadership training programs are highly effective (Schwartz, Bersin, & Pelster, 2014), calling into question the effectiveness of current leadership development initiatives. To help address this issue, this meta-analysis estimates the extent to which leadership training is effective and identifies the conditions under which these programs are most effective. In doing so, we estimate the effectiveness of leadership training across four criteria (reactions, learning, transfer, and results; Kirkpatrick, 1959) using only employee data and we examine 15 moderators of training design and delivery to determine which elements are associated with the most effective leadership training interventions. Data from 335 independent samples suggest that leadership training is substantially more effective than previously thought, leading to improvements in reactions (δ = .63), learning (δ = .73), transfer (δ = .82), and results (δ = .72), although the strength of these effects differs based on various design, delivery, and implementation characteristics. Moderator analyses support the use of needs analysis, feedback, multiple delivery methods (especially practice), spaced training sessions, a location that is on-site, and face-to-face delivery that is not self-administered. Results also suggest that the content of training, attendance policy, and duration influence the effectiveness of the training program. Practical implications for training development and theoretical implications for leadership and training literatures are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Age, growth, and size of Lake Superior Pygmy Whitefish (Prosopium coulterii)

    USGS Publications Warehouse

    Stewart, Taylor; Derek Ogle,; Gorman, Owen T.; Vinson, Mark

    2016-01-01

    Pygmy Whitefish (Prosopium coulterii) are a small, glacial relict species with a disjunct distribution in North America and Siberia. In 2013 we collected Pygmy Whitefish at 28 stations from throughout Lake Superior. Total length was recorded for all fish; weight and sex were recorded, and scales and otoliths collected, from a subsample. We compared the precision of estimated ages between readers and between scales and otoliths, estimated von Bertalanffy growth parameters for male and female Pygmy Whitefish, and reported the first weight-length relationship for Pygmy Whitefish. Age estimates from scales and otoliths differed significantly, with otolith ages greater for most ages after age-3. Maximum otolith age was nine for females and seven for males, which is older than previously reported for Pygmy Whitefish from Lake Superior. Growth was initially fast but slowed considerably after age-3 for males and age-4 for females, falling to 3–4 mm per year at maximum estimated ages. Females were longer than males after age-3. Our results suggest the size, age, and growth of Pygmy Whitefish in Lake Superior have not changed appreciably since 1953.
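
    A minimal sketch of fitting the von Bertalanffy growth function mentioned above, L(t) = L∞·(1 − e^(−K·(t − t0))), to age-length pairs with nonlinear least squares is given below; the ages and lengths are hypothetical, not the Lake Superior data.

```python
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(age, L_inf, K, t0):
    """von Bertalanffy growth function: expected length at a given age."""
    return L_inf * (1.0 - np.exp(-K * (age - t0)))

# Hypothetical age (years) and total length (mm) pairs.
age = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
length = np.array([55, 78, 94, 104, 111, 116, 119, 121], dtype=float)

params, _ = curve_fit(von_bertalanffy, age, length, p0=(130.0, 0.3, 0.0))
L_inf, K, t0 = params
print(f"L_inf = {L_inf:.1f} mm, K = {K:.2f} /yr, t0 = {t0:.2f} yr")
```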

  7. Dynamic deformations and the M6.7, Northridge, California earthquake

    USGS Publications Warehouse

    Gomberg, J.

    1997-01-01

    A method of estimating the complete time-varying dynamic deformation field from commonly available three-component single station seismic data has been developed and applied to study the relationship between dynamic deformation and ground failures and structural damage using observations from the 1994 Northridge, California earthquake. Estimates from throughout the epicentral region indicate that the horizontal strains exceed the vertical ones by more than a factor of two. The largest strains (exceeding ~100 μstrain) correlate with regions of greatest ground failure. There is a poor correlation between structural damage and peak strain amplitudes. The smallest strains, ~35 μstrain, are estimated in regions of no damage or ground failure. Estimates in the two regions with most severe and well mapped permanent deformation, Potrero Canyon and the Granada-Mission Hills regions, exhibit the largest strains; peak horizontal strain estimates in these regions equal ~139 and ~229 μstrain respectively. Of note, the dynamic principal strain axes have strikes consistent with the permanent failure features suggesting that, while gravity, sub-surface materials, and hydrologic conditions undoubtedly played fundamental roles in determining where and what types of failures occurred, the dynamic deformation field may have been favorably sized and oriented to initiate failure processes. These results support other studies that conclude that the permanent deformation resulted from ground shaking, rather than from static strains associated with primary or secondary faulting. They also suggest that such an analysis, either using data or theoretical calculations, may enable observations of paleo-ground failure to be used as quantitative constraints on the size and geometry of previous earthquakes. © 1997 Elsevier Science Limited.

  8. Unmasking the component-general and component-specific aspects of primary and secondary memory in the immediate free recall task.

    PubMed

    Gibson, Bradley S; Gondoli, Dawn M

    2018-04-01

    The immediate free recall (IFR) task has been commonly used to estimate the capacities of the primary memory (PM) and secondary memory (SM) components of working memory (WM). Using this method, the correlation between estimates of the PM and SM components has hovered around zero, suggesting that PM and SM represent fully distinct and dissociable components of WM. However, this conclusion has conflicted with more recent studies that have observed moderately strong, positive correlations between PM and SM when separate attention and retrieval tasks are used to estimate these capacities, suggesting that PM and SM represent at least some related capacities. The present study attempted to resolve this empirical discrepancy by investigating the extent to which the relation between estimates of PM and SM might be suppressed by a third variable that operates during the recall portion of the IFR task. This third variable was termed "strength of recency" (SOR) in the present study as it reflected differences in the extent to which individuals used the same experimentally-induced recency recall initiation strategy. As predicted, the present findings showed that the positive correlation between estimates of PM and SM grew from small to medium when the indirect effect of SOR was controlled across two separate sets of studies. This finding is important because it provides stronger support for the distinction between "component-general" and "component-specific" aspects of PM and SM; furthermore, a proof is presented that demonstrates a limitation of using regression techniques to differentiate general and specific aspects of these components.

  9. The estimation of growth dynamics for Pomacea maculata from hatchling to adult

    USGS Publications Warehouse

    Sutton, Karyn L.; Zhao, Lihong; Carter, Jacoby

    2017-01-01

    Pomacea maculata is a relatively new invasive species to the Gulf Coast region and potentially threatens local agriculture (rice) and ecosystems (aquatic vegetation). The population dynamics of P. maculata have largely been unquantified, and therefore, scientists and field-workers are ill-equipped to accurately project population sizes and the resulting impact of this species. We studied the growth of P. maculata ranging in weight from 6 to 105 g, identifying the sex of the animals when possible. Our studied population had a 4:9 male:female sex ratio. We present the findings from initial analysis of the individual growth data of males and females, from which it was apparent that females were generally larger than males and that small snails grew faster than larger snails. Since efforts to characterize the male and female growth rates from individual data did not yield statistically supported estimates, we present the estimation of several parameterized growth rate functions within a population-level mathematical model. We provide a comparison of the results using these various growth functions and discuss which best characterizes the dynamics of our observed population. We conclude that both males and females exhibit biphasic growth rates, and thus, their growth is size-dependent. Further, our results suggest that there are notable differences between males and females that are important to take into consideration in order to accurately model this species' population dynamics. Lastly, we include preliminary analyses of ongoing experiments to provide initial estimates of growth in the earliest life stages (hatchling to ≈6 g).

  10. Reliability of TMS phosphene threshold estimation: Toward a standardized protocol.

    PubMed

    Mazzi, Chiara; Savazzi, Silvia; Abrahamyan, Arman; Ruzzoli, Manuela

    Phosphenes induced by transcranial magnetic stimulation (TMS) are a subjectively described visual phenomenon employed in basic and clinical research as an index of the excitability of retinotopically organized areas in the brain. Phosphene threshold estimation is a preliminary step in many TMS experiments in visual cognition for setting the appropriate level of TMS doses; however, the lack of a direct comparison of the available methods for phosphene threshold estimation leaves the reliability of those methods in setting TMS doses unresolved. The present work aims at filling this gap. We compared the most common methods for phosphene threshold (PT) calculation, namely the Method of Constant Stimuli (MOCS), the Modified Binary Search (MOBS) and the Rapid Estimation of Phosphene Threshold (REPT). In two experiments we tested the reliability of PT estimation under each of the three methods, considering the day of administration, participants' expertise in phosphene perception and the sensitivity of each method to the initial values used for the threshold calculation. We found that MOCS and REPT have comparable reliability when estimating phosphene thresholds, while MOBS estimations appear less stable. Based on our results, researchers and clinicians can estimate phosphene thresholds according to MOCS or REPT equally reliably, depending on their specific investigation goals. We suggest several important factors for consideration when calculating phosphene thresholds and describe strategies to adopt in experimental procedures. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Priors Engaged in Long-Latency Responses to Mechanical Perturbations Suggest a Rapid Update in State Estimation

    PubMed Central

    Crevecoeur, Frédéric; Scott, Stephen H.

    2013-01-01

    In every motor task, our brain must handle external forces acting on the body. For example, riding a bike on cobblestones or skating on irregular surface requires us to appropriately respond to external perturbations. In these situations, motor predictions cannot help anticipate the motion of the body induced by external factors, and direct use of delayed sensory feedback will tend to generate instability. Here, we show that to solve this problem the motor system uses a rapid sensory prediction to correct the estimated state of the limb. We used a postural task with mechanical perturbations to address whether sensory predictions were engaged in upper-limb corrective movements. Subjects altered their initial motor response in ∼60 ms, depending on the expected perturbation profile, suggesting the use of an internal model, or prior, in this corrective process. Further, we found trial-to-trial changes in corrective responses indicating a rapid update of these perturbation priors. We used a computational model based on Kalman filtering to show that the response modulation was compatible with a rapid correction of the estimated state engaged in the feedback response. Such a process may allow us to handle external disturbances encountered in virtually every physical activity, which is likely an important feature of skilled motor behaviour. PMID:23966846
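
    The Kalman-filter account referenced above combines a predicted (prior) limb state with delayed sensory feedback, weighting each by its uncertainty. The scalar update below is a minimal illustration of that computation; the variances and measurement value are illustrative, not parameters from the paper's model.

```python
def kalman_update(x_prior, P_prior, z, R):
    """Scalar Kalman update: blend the predicted state with a noisy measurement.

    x_prior, P_prior -- predicted state and its variance (the 'prior')
    z, R             -- delayed sensory measurement and its noise variance
    """
    K = P_prior / (P_prior + R)           # Kalman gain: relative trust in the measurement
    x_post = x_prior + K * (z - x_prior)  # corrected state estimate
    P_post = (1.0 - K) * P_prior          # reduced uncertainty after the update
    return x_post, P_post

# Predicted limb position 0.0 cm (confident prior), delayed proprioceptive reading 1.2 cm.
print(kalman_update(x_prior=0.0, P_prior=0.2, z=1.2, R=0.8))
```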

  12. Origin and evolution of the Nakhla meteorite inferred from the Sm-Nd and U-Pb systematics and REE, Ba, Sr, Rb and K abundances

    NASA Technical Reports Server (NTRS)

    Nakamura, N.; Unruh, D. M.; Tatsumoto, M.; Hutchison, R.

    1982-01-01

    Analyses of whole rock and mineral separates from the Nakhla meteorite are carried out by means of Sm-Nd and U-Th-Pb systematics and by determining their REE, Ba, Sr, Rb, and K concentrations. Results show that the Sm-Nd age of the meteorite is 1.26 ± 0.07 b.y., while the high initial epsilon(Nd) value of +16 suggests that Nakhla was derived from a light-REE-depleted, old planetary mantle source. A three-stage Sm-Nd evolution model is developed and used in combination with LIL element data and estimated partition coefficients in order to test partial melting and fractional crystallization models and to estimate LIL abundances in a possible Nakhla source. The calculations indicate that partial melting of the source followed by extensive fractional crystallization of the partial melt could account for the REE abundances in the Nakhla constituent minerals. It is concluded that the significantly younger age of Nakhla than the youngest lunar rock, the young differentiation age inferred from U-Th-Pb data, and the estimated LIL abundances suggest that this meteorite may have been derived from a relatively large, well-differentiated planetary body such as Mars.
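
    The initial epsilon(Nd) value quoted above is defined as the deviation, in parts per 10^4, of the sample's initial 143Nd/144Nd ratio from the chondritic (CHUR) reservoir evolved back to the crystallization age. The sketch below evaluates that definition using commonly adopted reference constants (present-day CHUR 143Nd/144Nd = 0.512638, 147Sm/144Nd = 0.1967, λ147Sm = 6.54×10⁻¹² yr⁻¹); the sample ratio in the example is illustrative, chosen only to give a value near +16, and is not the measured Nakhla ratio.

```python
import math

LAMBDA_SM147 = 6.54e-12        # 147Sm decay constant, 1/yr (commonly used value)
CHUR_ND_TODAY = 0.512638       # present-day CHUR 143Nd/144Nd reference value
CHUR_SM_ND = 0.1967            # present-day CHUR 147Sm/144Nd reference value

def chur_nd_at(age_yr):
    """CHUR 143Nd/144Nd back-calculated to `age_yr` before present."""
    return CHUR_ND_TODAY - CHUR_SM_ND * (math.exp(LAMBDA_SM147 * age_yr) - 1.0)

def epsilon_nd(initial_ratio_sample, age_yr):
    """epsilon-Nd(T) = (sample initial ratio / CHUR ratio at T - 1) * 1e4."""
    return (initial_ratio_sample / chur_nd_at(age_yr) - 1.0) * 1.0e4

# Illustrative initial ratio for a 1.26-Gyr-old rock (not the measured Nakhla value).
print(f"epsilon-Nd = {epsilon_nd(0.51183, 1.26e9):+.1f}")
```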

  13. Abiotic and biotic determinants of coarse woody productivity in temperate mixed forests.

    PubMed

    Yuan, Zuoqiang; Ali, Arshad; Wang, Shaopeng; Gazol, Antonio; Freckleton, Robert; Wang, Xugao; Lin, Fei; Ye, Ji; Zhou, Li; Hao, Zhanqing; Loreau, Michel

    2018-07-15

    Forests play an important role in regulating the global carbon cycle. Yet, how abiotic (i.e. soil nutrients) and biotic (i.e. tree diversity, stand structure and initial biomass) factors simultaneously contribute to aboveground biomass (coarse woody) productivity, and how the relative importance of these factors changes over succession, remain poorly studied. Coarse woody productivity (CWP) was estimated as the annual aboveground biomass gain of stems using 10-year census data in old growth and secondary forests (25-ha and 4.8-ha, respectively) in northeast China. A boosted regression tree (BRT) model was used to evaluate the relative contribution of multiple metrics of tree diversity (taxonomic, functional and phylogenetic diversity and trait composition as well as stand structure attributes), stand initial biomass and soil nutrients to productivity in the studied forests. Our results showed that community-weighted mean of leaf phosphorus content, initial stand biomass and soil nutrients were the three most important individual predictors of CWP in secondary forest. Instead, initial stand biomass, rather than diversity and functional trait composition (vegetation quality), was the most parsimonious predictor of CWP in old growth forest. By comparing the results from secondary and old growth forest, the summed relative contribution of trait composition and soil nutrients to productivity decreased as those of diversity indices and initial biomass increased, suggesting a stronger effect of diversity and vegetation quantity over time. Vegetation quantity, rather than diversity and soil nutrients, is the main driver of forest productivity in temperate mixed forest. Our results imply that the diversity effect on productivity in natural forests may not be as important as often suggested, at least not during the later stage of forest succession. This finding suggests that, as the importance of different drivers of productivity changes, environmentally driven filtering decreases and competitively driven niche differentiation increases with forest succession. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Estimating the costs of human space exploration

    NASA Technical Reports Server (NTRS)

    Mandell, Humboldt C., Jr.

    1994-01-01

    The plan for NASA's new exploration initiative has the following strategic themes: (1) incremental, logical evolutionary development; (2) economic viability; and (3) excellence in management. The cost estimation process is involved with all of these themes and they are completely dependent upon the engineering cost estimator for success. The purpose is to articulate the issues associated with beginning this major new government initiative, to show how NASA intends to resolve them, and finally to demonstrate the vital importance of a leadership role by the cost estimation community.

  15. Convergence of the Full Compressible Navier-Stokes-Maxwell System to the Incompressible Magnetohydrodynamic Equations in a Bounded Domain II: Global Existence Case

    NASA Astrophysics Data System (ADS)

    Fan, Jishan; Li, Fucai; Nakamura, Gen

    2018-06-01

    In this paper we continue our study of the establishment of uniform estimates of strong solutions, with respect to the Mach number and the dielectric constant, for the full compressible Navier-Stokes-Maxwell system in a bounded domain Ω ⊂ R³. In Fan et al. (Kinet Relat Models 9:443-453, 2016), the uniform estimates were obtained for large initial data on a short time interval. Here we show that the uniform estimates hold globally if the initial data are small. Based on these uniform estimates, we obtain the convergence of the full compressible Navier-Stokes-Maxwell system to the incompressible magnetohydrodynamic equations for well-prepared initial data.

  16. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    NASA Astrophysics Data System (ADS)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling curves before explaining differences in diversity.
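
    For concreteness, the classic forms of two of the incidence-based estimators named above are Chao2 = Sobs + Q1²/(2·Q2) and the first-order jackknife Jack1 = Sobs + Q1·(m − 1)/m, where Q1 and Q2 are the numbers of species found in exactly one and exactly two of the m sampling units. The sketch below computes both from a toy plot-by-species incidence matrix; real analyses typically use bias-corrected variants and dedicated software, and the matrix here is hypothetical.

```python
import numpy as np

def chao2_jack1(incidence):
    """Classic incidence-based richness estimators from a plots-by-species 0/1 matrix.

    Q1 = species found in exactly one plot ('uniques'), Q2 = in exactly two ('duplicates').
    """
    incidence = np.asarray(incidence, dtype=int)
    m = incidence.shape[0]                       # number of plots
    freq = incidence.sum(axis=0)                 # number of plots occupied by each species
    s_obs = int(np.sum(freq > 0))
    q1 = int(np.sum(freq == 1))
    q2 = int(np.sum(freq == 2))
    # Bias-corrected fallback when no duplicates are present.
    chao2 = s_obs + (q1 * q1) / (2.0 * q2) if q2 > 0 else s_obs + q1 * (q1 - 1) / 2.0
    jack1 = s_obs + q1 * (m - 1) / m
    return s_obs, chao2, jack1

# Toy incidence matrix: 4 plots x 6 species (1 = species present in that plot).
plots = [[1, 1, 0, 1, 0, 0],
         [1, 0, 1, 0, 0, 0],
         [1, 1, 0, 0, 1, 0],
         [1, 0, 0, 0, 0, 1]]
print(chao2_jack1(plots))
```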

  17. Curvature estimation for multilayer hinged structures with initial strains

    NASA Astrophysics Data System (ADS)

    Nikishkov, G. P.

    2003-10-01

    A closed-form estimate of curvature for hinged multilayer structures with initial strains is developed. The finite element method is used for modeling of self-positioning microstructures. The geometrically nonlinear problem with large rotations and large displacements is solved using a step procedure with node coordinate updates. Finite element results for the curvature of a hinged micromirror with variable width are compared to the closed-form estimates.
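
    A classical closed-form point of comparison for bilayer curvature under an initial mismatch strain (not necessarily the formula developed in this paper) is Timoshenko's bimetal result, κ = 6Δε(1 + m)² / [h(3(1 + m)² + (1 + mn)(m² + 1/(mn)))], with m = t1/t2, n = E1/E2 and h = t1 + t2, which reduces to κ = 3Δε/(2h) for identical layers. The sketch below evaluates this expression with illustrative layer properties.

```python
def timoshenko_curvature(delta_eps, t1, t2, E1, E2):
    """Timoshenko bi-layer curvature for a uniform mismatch strain delta_eps
    between two bonded layers (thicknesses t1, t2; moduli E1, E2)."""
    m = t1 / t2
    n = E1 / E2
    h = t1 + t2
    num = 6.0 * delta_eps * (1.0 + m) ** 2
    den = h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    return num / den            # curvature, in inverse thickness units

# Illustrative 0.5/0.5-um layers with a 0.2% mismatch strain and a 2:1 modulus ratio.
kappa = timoshenko_curvature(delta_eps=0.002, t1=0.5e-6, t2=0.5e-6, E1=160e9, E2=80e9)
print(f"curvature = {kappa:.3g} 1/m, radius = {1.0/kappa:.3g} m")
```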

  18. Evaluation of Event Physical Activity Engagement at an Open Streets Initiative Within a Texas-Mexico Border Town.

    PubMed

    Salazar-Collier, Cindy Lynn; Reininger, Belinda; Gowen, Rose; Rodriguez, Arturo; Wilkinson, Anna

    2018-05-09

    Open streets initiatives provide an opportunity to engage in physical activity (PA) freely by temporarily closing streets to motorized traffic. Route counting estimation and event intercept surveys (n = 682) were conducted across 4 CycloBia events in Brownsville, TX, in 2015 to determine sociodemographics, PA engagement at the event, event awareness, and past CycloBia attendance. Cycling was the most commonly observed activity along the route (73.6%) followed by walking (22.9%). Attendees self-reported a median of 120 minutes in PA with 17.3% of attendees meeting recommended weekly PA guidelines at the event. Significant predictors of meeting PA guidelines via event PA engagement were past event attendance, sex, age, and Hispanic ethnicity. Findings suggest that CycloBia reached a large, low-income, predominantly Hispanic population and may be effective in promoting PA. Results help understand the effect of an open streets initiative on attendees living in a midsize, border city.

  19. Sediment particle size and initial radiocesium accumulation in ponds following the Fukushima DNPP accident.

    PubMed

    Yoshimura, Kazuya; Onda, Yuichi; Fukushima, Takehiko

    2014-03-31

    This study used particle size analysis to investigate the initial accumulation and trap efficiency of radiocesium ((137)Cs) in four irrigation ponds, ~4-5 months after the Fukushima Dai-ichi nuclear power plant (DNPP) accident. Trap efficiency, represented by the ratio of the inventory of (137)Cs in pond sediment to the inventory of radiocesium in soil surrounding the pond (i.e., total (137)Cs inventory), was less than 100% for all but one pond. Trap efficiency decreased as sediment particle size increased, indicating that sediments with a smaller particle size accumulate more (137)Cs. In ponds showing low trap efficiency, fine sediment containing high concentrations of (137)Cs appeared to be removed from the system by hydraulic flushing, leaving behind mostly coarse sediment. The results of this study suggest that sediment particle size can be used to estimate the initial accumulation and trap efficiency of (137)Cs in pond sediment, as well as the amount lost through hydraulic flushing.

  20. Sediment particle size and initial radiocesium accumulation in ponds following the Fukushima DNPP accident

    PubMed Central

    Yoshimura, Kazuya; Onda, Yuichi; Fukushima, Takehiko

    2014-01-01

    This study used particle size analysis to investigate the initial accumulation and trap efficiency of radiocesium (137Cs) in four irrigation ponds, ~4–5 months after the Fukushima Dai-ichi nuclear power plant (DNPP) accident. Trap efficiency, represented by the ratio of the 137Cs inventory in pond sediment to the inventory of radiocesium in the soil surrounding the pond (i.e., the total 137Cs inventory), was less than 100% for all but one pond. Trap efficiency decreased as sediment particle size increased, indicating that sediments with a smaller particle size accumulate more 137Cs. In ponds showing low trap efficiency, fine sediment containing high concentrations of 137Cs appeared to be removed from the system by hydraulic flushing, leaving behind mostly coarse sediment. The results of this study suggest that sediment particle size can be used to estimate the initial accumulation and trap efficiency of 137Cs in pond sediment, as well as the amount lost through hydraulic flushing. PMID:24682011

  1. Can a man-made universe be achieved by quantum tunneling without an initial singularity?

    NASA Technical Reports Server (NTRS)

    Guth, Alan H.; Haller, K. (Editor); Caldi, D. B. (Editor); Islam, M. M. (Editor); Mallett, R. L. (Editor); Mannheim, P. D. (Editor); Swanson, M. S. (Editor)

    1991-01-01

    Essentially all modern particle theories suggest the possible existence of a false vacuum state: a metastable state with an energy density that cannot be lowered except by means of a very slow phase transition. Inflationary cosmology makes use of such a state to drive the expansion of the big bang, allowing the entire observed universe to evolve from a very small initial mass. A sphere of false vacuum in the present universe, if larger than a certain critical mass, could inflate to form a new universe which would rapidly detach from its parent. A false vacuum bubble of this size, however, cannot be produced classically unless an initial singularity is present from the outset. The possibility is explored that a bubble of subcritical size, which classically would evolve to a maximum size and collapse, might instead tunnel through a barrier to produce a new universe. The tunneling rate is estimated using semiclassical quantum gravity, and some interesting ambiguities in the formulas are discovered.

  2. Journal: A Review of Some Tracer-Test Design Equations for ...

    EPA Pesticide Factsheets

    Necessary tracer mass, initial sample-collection time, and subsequent sample-collection frequency are the three most difficult aspects to estimate for a proposed tracer test prior to conducting the tracer test. To facilitate tracer-mass estimation, 33 mass-estimation equations are reviewed here, 32 of which were evaluated using previously published tracer-test design examination parameters. Comparison of the results produced a wide range of estimated tracer mass, but no means is available by which one equation may be reasonably selected over the others. Each equation produces a simple approximation for tracer mass. Most of the equations are based primarily on estimates or measurements of discharge, transport distance, and suspected transport times. Although the basic field parameters commonly employed are appropriate for estimating tracer mass, the 33 equations are problematic in that they were all probably based on the original developers' experience in a particular field area and not necessarily on measured hydraulic parameters or solute-transport theory. Suggested sampling frequencies are typically based primarily on probable transport distance, but with little regard to expected travel times. This too is problematic in that it tends to result in false negatives or data aliasing. Simulations from the recently developed efficient hydrologic tracer-test design methodology (EHTD) were compared with those obtained from 32 of the 33 published tracer-

  3. Correlation between Charge Contrast Imaging and the Distribution of Some Trace Level Impurities in Gibbsite

    NASA Astrophysics Data System (ADS)

    Baroni, Travis C.; Griffin, Brendan J.; Browne, James R.; Lincoln, Frank J.

    2000-01-01

    Charge contrast images (CCI) of synthetic gibbsite obtained on an environmental scanning electron microscope give information on the crystallization process. Furthermore, X-ray mapping of the same grains shows that impurities are localized during the initial stages of growth and that the resulting composition images have features similar to those observed in CCI. This suggests a possible correlation between impurity distributions and the emission detected during CCI. X-ray line profiles, simulating the spatial distribution of impurities derived from the Monte Carlo program CASINO, have been compared with experimental line profiles and give an estimate of the localization. The model suggests that a main impurity, Ca, is depleted from the solution within approximately 3-4 μm of growth.

  4. Effect of forward speed on the roll damping of three small fishing vessels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haddara, M.R.; Zhang, S.

    1994-05-01

    An extensive experimental program has been carried out to estimate roll damping parameters for three models of fishing vessels having different hull shapes and moving with forward speed. Roll damping parameters are determined using a novel method. This method combines the energy method and the modulating function method. The effect of forward speed, initial heel angle and the natural frequency on damping is discussed. A modification of Ikeda's formula for lift damping prediction is suggested. The modified formula produces results which are in good agreement with the experiments.

  5. Intraindividual variability of boldness is repeatable across contexts in a wild lizard.

    PubMed

    Highcock, Laura; Carter, Alecia J

    2014-01-01

    Animals do not behave in exactly the same way when repeatedly tested in the same context or situation, even once systematic variation, such as habituation, has been controlled for. This unpredictability is called intraindividual variability (IIV) and has been little studied in animals. Here we investigated how IIV in boldness (estimated by flight initiation distances) changed across two seasons--the dry, non-breeding season and the wet, breeding season--in a wild population of the Namibian rock agama, Agama planiceps. We found significant differences in IIV both between individuals and seasons, and IIV was higher in the wet season, suggesting plasticity in IIV. Further, IIV was highly repeatable (r = 0.61) between seasons and we found strong negative correlations between consistent individual differences in flight initiation distances, i.e. their boldness, and individuals' IIVs. We suggest that to understand personality in animals, researchers should generate a personality 'profile' that includes not only the relative level of a trait (i.e. its personality), but also its plasticity and variability under natural conditions.

  6. Magma ocean formation due to giant impacts

    NASA Technical Reports Server (NTRS)

    Tonks, W. B.; Melosh, H. J.

    1993-01-01

    The thermal effects of giant impacts are studied by estimating the melt volume generated by the initial shock wave and corresponding magma ocean depths. Additionally, the effects of the planet's initial temperature on the generated melt volume are examined. The shock pressure required to completely melt the material is determined using the Hugoniot curve plotted in pressure-entropy space. Once the melting pressure is known, an impact melting model is used to estimate the radial distance melting occurred from the impact site. The melt region's geometry then determines the associated melt volume. The model is also used to estimate the partial melt volume. Magma ocean depths resulting from both excavated and retained melt are calculated, and the melt fraction not excavated during the formation of the crater is estimated. The fraction of a planet melted by the initial shock wave is also estimated using the model.

  7. Propagation of uncertainty in nasal spray in vitro performance models using Monte Carlo simulation: Part II. Error propagation during product performance modeling.

    PubMed

    Guo, Changning; Doub, William H; Kauffman, John F

    2010-08-01

    Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard that initial assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
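
    A minimal, self-contained sketch of this kind of error propagation (not the nasal-spray DOE model itself): both the factor settings and the responses of a hypothetical two-factor linear model are perturbed, the model is refit in each Monte Carlo draw, and the resulting coefficient spread is compared with the usual least-squares standard errors.

```python
# Minimal sketch (not the authors' model): propagate noise in both the input
# variables and the measured responses into regression (DOE) coefficients by
# Monte Carlo, and compare with the analytical least-squares standard errors.
import numpy as np

rng = np.random.default_rng(0)
n, beta_true = 30, np.array([1.0, 0.8, -0.5])    # hypothetical intercept + 2 factors
x_nom = rng.uniform(-1, 1, size=(n, 2))          # nominal factor settings

def fit_ols(x, y):
    X = np.column_stack([np.ones(len(x)), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# One "experiment" with the usual analytical standard errors
X = np.column_stack([np.ones(n), x_nom])
y = X @ beta_true + rng.normal(0, 0.1, n)
resid = y - X @ fit_ols(x_nom, y)
sigma2 = resid @ resid / (n - X.shape[1])
se_analytic = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

# Monte Carlo: jitter inputs and responses, refit each time
coefs = []
for _ in range(2000):
    x_err = x_nom + rng.normal(0, 0.02, x_nom.shape)              # input-setting error
    y_err = np.column_stack([np.ones(n), x_err]) @ beta_true + rng.normal(0, 0.1, n)
    coefs.append(fit_ols(x_nom, y_err))                           # analyst only sees nominal x
se_mc = np.std(coefs, axis=0)

print("analytic SE:", se_analytic, " Monte Carlo SE:", se_mc)
```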

  8. Computer model predictions of the local effects of large, solid-fuel rocket motors on stratospheric ozone. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zittel, P.F.

    1994-09-10

    The solid-fuel rocket motors of large space launch vehicles release gases and particles that may significantly affect stratospheric ozone densities along the vehicle's path. In this study, standard rocket nozzle and flowfield computer codes have been used to characterize the exhaust gases and particles through the afterburning region of the solid-fuel motors of the Titan IV launch vehicle. The models predict that a large fraction of the HCl gas exhausted by the motors is converted to Cl and Cl2 in the plume afterburning region. Estimates of the subsequent chemistry suggest that on expansion into the ambient daytime stratosphere, the highly reactive chlorine may significantly deplete ozone in a cylinder around the vehicle track that ranges from 1 to 5 km in diameter over the altitude range of 15 to 40 km. The initial ozone depletion is estimated to occur on a time scale of less than 1 hour. After the initial effects, the dominant chemistry of the problem changes, and new models are needed to follow the further expansion, or closure, of the ozone hole on a longer time scale.

  9. The influence of school demographic factors and perceived student discrimination on delinquency trajectory in adolescence.

    PubMed

    Le, Thao N; Stockdale, Gary

    2011-10-01

    The purpose of this study was to examine the effects of school demographic factors and youth's perception of discrimination on delinquency in adolescence and into young adulthood for African American, Asian, Hispanic, and white racial/ethnic groups. Using data from the National Longitudinal Study of Adolescent Health (Add Health), models testing the effect of school-related variables on delinquency trajectories were evaluated for the four racial/ethnic groups using Mplus 5.21 statistical software. Results revealed that greater student ethnic diversity and perceived discrimination, but not teacher ethnic diversity, resulted in higher initial delinquency estimates at 13 years of age for all groups. However, except for African Americans, having a greater proportion of female teachers in the school decreased initial delinquency estimates. For African Americans and whites, a larger school size also increased the initial estimates. Additionally, lower socioeconomic status increased the initial estimates for whites, and being born in the United States increased the initial estimates for Asians and Hispanics. Finally, regardless of the initial delinquency estimate at age 13 and the effect of the school variables, all groups eventually converged to extremely low delinquency in young adulthood, at the age of 21 years. Educators and public policy makers seeking to prevent and reduce delinquency can modify individual risks by modifying characteristics of the school environment. Policies that promote respect for diversity and intolerance toward discrimination, as well as training to help teachers recognize the precursors and signs of aggression and/or violence, may also facilitate a positive school environment, resulting in lower delinquency. Copyright © 2011 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  10. Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior

    NASA Technical Reports Server (NTRS)

    Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.

    2017-01-01

    A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
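
    The sketch below illustrates only the core idea of treating a time-varying parameter as a random-walk state: it tracks a single drifting gain with a scalar Kalman filter rather than the full dual-filter structure (state filter plus parameter filter) used in the paper, and all signals are synthetic.

```python
# Much-simplified illustration (not the paper's dual EKF): a time-varying gain
# is modeled as a random walk and tracked from noisy input/output data.
import numpy as np

rng = np.random.default_rng(1)
T = 500
u = rng.normal(0, 1, T)                                        # known input signal
k_true = 1.0 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, T))      # hypothetical drifting gain
y = k_true * u + rng.normal(0, 0.2, T)                         # noisy measurements

q, r = 1e-4, 0.2 ** 2          # random-walk (process) and measurement noise variances
k_hat, p = 0.0, 1.0            # initial parameter estimate and its covariance
est = []
for t in range(T):
    p += q                     # predict: parameter follows a random walk
    h = u[t]                   # measurement sensitivity dy/dk
    s = h * p * h + r          # innovation variance
    gain = p * h / s
    k_hat += gain * (y[t] - h * k_hat)   # update with the innovation
    p *= (1 - gain * h)
    est.append(k_hat)

print("final gain estimate:", round(est[-1], 3), "true:", round(k_true[-1], 3))
```

    As in the paper, poor initial choices for the parameter estimate or its covariance would slow convergence; the tuning knobs here are the process variance q and measurement variance r.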

  11. The information gained from witnesses' responses to an initial "blank" lineup.

    PubMed

    Palmer, Matthew A; Brewer, Neil; Weber, Nathan

    2012-10-01

    Wells ("The psychology of lineup identifications," Journal of Applied Social Psychology, 1984, 14, 89-103) proposed that a blank lineup (an initial lineup of known-to-be-innocent foils) can be used to screen eyewitnesses; witnesses who chose from a blank lineup (initial choosers) were more likely to make an error on a second lineup that contained a suspect than were witnesses who rejected a blank lineup (initial nonchoosers). Recent technological advances (e.g., computer-administered lineups) may overcome many of the practical difficulties cited as a barrier to the use of blank lineups. Our research extended knowledge about the blank lineup procedure by investigating the underlying causes of the difference in identification performance between initial choosers and initial nonchoosers. Studies 1a and 1b (total, N = 303) demonstrated that initial choosers were more likely to reject a second lineup than initial nonchoosers and witnesses who did not view a blank lineup, implying that cognitive biases (e.g., confirmation bias and commitment effects) influenced initial choosers' identification decisions. In Study 2 (N = 200), responses on a forced-choice identification test provided evidence that initial choosers have, on average, poorer memories for the culprit than do initial nonchoosers. We also investigated the usefulness of blank lineups for interpreting identification evidence. Diagnosticity ratios suggested that suspect identifications made by initial nonchoosers (cf. initial choosers) should have a greater impact on estimates of the likely guilt of the suspect. Furthermore, for initial nonchoosers, higher confidence in blank lineup rejections was associated with higher diagnosticity for subsequent suspect identifications. These results have implications for policy to guide the collection and interpretation of identification evidence. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  12. Measurement and Reliability of Response Inhibition

    PubMed Central

    Congdon, Eliza; Mumford, Jeanette A.; Cohen, Jessica R.; Galvan, Adriana; Canli, Turhan; Poldrack, Russell A.

    2012-01-01

    Response inhibition plays a critical role in adaptive functioning and can be assessed with the Stop-signal task, which requires participants to suppress prepotent motor responses. Evidence suggests that this ability to inhibit a prepotent motor response (reflected as Stop-signal reaction time (SSRT)) is a quantitative and heritable measure of interindividual variation in brain function. Although attention has been given to the optimal method of SSRT estimation, and initial evidence exists in support of its reliability, there is still variability in how Stop-signal task data are treated across samples. In order to examine this issue, we pooled data across three separate studies and examined the influence of multiple SSRT calculation methods and outlier calling on reliability (using Intra-class correlation). Our results suggest that an approach which uses the average of all available sessions, all trials of each session, and excludes outliers based on predetermined lenient criteria yields reliable SSRT estimates, while not excluding too many participants. Our findings further support the reliability of SSRT, which is commonly used as an index of inhibitory control, and provide support for its continued use as a neurocognitive phenotype. PMID:22363308

  13. Continued observations of the H Ly alpha emission from Uranus

    NASA Technical Reports Server (NTRS)

    Clarke, J.; Durrance, S.; Moos, W.; Murthy, J.; Atreya, S.; Barnes, A.; Mihalov, J.; Belcher, J.; Festou, M.; Imhoff, C.

    1986-01-01

    Observations of Uranus obtained over four years with the IUE Observatory support the initial identification of a bright H Ly alpha flux which varies independently of the solar H Ly alpha flux, implying a largely self-excited emission. An average brightness of 1400 Rayleighs is derived, and limits for the possible contribution by reflected solar H Ly alpha emission, estimated to be about 200 Rayleighs, suggest that the remaining self-excited emission is produced by an aurora. Based on comparison with solar wind measurements obtained in the vicinity of Uranus by Voyager 2 and Pioneer 11, no evidence for correlation between the solar wind density and the H Ly alpha brightness is found. The upper limit to H2 emission gives a lower limit to the ratio of H Ly alpha/H2 emissions of about 2.4, suggesting that the precipitating particles may be significantly less energetic on Uranus than those responsible for the aurora on Jupiter. The average power in precipitating particles is estimated to be of the order of 10^12 W.

  14. Application of Zernike polynomials towards accelerated adaptive focusing of transcranial high intensity focused ultrasound.

    PubMed

    Kaye, Elena A; Hertzberg, Yoni; Marx, Michael; Werner, Beat; Navon, Gil; Levoy, Marc; Pauly, Kim Butts

    2012-10-01

    To study the phase aberrations produced by human skulls during transcranial magnetic resonance imaging guided focused ultrasound surgery (MRgFUS), to demonstrate the potential of Zernike polynomials (ZPs) to accelerate the adaptive focusing process, and to investigate the benefits of using phase corrections obtained in previous studies to provide the initial guess for correction of a new data set. The five phase aberration data sets analyzed here were calculated based on preoperative computerized tomography (CT) images of the head obtained during previous transcranial MRgFUS treatments performed using a clinical prototype hemispherical transducer. The noniterative adaptive focusing algorithm [Larrat et al., "MR-guided adaptive focusing of ultrasound," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57(8), 1734-1747 (2010)] was modified by replacing Hadamard encoding with Zernike encoding. The algorithm was tested in simulations to correct the patients' phase aberrations. MR acoustic radiation force imaging (MR-ARFI) was used to visualize the effect of the phase aberration correction on the focusing of a hemispherical transducer. In addition, two methods for constructing an initial phase correction estimate based on previous patients' data were investigated. The benefits of the initial estimates in the Zernike-based algorithm were analyzed by measuring their effect on the ultrasound intensity at the focus and on the number of ZP modes necessary to achieve 90% of the intensity of the nonaberrated case. Covariance of the pairs of the phase aberration data sets showed high correlation between the aberration data of several patients and suggested that subgroups can be based on the level of correlation. Simulation of the Zernike-based algorithm demonstrated the overall greater correction effectiveness of the low modes of ZPs. The focal intensity achieves 90% of nonaberrated intensity using fewer than 170 modes of ZPs. The initial estimates based on the average of the phase aberration data from the individual subgroups of subjects were shown to increase the intensity at the focal spot for the five subjects. The application of ZPs to phase aberration correction was shown to be beneficial for adaptive focusing of transcranial ultrasound. The skull-based phase aberrations were found to be well approximated by a number of ZP modes representing only a fraction of the number of elements in the hemispherical transducer. Implementing the initial phase aberration estimate together with the Zernike-based algorithm can be used to improve the robustness and can potentially greatly increase the viability of MR-ARFI-based focusing for clinical transcranial MRgFUS therapy.
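
    A minimal sketch of the underlying representation (not the clinical code): a phase map on a circular aperture is approximated by a least-squares fit of a handful of low-order Zernike modes. The mode list, normalization and test phase are illustrative assumptions.

```python
# Illustrative sketch only: least-squares fit of a phase map with a few
# low-order (unnormalized) Zernike modes over a circular aperture.
import numpy as np

n = 128
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
mask = rho <= 1.0

modes = [np.ones_like(rho),                                   # piston
         rho * np.cos(theta), rho * np.sin(theta),            # tip / tilt
         2 * rho**2 - 1,                                      # defocus
         rho**2 * np.cos(2 * theta), rho**2 * np.sin(2 * theta),              # astigmatism
         (3 * rho**3 - 2 * rho) * np.cos(theta),                              # coma x
         (3 * rho**3 - 2 * rho) * np.sin(theta)]                              # coma y

# Hypothetical "skull" phase: mostly defocus plus some tilt and noise
rng = np.random.default_rng(2)
phase = 0.7 * (2 * rho**2 - 1) + 0.3 * rho * np.cos(theta) + 0.05 * rng.normal(size=rho.shape)

A = np.column_stack([m[mask] for m in modes])   # design matrix over the aperture
coef, *_ = np.linalg.lstsq(A, phase[mask], rcond=None)
recon = A @ coef
rms_err = np.sqrt(np.mean((phase[mask] - recon) ** 2))
print(coef.round(3), "residual RMS:", round(float(rms_err), 3))
```

    The point, as in the abstract, is that a small number of low-order modes can capture most of a smooth aberration, which is what makes Zernike encoding attractive compared with element-by-element encoding.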

  15. Application of Zernike polynomials towards accelerated adaptive focusing of transcranial high intensity focused ultrasound

    PubMed Central

    Kaye, Elena A.; Hertzberg, Yoni; Marx, Michael; Werner, Beat; Navon, Gil; Levoy, Marc; Pauly, Kim Butts

    2012-01-01

    Purpose: To study the phase aberrations produced by human skulls during transcranial magnetic resonance imaging guided focused ultrasound surgery (MRgFUS), to demonstrate the potential of Zernike polynomials (ZPs) to accelerate the adaptive focusing process, and to investigate the benefits of using phase corrections obtained in previous studies to provide the initial guess for correction of a new data set. Methods: The five phase aberration data sets analyzed here were calculated based on preoperative computerized tomography (CT) images of the head obtained during previous transcranial MRgFUS treatments performed using a clinical prototype hemispherical transducer. The noniterative adaptive focusing algorithm [Larrat et al., “MR-guided adaptive focusing of ultrasound,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57(8), 1734–1747 (2010); doi:10.1109/TUFFC.2010.1612] was modified by replacing Hadamard encoding with Zernike encoding. The algorithm was tested in simulations to correct the patients’ phase aberrations. MR acoustic radiation force imaging (MR-ARFI) was used to visualize the effect of the phase aberration correction on the focusing of a hemispherical transducer. In addition, two methods for constructing an initial phase correction estimate based on previous patients' data were investigated. The benefits of the initial estimates in the Zernike-based algorithm were analyzed by measuring their effect on the ultrasound intensity at the focus and on the number of ZP modes necessary to achieve 90% of the intensity of the nonaberrated case. Results: Covariance of the pairs of the phase aberration data sets showed high correlation between the aberration data of several patients and suggested that subgroups can be based on the level of correlation. Simulation of the Zernike-based algorithm demonstrated the overall greater correction effectiveness of the low modes of ZPs. The focal intensity achieves 90% of nonaberrated intensity using fewer than 170 modes of ZPs. The initial estimates based on the average of the phase aberration data from the individual subgroups of subjects were shown to increase the intensity at the focal spot for the five subjects. Conclusions: The application of ZPs to phase aberration correction was shown to be beneficial for adaptive focusing of transcranial ultrasound. The skull-based phase aberrations were found to be well approximated by a number of ZP modes representing only a fraction of the number of elements in the hemispherical transducer. Implementing the initial phase aberration estimate together with the Zernike-based algorithm can be used to improve the robustness and can potentially greatly increase the viability of MR-ARFI-based focusing for clinical transcranial MRgFUS therapy. PMID:23039661

  16. An Integrated Approach to Indoor and Outdoor Localization

    DTIC Science & Technology

    2017-04-17

    localization estimate, followed by particle filter based tracking. Initial localization is performed using WiFi and image observations. For tracking we...source. A two-step process is proposed that performs an initial localization estimate, followed by particle filter based tracking. Initial...mapped, it is possible to use them for localization [20, 21, 22]. Haverinen et al. show that these fields could be used with a particle filter to

  17. Integration and Analysis of Neighbor Discovery and Link Quality Estimation in Wireless Sensor Networks

    PubMed Central

    Radi, Marjan; Dezfouli, Behnam; Abu Bakar, Kamalrulnizam; Abd Razak, Shukor

    2014-01-01

    Network connectivity and link quality information are the fundamental requirements of wireless sensor network protocols to perform their desired functionality. Most of the existing discovery protocols have only focused on the neighbor discovery problem, while only a few of them provide an integrated neighbor search and link estimation. As these protocols require a careful parameter adjustment before network deployment, they cannot provide scalable and accurate network initialization in large-scale dense wireless sensor networks with random topology. Furthermore, the performance of these protocols has not been fully evaluated yet. In this paper, we perform a comprehensive simulation study on the efficiency of employing adaptive protocols compared to the existing nonadaptive protocols for initializing sensor networks with random topology. In this regard, we propose adaptive network initialization protocols which integrate the initial neighbor discovery with the link quality estimation process to initialize large-scale dense wireless sensor networks without requiring any parameter adjustment before network deployment. To the best of our knowledge, this work is the first attempt to provide a detailed simulation study on the performance of integrated neighbor discovery and link quality estimation protocols for initializing sensor networks. This study can help system designers to determine the most appropriate approach for different applications. PMID:24678277
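
    The protocols evaluated in the paper are not reproduced here; as a generic illustration of coupling neighbor discovery with link quality estimation, the sketch below discovers neighbors from periodic beacons and maintains a per-neighbor quality value as an exponentially weighted moving average of the inferred beacon reception ratio. The smoothing factor and message format are assumptions.

```python
# Illustrative sketch only (not the paper's protocols): neighbor discovery plus
# an EWMA link-quality estimate inferred from beacon sequence-number gaps.
ALPHA = 0.2          # EWMA smoothing factor (assumed)

class NeighborTable:
    def __init__(self):
        self.table = {}  # node_id -> {"last_seq": int, "lq": float}

    def on_beacon(self, node_id, seq):
        entry = self.table.get(node_id)
        if entry is None:                        # neighbor discovered
            self.table[node_id] = {"last_seq": seq, "lq": 1.0}
            return
        missed = max(seq - entry["last_seq"] - 1, 0)
        for _ in range(missed):                  # account for inferred losses
            entry["lq"] = (1 - ALPHA) * entry["lq"]
        entry["lq"] = (1 - ALPHA) * entry["lq"] + ALPHA   # one received beacon
        entry["last_seq"] = seq

nt = NeighborTable()
for nid, seq in [("B", 1), ("B", 2), ("B", 5), ("C", 10), ("B", 6)]:
    nt.on_beacon(nid, seq)
print(nt.table)
```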

  18. Integration and initial operation of the multi-component large ring laser structure ROMY

    NASA Astrophysics Data System (ADS)

    Schreiber, Karl Ulrich; Igel, Heiner; Wassermann, Joachim; Gebauer, André; Simonelli, Andrea; Bernauer, Felix; Donner, Stefanie; Hadziioannou, Celine; Egdorf, Sven; Wells, Jon-Paul

    2017-04-01

    Rotation sensing for the geosciences requires a high sensor resolution of the order of 10 picoradians per second or even less. An optical Sagnac interferometer offers this sensitivity, provided that the scale factor can be made very large. We have designed and built a multi-component ring laser system, consisting of 4 individual large ring lasers, each covering an area of more than 62 square meters. The rings are orientated in the shape of a tetrahedron, so that all 3 spatial directions are covered, allowing also for some redundancy. We report on the initial operation of the free running gyroscopes in their underground facility in order to establish a performance estimate for the ROMY ring laser structure. Preliminary results suggest that the quantum noise limit is lower than that of the G ring laser.
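
    The scale factor referred to above is set by the standard Sagnac relation for an active ring laser: the beat frequency between the counter-propagating beams is

```latex
\Delta f \;=\; \frac{4A}{\lambda P}\,\hat{\mathbf{n}}\cdot\boldsymbol{\Omega},
```

    where A is the area enclosed by the beam path, P its perimeter, λ the laser wavelength, n̂ the unit normal of the ring plane and Ω the rotation vector. Enlarging A/P is what motivates rings of more than 62 square meters; the relation is quoted here for orientation and is not taken from the abstract itself.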

  19. Racial/Ethnic Differences in Cigarette Smoking Initiation and Progression to Daily Smoking: A Multilevel Analysis

    PubMed Central

    Kandel, Denise B.; Kiros, Gebre-Egziabher; Schaffran, Christine; Hu, Mei-Chen

    2004-01-01

    Objectives. We sought to identify individual and contextual predictors of adolescent smoking initiation and progression to daily smoking by race/ethnicity. Methods. We used data from the National Longitudinal Study of Adolescent Health to estimate the effects of individual (adolescent, family, peer) and contextual (school and state) factors on smoking onset among nonsmokers (n = 5374) and progression to daily smoking among smokers (n = 4474) with multilevel regression models. Results. Individual factors were more important predictors of smoking behaviors than were contextual factors. Predictors of smoking behaviors were mostly common across racial/ethnic groups. Conclusions. The few identified racial/ethnic differences in predictors of smoking behavior suggest that universal prevention and intervention efforts could reach most adolescents regardless of race/ethnicity. With 2 exceptions, important contextual factors remain to be identified. PMID:14713710

  20. Computational Modeling and Analysis of Insulin Induced Eukaryotic Translation Initiation

    PubMed Central

    Lequieu, Joshua; Chakrabarti, Anirikh; Nayak, Satyaprakash; Varner, Jeffrey D.

    2011-01-01

    Insulin, the primary hormone regulating the level of glucose in the bloodstream, modulates a variety of cellular and enzymatic processes in normal and diseased cells. Insulin signals are processed by a complex network of biochemical interactions which ultimately induce gene expression programs or other processes such as translation initiation. Surprisingly, despite the wealth of literature on insulin signaling, the relative importance of the components linking insulin with translation initiation remains unclear. We addressed this question by developing and interrogating a family of mathematical models of insulin induced translation initiation. The insulin network was modeled using mass-action kinetics within an ordinary differential equation (ODE) framework. A family of model parameters was estimated, starting from an initial best fit parameter set, using 24 experimental data sets taken from the literature. The residuals between model simulations and each of the experimental constraints were simultaneously minimized using multiobjective optimization. Interrogation of the model population, using sensitivity and robustness analysis, identified an insulin-dependent switch that controlled translation initiation. Our analysis suggested that without insulin, a balance between the pro-initiation activity of the GTP-binding protein Rheb and anti-initiation activity of PTEN controlled basal initiation. On the other hand, in the presence of insulin a combination of PI3K and Rheb activity controlled inducible initiation, where PI3K was only critical in the presence of insulin. Other well known regulatory mechanisms governing insulin action, for example IRS-1 negative feedback, modulated the relative importance of PI3K and Rheb but did not fundamentally change the signal flow. PMID:22102801
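
    A toy illustration of the modeling style described above (mass-action kinetics in an ODE framework), not the paper's network or parameters: a hypothetical two-step insulin-to-effector cascade integrated with SciPy.

```python
# Toy mass-action ODE model (invented species and rate constants), integrated
# with SciPy, to illustrate the modeling framework described in the abstract.
from scipy.integrate import solve_ivp

k = dict(kon=1.0, koff=0.1, kact=0.5, kdeact=0.2)   # hypothetical rate constants

def rhs(t, y, insulin):
    r, r_star, e, e_star = y
    v1 = k["kon"] * insulin * r - k["koff"] * r_star     # receptor activation
    v2 = k["kact"] * r_star * e - k["kdeact"] * e_star   # effector activation
    return [-v1, v1, -v2, v2]

y0 = [1.0, 0.0, 1.0, 0.0]                     # inactive receptor/effector pools
sol = solve_ivp(rhs, (0.0, 50.0), y0, args=(0.1,), dense_output=True)
print("active effector at end of run:", round(float(sol.y[3, -1]), 3))
```

    Parameter estimation against multiple data sets, as in the paper, would wrap an optimizer around repeated integrations of a (much larger) system of this form.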

  1. Why Did People Move During the Great Recession?: The Role of Economics in Migration Decisions

    PubMed Central

    Levy, Brian L.; Mouw, Ted; Daniel Perez, Anthony

    2017-01-01

    Labor migration offers an important mechanism to reallocate workers when there are regional differences in employment conditions. Whereas conventional wisdom suggests migration rates should increase during recessions as workers move out of areas that are hit hardest, initial evidence suggested that overall migration rates declined during the Great Recession, despite large regional differences in unemployment and growth rates. In this paper, we use data from the American Community Survey to analyze internal migration trends before and during the economic downturn. First, we find only a modest decline in the odds of adults leaving distressed labor market areas during the recession, which may result in part from challenges related to the housing price crash. Second, we estimate conditional logit models of destination choice for individuals who migrate across labor market areas and find a substantial effect of economic factors such as labor demand, unemployment, and housing values. We also estimate latent class conditional logit models that test whether there is heterogeneity in preferences for destination characteristics among migrants. Overall, the latent class models suggest that roughly equal percentages of migrants were motivated by economic factors before and during the recession. We conclude that fears of dramatic declines in labor migration seem to be unsubstantiated. PMID:28547003

  2. Why Did People Move During the Great Recession?: The Role of Economics in Migration Decisions.

    PubMed

    Levy, Brian L; Mouw, Ted; Daniel Perez, Anthony

    2017-04-01

    Labor migration offers an important mechanism to reallocate workers when there are regional differences in employment conditions. Whereas conventional wisdom suggests migration rates should increase during recessions as workers move out of areas that are hit hardest, initial evidence suggested that overall migration rates declined during the Great Recession, despite large regional differences in unemployment and growth rates. In this paper, we use data from the American Community Survey to analyze internal migration trends before and during the economic downturn. First, we find only a modest decline in the odds of adults leaving distressed labor market areas during the recession, which may result in part from challenges related to the housing price crash. Second, we estimate conditional logit models of destination choice for individuals who migrate across labor market areas and find a substantial effect of economic factors such as labor demand, unemployment, and housing values. We also estimate latent class conditional logit models that test whether there is heterogeneity in preferences for destination characteristics among migrants. Overall, the latent class models suggest that roughly equal percentages of migrants were motivated by economic factors before and during the recession. We conclude that fears of dramatic declines in labor migration seem to be unsubstantiated.

  3. New formulations for tsunami runup estimation

    NASA Astrophysics Data System (ADS)

    Kanoglu, U.; Aydin, B.; Ceylan, N.

    2017-12-01

    We evaluate shoreline motion and maximum runup in two ways. First, we use the linear shallow water-wave equations over a sloping beach and solve them as an initial-boundary value problem, similar to the nonlinear solution of Aydın and Kanoglu (2017, Pure Appl. Geophys., https://doi.org/10.1007/s00024-017-1508-z). The methodology we present here is simple; it involves an eigenfunction expansion and hence avoids integral transform techniques. We then use several different types of initial wave profiles with and without initial velocity, estimate shoreline properties and confirm the classical runup invariance between linear and nonlinear theories. Second, we use the nonlinear shallow water-wave solution of Kanoglu (2004, J. Fluid Mech. 513, 363-372) to estimate maximum runup. Kanoglu (2004) presented a simple integral solution for the nonlinear shallow water-wave equations using the classical Carrier and Greenspan transformation, and further expressed shoreline position and velocity in a simpler integral formulation. In addition, Tinti and Tonini (2005, J. Fluid Mech. 535, 33-64) defined an initial condition in a very convenient form for near-shore events. We use a Tinti and Tonini (2005) type initial condition in Kanoglu's (2004) shoreline integral solution, which leads to further simplified estimates for shoreline position and velocity, i.e., an algebraic relation. We then use this algebraic runup estimate to investigate the effect of earthquake source parameters on maximum runup and present results similar to Sepulveda and Liu (2016, Coast. Eng. 112, 57-68).
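
    For reference (this equation is the standard linearized form and is not quoted from the paper), the linear long-wave problem on a plane beach referred to above can be written for the free-surface elevation η(x, t) over the undisturbed depth h(x) = αx as

```latex
\frac{\partial^{2}\eta}{\partial t^{2}}
 \;=\; g\,\frac{\partial}{\partial x}\!\left(\alpha x\,\frac{\partial \eta}{\partial x}\right),
```

    solved as an initial-boundary value problem for prescribed initial wave profiles and, where applicable, initial velocities.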

  4. Gaussian Decomposition of Laser Altimeter Waveforms

    NASA Technical Reports Server (NTRS)

    Hofton, Michelle A.; Minster, J. Bernard; Blair, J. Bryan

    1999-01-01

    We develop a method to decompose a laser altimeter return waveform into its Gaussian components assuming that the position of each Gaussian within the waveform can be used to calculate the mean elevation of a specific reflecting surface within the laser footprint. We estimate the number of Gaussian components from the number of inflection points of a smoothed copy of the laser waveform, and obtain initial estimates of the Gaussian half-widths and positions from the positions of its consecutive inflection points. Initial amplitude estimates are obtained using a non-negative least-squares method. To reduce the likelihood of fitting the background noise within the waveform and to minimize the number of Gaussians needed in the approximation, we rank the "importance" of each Gaussian in the decomposition using its initial half-width and amplitude estimates. The initial parameter estimates of all Gaussians ranked "important" are optimized using the Levenburg-Marquardt method. If the sum of the Gaussians does not approximate the return waveform to a prescribed accuracy, then additional Gaussians are included in the optimization procedure. The Gaussian decomposition method is demonstrated on data collected by the airborne Laser Vegetation Imaging Sensor (LVIS) in October 1997 over the Sequoia National Forest, California.
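
    A compact sketch of the decomposition idea (not the LVIS processing chain): a synthetic waveform is fit with a sum of Gaussians refined by Levenberg-Marquardt. For brevity the initial guesses here are seeded from smoothed peak positions and half-widths rather than the inflection-point bookkeeping described above.

```python
# Illustrative sketch: decompose a synthetic return waveform into Gaussian
# components, seeding the fit from peaks of a smoothed copy and refining with
# Levenberg-Marquardt (SciPy's default for unbounded curve_fit).
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit
from scipy.signal import find_peaks, peak_widths

def gauss_sum(t, *p):                 # p = [A1, mu1, sig1, A2, mu2, sig2, ...]
    out = np.zeros_like(t)
    for a, mu, sig in zip(p[0::3], p[1::3], p[2::3]):
        out += a * np.exp(-0.5 * ((t - mu) / sig) ** 2)
    return out

t = np.linspace(0, 100, 400)
rng = np.random.default_rng(3)
wave = gauss_sum(t, 1.0, 35.0, 4.0, 0.6, 60.0, 8.0) + rng.normal(0, 0.02, t.size)

smooth = gaussian_filter1d(wave, 3)
peaks, _ = find_peaks(smooth, height=0.1 * smooth.max())
widths = peak_widths(smooth, peaks, rel_height=0.5)[0]        # FWHM in samples
dt = t[1] - t[0]

p0 = []
for pk, w in zip(peaks, widths):
    p0 += [smooth[pk], t[pk], max(w * dt / 2.355, dt)]        # amplitude, centre, sigma

popt, _ = curve_fit(gauss_sum, t, wave, p0=p0, maxfev=20000)
print(np.round(popt, 2))   # recovered (amplitude, centre, sigma) triplets
```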

  5. Ice water path estimation and characterization using passive microwave radiometry

    NASA Technical Reports Server (NTRS)

    Vivekanandan, J.; Turk, J.; Bringi, V. N.

    1991-01-01

    Model computations of top-of-atmospheric microwave brightness temperatures T(B) from layers of precipitation-sized ice of variable bulk density and ice water content (IWC) are presented. It is shown that the 85-GHz T(B) depends essentially on the ice optical thickness. The results demonstrate the potential usefulness of scattering-based channels for characterizing the ice phase and suggest a top-down methodology for retrieval of cloud vertical structure and precipitation estimation from multifrequency passive microwave measurements. Attention is also given to radiative transfer model results based on the multiparameter radar data initialization from the Cooperative Huntsville Meteorological Experiment (COHMEX) in northern Alabama. It is shown that brightness temperature warming effects due to the inclusion of a cloud liquid water profile are especially significant at 85 GHz during later stages of cloud evolution.

  6. Divergence Times and the Evolutionary Radiation of New World Monkeys (Platyrrhini, Primates): An Analysis of Fossil and Molecular Data.

    PubMed

    Perez, S Ivan; Tejedor, Marcelo F; Novo, Nelson M; Aristide, Leandro

    2013-01-01

    The estimation of phylogenetic relationships and divergence times among a group of organisms is a fundamental first step toward understanding its biological diversification. The time of the most recent or last common ancestor (LCA) of extant platyrrhines is one of the most controversial among scholars of primate evolution. Here we use two molecular based approaches to date the initial divergence of the platyrrhine clade, Bayesian estimations under a relaxed-clock model and substitution rate plus generation time and body size, employing the fossil record and genome datasets. We also explore the robustness of our estimations with respect to changes in topology, fossil constraints and substitution rate, and discuss the implications of our findings for understanding the platyrrhine radiation. Our results suggest that fossil constraints, topology and substitution rate have an important influence on our divergence time estimates. Bayesian estimates using conservative but realistic fossil constraints suggest that the LCA of extant platyrrhines existed at ca. 29 Ma, with the 95% confidence limit for the node ranging from 27-31 Ma. The LCA of extant platyrrhine monkeys based on substitution rate corrected by generation time and body size was established between 21-29 Ma. The estimates based on the two approaches used in this study recalibrate the ages of the major platyrrhine clades and corroborate the hypothesis that they constitute very old lineages. These results can help reconcile several controversial points concerning the affinities of key early Miocene fossils that have arisen among paleontologists and molecular systematists. However, they cannot resolve the controversy of whether these fossil species truly belong to the extant lineages or to a stem platyrrhine clade. That question can only be resolved by morphology. Finally, we show that the use of different approaches and well supported fossil information gives a more robust divergence time estimate of a clade.

  7. Divergence Times and the Evolutionary Radiation of New World Monkeys (Platyrrhini, Primates): An Analysis of Fossil and Molecular Data

    PubMed Central

    Perez, S. Ivan; Tejedor, Marcelo F.; Novo, Nelson M.; Aristide, Leandro

    2013-01-01

    The estimation of phylogenetic relationships and divergence times among a group of organisms is a fundamental first step toward understanding its biological diversification. The time of the most recent or last common ancestor (LCA) of extant platyrrhines is one of the most controversial among scholars of primate evolution. Here we use two molecular based approaches to date the initial divergence of the platyrrhine clade, Bayesian estimations under a relaxed-clock model and substitution rate plus generation time and body size, employing the fossil record and genome datasets. We also explore the robustness of our estimations with respect to changes in topology, fossil constraints and substitution rate, and discuss the implications of our findings for understanding the platyrrhine radiation. Our results suggest that fossil constraints, topology and substitution rate have an important influence on our divergence time estimates. Bayesian estimates using conservative but realistic fossil constraints suggest that the LCA of extant platyrrhines existed at ca. 29 Ma, with the 95% confidence limit for the node ranging from 27–31 Ma. The LCA of extant platyrrhine monkeys based on substitution rate corrected by generation time and body size was established between 21–29 Ma. The estimates based on the two approaches used in this study recalibrate the ages of the major platyrrhine clades and corroborate the hypothesis that they constitute very old lineages. These results can help reconcile several controversial points concerning the affinities of key early Miocene fossils that have arisen among paleontologists and molecular systematists. However, they cannot resolve the controversy of whether these fossil species truly belong to the extant lineages or to a stem platyrrhine clade. That question can only be resolved by morphology. Finally, we show that the use of different approaches and well supported fossil information gives a more robust divergence time estimate of a clade. PMID:23826358

  8. A multi-directional tracer test in the fractured Chalk aquifer of E. Yorkshire, UK.

    PubMed

    Hartmann, S; Odling, N E; West, L J

    2007-12-07

    A multi-borehole radial tracer test has been conducted in the confined Chalk aquifer of E. Yorkshire, UK. Three different tracer dyes were injected into three injection boreholes and a central borehole, 25 m from the injection boreholes, was pumped at 330 m(3)/d for 8 days. The breakthrough curves show that initial breakthrough and peak times were fairly similar for all dyes but that recoveries varied markedly from 9 to 57%. The breakthrough curves show a steep rise to a peak and long tail, typical of dual porosity aquifers. The breakthrough curves were simulated using a 1D dual porosity model. Model input parameters were constrained to acceptable ranges determined from estimations of matrix porosity and diffusion coefficient, fracture spacing, initial breakthrough times and bulk transmissivity of the aquifer. The model gave equivalent hydraulic apertures for fractures in the range 363-384 microm, dispersivities of 1 to 5 m and matrix block sizes of 6 to 9 cm. Modelling suggests that matrix block size is the primary controlling parameter for solute transport in the aquifer, particularly for recovery. The observed breakthrough curves suggest results from single injection-borehole tracer tests in the Chalk may give initial breakthrough and peak times reasonably representative of the aquifer but that recovery is highly variable and sensitive to injection and abstraction borehole location. Consideration of aquifer heterogeneity suggests that high recoveries may be indicative of a high flow pathway adjacent, but not necessarily connected, to the injection and abstraction boreholes whereas low recoveries may indicate more distributed flow through many fractures of similar aperture.

  9. Geology and Origin of Europa's Mitten Feature (Murias Chaos)

    NASA Technical Reports Server (NTRS)

    Figueredo, P. H.; Chuang, F. C.; Rathbun, J.; Kirk, R. L.; Greeley, R.

    2002-01-01

    The "Mitten" (provisionally named Murias Chaos by the International Astronomical Union) is a region of elevated chaos-like terrain in the leading hemisphere of Europa. Its origin had been explained under the currently debated theories of melting through a thin lithosphere or convection within a thick one. Galileo observations reveal several characteristics that suggest that the Mitten is distinct from typical chaos terrain and point to a different formational process. Photoclinometric elevation estimates suggest that the Mitten is slightly elevated with respect to the surrounding terrain; geologic relations indicate that it must have raised significantly from the plains in its past, resembling disrupted domes on Europa's trailing hemisphere. Moreover, the Mitten material appears to have extruded onto the plains and flowed for tens of kilometers. The area subsequently subsided as a result of isostatic adjustment, viscous relaxation, and/or plains loading. Using plate flexure models, we estimated the elastic lithosphere in the area to be several kilometers thick. We propose that the Mitten originated by the ascent and extrusion of a large thermal diapir. Thermal-mechanical modeling shows that a Mitten-sized plume would remain sufficiently warm and buoyant to pierce through the crust and flow unconfined on the surface. Such a diapir probably had an initial radius between 5 and 8 km and an initial depth of 20-40 km, consistent with a thick-lithosphere model. In this scenario the Mitten appears to represent the surface expression of the rare ascent of a large diapir, in contrast to lenticulae and chaos terrain, which may form by isolated and clustered small diapirs, respectively.

  10. Geology and origin of Europa's "Mitten" feature (Murias Chaos)

    USGS Publications Warehouse

    Figueredo, P.H.; Chuang, F.C.; Rathbun, J.; Kirk, R.L.; Greeley, R.

    2002-01-01

    The "Mitten" (provisionally named Murias Chaos by the International Astronomical Union) is a region of elevated chaos-like terrain in the leading hemisphere of Europa. Its origin had been explained under the currently debated theories of melting through a thin lithosphere or convection within a thick one. Galileo observations reveal several characteristics that suggest that the Mitten is distinct from typical chaos terrain and point to a different formational process. Photoclinometric elevation estimates suggest that the Mitten is slightly elevated with respect to the surrounding terrain; geologic relations indicate that it must have raised significantly from the plains in its past, resembling disrupted domes on Europa's trailing hemisphere. Moreover, the Mitten material appears to have extruded onto the plains and flowed for tens of kilometers. The area subsequently subsided as a result of isostatic adjustment, viscous relaxation, and/or plains loading. Using plate flexure models, we estimated the elastic lithosphere in the area to be several kilometers thick. We propose that the Mitten originated by the ascent and extrusion of a large thermal diapir. Thermal-mechanical modeling shows that a Mitten-sized plume would remain sufficiently warm and buoyant to pierce through the crust and flow unconfined on the surface. Such a diapir probably had an initial radius between 5 and 8 km and an initial depth of 20-40 km, consistent with a thick-lithosphere model. In this scenario the Mitten appears to represent the surface expression of the rare ascent of a large diapir, in contrast to lenticulae and chaos terrain, which may form by isolated and clustered small diapirs, respectively.

  11. Genetic and environmental influences on cannabis use initiation and problematic use: a meta-analysis of twin studies

    PubMed Central

    Verweij, Karin J.H.; Zietsch, Brendan P.; Lynskey, Michael T.; Medland, Sarah E.; Neale, Michael C.; Martin, Nicholas G.; Boomsma, Dorret I.; Vink, Jacqueline M.

    2009-01-01

    Background Because cannabis use is associated with social, physical and psychological problems, it is important to know what causes some individuals to initiate cannabis use and a subset of those to become problematic users. Previous twin studies found evidence for both genetic and environmental influences on vulnerability, but due to considerable variation in the results it is difficult to draw clear conclusions regarding the relative magnitude of these influences. Method A systematic literature search identified 28 twin studies on cannabis use initiation and 24 studies on problematic cannabis use. The proportion of total variance accounted for by genes (A), shared environment (C), and unshared environment (E) in (1) initiation of cannabis use and (2) problematic cannabis use was calculated by averaging corresponding A, C, and E estimates across studies from independent cohorts and weighting by sample size. Results For cannabis use initiation, A, C, and E estimates were 48%, 25% and 27% in males and 40%, 39% and 21% in females. For problematic cannabis use A, C, and E estimates were 51%, 20% and 29% for males and 59%, 15% and 26% for females. Confidence intervals of these estimates are considerably narrower than those in the source studies. Conclusions Our results indicate that vulnerability to both cannabis use initiation and problematic use was significantly influenced by A, C, and E. There was a trend for a greater C and lesser A component for cannabis initiation as compared to problematic use for females. PMID:20402985
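
    The pooling step described in the Method section amounts to a sample-size-weighted average of the A, C and E estimates across independent cohorts; a minimal sketch with invented numbers:

```python
# Minimal sketch of sample-size-weighted pooling of A/C/E estimates across
# independent twin studies. All numbers are invented for illustration.
import numpy as np

studies = [            # (n_pairs, A, C, E) -- hypothetical
    (1200, 0.45, 0.30, 0.25),
    ( 800, 0.55, 0.20, 0.25),
    (2500, 0.48, 0.24, 0.28),
]
n = np.array([s[0] for s in studies], dtype=float)
ace = np.array([s[1:] for s in studies])
pooled = (ace * (n / n.sum())[:, None]).sum(axis=0)   # weight each study by sample size
print(dict(zip("ACE", pooled.round(3))))
```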

  12. Exposure to movie smoking: its relation to smoking initiation among US adolescents.

    PubMed

    Sargent, James D; Beach, Michael L; Adachi-Mejia, Anna M; Gibson, Jennifer J; Titus-Ernstoff, Linda T; Carusi, Charles P; Swain, Susan D; Heatherton, Todd F; Dalton, Madeline A

    2005-11-01

    Regional studies have linked exposure to movie smoking with adolescent smoking. We examined this association in a representative US sample. We conducted a random-digit-dial survey of 6522 US adolescents aged 10 to 14 years. Using previously validated methods, we estimated exposure to movie smoking, in 532 recent box-office hits, and examined its relation with adolescents having ever tried smoking a cigarette. The distributions of demographics and census region in the unweighted sample were almost identical to 2000 US Census estimates, confirming representativeness. Overall, 10% of the population had tried smoking. Quartile (Q) of movie smoking exposure was significantly associated with the prevalence of smoking initiation: 0.02 of adolescents in Q1 had tried smoking; 0.06 in Q2; 0.11 in Q3; and 0.22 in Q4. This association did not differ significantly by race/ethnicity or census region. After controlling for sociodemographics, friend/sibling/parent smoking, school performance, personality characteristics, and parenting style, the adjusted odds ratios for having tried smoking were 1.7 (95% confidence interval [CI]: 1.1, 2.7) for Q2, 1.8 (95% CI: 1.2, 2.9) for Q3, and 2.6 (95% CI: 1.7, 4.1) for Q4 compared with adolescents in Q1. The covariate-adjusted attributable fraction was 0.38 (95% CI: 0.20, 0.56), suggesting that exposure to movie smoking is the primary independent risk factor for smoking initiation in US adolescents in this age group. Smoking in movies is a risk factor for smoking initiation among US adolescents. Limiting exposure of young adolescents to movie smoking could have important public health implications.

  13. A hydroclimatological approach to predicting regional landslide probability using Landlab

    NASA Astrophysics Data System (ADS)

    Strauch, Ronda; Istanbulluoglu, Erkan; Nudurupati, Sai Siddhartha; Bandaragoda, Christina; Gasparini, Nicole M.; Tucker, Gregory E.

    2018-02-01

    We develop a hydroclimatological approach to the modeling of regional shallow landslide initiation that integrates spatial and temporal dimensions of parameter uncertainty to estimate an annual probability of landslide initiation based on Monte Carlo simulations. The physically based model couples the infinite-slope stability model with a steady-state subsurface flow representation and operates in a digital elevation model. Spatially distributed gridded data for soil properties and vegetation classification are used for parameter estimation of probability distributions that characterize model input uncertainty. Hydrologic forcing to the model is through annual maximum daily recharge to subsurface flow obtained from a macroscale hydrologic model. We demonstrate the model in a steep mountainous region in northern Washington, USA, over 2700 km2. The influence of soil depth on the probability of landslide initiation is investigated through comparisons among model output produced using three different soil depth scenarios reflecting the uncertainty of soil depth and its potential long-term variability. We found elevation-dependent patterns in probability of landslide initiation that showed the stabilizing effects of forests at low elevations, an increased landslide probability with forest decline at mid-elevations (1400 to 2400 m), and soil limitation and steep topographic controls at high alpine elevations and in post-glacial landscapes. These dominant controls manifest themselves in a bimodal distribution of spatial annual landslide probability. Model testing with limited observations revealed similarly moderate model confidence for the three hazard maps, suggesting suitable use as relative hazard products. The model is available as a component in Landlab, an open-source, Python-based landscape earth systems modeling environment, and is designed to be easily reproduced utilizing HydroShare cyberinfrastructure.
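
    A simplified, stand-alone sketch of the kind of calculation the component performs for a single grid cell; all parameter distributions and cell properties below are invented, and the actual Landlab component works on gridded soils, soil-depth scenarios and recharge from the macroscale hydrologic model. The idea is to sample uncertain parameters, evaluate the infinite-slope factor of safety with a steady-state relative wetness, and report the fraction of Monte Carlo iterations with FS < 1.

```python
# Simplified single-cell Monte Carlo estimate of annual landslide probability
# using the infinite-slope factor of safety with steady-state relative wetness.
# All distributions and cell properties are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
N = 5000
g, rho_w = 9.81, 1000.0

slope = np.deg2rad(32.0)                                    # cell slope (assumed)
sca = 800.0                                                 # specific catchment area a/b [m]
soil_depth = rng.normal(1.2, 0.3, N).clip(0.3)              # soil depth [m]
cohesion = rng.uniform(2e3, 8e3, N)                         # combined root + soil cohesion [Pa]
phi = np.deg2rad(rng.uniform(30, 38, N))                    # internal friction angle
rho_s = rng.uniform(1600, 2000, N)                          # soil bulk density [kg/m3]
T = rng.uniform(1e-4, 1e-3, N) * 86400                      # transmissivity [m2/s] -> [m2/day]
R = rng.gumbel(30.0, 10.0, N).clip(0)                       # annual max daily recharge [mm/day]

# Steady-state relative wetness, capped at saturation
Rw = np.clip((R / 1000.0) * sca / (T * np.sin(slope)), 0.0, 1.0)

# Infinite-slope factor of safety
FS = (cohesion / (soil_depth * rho_s * g * np.sin(slope) * np.cos(slope))
      + np.cos(slope) * np.tan(phi) * (1 - Rw * rho_w / rho_s) / np.sin(slope))

print("P(FS < 1) ~", round(float((FS < 1).mean()), 3))
```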

  14. X3 expansion tube driver gas spectroscopy and temperature measurements

    NASA Astrophysics Data System (ADS)

    Parekh, V.; Gildfind, D.; Lewis, S.; James, C.

    2018-07-01

    The University of Queensland's X3 facility is a large, free-piston driven expansion tube used for super-orbital and high Mach number scramjet aerothermodynamic studies. During recent development of new scramjet test flow conditions, experimentally measured shock speeds were found to be significantly lower than those predicted by initial driver performance calculations. These calculations were based on ideal, isentropic compression of the driver gas and indicated that loss mechanisms, not accounted for in the preliminary analysis, were significant. The critical determinant of shock speed is peak driver gas sound speed, which for a given gas composition depends on the peak driver gas temperature. This temperature may be inaccurately estimated if an incorrect fill temperature is assumed, or if heat losses during driver gas compression are significant but not accounted for. For this study, the ideal predicted peak temperature was 3750 K, without accounting for losses. However, a much lower driver temperature of 2400 K is suggested based on measured experimental shock speeds. This study aimed to measure initial and peak driver gas temperatures for a representative X3 operating condition. Examination of the transient temperatures of the driver gas and compression tube steel wall during the initial fill process showed that once the filling process was complete, the steady-state driver gas temperature closely matched the tube wall temperature. Therefore, while assuming the gas is initially at the ambient laboratory temperature is not a significant source of error, it can be entirely mitigated by simply monitoring tube wall temperature. Optical emission spectroscopy was used to determine the driver gas spectra after diaphragm rupture; the driver gas emission spectrum exhibited a significant continuum radiation component, with prominent spectral lines attributed to contamination of the gas. A graybody approximation of the continuum suggested a peak driver gas temperature of 3200 K; uncertainty associated with the blackbody curve fit is ±100 K. However, work is required to quantify additional sources of uncertainty due to the graybody assumption and the presence of contaminant particles in the driver gas; these are potentially significant. The estimate of the driver gas temperature suggests that driver heat losses are not the dominant contributor to the lower-than-expected shock speeds for X3. Since both the driver temperature and pressure have been measured, investigation of total pressure losses during driver gas expansion across the diaphragm and driver-to-driven tube area change (currently not accounted for) is recommended for future studies as the likely mechanism for the observed performance gap.
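
    A minimal sketch of the graybody temperature estimate described above (not the facility's analysis code): a Planck curve scaled by a wavelength-independent factor is fit to a continuum spectrum. The synthetic "spectrum", wavelength band and noise level are assumptions, and the scale factor is fit in log space purely for numerical conditioning.

```python
# Illustrative graybody fit: estimate a radiating-gas temperature by fitting a
# scaled Planck curve to a (here synthetic) continuum spectrum with SciPy.
import numpy as np
from scipy.optimize import curve_fit

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

def graybody(lam, T, log10_scale):
    """Planck spectral radiance times a wavelength-independent scale factor."""
    return 10.0**log10_scale * (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

lam = np.linspace(400e-9, 900e-9, 200)                  # assumed visible band [m]
rng = np.random.default_rng(5)
data = graybody(lam, 3200.0, -12.0) * (1 + 0.03 * rng.normal(size=lam.size))

popt, pcov = curve_fit(graybody, lam, data, p0=(2500.0, -12.0))
T_fit, T_err = popt[0], np.sqrt(pcov[0, 0])
print(f"T = {T_fit:.0f} +/- {T_err:.0f} K")             # statistical fit uncertainty only
```

    As the abstract notes, the statistical fit uncertainty understates the total error: the graybody assumption and particle contamination add systematic contributions not captured by the covariance of the fit.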

  16. The impact on hospital resource utilisation of treatment of hepatic encephalopathy with rifaximin-α.

    PubMed

    Orr, James G; Currie, Craig J; Berni, Ellen; Goel, Anurag; Moriarty, Kieran J; Sinha, Ashish; Gordon, Fiona; Dethier, Anne; Dillon, John F; Clark, Katie; Richardson, Paul; Middleton, Paul; Patel, Vishal; Shawcross, Debbie; Preedy, Helen; Aspinall, Richard J; Hudson, Mark

    2016-09-01

    Rifaximin-α reduces the risk of recurrence of overt hepatic encephalopathy. However, there remain concerns regarding the financial cost of the drug. We aimed to study the impact of treatment with rifaximin-α on healthcare resource utilisation using data from seven UK liver treatment centres. All seven centres agreed a standardised data set, and data characterising clinical, demographic and emergency hospital admissions were collected retrospectively for the time periods 3, 6 and 12 months before and following initiation of rifaximin-α. Admission rates and hospital length of stay before and during therapy were compared. Costs of admissions and drug acquisition were estimated using published sources. Multivariate analyses were carried out to assess the relative impact of various factors on hospital length of stay. Data were available from 326 patients. Following the commencement of rifaximin-α, the total hospital length of stay was reduced by an estimated 31-53%, equating to a reduction in inpatient costs of between £4858 and £6607 per year. Taking into account drug costs of £3379 for 1-year treatment with rifaximin-α, there was an estimated annual mean saving of £1480-£3228 per patient. Initiation of treatment with rifaximin-α was associated with a marked reduction in the number of hospital admissions and hospital length of stay. These data suggest that treatment of patients with rifaximin-α for hepatic encephalopathy was generally cost saving. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
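
    The headline saving is simple arithmetic on the per-patient figures quoted above; a minimal sketch follows (the subtraction structure is an assumption about how the reported totals combine, and small differences are due to rounding):

      # Per-patient annual figures from the abstract (GBP).
      inpatient_saving_low, inpatient_saving_high = 4858, 6607   # reduced admission costs
      drug_cost = 3379                                           # one year of rifaximin-alpha

      # Net annual saving; the abstract reports GBP 1480-3228 after rounding.
      print(f"net saving: GBP {inpatient_saving_low - drug_cost} "
            f"to GBP {inpatient_saving_high - drug_cost}")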

  17. Industrial point source CO2 emission strength estimation with aircraft measurements and dispersion modelling.

    PubMed

    Carotenuto, Federico; Gualtieri, Giovanni; Miglietta, Franco; Riccio, Angelo; Toscano, Piero; Wohlfahrt, Georg; Gioli, Beniamino

    2018-02-22

    CO2 remains the greenhouse gas that contributes most to anthropogenic global warming, and the evaluation of its emissions is of major interest for both research and regulatory purposes. Emission inventories generally provide quite reliable estimates of CO2 emissions. However, because of intrinsic uncertainties associated with these estimates, it is of great importance to validate emission inventories against independent estimates. This paper describes an integrated approach combining aircraft measurements and a puff dispersion modelling framework by considering a CO2 industrial point source, located in Biganos, France. CO2 density measurements were obtained by applying the mass balance method, while CO2 emission estimates were derived by implementing the CALMET/CALPUFF model chain. For the latter, three meteorological initializations were used: (i) WRF-modelled outputs initialized by ECMWF reanalyses; (ii) WRF-modelled outputs initialized by CFSR reanalyses; and (iii) local in situ observations. Governmental inventory data were used as reference for all applications. The strengths and weaknesses of the different approaches and how they affect emission estimation uncertainty were investigated. The mass balance based on aircraft measurements was quite successful in capturing the point source emission strength (at worst with a 16% bias), while the accuracy of the dispersion modelling, notably when using ECMWF initialization through the WRF model, was only slightly lower (estimation with an 18% bias). The analysis will help in highlighting some methodological best practices that can be used as guidelines for future experiments.
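
    The aircraft mass-balance idea, integrating the wind-carried CO2 excess over a downwind "curtain" sampled by stacked transects, can be shown with a minimal Python sketch; the plume shape, wind speed, and grid are illustrative assumptions, not the Biganos data.

      import numpy as np

      y = np.linspace(-2000, 2000, 81)     # crosswind distance along the transect [m]
      z = np.linspace(50, 650, 25)         # flight altitudes [m]
      Y, Z = np.meshgrid(y, z)

      # Synthetic CO2 excess above background on the curtain [kg m^-3].
      excess = 2e-6 * np.exp(-(Y / 600.0)**2 - ((Z - 300.0) / 150.0)**2)
      wind_normal = 5.0                    # wind component normal to the curtain [m s^-1]

      # Emission strength = double integral of (excess * normal wind) over the curtain.
      flux = np.trapz(np.trapz(excess * wind_normal, y, axis=1), z)
      print(f"estimated source strength: {flux:.1f} kg CO2/s")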

  18. Influence of Initial Inclined Surface Crack on Estimated Residual Fatigue Lifetime of Railway Axle

    NASA Astrophysics Data System (ADS)

    Náhlík, Luboš; Pokorný, Pavel; Ševčík, Martin; Hutař, Pavel

    2016-11-01

    Railway axles are subjected to cyclic loading, which can lead to fatigue failure. For safe operation of railway axles, a damage tolerance approach that takes into account a possible defect on the axle surface is often required. This contribution deals with the estimation of the residual fatigue lifetime of a railway axle with an initial inclined surface crack. A 3D numerical model of an inclined semi-elliptical surface crack in a railway axle was developed, and its curved propagation through the axle was simulated by the finite element method. The presence of a press-fitted wheel in the vicinity of the initial crack was taken into account. A typical loading spectrum of a railway axle was considered, and the residual fatigue lifetime was estimated by the NASGRO approach. Material properties of the typical axle steel EA4T were used in the numerical calculations and lifetime estimation.
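
    How a residual lifetime follows from a crack-growth law can be shown with a minimal Python sketch using a simplified Paris relation under constant-amplitude loading; the constants and geometry factor are illustrative assumptions, whereas the study used the full NASGRO relation with a measured load spectrum and a 3D crack model.

      import numpy as np

      # Paris law da/dN = C*(dK)^m with dK = Y*dsigma*sqrt(pi*a), integrated in closed form.
      C, m = 2.0e-12, 3.0            # da/dN in m/cycle, dK in MPa*sqrt(m) (assumed)
      Y = 0.73                       # geometry factor for a semi-elliptical surface crack (assumed)
      dsigma = 80.0                  # constant-amplitude stress range [MPa] (assumed)
      a0, ac = 2.0e-3, 25.0e-3       # initial and critical crack depths [m]

      A = C * (Y * dsigma * np.sqrt(np.pi))**m
      N = (ac**(1 - m / 2) - a0**(1 - m / 2)) / (A * (1 - m / 2))
      print(f"residual fatigue life: {N:.2e} cycles")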

  19. Evaluation of terrestrial and streamside salamander monitoring techniques at Shenandoah National Park

    USGS Publications Warehouse

    Jung, R.E.; Droege, S.; Sauer, J.R.; Landy, R.B.

    2000-01-01

    In response to concerns about amphibian declines, a study evaluating and validating amphibian monitoring techniques was initiated in Shenandoah and Big Bend National Parks in the spring of 1998. We evaluate precision, bias, and efficiency of several sampling methods for terrestrial and streamside salamanders in Shenandoah National Park and assess salamander abundance in relation to environmental variables, notably soil and water pH. Terrestrial salamanders, primarily redback salamanders (Plethodon cinereus), were sampled by searching under cover objects during the day in square plots (10 to 35 m2). We compared population indices (mean daily and total counts) with adjusted population estimates from capture-recapture. Analyses suggested that the proportion of salamanders detected (p) during sampling varied among plots, necessitating the use of adjusted population estimates. However, adjusted population estimates were less precise than population indices, and may not be efficient in relating salamander populations to environmental variables. In future sampling, strategic use of capture-recapture to verify consistency of p's among sites may be a reasonable compromise between the possibility of bias in estimation of population size and deficiencies due to inefficiency associated with the estimation of p. The streamside two-lined salamander (Eurycea bislineata) was surveyed using four methods: leaf litter refugia bags, 1 m2 quadrats, 50 x 1 m visual encounter transects, and electric shocking. Comparison of survey methods at nine streams revealed congruent patterns of abundance among sites, suggesting that relative bias among the methods is similar, and that choice of survey method should be based on precision and logistical efficiency. Redback and two-lined salamander abundance were not significantly related to soil or water pH, respectively.
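
    The "adjusted population estimates from capture-recapture" can be illustrated with the Chapman form of the Lincoln-Petersen estimator for a two-visit survey; the counts below are invented for illustration, not data from the park plots.

      # Chapman estimator for a two-sample capture-recapture survey (illustrative counts).
      n1 = 42    # salamanders captured and marked on the first visit
      n2 = 38    # salamanders captured on the second visit
      m2 = 11    # marked animals recaptured on the second visit

      N_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1                  # adjusted abundance estimate
      var_N = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
               / ((m2 + 1)**2 * (m2 + 2)))                        # Chapman variance
      p_detect = m2 / n1                                          # crude detection probability
      print(f"N = {N_hat:.0f} (SE {var_N**0.5:.1f}), p = {p_detect:.2f}")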

  20. Effects of land-use change on the carbon balance of terrestrial ecosystems

    NASA Astrophysics Data System (ADS)

    Houghton, R. A.; Goodale, C. L.

    Most changes in land use affect the amount of carbon held in vegetation and soil, thereby, either releasing carbon dioxide (a greenhouse gas) to, or removing it from, the atmosphere. The greatest fluxes of carbon result from conversion of forests to open lands (and vice versa). Model-based estimates of the flux of carbon attributable to land-use change are highly variable, however, largely as a result of uncertainties in the areas annually affected by different types of land-use change. Uncertain rates of tropical deforestation, for example, account for more than half of the range in estimates of the global carbon flux. Three other factors account for much of the rest of the uncertainty: (1) the initial stocks of carbon in ecosystems affected by land-use change (i.e., spatial heterogeneity), (2) per hectare changes in carbon stocks in response to different types of land-use change, and (3) legacy effects; that is, the time it takes for carbon stocks to equilibrate following a change in land use. For the tropics, recent satellite-based estimates of deforestation are lower than previous estimates and yield calculated carbon emissions from land-use change that are similar to independently-derived estimates of the total net flux for the region. The similarity suggests that changes in land use account for the net flux of carbon from the tropics. For the northern mid-latitudes, the carbon sink attributed to land-use change is less than the sink obtained by other methods, suggesting either an incomplete accounting of land-use change or the importance of other factors in explaining the current carbon sink in that region.

  1. Estimating satellite pose and motion parameters using a novelty filter and neural net tracker

    NASA Technical Reports Server (NTRS)

    Lee, Andrew J.; Casasent, David; Vermeulen, Pieter; Barnard, Etienne

    1989-01-01

    A system for determining the position, orientation and motion of a satellite with respect to a robotic spacecraft using video data is advanced. This system utilizes two levels of pose and motion estimation: an initial system which provides coarse estimates of pose and motion, and a second system which uses the coarse estimates and further processing to provide finer pose and motion estimates. The present paper emphasizes the initial coarse pose and motion estimation subsystem. This subsystem utilizes novelty detection and filtering for locating novel parts and a neural net tracker to track these parts over time. Results of using this system on a sequence of images of a spin-stabilized satellite are presented.

  2. Recovery from PTSD following Hurricane Katrina.

    PubMed

    McLaughlin, Katie A; Berglund, Patricia; Gruber, Michael J; Kessler, Ronald C; Sampson, Nancy A; Zaslavsky, Alan M

    2011-06-01

    We examined patterns and correlates of speed of recovery of estimated posttraumatic stress disorder (PTSD) among people who developed PTSD in the wake of Hurricane Katrina. A probability sample of prehurricane residents of areas affected by Hurricane Katrina was administered a telephone survey 7-19 months following the hurricane and again 24-27 months posthurricane. The baseline survey assessed PTSD using a validated screening scale and assessed a number of hypothesized predictors of PTSD recovery that included sociodemographics, prehurricane history of psychopathology, hurricane-related stressors, social support, and social competence. Exposure to posthurricane stressors and course of estimated PTSD were assessed in a follow-up interview. An estimated 17.1% of respondents had a history of estimated hurricane-related PTSD at baseline and 29.2% by the follow-up survey. Of the respondents who developed estimated hurricane-related PTSD, 39.0% recovered by the time of the follow-up survey with a mean duration of 16.5 months. Predictors of slow recovery included exposure to a life-threatening situation, hurricane-related housing adversity, and high income. Other sociodemographics, history of psychopathology, social support, social competence, and posthurricane stressors were unrelated to recovery from estimated PTSD. The majority of adults who developed estimated PTSD after Hurricane Katrina did not recover within 18-27 months. Delayed onset was common. Findings document the importance of initial trauma exposure severity in predicting course of illness and suggest that pre- and posttrauma factors typically associated with course of estimated PTSD did not influence recovery following Hurricane Katrina. © 2011 Wiley-Liss, Inc.

  3. Estimated prevalence of hearing loss and provision of hearing services in Pacific Island nations.

    PubMed

    Sanders, Michael; Houghton, Natasha; Dewes, Ofa; McCool, Judith; Thorne, Peter R

    2015-03-01

    Hearing impairment (HI) affects an estimated 538 million people worldwide, with 80% of these living in developing countries. Untreated HI in childhood may lead to developmental delay and in adults results in social isolation, inability to find or maintain employment, and dependency. Early intervention and support programmes can significantly reduce the negative effects of HI. To estimate HI prevalence and identify available hearing services in some Pacific countries - Cook Islands, Fiji, Niue, Samoa, Tokelau, Tonga. Data were collected through literature review and correspondence with service providers. Prevalence estimates were based on census data and previously published regional estimates. Estimates indicate 20-23% of the population may have at least a mild HI, with up to 11% having a moderate impairment or worse. Estimated incidence of chronic otitis media in Pacific Island nations is 3-5 times greater than other Australasian countries in children under 10 years old. Permanent HI from otitis media is substantially more likely in children and adults in Pacific Island nations. Several organisations and individuals provide some limited hearing services in a few Pacific Island nations, but the majority of people with HI are largely underserved. Although accurate information on HI prevalence is lacking, prevalence estimates of HI and ear disease suggest they are significant health conditions in Pacific Island nations. There is relatively little support for people with HI or ear disease in the Pacific region. An investment in initiatives to both identify and support people with hearing loss in the Pacific is necessary.

  4. The Model Parameter Estimation Experiment (MOPEX): Its structure, connection to other international initiatives and future directions

    USGS Publications Warehouse

    Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.

    2006-01-01

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.

  5. Battery state-of-charge estimation using approximate least squares

    NASA Astrophysics Data System (ADS)

    Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.

    2015-03-01

    In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.
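
    The re-initialization idea, predicting the electromotive force (open-circuit voltage) by least squares from a short relaxation window, mapping it to SoC, and resuming Coulomb counting from that value, can be sketched in a few lines of Python; the relaxation model, OCV-SoC table, and data are illustrative assumptions, not the paper's approximate least squares scheme or fuel-gauge hardware.

      import numpy as np

      # 1) Least-squares EMF prediction from v(t) ~ EMF - b*exp(-t/tau) after load removal.
      t = np.linspace(0, 300, 61)                                  # seconds at rest
      v = 3.78 - 0.05 * np.exp(-t / 90.0)
      v += np.random.default_rng(1).normal(0, 1e-3, t.size)
      A = np.column_stack([np.ones_like(t), -np.exp(-t / 90.0)])   # tau assumed known
      emf, b = np.linalg.lstsq(A, v, rcond=None)[0]

      # 2) Map EMF to SoC through an illustrative OCV-SoC table.
      ocv = np.array([3.50, 3.60, 3.68, 3.75, 3.83, 3.95, 4.10])
      soc = np.array([0.00, 0.15, 0.30, 0.50, 0.70, 0.90, 1.00])
      soc0 = np.interp(emf, ocv, soc)

      # 3) Re-initialized Coulomb counting: SoC(t) = SoC0 - integral(i dt)/capacity.
      capacity_As = 3.0 * 3600                                     # 3 Ah cell
      current = np.full(600, 0.5)                                  # 0.5 A discharge, 1 s samples
      soc_t = soc0 - np.cumsum(current) / capacity_As
      print(f"EMF {emf:.3f} V, SoC0 {soc0:.2f}, SoC after load {soc_t[-1]:.2f}")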

  6. No evidence of response bias in a population-based childhood cancer survivor questionnaire survey — Results from the Swiss Childhood Cancer Survivor Study

    PubMed Central

    Gianinazzi, Micòl E.; Michel, Gisela; Zwahlen, Marcel; von der Weid, Nicolas X.; Kuehni, Claudia E.

    2017-01-01

    Purpose This is the first study to quantify potential nonresponse bias in a childhood cancer survivor questionnaire survey. We describe early and late responders and nonresponders, and estimate nonresponse bias in a nationwide questionnaire survey of survivors. Methods In the Swiss Childhood Cancer Survivor Study, we compared characteristics of early responders (who answered an initial questionnaire), late responders (who answered after ≥1 reminder) and nonresponders. Sociodemographic and cancer-related information was available for the whole population from the Swiss Childhood Cancer Registry. We compared observed prevalence of typical outcomes in responders to the expected prevalence in a complete (100% response) representative population we constructed in order to estimate the effect of nonresponse bias. We constructed the complete population using inverse probability of participation weights. Results Of 2328 survivors, 930 returned the initial questionnaire (40%); 671 returned the questionnaire after ≥1 reminder (29%). Compared to early and late responders, we found that the 727 nonresponders (31%) were more likely male, aged <20 years, French or Italian speaking, of foreign nationality, diagnosed with lymphoma or a CNS or germ cell tumor, and treated only with surgery. But observed prevalence of typical estimates (somatic health, medical care, mental health, health behaviors) was similar among the sample of early responders (40%), all responders (69%), and the complete representative population (100%). In this survey, nonresponse bias did not seem to influence observed prevalence estimates. Conclusion Nonresponse bias may play only a minor role in childhood cancer survivor studies, suggesting that results can be generalized to the whole population of such cancer survivors and applied in clinical practice. PMID:28463966
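
    The inverse-probability-of-participation weighting used to build the "complete" population can be sketched in Python; the covariates, participation model, and outcome are synthetic illustrations, not the registry data.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 2328
      male = rng.integers(0, 2, n)                 # registry covariates (synthetic)
      young = rng.integers(0, 2, n)                # aged < 20 years
      p_part = 1 / (1 + np.exp(-(0.9 - 0.5 * male - 0.4 * young)))
      responded = rng.random(n) < p_part           # who returned a questionnaire
      outcome = rng.random(n) < 0.30               # some health indicator, true prevalence 30%

      # Fit a participation model on the whole registry, weight responders by 1/p.
      X = np.column_stack([male, young])
      p_hat = LogisticRegression().fit(X, responded).predict_proba(X)[:, 1]
      w = 1.0 / p_hat[responded]

      naive = outcome[responded].mean()
      adjusted = np.average(outcome[responded], weights=w)
      print(f"naive prevalence {naive:.3f} vs IPW-adjusted {adjusted:.3f}")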

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckley, L; Lambert, C; Nyiri, B

    Purpose: To standardize the tube calibration for Elekta XVI cone beam CT (CBCT) systems in order to provide a meaningful estimate of the daily imaging dose and reduce the variation between units in a large centre with multiple treatment units. Methods: Initial measurements of the output from the CBCT systems were made using a Farmer chamber and standard CTDI phantom. The correlation between the measured CTDI and the tube current was confirmed using an Unfors Xi detector which was then used to perform a tube current calibration on each unit. Results: Initial measurements showed measured tube current variations of up to 25% between units for scans with the same image settings. In order to reasonably estimate the imaging dose, a systematic approach to x-ray generator calibration was adopted to ensure that the imaging dose was consistent across all units at the centre and was adopted as part of the routine quality assurance program. Subsequent measurements show that the variation in measured dose across nine units is on the order of 5%. Conclusion: Increasingly, patients receiving radiation therapy have extended life expectancies and therefore the cumulative dose from daily imaging should not be ignored. In theory, an estimate of imaging dose can be made from the imaging parameters. However, measurements have shown that there are large differences in the x-ray generator calibration as installed at the clinic. Current protocols recommend routine checks of dose to ensure constancy. The present study suggests that in addition to constancy checks on a single machine, a tube current calibration should be performed on every unit to ensure agreement across multiple machines. This is crucial at a large centre with multiple units in order to provide physicians with a meaningful estimate of the daily imaging dose.

  8. Effect of insurance parity on substance abuse treatment.

    PubMed

    Azzone, Vanessa; Frank, Richard G; Normand, Sharon-Lise T; Burnam, M Audrey

    2011-02-01

    This study examined the impact of insurance parity on the use, cost, and quality of substance abuse treatment. The authors compared substance abuse treatment spending and utilization from 1999 to 2002 for continuously enrolled beneficiaries covered by Federal Employees Health Benefit (FEHB) plans, which require parity coverage of mental health and substance use disorders, with spending and utilization among beneficiaries in a matched set of health plans without parity coverage. Logistic regression models estimated the probability of any substance abuse service use. Conditional on use, linear models estimated total and out-of-pocket spending. Logistic regression models for three quality indicators for substance abuse treatment were also estimated: identification of adult enrollees with a new substance abuse diagnosis, treatment initiation, and treatment engagement. Difference-in-difference estimates were computed as (postparity - preparity) differences in outcomes in plans without parity subtracted from those in FEHB plans. There were no significant differences between FEHB and non-FEHB plans in rates of change in average utilization of substance abuse services. Conditional on service utilization, the rate of substance abuse treatment out-of-pocket spending declined significantly in the FEHB plans compared with the non-FEHB plans (mean difference=-$101.09, 95% confidence interval [CI]=-$198.06 to -$4.12), whereas changes in total plan spending per user did not differ significantly. With parity, more patients had new diagnoses of a substance use disorder (difference-in-difference risk=.10%, CI=.02% to .19%). No statistically significant differences were found for rates of initiation and engagement in substance abuse treatment. Findings suggest that for continuously enrolled populations, providing parity of substance abuse treatment coverage improved insurance protection but had little impact on utilization, costs for plans, or quality of care.
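
    The difference-in-difference computation itself is simple arithmetic on group means, as in this minimal sketch; the spending figures are invented (chosen only to echo the reported change of roughly -$101), and the published analysis used regression models on individual-level data.

      # (post - pre) change in parity plans minus the same change in comparison plans.
      fehb_pre, fehb_post = 420.0, 310.0     # out-of-pocket spending per user, parity plans (USD)
      comp_pre, comp_post = 405.0, 396.0     # matched plans without parity (USD)

      did = (fehb_post - fehb_pre) - (comp_post - comp_pre)
      print(f"difference-in-difference estimate: {did:+.2f} USD per user")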

  9. Estimation of Ecosystem Parameters of the Community Land Model with DREAM: Evaluation of the Potential for Upscaling Net Ecosystem Exchange

    NASA Astrophysics Data System (ADS)

    Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.

    2015-12-01

    Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or a time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE sum (23% improvement), annual NEE cycle and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) in addition, those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so that their potential for upscaling is demonstrated. However, simulation results also indicate that the estimated parameters may mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.

  10. Thermo-mechanical models of obduction applied to the Oman ophiolite

    NASA Astrophysics Data System (ADS)

    Thibault, Duretz; Philippe, Agard; Philippe, Yamato; Céline, Ducassou; Taras, Gerya; Evguenii, Burov

    2015-04-01

    During obduction, regional-scale fragments of oceanic lithosphere (ophiolites) are emplaced somewhat enigmatically on top of lighter continental lithosphere. We herein use two-dimensional thermo-mechanical models to investigate the feasibility and controlling parameters of obduction. The models are designed using available geological data from the Oman (Semail) ophiolite. Initial and boundary conditions are constrained by plate kinematic and geochronological data, and modeling results are validated against petrological and structural observations. The reference model consists of three distinct stages: (1) initiation of oceanic subduction away from the Arabian margin, (2) emplacement of the Oman ophiolite atop the Arabian margin, and (3) dome-like exhumation of the subducted Arabian margin beneath the overlying ophiolite. A parametric study suggests that 350-400 km of shortening best fits both the peak P-T conditions of the subducted margin (1.5-2.5 GPa / 450-600°C) and the dimensions of the ophiolite (~170 km width), in agreement with previous estimates. Our results further confirm that the locus of obduction initiation is close to the eastern edge of the Arabian margin (~100 km) and indicate that obduction is facilitated by a strong continental basement rheology.

  11. Outcome of Early Initiation of Peritoneal Dialysis in Patients with End-Stage Renal Failure

    PubMed Central

    Oh, Kook-Hwan; Hwang, Young-Hwan; Cho, Jung-Hwa; Kim, Mira; Ju, Kyung Don; Joo, Kwon Wook; Kim, Dong Ki; Kim, Yon Su; Ahn, Curie

    2012-01-01

    Recent studies reported that early initiation of hemodialysis may increase mortality. However, studies that assessed the influence of early initiation of peritoneal dialysis (PD) yielded controversial results. In the present study, we evaluated the effect of early initiation of PD on various outcomes of end-stage renal failure patients by using propensity-score matching methods. Incident PD patients (n = 491) who started PD at SNU Hospital were enrolled. The patients were divided into 'early starters (n = 244)' and 'late starters (n = 247)' on the basis of the estimated glomerular filtration rate (eGFR) at the start of dialysis. The calculated propensity score was used for one-to-one matching. After propensity-score-based matching (n = 136 for each group), no significant differences were observed in terms of all-cause mortality (P = 0.17), technique failure (P = 0.62), cardiovascular events (P = 0.96) and composite events (P = 0.86) between the early and late starters. Stratification analysis in the propensity-score quartiles (n = 491) exhibited no trend toward better or poorer survival in terms of all-cause mortality. In conclusion, early commencement of PD does not reduce mortality risk or improve other outcomes. Although recent guidelines suggest initiation of dialysis at a higher eGFR, physicians should not determine the time to initiate PD therapy simply by relying on the eGFR alone.
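
    Propensity-score matching of early to late starters can be sketched with a logistic model plus greedy one-to-one nearest-neighbour matching; the covariates and assignment mechanism below are synthetic illustrations, not the SNU Hospital data.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      n = 491
      age = rng.normal(55, 12, n)                          # synthetic covariates
      diabetes = rng.integers(0, 2, n)
      early = rng.random(n) < 1 / (1 + np.exp(-(-3.0 + 0.05 * age + 0.5 * diabetes)))

      X = np.column_stack([age, diabetes])
      ps = LogisticRegression().fit(X, early).predict_proba(X)[:, 1]   # propensity scores

      # Greedy 1:1 nearest-neighbour matching of early starters to unused late starters.
      late_pool = list(np.where(~early)[0])
      pairs = []
      for i in np.where(early)[0]:
          if not late_pool:
              break
          j = min(late_pool, key=lambda idx: abs(ps[i] - ps[idx]))
          pairs.append((i, j))
          late_pool.remove(j)
      print(f"{len(pairs)} matched pairs; outcomes are then compared within pairs")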

  12. Hybrid method to estimate two-layered superficial tissue optical properties from simulated data of diffuse reflectance spectroscopy.

    PubMed

    Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin

    2018-04-20

    An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt.17, 107003 (2012)JBOPFO1083-366810.1117/1.JBO.17.10.107003] and phantom [J. Biomed. Opt.19, 077002 (2014)JBOPFO1083-366810.1117/1.JBO.19.7.077002] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method, including (1) high computational intensity and (2) converging to local minima. The parameter estimation procedure contained a novel initial estimation step to obtain an initial guess, which was used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table was used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the following fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
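
    The two-step idea, a coarse lookup-table search for the initial guess followed by local least-squares refinement, can be shown on a toy forward model standing in for the two-layer reflectance model; the model, parameters, and noise are illustrative assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      wl = np.linspace(450, 650, 50)
      def forward(mu_a, thickness):
          # Toy "spectrum": absorption level and top-layer thickness as free parameters.
          return np.exp(-mu_a * wl / 500.0) * (1 - np.exp(-thickness))

      meas = forward(0.8, 1.3) + np.random.default_rng(7).normal(0, 0.002, wl.size)

      # Step 1: lookup table (coarse grid) gives the initial guess.
      grid_mu = np.linspace(0.2, 2.0, 25)
      grid_th = np.linspace(0.2, 3.0, 25)
      best = min(((np.sum((forward(m, t) - meas)**2), m, t)
                  for m in grid_mu for t in grid_th))
      x0 = best[1:]

      # Step 2: iterative refinement starting from the lookup-table guess.
      fit = least_squares(lambda x: forward(*x) - meas, x0=x0)
      print(f"initial guess {x0}, refined estimate {fit.x}")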

  13. Risk of thromboembolism in women taking ethinylestradiol/drospirenone and other oral contraceptives.

    PubMed

    Seeger, John D; Loughlin, Jeanne; Eng, P Mona; Clifford, C Robin; Cutone, Jennifer; Walker, Alexander M

    2007-09-01

    The oral contraceptive ethinylestradiol 0.03 mg/drospirenone 3 mg contains a progestin component that differs from other oral contraceptives. Case reports and prescription event monitoring suggested that ethinylestradiol/drospirenone might be associated with an elevated risk of thromboembolism. We sought to estimate the association between ethinylestradiol/drospirenone and risk of thromboembolism relative to the association among other oral contraceptives. We identified ethinylestradiol/drospirenone initiators and a twofold larger group of other oral contraceptive initiators between June 2001 and June 2004 within a U.S. health insurer database. The comparison group was selected to have demographic and health care characteristics preceding oral contraceptive initiation that were similar to ethinylestradiol/drospirenone initiators. Thromboembolism during the follow-up of the cohorts was identified through claims for medical services, and only medical record-confirmed cases were included in analyses. The primary (as-matched) analysis used proportional hazards regression, whereas a secondary (as-treated) analysis accounted for changes in oral contraceptives during follow-up using Poisson regression. The 22,429 ethinylestradiol/drospirenone initiators and 44,858 other oral contraceptive initiators were followed for an average of 7.6 months, and there were 18 cases of thromboembolism in ethinylestradiol/drospirenone initiators and 39 in the comparators (rate ratio 0.9, 95% confidence interval 0.5-1.6). More than 9,000 women would need to be prescribed oral contraceptives to observe a difference of one case of thromboembolism. Results of the as-treated analysis were similar to those of the as-matched analysis. Ethinylestradiol/drospirenone initiators and initiators of other oral contraceptives are similarly likely to experience thromboembolism. Level of evidence: II.

  14. Estimation of teleported and gained parameters in a non-inertial frame

    NASA Astrophysics Data System (ADS)

    Metwally, N.

    2017-04-01

    Quantum Fisher information is introduced as a measure of estimating the teleported information between two users, one of which is uniformly accelerated. We show that the final teleported state depends on the initial parameters, in addition to the gained parameters during the teleportation process. The estimation degree of these parameters depends on the value of the acceleration, the used single mode approximation (within/beyond), the type of encoded information (classic/quantum) in the teleported state, and the entanglement of the initial communication channel. The estimation degree of the parameters can be maximized if the partners teleport classical information.
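
    For context, the quantum Fisher information of a pure state family |ψ(θ)⟩ and the associated quantum Cramér-Rao bound take the standard textbook forms (general expressions, not results derived in this paper), written in LaTeX:

      F_Q(\theta) \;=\; 4\left( \langle \partial_\theta \psi \,|\, \partial_\theta \psi \rangle
                    - \left| \langle \psi \,|\, \partial_\theta \psi \rangle \right|^2 \right),
      \qquad
      \operatorname{Var}\!\bigl(\hat{\theta}\bigr) \;\ge\; \frac{1}{M\, F_Q(\theta)},

    where M is the number of independent repetitions; a larger F_Q for an initial or gained parameter means that parameter can, in principle, be estimated more precisely from the teleported state.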

  15. Filamentary and hierarchical pictures - Kinetic energy criterion

    NASA Technical Reports Server (NTRS)

    Klypin, Anatoly A.; Melott, Adrian L.

    1992-01-01

    We present a new criterion for formation of second-generation filaments. The criterion called the kinetic energy ratio, KR, is based on comparison of peculiar velocities at different scales. We suggest that the clumpiness of the distribution in some cases might be less important than the 'coldness' or 'hotness' of the flow for formation of coherent structures. The kinetic energy ratio is analogous to the Mach number except for one essential difference. If at some scale KR is greater than 1, as estimated at the linear stage, then when fluctuations of this scale reach nonlinearity, the objects they produce must be anisotropic ('filamentary'). In the case of power-law initial spectra the kinetic ratio criterion suggests that the border line is the power-spectrum with the slope n = -1.

  16. When do species-tree and concatenated estimates disagree? An empirical analysis with higher-level scincid lizard phylogeny.

    PubMed

    Lambert, Shea M; Reeder, Tod W; Wiens, John J

    2015-01-01

    Simulation studies suggest that coalescent-based species-tree methods are generally more accurate than concatenated analyses. However, these species-tree methods remain impractical for many large datasets. Thus, a critical but unresolved issue is when and why concatenated and coalescent species-tree estimates will differ. We predict such differences for branches in concatenated trees that are short, weakly supported, and have conflicting gene trees. We test these predictions in Scincidae, the largest lizard family, with data from 10 nuclear genes for 17 ingroup taxa and 44 genes for 12 taxa. We support our initial predictions, and suggest that simply considering uncertainty in concatenated trees may sometimes encompass the differences between these methods. We also found that relaxed-clock concatenated trees can be surprisingly similar to the species-tree estimate. Remarkably, the coalescent species-tree estimates had slightly lower support values when based on many more genes (44 vs. 10) and a small (∼30%) reduction in taxon sampling. Thus, taxon sampling may be more important than gene sampling when applying species-tree methods to deep phylogenetic questions. Finally, our coalescent species-tree estimates tentatively support division of Scincidae into three monophyletic subfamilies, a result otherwise found only in concatenated analyses with extensive species sampling. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Multiple scene attitude estimator performance for LANDSAT-1

    NASA Technical Reports Server (NTRS)

    Rifman, S. S.; Monuki, A. T.; Shortwell, C. P.

    1979-01-01

    Initial results are presented to demonstrate the performance of a linear sequential estimator (Kalman filter) used to estimate a LANDSAT 1 spacecraft attitude time series defined for four scenes. With the revised estimator, a GCP-poor scene - a scene with no usable geodetic control points (GCPs) - can be rectified to higher accuracies than otherwise based on the use of GCPs in adjacent scenes. Attitude estimation errors were determined by the use of GCPs located in the GCP-poor test scene, but which were not used to update the Kalman filter. Initial results achieved indicate that errors of 500 m (rms) can be attained for the GCP-poor scenes. Operational factors are related to various scenarios.
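
    A linear sequential (Kalman) estimator for a single attitude angle and drift rate, updated by GCP-derived angle measurements from successive scenes, can be sketched as follows; the state model, noise levels, and measurements are illustrative assumptions, not the LANDSAT 1 processing values.

      import numpy as np

      dt = 1.0                                   # scene-to-scene interval (arbitrary units)
      F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition for [angle, rate]
      H = np.array([[1.0, 0.0]])                 # GCPs measure the angle only
      Q = np.diag([1e-6, 1e-8])                  # process noise (assumed)
      R = np.array([[1e-4]])                     # measurement noise (assumed)

      x = np.zeros(2)                            # initial attitude state
      P = np.eye(2) * 1e-2                       # initial covariance

      for z in [0.010, 0.012, 0.015, 0.018]:     # angle offsets from GCPs per scene
          x = F @ x                              # predict
          P = F @ P @ F.T + Q
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
          x = x + K @ (np.array([z]) - H @ x)    # update with the scene measurement
          P = (np.eye(2) - K @ H) @ P

      print(f"estimated angle {x[0]:.4f}, rate {x[1]:.5f} per scene")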

  18. Statistical errors and systematic biases in the calibration of the convective core overshooting with eclipsing binaries. A case study: TZ Fornacis

    NASA Astrophysics Data System (ADS)

    Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.

    2017-04-01

    Context. Recently published work has made high-precision fundamental parameters available for the binary system TZ Fornacis, making it an ideal target for the calibration of stellar models. Aims: Relying on these observations, we attempt to constrain the initial helium abundance, the age and the efficiency of the convective core overshooting. Our main aim is in pointing out the biases in the results due to not accounting for some sources of uncertainty. Methods: We adopt the SCEPtER pipeline, a maximum likelihood technique based on fine grids of stellar models computed for various values of metallicity, initial helium abundance and overshooting efficiency by means of two independent stellar evolutionary codes, namely FRANEC and MESA. Results: Beside the degeneracy between the estimated age and overshooting efficiency, we found the existence of multiple independent groups of solutions. The best one suggests a system of age 1.10 ± 0.07 Gyr composed of a primary star in the central helium burning stage and a secondary in the sub-giant branch (SGB). The resulting initial helium abundance is consistent with a helium-to-metal enrichment ratio of ΔY/ ΔZ = 1; the core overshooting parameter is β = 0.15 ± 0.01 for FRANEC and fov = 0.013 ± 0.001 for MESA. The second class of solutions, characterised by a worse goodness-of-fit, still suggest a primary star in the central helium-burning stage but a secondary in the overall contraction phase, at the end of the main sequence (MS). In this case, the FRANEC grid provides an age of Gyr and a core overshooting parameter , while the MESA grid gives 1.23 ± 0.03 Gyr and fov = 0.025 ± 0.003. We analyse the impact on the results of a larger, but typical, mass uncertainty and of neglecting the uncertainty in the initial helium content of the system. We show that very precise mass determinations with uncertainty of a few thousandths of solar mass are required to obtain reliable determinations of stellar parameters, as mass errors larger than approximately 1% lead to estimates that are not only less precise but also biased. Moreover, we show that a fit obtained with a grid of models computed at a fixed ΔY/ ΔZ - thus neglecting the current uncertainty in the initial helium content of the system - can provide severely biased age and overshooting estimates. The possibility of independent overshooting efficiencies for the two stars of the system is also explored. Conclusions: The present analysis confirms that to constrain the core overshooting parameter by means of binary systems is a very difficult task that requires an observational precision still rarely achieved and a robust statistical treatment of the error sources.

  19. Economic analysis of the global polio eradication initiative.

    PubMed

    Duintjer Tebbens, Radboud J; Pallansch, Mark A; Cochi, Stephen L; Wassilak, Steven G F; Linkins, Jennifer; Sutter, Roland W; Aylward, R Bruce; Thompson, Kimberly M

    2010-12-16

    The global polio eradication initiative (GPEI), which started in 1988, represents the single largest, internationally coordinated public health project to date. Completion remains within reach, with type 2 wild polioviruses apparently eradicated since 1999 and fewer than 2000 annual paralytic poliomyelitis cases of wild types 1 and 3 reported since then. This economic analysis of the GPEI reflects the status of the program as of February 2010, including full consideration of post-eradication policies. For the GPEI intervention, we consider the actual pre-eradication experience to date followed by two distinct potential future post-eradication vaccination policies. We estimate GPEI costs based on actual and projected expenditures and poliomyelitis incidence using reported numbers corrected for underreporting and model projections. For the comparator, which assumes only routine vaccination for polio historically and into the future (i.e., no GPEI), we estimate poliomyelitis incidence using a dynamic infection transmission model and costs based on numbers of vaccinated children. Cost-effectiveness ratios for the GPEI vs. only routine vaccination qualify as highly cost-effective based on standard criteria. We estimate incremental net benefits of the GPEI between 1988 and 2035 of approximately 40-50 billion dollars (2008 US dollars; 1988 net present values). Despite the high costs of achieving eradication in low-income countries, low-income countries account for approximately 85% of the total net benefits generated by the GPEI in the base case analysis. The total economic costs saved per prevented paralytic poliomyelitis case drive the incremental net benefits, which become positive even if we estimate the loss in productivity as a result of disability as below the recommended value of one year in average per-capita gross national income per disability-adjusted life year saved. Sensitivity analysis suggests that the finding of positive net benefits of the GPEI remains robust over a wide range of assumptions, and that consideration of the additional net benefits of externalities that occurred during polio campaigns to date, such as the mortality reduction associated with delivery of Vitamin A supplements, significantly increases the net benefits. This study finds a strong economic justification for the GPEI despite the rising costs of the initiative. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. Transfer of aged Pu to cattle grazing on a contaminated environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, R.O.; Engel, D.W.; Smith, D.D.

    1988-03-01

    Estimates are obtained of the fraction of ingested or inhaled 239+240Pu transferred to blood and tissues of a reproducing herd of beef cattle, individuals of which grazed within fenced enclosures for up to 1064 d under natural conditions with no supplemental feeding at an arid site contaminated 16 y previously with Pu oxide. The estimated (geometric mean (GM)) fraction of Pu transferred from the gastrointestinal tract to blood serum was about 5 x 10(-6) (geometric standard error (GSE) = 1.4) with an approximate upper bound of about 2 x 10(-5). These results are in reasonable agreement with the value of 1 x 10(-5) recommended for human radiation protection purposes by the International Commission on Radiological Protection (ICRP) for insoluble Pu oxides that are free of very small particles. Also, results from a laboratory study by Stanley (St75), in which large doses of 238Pu were orally administered daily to dairy cattle for 19 consecutive days, suggest that aged 239+240Pu at this arid grazing site may not be more biologically available to blood serum than fresh 239+240Pu oxide. The estimated fractions of 239+240Pu transferred from blood serum to tissues of adult grazing cattle were: femur (3.2 X 10(-2), 1.8; GM, GSE), vertebra (1.4 X 10(-1), 1.6), liver (2.3 X 10(-1), 2.0), muscle (1.3 X 10(-1), 1.9), female gonads (7.9 X 10(-5), 1.5), and kidney (1.4 X 10(-3), 1.7). The blood-to-tissue fractional transfers for cattle initially exposed in utero were greater than those exposed only as adults by a factor of about 4 for femur (statistically significant) and of about 2 for other tissues (not significant). The estimated (GM) fraction of inhaled Pu initially deposited in the pulmonary lung was 0.34 (GSE = 1.3) for adults and 0.15 (GSE = 1.3) for cattle initially exposed in utero (a statistically significant difference).

  1. Estimation of chaotic coupled map lattices using symbolic vector dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya

    2010-01-01

    In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original symbolic vector dynamics based method has been proposed for initial condition estimation in additive white Gaussian noisy environment. The estimation precision of this estimation method is determined by symbolic errors of the symbolic vector sequence gotten by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method. We correct symbolic errors with backward vector and the estimated values by using different symbols, and thus the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover initial condition of coupled map lattice exactly in both noisy and noise free cases. Therefore, we provide novel analytical techniques for understanding turbulences in coupled map lattice.

  2. Pharmacist-patient communication about medication regimen adjustment during Ramadan.

    PubMed

    Amin, Mohamed E K; Chewning, Betty

    2016-12-01

    During Ramadan, Muslims fast from dawn to sunset while abstaining from food and drink. Although Muslim patients may be aware of their religious exemption from fasting, many patients still choose not to take that exemption and fast. This study examines pharmacists' initiation and timing of communication about medication regimen adjustment (MRA) with patients related to Ramadan. Predictors for initiating this communication with patients were also explored. A probability sample of community pharmacists in Alexandria, Egypt, was surveyed. The self-administered instrument covered timing and likelihood of initiating discussion about MRA. Using ordered logistic regression, a model was estimated to predict pharmacists' initiation of the conversation on MRA during Ramadan. Ninety-three percent of the 298 approached pharmacists completed surveys. Only 16% of the pharmacists reported that they themselves usually initiated the conversation on MRA. Pharmacists' initiation of these conversations was associated with pharmacists' perceived importance of MRA for pharmacy revenue (odds ratio (OR) = 1.24, CI = 1.03-1.48). Eighty percent of the responding pharmacists reported that the MRA conversation for chronic conditions started either 1-3 days before Ramadan or during its first week. These results suggest considerable pharmacist-patient communication gaps regarding medication use during Ramadan. It is especially important for pharmacists and other health professionals to initiate communication with Muslim patients early enough to identify how best to help patients transition safely into and out of Ramadan as they fast. © 2016 Royal Pharmaceutical Society.

  3. Trends in CD4 cell count response to first-line antiretroviral treatment in HIV-positive patients from Asia, 2003-2013: TREAT Asia HIV Observational Database Low Intensity Transfer.

    PubMed

    De La Mata, Nicole L; Ly, Penh S; Ng, Oon T; Nguyen, Kinh V; Merati, Tuti P; Pham, Thuy T; Lee, Man P; Choi, Jun Y; Sohn, Annette H; Law, Matthew G; Kumarasamy, Nagalingeswaran

    2017-11-01

    Antiretroviral treatment (ART) guidelines have changed over the past decade, recommending earlier initiation and more tolerable regimens. The study objective was to examine the CD4 response to ART, depending on the year of ART initiation, in HIV-positive patients in the Asia-Pacific. We included HIV-positive adult patients who initiated ART between 2003 and 2013 in our regional cohort from eight urban referral centres in seven countries within Asia. We used mixed-effects linear regression models to evaluate differences in CD4 response by year of ART initiation during 36 months of follow-up, adjusted a priori for other covariates. Overall, 16,962 patients were included. Patients initiating in 2006-9 and 2010-13 had an estimated mean CD4 cell count increase of 8 and 15 cells/µl, respectively, at any given time during the 36-month follow-up, compared to those in 2003-5. The median CD4 cell count at ART initiation also increased from 96 cells/µl in 2003-5 to 173 cells/µl in 2010-13. Our results suggest that the CD4 response to ART is modestly higher for those initiating ART in more recent years. Moreover, fewer patients are presenting with lower absolute CD4 cell counts over time. This is likely to reduce their risk of opportunistic infections and future non-AIDS defining cancers.

  4. Depth dependence of earthquake frequency-magnitude distributions in California: Implications for rupture initiation

    USGS Publications Warehouse

    Mori, J.; Abercrombie, R.E.

    1997-01-01

    Statistics of earthquakes in California show linear frequency-magnitude relationships in the range of M2.0 to M5.5 for various data sets. Assuming Gutenberg-Richter distributions, there is a systematic decrease in b value with increasing depth of earthquakes. We find consistent results for various data sets from northern and southern California that both include and exclude the larger aftershock sequences. We suggest that at shallow depth (???0 to 6 km) conditions with more heterogeneous material properties and lower lithospheric stress prevail. Rupture initiations are more likely to stop before growing into large earthquakes, producing relatively more smaller earthquakes and consequently higher b values. These ideas help to explain the depth-dependent observations of foreshocks in the western United States. The higher occurrence rate of foreshocks preceding shallow earthquakes can be interpreted in terms of rupture initiations that are stopped before growing into the mainshock. At greater depth (9-15 km), any rupture initiation is more likely to continue growing into a larger event, so there are fewer foreshocks. If one assumes that frequency-magnitude statistics can be used to estimate probabilities of a small rupture initiation growing into a larger earthquake, then a small (M2) rupture initiation at 9 to 12 km depth is 18 times more likely to grow into a M5.5 or larger event, compared to the same small rupture initiation at 0 to 3 km. Copyright 1997 by the American Geophysical Union.
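
    The depth dependence of b can be illustrated with the standard Aki maximum-likelihood estimator applied to magnitudes in separate depth bins; the catalogues below are synthetic draws, not the California data.

      import numpy as np

      def b_value(mags, m_c):
          """Aki maximum-likelihood b-value above completeness magnitude m_c.
          (A binned catalogue would subtract half the bin width from m_c.)"""
          m = mags[mags >= m_c]
          return np.log10(np.e) / (m.mean() - m_c)

      rng = np.random.default_rng(5)
      m_c = 2.0
      # Synthetic depth bins: shallow events drawn with b ~ 1.1, deep events with b ~ 0.8.
      shallow = m_c + rng.exponential(1 / (1.1 * np.log(10)), 5000)
      deep = m_c + rng.exponential(1 / (0.8 * np.log(10)), 5000)

      print(f"b (shallow bin): {b_value(shallow, m_c):.2f}")
      print(f"b (deep bin):    {b_value(deep, m_c):.2f}")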

  5. Study of the ablative effects on tektites. [wake shielding during atmospheric entry

    NASA Technical Reports Server (NTRS)

    Sepri, P.; Chen, K. K.

    1976-01-01

    Equations are presented which provide approximate parameters describing surface heating and tektite deceleration during atmosphere passage. Numerical estimates of these parameters using typical initial and ambient conditions support the conclusion that the commonly assumed trajectories would not have produced some of the observed surface markings. It is suggested that tektites did not enter the atmosphere singly but rather in a swarm dense enough to afford wake shielding according to a shock envelope model which is proposed. A further aerodynamic mechanism is described which is compatible with hemispherical pits occurring on tektite surfaces.

  6. Paleoseismic Evidence for Recurrence of Earthquakes near Charleston, South Carolina

    NASA Astrophysics Data System (ADS)

    Talwani, Pradeep; Cox, John

    1985-07-01

    A destructive earthquake that occurred in 1886 near Charleston, South Carolina, was associated with widespread liquefaction of shallow sand structures and their extravasation to the surface. Several seismically induced paleoliquefaction structures preserved within the shallow sediments in the meizoseismal area of the 1886 event were identified. Field evidence and radiocarbon dates suggest that at least two earthquakes of magnitudes greater than 6.2 preceded the 1886 event in the past 3000 to 3700 years. The evidence yielded an initial estimate of about 1500 to 1800 years for the maximum recurrence of destructive, intraplate earthquakes in the Charleston region.

  7. Future Directions for the National Health Accounts

    PubMed Central

    Huskamp, Haiden A.; Newhouse, Joseph P.

    1999-01-01

    Over the past 15 years, the Health Care Financing Administration (HCFA) has engaged in ongoing efforts to improve the methodology and data collection processes used to develop the national health accounts (NHA) estimates of national health expenditures (NHE). In March 1998, HCFA initiated a third conference to explore possible improvements or useful extensions to the current NHA projects. This article summarizes the issues discussed at the conference, provides an overview of three commissioned papers on future directions for the NHA that were presented, and summarizes suggestions made by participants regarding future directions for the accounts. PMID:11481786

  8. Mass Loss Near the Eddington Limit

    NASA Astrophysics Data System (ADS)

    Bjorkman, J. E.

    2005-09-01

    We investigate whether "continuum" opacity near the Fe peak at log T = 5.2 can produce the great eruption of η Car. Our simple estimates show that η Car can be super-Eddington (Γ > 1) well below the photosphere. The super-Eddington region is sufficiently extended that it can drive a very large mass loss rate (a few 0.1 M⊙ yr⁻¹) to well above the escape speed (several 100 km s⁻¹). Furthermore, once initiated, it appears plausible that continuum-driving may "run away," approaching the photon tiring limit. This suggests continuum-driving may be capable of producing the great eruption.
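
    The Eddington factor invoked here has the standard textbook form (a general expression, not a result specific to this abstract), written in LaTeX:

      \Gamma \;=\; \frac{\kappa L}{4\pi G M c},

    where κ is the flux-mean opacity; Γ > 1 means radiative acceleration exceeds gravity, and the photon tiring limit is reached when the work of lifting the outflow out of the gravitational potential consumes the available luminosity.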

  9. Trajectory-Based Takeoff Time Predictions Applied to Tactical Departure Scheduling: Concept Description, System Design, and Initial Observations

    NASA Technical Reports Server (NTRS)

    Engelland, Shawn A.; Capps, Alan

    2011-01-01

    Current aircraft departure release times are based on manual estimates of aircraft takeoff times. Uncertainty in takeoff time estimates may result in missed opportunities to merge into constrained en route streams and lead to lost throughput. However, technology exists to improve takeoff time estimates by using the aircraft surface trajectory predictions that enable air traffic control tower (ATCT) decision support tools. NASA's Precision Departure Release Capability (PDRC) is designed to use automated surface trajectory-based takeoff time estimates to improve en route tactical departure scheduling. This is accomplished by integrating an ATCT decision support tool with an en route tactical departure scheduling decision support tool. The PDRC concept and prototype software have been developed, and an initial test was completed at air traffic control facilities in Dallas/Fort Worth. This paper describes the PDRC operational concept, system design, and initial observations.

  10. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    PubMed

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
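
    The processing chain described above lends itself to a short sketch. The code below is a simplified illustration under stated assumptions, not the authors' implementation: it assumes a rectified stereo pair so disparity can be read from the horizontal optical-flow component, replaces the color-line atmospheric light estimation with a simple bright-pixel average, skips the iterative transmission refinement, and treats the attenuation coefficient, focal length, and baseline as hypothetical constants.

      import cv2
      import numpy as np

      def defog_stereo(left_bgr, right_bgr, focal_px=700.0, baseline_m=0.012, beta=0.08):
          """Toy stereo defogging: optical flow -> disparity -> transmission -> recovery."""
          g_l = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
          g_r = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

          # 1. Optical flow between the views; for a rectified pair the horizontal
          #    component approximates the disparity map.
          flow = cv2.calcOpticalFlowFarneback(g_l, g_r, None, 0.5, 3, 15, 3, 5, 1.2, 0)
          disparity = np.maximum(np.abs(flow[..., 0]), 0.5)

          # 2. Depth from disparity, then transmission via the Koschmieder model t = exp(-beta*d).
          depth = focal_px * baseline_m / disparity
          t = np.clip(np.exp(-beta * depth), 0.1, 1.0)

          # 3. Crude atmospheric light: mean color of the brightest 0.1% of pixels.
          flat = left_bgr.reshape(-1, 3).astype(np.float64)
          idx = np.argsort(flat.sum(axis=1))[-max(1, flat.shape[0] // 1000):]
          A = flat[idx].mean(axis=0)

          # 4. Scene radiance recovery: J = (I - A) / t + A.
          J = (left_bgr.astype(np.float64) - A) / t[..., None] + A
          return np.clip(J, 0, 255).astype(np.uint8)

    In the paper the transmission map is then refined iteratively using the defogged pair; that loop is omitted in this sketch.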

  11. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    PubMed Central

    Park, Jinho; Park, Hasil

    2017-01-01

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system. PMID:29232826

  12. Robust estimation of thermodynamic parameters (ΔH, ΔS and ΔCp) for prediction of retention time in gas chromatography - Part II (Application).

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-12-18

    For this work, an analysis of parameter estimation for the retention factor in the GC model was performed, considering two different criteria: the sum of squared errors and the maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) in the parameter estimation. The reduction was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. An algorithm for computing moments-based flood quantile estimates when historical flood information is available

    USGS Publications Warehouse

    Cohn, T.A.; Lane, W.L.; Baier, W.G.

    1997-01-01

    This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.
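
    A compact sketch of the expected-moments iteration is given below. To keep it self-contained it uses a two-parameter normal distribution for log-flows (i.e., a log-normal fit) rather than the three-parameter log-Pearson type III of the paper, and the threshold, record lengths, and flows are invented; only the structure of the loop (systematic moments, plus observed historical peaks, plus expected moments of below-threshold years given the current parameters) follows the description above.

      import numpy as np
      from scipy.stats import truncnorm

      # Hypothetical data: log10 of systematic annual peaks, log10 of known historical
      # peaks above a perception threshold T, and the count of historical years below T.
      sys_logq = np.log10([820, 1540, 990, 2300, 760, 1880, 1210, 905, 3100, 1430])
      hist_logq = np.log10([5200, 4100])      # measured historical floods (> T)
      T = np.log10(3500.0)                    # perception threshold
      n_below = 148                           # historical years with no flood above T

      # Initial parameter estimates from the systematic record only
      mu, sigma = sys_logq.mean(), sys_logq.std(ddof=1)

      for _ in range(50):
          # Expected first and second moments of below-threshold years given (mu, sigma)
          below = truncnorm(-np.inf, (T - mu) / sigma, loc=mu, scale=sigma)
          e1, e2 = below.mean(), below.var() + below.mean() ** 2

          n = len(sys_logq) + len(hist_logq) + n_below
          m1 = (sys_logq.sum() + hist_logq.sum() + n_below * e1) / n
          m2 = (np.sum(sys_logq**2) + np.sum(hist_logq**2) + n_below * e2) / n

          mu_new, sigma_new = m1, np.sqrt(max(m2 - m1**2, 1e-12))
          if abs(mu_new - mu) < 1e-8 and abs(sigma_new - sigma) < 1e-8:
              break
          mu, sigma = mu_new, sigma_new

      print(f"EMA-style fit: mu = {mu:.3f}, sigma = {sigma:.3f} (log10 units)")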

  14. An algorithm for computing moments-based flood quantile estimates when historical flood information is available

    NASA Astrophysics Data System (ADS)

    Cohn, T. A.; Lane, W. L.; Baier, W. G.

    This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.

  15. Cost-effectiveness analysis of interferon beta-1b for the treatment of patients with a first clinical event suggestive of multiple sclerosis.

    PubMed

    Caloyeras, John P; Zhang, Bin; Wang, Cheng; Eriksson, Marianne; Fredrikson, Sten; Beckmann, Karola; Knappertz, Volker; Pohl, Christoph; Hartung, Hans-Peter; Shah, Dhvani; Miller, Jeffrey D; Sandbrink, Rupert; Lanius, Vivian; Gondek, Kathleen; Russell, Mason W

    2012-05-01

    To assess, from a Swedish societal perspective, the cost effectiveness of interferon β-1b (IFNB-1b) after an initial clinical event suggestive of multiple sclerosis (MS) (ie, early treatment) compared with treatment after onset of clinically definite MS (CDMS) (ie, delayed treatment). A Markov model was developed, using patient level data from the BENEFIT trial and published literature, to estimate health outcomes and costs associated with IFNB-1b for hypothetical cohorts of patients after an initial clinical event suggestive of MS. Health states were defined by Kurtzke Expanded Disability Status Scale (EDSS) scores. Model outcomes included quality-adjusted life years (QALYs), total costs (including both direct and indirect costs), and incremental cost-effectiveness ratios. Sensitivity analyses were performed on key model parameters to assess the robustness of model results. In the base case scenario, early IFNB-1b treatment was economically dominant (ie, less costly and more effective) versus delayed IFNB-1b treatment when QALYs were used as the effectiveness metric. Sensitivity analyses showed that the cost-effectiveness results were sensitive to model time horizon. Compared with the delayed treatment strategy, early treatment of MS was also associated with delayed EDSS progressions, prolonged time to CDMS diagnosis, and a reduction in frequency of relapse. Early treatment with IFNB-1b for a first clinical event suggestive of MS was found to improve patient outcomes while controlling costs. Copyright © 2012 Elsevier HS Journals, Inc. All rights reserved.
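
    The cost-effectiveness comparison rests on a standard Markov cohort calculation, which the short sketch below reproduces in miniature. All of the numbers (three collapsed EDSS-band states plus death, transition probabilities, per-cycle costs and utilities, discount rate) are hypothetical placeholders, not values from the BENEFIT trial or the published model; the point is only to show how discounted QALYs, costs, and the incremental cost-effectiveness ratio are assembled.

      import numpy as np

      def run_cohort(P, cost, utility, cycles=40, disc=0.03):
          """Discounted cost and QALY totals for a Markov cohort starting in state 0."""
          dist = np.zeros(P.shape[0]); dist[0] = 1.0
          total_cost = total_qaly = 0.0
          for t in range(cycles):
              w = 1.0 / (1.0 + disc) ** t
              total_cost += w * (dist @ cost)
              total_qaly += w * (dist @ utility)
              dist = dist @ P
          return total_cost, total_qaly

      # States: 0 = low EDSS, 1 = moderate EDSS, 2 = high EDSS, 3 = dead (all hypothetical)
      P_early = np.array([[0.90, 0.08, 0.01, 0.01],
                          [0.00, 0.88, 0.10, 0.02],
                          [0.00, 0.00, 0.95, 0.05],
                          [0.00, 0.00, 0.00, 1.00]])
      P_delayed = np.array([[0.85, 0.12, 0.02, 0.01],
                            [0.00, 0.84, 0.13, 0.03],
                            [0.00, 0.00, 0.94, 0.06],
                            [0.00, 0.00, 0.00, 1.00]])
      cost = np.array([12000.0, 18000.0, 30000.0, 0.0])   # per-cycle costs, placeholders
      util = np.array([0.80, 0.60, 0.35, 0.0])            # per-cycle utilities, placeholders

      c_e, q_e = run_cohort(P_early, cost, util)
      c_d, q_d = run_cohort(P_delayed, cost, util)
      # If early treatment is cheaper and yields more QALYs, it dominates; otherwise report an ICER.
      if c_e <= c_d and q_e >= q_d:
          print("early treatment dominates")
      else:
          print(f"ICER = {(c_e - c_d) / (q_e - q_d):.0f} per QALY gained")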

  16. Defluoridation of drinking water using a new flow column-electrocoagulation reactor (FCER) - Experimental, statistical, and economic approach.

    PubMed

    Hashim, Khalid S; Shaw, Andy; Al Khaddar, Rafid; Ortoneda Pedrola, Montserrat; Phipps, David

    2017-07-15

    A new batch, flow column electrocoagulation reactor (FCER) that utilises a perforated plate flow column as a mixer has been used to remove fluoride from drinking water. A comprehensive study has been carried out to assess its performance. The efficiency of fluoride removal (R%) as a function of key operational parameters such as initial pH, detention time (t), current density (CD), inter-electrode distance (ID) and initial concentration (C0) has been examined and an empirical model has been developed. A scanning electron microscopy (SEM) investigation of the influence of the EC process on the morphology of the surface of the aluminium electrodes showed the erosion caused by aluminium loss. A preliminary estimation of the reactor's operating cost is suggested, allowing for the energy recoverable from the amount of hydrogen gas produced. The results obtained showed that 98% of fluoride was removed within 25 min of electrolysis at pH of 6, ID of 5 mm, and CD of 2 mA/cm². The general relationship between fluoride removal and operating parameters could be described by a linear model with R² of 0.823. The contribution of the operating parameters to the suggested model followed the order: t > CD > C0 > ID > pH. The SEM images obtained showed that, after the EC process, the surface of the anodes became non-uniform with a large number of irregularities due to the generation of aluminium hydroxides. It is suggested that these do not materially affect the performance. A provisional estimate of the operating cost was 0.379 US $/m³. Additionally, it has been found that 0.6 kW/m³ is potentially recoverable from the H2 gas. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  17. Population ecology of breeding Pacific common eiders on the Yukon-Kuskokwim Delta, Alaska

    USGS Publications Warehouse

    Wilson, Heather M.; Flint, Paul L.; Powell, Abby N.; Grand, J. Barry; Moral, Christine L.

    2012-01-01

    Populations of Pacific common eiders (Somateria mollissima v-nigrum) on the Yukon-Kuskokwim Delta (YKD) in western Alaska declined by 50–90% from 1957 to 1992 and then stabilized at reduced numbers from the early 1990s to the present. We investigated the underlying processes affecting their population dynamics by collection and analysis of demographic data from Pacific common eiders at 3 sites on the YKD (1991–2004) for 29 site-years. We examined variation in components of reproduction, tested hypotheses about the influence of specific ecological factors on life-history variables, and investigated their relative contributions to local population dynamics. Reproductive output was low and variable, both within and among individuals, whereas apparent survival of adult females was high and relatively invariant (0.89 ± 0.005). All reproductive parameters varied across study sites and years. Clutch initiation dates ranged from 4 May to 28 June, with peak (modal) initiation occurring on 26 May. Females at an island study site consistently initiated clutches 3–5 days earlier in each year than those on 2 mainland sites. Population variance in nest initiation date was negatively related to the peak, suggesting increased synchrony in years of delayed initiation. On average, total clutch size (laid) ranged from 4.8 to 6.6 eggs, and declined with date of nest initiation. After accounting for partial predation and non-viability of eggs, average clutch size at hatch ranged from 2.0 to 5.8 eggs. Within seasons, daily survival probability (DSP) of nests was lowest during egg-laying and late-initiation dates. Estimated nest survival varied considerably across sites and years (mean = 0.55, range: 0.06–0.92), but process variance in nest survival was relatively low (0.02, CI: 0.01–0.05), indicating that most variance was likely attributed to sampling error. We found evidence that observer effects may have reduced overall nest survival by 0.0–0.36 across site-years. Study sites with lower sample sizes and more frequent visitations appeared to experience greater observer effects. In general, Pacific common eiders exhibited high spatio-temporal variance in reproductive components. Larger clutch sizes and high nest survival at early initiation dates suggested directional selection favoring early nesting. However, stochastic environmental effects may have precluded response to this apparent selection pressure. Our results suggest that females breeding early in the season have the greatest reproductive value, as these birds lay the largest clutches and have the highest probability of successfully hatching. We developed stochastic, stage-based, matrix population models that incorporated observed spatio-temporal (process) variance and co-variation in vital rates, and projected the stable stage distribution and population growth rate (λ). We used perturbation analyses to examine the relative influence of changes in vital rates on λ and variance decomposition to assess the proportion of variation in λ explained by process variation in each vital rate. In addition to matrix-based λ, we estimated λ using capture–recapture approaches, and log-linear regression. We found the stable age distribution for Pacific common eiders was weighted heavily towards experienced adult females (≥4 yr of age), and all calculations of λ indicated that the YKD population was stable to slightly increasing (λ_matrix = 1.02, CI: 1.00–1.04; λ_reverse capture–recapture = 1.05, CI: 0.99–1.11; λ_log-linear = 1.04, CI: 0.98–1.10).
Perturbation analyses suggested the population would respond most dramatically to changes in adult female survival (relative influence of adult survival was 1.5 times that of fecundity), whereas retrospective variation in λ was primarily explained by fecundity parameters (60%), particularly duckling survival (42%). Among components of fecundity, sensitivities were highest for duckling survival, suggesti

  18. Multiple data sources improve DNA-based mark-recapture population estimates of grizzly bears.

    PubMed

    Boulanger, John; Kendall, Katherine C; Stetz, Jeffrey B; Roon, David A; Waits, Lisette P; Paetkau, David

    2008-04-01

    A fundamental challenge to estimating population size with mark-recapture methods is heterogeneous capture probabilities and subsequent bias of population estimates. Confronting this problem usually requires substantial sampling effort that can be difficult to achieve for some species, such as carnivores. We developed a methodology that uses two data sources to deal with heterogeneity and applied this to DNA mark-recapture data from grizzly bears (Ursus arctos). We improved population estimates by incorporating additional DNA "captures" of grizzly bears obtained by collecting hair from unbaited bear rub trees concurrently with baited, grid-based, hair snag sampling. We consider a Lincoln-Petersen estimator with hair snag captures as the initial session and rub tree captures as the recapture session and develop an estimator in program MARK that treats hair snag and rub tree samples as successive sessions. Using empirical data from a large-scale project in the greater Glacier National Park, Montana, USA, area and simulation modeling we evaluate these methods and compare the results to hair-snag-only estimates. Empirical results indicate that, compared with hair-snag-only data, the joint hair-snag-rub-tree methods produce similar but more precise estimates if capture and recapture rates are reasonably high for both methods. Simulation results suggest that estimators are potentially affected by correlation of capture probabilities between sample types in the presence of heterogeneity. Overall, closed population Huggins-Pledger estimators showed the highest precision and were most robust to sparse data, heterogeneity, and capture probability correlation among sampling types. Results also indicate that these estimators can be used when a segment of the population has zero capture probability for one of the methods. We propose that this general methodology may be useful for other species in which mark-recapture data are available from multiple sources.
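
    For readers unfamiliar with the two-session estimator mentioned above, the sketch below implements the bias-corrected (Chapman) form of the Lincoln-Petersen estimator, treating hair-snag detections as the first session and rub-tree detections as the recapture session. The counts are invented; the closed-population Huggins-Pledger models favored in the paper handle heterogeneity and covariates and are not reproduced here.

      def chapman_estimate(n1, n2, m):
          """Bias-corrected Lincoln-Petersen (Chapman) estimate and its variance.

          n1 -- individuals detected in session 1 (e.g., hair snags)
          n2 -- individuals detected in session 2 (e.g., rub trees)
          m  -- individuals detected in both sessions
          """
          n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
          var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
          return n_hat, var

      # Hypothetical counts for illustration only
      n_hat, var = chapman_estimate(n1=185, n2=96, m=41)
      se = var ** 0.5
      print(f"N = {n_hat:.0f}, approximate 95% CI {n_hat - 1.96 * se:.0f}-{n_hat + 1.96 * se:.0f}")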

  19. Statistical Reviewers Improve Reporting in Biomedical Articles: A Randomized Trial

    PubMed Central

    Cobo, Erik; Selva-O'Callagham, Albert; Ribera, Josep-Maria; Cardellach, Francesc; Dominguez, Ruth; Vilardell, Miquel

    2007-01-01

    Background Although peer review is widely considered to be the most credible way of selecting manuscripts and improving the quality of accepted papers in scientific journals, there is little evidence to support its use. Our aim was to estimate the effects on manuscript quality of either adding a statistical peer reviewer or suggesting the use of checklists such as CONSORT or STARD to clinical reviewers or both. Methodology and Principal Findings Interventions were defined as 1) the addition of a statistical reviewer to the clinical peer review process, and 2) suggesting reporting guidelines to reviewers; with “no statistical expert” and “no checklist” as controls. The two interventions were crossed in a 2×2 balanced factorial design including original research articles consecutively selected, between May 2004 and March 2005, by the Medicina Clinica (Barc) editorial committee. We randomized manuscripts to minimize differences in terms of baseline quality and type of study (intervention, longitudinal, cross-sectional, others). Sample-size calculations indicated that 100 papers provide an 80% power to test a 55% standardized difference. We specified the main outcome as the increment in quality of papers as measured on the Goodman Scale. Two blinded evaluators rated the quality of manuscripts at initial submission and final post peer review version. Of the 327 manuscripts submitted to the journal, 131 were accepted for further review, and 129 were randomized. Of those, 14 that were lost to follow-up showed no differences in initial quality to the followed-up papers. Hence, 115 were included in the main analysis, with 16 rejected for publication after peer review. 21 (18.3%) of the 115 included papers were interventions, 46 (40.0%) were longitudinal designs, 28 (24.3%) cross-sectional and 20 (17.4%) others. The 16 (13.9%) rejected papers had a significantly lower initial score on the overall Goodman scale than accepted papers (difference 15.0, 95% CI: 4.6–24.4). The effect of suggesting a guideline to the reviewers had no effect on change in overall quality as measured by the Goodman scale (0.9, 95% CI: −0.3–+2.1). The estimated effect of adding a statistical reviewer was 5.5 (95% CI: 4.3–6.7), showing a significant improvement in quality. Conclusions and Significance This prospective randomized study shows the positive effect of adding a statistical reviewer to the field-expert peers in improving manuscript quality. We did not find a statistically significant positive effect by suggesting reviewers use reporting guidelines. PMID:17389922

  20. On the initiation of subduction

    NASA Technical Reports Server (NTRS)

    Mueller, Steve; Phillips, Roger J.

    1991-01-01

    Estimates of shear resistance associated with lithospheric thrusting and convergence represent lower bounds on the force necessary to promote trench formation. Three environments proposed as preferential sites of incipient subduction are investigated: passive continental margins, transform faults/fracture zones, and extinct ridges. None of these are predicted to convert into subduction zones simply by the accumulation of local gravitational stresses. Subduction cannot initiate through the foundering of dense oceanic lithosphere immediately adjacent to passive continental margins. The attempted subduction of buoyant material at a mature trench can result in large compressional forces in both subducting and overriding plates. This is the only tectonic force sufficient to trigger the nucleation of a new subduction zone. The ubiquitous distribution of transform faults and fracture zones, combined with the common proximity of these features to mature subduction complexes, suggests that they may represent the most likely sites of trench formation if they are even marginally weaker than normal oceanic lithosphere.

  1. The dynamics of income-related health inequality among American children.

    PubMed

    Chatterji, Pinka; Lahiri, Kajal; Song, Jingya

    2013-05-01

    We estimate and decompose income-related inequality in child health in the USA and analyze its dynamics using the recently introduced health mobility index. Data come from the 1997, 2002, and 2007 waves of the Child Development Supplement of the Panel Study of Income Dynamics. The findings show that income-related child health inequality remains stable as children grow up and enter adolescence. The main factor underlying income-related child health inequality is income itself, although other factors, such as maternal education, also play a role. Decomposition of income-related health mobility indicates that health changes over time are more favorable to children with lower initial family incomes versus children with higher initial family incomes. However, offsetting this effect, our findings also suggest that changes in income ranking over time are positively related to children's subsequent health status. Copyright © 2012 John Wiley & Sons, Ltd.

  2. Optimal Design for Placements of Tsunami Observing Systems to Accurately Characterize the Inducing Earthquake

    NASA Astrophysics Data System (ADS)

    Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji

    2017-12-01

    Numerous tsunami observation networks have recently been deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterization, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called a mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with considerably fewer observations than the existing tsunami observation networks.
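
    The EOF-based first stage lends itself to a short sketch: stack simulated responses from many hypothetical sources into a scenario-by-location matrix, take the leading right singular vectors as spatial modes, and pick the locations where those modes attain their extrema as candidate gauge sites. This is a schematic reading of the description above, with random numbers standing in for the Nankai Trough scenario suite, and it omits the mesh adaptive direct search refinement.

      import numpy as np

      rng = np.random.default_rng(1)
      n_scenarios, n_sites = 120, 500                    # hypothetical source suite and candidate grid
      X = rng.standard_normal((n_scenarios, n_sites))    # stand-in for peak amplitudes per site

      # EOF / principal component decomposition via SVD of the anomaly matrix
      Xc = X - X.mean(axis=0)
      _, s, Vt = np.linalg.svd(Xc, full_matrices=False)

      n_modes = 5
      candidates = set()
      for mode in Vt[:n_modes]:                # leading spatial modes
          candidates.add(int(np.argmax(mode))) # positive extremum of the mode
          candidates.add(int(np.argmin(mode))) # negative extremum of the mode

      explained = (s[:n_modes] ** 2).sum() / (s ** 2).sum()
      print(f"initial gauge candidates: {sorted(candidates)}")
      print(f"variance captured by {n_modes} modes: {explained:.1%}")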

  3. Estimating the global incidence of traumatic spinal cord injury.

    PubMed

    Fitzharris, M; Cripps, R A; Lee, B B

    2014-02-01

    Population modelling--forecasting. To estimate the global incidence of traumatic spinal cord injury (TSCI). An initiative of the International Spinal Cord Society (ISCoS) Prevention Committee. Regression techniques were used to derive regional and global estimates of TSCI incidence. Using the findings of 31 published studies, a regression model was fitted using a known number of TSCI cases as the dependent variable and the population at risk as the single independent variable. In the process of deriving TSCI incidence, an alternative TSCI model was specified in an attempt to arrive at an optimal way of estimating the global incidence of TSCI. The global incidence of TSCI was estimated to be 23 cases per 1,000,000 persons in 2007 (179,312 cases per annum). World Health Organization's regional results are provided. Understanding the incidence of TSCI is important for health service planning and for the determination of injury prevention priorities. In the absence of high-quality epidemiological studies of TSCI in each country, the estimation of TSCI obtained through population modelling can be used to overcome known deficits in global spinal cord injury (SCI) data. The incidence of TSCI is context specific, and an alternative regression model demonstrated how TSCI incidence estimates could be improved with additional data. The results highlight the need for data standardisation and comprehensive reporting of national level TSCI data. A step-wise approach from the collation of conventional epidemiological data through to population modelling is suggested.
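
    The core of the incidence model is a regression of reported case counts on the population at risk; with no intercept, the fitted slope is itself a pooled incidence rate. The toy sketch below shows that structure with invented study data (the actual 31-study dataset and the WHO regional breakdown are not reproduced here).

      import numpy as np

      # Hypothetical (population at risk, observed TSCI cases) pairs from published studies
      pop = np.array([2.1e6, 5.4e6, 8.8e6, 1.2e7, 3.5e6, 6.0e6])
      cases = np.array([48, 121, 190, 301, 75, 140])

      # Regression through the origin: cases = rate * population
      rate = np.linalg.lstsq(pop[:, None], cases, rcond=None)[0][0]
      print(f"pooled incidence: {rate * 1e6:.1f} cases per million person-years")

      # Applying the fitted rate to a target population gives a model-based case estimate
      target_pop = 6.7e9
      print(f"predicted annual cases: {rate * target_pop:,.0f}")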

  4. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000 during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and results from other studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).

  5. Average discharge, perennial flow initiation, and channel initiation - small southern Appalachian basins

    Treesearch

    B. Lane Rivenbark; C. Rhett Jackson

    2004-01-01

    Regional average evapotranspiration estimates developed by water balance techniques are frequently used to estimate average discharge in ungaged streams. However, the lower stream size range for the validity of these techniques has not been explored. Flow records were collected and evaluated for 16 small streams in the Southern Appalachians to test whether the...

  6. On the Error of the Dixon Plot for Estimating the Inhibition Constant between Enzyme and Inhibitor

    ERIC Educational Resources Information Center

    Fukushima, Yoshihiro; Ushimaru, Makoto; Takahara, Satoshi

    2002-01-01

    In textbook treatments of enzyme inhibition kinetics, adjustment of the initial inhibitor concentration for inhibitor bound to enzyme is often neglected. For example, in graphical plots such as the Dixon plot for estimation of an inhibition constant, the initial concentration of inhibitor is usually plotted instead of the true inhibitor…
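
    The point being made can be stated compactly. For a competitive inhibitor, the Dixon plot graphs the reciprocal rate against inhibitor concentration, and the abscissa of the intersection point of lines at different substrate concentrations gives -K_i. The LaTeX below writes the relevant relations; the free-inhibitor correction shown is the textbook form, not necessarily the exact treatment in the article above.

      \frac{1}{v} \;=\; \frac{K_m}{V_{\max}\,[S]}\left(1 + \frac{[I]_{\mathrm{free}}}{K_i}\right) + \frac{1}{V_{\max}},
      \qquad
      [I]_{\mathrm{free}} \;=\; [I]_0 - [EI]
      % Plotting 1/v against the nominal concentration [I]_0 instead of [I]_free is
      % harmless only when the total enzyme concentration is much smaller than [I]_0,
      % so that the bound fraction [EI] is negligible; otherwise the apparent K_i read
      % from the Dixon plot is biased.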

  7. 32 CFR Appendix D to Part 169a - Commercial Activities Management Information System (CAMIS)

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... to provide an initial estimate of the manpower associated with the activity (or activities). The initial estimate of the manpower in this section of the CCR will be in all cases those manpower figures... Medical Program of the Uniformed Services (CHAMPUS) [3D1] E—Defense Advanced Research Projects Agency F...

  8. 32 CFR Appendix D to Part 169a - Commercial Activities Management Information System (CAMIS)

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to provide an initial estimate of the manpower associated with the activity (or activities). The initial estimate of the manpower in this section of the CCR will be in all cases those manpower figures... Medical Program of the Uniformed Services (CHAMPUS) [3D1] E—Defense Advanced Research Projects Agency F...

  9. 32 CFR Appendix D to Part 169a - Commercial Activities Management Information System (CAMIS)

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to provide an initial estimate of the manpower associated with the activity (or activities). The initial estimate of the manpower in this section of the CCR will be in all cases those manpower figures... Medical Program of the Uniformed Services (CHAMPUS) [3D1] E—Defense Advanced Research Projects Agency F...

  10. 32 CFR Appendix D to Part 169a - Commercial Activities Management Information System (CAMIS)

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... to provide an initial estimate of the manpower associated with the activity (or activities). The initial estimate of the manpower in this section of the CCR will be in all cases those manpower figures... Medical Program of the Uniformed Services (CHAMPUS) [3D1] E—Defense Advanced Research Projects Agency F...

  11. Assessing power of large river fish monitoring programs to detect population changes: the Missouri River sturgeon example

    USGS Publications Warehouse

    Wildhaber, M.L.; Holan, S.H.; Bryan, J.L.; Gladish, D.W.; Ellersieck, M.

    2011-01-01

    In 2003, the US Army Corps of Engineers initiated the Pallid Sturgeon Population Assessment Program (PSPAP) to monitor pallid sturgeon and the fish community of the Missouri River. The power analysis of PSPAP presented here was conducted to guide sampling design and effort decisions. The PSPAP sampling design has a nested structure with multiple gear subsamples within a river bend. Power analyses were based on a normal linear mixed model, using a mixed cell means approach, with variance estimates from the original data. It was found that, at current effort levels, at least 20 years for pallid and 10 years for shovelnose sturgeon is needed to detect a 5% annual decline. Modified bootstrap simulations suggest power estimates from the original data are conservative due to excessive zero fish counts. In general, the approach presented is applicable to a wide array of animal monitoring programs.
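
    The kind of power calculation described above can be approximated with a straightforward Monte Carlo, as sketched below: simulate annual catch indices declining 5% per year with lognormal noise, fit a log-linear trend, and count how often the decline is detected. The noise level, series lengths, and test are placeholders; the published analysis used a normal linear mixed model that respects the nested bend/gear structure, which this sketch deliberately ignores.

      import numpy as np
      from scipy.stats import linregress

      def power_to_detect_decline(n_years, annual_decline=0.05, cv=0.4,
                                  n_sims=2000, alpha=0.05, seed=0):
          rng = np.random.default_rng(seed)
          years = np.arange(n_years)
          mean_index = 100.0 * (1.0 - annual_decline) ** years
          detections = 0
          for _ in range(n_sims):
              # Lognormal sampling noise around the declining trend
              obs = mean_index * rng.lognormal(mean=0.0, sigma=cv, size=n_years)
              fit = linregress(years, np.log(obs))
              if fit.pvalue < alpha and fit.slope < 0:
                  detections += 1
          return detections / n_sims

      for n in (5, 10, 15, 20):
          print(n, "years of monitoring, power =", round(power_to_detect_decline(n), 2))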

  12. Undisturbed upright stance control in the elderly: Part 2. Postural-control impairments of elderly fallers.

    PubMed

    Berger, L; Chuzel, M; Buisson, G; Rougier, P

    2005-09-01

    A common way of predicting falling risks in elderly people can be to study center of pressure (CP) trajectories during undisturbed upright stance maintenance. By estimating the difference between CP and center of gravity (CG) motions (CP - CGv), one can estimate the neuromuscular activity. The results of this study, which included 34 sedentary elderly persons aged over 75 years (21 fallers and 13 nonfallers), demonstrated significantly increased CGh and CP - CGv motions in both axes for the fallers. In addition, the fallers presented larger CGh motions in the mediolateral axis, suggesting an enlarged loading-unloading mechanism, which could have reflected the adoption of a step-initiating strategy. As highlighted by fractional Brownian motion modeling, the distance covered by the CP - CGv motions before the successive control mechanisms switched was enhanced for the fallers in both axes, therefore increasing the risk that the CG would be outside of the base of support.

  13. A cat's tale: the impact of genetic restoration on Florida panther population dynamics and persistence.

    PubMed

    Hostetler, Jeffrey A; Onorato, David P; Jansen, Deborah; Oli, Madan K

    2013-05-01

    1. Genetic restoration has been suggested as a management tool for mitigating detrimental effects of inbreeding depression in small, inbred populations, but the demographic mechanisms underlying population-level responses to genetic restoration remain poorly understood. 2. We studied the dynamics and persistence of the endangered Florida panther Puma concolor coryi population and evaluated the potential influence of genetic restoration on population growth and persistence parameters. As part of the genetic restoration programme, eight female Texas pumas P. c. stanleyana were released into Florida panther habitat in southern Florida in 1995. 3. The overall asymptotic population growth rate (λ) was 1.04 (5th and 95th percentiles: 0.95-1.14), suggesting an increase in the panther population of approximately 4% per year. Considering the effects of environmental and demographic stochasticities and density-dependence, the probability that the population will fall below 10 panthers within 100 years was 0.072 (0-0.606). 4. Our results suggest that the population would have declined at 5% per year (λ = 0.95; 0.83-1.08) in the absence of genetic restoration. Retrospective life table response experiment analysis revealed that the positive effect of genetic restoration on survival of kittens was primarily responsible for the substantial growth of the panther population that would otherwise have been declining. 5. For comparative purposes, we also estimated probability of quasi-extinction under two scenarios - implementation of genetic restoration and no genetic restoration initiative - using the estimated abundance of panthers in 1995, the year genetic restoration was initiated. Assuming no density-dependence, the probability that the panther population would fall below 10 panthers by 2010 was 0.098 (0.002-0.332) for the restoration scenario and 0.445 (0.032-0.944) for the no restoration scenario, providing further evidence that the panther population would have faced a substantially higher risk of extinction if the genetic restoration initiative had not been implemented. 6. Our results, along with those reporting increases in population size and improvements in biomedical correlates of inbreeding depression, provide strong evidence that genetic restoration substantially contributed to the observed increases in the Florida panther population. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
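
    The deterministic core of the matrix-model analysis in point 6 can be illustrated briefly: build a stage-based projection matrix, take its dominant eigenvalue as λ and the corresponding right eigenvector as the stable stage distribution. The three-stage matrix and vital rates below are invented placeholders, not the panther estimates, and the stochastic, density-dependent simulations and LTRE decomposition are not reproduced.

      import numpy as np

      # Hypothetical female-only stages: kitten, subadult, adult
      kitten_survival, subadult_survival, adult_survival = 0.55, 0.70, 0.85
      adult_fecundity = 0.90          # female kittens per adult female per year (placeholder)

      A = np.array([[0.0,             0.0,               adult_fecundity],
                    [kitten_survival, 0.0,               0.0            ],
                    [0.0,             subadult_survival, adult_survival ]])

      eigvals, eigvecs = np.linalg.eig(A)
      lead = np.argmax(eigvals.real)
      lam = eigvals[lead].real
      w = np.abs(eigvecs[:, lead].real)
      w /= w.sum()                    # stable stage distribution

      print(f"lambda = {lam:.3f}")
      print("stable stage distribution:", np.round(w, 3))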

  14. Hemodynamics-Driven Deposition of Intraluminal Thrombus in Abdominal Aortic Aneurysms

    PubMed Central

    Di Achille, P.; Tellides, G.; Humphrey, J.D.

    2016-01-01

    Accumulating evidence suggests that intraluminal thrombus plays many roles in the natural history of abdominal aortic aneurysms. There is, therefore, a pressing need for computational models that can describe and predict the initiation and progression of thrombus in aneurysms. In this paper, we introduce a phenomenological metric for thrombus deposition potential and use hemodynamic simulations based on medical images from six patients to identify best-fit values of the two key model parameters. We then introduce a shape optimization method to predict the associated radial growth of the thrombus into the lumen based on the expectation that thrombus initiation will create a thrombogenic surface, which in turn will promote growth until increasing hemodynamically induced frictional forces prevent any further cell or protein deposition. Comparisons between predicted and actual intraluminal thrombus in the six patient-specific aneurysms suggest that this phenomenological description provides a good first estimate of thrombus deposition. We submit further that, because the biologically active region of the thrombus appears to be confined to a thin luminal layer, predictions of morphology alone may be sufficient to inform fluid-solid-growth models of aneurysmal growth and remodeling. PMID:27569676

  15. Seasonal variation in adolescent conceptions, induced abortions, and late initiation of prenatal care.

    PubMed

    Petersen, D J; Alexander, G R

    1992-01-01

    The monthly distribution of conceptions among adolescents and the proportion of adolescent pregnancies that are voluntarily terminated by induced abortion by month of conception are the objects of this study. Additionally, seasonal variations in the timing of initiation of prenatal care services by adolescents are investigated. Vital records files of single live births, fetal deaths, and induced terminations of pregnancy to residents in the State of South Carolina, 1979-86, were aggregated to estimate conceptions. There was a significant difference between adolescents and adults in the monthly distribution of conceptions. The peak month of adolescent conceptions coincided with the end of the school year. Pregnancies of adolescents occurring at this time further demonstrated later access of prenatal care services than conceptions occurring at other times of the year, most notably during the school term. These findings suggest that there is considerable opportunity for improving the availability of reproductive health care services for adolescents. The results specifically suggest the potential benefit of increasing adolescent pregnancy prevention efforts prior to high-risk events and increasing the availability of and access to health care and counseling services to adolescents during the school recess months of the summer.

  16. Building upon the Great Waters Initiative: Scoping study for potential polyaromatic hydrocarbon deposition into San Diego Bay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koehler, J.; Sylte, W.W.

    1997-12-31

    The deposition of atmospheric polyaromatic hydrocarbons (PAHs) into San Diego Bay was evaluated at an initial study level. This study was part of an overall initial estimate of PAH waste loading to San Diego Bay from all environmental pathways. The study of air pollutant deposition to water bodies has gained increased attention both as a component of Total Maximum Daily Load (TMDL) determinations required under the Clean Water Act and pursuant to federal funding authorized by the 1990 Clean Air Act Amendments to study the atmospheric deposition of hazardous air pollutants to the Great Waters, which includes coastal waters. To date, studies under the Clean Air Act have included the Great Lakes, Chesapeake Bay, Lake Champlain, and Delaware Bay. Given the limited resources of this initial study for San Diego Bay, the focus was on maximizing the use of existing data and information. The approach developed included the statistical evaluation of measured atmospheric PAH concentrations in the San Diego area, the extrapolation of EPA study results of atmospheric PAH concentrations above Lake Michigan to supplement the San Diego data, the estimation of dry and wet deposition with published calculation methods considering local wind and rainfall data, and the comparison of resulting PAH deposition estimates for San Diego Bay with estimated PAH emissions from ship and commercial boat activity in the San Diego area. The resulting PAH deposition and ship emission estimates were within the same order of magnitude. Since a significant contributor to the atmospheric deposition of PAHs to the Bay is expected to be from shipping traffic, this result provides a check on the order of magnitude of the PAH deposition estimate. Also, when compared against initial estimates of PAH loading to San Diego Bay from other environmental pathways, the atmospheric deposition pathway appears to be a significant contributor.

  17. Temperature acclimation rate of aerobic scope and feeding metabolism in fishes: implications in a thermally extreme future

    PubMed Central

    Sandblom, Erik; Gräns, Albin; Axelsson, Michael; Seth, Henrik

    2014-01-01

    Temperature acclimation may offset the increased energy expenditure (standard metabolic rate, SMR) and reduced scope for activity (aerobic scope, AS) predicted to occur with local and global warming in fishes and other ectotherms. Yet, the time course and mechanisms of this process is little understood. Acclimation dynamics of SMR, maximum metabolic rate, AS and the specific dynamic action of feeding (SDA) were determined in shorthorn sculpin (Myoxocephalus scorpius) after transfer from 10°C to 16°C. SMR increased in the first week by 82% reducing AS to 55% of initial values, while peak postprandial metabolism was initially greater. This meant that the estimated AS during peak SDA approached zero, constraining digestion and leaving little room for additional aerobic processes. After eight weeks at 16°C, SMR was restored, while AS and the estimated AS during peak SDA recovered partly. Collectively, this demonstrated a considerable capacity for metabolic thermal compensation, which should be better incorporated into future models on organismal responses to climate change. A mathematical model based on the empirical data suggested that phenotypes with fast acclimation rates may be favoured by natural selection as the accumulated energetic cost of a slow acclimation rate increases in a warmer future with exacerbated thermal variations. PMID:25232133

  18. Hand-mouth transfer and potential for exposure to E. coli and F+ coliphage in beach sand, Chicago, Illinois

    USGS Publications Warehouse

    Whitman, R.L.; Przybyla-Kelly, K.; Shively, D.A.; Nevers, M.B.; Byappanahalli, M.N.

    2009-01-01

    Beach sand contains fecal indicator bacteria, often in densities greatly exceeding the adjacent swimming waters. We examined the transferability of Escherichia coli and F+ coliphage (MS2) from beach sand to hands in order to estimate the potential subsequent health risk. Sand with high initial E. coli concentrations was collected from a Chicago beach. Individuals manipulated the sand for 60 seconds, and rinse water was analysed for E. coli and coliphage. E. coli densities transferred were correlated with density in sand rather than surface area of an individual's hand, and the amount of coliphage transferred from seeded sand was different among individuals. In sequential rinsing, percentage reduction was 92% for E. coli and 98% for coliphage. Using dose-response estimates developed for swimming water, it was determined that the number of individuals per thousand that would develop gastrointestinal symptoms would be 11 if all E. coli on the fingertip were ingested or 33 if all E. coli on the hand were ingested. These results suggest that beach sand may be an important medium for microbial exposure; bacteria transfer is related to initial concentration in the sand; and rinsing may be effective in limiting oral exposure to sand-borne microbes of human concern.

  19. Detection of skeletal muscle metastases on initial staging of lung cancer: a retrospective case series.

    PubMed

    Bocchino, Marialuisa; Valente, Tullio; Somma, Francesco; de Rosa, Ilaria; Bifulco, Marco; Rea, Gaetano

    2014-03-01

    Estimation of skeletal muscle metastases (SMMs) at the time of diagnosis and/or initial staging of lung cancer. Retrospective evaluation of clinical charts and imaging data suggestive of SMMs of patients with histology-proved lung cancer over a 5-year period. SMMs were identified in 46 out of 1,754 patients. Single and multiple (62.9% of cases) SMMs were detected by total body multi-detector computed tomography (MDCT). They were associated with poorly differentiated (43%) and advanced adenocarcinomas (52%) without clinically relevant symptoms and/or signs. Psoas and buttock muscles were most frequently involved (33.3%). MDCT findings consisted of well-defined homogeneously hyperdense oval masses (31%), lesions with ring-like enhancement and central hypoattenuation (68%), or large abscess-like necrotic lesions (24%). Sonography revealed well-defined hypoechoic masses (41.6%), ill-defined hypoechoic lesions (33.3%), or anechoic areas with a necrotic centre (25%). Positron emission tomography revealed that all SMMs were metabolically active. SMMs are uncommon but not negligible in lung cancer, with an estimated prevalence of 2.62% in our series. Although histology remains the recommended method, use of high-performance imaging techniques and increased clinical suspicion may improve their early detection. Efforts addressing their effect on the natural history of lung cancer are needed.

  20. Eyes that bind us: Gaze leading induces an implicit sense of agency.

    PubMed

    Stephenson, Lisa J; Edwards, S Gareth; Howard, Emma E; Bayliss, Andrew P

    2018-03-01

    Humans feel a sense of agency over the effects their motor system causes. This is the case for manual actions such as pushing buttons, kicking footballs, and all acts that affect the physical environment. We ask whether initiating joint attention - causing another person to follow our eye movement - can elicit an implicit sense of agency over this congruent gaze response. Eye movements themselves cannot directly affect the physical environment, but joint attention is an example of how eye movements can indirectly cause social outcomes. Here we show that leading the gaze of an on-screen face induces an underestimation of the temporal gap between action and consequence (Experiments 1 and 2). This underestimation effect, named 'temporal binding,' is thought to be a measure of an implicit sense of agency. Experiment 3 asked whether merely making an eye movement in a non-agentic, non-social context might also affect temporal estimation, and no reliable effects were detected, implying that inconsequential oculomotor acts do not reliably affect temporal estimations under these conditions. Together, these findings suggest that an implicit sense of agency is generated when initiating joint attention interactions. This is important for understanding how humans can efficiently detect and understand the social consequences of their actions. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Evidence for top-heavy stellar initial mass functions with increasing density and decreasing metallicity

    NASA Astrophysics Data System (ADS)

    Marks, Michael; Kroupa, Pavel; Dabringhausen, Jörg; Pawlowski, Marcel S.

    2012-05-01

    Residual-gas expulsion after cluster formation has recently been shown to leave an imprint in the low-mass present-day stellar mass function (PDMF) which allowed the estimation of birth conditions of some Galactic globular clusters (GCs) such as mass, radius and star formation efficiency. We show that in order to explain their characteristics (masses, radii, metallicity and PDMF) their stellar initial mass function (IMF) must have been top heavy. It is found that the IMF is required to become more top heavy the lower the cluster metallicity and the larger the pre-GC cloud-core density are. The deduced trends are in qualitative agreement with theoretical expectation. The results are consistent with estimates of the shape of the high-mass end of the IMF in the Arches cluster, Westerlund 1, R136 and NGC 3603, as well as with the IMF independently constrained for ultra-compact dwarf galaxies (UCDs). The latter suggests that GCs and UCDs might have formed along the same channel or that UCDs formed via mergers of GCs. A Fundamental Plane is found which describes the variation of the IMF with density and metallicity of the pre-GC cloud cores. The implications for the evolution of galaxies and chemical enrichment over cosmological times are expected to be major.

  2. The assessment of premorbid intellectual ability following right-hemisphere stroke: reliability of a lexical decision task.

    PubMed

    Gillespie, David C; Bowen, Audrey; Foster, Jonathan K

    2012-01-01

    Comparing current with estimated premorbid performance helps identify acquired cognitive deficits after brain injury. Tests of reading pronunciation, often used to measure premorbid ability, are inappropriate for stroke patients with motor speech problems. The Spot-the-Word Test (STWT), a measure of lexical decision, offers an alternative approach for estimating premorbid capacity in those with speech problems. However, little is known about the STWT's reliability. In the present study, a consecutive sample of right-hemisphere stroke (RHS) patients (n = 56) completed the STWT at 4 and 16 weeks poststroke. A control group, individually matched to the patients for age and initial STWT score, also completed the STWT on two occasions. More than 80% of patients had STWT scores at retest within 2 scaled score points of their initial score, suggesting that the STWT is a reliable measure for most individuals with RHS. However, RHS patients had significantly greater score change than controls. Limits of agreement analysis revealed that approximately 1 in 7 patients obtained abnormally large STWT score improvements at retest. It is concluded that although the STWT is a useful assessment tool for stroke clinicians, this instrument may significantly underestimate premorbid level of ability in approximately 14% of stroke patients.

  3. An Optimization-Based State Estimation Framework for Large-Scale Natural Gas Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jalving, Jordan; Zavala, Victor M.

    We propose an optimization-based state estimation framework to track internal space-time flow and pressure profiles of natural gas networks during dynamic transients. We find that the estimation problem is ill-posed (because of the infinite-dimensional nature of the states) and that this leads to instability of the estimator when short estimation horizons are used. To circumvent this issue, we propose moving horizon strategies that incorporate prior information. In particular, we propose a strategy that initializes the prior using steady-state information and compare its performance against a strategy that does not initialize the prior. We find that both strategies are capable of tracking the state profiles but we also find that superior performance is obtained with steady-state prior initialization. We also find that, under the proposed framework, pressure sensor information at junctions is sufficient to track the state profiles. We also derive approximate transport models and show that some of these can be used to achieve significant computational speed-ups without sacrificing estimation performance. We show that the estimator can be easily implemented in the graph-based modeling framework Plasmo.jl and use a multipipeline network study to demonstrate the developments.
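
    The moving-horizon idea with a prior can be shown on a toy linear system. This is only a schematic of the estimation structure (a prior-weighted term plus measurement and model residuals over a window), using a scalar state and invented noise levels rather than the gas-network transport equations or the Plasmo.jl implementation.

      import numpy as np
      from scipy.optimize import least_squares

      # Toy scalar system x[k+1] = a*x[k] + w, y[k] = x[k] + v (stand-in for the network states)
      a = 0.95
      Q, R, P0 = 0.01, 0.04, 0.25          # process, measurement, and prior variances (assumed)
      N = 25                               # estimation horizon length

      rng = np.random.default_rng(3)
      x_true = np.empty(N)
      x_true[0] = 1.0
      for k in range(N - 1):
          x_true[k + 1] = a * x_true[k] + rng.normal(0.0, np.sqrt(Q))
      y = x_true + rng.normal(0.0, np.sqrt(R), N)

      x_prior = 0.8                        # e.g., from a steady-state calculation

      def residuals(x):
          r_prior = np.array([(x[0] - x_prior) / np.sqrt(P0)])   # prior (arrival-cost) term
          r_meas = (y - x) / np.sqrt(R)                          # measurement residuals
          r_dyn = (x[1:] - a * x[:-1]) / np.sqrt(Q)              # model residuals
          return np.concatenate([r_prior, r_meas, r_dyn])

      sol = least_squares(residuals, np.full(N, x_prior))
      print(f"estimated current state: {sol.x[-1]:.3f}   true: {x_true[-1]:.3f}")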

  4. Compost mixture influence of interactive physical parameters on microbial kinetics and substrate fractionation.

    PubMed

    Mohajer, Ardavan; Tremier, Anne; Barrington, Suzelle; Teglia, Cecile

    2010-01-01

    Composting is a feasible biological treatment for the recycling of wastewater sludge as a soil amendment. The process can be optimized by selecting an initial compost recipe with physical properties that enhance microbial activity. The present study measured the microbial O(2) uptake rate (OUR) in 16 sludge and wood residue mixtures to estimate the kinetics parameters of maximum growth rate mu(m) and rate of organic matter hydrolysis K(h), as well as the initial biodegradable organic matter fractions present. The starting mixtures consisted of a wide range of moisture content (MC), waste to bulking agent (BA) ratio (W/BA ratio) and BA particle size, which were placed in a laboratory respirometry apparatus to measure their OUR over 4 weeks. A microbial model based on the activated sludge process was used to calculate the kinetic parameters and was found to adequately reproduce OUR curves over time, except for the lag phase and peak OUR, which were not represented and generally over-estimated, respectively. The maximum growth rate mu(m) was found to have a quadratic relationship with MC and a negative association with BA particle size. As a result, increasing MC up to 50% and using a smaller BA particle size of 8-12 mm was seen to maximize mu(m). The rate of hydrolysis K(h) was found to have a linear association with both MC and BA particle size. The model also estimated the initial readily biodegradable organic matter fraction, MB(0), and the slower biodegradable matter requiring hydrolysis, MH(0). The sum of MB(0) and MH(0) was associated with MC, W/BA ratio and the interaction between these two parameters, suggesting that O(2) availability was a key factor in determining the value of these two fractions. The study reinforced the idea that optimization of the physical characteristics of a compost mixture requires a holistic approach. 2010 Elsevier Ltd. All rights reserved.
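
    The activated-sludge-style model referred to above is, in outline, a two-fraction substrate model driven by Monod growth and first-order hydrolysis. The sketch below integrates a generic version of that structure to produce an OUR curve; the parameter values, yield, and decay term are placeholders, and the exact equations of the paper's model may differ.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Placeholder parameters (per hour, O2-equivalent substrate units)
      mu_m, K_S, K_h = 0.12, 2.0, 0.01     # max growth rate, half-saturation, hydrolysis rate
      Y, b = 0.6, 0.005                    # biomass yield and endogenous decay

      def rhs(t, z):
          X, S_B, S_H = z                  # biomass, readily biodegradable, hydrolysable
          mu = mu_m * S_B / (K_S + S_B)
          dX = (mu - b) * X
          dS_B = -mu * X / Y + K_h * S_H   # consumption plus input from hydrolysis
          dS_H = -K_h * S_H
          return [dX, dS_B, dS_H]

      z0 = [0.5, 20.0, 120.0]              # initial X, MB(0), MH(0) -- illustrative values
      t = np.linspace(0, 24 * 28, 2000)    # four weeks, in hours
      sol = solve_ivp(rhs, (t[0], t[-1]), z0, t_eval=t, rtol=1e-8)

      X, S_B, _ = sol.y
      mu = mu_m * S_B / (K_S + S_B)
      OUR = (1 - Y) / Y * mu * X + b * X   # oxygen uptake from growth plus endogenous decay
      print(f"peak OUR ~ {OUR.max():.2f} at t = {t[np.argmax(OUR)] / 24:.1f} d")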

  5. Energy Gap in the Aetiology of Body Weight Gain and Obesity: A Challenging Concept with a Complex Evaluation and Pitfalls

    PubMed Central

    Schutz, Yves; Byrne, Nuala M.; Dulloo, Abdul; Hills, Andrew P.

    2014-01-01

    The concept of energy gap(s) is useful for understanding the consequence of a small daily, weekly, or monthly positive energy balance and the inconspicuous shift in weight gain ultimately leading to overweight and obesity. Energy gap is a dynamic concept: an initial positive energy gap incurred via an increase in energy intake (or a decrease in physical activity) is not constant, may fade out with time if the initial conditions are maintained, and depends on the ‘efficiency’ with which the readjustment of the energy imbalance gap occurs with time. The metabolic response to an energy imbalance gap and the magnitude of the energy gap(s) can be estimated by at least two methods, i.e. i) assessment by longitudinal overfeeding studies, imposing (by design) an initial positive energy imbalance gap; ii) retrospective assessment based on epidemiological surveys, whereby the accumulated endogenous energy storage per unit of time is calculated from the change in body weight and body composition. In order to illustrate the difficulty of accurately assessing an energy gap we have used, as an illustrative example, a recent epidemiological study which tracked changes in total energy intake (estimated by gross food availability) and body weight over 3 decades in the US, combined with total energy expenditure prediction from body weight using doubly labelled water data. At the population level, the study attempted to assess the cause of the energy gap purported to be entirely due to increased food intake. Based on an estimate of change in energy intake judged to be more reliable (i.e. in the same study population) and together with calculations of simple energetic indices, our analysis suggests that conclusions about the fundamental causes of obesity development in a population (excess intake vs. low physical activity or both) is clouded by a high level of uncertainty. PMID:24457473

  6. Energy gap in the aetiology of body weight gain and obesity: a challenging concept with a complex evaluation and pitfalls.

    PubMed

    Schutz, Yves; Byrne, Nuala M; Dulloo, Abdul; Hills, Andrew P

    2014-01-01

    The concept of energy gap(s) is useful for understanding the consequence of a small daily, weekly, or monthly positive energy balance and the inconspicuous shift in weight gain ultimately leading to overweight and obesity. Energy gap is a dynamic concept: an initial positive energy gap incurred via an increase in energy intake (or a decrease in physical activity) is not constant, may fade out with time if the initial conditions are maintained, and depends on the 'efficiency' with which the readjustment of the energy imbalance gap occurs with time. The metabolic response to an energy imbalance gap and the magnitude of the energy gap(s) can be estimated by at least two methods, i.e. i) assessment by longitudinal overfeeding studies, imposing (by design) an initial positive energy imbalance gap; ii) retrospective assessment based on epidemiological surveys, whereby the accumulated endogenous energy storage per unit of time is calculated from the change in body weight and body composition. In order to illustrate the difficulty of accurately assessing an energy gap we have used, as an illustrative example, a recent epidemiological study which tracked changes in total energy intake (estimated by gross food availability) and body weight over 3 decades in the US, combined with total energy expenditure prediction from body weight using doubly labelled water data. At the population level, the study attempted to assess the cause of the energy gap purported to be entirely due to increased food intake. Based on an estimate of change in energy intake judged to be more reliable (i.e. in the same study population), together with calculations of simple energetic indices, our analysis suggests that conclusions about the fundamental causes of obesity development in a population (excess intake vs. low physical activity or both) are clouded by a high level of uncertainty. © 2014 S. Karger GmbH, Freiburg.
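
    A minimal worked example of the retrospective (epidemiological) approach described above: converting an observed rate of population weight gain into an implied average daily energy gap. The 7,700 kcal/kg figure and the weight-gain numbers are illustrative assumptions, not values from the study.

```python
# Minimal sketch (illustrative numbers): the average daily positive energy
# balance implied by a long-term weight-gain trend.
KCAL_PER_KG_GAIN = 7700.0   # assumed energy content of tissue gained (round figure)

def daily_energy_gap_kcal(weight_gain_kg, years):
    """Average daily positive energy balance implied by a weight change."""
    return weight_gain_kg * KCAL_PER_KG_GAIN / (years * 365.25)

# e.g. a hypothetical 9 kg mean gain over 30 years
print(f"{daily_energy_gap_kcal(9.0, 30):.1f} kcal/day")   # ~6.3 kcal/day
```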

  7. Novel kinetic spectrophotometric method for estimation of certain biologically active phenolic sympathomimetic drugs in their bulk powders and different pharmaceutical formulations

    NASA Astrophysics Data System (ADS)

    Omar, Mahmoud A.; Badr El-Din, Khalid M.; Salem, Hesham; Abdelmageed, Osama H.

    2018-03-01

    A simple, selective and sensitive kinetic spectrophotometric method was described for estimation of four phenolic sympathomimetic drugs, namely terbutaline sulfate, fenoterol hydrobromide, isoxsuprine hydrochloride and etilefrine hydrochloride. This method depends on the oxidation of the phenolic drugs with Folin-Ciocalteu reagent in the presence of sodium carbonate. The rate of color development at 747-760 nm was measured spectrophotometrically. The experimental parameters controlling the color development were fully studied and optimized. The reaction mechanism for color development was proposed. The calibration graphs for both the initial rate and fixed time methods were constructed, where linear correlations were found in the general concentration ranges of 3.65 × 10^-6 to 2.19 × 10^-5 mol L^-1 and 2-24.0 μg mL^-1, with correlation coefficients in the ranges 0.9992-0.9999 and 0.9991-0.9998, respectively. The limits of detection and quantitation were found to be 0.109-0.273 and 0.363-0.910 μg mL^-1 for the initial rate method and 0.210-0.483 and 0.700-1.611 μg mL^-1 for the fixed time method, respectively. The developed method was validated according to ICH and USP 30-NF 25 guidelines. The suggested method was successfully applied to the estimation of these drugs in their commercial pharmaceutical formulations, and the recovery percentages obtained ranged from 97.63% ± 1.37 to 100.17% ± 0.95 and from 97.29% ± 0.74 to 100.14% ± 0.81 for the initial rate and fixed time methods, respectively. The data obtained from the analysis of dosage forms were compared with those obtained by reported methods. Statistical analysis of these results indicated no significant variation in the accuracy and precision of both the proposed and reported methods.
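
    The sketch below illustrates the general logic of an initial-rate calibration of this kind: the initial rate is regressed on concentration, and ICH-style detection and quantitation limits are computed as 3.3σ/S and 10σ/S. The concentrations, rates and residual standard deviation are invented for illustration and are not the paper's data.

```python
# Minimal sketch of an initial-rate calibration with ICH-style LOD/LOQ.
import numpy as np

conc = np.array([2, 4, 8, 12, 16, 20, 24], dtype=float)             # ug/mL (assumed)
rate = np.array([0.011, 0.022, 0.045, 0.066, 0.090, 0.111, 0.133])  # dA/dt (assumed)

slope, intercept = np.polyfit(conc, rate, 1)   # linear calibration: rate = S*conc + b
resid = rate - (slope * conc + intercept)
sigma = resid.std(ddof=2)                      # residual standard deviation (n - 2 dof)

lod = 3.3 * sigma / slope                      # ICH detection limit
loq = 10.0 * sigma / slope                     # ICH quantitation limit
print(f"slope={slope:.4f}, LOD={lod:.2f} ug/mL, LOQ={loq:.2f} ug/mL")
```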

  8. NASA Instrument Cost/Schedule Model

    NASA Technical Reports Server (NTRS)

    Habib-Agahi, Hamid; Mrozinski, Joe; Fox, George

    2011-01-01

    NASA's Office of Independent Program and Cost Evaluation (IPCE) has established a number of initiatives to improve its cost and schedule estimating capabilities. One of these initiatives has resulted in the JPL-developed NASA Instrument Cost Model (NICM). NICM is a cost and schedule estimator that contains: a system-level cost estimation tool; a subsystem-level cost estimation tool; a database of cost and technical parameters of over 140 previously flown remote sensing and in-situ instruments; a schedule estimator; a set of rules to estimate cost and schedule by life cycle phases (B/C/D); and a novel tool for developing joint probability distributions for cost and schedule risk (Joint Confidence Level (JCL)). This paper describes the development and use of NICM, including the data normalization processes, data mining methods (cluster analysis, principal components analysis, regression analysis and bootstrap cross validation), the estimating equations themselves and a demonstration of the NICM tool suite.

  9. Empirical Bayes Estimation of Coalescence Times from Nucleotide Sequence Data.

    PubMed

    King, Leandra; Wakeley, John

    2016-09-01

    We demonstrate the advantages of using information at many unlinked loci to better calibrate estimates of the time to the most recent common ancestor (TMRCA) at a given locus. To this end, we apply a simple empirical Bayes method to estimate the TMRCA. This method is asymptotically optimal, in the sense that the estimator converges to the true value when the number of unlinked loci for which we have information is large, and it has the advantage of not making any assumptions about demographic history. The algorithm works as follows: we first split the sample at each locus into inferred left and right clades to obtain many estimates of the TMRCA, which we can average to obtain an initial estimate of the TMRCA. We then use nucleotide sequence data from other unlinked loci to form an empirical distribution that we can use to improve this initial estimate. Copyright © 2016 by the Genetics Society of America.
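
    A toy sketch of the empirical-Bayes idea described above, not the authors' clade-splitting algorithm: initial TMRCA estimates from many other unlinked loci act as an empirical prior, and the mutation count at the focal locus, modelled as Poisson with mean 2μT, is used to form a posterior-mean estimate. The mutation rate, prior draws and mutation count are all assumed values.

```python
# Toy empirical-Bayes shrinkage of a per-locus TMRCA estimate (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
mu = 1e-3                                       # per-locus mutation rate (assumed)
prior_tmrcas = rng.exponential(1000.0, 5000)    # initial estimates at other unlinked loci

def eb_tmrca(n_mut_focal):
    """Posterior mean of TMRCA at the focal locus under the empirical prior."""
    lam = 2.0 * mu * prior_tmrcas               # Poisson mean under each prior draw
    w = np.exp(-lam) * lam**n_mut_focal         # unnormalised likelihoods (n! cancels)
    return np.sum(w * prior_tmrcas) / np.sum(w)

print(eb_tmrca(n_mut_focal=4))                  # shrinks the naive estimate 4/(2*mu)
```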

  10. The effects of MgADP on cross-bridge kinetics: a laser flash photolysis study of guinea-pig smooth muscle.

    PubMed Central

    Nishiye, E; Somlyo, A V; Török, K; Somlyo, A P

    1993-01-01

    1. The effects of MgADP on cross-bridge kinetics were investigated using laser flash photolysis of caged ATP (P3-1-(2-nitrophenyl)ethyladenosine 5'-triphosphate), in guinea-pig portal vein smooth muscle permeabilized with Staphylococcus aureus alpha-toxin. Isometric tension and in-phase stiffness transitions from rigor state were monitored upon photolysis of caged ATP. The estimated concentration of ATP released from caged ATP by high-pressure liquid chromatography (HPLC) was 1.3 mM. 2. The time course of relaxation initiated by photolysis of caged ATP in the absence of Ca2+ was well fitted during the initial 200 ms by two exponential functions with time constants of, respectively, tau 1 = 34 ms and tau 2 = 1.2 s and relative amplitudes of 0.14 and 0.86. Multiple exponential functions were needed to fit longer intervals; the half-time of the overall relaxation was 0.8 s. The second-order rate constant for cross-bridge detachment by ATP, estimated from the rate of initial relaxation, was 0.4-2.3 x 10(4) M-1 s-1. 3. MgADP dose-dependently reduced both the relative amplitude of the first component and the rate constant of the second component of relaxation. Conversely, treatment of muscles with apyrase, to deplete endogenous ADP, increased the relative amplitude of the first component. In the presence of MgADP, in-phase stiffness decreased during force maintenance, suggesting that the force per cross-bridge increased. The apparent dissociation constant (Kd) of MgADP for the cross-bridge binding site, estimated from its concentration-dependent effect on the relative amplitude of the first component, was 1.3 microM. This affinity is much higher than the previously reported values (50-300 microM for smooth muscle; 18-400 microM for skeletal muscle; 7-10 microM for cardiac muscle). It is possible that the high affinity reflects the properties of a state generated during the co-operative reattachment cycle, rather than that of the rigor bridge. 4. The rate constant of MgADP release from cross-bridges, estimated from its concentration-dependent effect on the rate constant of the second (tau 2) component, was 0.35-7.7 s-1. To the extent that reattachment of cross-bridges could slow relaxation even during the initial 200 ms, this rate constant may be an underestimate. 5. Inorganic phosphate (Pi, 30 mM) did not affect the rate of relaxation during the initial approximately 50 ms, but accelerated the slower phase of relaxation, consistent with a cyclic cross-bridge model in which Pi increases the proportion of cross-bridges in detached ('weakly bound') states. (ABSTRACT TRUNCATED AT 400 WORDS) PMID:8487195
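
    For the curve-fitting step described in point 2, a minimal sketch of a two-exponential fit to a synthetic relaxation transient is shown below; the time constants and amplitudes used to generate the fake data simply mirror the values quoted in the abstract and are not the recorded traces.

```python
# Minimal sketch (synthetic data): fit the first 200 ms of a normalised tension
# relaxation transient to F(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2).
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0, 0.2, 201)                       # seconds
f_obs = biexp(t, 0.14, 0.034, 0.86, 1.2) \
        + np.random.default_rng(1).normal(0, 0.005, t.size)

popt, _ = curve_fit(biexp, t, f_obs, p0=[0.2, 0.05, 0.8, 1.0])
a1, tau1, a2, tau2 = popt
print(f"tau1={tau1*1e3:.0f} ms, tau2={tau2:.2f} s, "
      f"relative amplitudes {a1/(a1+a2):.2f} / {a2/(a1+a2):.2f}")
```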

  11. A Geographically Explicit Genetic Model of Worldwide Human-Settlement History

    PubMed Central

    Liu, Hua; Prugnolle, Franck; Manica, Andrea; Balloux, François

    2006-01-01

    Currently available genetic and archaeological evidence is generally interpreted as supportive of a recent single origin of modern humans in East Africa. However, this is where the near consensus on human settlement history ends, and considerable uncertainty clouds any more detailed aspect of human colonization history. Here, we present a dynamic genetic model of human settlement history coupled with explicit geographical distances from East Africa, the likely origin of modern humans. We search for the best-supported parameter space by fitting our analytical prediction to genetic data that are based on 52 human populations analyzed at 783 autosomal microsatellite markers. This framework allows us to jointly estimate the key parameters of the expansion of modern humans. Our best estimates suggest an initial expansion of modern humans ∼56,000 years ago from a small founding population of ∼1,000 effective individuals. Our model further points to high growth rates in newly colonized habitats. The general fit of the model with the data is excellent. This suggests that coupling analytical genetic models with explicit demography and geography provides a powerful tool for making inferences on human-settlement history. PMID:16826514

  12. A random-walk algorithm for modeling lithospheric density and the role of body forces in the evolution of the Midcontinent Rift

    USGS Publications Warehouse

    Levandowski, William Brower; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.

    2015-01-01

    We test this algorithm on the Proterozoic Midcontinent Rift (MCR), north-central U.S. The MCR provides a challenge because it hosts a gravity high overlying low shear-wave velocity crust in a generally flat region. Our initial density estimates are derived from a seismic velocity/crustal thickness model based on joint inversion of surface-wave dispersion and receiver functions. By adjusting these estimates to reproduce gravity and topography, we generate a lithospheric-scale model that reveals dense middle crust and eclogitized lowermost crust within the rift. Mantle lithospheric density beneath the MCR is not anomalous, consistent with geochemical evidence that lithospheric mantle was not the primary source of rift-related magmas and suggesting that extension occurred in response to far-field stress rather than a hot mantle plume. Similarly, the subsequent inversion of normal faults resulted from changing far-field stress that exploited not only warm, recently faulted crust but also a gravitational potential energy low in the MCR. The success of this density modeling algorithm in the face of such apparently contradictory geophysical properties suggests that it may be applicable to a variety of tectonic and geodynamic problems. 
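
    A generic sketch of the random-walk idea, not the USGS implementation: layer density anomalies are perturbed at random and a step is kept only when it reduces the misfit to an observed gravity anomaly, here computed with a crude infinite-slab approximation. Layer thicknesses, densities and the target anomaly are hypothetical, and the topography constraint is omitted.

```python
# Generic random-walk adjustment of layer density anomalies against gravity.
import numpy as np

G = 6.674e-11                                     # m^3 kg^-1 s^-2
thick = np.array([15e3, 15e3, 10e3])              # layer thicknesses (m), assumed

def slab_gravity(drho):
    """Gravity anomaly (mGal) of density anomalies treated as infinite slabs."""
    return np.sum(2 * np.pi * G * drho * thick) * 1e5

obs_anomaly = 40.0                                # mGal, hypothetical rift-like high
rng = np.random.default_rng(42)
drho = np.zeros(3)                                # density anomalies per layer (kg/m^3)

for _ in range(20000):
    trial = drho + rng.normal(0, 5.0, 3)          # random-walk step
    if abs(slab_gravity(trial) - obs_anomaly) < abs(slab_gravity(drho) - obs_anomaly):
        drho = trial                              # keep only misfit-reducing steps

print("accepted density anomalies (kg/m^3):", np.round(drho, 1))
print("modelled anomaly (mGal):", round(slab_gravity(drho), 2))
```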

  13. LOW-METALLICITY YOUNG CLUSTERS IN THE OUTER GALAXY. II. SH 2-208

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yasui, Chikako; Kobayashi, Naoto; Izumi, Natsuko

    We obtained deep near-infrared images of Sh 2-208, one of the lowest-metallicity H ii regions in the Galaxy, [O/H] = −0.8 dex. We detected a young cluster in the center of the H ii region with a limiting magnitude of K = 18.0 mag (10σ), which corresponds to a mass detection limit of ∼0.2 M⊙. This enables the comparison of star-forming properties under low metallicity with those of the solar neighborhood. We identified 89 cluster members. From the fitting of the K-band luminosity function (KLF), the age and distance of the cluster are estimated to be ∼0.5 Myr and ∼4 kpc, respectively. The estimated young age is consistent with the detection of strong CO emission in the cluster region and the estimated large extinction of cluster members (A_V ∼ 4–25 mag). The observed KLF suggests that the underlying initial mass function (IMF) of the low-metallicity cluster is not significantly different from canonical IMFs in the solar neighborhood in terms of both high-mass slope and IMF peak (characteristic mass). Despite the very young age, the disk fraction of the cluster is estimated at only 27% ± 6%, which is significantly lower than those at solar metallicity. Those results are similar to Sh 2-207, which is another star-forming region close to Sh 2-208 with a separation of 12 pc, suggesting that their star-forming activities in low-metallicity environments are essentially identical to those in the solar neighborhood, except for the disk dispersal timescale. From large-scale mid-infrared images, we suggest that sequential star formation is taking place in Sh 2-207, Sh 2-208, and the surrounding region, triggered by an expanding bubble with a ∼30 pc radius.

  14. Re-estimating sample size in cluster randomised trials with active recruitment within clusters.

    PubMed

    van Schie, S; Moerbeek, M

    2014-08-30

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster level and individual level variance should be known before the study starts, but this is often not the case. We suggest using an internal pilot study design to address this problem of unknown variances. A pilot can be useful to re-estimate the variances and re-calculate the sample size during the trial. Using simulated data, it is shown that an initially low or high power can be adjusted using an internal pilot with the type I error rate remaining within an acceptable range. The intracluster correlation coefficient can be re-estimated with more precision, which has a positive effect on the sample size. We conclude that an internal pilot study design may be used if active recruitment is feasible within a limited number of clusters. Copyright © 2014 John Wiley & Sons, Ltd.
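
    The sketch below shows the standard design-effect algebra that such a re-estimation rests on (it is not the authors' simulation code): after the internal pilot, the intracluster correlation coefficient is re-estimated and the required number of clusters per arm is recomputed. The effect size, cluster size and ICC values are assumed.

```python
# Minimal sketch: recompute clusters per arm from a re-estimated ICC.
import math
from scipy.stats import norm

def clusters_per_arm(effect_size, icc, m, alpha=0.05, power=0.80):
    """Clusters per arm for cluster size m and standardised effect size."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_individual = 2 * (z / effect_size) ** 2      # per arm under individual randomisation
    deff = 1 + (m - 1) * icc                       # design effect for clustering
    return math.ceil(n_individual * deff / m)

# Planned with an assumed ICC of 0.01; the internal pilot suggests 0.05 instead.
print(clusters_per_arm(0.3, 0.01, m=30))   # initial plan
print(clusters_per_arm(0.3, 0.05, m=30))   # re-estimated after the pilot
```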

  15. Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.

    PubMed

    Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J

    2018-02-15

    Electrically active brain regions can be located applying MUltiple SIgnal Classification (MUSIC) on magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm, which prevents estimation of the true number of brain-signal sources accurately. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time-courses, and initial estimate for the dimension of the signal space. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Layered ejecta craters and the early water/ice aquifer on Mars

    NASA Astrophysics Data System (ADS)

    Oberbeck, V. R.

    2009-03-01

    A model for emplacement of deposits of impact craters is presented that explains the size range of Martian layered ejecta craters between 5 km and 60 km in diameter in the low and middle latitudes. The impact model provides estimates of the water content of crater deposits relative to volatile content in the aquifer of Mars. These estimates together with the amount of water required to initiate fluid flow in terrestrial debris flows provide an estimate of 21% by volume (7.6 × 10^7 km^3) of water/ice that was stored between 0.27 and 2.5 km depth in the crust of Mars during Hesperian and Amazonian time. This would have been sufficient to supply the water for an ocean in the northern lowlands of Mars. The existence of fluidized craters smaller than 5 km diameter in some places on Mars suggests that volatiles were present locally at depths less than 0.27 km. Deposits of Martian craters may be ideal sites for searches for fossils of early organisms that may have existed in the water table if life originated on Mars.

  17. A model-based correction for outcome reporting bias in meta-analysis.

    PubMed

    Copas, John; Dwan, Kerry; Kirkham, Jamie; Williamson, Paula

    2014-04-01

    It is often suspected (or known) that outcomes published in medical trials are selectively reported. A systematic review for a particular outcome of interest can only include studies where that outcome was reported and so may omit, for example, a study that has considered several outcome measures but only reports those giving significant results. Using the methodology of the Outcome Reporting Bias (ORB) in Trials study (Kirkham and others, 2010, The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews, British Medical Journal 340, c365), we suggest a likelihood-based model for estimating the effect of ORB on confidence intervals and p-values in meta-analysis. Correcting for bias has the effect of moving estimated treatment effects toward the null and hence yields more cautious assessments of significance. The bias can be very substantial, sometimes sufficient to completely overturn previous claims of significance. We re-analyze two contrasting examples, and derive a simple fixed effects approximation that can be used to give an initial estimate of the effect of ORB in practice.

  18. A method of recovering the initial vectors of globally coupled map lattices based on symbolic dynamics

    NASA Astrophysics Data System (ADS)

    Sun, Li-Sha; Kang, Xiao-Yun; Zhang, Qiong; Lin, Lan-Xin

    2011-12-01

    Based on symbolic dynamics, a novel computationally efficient algorithm is proposed to estimate the unknown initial vectors of globally coupled map lattices (CMLs). It is proved that not all inverse chaotic mapping functions satisfy the contraction mapping condition. It is found that, even with sufficient backward iteration of the symbolic vectors, the values in phase space do not always converge on their initial values; this behaviour is characterized in terms of global convergence or divergence (CD). Both the CD property and the coupling strength are directly related to the mapping function of the existing CML. Furthermore, the CD properties of Logistic, Bernoulli, and Tent chaotic mapping functions are investigated and compared. Various simulation results and the performances of the initial vector estimation with different signal-to-noise ratios (SNRs) are also provided to confirm the proposed algorithm. Finally, based on the spatiotemporal chaotic characteristics of the CML, the conditions of estimating the initial vectors using symbolic dynamics are discussed. The presented method provides both theoretical and experimental results for better understanding and characterizing the behaviours of spatiotemporal chaotic systems.
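
    As a toy illustration of the backward-iteration idea for a single, uncoupled logistic map (rather than a coupled map lattice), the sketch below recovers an unknown initial value from its symbolic sequence by iterating the inverse branches backwards. The convergence it shows is typical of the logistic map but, as the abstract notes, is not guaranteed for every mapping function.

```python
# Recover the initial value of x -> 4x(1-x) from its symbolic sequence.
import math

def forward_symbols(x0, n):
    """Generate n symbols (0: x < 0.5, 1: x >= 0.5) along the forward orbit."""
    x, syms = x0, []
    for _ in range(n):
        syms.append(0 if x < 0.5 else 1)
        x = 4.0 * x * (1.0 - x)
    return syms

def recover_initial(symbols, seed=0.5):
    """Backward-iterate the inverse branches selected by the symbols."""
    y = seed                                   # arbitrary guess for the final state
    for s in reversed(symbols):
        r = math.sqrt(max(0.0, 1.0 - y))
        y = 0.5 * (1.0 - r) if s == 0 else 0.5 * (1.0 + r)
    return y

x0_true = 0.123456789
est = recover_initial(forward_symbols(x0_true, 60))
print(x0_true, est)                            # the estimate agrees to high precision
```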

  19. Vehicle speed affects both pre-skid braking kinematics and average tire/roadway friction.

    PubMed

    Heinrichs, Bradley E; Allin, Boyd D; Bowler, James J; Siegmund, Gunter P

    2004-09-01

    Vehicles decelerate between brake application and skid onset. To better estimate a vehicle's speed and position at brake application, we investigated how vehicle deceleration varied with initial speed during both the pre-skid and skidding intervals on dry asphalt. Skid-to-stop tests were performed from four initial speeds (20, 40, 60, and 80 km/h) using three different grades of tire (economy, touring, and performance) on a single vehicle and a single road surface. Average skidding friction was found to vary with initial speed and tire type. The post-brake/pre-skid speed loss, elapsed time, distance travelled, and effective friction were found to vary with initial speed. Based on these data, a method using skid mark length to predict vehicle speed and position at brake application rather than skid onset was shown to improve estimates of initial vehicle speed by up to 10 km/h and estimates of vehicle position at brake application by up to 8 m compared to conventional methods that ignore the post-brake/pre-skid interval. Copyright 2003 Elsevier Ltd.
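
    A worked example with illustrative numbers (not the study's measured values): the conventional estimate uses only the skid marks via v = sqrt(2 f g d), and the paper's point is that adding the pre-skid speed loss and pre-skid travel distance shifts the estimate to the speed and position at brake application rather than at skid onset.

```python
# Illustrative skid-to-stop calculation with a pre-skid correction.
import math

g = 9.81                      # m/s^2
f = 0.75                      # assumed average skidding friction
d_skid = 25.0                 # measured skid-mark length (m), assumed

v_skid = math.sqrt(2 * f * g * d_skid)          # speed at skid onset (m/s)

dv_preskid = 6.0 / 3.6        # assumed pre-skid speed loss, 6 km/h converted to m/s
d_preskid = 5.0               # assumed distance travelled before skid onset (m)

v_brake = v_skid + dv_preskid                   # speed at brake application
print(f"skid onset: {v_skid*3.6:.1f} km/h at the start of the marks")
print(f"brake application: {v_brake*3.6:.1f} km/h, {d_preskid:.1f} m earlier")
```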

  20. Cooling rates and crystallization dynamics of shallow level pegmatite-aplite dikes, San Diego County, California

    USGS Publications Warehouse

    Webber, Karen L.; Simmons, William B.; Falster, Alexander U.; Foord, Eugene E.

    1999-01-01

    Pegmatites of the Pala and Mesa Grande Pegmatite Districts, San Diego County, California are typically thin, sheet-like composite pegmatite-aplite dikes. Aplitic portions of many dikes display pronounced mineralogical layering referred to as "line rock," characterized by fine-grained, garnet-rich bands alternating with albite- and quartz-rich bands. Thermal modeling was performed for four dikes in San Diego County including the 1 m thick Himalaya dike, the 2 m thick Mission dike, the 8 m thick George Ashley dike, and the 25 m thick Stewart dike. Calculations were based on conductive cooling equations accounting for latent heat of crystallization, a melt emplacement temperature of 650 °C into 150 °C fractured, gabbroic country rock at a depth of 5 km, and an estimated 3 wt% initial H2O content in the melt. Cooling to -5 cm/s. Crystal size distribution (CSD) studies of garnet from layered aplites suggest growth rates of about 10-6 cm/s. These results indicate that the dikes cooled and crystallized rapidly, with variable nucleation rates but high overall crystal-growth rates. Initial high nucleation rates coincident with emplacement and strong undercooling can account for the millimeter-size aplite grains. Lower nucleation rates coupled with high growth rates can explain the decimeter-size minerals in the hanging walls, cores, and miarolitic cavities of the pegmatites. The presence of tourmaline and/or lepidolite throughout these dikes suggests that although the melts were initially H2O-undersaturated, high melt concentrations of incompatible (or fluxing) components such as B, F, and Li (±H2O), aided in the development of large pegmatitic crystals that grew rapidly in the short times suggested by the conductive cooling models.

  1. A theoretical framework to predict the most likely ion path in particle imaging.

    PubMed

    Collins-Fekete, Charles-Antoine; Volz, Lennart; Portillo, Stephen K N; Beaulieu, Luc; Seco, Joao

    2017-03-07

    In this work, a generic rigorous Bayesian formalism is introduced to predict the most likely path of any ion crossing a medium between two detection points. The path is predicted based on a combination of the particle scattering in the material and measurements of its initial and final position, direction and energy. The path estimate's precision is compared to the Monte Carlo simulated path. Every ion from hydrogen to carbon is simulated in two scenarios, (1) where the range is fixed and (2) where the initial velocity is fixed. In the scenario where the range is kept constant, the maximal root-mean-square error between the estimated path and the Monte Carlo path drops significantly between the proton path estimate (0.50 mm) and the helium path estimate (0.18 mm), but less so up to the carbon path estimate (0.09 mm). However, this scenario is identified as the configuration that maximizes the dose while minimizing the path resolution. In the scenario where the initial velocity is fixed, the maximal root-mean-square error between the estimated path and the Monte Carlo path drops significantly between the proton path estimate (0.29 mm) and the helium path estimate (0.09 mm) but increases for heavier ions up to carbon (0.12 mm). As a result, helium is found to be the particle with the most accurate path estimate for the lowest dose, potentially leading to tomographic images of higher spatial resolution.

  2. Methods of sequential estimation for determining initial data in numerical weather prediction. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cohn, S. E.

    1982-01-01

    Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method; for linear models, the optimal combined data assimilation-initialization method is a modified version of the KB filter.
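
    A minimal sketch of one predict-update cycle of a discrete Kalman filter for a generic linear stochastic system; the modified filter developed in the thesis for the initialization problem is not reproduced here, and the model, observation operator and covariances below are placeholders.

```python
# Generic discrete Kalman filter step: forecast with the model, update with data.
import numpy as np

def kalman_step(x, P, M, Q, H, R, y):
    # Forecast with the (linear) model M and model-error covariance Q
    x_f = M @ x
    P_f = M @ P @ M.T + Q
    # Update with observation y, observation operator H, error covariance R
    S = H @ P_f @ H.T + R
    K = P_f @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_a = x_f + K @ (y - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a

# Tiny two-variable example with one observed variable (all values hypothetical).
M = np.array([[0.95, 0.10], [0.00, 0.90]])
Q = 0.01 * np.eye(2); R = np.array([[0.25]]); H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, M, Q, H, R, y=np.array([1.2]))
print(x, np.diag(P))
```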

  3. Recovery from PTSD following Hurricane Katrina

    PubMed Central

    McLaughlin, Katie A.; Berglund, Patricia; Gruber, Michael J.; Kessler, Ronald C.; Sampson, Nancy A.; Zaslavsky, Alan M.

    2011-01-01

    Background We examined patterns and correlates of speed of recovery of estimated posttraumatic stress disorder (PTSD) among people who developed PTSD in the wake of Hurricane Katrina. Method A probability sample of pre-hurricane residents of areas affected by Hurricane Katrina was administered a telephone survey 7-19 months following the hurricane and again 24-27 months post-hurricane. The baseline survey assessed PTSD using a validated screening scale and assessed a number of hypothesized predictors of PTSD recovery that included socio-demographics, pre-hurricane history of psychopathology, hurricane-related stressors, social support, and social competence. Exposure to post-hurricane stressors and course of estimated PTSD were assessed in a follow-up interview. Results An estimated 17.1% of respondents had a history of estimated hurricane-related PTSD at baseline and 29.2% by the follow-up survey. Of the respondents who developed estimated hurricane-related PTSD, 39.0% recovered by the time of the follow-up survey with a mean duration of 16.5 months. Predictors of slow recovery included exposure to a life-threatening situation, hurricane-related housing adversity, and high income. Other socio-demographics, history of psychopathology, social support, social competence, and post-hurricane stressors were unrelated to recovery from estimated PTSD. Conclusions The majority of adults who developed estimated PTSD after Hurricane Katrina did not recover within 18-27 months. Delayed onset was common. Findings document the importance of initial trauma exposure severity in predicting course of illness and suggest that pre- and post-trauma factors typically associated with course of estimated PTSD did not influence recovery following Hurricane Katrina. PMID:21308887

  4. The Influence of External Load on Quadriceps Muscle and Tendon Dynamics during Jumping.

    PubMed

    Earp, Jacob E; Newton, Robert U; Cormie, Prue; Blazevich, Anthony J

    2017-11-01

    Tendons possess both viscous (rate-dependent) and elastic (rate-independent) properties that determine tendon function. During high-speed movements external loading increases both the magnitude (FT) and rate (RFDT) of tendon loading. The influence of external loading on muscle and tendon dynamics during maximal vertical jumping was explored. Ten resistance-trained men performed parallel-depth, countermovement vertical jumps with and without additional load (0%, 30%, 60%, and 90% of maximum squat lift strength), while joint kinetics and kinematics, quadriceps tendon length (LT) and patellar tendon FT and RFDT were estimated using integrated ultrasound, motion analysis and force platform data and muscle tendon modelling. Estimated FT and RFDT, but not peak LT, increased with external loading. Temporal comparisons between 0% and 90% loads revealed that FT was greater with 90% loading throughout the majority of the movement (11%-81% and 87%-95% movement duration). However, RFDT was greater with 90% load only during the early movement initiation phase (8%-15% movement duration) but was greater in the 0% load condition later in the eccentric phase (27%-38% movement duration). LT was longer during the early movement (12%-23% movement duration) but shorter in the late eccentric and early concentric phases (48%-55% movement duration) with 90% load. External loading positively influenced peak FT and RFDT but tendon strain appeared unaffected, suggesting no additive effect of external loading on patellar tendon lengthening during human jumping. Temporal analysis revealed that external loading resulted in a large initial RFDT that may have caused dynamic stiffening of the tendon and attenuated tendon strain throughout the movement. These results suggest that external loading influences tendon lengthening in both a load- and movement-dependent manner.

  5. Benthic nutrient sources to hypereutrophic upper Klamath Lake, Oregon, USA.

    PubMed

    Kuwabara, James S; Topping, Brent R; Lynch, Dennis D; Carter, James L; Essaid, Hedeff I

    2009-03-01

    Three collecting trips were coordinated in April, May, and August 2006 to sample the water column and benthos of hypereutrophic Upper Klamath Lake (OR, USA) through the annual cyanophyte bloom of Aphanizomenon flos-aquae. A pore-water profiler was designed and fabricated to obtain the first high-resolution (centimeter-scale) estimates of the vertical concentration gradients of macro- and micronutrients for diffusive-flux determinations. A consistently positive benthic flux for soluble reactive phosphorus (SRP) was observed with solute release from the sediment, ranging between 0.4 and 6.1 mg/m(2)/d. The mass flux over an approximate 200-km(2) lake area was comparable in magnitude to riverine inputs. An additional concern related to fish toxicity was identified when dissolved ammonium also displayed consistently positive benthic fluxes of 4 to 134 mg/m(2)/d, again comparable to riverine inputs. Although phosphorus was a logical initial choice by water quality managers for the limiting nutrient when nitrogen-fixing cyanophytes dominate, initial trace-element results from the lake and major inflowing tributaries suggested that the role of iron limitation on primary productivity should be investigated. Dissolved iron became depleted in the lake water column during the course of the algal bloom, while dissolved ammonium and SRP increased. Elevated macroinvertebrate densities, at least of the order of 10(4) individuals/m(2), suggested that the diffusive-flux estimates may be significantly enhanced by bioturbation. In addition, heat-flux modeling indicated that groundwater advection of nutrients could also significantly contribute to internal nutrient loading. Accurate environmental assessments of lentic systems and reasonable expectations for point-source management require quantitative consideration of internal solute sources.
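
    The diffusive-flux calculation behind such estimates is essentially Fick's first law applied to the centimeter-scale pore-water gradient, corrected for porosity. The sketch below uses assumed values for porosity, diffusivity and the SRP gradient; they are chosen only to land in a plausible range and are not taken from the profiler data.

```python
# Minimal Fick's-law benthic flux estimate, J = phi * D * dC/dz (illustrative values).
phi = 0.90                      # sediment porosity (assumed)
D = 5.0e-6                      # effective diffusivity of SRP, cm^2/s (assumed)
dC = 0.5                        # concentration increase over the interval, mg/L (assumed)
dz = 1.0                        # depth interval, cm

dC_per_cm3 = dC / 1000.0        # mg/L -> mg/cm^3
flux_cgs = phi * D * dC_per_cm3 / dz          # mg cm^-2 s^-1
flux = flux_cgs * 86400.0 * 1.0e4             # convert to mg m^-2 d^-1
print(f"diffusive SRP flux ~ {flux:.2f} mg/m^2/d")   # ~1.9, within the reported range
```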

  6. Spatio-temporal dynamics and laterality effects of face inversion, feature presence and configuration, and face outline

    PubMed Central

    Marinkovic, Ksenija; Courtney, Maureen G.; Witzel, Thomas; Dale, Anders M.; Halgren, Eric

    2014-01-01

    Although a crucial role of the fusiform gyrus (FG) in face processing has been demonstrated with a variety of methods, converging evidence suggests that face processing involves an interactive and overlapping processing cascade in distributed brain areas. Here we examine the spatio-temporal stages and their functional tuning to face inversion, presence and configuration of inner features, and face contour in healthy subjects during passive viewing. Anatomically-constrained magnetoencephalography (aMEG) combines high-density whole-head MEG recordings and distributed source modeling with high-resolution structural MRI. Each person's reconstructed cortical surface served to constrain noise-normalized minimum norm inverse source estimates. The earliest activity was estimated to the occipital cortex at ~100 ms after stimulus onset and was sensitive to an initial coarse level visual analysis. Activity in the right-lateralized ventral temporal area (inclusive of the FG) peaked at ~160 ms and was largest to inverted faces. Images containing facial features in the veridical and rearranged configuration irrespective of the facial outline elicited intermediate level activity. The M160 stage may provide structural representations necessary for downstream distributed areas to process identity and emotional expression. However, inverted faces additionally engaged the left ventral temporal area at ~180 ms and were uniquely subserved by bilateral processing. This observation is consistent with the dual route model and spared processing of inverted faces in prosopagnosia. The subsequent deflection, peaking at ~240 ms in the anterior temporal areas bilaterally, was largest to normal, upright faces. It may reflect initial engagement of the distributed network subserving individuation and familiarity. These results support dynamic models suggesting that processing of unfamiliar faces in the absence of a cognitive task is subserved by a distributed and interactive neural circuit. PMID:25426044

  7. Benthic nutrient sources to hypereutrophic Upper Klamath Lake, Oregon, USA

    USGS Publications Warehouse

    Kuwabara, J.S.; Topping, B.R.; Lynch, D.D.; Carter, J.L.; Essaid, H.I.

    2009-01-01

    Three collecting trips were coordinated in April, May, and August 2006 to sample the water column and benthos of hypereutrophic Upper Klamath Lake (OR, USA) through the annual cyanophyte bloom of Aphanizomenon flos-aquae. A porewater profiler was designed and fabricated to obtain the first high-resolution (centimeter-scale) estimates of the vertical concentration gradients of macro- and micronutrients for diffusive-flux determinations. A consistently positive benthic flux for soluble reactive phosphorus (SRP) was observed with solute release from the sediment, ranging between 0.4 and 6.1 mg/m^2/d. The mass flux over an approximate 200-km^2 lake area was comparable in magnitude to riverine inputs. An additional concern related to fish toxicity was identified when dissolved ammonium also displayed consistently positive benthic fluxes of 4 to 134 mg/m^2/d, again comparable to riverine inputs. Although phosphorus was a logical initial choice by water quality managers for the limiting nutrient when nitrogen-fixing cyanophytes dominate, initial trace-element results from the lake and major inflowing tributaries suggested that the role of iron limitation on primary productivity should be investigated. Dissolved iron became depleted in the lake water column during the course of the algal bloom, while dissolved ammonium and SRP increased. Elevated macroinvertebrate densities, at least of the order of 10^4 individuals/m^2, suggested that the diffusive-flux estimates may be significantly enhanced by bioturbation. In addition, heat-flux modeling indicated that groundwater advection of nutrients could also significantly contribute to internal nutrient loading. Accurate environmental assessments of lentic systems and reasonable expectations for point-source management require quantitative consideration of internal solute sources. © 2009 SETAC.

  8. Renesting by dusky Canada geese on the Copper River Delta, Alaska

    USGS Publications Warehouse

    Fondell, Thomas F.; Grand, James B.; Miller, David A.W.; Anthony, R. Michael

    2006-01-01

    The population of dusky Canada geese (Branta canadensis occidentalis; hereafter duskies) breeding on the Copper River Delta (CRD), Alaska, USA, has been in long-term decline, largely as a result of reduced productivity. Estimates of renesting rates by duskies may be useful for adjusting estimates of the size of the breeding population derived from aerial surveys and for understanding population dynamics. We used a marked population of dusky females to obtain estimates of renesting propensity and renesting interval on the CRD, 1999–2000. Continuation nests, replacement nests initiated without a break in the laying sequence, resulted only after first nests were destroyed in the laying stage with ≤4 eggs laid. Renesting propensity declined with nest age from 72% in mid-laying to 30% in early incubation. Between first nests and renests, mean interval was 11.9 ± 0.6 days, mean distance was 74.5 m (range 0–214 m), and clutch size declined 0.9 ± 0.4 eggs. We incorporated our renesting estimates and available estimates of other nesting parameters into an individual-based model to predict the proportion of first nests, continuation nests, and renests, and to examine female success on the CRD, 1997–2000. Our model predicted that 19–36% of nests each year were continuation nests and renests. Also, through 15 May (the approx. date of breeding ground surveys), 1.1–1.3 nests were initiated per female. Thus, the number of nests per female would have a significant, though relatively consistent, effect on adjusting the relation between numbers of nests found on ground surveys versus numbers of birds seen during aerial surveys. We also suggest a method that managers could use to predict nests per female using nest success of early nests. Our model predicted that relative to observed estimates of nest success, female success was 32–100% greater, due to replacement nests. Thus, although nest success remains low, production for duskies was higher than previously thought. For dusky Canada geese, managers need to consider both continuation nests and renests in designing surveys and in calculating adjustment factors for the expansion of aerial survey data using nest densities.

  9. Pneumocystis pneumonia outbreak among renal transplant recipients at a North American transplant center: Risk factors and implications for infection control.

    PubMed

    Mulpuru, Sunita; Knoll, Greg; Weir, Colleen; Desjardins, Marc; Johnson, Daniel; Gorn, Ivan; Fairhead, Todd; Bissonnette, Janice; Bruce, Natalie; Toye, Baldwin; Suh, Kathryn; Roth, Virginia

    2016-04-01

    Pneumocystis pneumonia is a severe opportunistic fungal infection. Outbreaks among renal transplant recipients have been reported in Europe and Japan, but never in North America. We conducted a retrospective case-control study among adult renal transplant recipients at a Canadian center, using a 3:1 matching scheme. Ten cases and 30 controls were matched based on initial transplantation date, and all patients received prophylaxis with trimethoprim-sulfamethoxazole for 1 year posttransplantation. The median time between transplantation and infection was 10.2 years, and all patients survived. Compared with controls, case patients had statistically lower estimated glomerular filtration rate (29.3 mL/min vs 66.3 mL/min; P = .028) and lymphopenia (0.51 × 10(9)/L vs 1.25 × 10(9)/L; P = .002). Transmission mapping revealed significant overlap in the clinic and laboratory visits among case vs control patients (P = .0002). One hundred percent of patients (4 out of 4) successfully genotyped had the same strain of Pneumocystis jirovecii. Our study demonstrated an outbreak of pneumocystis more than 10 years following initial transplantation, despite using recommended initial prophylaxis. We identified low estimated glomerular filtration rate and lymphopenia as risk factors for infection. Overlapping ambulatory care visits were identified as important potential sources of infection transmission, suggesting that institutions should re-evaluate policy and infrastructure strategies to interrupt transmission of respiratory pathogens. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    KRASNITZ,A.; VENUGOPALAN,R.

    The dynamics of low-x partons in the transverse plane of a high-energy nuclear collision is classical, and therefore admits a fully non-perturbative numerical treatment. The authors report results of a recent study estimating the initial energy density in the central region of a collision. Preliminary estimates of the number of gluons per unit rapidity, and the initial transverse momentum distribution of gluons, are also provided.

  11. Did Better Colleges Bring Better Jobs? Estimating the Effects of College Quality on Initial Employment for College Graduates in China

    ERIC Educational Resources Information Center

    Yu, Li

    2017-01-01

    The unemployment problem of college students in China has drawn much attention from academics and society. Using the 2011 College Student Labor Market (CSLM) survey data from Tsinghua University, this paper estimated the effects of college quality on initial employment, including employment status and employment unit ownership for fresh college…

  12. Economic cost of initial attack and large-fire suppression

    Treesearch

    Armando González-Cabán

    1983-01-01

    A procedure has been developed for estimating the economic cost of initial attack and large-fire suppression. The procedure uses a per-unit approach to estimate total attack and suppression costs on an input-by-input basis. Fire management inputs (FMIs) are the production units used. All direct and indirect costs are charged to the FMIs. With the unit approach, all...

  13. Using the Concept of "Population Dose" in Planning and Evaluating Community-Level Obesity Prevention Initiatives

    ERIC Educational Resources Information Center

    Cheadle, Allen; Schwartz, Pamela M.; Rauzon, Suzanne; Bourcier, Emily; Senter, Sandra; Spring, Rebecca; Beery, William L.

    2013-01-01

    When planning and evaluating community-level initiatives focused on policy and environment change, it is useful to have estimates of the impact on behavioral outcomes of particular strategies (e.g., building a new walking trail to promote physical activity). We have created a measure of estimated strategy-level impact--"population dose"--based on…

  14. Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com; Y

    Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or post-reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average μ-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of brain and chest were used to evaluate TOF-MLAA with various initial estimates, which include the object filled with water uniformly (conventional initial estimate), bone uniformly, the average μ-value uniformly (IAM magnitude initialization method), and the perfect spatial μ-distribution but with a wrong magnitude (initialization in terms of distribution). 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR-derived μ-maps was also evaluated using computed tomography μ-maps as the gold standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases. Both 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce a quantitative μ-map/emission when the corrections for physical effects such as scatter and randoms were included. The average μ-value obtained from the MR-derived μ-map was accurate to within 5% with corrections for bone, fat, and uniform lungs. Conclusions: The proposed IAM-TOF-MLAA can produce a quantitative μ-map without any calibration provided that there are sufficient counts in the measured data. For low-count data, noise reduction and additional regularization/rescaling techniques need to be applied and investigated. The average μ-value within the object is prior information which can be extracted from MR and patient databases, and it is feasible to obtain an accurate average μ-value using an MR-derived μ-map with corrections, as demonstrated in this work.

  15. The dynamic role of parental influences in preventing adolescent smoking initiation.

    PubMed

    Mahabee-Gittens, E Melinda; Xiao, Yang; Gordon, Judith S; Khoury, Jane C

    2013-04-01

    As adolescents grow, protective parental influences become less important and peer influences take precedence in adolescent's initiation of smoking. It is unknown how and when this occurs. We sought to: prospectively estimate incidence rates of smoking initiation from late childhood through mid-adolescence, identify important risk and protective parental influences on smoking initiation, and examine their dynamic nature in order to identify key ages. Longitudinal data from the National Survey of Parents and Youth of 8 nationally representative age cohorts (9-16 years) of never smokers in the U.S. were used (N=5705 dyads at baseline). Analysis involved a series of lagged logistic regression models using a cohort-sequential design. The mean sample cumulative incidence rates of tobacco use increased from 1.8% to 22.5% between the 9 and 16 years old age cohorts. Among risk factors, peer smoking was the most important across all ages; 11-15 year-olds who spent time with peers who smoked had 2 to 6.5 times higher odds of initiating smoking. Parent-youth connectedness significantly decreased the odds of smoking initiation by 14-37% in 11-14 year-olds; parental monitoring and punishment for smoking decreased the odds of smoking initiation risk by 36-59% in 10-15 year-olds, and by 15-28% in 12-14 year-olds, respectively. Parental influences are important in protecting against smoking initiation across adolescence. At the same time, association with peers who smoke is a very strong risk factor. Our findings provide empirical evidence to suggest that in order to prevent youth from initiating smoking, parents should be actively involved in their adolescents' lives and guard them against association with peers who smoke. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Effects of New Funding Models for Patient-Centered Medical Homes on Primary Care Practice Finances and Services: Results of a Microsimulation Model.

    PubMed

    Basu, Sanjay; Phillips, Russell S; Song, Zirui; Landon, Bruce E; Bitton, Asaf

    2016-09-01

    We assess the financial implications for primary care practices of participating in patient-centered medical home (PCMH) funding initiatives. We estimated practices' changes in net revenue under 3 PCMH funding initiatives: increased fee-for-service (FFS) payments, traditional FFS with additional per-member-per-month (PMPM) payments, or traditional FFS with PMPM and pay-for-performance (P4P) payments. Net revenue estimates were based on a validated microsimulation model utilizing national practice surveys. Simulated practices reflecting the national range of practice size, location, and patient population were examined under several potential changes in clinical services: investments in patient tracking, communications, and quality improvement; increased support staff; altered visit templates to accommodate longer visits, telephone visits or electronic visits; and extended service delivery hours. Under the status quo of traditional FFS payments, clinics operate near their maximum estimated possible net revenue levels, suggesting they respond strongly to existing financial incentives. Practices gained substantial additional net annual revenue per full-time physician under PMPM or PMPM plus P4P payments ($113,300 per year, 95% CI, $28,500 to $198,200) but not under increased FFS payments (-$53,500, 95% CI, -$69,700 to -$37,200), after accounting for costs of meeting PCMH funding requirements. Expanding services beyond minimum required levels decreased net revenue, because traditional FFS revenues decreased. PCMH funding through PMPM payments could substantially improve practice finances but will not offer sufficient financial incentives to expand services beyond minimum requirements for PCMH funding. © 2016 Annals of Family Medicine, Inc.

  17. The kinematic evolution of the Macquarie Plate: A case study for the fragmentation of oceanic lithosphere

    NASA Astrophysics Data System (ADS)

    Choi, Hakkyum; Kim, Seung-Sep; Dyment, Jérôme; Granot, Roi; Park, Sung-Hyun; Hong, Jong Kuk

    2017-11-01

    The tectonic evolution of the Southeast Indian Ridge (SEIR), and in particular of its easternmost edge, has not been constrained by high-resolution shipboard data and therefore the kinematic details of its behavior are uncertain. Using new shipboard magnetic data obtained by R/VIB Araon and M/V L'Astrolabe along the easternmost SEIR and available archived magnetic data, we estimated the finite rotation parameters of the Macquarie-Antarctic and Australian-Antarctic motions for eight anomalies (1o, 2, 2Ay, 2Ao, 3y, 3o, 3Ay, and 3Ao). These new finite rotations indicate that the Macquarie Plate since its creation ∼6.24 million years ago behaved as an independent and rigid plate, confirming previous estimates. The change in the Australian-Antarctic spreading direction from N-S to NW-SE appears to coincide with the formation of the Macquarie Plate at ∼6.24 Ma. Analysis of the estimated plate motions indicates that the initiation and growth stages of the Macquarie Plate resemble the kinematic evolution of other microplates and continental breakup, whereby a rapid acceleration in angular velocity took place after its initial formation, followed by a slow decay, suggesting that a decrease in the resistive strength force might have played a significant role in the kinematic evolution of the microplate. The motions of the Macquarie Plate during its growth stages may have been further enhanced by the increased subducting rates along the Hjort Trench, while the Macquarie Plate has exhibited constant growth by seafloor spreading.

  18. Bilingualism does not alter cognitive decline or dementia risk among Spanish-speaking immigrants.

    PubMed

    Zahodne, Laura B; Schofield, Peter W; Farrell, Meagan T; Stern, Yaakov; Manly, Jennifer J

    2014-03-01

    Clinic-based studies suggest that dementia is diagnosed at older ages in bilinguals compared with monolinguals. The current study sought to test this hypothesis in a large, prospective, community-based study of initially nondemented Hispanic immigrants living in a Spanish-speaking enclave of northern Manhattan. Participants included 1,067 participants in the Washington/Hamilton Heights Inwood Columbia Aging Project (WHICAP) who were tested in Spanish and followed at 18-24 month intervals for up to 23 years. Spanish-English bilingualism was estimated via both self-report and an objective measure of English reading level. Multilevel models for change estimated the independent effects of bilingualism on cognitive decline in 4 domains: episodic memory, language, executive function, and speed. Over the course of the study, 282 participants developed dementia. Cox regression was used to estimate the independent effect of bilingualism on dementia conversion. Covariates included country of origin, gender, education, time spent in the United States, recruitment cohort, and age at enrollment. Independent of the covariates, bilingualism was associated with better memory and executive function at baseline. However, bilingualism was not independently associated with rates of cognitive decline or dementia conversion. Results were similar whether bilingualism was measured via self-report or an objective test of reading level. This study does not support a protective effect of bilingualism on age-related cognitive decline or the development of dementia. In this sample of Hispanic immigrants, bilingualism is related to higher initial scores on cognitive tests and higher educational attainment and may not represent a unique source of cognitive reserve. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  19. Bilingualism Does Not Alter Cognitive Decline or Dementia Risk among Spanish-Speaking Immigrants

    PubMed Central

    Zahodne, Laura B.; Schofield, Peter W.; Farrell, Meagan T.; Stern, Yaakov; Manly, Jennifer J.

    2013-01-01

    Objective Clinic-based studies suggest that dementia is diagnosed at older ages in bilinguals compared to monolinguals. The current study sought to test this hypothesis in a large, prospective, community-based study of initially non-demented Hispanic immigrants living in a Spanish-speaking enclave of Northern Manhattan. Method Participants included 1,067 participants in the Washington/Hamilton Heights Inwood Columbia Aging Project (WHICAP) who were tested in Spanish and followed at 18–24 month intervals for up to 23 years. Spanish-English bilingualism was estimated via both self-report and an objective measure of English reading level. Multilevel models for change estimated the independent effects of bilingualism on cognitive decline in four domains: episodic memory, language, executive function, and speed. Over the course of the study, 282 participants developed dementia. Cox regression was used to estimate the independent effect of bilingualism on dementia conversion. Covariates included country of origin, gender, education, time spent in the United States, recruitment cohort, and age at enrollment. Results Independent of the covariates, bilingualism was associated with better memory and executive function at baseline. However bilingualism was not independently associated with rates of cognitive decline or dementia conversion. Results were similar whether bilingualism was measured via self-report or an objective test of reading level. Conclusions This study does not support a protective effect of bilingualism on age-related cognitive decline or the development of dementia. In this sample of Hispanic immigrants, bilingualism is related to higher initial scores on cognitive tests and higher educational attainment and may not represent a unique source of cognitive reserve. PMID:24188113

  20. Simulating estimation of California fossil fuel and biosphere carbon dioxide exchanges combining in situ tower and satellite column observations

    DOE PAGES

    Fischer, Marc L.; Parazoo, Nicholas; Brophy, Kieran; ...

    2017-03-09

    Here, we report simulation experiments estimating the uncertainties in California regional fossil fuel and biosphere CO2 exchanges that might be obtained by using an atmospheric inverse modeling system driven by the combination of ground-based observations of radiocarbon and total CO2, together with column-mean CO2 observations from NASA's Orbiting Carbon Observatory (OCO-2). The work includes an initial examination of statistical uncertainties in prior models for CO2 exchange, in radiocarbon-based fossil fuel CO2 measurements, in OCO-2 measurements, and in a regional atmospheric transport modeling system. Using these nominal assumptions for measurement and model uncertainties, we find that flask measurements of radiocarbon and total CO2 at 10 towers can be used to distinguish between different fossil fuel emission data products for major urban regions of California. We then show that the combination of flask and OCO-2 observations yields posterior uncertainties in monthly-mean fossil fuel emissions of ~5–10%, levels likely useful for policy-relevant evaluation of bottom-up fossil fuel emission estimates. Similarly, we find that inversions yield uncertainties in monthly biosphere CO2 exchange of ~6–12%, depending on season, providing useful information on net carbon uptake in California's forests and agricultural lands. Finally, initial sensitivity analysis suggests that obtaining the above results requires control of systematic biases below approximately 0.5 ppm, placing requirements on accuracy of the atmospheric measurements, background subtraction, and atmospheric transport modeling.
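
    The posterior flux uncertainties quoted above come from a Bayesian synthesis inversion; the sketch
    below shows the standard posterior-covariance calculation for a toy linear problem. The dimensions,
    prior errors, observation errors, and random "transport" operator are illustrative assumptions, not
    the values or Jacobians used in the study.

        import numpy as np

        # Toy linear inversion: y = H x + noise, with x = monthly regional fluxes.
        n_flux, n_obs = 12, 300
        rng = np.random.default_rng(0)
        H = rng.normal(size=(n_obs, n_flux))       # transport / footprint operator (toy)
        B = np.diag(np.full(n_flux, 0.50 ** 2))    # prior flux error covariance (50% errors)
        R = np.diag(np.full(n_obs, 0.5 ** 2))      # observation error covariance (~0.5 ppm)

        # Posterior covariance of the fluxes: P = (B^-1 + H^T R^-1 H)^-1
        P = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)
        print("prior sd     :", np.sqrt(np.diag(B))[0])
        print("posterior sd :", np.sqrt(np.diag(P))[0])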

  1. SN 2013ab: a normal Type IIP supernova in NGC 5669

    NASA Astrophysics Data System (ADS)

    Bose, Subhash; Valenti, Stefano; Misra, Kuntal; Pumo, Maria Letizia; Zampieri, Luca; Sand, David; Kumar, Brijesh; Pastorello, Andrea; Sutaria, Firoza; Maccarone, Thomas J.; Kumar, Brajesh; Graham, M. L.; Howell, D. Andrew; Ochner, Paolo; Chandola, H. C.; Pandey, Shashi B.

    2015-07-01

    We present densely sampled ultraviolet/optical photometric and low-resolution optical spectroscopic observations of the Type IIP supernova 2013ab in the nearby (˜24 Mpc) galaxy NGC 5669, from 2 to 190 d after explosion. Continuous photometric observations, with a cadence of typically one day to one week, were acquired with the 1-2 m class telescopes in the Las Cumbres Observatory Global Telescope network, ARIES telescopes in India and various other telescopes around the globe. The light curve and spectra suggest that the supernova (SN) is a normal Type IIP event with a plateau duration of ˜80 d and a mid-plateau absolute visual magnitude of -16.7, although with a steeper decline during the plateau (0.92 mag (100 d)^-1 in the V band) relative to other archetypal SNe of similar brightness. The velocity profile of SN 2013ab shows a striking resemblance to those of SNe 1999em and 2012aw. Following the Rabinak & Waxman prescription, the initial temperature evolution of the SN emission allows us to estimate the progenitor radius to be ˜800 R⊙, indicating that the SN originated from a red supergiant star. The distance to the SN host galaxy is estimated to be 24.3 Mpc from the expanding photosphere method. From our observations, we estimate that 0.064 M⊙ of ^56Ni was synthesized in the explosion. General relativistic, radiation hydrodynamical modelling of the SN infers an explosion energy of 0.35 × 10^51 erg, a progenitor mass (at the time of explosion) of ˜9 M⊙ and an initial radius of ˜600 R⊙.

  2. Effects of New Funding Models for Patient-Centered Medical Homes on Primary Care Practice Finances and Services: Results of a Microsimulation Model

    PubMed Central

    Basu, Sanjay; Phillips, Russell S.; Song, Zirui; Landon, Bruce E.; Bitton, Asaf

    2016-01-01

    PURPOSE We assess the financial implications for primary care practices of participating in patient-centered medical home (PCMH) funding initiatives. METHODS We estimated practices’ changes in net revenue under 3 PCMH funding initiatives: increased fee-for-service (FFS) payments, traditional FFS with additional per-member-per-month (PMPM) payments, or traditional FFS with PMPM and pay-for-performance (P4P) payments. Net revenue estimates were based on a validated microsimulation model utilizing national practice surveys. Simulated practices reflecting the national range of practice size, location, and patient population were examined under several potential changes in clinical services: investments in patient tracking, communications, and quality improvement; increased support staff; altered visit templates to accommodate longer visits, telephone visits or electronic visits; and extended service delivery hours. RESULTS Under the status quo of traditional FFS payments, clinics operate near their maximum estimated possible net revenue levels, suggesting they respond strongly to existing financial incentives. Practices gained substantial additional net annual revenue per full-time physician under PMPM or PMPM plus P4P payments ($113,300 per year, 95% CI, $28,500 to $198,200) but not under increased FFS payments (−$53,500, 95% CI, −$69,700 to −$37,200), after accounting for costs of meeting PCMH funding requirements. Expanding services beyond minimum required levels decreased net revenue, because traditional FFS revenues decreased. CONCLUSIONS PCMH funding through PMPM payments could substantially improve practice finances but will not offer sufficient financial incentives to expand services beyond minimum requirements for PCMH funding. PMID:27621156
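
    A back-of-the-envelope version of the payment comparison, with hypothetical panel size, PMPM rate,
    P4P bonus, FFS uplift, and PCMH operating costs; the study itself used a validated microsimulation
    model, not this arithmetic.

        # Hypothetical inputs, per full-time physician per year
        panel_size       = 2000       # attributed patients
        pmpm_rate        = 4.00       # $ per member per month
        p4p_bonus        = 30_000.0   # $ if quality targets are met
        pcmh_annual_cost = 40_000.0   # $ to meet PCMH requirements (staff, tracking, hours)
        ffs_uplift       = 25_000.0   # $ from an increased fee-for-service schedule

        pmpm_revenue = panel_size * pmpm_rate * 12
        net_pmpm_p4p = pmpm_revenue + p4p_bonus - pcmh_annual_cost
        net_ffs_only = ffs_uplift - pcmh_annual_cost

        print(f"PMPM + P4P net change : ${net_pmpm_p4p:,.0f} per year")
        print(f"Increased FFS net     : ${net_ffs_only:,.0f} per year")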

  3. Adjusted adaptive Lasso for covariate model-building in nonlinear mixed-effect pharmacokinetic models.

    PubMed

    Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O

    2017-02-01

    One important aim in population pharmacokinetics (PK) and pharmacodynamics is identification and quantification of the relationships between the parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso does not possess the oracle property, that is, the property of asymptotically performing as though the true underlying model had been given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess oracle properties; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), to take into account the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data were simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and error of the estimated covariate coefficient. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in nonlinear mixed-effect models.
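
    A linear-regression sketch of the weighting idea: ordinary adaptive Lasso uses weights 1/|beta_ML|,
    while the adjusted variant described above uses SE(beta_ML)/|beta_ML|, so imprecisely estimated
    (collinear) coefficients receive a heavier penalty. The nonlinear mixed-effects setting of the paper
    is replaced here by ordinary least squares purely for illustration; the weighted L1 problem is solved
    with the usual column-rescaling trick.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(1)
        n, p = 200, 6
        X = rng.normal(size=(n, p))
        X[:, 1] = 0.7 * X[:, 0] + 0.3 * rng.normal(size=n)   # induce collinearity
        y = 1.5 * X[:, 0] + rng.normal(size=n)

        # "ML" (here OLS) estimates and their standard errors
        beta_ml, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta_ml
        sigma2 = resid @ resid / (n - p)
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

        weights = se / np.abs(beta_ml)                 # AALasso-style initial weights (assumed form)
        fit = Lasso(alpha=0.05).fit(X / weights, y)    # rescale columns -> weighted L1 penalty
        beta_aalasso = fit.coef_ / weights             # map back to the original scale
        print(np.round(beta_aalasso, 3))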

  4. Simulating estimation of California fossil fuel and biosphere carbon dioxide exchanges combining in situ tower and satellite column observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fischer, Marc L.; Parazoo, Nicholas; Brophy, Kieran

    Here, we report simulation experiments estimating the uncertainties in California regional fossil fuel and biosphere CO2 exchanges that might be obtained by using an atmospheric inverse modeling system driven by the combination of ground-based observations of radiocarbon and total CO2, together with column-mean CO2 observations from NASA's Orbiting Carbon Observatory (OCO-2). The work includes an initial examination of statistical uncertainties in prior models for CO2 exchange, in radiocarbon-based fossil fuel CO2 measurements, in OCO-2 measurements, and in a regional atmospheric transport modeling system. Using these nominal assumptions for measurement and model uncertainties, we find that flask measurements of radiocarbon and total CO2 at 10 towers can be used to distinguish between different fossil fuel emission data products for major urban regions of California. We then show that the combination of flask and OCO-2 observations yields posterior uncertainties in monthly-mean fossil fuel emissions of ~5–10%, levels likely useful for policy-relevant evaluation of bottom-up fossil fuel emission estimates. Similarly, we find that inversions yield uncertainties in monthly biosphere CO2 exchange of ~6–12%, depending on season, providing useful information on net carbon uptake in California's forests and agricultural lands. Finally, initial sensitivity analysis suggests that obtaining the above results requires control of systematic biases below approximately 0.5 ppm, placing requirements on accuracy of the atmospheric measurements, background subtraction, and atmospheric transport modeling.

  5. The Effects of Hot Corrosion Pits on the Fatigue Resistance of a Disk Superalloy

    NASA Technical Reports Server (NTRS)

    Gabb, Timothy P.; Telesman, Jack; Hazel, Brian; Mourer, David P.

    2009-01-01

    The effects of hot corrosion pits on low cycle fatigue life and failure modes of the disk superalloy ME3 were investigated. Low cycle fatigue specimens were subjected to hot corrosion exposures producing pits, then tested at low and high temperatures. Fatigue lives and failure initiation points were compared to those of specimens without corrosion pits. Several tests were interrupted to estimate the fraction of fatigue life that fatigue cracks initiated at pits. Corrosion pits significantly reduced fatigue life by 60 to 98 percent. Fatigue cracks initiated at a very small fraction of life for high temperature tests, but initiated at higher fractions in tests at low temperature. Critical pit sizes required to promote fatigue cracking were estimated, based on measurements of pits initiating cracks on fracture surfaces.

  6. Prevalence and predictors of anaemia in patients with HIV infection at the initiation of combined antiretroviral therapy in Xinjiang, China.

    PubMed

    Mijiti, Peierdun; Yuexin, Zhang; Min, Liu; Wubuli, Maimaitili; Kejun, Pan; Upur, Halmurat

    2015-03-01

    We retrospectively analysed routinely collected baseline data of 2252 patients with HIV infection registered in the National Free Antiretroviral Treatment Program in Xinjiang province, China, from 2006 to 2011 to estimate the prevalence and predictors of anaemia at the initiation of combined antiretroviral therapy. Anaemia was diagnosed using the criteria set forth by the World Health Organisation, and univariate and multivariate logistic regression analyses were performed to determine its predictors. The prevalences of mild, moderate, and severe anaemia at the initiation of combined antiretroviral therapy were 19.2%, 17.1%, and 2.6%, respectively. Overall, 38.9% of the patients were anaemic at the initiation of combined antiretroviral therapy. The multivariate logistic regression analysis indicated that Uyghur ethnicity, female gender, lower CD4 count, lower body mass index value, self-reported tuberculosis infection, and oral candidiasis were associated with a higher prevalence of anaemia, whereas higher serum alanine aminotransferase level was associated with a lower prevalence of anaemia. The results suggest that the overall prevalence of anaemia at the initiation of combined antiretroviral therapy in patients with HIV infection is high in Xinjiang, China, but severe anaemia is uncommon. Patients in China should be routinely checked for anaemia prior to combined antiretroviral therapy initiation, and healthcare providers should carefully select the appropriate first-line combined antiretroviral therapy regimens for anaemic patients. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  7. Estimating mangrove aboveground biomass from airborne LiDAR data: a case study from the Zambezi River delta

    NASA Astrophysics Data System (ADS)

    Fatoyinbo, Temilola; Feliciano, Emanuelle A.; Lagomasino, David; Kuk Lee, Seung; Trettin, Carl

    2018-02-01

    Mangroves are ecologically and economically important forested wetlands with the highest carbon (C) density of all terrestrial ecosystems. Because of their exceptionally large C stocks and importance as a coastal buffer, their protection and restoration has been proposed as an effective mitigation strategy for climate change. The inclusion of mangroves in mitigation strategies requires the quantification of C stocks (both above and belowground) and changes to accurately calculate emissions and sequestration. A growing number of countries are becoming interested in using mitigation initiatives, such as REDD+ (reducing emissions from deforestation and forest degradation), in these unique coastal forests. However, it is not yet clear how methods to measure C traditionally used for other ecosystems can be modified to estimate biomass in mangroves with the precision and accuracy needed for these initiatives. Airborne Lidar (ALS) data have often been proposed as the most accurate way for larger scale assessments, but the application of ALS for coastal wetlands is scarce, primarily due to a lack of contemporaneous ALS and field measurements. Here, we evaluated the variability in field and Lidar-based estimates of aboveground biomass (AGB) through the combination of different local and regional allometric models and standardized height metrics that are comparable across spatial resolutions and sensor types, the end result being a simplified approach for accurately estimating mangrove AGB at large scales and determining the uncertainty by combining multiple allometric models. We then quantified wall-to-wall AGB stocks of a tall mangrove forest in the Zambezi Delta, Mozambique. Our results indicate that the Lidar H100 height metric correlates well with AGB estimates, with R^2 between 0.80 and 0.88 and RMSE of 33% or less. When comparing Lidar H100 AGB derived from three allometric models, mean AGB values range from 192 Mg ha^-1 up to 252 Mg ha^-1. We suggest the best model to predict AGB was based on the East Africa-specific allometry and a power-based regression that used Lidar H100 as the height input, with an R^2 of 0.85 and an RMSE of 122 Mg ha^-1 or 33%. The total AGB of the Lidar inventoried mangrove area (6654 ha) was 1 350 902 Mg with a mean AGB of 203 ± 166 Mg ha^-1. Because the allometry suggested here was developed using standardized height metrics, the models can be used to generate AGB estimates from other, more readily accessible remote sensing instruments over other mangrove ecosystems on a large scale, and as part of future carbon monitoring efforts in mangroves.
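
    A sketch of the power-law height-to-biomass regression (AGB = a * H100^b) fitted in log-log space;
    the plot-level values below are invented and the fitted coefficients are not those of the East
    Africa-specific allometry used in the study.

        import numpy as np

        # Hypothetical plot data: Lidar H100 canopy height (m) and field AGB (Mg/ha)
        h100 = np.array([8.0, 12.0, 15.0, 18.0, 22.0, 25.0, 28.0, 32.0])
        agb = np.array([60.0, 110.0, 150.0, 190.0, 260.0, 300.0, 360.0, 430.0])

        # Fit log(AGB) = log(a) + b * log(H100)
        b, log_a = np.polyfit(np.log(h100), np.log(agb), 1)
        a = np.exp(log_a)
        pred = a * h100 ** b
        r2 = 1 - np.sum((agb - pred) ** 2) / np.sum((agb - agb.mean()) ** 2)
        rmse = np.sqrt(np.mean((agb - pred) ** 2))
        print(f"AGB ~ {a:.2f} * H100^{b:.2f}   R^2 = {r2:.2f}   RMSE = {rmse:.1f} Mg/ha")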

  8. Patient-centered medical home implementation and primary care provider turnover.

    PubMed

    Sylling, Philip W; Wong, Edwin S; Liu, Chuan-Fen; Hernandez, Susan E; Batten, Adam J; Helfrich, Christian D; Nelson, Karin; Fihn, Stephan D; Hebert, Paul L

    2014-12-01

    The Veterans Health Administration (VHA) began implementing a patient-centered medical home (PCMH) model of care delivery in April 2010 through its Patient Aligned Care Team (PACT) initiative. PACT represents a substantial system reengineering of VHA primary care and its potential effect on primary care provider (PCP) turnover is an important but unexplored relationship. This study examined the association between a system-wide PCMH implementation and PCP turnover. This was a retrospective, longitudinal study of VHA-employed PCPs spanning 29 calendar quarters before PACT and eight quarters of PACT implementation. PCP employment periods were identified from administrative data and turnover was defined by an indicator on the last quarter of each uncensored period. An interrupted time series model was used to estimate the association between PACT and turnover, adjusting for secular trend and seasonality, provider and job characteristics, and local unemployment. We calculated average marginal effects (AME), which reflected the change in turnover probability associated with PACT implementation. The quarterly rate of PCP turnover was 3.06% before PACT and 3.38% after initiation of PACT. In adjusted analysis, PACT was associated with a modest increase in turnover (AME=4.0 additional PCPs per 1000 PCPs per quarter, P=0.004). Models with interaction terms suggested that the PACT-related change in turnover was increasing in provider age and experience. PACT was associated with a modest increase in PCP turnover, concentrated among older and more experienced providers, during initial implementation. Our findings suggest that policymakers should evaluate potential workforce effects when implementing PCMH.
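
    A minimal interrupted-time-series sketch in the spirit of the turnover model: a logit on
    provider-quarter data with a secular trend, seasonal dummies, a PACT-period indicator, and
    provider/job covariates, followed by the average marginal effect (AME) of the PACT indicator.
    The file and column names are hypothetical, not the VHA data.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical panel: one row per provider-quarter with a 0/1 turnover indicator.
        df = pd.read_csv("pcp_quarters_example.csv")

        model = smf.logit(
            "turnover ~ pact + quarter_index + C(season) + age + years_experience"
            " + local_unemployment",
            data=df,
        ).fit(disp=0)

        ame = model.get_margeff(at="overall")
        print(ame.summary())   # AME of 'pact' = change in quarterly turnover probability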

  9. Prioritizing Scientific Initiatives.

    ERIC Educational Resources Information Center

    Bahcall, John N.

    1991-01-01

    Discussed is the way in which a limited number of astronomy research initiatives were chosen and prioritized based on a consensus of members from the Astronomy and Astrophysics Survey Committee. A list of recommended equipment initiatives and estimated costs is provided. (KR)

  10. Wind scatterometry with improved ambiguity selection and rain modeling

    NASA Astrophysics Data System (ADS)

    Draper, David Willis

    Although generally accurate, the quality of SeaWinds on QuikSCAT scatterometer ocean vector winds is compromised by certain natural phenomena and retrieval algorithm limitations. This dissertation addresses three main contributors to scatterometer estimate error: poor ambiguity selection, estimate uncertainty at low wind speeds, and rain corruption. A quality assurance (QA) analysis performed on SeaWinds data suggests that about 5% of SeaWinds data contain ambiguity selection errors and that scatterometer estimation error is correlated with low wind speeds and rain events. Ambiguity selection errors are partly due to the "nudging" step (initialization from outside data). A sophisticated new non-nudging ambiguity selection approach produces generally more consistent wind than the nudging method in moderate wind conditions. The non-nudging method selects 93% of the same ambiguities as the nudged data, validating both techniques, and indicating that ambiguity selection can be accomplished without nudging. Variability at low wind speeds is analyzed using tower-mounted scatterometer data. According to theory, below a threshold wind speed, the wind fails to generate the surface roughness necessary for wind measurement. A simple analysis suggests the existence of the threshold in much of the tower-mounted scatterometer data. However, the backscatter does not "go to zero" beneath the threshold in an uncontrolled environment as theory suggests, but rather has a mean drop and higher variability below the threshold. Rain is the largest weather-related contributor to scatterometer error, affecting approximately 4% to 10% of SeaWinds data. A simple model formed via comparison of co-located TRMM PR and SeaWinds measurements characterizes the average effect of rain on SeaWinds backscatter. The model is generally accurate to within 3 dB over the tropics. The rain/wind backscatter model is used to simultaneously retrieve wind and rain from SeaWinds measurements. The simultaneous wind/rain (SWR) estimation procedure can improve wind estimates during rain, while providing a scatterometer-based rain rate estimate. SWR also affords improved rain flagging for low to moderate rain rates. QuikSCAT-retrieved rain rates correlate well with TRMM PR instantaneous measurements and TMI monthly rain averages. SeaWinds rain measurements can be used to supplement data from other rain-measuring instruments, filling spatial and temporal gaps in coverage.

  11. The Program Cost of a Brief Video Intervention Shown in Sexually Transmitted Disease Clinic Waiting Rooms.

    PubMed

    Gift, Thomas L; OʼDonnell, Lydia N; Rietmeijer, Cornelis A; Malotte, Kevin C; Klausner, Jeffrey D; Margolis, Andrew D; Borkowf, Craig B; Kent, Charlotte K; Warner, Lee

    2016-01-01

    Patients in sexually transmitted disease (STD) clinic waiting rooms represent a potential audience for delivering health messages via video-based interventions. A controlled trial at 3 sites found that patients exposed to one intervention, Safe in the City, had a significantly lower incidence of STDs compared with patients in the control condition. An evaluation of the intervention's cost could help determine whether such interventions are programmatically viable. The cost of producing the Safe in the City intervention was estimated using study records, including logs, calendars, and contract invoices. Production costs were divided by the 1650 digital video kits initially fabricated to get an estimated cost per digital video. Clinic costs for showing the video in waiting rooms included staff time costs for equipment operation and hardware depreciation and were estimated for the 21-month study observation period retrospectively. The intervention cost an estimated $416,966 to develop, equaling $253 per digital video disk produced. Per-site costs to show the video intervention were estimated to be $2699 during the randomized trial. The cost of producing and implementing Safe in the City intervention suggests that similar interventions could potentially be produced and made available to end users at a price that would both cover production costs and be low enough that the end users could afford them.
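
    The per-unit figures reported above follow from simple division; a quick check using only the
    numbers quoted in the abstract:

        production_cost = 416_966    # total development cost ($)
        kits_produced   = 1_650      # digital video kits fabricated
        per_site_cost   = 2_699      # cost of showing the video per site over the trial ($)
        study_months    = 21         # observation period (months)

        print(f"cost per video kit   : ${production_cost / kits_produced:,.0f}")   # ~$253
        print(f"showing cost per site: ${per_site_cost / study_months:,.0f} per month")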

  12. A modified NARMAX model-based self-tuner with fault tolerance for unknown nonlinear stochastic hybrid systems with an input-output direct feed-through term.

    PubMed

    Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W

    2014-01-01

    A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. Besides, an effective state-space self-tuner with fault tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter estimation algorithm, is utilized to achieve the parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures by fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Tuning a physically-based model of the air-sea gas transfer velocity

    NASA Astrophysics Data System (ADS)

    Jeffery, C. D.; Robinson, I. S.; Woolf, D. K.

    Air-sea gas transfer velocities are estimated for one year using a 1-D upper-ocean model (GOTM) and a modified version of the NOAA-COARE transfer velocity parameterization. Tuning parameters are evaluated with the aim of bringing the physically based NOAA-COARE parameterization in line with current estimates, based on simple wind-speed-dependent models derived from bomb-radiocarbon inventories and deliberate tracer release experiments. We suggest that A = 1.3 and B = 1.0, for the sub-layer scaling parameter and the bubble-mediated exchange, respectively, are consistent with the global average CO2 transfer velocity k. Using these parameters and a simple second-order polynomial approximation, with respect to wind speed, we estimate a global annual average k for CO2 of 16.4 ± 5.6 cm h^-1 when using global mean winds of 6.89 m s^-1 from the NCEP/NCAR Reanalysis 1 1954-2000. The tuned model can be used to predict the transfer velocity of any gas, with appropriate treatment of the dependence on molecular properties including the strong solubility dependence of bubble-mediated transfer. For example, an initial estimate of the global average transfer velocity of DMS (a relatively soluble gas) is only 11.9 cm h^-1 whilst for less soluble methane the estimate is 18.0 cm h^-1.
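
    The abstract does not list the tuned polynomial coefficients, so as a rough cross-check the sketch
    below evaluates the widely used Wanninkhof (1992) quadratic, k660 ≈ 0.31 U10^2 (cm/h), at the quoted
    global mean wind, and then averages it over an assumed Rayleigh wind-speed distribution, since a
    nonlinear k should be averaged over the wind distribution rather than evaluated at the mean wind.

        import numpy as np

        u10_mean = 6.89   # global mean wind speed quoted above (m/s)
        print(f"k660 at the mean wind: {0.31 * u10_mean ** 2:.1f} cm/h "
              f"(abstract's global annual average: 16.4 cm/h)")

        # Average over an assumed Rayleigh wind distribution with the same mean wind speed
        winds = np.random.default_rng(2).rayleigh(scale=u10_mean / 1.2533, size=100_000)
        print(f"distribution-averaged k660: {np.mean(0.31 * winds ** 2):.1f} cm/h")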

  14. Different approaches to valuing the lost productivity of patients with migraine.

    PubMed

    Lofland, J H; Locklear, J C; Frick, K D

    2001-01-01

    To calculate and compare the human capital approach (HCA) and friction cost approach (FCA) methods for estimating the cost of lost productivity of migraineurs after the initiation of sumatriptan from a US societal perspective. Secondary, retrospective analysis of a prospective observational study. A mixed-model managed care organisation in western Pennsylvania, USA. Patients with migraine using sumatriptan therapy. Patient-reported questionnaires collected at baseline, 3 and 6 months after initiation of sumatriptan therapy. The cost of lost productivity estimated with the HCA and FCA methods. Of the 178 patients who completed the study, 51% were full-time employees, 13% were part-time, 18% were not working and 17% changed work status. Twenty-four percent reported a clerical or administrative position. From the HCA, the estimated total cost of lost productivity for 6 months following the initiation of sumatriptan was $US117905 (1996 values). From the FCA, the six-month estimated total cost of lost productivity ranged from $US28329 to $US117905 (1996 values). This was the first study to retrospectively estimate lost productivity of patients with migraine using the FCA methodology. Our results demonstrate that depending on the assumptions and illustrations employed, the FCA can yield lost productivity estimates that vary greatly as a percentage of the HCA estimate. Prospective investigations are needed to better determine the components and the nature of the lost productivity for chronic episodic diseases such as migraine headache.
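
    The two valuation methods differ mainly in how long lost work time is counted: the HCA values every
    lost day at the wage rate, while the FCA only counts losses until the work is made up or the worker
    is replaced (the friction period). A stylized comparison with invented wages and absence durations
    (not the study's data):

        daily_wage    = 120.0   # hypothetical average daily wage ($, 1996 values)
        days_lost     = 9.0     # migraine-related absence over 6 months (hypothetical)
        friction_days = 3.0     # days until the work is covered or made up (hypothetical)

        hca_cost = daily_wage * days_lost                      # human capital approach
        fca_cost = daily_wage * min(days_lost, friction_days)  # friction cost approach

        print(f"HCA estimate: ${hca_cost:,.0f}   FCA estimate: ${fca_cost:,.0f}")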

  15. Intrinsic fluorescence biomarkers in cells treated with chemopreventive drugs

    NASA Astrophysics Data System (ADS)

    Kirkpatrick, Nathaniel D.; Brands, William R.; Zou, Changping; Brewer, Molly A.; Utzinger, Urs

    2005-03-01

    Non-invasive monitoring of cellular metabolism offers promising insights into areas ranging from biomarkers for drug activity to cancer diagnosis. Fluorescence spectroscopy can be utilized in order to exploit endogenous fluorophores, typically metabolic co-factors nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide (FAD), and estimate the redox status of the sample. Fluorescence spectroscopy was applied to follow metabolic changes in epithelial ovarian cells as well as bladder epithelial cancer cells during treatment with a chemopreventive drug that initiates cellular quiescence. Fluorescence signals consistent with NADH, FAD, and tryptophan were measured to monitor cellular activity, redox status, and protein content. Cells were treated with varying concentrations of N-4-(hydroxyphenyl) retinamide (4-HPR) and measured in a stable environment with a sensitive fluorescence spectrometer. A subset of measurements was completed on a low concentration of cells to demonstrate feasibility for medical application such as in bladder or ovary washes. Results suggest that all of the cells responded with similar dose dependence but started at different estimated redox ratio baseline levels correlating with cell cycle, growth inhibition, and apoptosis assays. NADH and tryptophan related fluorescence changed significantly while FAD related fluorescence remained unaltered. Fluorescence data collected from approximately 1000 - 2000 cells, comparable to a bladder or ovary wash, was measurable and useful for future experiments. This study suggests that future intrinsic biomarker measurements may need to be most sensitive to changes in NADH and tryptophan related fluorescence while using FAD related fluorescence to help estimate the baseline redox ratio and predict response to chemopreventive agents.

  16. UXO Burial Prediction Fidelity

    DTIC Science & Technology

    2017-07-01

    been developed to predict the initial penetration depth of underwater mines. SERDP would like to know if and how these existing mine models could be... designed for near-cylindrical mines; for munitions, however, projectile-specific drag, lift, and moment coefficients are needed for estimating... as inputs. Other models have been built to estimate these initial conditions for mines dropped into water. Can these mine models be useful for

  17. Growth and yield predictions for upland oak stands 10 years after initial thinning

    Treesearch

    Martin E. Dale

    1972-01-01

    The purpose of this paper is to furnish part of the needed information, that is, quantitative estimates of growth and yield 10 years after initial thinning of upland oak stands. All estimates are computed from a system of equations. These predictions are presented here in tabular form for convenient visual inspection of growth and yield trends. The tables show growth...

  18. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
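
    A small Monte Carlo in the spirit of the case study: monoexponential (one-compartment IV bolus)
    kinetics, repeated dosing at an interval shorter than the half-life, and Gaussian error added to the
    reported dosing times before predicting sparse samples. All parameter values are illustrative
    assumptions, not those of the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        dose, V, k = 100.0, 50.0, 0.05        # mg, L, 1/h -> half-life ~ 13.9 h
        tau, n_doses = 12.0, 10               # dosing interval (h) shorter than the half-life
        true_times = np.arange(n_doses) * tau

        def conc(t, dose_times):
            """Superposition of one-compartment IV bolus doses."""
            dt = np.atleast_1d(t)[:, None] - dose_times[None, :]
            return (dose / V) * np.where(dt >= 0, np.exp(-k * dt), 0.0).sum(axis=1)

        sample_t = np.array([100.0, 110.0, 118.0])    # sparse sampling times (h)
        c_true = conc(sample_t, true_times)

        rel_err = []
        for _ in range(1000):                          # 1 h SD error in reported dose times
            reported = true_times + rng.normal(scale=1.0, size=n_doses)
            rel_err.append((conc(sample_t, reported) - c_true) / c_true)
        print("mean |relative error| in predicted concentrations:",
              np.round(np.mean(np.abs(rel_err)), 3))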

  19. Water resources management: Hydrologic characterization through hydrograph simulation may bias streamflow statistics

    NASA Astrophysics Data System (ADS)

    Farmer, W. H.; Kiang, J. E.

    2017-12-01

    The development, deployment and maintenance of water resources management infrastructure and practices rely on hydrologic characterization, which requires an understanding of local hydrology. With regards to streamflow, this understanding is typically quantified with statistics derived from long-term streamgage records. However, a fundamental problem is how to characterize local hydrology without the luxury of streamgage records, a problem that complicates water resources management at ungaged locations and for long-term future projections. This problem has typically been addressed through the development of point estimators, such as regression equations, to estimate particular statistics. Physically-based precipitation-runoff models, which are capable of producing simulated hydrographs, offer an alternative to point estimators. The advantage of simulated hydrographs is that they can be used to compute any number of streamflow statistics from a single source (the simulated hydrograph) rather than relying on a diverse set of point estimators. However, the use of simulated hydrographs introduces a degree of model uncertainty that is propagated through to estimated streamflow statistics and may have drastic effects on management decisions. We compare the accuracy and precision of streamflow statistics (e.g. the mean annual streamflow, the annual maximum streamflow exceeded in 10% of years, and the minimum seven-day average streamflow exceeded in 90% of years, among others) derived from point estimators (e.g. regressions, kriging, machine learning) to that of statistics derived from simulated hydrographs across the continental United States. Initial results suggest that the error introduced through hydrograph simulation may substantially bias the resulting hydrologic characterization.
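
    The statistics named above can be computed directly from any daily hydrograph, observed or
    simulated, which is what makes simulation-based characterization attractive; a sketch with a
    synthetic daily series (the flow values and record length are arbitrary):

        import numpy as np
        import pandas as pd

        # Synthetic 30-year daily "hydrograph"; a simulated or observed series would drop in here.
        idx = pd.date_range("1990-01-01", "2019-12-31", freq="D")
        q = pd.Series(np.exp(np.random.default_rng(5).normal(3.0, 0.8, len(idx))), index=idx)

        mean_annual = q.groupby(q.index.year).mean().mean()   # mean annual streamflow
        annual_max = q.groupby(q.index.year).max()
        qmax_10 = annual_max.quantile(0.90)                   # annual max exceeded in 10% of years
        seven_day_min = q.rolling(7).mean().groupby(q.index.year).min()
        q7_90 = seven_day_min.quantile(0.10)                  # 7-day min exceeded in 90% of years

        print(f"mean annual: {mean_annual:.1f}  max(10%): {qmax_10:.1f}  7-day min(90%): {q7_90:.1f}")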

  20. Evidence for extensive genetic diversity and substructuring of the Babesia bovis metapopulation.

    PubMed

    Flores, D A; Minichiello, Y; Araujo, F R; Shkap, V; Benítez, D; Echaide, I; Rolls, P; Mosqueda, J; Pacheco, G M; Petterson, M; Florin-Christensen, M; Schnittger, L

    2013-11-01

    Babesia bovis is a tick-transmitted haemoprotozoan and a causative agent of bovine babesiosis, a cattle disease that causes significant economic loss in tropical and subtropical regions. A panel of nineteen micro- and minisatellite markers was used to estimate population genetic parameters of eighteen parasite isolates originating from different continents, countries and geographic regions including North America (Mexico, USA), South America (Argentina, Brazil), the Middle East (Israel) and Australia. For eleven of the eighteen isolates, a unique haplotype was inferred suggesting selection of a single genotype by either in vitro cultivation or amplification in splenectomized calves. Furthermore, a high genetic diversity (H = 0.780) over all marker loci was estimated. Linkage disequilibrium was observed in the total study group but also in sample subgroups from the Americas, Brazil, and Israel and Australia. In contrast, corresponding to their more confined geographic origin, samples from Israel and Argentina were each found to be in equilibrium suggestive of random mating and frequent genetic exchange. The genetic differentiation (F(ST)) of the total study group over all nineteen loci was estimated by analysis of variance (Θ) and Nei's estimation of heterozygosity (G(ST')) as 0.296 and 0.312, respectively. Thus, about 30% of the genetic diversity of the parasite population is associated with genetic differences between parasite isolates sampled from the different geographic regions. The pairwise similarity of multilocus genotypes (MLGs) was assessed and a neighbour-joining dendrogram generated. MLGs were found to cluster according to the country/continent of origin of isolates, but did not distinguish the attenuated from the pathogenic parasite state. The distant geographic origin of the isolates studied allows an initial glimpse into the large extent of genetic diversity and differentiation of the B. bovis population on a global scale. © 2013 Blackwell Verlag GmbH.
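
    The diversity and differentiation statistics quoted above follow standard formulas; a toy sketch for
    a single locus across three subpopulations, assuming H = 1 - sum(p_i^2) per (sub)population and
    G_ST = (H_T - mean H_S) / H_T. The allele frequencies are invented, not the B. bovis data.

        import numpy as np

        # Invented allele frequencies at one marker locus; rows = subpopulations
        freqs = np.array([
            [0.70, 0.20, 0.10],
            [0.10, 0.60, 0.30],
            [0.30, 0.30, 0.40],
        ])

        h_s = 1.0 - np.sum(freqs ** 2, axis=1)         # expected heterozygosity per subpopulation
        h_t = 1.0 - np.sum(freqs.mean(axis=0) ** 2)    # total (pooled) heterozygosity
        g_st = (h_t - h_s.mean()) / h_t                # proportion of diversity among groups

        print(f"H_S = {np.round(h_s, 3)}, H_T = {h_t:.3f}, G_ST = {g_st:.3f}")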

  1. An open source framework for tracking and state estimation ('Stone Soup')

    NASA Astrophysics Data System (ADS)

    Thomas, Paul A.; Barr, Jordi; Balaji, Bhashyam; White, Kruger

    2017-05-01

    The ability to detect and unambiguously follow all moving entities in a state-space is important in multiple domains both in defence (e.g. air surveillance, maritime situational awareness, ground moving target indication) and the civil sphere (e.g. astronomy, biology, epidemiology, dispersion modelling). However, tracking and state estimation researchers and practitioners have difficulties recreating state-of-the-art algorithms in order to benchmark their own work. Furthermore, system developers need to assess which algorithms meet operational requirements objectively and exhaustively rather than intuitively or driven by personal favourites. We have therefore commenced the development of a collaborative initiative to create an open source framework for production, demonstration and evaluation of Tracking and State Estimation algorithms. The initiative will develop a (MIT-licensed) software platform for researchers and practitioners to test, verify and benchmark a variety of multi-sensor and multi-object state estimation algorithms. The initiative is supported by four defence laboratories, who will contribute to the development effort for the framework. The tracking and state estimation community will derive significant benefits from this work, including: access to repositories of verified and validated tracking and state estimation algorithms, a framework for the evaluation of multiple algorithms, standardisation of interfaces and access to challenging data sets.

  2. Geostatistical applications in ground-water modeling in south-central Kansas

    USGS Publications Warehouse

    Ma, T.-S.; Sophocleous, M.; Yu, Y.-S.

    1999-01-01

    This paper emphasizes the supportive role of geostatistics in applying ground-water models. Field data of 1994 ground-water level, bedrock, and saltwater-freshwater interface elevations in south-central Kansas were collected and analyzed using the geostatistical approach. Ordinary kriging was adopted to estimate initial conditions for ground-water levels and topography of the Permian bedrock at the nodes of a finite difference grid used in a three-dimensional numerical model. Cokriging was used to estimate initial conditions for the saltwater-freshwater interface. An assessment of uncertainties in the estimated data is presented. The kriged and cokriged estimation variances were analyzed to evaluate the adequacy of data employed in the modeling. Although water levels and bedrock elevations are well described by spherical semivariogram models, additional data are required for better cokriging estimation of the interface data. The geostatistically analyzed data were employed in a numerical model of the Siefkes site in the project area. Results indicate that the computed chloride concentrations and ground-water drawdowns reproduced the observed data satisfactorily.
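
    A compact ordinary-kriging sketch using the spherical semivariogram model the abstract reports for
    water levels and bedrock elevations. The coordinates, data values, and variogram parameters below
    are invented, and the cokriging of the interface is not shown.

        import numpy as np

        def spherical(h, nugget, sill, a):
            """Spherical semivariogram with range a; gamma(0) = 0 by definition."""
            h = np.asarray(h, dtype=float)
            g = nugget + (sill - nugget) * (1.5 * (h / a) - 0.5 * (h / a) ** 3)
            g = np.where(h >= a, sill, g)
            return np.where(h > 0, g, 0.0)

        def ordinary_krige(xy, z, x0, **vario):
            """Ordinary kriging estimate and variance at a single location x0."""
            n = len(z)
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = spherical(d, **vario)
            A[n, n] = 0.0
            b = np.ones(n + 1)
            b[:n] = spherical(np.linalg.norm(xy - x0, axis=1), **vario)
            sol = np.linalg.solve(A, b)          # kriging weights + Lagrange multiplier
            w, mu = sol[:n], sol[n]
            return w @ z, w @ b[:n] + mu         # estimate, kriging variance

        # Invented water-level observations (x, y in metres; z in metres above datum)
        xy = np.array([[0.0, 0.0], [400.0, 100.0], [150.0, 350.0], [600.0, 500.0]])
        z = np.array([402.1, 399.6, 401.0, 397.8])
        est, var = ordinary_krige(xy, z, np.array([300.0, 250.0]),
                                  nugget=0.1, sill=4.0, a=800.0)
        print(f"kriged head: {est:.2f} m   kriging variance: {var:.2f} m^2")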

  3. A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models

    NASA Astrophysics Data System (ADS)

    Keller, J. D.; Bach, L.; Hense, A.

    2012-12-01

    The estimation of fast-growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: ensemble Kalman filter, singular vectors and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation of an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast-growing error modes for mesoscale limited area models. The so-called self-breeding is a development based on the breeding of growing modes technique. Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates for the initial uncertainty structure (or local Lyapunov vectors) given a specific norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of errors (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
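
    A toy version of the breeding cycle on the Lorenz-63 system: a perturbed run is integrated alongside
    the control, and after each short cycle the difference is rescaled to the initial amplitude and
    re-added, so that the perturbation converges toward the fastest-growing (bred) direction. This stands
    in for the mesoscale model and is not the self-breeding implementation of the reanalysis project; all
    numerical settings are illustrative.

        import numpy as np

        def lorenz(x, s=10.0, r=28.0, b=8.0 / 3.0):
            return np.array([s * (x[1] - x[0]),
                             x[0] * (r - x[2]) - x[1],
                             x[0] * x[1] - b * x[2]])

        def rk4(x, dt):
            k1 = lorenz(x)
            k2 = lorenz(x + 0.5 * dt * k1)
            k3 = lorenz(x + 0.5 * dt * k2)
            k4 = lorenz(x + dt * k3)
            return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

        dt, steps_per_cycle, cycles, amp = 0.01, 50, 40, 1e-3
        ctrl = np.array([1.0, 1.0, 20.0])
        pert = amp * np.random.default_rng(4).normal(size=3)

        for _ in range(cycles):
            bred = ctrl + pert
            for _ in range(steps_per_cycle):
                ctrl = rk4(ctrl, dt)
                bred = rk4(bred, dt)
            diff = bred - ctrl
            growth = np.linalg.norm(diff) / amp         # amplification over the cycle
            pert = amp * diff / np.linalg.norm(diff)    # rescale and re-add next cycle

        print("bred-vector direction:", np.round(pert / amp, 3),
              " growth in last cycle:", round(growth, 2))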

  4. Dissolution of minerals with rough surfaces

    NASA Astrophysics Data System (ADS)

    de Assis, Thiago A.; Aarão Reis, Fábio D. A.

    2018-05-01

    We study dissolution of minerals with initial rough surfaces using kinetic Monte Carlo simulations and a scaling approach. We consider a simple cubic lattice structure, a thermally activated rate of detachment of a molecule (site), and rough surface configurations produced by fractional Brownian motion algorithm. First we revisit the problem of dissolution of initial flat surfaces, in which the dissolution rate r_F reaches an approximately constant value at short times and is controlled by detachment of step edge sites. For initial rough surfaces, the dissolution rate r at short times is much larger than r_F; after dissolution of some hundreds of molecular layers, r decreases by some orders of magnitude across several time decades. Meanwhile, the surface evolves through configurations of decreasing energy, beginning with dissolution of isolated sites, then formation of terraces with disordered boundaries, their growth, and final smoothing. A crossover time to a smooth configuration is defined when r = 1.5 r_F; the surface retreat at the crossover is approximately 3 times the initial roughness and is temperature-independent, while the crossover time is proportional to the initial roughness and is controlled by step-edge site detachment. The initial dissolution process is described by the so-called rough rates, which are measured for fixed ratios between the surface retreat and the initial roughness. The temperature dependence of the rough rates indicates control by kink site detachment; in general, it suggests that rough rates are controlled by the weakest microscopic bonds during the nucleation and formation of the lowest energy configurations of the crystalline surface. Our results are related to recent laboratory studies which show enhanced dissolution in polished calcite surfaces. In the application to calcite dissolution in alkaline environment, the minimal values of recently measured dissolution rate spectra give r_F ∼ 10^-9 mol/(m^2 s), and the calculated rate laws of our model give rough rates in the range 10^-6 to 10^-5 mol/(m^2 s). This estimate is consistent with the range of calcite dissolution rates obtained in a recent work after treatment of literature data, which suggests the universal control of kink site dissolution in short term laboratory works. The weak effects of lattice size on our results also suggest that smoothing of mineral grain surfaces across geological times may be a microscopic explanation for the difference of chemical weathering rate of silicate minerals in laboratory and in the environment.

  5. Assessing the cost of electronic health records: a review of cost indicators.

    PubMed

    Gallego, Ana Isabel; Gagnon, Marie-Pierre; Desmartis, Marie

    2010-11-01

    We systematically reviewed PubMed and EBSCO business, looking for cost indicators of electronic health record (EHR) implementations and their associated benefit indicators. We provide a set of the most common cost and benefit (CB) indicators used in the EHR literature, as well as an overall estimate of the CB related to EHR implementation. Overall, CB evaluation of EHR implementation showed a rapid capital-recovery process. On average, the annual benefits were 76.5% of the first-year costs and 308.6% of the annual costs. However, the initial investments were not recovered in a few of the studied implementations.

  6. Energy Savings Potential and Research, Development, & Demonstration Opportunities for Commercial Building Heating, Ventilation, and Air Conditioning Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    2011-09-01

    This report covers an assessment of 182 different heating, ventilation, and air-conditioning (HVAC) technologies for U.S. commercial buildings to identify and provide analysis on 17 priority technology options in various stages of development. The analyses include an estimation of technical energy-savings potential, description of technical maturity, description of non-energy benefits, description of current barriers to market adoption, and description of the technology’s applicability to different building or HVAC equipment types. From these technology descriptions, suggestions are drawn for potential research, development and demonstration (RD&D) initiatives that would support further development of the priority technology options.

  7. ETI, SETI and today's public opinion

    NASA Astrophysics Data System (ADS)

    Pinotti, Roberto

    During the last three decades the general public's initial opinions about ETI and SETI changed, turning ignorance, fear and superficiality into a gradual understanding of the importance of these concepts. After a brief analysis of this changing psycho-sociological attitude, the paper provides an "estimate of the situation" about general interest for ETI and SETI, suggesting a growing awareness in today's public opinion. Science fiction movies like Close Encounters of the Third Kind and E.T. the Extra-Terrestrial and popular interest in UFOs as visitors from outer space played a major role in the average man's acceptance of the reality of extra-terrestrial life and its meaning for mankind.

  8. Isotherm, kinetic, and thermodynamic study of ciprofloxacin sorption on sediments.

    PubMed

    Mutavdžić Pavlović, Dragana; Ćurković, Lidija; Grčić, Ivana; Šimić, Iva; Župan, Josip

    2017-04-01

    In this study, equilibrium isotherms, kinetics and thermodynamics of ciprofloxacin on seven sediments in a batch sorption process were examined. The effects of contact time, initial ciprofloxacin concentration, temperature and ionic strength on the sorption process were studied. The K_d parameter from the linear sorption model was determined by linear regression analysis, while the Freundlich and Dubinin-Radushkevich (D-R) sorption models were applied to describe the equilibrium isotherms by linear and nonlinear methods. The estimated K_d values varied from 171 to 37,347 mL/g. The obtained values of E (free energy estimated from the D-R isotherm model) were between 3.51 and 8.64 kJ/mol, which indicated a physical nature of ciprofloxacin sorption on the studied sediments. According to the obtained n values, a measure of sorption intensity estimated from the Freundlich isotherm model (from 0.69 to 1.442), ciprofloxacin sorption on sediments can be categorized as having poor to moderately difficult sorption characteristics. Kinetics data were best fitted by the pseudo-second-order model (R^2 > 0.999). Thermodynamic parameters including the Gibbs free energy (ΔG°), enthalpy (ΔH°) and entropy (ΔS°) were calculated to estimate the nature of ciprofloxacin sorption. Results suggested that sorption on sediments was a spontaneous exothermic process.
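
    A sketch of the two fits named above, on synthetic data: the Freundlich isotherm qe = Kf * Ce^(1/n)
    fitted nonlinearly, and the pseudo-second-order kinetic model fitted through its linear form
    t/qt = 1/(k2*qe^2) + t/qe. The concentrations and loadings are invented, not the sediment data.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import linregress

        # Freundlich isotherm (synthetic equilibrium data)
        ce = np.array([0.05, 0.10, 0.25, 0.50, 1.00, 2.00])    # mg/L
        qe = np.array([4.0, 7.1, 14.8, 25.0, 43.0, 74.0])      # mg/kg

        def freundlich(c, kf, n):
            return kf * c ** (1.0 / n)

        (kf, n), _ = curve_fit(freundlich, ce, qe, p0=(40.0, 1.0))
        print(f"Freundlich: Kf = {kf:.1f}, n = {n:.2f}")

        # Pseudo-second-order kinetics, linearized as t/qt versus t
        t = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])     # min
        qt = np.array([18.0, 28.0, 38.0, 45.0, 49.0, 51.0])    # mg/kg
        fit = linregress(t, t / qt)
        qe_fit = 1.0 / fit.slope
        k2 = fit.slope ** 2 / fit.intercept                    # intercept = 1/(k2*qe^2)
        print(f"pseudo-second-order: qe = {qe_fit:.1f} mg/kg, k2 = {k2:.4f} kg/(mg*min)")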

  9. New estimation method of neutron skyshine for a high-energy particle accelerator

    NASA Astrophysics Data System (ADS)

    Oh, Joo-Hee; Jung, Nam-Suk; Lee, Hee-Seock; Ko, Seung-Kook

    2016-09-01

    Skyshine is the dominant component of the prompt radiation dose at off-site locations. Several experimental studies have been done to estimate the neutron skyshine at a few accelerator facilities. In this work, neutron transport from the source to off-site locations was simulated using the Monte Carlo codes FLUKA and PHITS. The transport paths were classified as skyshine, direct (transport), groundshine and multiple-shine to understand the contribution of each path and to develop a general evaluation method. The effect of each path was estimated in terms of the dose at distant locations. The neutron dose was calculated using the neutron energy spectra obtained from each detector placed up to a maximum of 1 km from the accelerator. The highest altitude of the sky region in this simulation was set as 2 km from the floor of the accelerator facility. The initial model of this study was the 10 GeV electron accelerator, PAL-XFEL. Different compositions and densities of air, soil and ordinary concrete were applied in this calculation, and their dependences were reviewed. The estimation method used in this study was compared with the well-known methods suggested by Rindi, Stevenson and Stepleton, and also with the simple code, SHINE3. The results obtained using this method agreed well with those using Rindi's formula.

  10. Validation of daily increments periodicity in otoliths of spotted gar

    USGS Publications Warehouse

    Snow, Richard A.; Long, James M.; Frenette, Bryan D.

    2017-01-01

    Accurate age and growth information is essential for successful management of fish populations and for understanding early life history. We validated daily increment deposition, including the timing of first ring formation, for spotted gar (Lepisosteus oculatus) through 127 days post hatch. Fry were produced from hatchery-spawned specimens, and up to 10 individuals per week were sacrificed and their otoliths (sagitta, lapillus, and asteriscus) removed for daily age estimation. Daily age estimates for all three otolith pairs were significantly related to known age. The strongest relationships existed for measurements from the sagitta (r^2 = 0.98) and the lapillus (r^2 = 0.99), with the asteriscus (r^2 = 0.95) the lowest. All age prediction models resulted in a slope near unity, indicating that ring deposition occurred approximately daily. Initiation of ring formation varied among otolith types, with deposition beginning at 3, 7, and 9 days for the sagitta, lapillus, and asteriscus, respectively. Results of this study suggested that otoliths are useful for estimating the daily age of spotted gar juveniles; these data may be used to back-calculate hatch dates, estimate early growth rates, and correlate with environmental factors that influence spawning in wild populations. This early life history information will be valuable in better understanding the ecology of this species.

  11. Algorithms for Autonomous GPS Orbit Determination and Formation Flying: Investigation of Initialization Approaches and Orbit Determination for HEO

    NASA Technical Reports Server (NTRS)

    Axelrad, Penina; Speed, Eden; Leitner, Jesse A. (Technical Monitor)

    2002-01-01

    This report summarizes the efforts to date in processing GPS measurements in High Earth Orbit (HEO) applications by the Colorado Center for Astrodynamics Research (CCAR). Two specific projects were conducted: initialization of the orbit propagation software, GEODE, using nominal orbital elements for the IMEX orbit, and processing of actual and simulated GPS data from the AMSAT satellite using a Doppler-only batch filter. CCAR has investigated a number of approaches for initialization of the GEODE orbit estimator with little a priori information. This document describes a batch solution approach that uses pseudorange or Doppler measurements collected over an orbital arc to compute an epoch state estimate. The algorithm is based on limited orbital element knowledge from which a coarse estimate of satellite position and velocity can be determined and used to initialize GEODE. This algorithm assumes knowledge of the nominal orbital elements (a, e, i, Ω, ω) and uses a search on time of perigee passage (τ_p) to estimate the host satellite position within the orbit and the approximate receiver clock bias. Results of the method are shown for a simulation including large orbital uncertainties and measurement errors. In addition, CCAR has attempted to process GPS data from the AMSAT satellite to obtain an initial estimate of the orbit. Limited GPS data have been received to date, with few satellites tracked and no computed point solutions. Unknown variables in the received data have made computations of a precise orbit using the recovered pseudorange difficult. This document describes the Doppler-only batch approach used to compute the AMSAT orbit. Both actual flight data from AMSAT and simulated data generated using the Satellite Tool Kit and Goddard Space Flight Center's Flight Simulator were processed. Results for each case and conclusions are presented.

  12. Global burden of maternal and congenital syphilis in 2008 and 2012: a health systems modelling study.

    PubMed

    Wijesooriya, N Saman; Rochat, Roger W; Kamb, Mary L; Turlapati, Prasad; Temmerman, Marleen; Broutet, Nathalie; Newman, Lori M

    2016-08-01

    In 2007, WHO launched a global initiative for the elimination of mother-to-child transmission of syphilis (congenital syphilis). An important aspect of the initiative is strengthening surveillance to monitor progress towards elimination. In 2008, using a health systems model with country data inputs, WHO estimated that 1·4 million maternal syphilis infections caused 520 000 adverse pregnancy outcomes. To assess progress, we updated the 2008 estimates and estimated the 2012 global prevalence and cases of maternal and congenital syphilis. We used a health systems model approved by the Child Health Epidemiology Reference Group. WHO and UN databases provided inputs on livebirths, antenatal care coverage, and syphilis testing, seropositivity, and treatment in antenatal care. For 2012 estimates, we used data collected between 2009 and 2012. We updated the 2008 estimates using data collected between 2000 and 2008, compared these with 2012 estimates using data collected between 2009 and 2012, and performed subanalyses to validate results. In 2012, an estimated 930 000 maternal syphilis infections caused 350 000 adverse pregnancy outcomes including 143 000 early fetal deaths and stillbirths, 62 000 neonatal deaths, 44 000 preterm or low weight births, and 102 000 infected infants worldwide. Nearly 80% of adverse outcomes (274 000) occurred in women who received antenatal care at least once. Comparing the updated 2008 estimates with the 2012 estimates, maternal syphilis decreased by 38% (from 1 488 394 cases in 2008 to 927 936 cases in 2012) and congenital syphilis decreased by 39% (from 576 784 to 350 915). India represented 65% of the decrease. Analysis excluding India still showed an 18% decrease in maternal and congenital cases of syphilis worldwide. Maternal and congenital syphilis decreased worldwide from 2008 to 2012, which suggests progress towards the elimination of mother-to-child transmission of syphilis. Nonetheless, maternal syphilis caused substantial adverse pregnancy outcomes, even in women receiving antenatal care. Improved access to quality antenatal care, including syphilis testing and treatment, and robust data are all important for achieving the elimination of mother-to-child transmission of syphilis. The UNDP-UNFPA-UNICEF-WHO-World Bank Special Programme of Research, Development and Research Training in Human Reproduction in WHO, and the US Centers for Disease Control and Prevention. Copyright © 2016 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY-NC-ND license. Published by Elsevier Ltd.. All rights reserved.

  13. The study on biomass fraction estimate methodology of municipal solid waste incinerator in Korea.

    PubMed

    Kang, Seongmin; Kim, Seungjin; Lee, Jeongwoo; Yun, Hyunki; Kim, Ki-Hyun; Jeon, Eui-Chan

    2016-10-01

    In Korea, the amount of greenhouse gases released due to waste materials was 14,800,000 t CO2eq in 2012, which increased from 5,000,000 t CO2eq in 2010. This included the amount released due to incineration, which has gradually increased since 2010. Incineration was found to be the biggest contributor to greenhouse gases, with 7,400,000 t CO2eq released in 2012. Therefore, with regard to the greenhouse gas emissions trading initiated in 2015 and the writing of the national inventory report, it is important to increase the reliability of the measurements related to the incineration of waste materials. This research explored methods for estimating the biomass fraction at Korean MSW incinerator facilities and compared the biomass fractions obtained with the different biomass fraction estimation methods. The biomass fraction was estimated by the method using default values of fossil carbon fraction suggested by the IPCC, the method using the solid waste composition, and the method using incinerator flue gas. The highest biomass fractions in Korean municipal solid waste incinerator facilities were estimated by the IPCC Default method, followed by the MSW analysis method and the Flue gas analysis method. Therefore, the difference in the biomass fraction estimate was greatest between the IPCC Default and the Flue gas analysis methods. The difference between the MSW analysis and the flue gas analysis methods was smaller than the difference with the IPCC Default method. This suggests that the IPCC default method cannot reflect the characteristics of Korean waste incinerator facilities and Korean MSW. Incineration is one of the most effective methods for disposal of municipal solid waste (MSW). This paper investigates the applicability of using biomass content to estimate the amount of CO2 released, and compares the biomass contents determined by different methods in order to establish a method for estimating biomass in the MSW incinerator facilities of Korea. After analyzing the biomass contents of the collected solid waste samples and the flue gas samples, the results were compared with the Intergovernmental Panel on Climate Change (IPCC) method, and the comparison suggests that the flue gas analysis method is better suited than the IPCC method for calculating the biomass fraction. These findings are valuable for the design and operation of new incineration power plants, particularly for estimating their greenhouse gas emissions.
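
    A minimal sketch of the waste-composition (MSW analysis) route to a biomass fraction: the overall biomass share of carbon is a weighted average over waste components. The component list, wet-weight shares, carbon contents and fossil-carbon fractions below are illustrative placeholders, not the Korean survey data or the IPCC default values.

    components = {
        #            share  carbon_fraction  fossil_fraction_of_carbon
        "paper":    (0.30,  0.40,            0.01),
        "food":     (0.25,  0.15,            0.00),
        "plastics": (0.20,  0.75,            1.00),
        "textiles": (0.10,  0.40,            0.20),
        "other":    (0.15,  0.03,            0.90),
    }

    total_c = sum(s * c for s, c, _ in components.values())
    fossil_c = sum(s * c * f for s, c, f in components.values())
    biomass_fraction = 1.0 - fossil_c / total_c
    print(round(biomass_fraction, 3))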

  14. Manual Optical Attitude Re-initialization of a Crew Vehicle in Space Using Bias Corrected Gyro Data

    NASA Astrophysics Data System (ADS)

    Gioia, Christopher J.

    NASA and other space agencies have shown interest in sending humans on missions beyond low Earth orbit. Proposed is an algorithm that estimates the attitude of a manned spacecraft using measured line-of-sight (LOS) vectors to stars and gyroscope measurements. The Manual Optical Attitude Reinitialization (MOAR) algorithm and corresponding device draw inspiration from existing technology from the Gemini, Apollo and Space Shuttle programs. The improvement over these devices is the capability of estimating gyro bias completely independent from re-initializing attitude. It may be applied to the lost-in-space problem, where the spacecraft's attitude is unknown. In this work, a model was constructed that simulated gyro data using the Farrenkopf gyro model, and LOS measurements from a spotting scope were then computed from it. Using these simulated measurements, gyro bias was estimated by comparing measured interior star angles to those derived from a star catalog and then minimizing the difference using an optimization technique. Several optimization techniques were analyzed, and it was determined that the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm performed the best when combined with a grid search technique. Once estimated, the gyro bias was removed and attitude was determined by solving the Wahba Problem via the Singular Value Decomposition (SVD) approach. Several Monte Carlo simulations were performed that looked at different operating conditions for the MOAR algorithm. These included the effects of bias instability, using different constellations for data collection, sampling star measurements in different orders, and varying the time between measurements. A common method of estimating gyro bias and attitude in a Multiplicative Extended Kalman Filter (MEKF) was also explored and disproven for use in the MOAR algorithm. A prototype was also constructed to validate the proposed concepts. It was built using a simple spotting scope, MEMS grade IMU, and a Raspberry Pi computer. It was mounted on a tripod, used to target stars with the scope and measure the rotation between them using the IMU. The raw measurements were then post-processed using the MOAR algorithm, and attitude estimates were determined. Two different constellations---the Big Dipper and Orion---were used for experimental data collection. The results suggest that the novel method of estimating gyro bias independently from attitude in this document is credible for use onboard a spacecraft.
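
    A minimal sketch of the attitude-determination step named above, solving the Wahba problem with the SVD method; the vector and weight names are illustrative, and this is not the MOAR prototype code.

    import numpy as np

    def wahba_svd(body_vecs, ref_vecs, weights=None):
        # Return the rotation matrix A minimizing sum_i w_i * ||b_i - A r_i||^2,
        # where b_i are measured unit line-of-sight vectors in the body frame and
        # r_i are the corresponding star-catalog vectors in the inertial frame.
        b = np.asarray(body_vecs, dtype=float)
        r = np.asarray(ref_vecs, dtype=float)
        w = np.ones(len(b)) if weights is None else np.asarray(weights, dtype=float)
        B = (w[:, None] * b).T @ r                 # attitude profile matrix
        U, _, Vt = np.linalg.svd(B)
        d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
        return U @ np.diag([1.0, 1.0, d]) @ Vt     # proper rotation (det = +1)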

  15. Dosing of selective serotonin reuptake inhibitors in children and adults before and after the FDA black-box warning

    PubMed Central

    Bushnell, Greta A; Stürmer, Til; Swanson, Sonja A; White, Alice; Azrael, Deborah; Pate, Virginia; Miller, Matthew

    2016-01-01

    Objective Prior research evaluated various effects of the antidepressant black-box warning on the risk of suicidality in children, but the dosing of antidepressants has not been considered. This study estimated, relative to the FDA warnings, whether the initial antidepressant dose prescribed decreased and the proportion augmenting dose on the second fill increased. Method The study utilized the LifeLink Health Plan Claims Database. The study cohort consisted of commercially insured children (5–17 years), young adults (18–24 years), and adults (25–64 years) initiating an SSRI (citalopram, fluoxetine, paroxetine, or sertraline) from 1/1/2000 to 12/31/2009. Dose-per-day was determined by days supply, strength, and quantity dispensed. Initiation on low dose, defined based on guidelines, and dose augmentations (dose increase >1mg/day) on the second prescription were considered across time periods related to the antidepressant warnings. Results Of 51,948 children who initiated an SSRI, 15% initiated on low dose in the period before the 2004 black-box warning and 31% in the period after the warning (a 16 percentage-point change); there was a smaller percentage-point change in young adults (6%) and adults (3%). The overall increase in dose augmentations in children and young adults was driven by the increase in patients initiating on a low dose. Conclusions As guidelines recommend children initiate antidepressant treatment on low dose, findings that an increased proportion of commercially insured children initiated an SSRI on low dose after the 2004 black-box warning suggest prescribing practices surrounding SSRI dosing improved in children following the warning but dosing practices still fall short of guidelines. PMID:26567938
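
    A minimal sketch of the dosing variables described above, assuming pharmacy-claims fields for strength, quantity and days supply; the low-dose thresholds are illustrative placeholders, not the study's guideline-based definitions.

    LOW_DOSE_MG = {"citalopram": 10, "fluoxetine": 10, "paroxetine": 10, "sertraline": 25}  # assumed thresholds

    def dose_per_day(strength_mg, quantity, days_supply):
        return strength_mg * quantity / days_supply

    def initiated_on_low_dose(drug, strength_mg, quantity, days_supply):
        return dose_per_day(strength_mg, quantity, days_supply) <= LOW_DOSE_MG[drug]

    def dose_augmented(first_dose_mg_day, second_dose_mg_day):
        return (second_dose_mg_day - first_dose_mg_day) > 1.0   # >1 mg/day increase on second fill

    print(initiated_on_low_dose("sertraline", 25, 30, 30))      # 25 mg/day -> True
    print(dose_augmented(25.0, 50.0))                            # True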

  16. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    NASA Astrophysics Data System (ADS)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory, in combination with near-shore synthetic waveforms, is a promising tool for rapid tsunami early warning systems. Its application in realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplistic bathymetry domains that resemble realistic near-shore features. We investigate the sensitivity of the analytical runup formulae to variations in fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use these initial conditions to run the numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Varying the dip angle of the fault plane showed that the analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates is a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.

  17. Applying constraints on model-based methods: Estimation of rate constants in a second order consecutive reaction

    NASA Astrophysics Data System (ADS)

    Kompany-Zareh, Mohsen; Khoshkam, Maryam

    2013-02-01

    This paper describes the estimation of reaction rate constants and pure ultraviolet/visible (UV-vis) spectra of the components involved in a second-order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), with one intermediate. In the described system, o-ABA does not absorb in the visible region of interest, so the closure rank deficiency problem did not arise. Concentration profiles were determined by solving the differential equations of the corresponding kinetic model. Three types of model-based procedures were applied to estimate the rate constants of the kinetic system using the Newton-Gauss-Levenberg/Marquardt (NGL/M) algorithm. Original data-based, score-based, and concentration-based objective functions were included in these nonlinear fitting procedures. Results showed that when there is error in the initial concentrations, the accuracy of the estimated rate constants strongly depends on the type of objective function applied in the fitting procedure. Moreover, the flexibility of applying different constraints and of optimizing the initial concentration estimates during the fitting procedure was investigated. Results showed a considerable decrease in the ambiguity of the obtained parameters when appropriate constraints and adjustable initial reagent concentrations were applied.
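
    A minimal sketch of the concentration-based objective: integrate an A + B -> I -> P kinetic model (second order, then first order) and fit k1 and k2 by nonlinear least squares. The rate constants, initial concentrations and the synthetic "measured" intermediate profile are illustrative, and this is not the authors' NGL/M code.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def rhs(t, c, k1, k2):
        A, B, I, P = c
        r1 = k1 * A * B          # second-order step A + B -> I
        r2 = k2 * I              # first-order decay I -> P
        return [-r1, -r1, r1 - r2, r2]

    def concentrations(k, t, c0):
        sol = solve_ivp(rhs, (t[0], t[-1]), c0, t_eval=t, args=tuple(k), rtol=1e-8)
        return sol.y.T            # shape (len(t), 4)

    def residuals(k, t, c0, observed_I):
        # concentration-based objective: compare only the measured intermediate profile
        return concentrations(k, t, c0)[:, 2] - observed_I

    t = np.linspace(0, 60, 121)
    c0 = [1e-3, 1.2e-3, 0.0, 0.0]                       # assumed initial concentrations (M)
    true_k = (250.0, 0.08)                              # assumed "true" rate constants
    observed = concentrations(true_k, t, c0)[:, 2]
    observed += np.random.default_rng(0).normal(0, 1e-6, observed.size)  # measurement noise

    fit = least_squares(residuals, x0=[100.0, 0.02], args=(t, c0, observed))
    print("estimated k1, k2:", fit.x)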

  18. Financial planning for major initiatives: a framework for success.

    PubMed

    Harris, John M

    2007-11-01

    A solid framework for assessing a major strategic initiative consists of four broad steps: (1) initial considerations, including the level of analysis required and the resources that will be brought to bear; (2) preliminary financial estimates for board approval to further assess the initiative; (3) assessment of potential partners' interest in the project; and (4) feasibility analysis for the board's green light.

  19. Flight data identification of six degree-of-freedom stability and control derivatives of a large crane type helicopter

    NASA Technical Reports Server (NTRS)

    Tomaine, R. L.

    1976-01-01

    Flight test data from a large 'crane' type helicopter were collected and processed for the purpose of identifying vehicle rigid body stability and control derivatives. The process consisted of using digital and Kalman filtering techniques for state estimation and Extended Kalman filtering for parameter identification, utilizing a least squares algorithm for initial derivative and variance estimates. Data were processed for indicated airspeeds from 0 m/sec to 152 m/sec. Pulse, doublet and step control inputs were investigated. Digital filter frequency did not have a major effect on the identification process, while the initial derivative estimates and the estimated variances had an appreciable effect on many derivative estimates. The major derivatives identified agreed fairly well with analytical predictions and engineering experience. Doublet control inputs provided better results than pulse or step inputs.
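
    A minimal sketch of the least-squares idea used for the initial derivative estimates, under the assumption of a linear rigid-body model xdot = A x + B u whose entries are the stability and control derivatives; the signal handling and simple finite-difference derivative are illustrative, not the paper's Kalman-filter processing chain.

    import numpy as np

    def estimate_derivatives(x, u, dt):
        # x: (N, n_states) state time history; u: (N, n_controls) control time history,
        # both sampled at interval dt. Returns least-squares estimates of A and B.
        xdot = np.gradient(x, dt, axis=0)          # numerical time derivative of the states
        Z = np.hstack([x, u])                      # regressor matrix [x u]
        theta, *_ = np.linalg.lstsq(Z, xdot, rcond=None)
        n = x.shape[1]
        A_hat = theta[:n, :].T                     # stability derivatives
        B_hat = theta[n:, :].T                     # control derivatives
        return A_hat, B_hat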

  20. Quantum Parameter Estimation: From Experimental Design to Constructive Algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Le; Chen, Xi; Zhang, Ming; Dai, Hong-Yi

    2017-11-01

    In this paper we design the following two-step scheme to estimate the model parameter ω_0 of the quantum system: first, we utilize the Fisher information with respect to an intermediate variable v = cos(ω_0 t) to determine an optimal initial state and to seek optimal parameters of the POVM measurement operators; second, we explore how to estimate ω_0 from v by choosing t when a priori knowledge of ω_0 is available. Our optimal initial state can achieve the maximum quantum Fisher information. The formulation of the optimal time t is obtained and the complete algorithm for parameter estimation is presented. We further explore how the lower bound of the estimation deviation depends on the a priori information of the model. Supported by the National Natural Science Foundation of China under Grant Nos. 61273202, 61673389, and 61134008.
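
    A minimal sketch of the second step: once v = cos(ω_0 t) has been estimated with some uncertainty, the uncertainty in ω_0 follows from error propagation, σ_ω = σ_v / (t |sin(ω_0 t)|), which is why the choice of t matters. The numbers below are illustrative assumptions, not results from the paper.

    import numpy as np

    def omega_from_v(v, t):
        return np.arccos(v) / t                   # valid for omega_0 * t in (0, pi)

    def sigma_omega(v_est, sigma_v, t):
        omega = omega_from_v(v_est, t)
        return sigma_v / (t * abs(np.sin(omega * t)))

    for t in (0.5, 1.0, 2.0):                     # candidate measurement times
        print(t, omega_from_v(0.3, t), sigma_omega(0.3, 0.01, t))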

  1. Chondrule magnetic properties

    NASA Technical Reports Server (NTRS)

    Wasilewski, P. J.; Obryan, M. V.

    1994-01-01

    The topics discussed include the following: chondrule magnetic properties; chondrules from the same meteorite; and REM values (the ratio of the remanence initially measured to the saturation remanence acquired in a 1 Tesla field). The preliminary field estimates for chondrule magnetizing environments range from minimal to at least several mT. These estimates are based on REM values and on comparison of the thermal demagnetization characteristics of the initially measured (natural) remanence with those of the saturation remanence acquired in a 1 Tesla field.

  2. Changes in the retreatment radiation tolerance of the spinal cord with time after the initial treatment.

    PubMed

    Woolley, Thomas E; Belmonte-Beitia, Juan; Calvo, Gabriel F; Hopewell, John W; Gaffney, Eamonn A; Jones, Bleddyn

    2018-06-01

    To estimate, from experimental data, the retreatment radiation 'tolerances' of the spinal cord at different times after initial treatment. A model was developed to show the relationship between the biological effective doses (BEDs) for two separate courses of treatment, with the BED of each course expressed as a percentage of the designated 'retreatment tolerance' BED value, denoted [Formula: see text] and [Formula: see text]. The primate data of Ang et al. (2001) were used to determine the fitted parameters. However, based on rodent data, recovery was assumed to commence 70 days after the first course was complete, with a non-linear relationship to the magnitude of the initial BED (BED_init). The model, taking into account the above processes, provides estimates of the retreatment tolerance dose after different times. Extrapolations from the experimental data can provide conservative estimates for the clinic, with a lower acceptable myelopathy incidence. Care must be taken to convert the predicted [Formula: see text] value into a formal BED value and then a practical dose fractionation schedule. Used with caution, the proposed model allows estimation of retreatment doses with elapsed times ranging from 70 days up to three years after the initial course of treatment.
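
    A minimal sketch of the standard linear-quadratic biologically effective dose calculation that underlies retreatment-tolerance models of this kind, BED = n d (1 + d / (α/β)); the α/β ratio and fractionation schedules below are illustrative values, not the paper's fitted parameters.

    def bed(n_fractions, dose_per_fraction, alpha_beta=2.0):
        # BED in Gy for n fractions of d Gy each; alpha/beta in Gy (low values are
        # conventionally used for late-responding tissue such as spinal cord).
        return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

    initial_course = bed(25, 2.0)     # e.g. 50 Gy in 2-Gy fractions
    retreat_course = bed(10, 3.0)     # e.g. 30 Gy in 3-Gy fractions
    print(initial_course, retreat_course, initial_course + retreat_course)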

  3. ASW Fusion on a PC

    DTIC Science & Technology

    2004-06-01

    [Only table-of-contents and notation fragments were extracted from this report; recoverable section headings include "AOU Generation", "Estimating Initial Target Location", and "Estimating an AOU", the latter using estimated mean velocities in the east-west (µ_lon) and north-south (µ_lat) directions.]

  4. Numerical trials of HISSE

    NASA Technical Reports Server (NTRS)

    Peters, C.; Kampe, F. (Principal Investigator)

    1980-01-01

    The mathematical description and implementation of the statistical estimation procedure known as the Houston integrated spatial/spectral estimator (HISSE) is discussed. HISSE is based on a normal mixture model and is designed to take advantage of spectral and spatial information of LANDSAT data pixels, utilizing the initial classification and clustering information provided by the AMOEBA algorithm. The HISSE calculates parametric estimates of class proportions which reduce the error inherent in estimates derived from typical classify and count procedures common to nonparametric clustering algorithms. It also singles out spatial groupings of pixels which are most suitable for labeling classes. These calculations are designed to aid the analyst/interpreter in labeling patches with a crop class label. Finally, HISSE's initial performance on an actual LANDSAT agricultural ground truth data set is reported.
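
    A minimal sketch contrasting classify-and-count proportions with the parametric, mixture-weight proportion estimates that motivate HISSE, using a two-component Gaussian mixture on synthetic data; this is not the HISSE implementation.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    # two overlapping synthetic spectral classes, true proportions 0.3 / 0.7
    X = np.vstack([rng.normal(0.0, 1.0, (300, 2)), rng.normal(1.5, 1.0, (700, 2))])

    gm = GaussianMixture(n_components=2, random_state=0).fit(X)
    classify_and_count = np.bincount(gm.predict(X)) / len(X)   # hard-assignment proportions
    mixture_proportions = gm.weights_                           # parametric estimate
    print(classify_and_count, mixture_proportions)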

  5. Topography of closed depressions, scarps, and grabens in the North Tharsis region of Mars: implications for shallow crustal discontinuities and graben formation

    USGS Publications Warehouse

    Davis, Philip A.; Tanaka, Kenneth L.; Golombek, Matthew P.

    1995-01-01

    Using Viking Orbiter images, detailed photoclinometric profiles were obtained across 10 irregular depressions, 32 fretted fractures, 49 troughs and pits, 124 solitary scarps, and 370 simple grabens in the north Tharsis region of Mars. These data allow inferences to be made on the shallow crustal structure of this region. The frequency modes of measured scarp heights correspond with previous general thickness estimates of the heavily cratered and ridged plains units. The depths of the flat-floored irregular depressions (55-175 m), fretted fractures (85-890 m), and troughs and pits (60-1620 m) are also similar to scarp heights (thicknesses) of the geologic units in which these depressions occur, which suggests that the depths of these flat-floored features were controlled by erosional base levels created by lithologic contacts. Although the features have a similar age, both their depths and their observed local structural control increase in the order listed above, which suggests that the more advanced stages of associated fracturing facilitated the development of these depressions by increasing permeability. If a ground-ice zone is a factor in development of these features, as has been suggested, our observation that the depths of these features decrease with increasing latitude suggests that either the thickness of the ground-ice zone does not increase poleward or the depths of the depressions were controlled by the top of the ground-ice zone whose depth may decrease with latitude. Deeper discontinuities are inferred from fault-intersection depths of 370 simple grabens (assuming 60° dipping faults that initiate at a mechanical discontinuity) in Tempe Terra and Alba Patera and from the depths of the large, flat-floored troughs in Tempe Terra. The frequency distributions of these fault-intersection and large trough depths show a concentration at 1.0-1.6 km depth, similar to data obtained for Syria, Sinai, and Lunae Plana. The consistency of these depth data over such a large region of western Mars suggests that a discontinuity or a process that transcends local and regional geology is responsible for the formation of these features. If this discontinuity is represented by the base of the cryosphere, its uniform depth over 55° of latitude suggests that the cryosphere did not thicken poleward. Alternatively, the concentration of depths at 1.0-1.6 km may represent the upper level of noneruptive dike ascent (lateral dike propagation) of Mars, which is controlled by gravity and atmospheric pressure and magma and country-rock characteristics, and was probably controlled, in part, by ground ice. Fault-intersection depths in the north Tharsis region locally extend down to a depth of 5-7 km. The depth data between 2 and 3 km are attributed to the discontinuity at the interface of megaregolith and basement or to the upper limit of noneruptive dike ascent of magma with a high volatile content. Intersection depths greater than 3 km, which were found at Alba Patera, may be due to the megaregolith-basement discontinuity, which was buried and depressed by volcanic loading, or to the upper level of noneruptive dike ascent of magma with a low volatile content. The near absence of narrow simple grabens with fault-initiation depths less than 0.6-1.0 km in this study area, as well as in most of western Mars, suggests that this depth represents the minimum depth that normal faults can initiate; at shallower depths tension cracks or joints would form instead. 
This hypothesis is supported by the application of the Griffith failure criterion to this minimum depth of normal fault initiation, which suggests that shallow crustal materials have a tensile strength of 2-4 MPa throughout most of western Mars, in close agreement with previous estimates of tensile strength of martian basaltic rock.

  6. Lidar and Hyperspectral Remote Sensing for the Analysis of Coniferous Biomass Stocks and Fluxes

    NASA Astrophysics Data System (ADS)

    Halligan, K. Q.; Roberts, D. A.

    2006-12-01

    Airborne lidar and hyperspectral data can improve estimates of aboveground carbon stocks and fluxes through their complementary responses to vegetation structure and biochemistry. While strong relationships have been demonstrated between lidar-estimated vegetation structural parameters and field data, research is needed to explore the portability of these methods across a range of topographic conditions, disturbance histories, vegetation types and climates. Additionally, research is needed to evaluate contributions of hyperspectral data in refining biomass estimates and determination of fluxes. To address these questions we are conducting a study of lidar and hyperspectral remote sensing data across sites including coniferous forests, broadleaf deciduous forests and a tropical rainforest. Here we focus on a single study site, Yellowstone National Park, where tree heights, stem locations, aboveground biomass and basal area were mapped using first-return small-footprint lidar data. A new method using lidar intensity data was developed for separating the terrain and vegetation components in lidar data using a two-scale iterative local minima filter. Resulting Digital Terrain Models (DTM) and Digital Canopy Models (DCM) were then processed to retrieve a diversity of vertical and horizontal structure metrics. Univariate linear models were used to estimate individual tree heights while stepwise linear regression was used to estimate aboveground biomass and basal area. Three small-area field datasets were compared for their utility in model building and validation of vegetation structure parameters. All structural parameters were linearly correlated with lidar-derived metrics, with higher accuracies obtained where field and imagery data were precisely collocated. Initial analysis of hyperspectral data suggests that vegetation health metrics including measures of live and dead vegetation and stress indices may provide good indicators of carbon flux by mapping vegetation vigor or senescence. Additionally, the strength of hyperspectral data for vegetation classification suggests these data have additional utility for modeling carbon flux dynamics by allowing more accurate plant functional type mapping.
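
    A much-simplified sketch of a two-scale local-minima ground filter of the general kind described above, separating a terrain surface (DTM) from canopy heights (DCM); the window sizes, acceptance threshold and gridded-height input are illustrative assumptions, not the authors' intensity-based method.

    import numpy as np
    from scipy.ndimage import minimum_filter

    def simple_dtm_dcm(height_grid, coarse=15, fine=5, tol=0.5):
        # height_grid: 2-D array of gridded first-return heights (metres, no gaps).
        ground = minimum_filter(height_grid, size=coarse)       # coarse terrain estimate
        refined = minimum_filter(height_grid, size=fine)        # fine-scale minima
        # accept fine-scale minima only where they stay close to the coarse surface
        dtm = np.where(refined - ground < tol, refined, ground)
        dcm = height_grid - dtm                                  # canopy height model
        return dtm, dcm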

  7. Embargo on Lion Hunting Trophies from West Africa: An Effective Measure or a Threat to Lion Conservation?

    PubMed Central

    Bouché, Philippe; Crosmary, William; Kafando, Pierre; Doamba, Benoit; Kidjo, Ferdinand Claude; Vermeulen, Cédric; Chardonnet, Philippe

    2016-01-01

    The W-Arly-Pendjari (WAP) ecosystem, shared among Benin, Burkina Faso and Niger, represents the last lion stronghold of West Africa. To assess the impact of trophy hunting on lion populations in hunting areas of the WAP, we analyzed trends in harvest rates from 1999 to 2014. We also investigated whether the hunting areas with higher initial hunting intensity experienced steeper declines in lion harvest between 1999 and 2014, and whether lion densities in hunting areas were lower than in national parks. Lion harvest rate remained overall constant in the WAP. At initial hunting intensities below 1.5 lions/1000km2, most hunting areas experienced an increase in lion harvest rate, although that increase was of lower magnitude for hunting areas with higher initial hunting intensity. The proportion of hunting areas that experienced a decline in lion harvest rate increased at initial hunting intensities above 1.5 lions/1000km2. In 2014, the lion population of the WAP was estimated with a spoor count at 418 (230–648) adults and sub-adult individuals, comparable to the 311 (123–498) individuals estimated in the previous 2012 spoor survey. We found no significant lion spoor density differences between national parks and hunting areas. Hunting areas with higher mean harvest rates did not have lower lion densities. The ratio of large adult males, females and sub-adults was similar between the national parks and the hunting areas. These results suggested that the lion population was not significantly affected by hunting in the WAP. We concluded that a quota of 1 lion/1000km2 would be sustainable for the WAP. Based on our results, an import embargo on lion trophies from the WAP would not be justified. It could ruin the incentive of local actors to conserve lions in hunting areas, and lead to a drastic reduction of lion range in West Africa. PMID:27182985

  8. Embargo on Lion Hunting Trophies from West Africa: An Effective Measure or a Threat to Lion Conservation?

    PubMed

    Bouché, Philippe; Crosmary, William; Kafando, Pierre; Doamba, Benoit; Kidjo, Ferdinand Claude; Vermeulen, Cédric; Chardonnet, Philippe

    2016-01-01

    The W-Arly-Pendjari (WAP) ecosystem, shared among Benin, Burkina Faso and Niger, represents the last lion stronghold of West Africa. To assess the impact of trophy hunting on lion populations in hunting areas of the WAP, we analyzed trends in harvest rates from 1999 to 2014. We also investigated whether the hunting areas with higher initial hunting intensity experienced steeper declines in lion harvest between 1999 and 2014, and whether lion densities in hunting areas were lower than in national parks. Lion harvest rate remained overall constant in the WAP. At initial hunting intensities below 1.5 lions/1000km2, most hunting areas experienced an increase in lion harvest rate, although that increase was of lower magnitude for hunting areas with higher initial hunting intensity. The proportion of hunting areas that experienced a decline in lion harvest rate increased at initial hunting intensities above 1.5 lions/1000km2. In 2014, the lion population of the WAP was estimated with a spoor count at 418 (230-648) adults and sub-adult individuals, comparable to the 311 (123-498) individuals estimated in the previous 2012 spoor survey. We found no significant lion spoor density differences between national parks and hunting areas. Hunting areas with higher mean harvest rates did not have lower lion densities. The ratio of large adult males, females and sub-adults was similar between the national parks and the hunting areas. These results suggested that the lion population was not significantly affected by hunting in the WAP. We concluded that a quota of 1 lion/1000km2 would be sustainable for the WAP. Based on our results, an import embargo on lion trophies from the WAP would not be justified. It could ruin the incentive of local actors to conserve lions in hunting areas, and lead to a drastic reduction of lion range in West Africa.

  9. Sensitivity of physical parameterizations on prediction of tropical cyclone Nargis over the Bay of Bengal using WRF model

    NASA Astrophysics Data System (ADS)

    Raju, P. V. S.; Potty, Jayaraman; Mohanty, U. C.

    2011-09-01

    Comprehensive sensitivity analyses of the physical parameterization schemes of the Weather Research and Forecasting (WRF-ARW core) model have been carried out for the prediction of track and intensity of tropical cyclones, taking the example of cyclone Nargis, which formed over the Bay of Bengal and hit Myanmar on 02 May 2008, causing widespread human and economic losses. The model performance is also evaluated with different initial conditions at 12 h intervals, starting from cyclogenesis to near the landfall time. The initial and boundary conditions for all the model simulations are drawn from the global operational analysis and forecast products of the National Centers for Environmental Prediction (NCEP-GFS), available to the public at 1° lon/lat resolution. The results of the sensitivity analyses indicate that a combination of the non-local, parabolic-type exchange coefficient PBL scheme of Yonsei University (YSU), the deep and shallow convection scheme with a mass-flux approach for cumulus parameterization (Kain-Fritsch), and the NCEP operational cloud microphysics scheme with diagnostic mixed-phase processes (Ferrier) predicts track and intensity best when compared against the Joint Typhoon Warning Center (JTWC) estimates. Further, the final choice of physical parameterization schemes selected from the above sensitivity experiments is used for model integration with different initial conditions. The results reveal that the cyclone track, intensity and time of landfall are well simulated by the model, with an average intensity error of about 8 hPa, maximum wind error of 12 m s-1 and track error of 77 km. The simulations also show that the landfall time error and intensity error decrease with later initial conditions, suggesting that the model forecast is more dependable when the cyclone approaches the coast. The distribution and intensity of rainfall are also well simulated by the model and are comparable with the TRMM estimates.
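
    A minimal sketch of how a track error such as the 77 km quoted above can be computed, using the haversine great-circle distance between a simulated cyclone centre and a reference (e.g. JTWC) position; the coordinates below are illustrative, not the Nargis best track.

    import math

    def track_error_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
        return 2 * radius_km * math.asin(math.sqrt(a))

    print(track_error_km(16.0, 94.5, 16.3, 95.2))   # roughly tens of km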

  10. Predicting the future prevalence of cigarette smoking in Italy over the next three decades.

    PubMed

    Carreras, Giulia; Gorini, Giuseppe; Gallus, Silvano; Iannucci, Laura; Levy, David T

    2012-10-01

    Smoking prevalence in Italy decreased by 37% from 1980 to the present. This is due to changes in smoking initiation and cessation rates and is in part attributable to the development of tobacco control policies. This work aims to estimate the age- and sex-specific smoking initiation and cessation probabilities for different time periods and to predict the future smoking prevalence in Italy, assuming different scenarios. A dynamic model describing the evolution of current, former and never smokers was developed. Cessation and relapse rates were estimated by fitting the model to smoking prevalence in Italy, 1986-2009. The estimated parameters were used to predict prevalence according to six scenarios: (1) 2000-09 initiation/cessation; (2) half initiation; (3) double cessation; (4) Scenarios 2+3; (5) triple cessation; and (6) Scenarios 2+5. Maintaining the 2000-09 initiation/cessation rates, the 10% goal will not be achieved within the next three decades: prevalence will stabilize at 12.1% for women and 20.3% for men. The goal could be rapidly achieved for women by halving initiation and tripling cessation (9.9%, 2016), or by tripling cessation only (10.4%, 2017); for men, by halving initiation and tripling cessation (10.8%, 2024), by doubling cessation and halving initiation (10.5%, 2033), or by tripling cessation only (10.8%, 2033). The 10% goal can be achieved within the next few decades, mainly by increasing smoking cessation. Policies to reach this goal would include increasing cigarette taxes and introducing total reimbursement of smoking cessation treatment, with further development of quitlines and smoking cessation services. These measures are not yet fully implemented in Italy.
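
    A minimal sketch of a discrete-time never/current/former smoker compartment model of the kind described above; all rates and population shares are illustrative placeholders, not the fitted Italian parameters, and births and deaths are ignored.

    def project(never, current, former, years, init_rate, cess_rate, relapse_rate):
        for _ in range(years):
            starters  = init_rate * never
            quitters  = cess_rate * current
            relapsers = relapse_rate * former
            never   -= starters
            current += starters + relapsers - quitters
            former  += quitters - relapsers
        total = never + current + former
        return current / total          # projected smoking prevalence

    # baseline vs. doubled-cessation scenario (illustrative numbers)
    print(project(40, 25, 35, 30, 0.01, 0.03, 0.005))
    print(project(40, 25, 35, 30, 0.01, 0.06, 0.005))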

  11. Is there a weekend bias in clutch-initiation dates from citizen science? Implications for studies of avian breeding phenology.

    PubMed

    Cooper, Caren B

    2014-09-01

    Accurate phenology data, such as the timing of migration and reproduction, is important for understanding how climate change influences birds. Given contradictory findings among localized studies regarding mismatches in timing of reproduction and peak food supply, broader-scale information is needed to understand how whole species respond to environmental change. Citizen science-participation of the public in genuine research-increases the geographic scale of research. Recent studies, however, showed weekend bias in reported first-arrival dates for migratory songbirds in databases created by citizen-science projects. I investigated whether weekend bias existed for clutch-initiation dates for common species in US citizen-science projects. Participants visited nests on Saturdays more frequently than other days. When participants visited nests during the laying stage, biased timing of visits did not translate into bias in estimated clutch-initiation dates, based on back-dating with the assumption of one egg laid per day. Participants, however, only visited nests during the laying stage for 25% of attempts of cup-nesting species and 58% of attempts in nest boxes. In some years, in lieu of visit data, participants provided their own estimates of clutch-initiation dates and were asked "did you visit the nest during the laying period?" Those participants who answered the question provided estimates of clutch-initiation dates with no day-of-week bias, irrespective of their answer. Those who did not answer the question were more likely to estimate clutch initiation on a Saturday. Data from citizen-science projects are useful in phenological studies when temporal biases can be checked and corrected through protocols and/or analytical methods.

  12. Is there a weekend bias in clutch-initiation dates from citizen science? Implications for studies of avian breeding phenology

    NASA Astrophysics Data System (ADS)

    Cooper, Caren B.

    2014-09-01

    Accurate phenology data, such as the timing of migration and reproduction, is important for understanding how climate change influences birds. Given contradictory findings among localized studies regarding mismatches in timing of reproduction and peak food supply, broader-scale information is needed to understand how whole species respond to environmental change. Citizen science—participation of the public in genuine research—increases the geographic scale of research. Recent studies, however, showed weekend bias in reported first-arrival dates for migratory songbirds in databases created by citizen-science projects. I investigated whether weekend bias existed for clutch-initiation dates for common species in US citizen-science projects. Participants visited nests on Saturdays more frequently than other days. When participants visited nests during the laying stage, biased timing of visits did not translate into bias in estimated clutch-initiation dates, based on back-dating with the assumption of one egg laid per day. Participants, however, only visited nests during the laying stage for 25 % of attempts of cup-nesting species and 58 % of attempts in nest boxes. In some years, in lieu of visit data, participants provided their own estimates of clutch-initiation dates and were asked "did you visit the nest during the laying period?" Those participants who answered the question provided estimates of clutch-initiation dates with no day-of-week bias, irrespective of their answer. Those who did not answer the question were more likely to estimate clutch initiation on a Saturday. Data from citizen-science projects are useful in phenological studies when temporal biases can be checked and corrected through protocols and/or analytical methods.
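
    A minimal sketch of the back-dating rule described in the two records above, assuming one egg laid per day; the visit date and egg count are illustrative.

    from datetime import date, timedelta

    def clutch_initiation(visit_date, eggs_observed):
        # first egg assumed laid (eggs_observed - 1) days before the laying-stage visit
        return visit_date - timedelta(days=eggs_observed - 1)

    print(clutch_initiation(date(2014, 5, 17), 4))   # -> 2014-05-14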

  13. Evaluating the Real-time and Offline Performance of the Virtual Seismologist Earthquake Early Warning Algorithm

    NASA Astrophysics Data System (ADS)

    Cua, G.; Fischer, M.; Heaton, T.; Wiemer, S.

    2009-04-01

    The Virtual Seismologist (VS) algorithm is a Bayesian approach to regional, network-based earthquake early warning (EEW). Bayes' theorem as applied in the VS algorithm states that the most probable source estimate at any given time is a combination of contributions from relatively static prior information that does not change over the timescale of earthquake rupture and a likelihood function that evolves with time to take into account incoming pick and amplitude observations from the on-going earthquake. Potentially useful types of prior information include network topology or station health status, regional hazard maps, earthquake forecasts, and the Gutenberg-Richter magnitude-frequency relationship. The VS codes provide magnitude and location estimates once picks are available at 4 stations; these source estimates are subsequently updated each second. The algorithm predicts the geographical distribution of peak ground acceleration and velocity using the estimated magnitude and location and appropriate ground motion prediction equations; the peak ground motion estimates are also updated each second. Implementation of the VS algorithm in California and Switzerland is funded by the Seismic Early Warning for Europe (SAFER) project. The VS method is one of three EEW algorithms whose real-time performance is being evaluated and tested by the California Integrated Seismic Network (CISN) EEW project. A crucial component of operational EEW algorithms is the ability to distinguish between noise and earthquake-related signals in real time. We discuss various empirical approaches that allow the VS algorithm to operate in the presence of noise. Real-time operation of the VS codes at the Southern California Seismic Network (SCSN) began in July 2008. On average, the VS algorithm provides initial magnitude, location, origin time, and ground motion distribution estimates within 17 seconds of the earthquake origin time. These initial estimate times are dominated by the time for 4 acceptable picks to be available, and thus are heavily influenced by the station density in a given region; these initial estimate times also include the effects of telemetry delay, which ranges between 6 and 15 seconds at the SCSN, and processing time (~1 second). Other relevant performance statistics include: 95% of initial real-time location estimates are within 20 km of the actual epicenter, and 97% of initial real-time magnitude estimates are within one magnitude unit of the network magnitude. Extension of real-time VS operations to networks in Northern California is an on-going effort. In Switzerland, the VS codes have been run on offline waveform data from over 125 earthquakes recorded by the Swiss Digital Seismic Network (SDSN) and the Swiss Strong Motion Network (SSMS). We discuss the performance of the VS algorithm on these datasets in terms of magnitude, location, and ground motion estimation.
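
    A minimal sketch of the prior-times-likelihood combination described above, assuming a Gutenberg-Richter prior on magnitude and a Gaussian amplitude-based likelihood; the b-value, the observed magnitude and its spread are illustrative assumptions, not the VS implementation.

    import numpy as np

    mags = np.linspace(2.0, 8.0, 601)
    b = 1.0
    prior = 10.0 ** (-b * mags)                 # Gutenberg-Richter relative frequencies
    prior /= prior.sum()

    def posterior(obs_mag, sigma):
        like = np.exp(-0.5 * ((mags - obs_mag) / sigma) ** 2)
        post = prior * like
        return post / post.sum()

    post = posterior(obs_mag=5.8, sigma=0.5)    # amplitude-based estimate and its spread
    print(mags[np.argmax(post)])                # MAP magnitude, pulled below 5.8 by the prior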

  14. The plasma membrane-associated NADH oxidase (ECTO-NOX) of mouse skin responds to blue light

    NASA Technical Reports Server (NTRS)

    Morre, D. James; Morre, Dorothy M.

    2003-01-01

    NADH oxidases of the external plasma membrane surface (ECTO-NOX proteins) are characterized by oscillations in activity with a regular period length of 24 min. Explants of mouse skin exhibit the oscillatory activity, as estimated from the decrease in A340, suggesting that individual ECTO-NOX molecules must somehow be induced to function synchronously. Transfer of explants of mouse skin from darkness to blue light (495 nm, 2 min, 50 micromol m(-2) s(-1)) resulted in initiation of a new activity maximum (entrainment) with a midpoint 36 min after light exposure, followed by maxima every 24 min thereafter. Addition of melatonin resulted in a new maximum 24 min after melatonin addition. The findings suggest that the ECTO-NOX proteins play a central role in the entrainment of the biological clock both by light and by melatonin.

  15. Fluvial valleys in the heavily cratered terrains of Mars: Evidence for paleoclimatic change?

    NASA Technical Reports Server (NTRS)

    Gulick, V. C.; Baker, V. R.

    1993-01-01

    Whether the formation of the Martian valley networks provides unequivocal evidence for drastically different climatic conditions remains debatable. Recent theoretical climate modeling precludes the existence of a temperate climate early in Mars' geological history. An alternative hypothesis suggests that Mars had a globally higher heat flow early in its geological history, bringing water tables to within 350 m of the surface. While a globally higher heat flow would initiate ground water circulation at depth, the valley networks probably required water tables to be even closer to the surface. Additionally, it was previously reported that the clustered distribution of the valley networks within terrain types, particularly in the heavily cratered highlands, suggests regional hydrological processes were important. The case for localized hydrothermal systems is summarized and estimates of both erosion volumes and of the implied water volumes for several Martian valley systems are presented.

  16. Biomarkers as Indicators of Respiration During Laboratory Incubations of Alaskan Arctic Tundra Permafrost Soils

    NASA Astrophysics Data System (ADS)

    Hutchings, J.; Schuur, E.; Bianchi, T. S.; Bracho, R. G.

    2015-12-01

    High latitude permafrost soils are estimated to store 1,330-1,580 Pg C, which accounts for ca. 40% of global soil C and nearly twice the atmospheric C pool. Disproportionate heating of high latitude regions during climate warming potentially results in permafrost thaw and degradation of surficial and previously-frozen soil C. Understanding how newly-thawed soils respond to microbial degradation is essential to predicting C emissions from this region. Laboratory incubations have been a key tool in understanding potential respiration rates from high latitude soils. A recent study found that among the common soil measurements, C:N was the best predictor of C losses. Here, we analyzed Alaskan Arctic tundra soils from before and after a nearly 3-year laboratory incubation. Bulk geochemical values as well as the following biomarkers were measured: lignin, amino acids, n-alkanes, and glycerol dialkyl glycerol tetraethers (GDGT). We found that initial C:N did not predict C losses and that there was no significant change in C:N between initial and final samples. The lignin acid to aldehyde (Ad:Al) degradation index showed the same pattern, with a lack of C loss prediction and no significant change during the experiment. However, we did find that C:N and Ad:Al had a significant negative correlation, suggesting behavior consistent with expectations. The failure to predict C losses was likely influenced by a number of factors, including the possibility that biomarkers were tracking a smaller fraction of slower cycling components of soil C. To better interpret these results, we also used a hydroxyproline-based amino acid degradation index and n-alkanes to estimate the contribution of Sphagnum mosses to soil samples - known to have slower turnover times than vascular plants. Finally, we applied a GDGT soil temperature proxy to estimate the growing season soil temperatures before each incubation, as well as investigating the effects of incubation temperature on the index's temperature estimate.

  17. Chronic Kidney Disease, Fluid Overload and Diuretics: A Complicated Triangle.

    PubMed

    Khan, Yusra Habib; Sarriff, Azmi; Adnan, Azreen Syazril; Khan, Amer Hayat; Mallhi, Tauqeer Hussain

    2016-01-01

    Despite the promising role of diuretics in managing fluid overload among chronic kidney disease (CKD) patients, their use is associated with adverse renal outcomes. The current study aimed to determine the extent of renal deterioration with diuretic therapy. A total of 312 non-dialysis-dependent CKD (NDD-CKD) patients were prospectively followed up for one year. Fluid overload was assessed via bioimpedance spectroscopy. Estimated GFR (eGFR) was calculated from serum creatinine values using the Chronic Kidney Disease-Epidemiology Collaboration (CKD-EPI) equation. Of the 312 patients, 64 (20.5%) were hypovolemic, while euvolemia and hypervolemia were observed in 113 (36.1%) and 135 (43.4%) patients. Overall, 144 patients were using diuretics, including 98 of the hypervolemic (72.6%), 35 of the euvolemic (30.9%), and 11 of the hypovolemic (17.2%) patients. The mean decline in estimated GFR for the entire cohort was -2.5 ± 1.4 ml/min/1.73 m2 at the end of follow-up. The use of diuretics was significantly associated with a decline in eGFR. A total of 36 (11.5%) patients initiated renal replacement therapy (RRT), and the need for RRT was more pronounced among diuretic users. The use of diuretics was associated with adverse renal outcomes, indicated by a decline in eGFR and an increased risk of RRT initiation, in our cohort of NDD-CKD patients. Therefore, it is cautiously suggested that diuretics be prescribed carefully, weighing benefit against harm for each patient.
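
    A hedged sketch of the 2009 CKD-EPI creatinine equation used for eGFR (ml/min/1.73 m2); the coefficients are reproduced from memory of the published equation and should be verified before any use, the race coefficient is omitted, and the example patient is illustrative.

    def ckd_epi_egfr(scr_mg_dl, age_years, female):
        # 2009 CKD-EPI creatinine equation (coefficients as recalled; verify before use).
        kappa = 0.7 if female else 0.9
        alpha = -0.329 if female else -0.411
        egfr = (141.0
                * min(scr_mg_dl / kappa, 1.0) ** alpha
                * max(scr_mg_dl / kappa, 1.0) ** -1.209
                * 0.993 ** age_years)
        if female:
            egfr *= 1.018
        return egfr

    print(round(ckd_epi_egfr(1.4, 60, female=False), 1))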

  18. Increasing the demand for childhood vaccination in developing countries: a systematic review

    PubMed Central

    2009-01-01

    Background Attempts to maintain or increase vaccination coverage almost all focus on supply side interventions: improving availability and delivery of vaccines. The effectiveness and cost-effectiveness of efforts to increase demand is uncertain. Methods We performed a systematic review of studies that provided quantitative estimates of the impact of demand side interventions on uptake of routine childhood vaccination. We retrieved studies published up to Sept 2008. Results The initial search retrieved 468 potentially eligible studies, including four systematic reviews and eight original studies of the impact of interventions to increase demand for vaccination. We identified only two randomised controlled trials. Interventions with an impact on vaccination uptake included knowledge translation (KT) (mass media, village resource rooms and community discussions) and non-KT initiatives (incentives, economic empowerment, household visits by extension workers). Most claimed to increase vaccine coverage by 20 to 30%. Estimates of the cost per vaccinated child varied considerably with several in the range of $10-20 per vaccinated child. Conclusion Most studies reviewed here represented a low level of evidence. Mass media campaigns may be effective, but the impact depends on access to media and may be costly if run at a local level. The persistence of positive effects has not been investigated. The economics of demand side interventions have not been adequately assessed, but available data suggest that some may be very cost-effective. PMID:19828063

  19. Preliminary experience and learning curve for laparoendoscopic single-site retroperitoneal pyeloplasty.

    PubMed

    Ou, Zhenyu; Qi, Lin; Yang, Jinrui; Chen, Xiang; Cao, Zhenzhen; Zu, Xiongbing; Liu, Longfei; Wang, Long

    2013-09-01

    To report our preliminary experience and to assess the learning curve for laparoendoscopic single-site retroperitoneal pyeloplasty (LESS-RP) for ureteropelvic junction obstruction (UPJO). From July 2010 to February 2012, LESS-RP was performed in 27 patients affected with UPJO by a single surgeon. A homemade single-access platform and both conventional and prebent instruments were applied. Patient characteristics and perioperative outcomes were analyzed. The cumulative sum (CUSUM) method was used to evaluate the learning curve. The LESS-RP was successfully accomplished in all 27 patients. The mean operative time (OT) was 175.9±22.5 minutes, and the mean estimated blood loss was 83.3±27.1 mL. We used the OT as a proxy to assess the learning curve. The CUSUM learning curve can be divided into two distinct phases: the initial 12 cases and the last 15 cases. There were significant differences in the mean OT (195.6 minutes versus 159.1 minutes, P<.001) and mean estimated blood loss (97.2 mL versus 72.2 mL, P=.014) between the two phases. The two phases did not differ in other parameters. LESS-RP is a safe and feasible procedure. The learning curve of a single surgeon suggests that the initial learning phase for LESS-RP can be completed after approximately 12 cases.
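
    A minimal sketch of the CUSUM learning-curve calculation described above: the cumulative sum of each case's deviation of operative time (OT) from the series mean, whose peak marks the end of the initial learning phase. The OT values below are illustrative, not the study data.

    import numpy as np

    ot_minutes = np.array([210, 205, 198, 202, 195, 190, 192, 188, 185, 183,
                           180, 178, 165, 160, 158, 162, 157, 155, 152, 150,
                           149, 151, 148, 147, 150, 146, 145], dtype=float)

    cusum = np.cumsum(ot_minutes - ot_minutes.mean())
    phase_break = int(np.argmax(cusum)) + 1        # case at which the CUSUM curve peaks
    print(phase_break)                              # near case 12 for these numbers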

  20. Reconstructing the Initial Human Occupation of the Northern Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Madsen, David; Brantingham, Jeffrey; Sun, Yongjuan; Rhode, David; Mingjie, Yi; Perreault, Charles

    2017-04-01

    We identified and dated 20 archaeological sites, many containing multiple occupations, above 3000 m on the northeastern margin of the Tibetan Plateau (TP) during a decade-long Sino-American Tibet Paleolithic Project. The ages of these sites are controlled by 68 AMS radiocarbon dates, as well as associated luminescence age estimates. Together these sites suggest the initial occupation of the high northern TP occurred in two phases: 1) an early phase dating to 16-8 ka, characterized by short-term hunting camps occupied by small groups of foragers likely originating from lower elevation, but relatively nearby, base camps; and 2) a later phase dating to 8-5 ka, characterized by longer-term residential camps likely occupied by larger family groups also originating from nearby lower elevations. Whether or not these later family groups were full-time foragers or were pastoralists linked to farming communities remains under investigation. This pattern closely matches genetically-based estimates of rapid population increases. Both phases appear related to major climatic episodes: a period of rapid post-glacial warming, spread of higher elevation alpine grassland/meadow environments, and enhanced populations of larger herbivores; and a period of mid-Holocene warming that allowed farming/pastoralism to develop at higher elevations. We identified no sites dating to the LGM or earlier and genetic separation of Tibetan populations likely occurred on the lower elevation plateau margins. By 5 ka essentially modern settlement/subsistence patterns were established.

  1. Temperature acclimation rate of aerobic scope and feeding metabolism in fishes: implications in a thermally extreme future.

    PubMed

    Sandblom, Erik; Gräns, Albin; Axelsson, Michael; Seth, Henrik

    2014-11-07

    Temperature acclimation may offset the increased energy expenditure (standard metabolic rate, SMR) and reduced scope for activity (aerobic scope, AS) predicted to occur with local and global warming in fishes and other ectotherms. Yet, the time course and mechanisms of this process are little understood. Acclimation dynamics of SMR, maximum metabolic rate, AS and the specific dynamic action of feeding (SDA) were determined in shorthorn sculpin (Myoxocephalus scorpius) after transfer from 10°C to 16°C. SMR increased in the first week by 82%, reducing AS to 55% of initial values, while peak postprandial metabolism was initially greater. This meant that the estimated AS during peak SDA approached zero, constraining digestion and leaving little room for additional aerobic processes. After eight weeks at 16°C, SMR was restored, while AS and the estimated AS during peak SDA recovered partly. Collectively, this demonstrated a considerable capacity for metabolic thermal compensation, which should be better incorporated into future models of organismal responses to climate change. A mathematical model based on the empirical data suggested that phenotypes with fast acclimation rates may be favoured by natural selection, as the accumulated energetic cost of a slow acclimation rate increases in a warmer future with exacerbated thermal variations. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  2. Topography of eye-position sensitivity of saccades evoked electrically from the cat's superior colliculus.

    PubMed

    McIlwain, J T

    1990-03-01

    Saccades evoked electrically from the deep layers of the superior colliculus have been examined in the alert cat with its head fixed. Amplitudes of the vertical and horizontal components varied linearly with the starting position of the eye. The slopes of the linear-regression lines provided an estimate of the sensitivity of these components to initial eye position. In observations on 29 sites in nine cats, the vertical and horizontal components of saccades evoked from a given site were rarely influenced to the same degree by initial eye position. For most sites, the horizontal component was more sensitive than the vertical component. Sensitivities of vertical and horizontal components were lowest near the representations of the horizontal and vertical meridians, respectively, of the collicular retinotopic map, but otherwise exhibited no systematic retinotopic dependence. Estimates of component amplitudes for saccades evoked from the center of the oculomotor range also diverged significantly from those predicted from the retinotopic map. The results of this and previous studies indicate that electrical stimulation of the cat's superior colliculus cannot yield a unique oculomotor map or one that is in register everywhere with the sensory retinotopic map. Several features of these observations suggest that electrical stimulation of the colliculus produces faulty activation of a saccadic control system that computes target position with respect to the head and that small and large saccades are controlled differently.

  3. Wheat productivity estimates using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F.; Colwell, J. E. (Principal Investigator); Rice, D. P.; Bresnahan, P. A.

    1977-01-01

    The author has identified the following significant results. Large area LANDSAT yield estimates were generated. These results were compared with estimates computed using a meteorological yield model (CCEA). Both of these estimates were compared with Kansas Crop and Livestock Reporting Service (KCLRS) estimates of yield, in an attempt to assess the relative and absolute accuracy of the LANDSAT and CCEA estimates. Results were inconclusive. A large area direct wheat prediction procedure was implemented. Initial results have produced a wheat production estimate comparable with the KCLRS estimate.

  4. Magnitude Estimation for the 2011 Tohoku-Oki Earthquake Based on Ground Motion Prediction Equations

    NASA Astrophysics Data System (ADS)

    Eshaghi, Attieh; Tiampo, Kristy F.; Ghofrani, Hadi; Atkinson, Gail M.

    2015-08-01

    This study investigates whether real-time strong ground motion data from seismic stations could have been used to provide an accurate estimate of the magnitude of the 2011 Tohoku-Oki earthquake in Japan. Ultimately, such an estimate could be used as input data for a tsunami forecast and would lead to more robust earthquake and tsunami early warning. We collected the strong motion accelerograms recorded by borehole and free-field (surface) Kiban Kyoshin network stations that registered this mega-thrust earthquake in order to perform an off-line test to estimate the magnitude based on ground motion prediction equations (GMPEs). GMPEs for peak ground acceleration and peak ground velocity (PGV) from a previous study by Eshaghi et al. in the Bulletin of the Seismological Society of America 103 (2013), derived using events with moment magnitude (M) ≥ 5.0, 1998-2010, were used to estimate the magnitude of this event. We developed new GMPEs using a more complete database (1998-2011), which added only 1 year but approximately twice as much data to the initial catalog (including important large events), to improve the determination of attenuation parameters and magnitude scaling. These new GMPEs were used to estimate the magnitude of the Tohoku-Oki event. The estimates obtained were compared with real-time magnitude estimates provided by the existing earthquake early warning system in Japan. Unlike the current operational magnitude estimation methods, our method did not saturate and can provide robust estimates of moment magnitude within ~100 s after earthquake onset for both catalogs. It was found that correcting for average shear-wave velocity in the uppermost 30 m (VS30) improved the accuracy of magnitude estimates from surface recordings, particularly for magnitude estimates based on PGV (Mpgv). The new GMPEs were also used to estimate the magnitude of all earthquakes in the new catalog with at least 20 records. Results show that the magnitude estimate from PGV values using borehole recordings had the smallest standard deviation among the estimated magnitudes and produced more stable and robust magnitude estimates. This suggests that incorporating borehole strong ground-motion records immediately available after the occurrence of large earthquakes can provide robust and accurate magnitude estimation.
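
    A minimal sketch of magnitude estimation by inverting a generic GMPE of the form log10(PGV) = c0 + c1*M - c2*log10(R + c3); the coefficients, station PGVs and distances below are illustrative placeholders, not the GMPEs derived in the study.

    import numpy as np

    c0, c1, c2, c3 = -4.0, 1.0, 1.5, 10.0     # assumed coefficients

    def magnitude_from_pgv(pgv_cm_s, hypocentral_km):
        return (np.log10(pgv_cm_s) - c0 + c2 * np.log10(hypocentral_km + c3)) / c1

    # average station-by-station estimates for a more robust event magnitude
    pgv = np.array([12.0, 8.5, 20.0, 5.0])       # cm/s at four stations
    dist = np.array([80.0, 120.0, 60.0, 150.0])  # km
    print(magnitude_from_pgv(pgv, dist).mean())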

  5. Trait and state anxiety across academic evaluative contexts: development and validation of the MTEA-12 and MSEA-12 scales.

    PubMed

    Sotardi, Valerie A

    2018-05-01

    Educational measures of anxiety focus heavily on students' experiences with tests yet overlook other assessment contexts. In this research, two brief multiscale questionnaires were developed and validated to measure trait evaluation anxiety (MTEA-12) and state evaluation anxiety (MSEA-12) for use in various assessment contexts in non-clinical, educational settings. The research included a cross-sectional analysis of self-report data using authentic assessment settings in which evaluation anxiety was measured. Instruments were tested using a validation sample of 241 first-year university students in New Zealand. Scale development included component structures for state and trait scales based on existing theoretical frameworks. Analyses using confirmatory factor analysis and descriptive statistics indicate that the scales are reliable and structurally valid. Multivariate general linear modeling using subscales from the MTEA-12, MSEA-12, and student grades suggests adequate criterion-related validity. Initial evidence of predictive validity was observed, with one relevant MTEA-12 factor explaining between 21% and 54% of the variance in three MSEA-12 factors. Results document the MTEA-12 and MSEA-12 as reliable measures of trait and state dimensions of evaluation anxiety for test and writing contexts. Initial estimates suggest that the scales have promising validity, and recommendations for further validation are outlined.

  6. Engagement in the HIV Care Continuum among Key Populations in Tijuana, Mexico.

    PubMed

    Smith, Laramie R; Patterson, Thomas L; Magis-Rodriguez, Carlos; Ojeda, Victoria D; Burgos, Jose Luis; Rojas, Sarah A; Zúñiga, María Luisa; Strathdee, Steffanie A

    2016-05-01

    In Tijuana, Mexico, HIV is concentrated in sub-epidemics of key populations: persons who inject drugs (PWID), sex workers (SW), and men who have sex with men (MSM). To date, data on engagement in the HIV care continuum among these key populations, particularly in resource-constrained settings, are sparse. We pooled available epidemiological data from six studies (N = 3368) to examine HIV testing and treatment uptake in these key populations, finding an overall HIV prevalence of 5.7%. Of the 191 identified HIV-positive persons, only 11.5% knew their HIV-positive status and 3.7% were on ART. Observed differences between these HIV-positive key populations suggest PWID (vs. non-PWID) were least likely to have previously tested or initiate HIV care. MSM (vs. non-MSM) were more likely to have previously tested but not more likely to know their HIV-positive status. Of persons aware of their HIV-positive status, SW (vs. non-SW) were more likely to initiate HIV care. Findings suggest engagement of key populations in HIV treatment is far below estimates observed for similarly resource-constrained generalized epidemics in sub-Saharan Africa. These data provide one of the first empirical snapshots highlighting the extent of HIV treatment disparities in key populations.

  7. Dynein-ADP as a force-generating intermediate revealed by a rapid reactivation of flagellar axoneme.

    PubMed Central

    Tani, T; Kamimura, S

    1999-01-01

    Fragmented flagellar axonemes of sand dollar spermatozoa were reactivated by rapid photolysis of caged ATP. After a time lag of 10 ms, axonemes treated with protease started sliding disintegration. Axonemes without protease digestion started nanometer-scale high-frequency oscillation after a similar time lag. Force development in the sliding disintegration was measured with a flexible glass needle, and its time course corresponded well to that of dynein-ADP intermediate production estimated using previously reported kinetic rates. However, with a high concentration (approximately 80 microM) of vanadate, which binds to the dynein-ADP intermediate and forms a stable complex of dynein-ADP-vanadate, the time course of force development in sliding disintegration was not affected at all. In the case of high frequency oscillation, the time lag to start the oscillation, the initial amplitude, and the initial frequency were not affected by vanadate, though the oscillation, once started, was damped more quickly at higher concentrations of vanadate. These results suggest that during the initial turnover of ATP hydrolysis, force generation of dynein is not blocked by vanadate. A vanadate-insensitive dynein-ADP is postulated as a force-generating intermediate. PMID:10465762

  8. Accuracy of genetic code translation and its orthogonal corruption by aminoglycosides and Mg2+ ions.

    PubMed

    Zhang, Jingji; Pavlov, Michael Y; Ehrenberg, Måns

    2018-02-16

    We studied the effects of aminoglycosides and changing Mg2+ ion concentration on the accuracy of initial codon selection by aminoacyl-tRNA in ternary complex with elongation factor Tu and GTP (T3) on mRNA programmed ribosomes. Aminoglycosides decrease the accuracy by changing the equilibrium constants of 'monitoring bases' A1492, A1493 and G530 in 16S rRNA in favor of their 'activated' state by large, aminoglycoside-specific factors, which are the same for cognate and near-cognate codons. Increasing Mg2+ concentration decreases the accuracy by slowing dissociation of T3 from its initial codon- and aminoglycoside-independent binding state on the ribosome. The distinct accuracy-corrupting mechanisms for aminoglycosides and Mg2+ ions prompted us to re-interpret previous biochemical experiments and functional implications of existing high resolution ribosome structures. We estimate the upper thermodynamic limit to the accuracy, the 'intrinsic selectivity' of the ribosome. We conclude that aminoglycosides do not alter the intrinsic selectivity but reduce the fraction of it that is expressed as the accuracy of initial selection. We suggest that induced fit increases the accuracy and speed of codon reading at unaltered intrinsic selectivity of the ribosome.

  9. Cost and cost-effectiveness of computerized vs. in-person motivational interventions in the criminal justice system.

    PubMed

    Cowell, Alexander J; Zarkin, Gary A; Wedehase, Brendan J; Lerch, Jennifer; Walters, Scott T; Taxman, Faye S

    2018-04-01

    Although substance use is common among probationers in the United States, treatment initiation remains an ongoing problem. Among the explanations for low treatment initiation are that probationers are insufficiently motivated to seek treatment, and that probation staff have insufficient training and resources to use evidence-based strategies such as motivational interviewing. A web-based intervention based on motivational enhancement principles may address some of the challenges of initiating treatment but has not been tested to date in probation settings. The current study evaluated the cost-effectiveness of a computerized intervention, Motivational Assessment Program to Initiate Treatment (MAPIT), relative to face-to-face Motivational Interviewing (MI) and supervision as usual (SAU), delivered at the outset of probation. The intervention took place in probation departments in two U.S. cities. The baseline sample comprised 316 participants (MAPIT = 104, MI = 103, and SAU = 109), 90% (n = 285) of whom completed the 6-month follow-up. Costs were estimated from study records and time logs kept by interventionists. The effectiveness outcome was self-reported initiation into any treatment (formal or informal) within 2 and 6 months of the baseline interview. The cost-effectiveness analysis involved assessing dominance and computing incremental cost-effectiveness ratios and cost-effectiveness acceptability curves. Implementation costs were used in the base case of the cost-effectiveness analysis, which excludes both a hypothetical license fee to recoup development costs and startup costs. An intent-to-treat approach was taken. MAPIT cost $79.37 per participant, which was ~$55 lower than the MI cost of $134.27 per participant. Appointment reminders comprised a large proportion of the cost of the MAPIT and MI intervention arms. In the base case, relative to SAU, MAPIT cost $6.70 per percentage point increase in the probability of initiating treatment. If a decision-maker is willing to pay $15 or more to improve the probability of initiating treatment by 1%, estimates suggest she can be 70% confident that MAPIT is good value relative to SAU at the 2-month follow-up and 90% confident that MAPIT is good value at the 6-month follow-up. Web-based MAPIT may be good value compared to in-person delivered alternatives. This conclusion is qualified because the results are not robust to narrowing the outcome to initiating formal treatment only. Further work should explore ways to improve access to efficacious treatment in probation settings. Copyright © 2018 Elsevier Inc. All rights reserved.
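
    A minimal sketch of the incremental cost-effectiveness calculation behind these figures. Only the MAPIT and MI per-participant costs are taken from the abstract; the SAU cost and the initiation probabilities below are placeholder assumptions.

      # Only the MAPIT and MI per-participant costs come from the abstract; the SAU cost and
      # the treatment-initiation probabilities are hypothetical values for illustration.
      cost = {"SAU": 0.00, "MI": 134.27, "MAPIT": 79.37}       # $ per participant
      p_initiate = {"SAU": 0.30, "MI": 0.38, "MAPIT": 0.42}    # assumed 6-month initiation probabilities

      def icer(a, b):
          """Incremental cost per percentage-point gain in initiation probability, arm a vs. arm b."""
          d_cost = cost[a] - cost[b]
          d_effect_pp = 100.0 * (p_initiate[a] - p_initiate[b])  # percentage points
          return d_cost / d_effect_pp

      print(f"MAPIT vs SAU: ${icer('MAPIT', 'SAU'):.2f} per percentage point")
      print(f"MI    vs SAU: ${icer('MI', 'SAU'):.2f} per percentage point")

    A willingness-to-pay threshold (for example, $15 per percentage point) is then compared against these ratios, with uncertainty summarized by a cost-effectiveness acceptability curve built from bootstrap replicates of cost and effect.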

  10. Reconstructing mantle volatile contents through the veil of degassing

    NASA Astrophysics Data System (ADS)

    Tucker, J.; Mukhopadhyay, S.; Gonnermann, H. M.

    2014-12-01

    The abundance of volatile elements in the mantle reveals critical information about the Earth's origin and evolution, such as the chemical constituents that built the Earth and material exchange between the mantle and exosphere. However, due to magmatic degassing, volatile element abundances measured in basalts usually do not represent those in undegassed magmas and hence in the mantle source of the basalts. While estimates of average mantle concentrations of some volatile species can be obtained, such as from the 3He flux into the oceans, volatile element variability within the mantle remains poorly constrained. Here, we use CO2-He-Ne-Ar-Xe measurements in basalts and a new degassing model to reconstruct the initial volatile contents of 8 MORBs from the Mid-Atlantic Ridge and Southwest Indian Ridge that span a wide geochemical range from depleted to enriched MORBs. We first show that equilibrium degassing (e.g., Rayleigh degassing) cannot simultaneously fit the measured CO2-He-Ne-Ar-Xe compositions in MORBs and argue that kinetic fractionation between bubbles and melt lowers the dissolved ratios of light to heavy noble gas species in the melt from that expected at equilibrium. We present a degassing model (after Gonnermann and Mukhopadhyay, 2007) that explicitly accounts for diffusive fractionation between melt and bubbles. The model computes the degassed composition based on an initial volatile composition and a diffusive timescale. To reconstruct the undegassed volatile content of a sample, we find the initial composition and degassing timescale that minimize the misfit between predicted and measured degassed compositions. Initial 3He contents calculated for the 8 MORB samples vary by a factor of ~7. We observe a correlation between initial 3He and CO2 contents, indicating relatively constant CO2/3He ratios despite the geochemical diversity and variable gas content in the basalts. Importantly, the gas-rich popping rock from the North Atlantic, as well as the average mantle ratio computed from the ridge 3He flux and independently estimated CO2 content, falls along the same correlation. This observation suggests that undegassed CO2 and noble gas concentrations can be reconstructed in individual samples through measurement of noble gases and CO2 in erupted basalts.

  11. Factors associated with duck nest success in the prairie pothole region of Canada

    USGS Publications Warehouse

    Greenwood, Raymond J.; Sargeant, Alan B.; Johnson, Douglas H.; Cowardin, Lewis M.; Shaffer, Terry L.

    1995-01-01

    Populations of some dabbling ducks have declined sharply in recent decades and information is needed to understand reasons for this. During 1982-85, we studied duck nesting for 1-4 years in 17 1.6 by 16.0-km, high-density duck areas in the Prairie Pothole Region (PPR) of Canada, 9 in parkland and 8 in prairie. We estimated nest-initiation dates, habitat preferences, nest success, and nest fates for mallards (Anas platyrhynchos), gadwalls (A. strepera), blue-winged teals (A. discors), northern shovelers (A. clypeata), and northern pintails (A. acuta). We also examined the relation of mallard production to geographic and temporal variation in wetlands, breeding populations, nesting effort, and hatch rate.Average periods of nest initiation were similar for mallards and northern pintails, and nearly twice as long as those of gadwalls, blue-winged teals, and northern shovelers. Median date of nest initiation was related to presence of wet wetlands (contained visible standing water), spring precipitation, and May temperature. Length of initiation period was related to presence of wet wetlands and precipitation in May, June temperature, and nest success; it was negatively related overall to drought that prevailed over much of Prairie Canada during the study, especially in 1984.Mallards, gadwalls, and northern pintails nested most often in brush in native grassland, blue-winged teals in road rights-of-way, and northern shovelers in hayfields and small (< 2 ha) untilled tracts of upland habitat (hereafter called Odd area). Among 8 habitat classes that composed all suitable nesting habitat of each study area, nest success estimates averaged 25% in Woodland, 19% in Brush, 18% in Hayland, 16% in Wetland, 15% in Grass, 11% in Odd area, 8% in Right-of-way, and 2% in Cropland. We detected no significant difference in nest success among species: mallard (11%), gadwall (14%), blue-winged teal (15%), northern shoveler (12%), and northern pintail (7%). Annual nest success (pooled by study area and averaged [unweighted] over all study areas) was 17% in 1982, 15% in 1983, 7% in 1984, and 14% in 1985.We estimated that predators destroyed 72% of mallard, gadwall, blue-winged teal, and northern shoveler nests and 65% of northern pintail nests. In prairie, average nest success decreased about 4 percentage points for every 10 percentage points increase in Cropland, suggesting that under conditions of 1982-85, local populations of these species probably were not stable when Cropland exceeded about 56% of available habitat. We found recent remains of 573 dead ducks during 1983-85; most were females (Anas spp.) apparently killed by predators. In some years, mallards and northern pintails were more numerous among dead ducks than we expected. More females than males were found dead among mallards and northern shovelers, suggesting higher vulnerability of females. Of factors we examined, nest-success rate appeared to be the most influential factor in determining mallard production. Nest success varied both geographically and annually.

  12. Object motion computation for the initiation of smooth pursuit eye movements in humans.

    PubMed

    Wallace, Julian M; Stone, Leland S; Masson, Guillaume S

    2005-04-01

    Pursuing an object with smooth eye movements requires an accurate estimate of its two-dimensional (2D) trajectory. This 2D motion computation requires that different local motion measurements are extracted and combined to recover the global object-motion direction and speed. Several combination rules have been proposed, such as vector averaging (VA), intersection of constraints (IOC), or 2D feature tracking (2DFT). To examine this computation, we investigated the time course of smooth pursuit eye movements driven by simple objects of different shapes. For type II diamonds (where the direction of true object motion is dramatically different from the vector average of the one-dimensional edge motions, i.e., VA ≠ IOC = 2DFT), the ocular tracking is initiated in the vector average direction. Over a period of less than 300 ms, the eye-tracking direction converges on the true object motion. The reduction of the tracking error starts before the closing of the oculomotor loop. For type I diamonds (where the direction of true object motion is identical to the vector average direction, i.e., VA = IOC = 2DFT), there is no such bias. We quantified this effect by calculating the direction error between responses to types I and II and measuring its maximum value and time constant. At low contrast and high speeds, the initial bias in tracking direction is larger and takes longer to converge onto the actual object-motion direction. This effect is attenuated with the introduction of more 2D information to the extent that it was totally obliterated with a texture-filled type II diamond. These results suggest a flexible 2D computation for motion integration, which combines all available one-dimensional (edge) and 2D (feature) motion information to refine the estimate of object-motion direction over time.
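
    A small numerical sketch of the two combination rules discussed above, with made-up edge orientations standing in for a type II stimulus. Each edge contributes only its motion component along its normal (the aperture problem); VA averages those component vectors, while IOC solves the joint constraints.

      import numpy as np

      # Two edge normals on the same side of the true motion direction (a "type II" configuration).
      n = np.array([[np.cos(a), np.sin(a)] for a in np.radians([20.0, 70.0])])  # edge normals
      v_true = np.array([3.0, 0.0])      # assumed true object velocity (rightward, 0 deg)
      s = n @ v_true                     # 1D speeds measured along each edge normal

      va = np.mean(s[:, None] * n, axis=0)   # vector average of the component vectors
      ioc = np.linalg.solve(n, s)            # intersection of constraints: solve n_i . v = s_i

      deg = lambda v: np.degrees(np.arctan2(v[1], v[0]))
      print(f"VA direction:  {deg(va):6.1f} deg")   # biased toward the average of the edge normals
      print(f"IOC direction: {deg(ioc):6.1f} deg")  # recovers the true object direction (0 deg)

    For a type I configuration, where the edge normals straddle the true direction symmetrically, the two rules give the same answer and the initial tracking bias disappears.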

  13. Neural Correlates of User-initiated Motor Success and Failure - A Brain-Computer Interface Perspective.

    PubMed

    Yazmir, Boris; Reiner, Miriam

    2018-05-15

    Any motor action is, by nature, potentially accompanied by human errors. In order to facilitate development of error-tailored Brain-Computer Interface (BCI) correction systems, we focused on internal, human-initiated errors, and investigated EEG correlates of user outcome successes and errors during a continuous 3D virtual tennis game against a computer player. We used a multisensory, 3D, highly immersive environment. Missing and repelling the tennis ball were considered as 'error' (miss) and 'success' (repel), respectively. Unlike most previous studies, where the environment "encouraged" the participant to make a mistake, here errors happened naturally, resulting from motor-perceptual-cognitive processes of incorrect estimation of the ball kinematics, and can be regarded as user internal, self-initiated errors. Results show distinct and well-defined Event-Related Potentials (ERPs), embedded in the ongoing EEG, that differ across conditions by waveforms, scalp signal distribution maps, source estimation results (sLORETA) and time-frequency patterns, establishing a series of typical features that allow valid discrimination between user internal outcome success and error. The significant delay in latency between positive peaks of error- and success-related ERPs suggests a cross-talk between top-down and bottom-up processing, represented by an outcome recognition process, in the context of the game world. Success-related ERPs had a central scalp distribution, while error-related ERPs were centro-parietal. The unique characteristics and sharp differences between EEG correlates of error/success provide the crucial components for an improved BCI system. The features of the EEG waveform can be used to detect user action outcome, to be fed into the BCI correction system. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  14. Smoking in movies and adolescent smoking initiation: longitudinal study in six European countries.

    PubMed

    Morgenstern, Matthis; Sargent, James D; Engels, Rutger C M E; Scholte, Ron H J; Florek, Ewa; Hunt, Kate; Sweeting, Helen; Mathis, Federica; Faggiano, Fabrizio; Hanewinkel, Reiner

    2013-04-01

    Longitudinal studies from the U.S. suggest a causal relationship between exposure to images of smoking in movies and adolescent smoking onset. This study investigates whether adolescent smoking onset is predicted by the amount of exposure to smoking in movies across six European countries with various cultural and regulatory approaches to tobacco. Longitudinal survey of 9987 adolescent never-smokers recruited in the years 2009-2010 (mean age=13.2 years) in 112 state-funded schools from Germany, Iceland, Italy, The Netherlands, Poland, and the United Kingdom (UK), and followed up in 2011. Exposure to movie smoking was estimated from 250 top-grossing movies in each country. Multilevel mixed-effects Poisson regressions were performed in 2012 to assess the relationship between exposure at baseline and smoking status at follow-up. During the observation period (M=12 months), 17% of the sample initiated smoking. The estimated mean exposure to on-screen tobacco was 1560 occurrences. Overall, and after controlling for age; gender; family affluence; school performance; TV screen time; personality characteristics; and smoking status of peers, parents, and siblings, exposure to each additional 1000 tobacco occurrences increased the adjusted relative risk for smoking onset by 13% (95% CI=8%, 17%, p<0.001). The crude relationship between movie smoking exposure and smoking initiation was significant in all countries; after covariate adjustment, the relationship remained significant in Germany, Iceland, The Netherlands, Poland, and UK. Seeing smoking in movies is a predictor of smoking onset in various cultural contexts. The results confirm that limiting young people's exposure to movie smoking might be an effective way to decrease adolescent smoking onset. Copyright © 2013 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  15. Global Genomic Epidemiology of Salmonella enterica Serovar Typhimurium DT104

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leekitcharoenphon, Pimlapas; Hendriksen, Rene S.; Le Hello, Simon

    It has been 30 years since the initial emergence and subsequent rapid global spread of multidrug-resistant Salmonella enterica serovar Typhimurium DT104 (MDR DT104). Nonetheless, its origin and transmission route have never been revealed. In this paper, we used whole-genome sequencing (WGS) and temporally structured sequence analysis within a Bayesian framework to reconstruct temporal and spatial phylogenetic trees and estimate the rates of mutation and divergence times of 315 S. Typhimurium DT104 isolates sampled from 1969 to 2012 from 21 countries on six continents. DT104 was estimated to have emerged initially as antimicrobial susceptible in ~1948 (95% credible interval [CI], 1934 to 1962) and later became MDR DT104 in ~1972 (95% CI, 1972 to 1988) through horizontal transfer of the 13-kb Salmonella genomic island 1 (SGI1) MDR region into susceptible strains already containing SGI1. This was followed by multiple transmission events, initially from central Europe and later between several European countries. An independent transmission to the United States and another to Japan occurred, and from there MDR DT104 was probably transmitted to Taiwan and Canada. An independent acquisition of resistance genes took place in Thailand in ~1975 (95% CI, 1975 to 1990). In Denmark, WGS analysis provided evidence for transmission of the organism between herds of animals. Interestingly, the demographic history of Danish MDR DT104 provided evidence for the success of the program to eradicate Salmonella from pig herds in Denmark from 1996 to 2000. Finally, the results from this study refute several hypotheses on the evolution of DT104 and suggest that WGS may be useful in monitoring emerging clones and devising strategies for prevention of Salmonella infections.

  16. Global Genomic Epidemiology of Salmonella enterica Serovar Typhimurium DT104

    PubMed Central

    Hendriksen, Rene S.; Le Hello, Simon; Weill, François-Xavier; Baggesen, Dorte Lau; Jun, Se-Ran; Lund, Ole; Crook, Derrick W.; Wilson, Daniel J.; Aarestrup, Frank M.

    2016-01-01

    It has been 30 years since the initial emergence and subsequent rapid global spread of multidrug-resistant Salmonella enterica serovar Typhimurium DT104 (MDR DT104). Nonetheless, its origin and transmission route have never been revealed. We used whole-genome sequencing (WGS) and temporally structured sequence analysis within a Bayesian framework to reconstruct temporal and spatial phylogenetic trees and estimate the rates of mutation and divergence times of 315 S. Typhimurium DT104 isolates sampled from 1969 to 2012 from 21 countries on six continents. DT104 was estimated to have emerged initially as antimicrobial susceptible in ∼1948 (95% credible interval [CI], 1934 to 1962) and later became MDR DT104 in ∼1972 (95% CI, 1972 to 1988) through horizontal transfer of the 13-kb Salmonella genomic island 1 (SGI1) MDR region into susceptible strains already containing SGI1. This was followed by multiple transmission events, initially from central Europe and later between several European countries. An independent transmission to the United States and another to Japan occurred, and from there MDR DT104 was probably transmitted to Taiwan and Canada. An independent acquisition of resistance genes took place in Thailand in ∼1975 (95% CI, 1975 to 1990). In Denmark, WGS analysis provided evidence for transmission of the organism between herds of animals. Interestingly, the demographic history of Danish MDR DT104 provided evidence for the success of the program to eradicate Salmonella from pig herds in Denmark from 1996 to 2000. The results from this study refute several hypotheses on the evolution of DT104 and suggest that WGS may be useful in monitoring emerging clones and devising strategies for prevention of Salmonella infections. PMID:26944846

  17. Change in life satisfaction of adults with pediatric-onset spinal cord injury.

    PubMed

    Chen, Yuying; Anderson, Caroline J; Vogel, Lawrence C; Chlan, Kathleen M; Betz, Randal R; McDonald, Craig M

    2008-12-01

    To examine the change in life satisfaction over time and potential contributing factors among adults with pediatric-onset spinal cord injury (SCI). Prospective dynamic cohort study. Community. Individuals who sustained an SCI before age 19 years (N=278) were initially interviewed at age 24 years or older and followed on an annual basis between 1996 and 2006. Not applicable. A structured telephone interview was conducted to obtain measures of the Satisfaction with Life Scale (SWLS), physical independence, participation, and psychologic functioning. Hierarchical linear modeling was performed to characterize individual person-specific time paths and estimate the average rate of change in SWLS over time. A total of 1171 interviews were conducted among 184 men and 94 women (89% white; baseline age, 27.1 ± 3.4 y; baseline years since injury, 12.8 ± 4.9). The initial SWLS score averaged 24.2 and was estimated to increase by 0.14 a year (P=.10). After adjusting for potential confounding factors, overall life satisfaction was significantly higher for women and those who were married/living with a partner; were employed/students; did not use illicit drugs; and scored high in the FIM, the mental health component of the Short Form-12, and the social integration subscale of the Craig Handicap Assessment and Reporting Technique. The rate of change in life satisfaction did not differ significantly by any personal, medical, and psychosocial characteristics under investigation. The study findings suggest that people who feel unsatisfied with life initially are likely to stay unsatisfied over time if the critical determinant factors remain unchanged in their life. To minimize the risk of decreasing life satisfaction, several modifiable risk factors identified in the present study could be targeted for intervention.
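
    A sketch of the growth-model idea (random intercepts and slopes over years) using statsmodels; the synthetic data below are generated only to mimic the reported baseline mean and yearly change and are not the study's data.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n_people, n_waves = 100, 6
      pid = np.repeat(np.arange(n_people), n_waves)
      years = np.tile(np.arange(n_waves, dtype=float), n_people)
      person_intercept = 24.2 + rng.normal(0.0, 4.0, n_people)   # person-specific baseline SWLS
      person_slope = 0.14 + rng.normal(0.0, 0.3, n_people)       # person-specific yearly change
      swls = person_intercept[pid] + person_slope[pid] * years + rng.normal(0.0, 2.0, pid.size)
      df = pd.DataFrame({"pid": pid, "years": years, "swls": swls})

      # Random-intercept, random-slope growth model; the fixed effect for 'years'
      # is the estimated average rate of change in SWLS per year.
      model = smf.mixedlm("swls ~ years", df, groups=df["pid"], re_formula="~years")
      result = model.fit()
      print(result.params["years"])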

  18. Smoking in Movies and Adolescent Smoking Initiation

    PubMed Central

    Morgenstern, Matthis; Sargent, James D.; Engels, Rutger C.M.E.; Scholte, Ron H.J.; Florek, Ewa; Hunt, Kate; Sweeting, Helen; Mathis, Federica; Faggiano, Fabrizio; Hanewinkel, Reiner

    2013-01-01

    Background Longitudinal studies from the U.S. suggest a causal relationship between exposure to images of smoking in movies and adolescent smoking onset. Purpose This study investigates whether adolescent smoking onset is predicted by the amount of exposure to smoking in movies across six European countries with various cultural and regulatory approaches to tobacco. Methods Longitudinal survey of 9987 adolescent never-smokers recruited in the years 2009–2010 (mean age 13.2 years) in 112 state-funded schools from Germany, Iceland, Italy, The Netherlands, Poland, and the United Kingdom (UK), and followed up in 2011. Exposure to movie smoking was estimated from 250 top-grossing movies in each country. Multilevel mixed-effects Poisson regressions were performed in 2012 to assess the relationship between exposure at baseline and smoking status at follow-up. Results During the observation period (M=12 months), 17% of the sample initiated smoking. The estimated mean exposure to on-screen tobacco was 1560 occurrences. Overall, and after controlling for age; gender; family affluence; school performance; TV screen time; personality characteristics; and smoking status of peers, parents, and siblings, exposure to each additional 1000 tobacco occurrences increased the adjusted relative risk for smoking onset by 13% (95% CI=8%, 17%, p<0.001). The crude relationship between movie smoking exposure and smoking initiation was significant in all countries; after covariate adjustment, the relationship remained significant in Germany, Iceland, The Netherlands, Poland, and UK. Conclusions Seeing smoking in movies is a predictor of smoking onset in various cultural contexts. The results confirm that limiting young people’s exposure to movie smoking might be an effective way to decrease adolescent smoking onset. PMID:23498098

  19. Global Genomic Epidemiology of Salmonella enterica Serovar Typhimurium DT104

    DOE PAGES

    Leekitcharoenphon, Pimlapas; Hendriksen, Rene S.; Le Hello, Simon; ...

    2016-03-04

    It has been 30 years since the initial emergence and subsequent rapid global spread of multidrug-resistant Salmonella enterica serovar Typhimurium DT104 (MDR DT104). Nonetheless, its origin and transmission route have never been revealed. In this paper, we used whole-genome sequencing (WGS) and temporally structured sequence analysis within a Bayesian framework to reconstruct temporal and spatial phylogenetic trees and estimate the rates of mutation and divergence times of 315 S. Typhimurium DT104 isolates sampled from 1969 to 2012 from 21 countries on six continents. DT104 was estimated to have emerged initially as antimicrobial susceptible in ~1948 (95% credible interval [CI], 1934 to 1962) and later became MDR DT104 in ~1972 (95% CI, 1972 to 1988) through horizontal transfer of the 13-kb Salmonella genomic island 1 (SGI1) MDR region into susceptible strains already containing SGI1. This was followed by multiple transmission events, initially from central Europe and later between several European countries. An independent transmission to the United States and another to Japan occurred, and from there MDR DT104 was probably transmitted to Taiwan and Canada. An independent acquisition of resistance genes took place in Thailand in ~1975 (95% CI, 1975 to 1990). In Denmark, WGS analysis provided evidence for transmission of the organism between herds of animals. Interestingly, the demographic history of Danish MDR DT104 provided evidence for the success of the program to eradicate Salmonella from pig herds in Denmark from 1996 to 2000. Finally, the results from this study refute several hypotheses on the evolution of DT104 and suggest that WGS may be useful in monitoring emerging clones and devising strategies for prevention of Salmonella infections.

  20. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
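
    A sketch of the hybrid global-plus-local strategy in the spirit described above, not the authors' implementation: an evolutionary global search (SciPy's differential evolution standing in for the GA stage) seeds a truncated-Newton refinement. The forward model is a toy placeholder for the groundwater model.

      import numpy as np
      from scipy.optimize import differential_evolution, minimize

      true_params = np.array([2.0, -1.0, 0.5])

      def forward_model(params, x):
          # Placeholder for the confined steady-state flow model.
          return params[0] * np.exp(params[1] * x) + params[2] * x

      x_obs = np.linspace(0.0, 3.0, 25)
      h_obs = forward_model(true_params, x_obs) + np.random.default_rng(1).normal(0.0, 0.01, x_obs.size)

      def misfit(params):
          # Sum-of-squares misfit between "observed" and modeled hydraulic heads.
          return np.sum((forward_model(params, x_obs) - h_obs) ** 2)

      bounds = [(-5.0, 5.0)] * 3
      global_fit = differential_evolution(misfit, bounds, seed=1, tol=1e-8)     # global, GA-like stage
      local_fit = minimize(misfit, global_fit.x, method="TNC", bounds=bounds)   # truncated-Newton stage
      print("estimated parameters:", np.round(local_fit.x, 3))

    The global stage reduces dependence on the starting guess, while the truncated-Newton stage sharpens the estimate near the optimum, mirroring the division of labor described in the abstract.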

  1. Timing of initiation, patterns of breastfeeding, and infant survival: prospective analysis of pooled data from three randomised trials.

    PubMed

    2016-04-01

    Although the benefits of exclusive breastfeeding for child health and survival, particularly in the post-neonatal period, are established, the independent beneficial effect of early breastfeeding initiation remains unclear. We studied the association between timing of breastfeeding initiation and post-enrolment neonatal and post-neonatal mortality up to 6 months of age, as well as the associations between breastfeeding pattern and mortality. We examined associations between timing of breastfeeding initiation, post-enrolment neonatal mortality (enrolment to 28 days), and post-neonatal mortality up to 6 months of age (29-180 days) in a large cohort from three neonatal vitamin A trials in Ghana, India, and Tanzania. Newborn babies were eligible for these trials if their mother reported that they were likely to stay in the study area for the next 6 months, they could feed orally, were aged less than 3 days, and the primary caregiver gave informed consent. We excluded infants who initiated breastfeeding after 96 h, did not initiate, or had missing initiation status. We pooled the data from both randomised groups of the three trials and then categorised time of breastfeeding initiation as ≤1 h, 2-23 h, and 24-96 h. We defined breastfeeding patterns as exclusive, predominant, or partial breastfeeding at 4 days, 1 month, and 3 months of age. We estimated relative risks using log binomial regression and Poisson regression with robust variances. Multivariate models controlled for site and potential confounders. Of 99 938 enrolled infants, 99 632 babies initiated breastfeeding by 96 h of age and were included in our prospective cohort. 56 981 (57·2%) initiated breastfeeding at ≤1 h, 38 043 (38·2%) at 2-23 h, and 4608 (4·6%) at 24-96 h. Compared with infants initiating breastfeeding within the first hour of life, neonatal mortality between enrolment and 28 days was higher in infants initiating at 2-23 h (adjusted relative risk 1·41 [95% CI 1·24-1·62], p<0·0001), and in those initiating at 24-96 h (1·79 [1·39-2·30], p<0·0001). These associations were similar when deaths in the first 4 days of life were excluded (1·32 [1·10-1·58], p=0·003, for breastfeeding initiation at 2-23 h, and 1·90 [1·38-2·62], p=0·0001, for initiation at 24-96 h). When data were stratified by exclusive breastfeeding status at 4 days of age (p value for interaction=0·690), these associations were also similar in magnitude but with wider confidence intervals for initiation at 2-23 h (1·41 [1·12-1·77], p=0·003) and for initiation at 24-96 h (1·51 [0·63-3·65], p=0·357). Exclusive breastfeeding was also associated with lower mortality during the first 6 months of life (1-3 months mortality: exclusive vs partial breastfeeding at 1 month 1·83 [1·45-2·32], p<0·0001, and exclusive breastfeeding vs no breastfeeding at 1 month 10·88 [8·27-14·31], p<0·0001). Our findings suggest that early initiation of breastfeeding reduces neonatal and early infant mortality both through increasing rates of exclusive breastfeeding and by additional mechanisms. Both practices should be promoted by public health programmes and should be used in models to estimate lives saved. Funded by the Bill & Melinda Gates Foundation through a grant to the WHO. Copyright © 2015 World Health Organization; licensee Elsevier. This is an Open Access article published without any waiver of WHO's privileges and immunities under international law, convention, or agreement.
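
    A sketch of the relative-risk estimation approach (modified Poisson regression with robust variances) on synthetic data; the category labels, sample size, and risks below are placeholders, not the pooled trial data.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      n = 20000
      cats = ["le1h", "2to23h", "24to96h"]                     # initiation categories (placeholder labels)
      init_cat = rng.choice(cats, size=n, p=[0.57, 0.38, 0.05])
      assumed_rr = {"le1h": 1.0, "2to23h": 1.4, "24to96h": 1.8}
      probs = 0.01 * np.array([assumed_rr[c] for c in init_cat])
      died = rng.binomial(1, probs)
      df = pd.DataFrame({"died": died, "init_cat": init_cat})

      # Modified Poisson regression for a binary outcome with robust (sandwich) variances;
      # exponentiated coefficients are relative risks versus the <=1 h reference category.
      fit = smf.glm("died ~ C(init_cat, Treatment(reference='le1h'))", data=df,
                    family=sm.families.Poisson()).fit(cov_type="HC1")
      print(np.exp(fit.params))      # relative risks
      print(np.exp(fit.conf_int()))  # 95% CIs on the relative-risk scale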

  2. Unsteady motion, finite Reynolds numbers, and wall effect on Vorticella convallaria contribute contraction force greater than the Stokes drag.

    PubMed

    Ryu, Sangjin; Matsudaira, Paul

    2010-06-02

    Contraction of Vorticella convallaria, a sessile ciliated protozoan, is completed within a few milliseconds and results in a retraction of its cell body toward the substratum by coiling its stalk. Previous studies have modeled the cell body as a sphere and assumed a drag force that satisfies Stokes' law. However, the contraction-induced flow of the medium is transient and bounded by the substrate, and the maximum Reynolds number is larger than unity. Thus, calculations of contractile force from the drag force are incomplete. In this study, we analyzed fluid flow during contraction by particle tracking velocimetry and computational fluid dynamics simulations to estimate the contractile force. Particle paths show that the induced flow is limited by the substrate. Simulation-based force estimates suggest that the combined effect of the flow unsteadiness, the finite Reynolds number, and the substrate comprises 35% of the total force. The work done in the early stage of contraction and the maximum power output are similar regardless of the medium viscosity. These results suggest that, during the initial development of force, V. convallaria uses a common mechanism for performing mechanical work irrespective of viscous loading conditions. Copyright (c) 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  3. Historic Properties Report: Volunteer Army Ammunition Plant, Chattanooga, Tennessee.

    DTIC Science & Technology

    1984-08-01

    on file at AMCCOM Historical Office. 3. Building use was determined by government category code; some support structures, such as change houses in the...preservation program to be carried out for the property. It should include a maintenance and repair schedule and estimated initial and annual costs. The...and estimated initial and annual costs. The preservation plan should be approved by the State Historic Preservation Officer and the Advisory Council in

  4. Global a priori estimates for the inhomogeneous Landau equation with moderately soft potentials

    NASA Astrophysics Data System (ADS)

    Cameron, Stephen; Silvestre, Luis; Snelson, Stanley

    2018-05-01

    We establish a priori upper bounds for solutions to the spatially inhomogeneous Landau equation in the case of moderately soft potentials, with arbitrary initial data, under the assumption that mass, energy and entropy densities stay under control. Our pointwise estimates decay polynomially in the velocity variable. We also show that if the initial data satisfies a Gaussian upper bound, this bound is propagated for all positive times.

  5. Global well-posedness and asymptotic behavior of solutions for the three-dimensional MHD equations with Hall and ion-slip effects

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaopeng; Zhu, Mingxuan

    2018-04-01

    In this paper, we consider the global well-posedness of solutions with small initial data for the magnetohydrodynamic equations with Hall and ion-slip effects in R^3. In addition, we establish temporal decay estimates for the weak solutions. With these estimates in hand, we study the algebraic time decay of higher-order Sobolev norms of small-initial-data solutions.

  6. Parameter identification of thermophilic anaerobic degradation of valerate.

    PubMed

    Flotats, Xavier; Ahring, Birgitte K; Angelidaki, Irini

    2003-01-01

    The mathematical model considered for the decomposition of valerate presents three unknown kinetic parameters, two unknown stoichiometric coefficients, and three unknown initial biomass concentrations. Applying a structural identifiability study, we concluded that it is necessary to perform simultaneous batch experiments with different initial conditions to estimate these parameters. Four simultaneous batch experiments were conducted at 55 degrees C, characterized by four different initial acetate concentrations. Product inhibition of valerate degradation by acetate was considered. Practical identification was done by optimizing the sum of the multiple determination coefficients for all measured state variables and all experiments simultaneously. The estimated values of the kinetic parameters and stoichiometric coefficients were characterized by the parameter correlation matrix, confidence intervals, and Student's t-test at the 5% significance level, with positive results except for the saturation constant, for which additional experiments should be conducted to improve its identifiability. In this article, we discuss kinetic parameter estimation methods.

  7. Sunlight and Skin Cancer: Lessons from the Immune System

    PubMed Central

    Ullrich, Stephen E.

    2009-01-01

    The ultraviolet (UV) radiation in sunlight induces skin cancer development. Skin cancer is the most common form of human neoplasia. Estimates suggest that in excess of 1.5 million new cases of skin cancer (www.cancer.org/statistics) will be diagnosed in the United States this year. Fortunately, because of their highly visible location, skin cancers are more rapidly diagnosed and more easily treated than other types of cancer. Be that as it may, approximately 10,000 Americans a year die from skin cancer, and the cost of treating skin cancer in the United States (both melanoma and non-melanoma skin cancer) is estimated to be in excess of $2.9 billion a year. In addition to causing skin cancer, UV radiation is also immune suppressive. In fact, data from studies with both experimental animals and biopsy-proven skin cancer patients suggest that there is an association between the immune suppressive effects of UV radiation and its carcinogenic potential. Recent studies in my laboratory have focused on understanding the initial molecular events that induce immune suppression. We made two novel observations: First, UV-induced keratinocyte-derived platelet-activating factor plays a role in the induction of immune suppression. Second, cis-urocanic acid, a skin-derived immunosuppressive compound, mediates immune suppression by binding to serotonin receptors on target cells. Recent findings suggest that blocking the binding of these compounds to their receptors not only inhibits UV-induced immune suppression but also interferes with skin cancer induction. PMID:17443748

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yasui, Chikako; Kobayashi, Naoto; Izumi, Natsuko

    To study star formation in low-metallicity environments ([M/H] ∼ −1 dex), we obtained deep near-infrared (NIR) images of Sh 2-207 (S207), which is an H ii region in the outer Galaxy with a spectroscopically determined metallicity of [O/H] ≃ −0.8 dex. We identified a young cluster in the western region of S207 with a limiting magnitude of K_S = 19.0 mag (10σ) that corresponds to a mass detection limit of ≲0.1 M_⊙ and enables the comparison of star-forming properties under low metallicity with those of the solar neighborhood. From the fitting of the K-band luminosity function (KLF), the age and distance of the S207 cluster are estimated at 2–3 Myr and ∼4 kpc, respectively. The estimated age is consistent with the suggestion of small extinctions of stars in the cluster (A_V ∼ 3 mag) and the non-detection of molecular clouds. The reasonably good fit between the observed KLF and the model KLF suggests that the underlying initial mass function (IMF) of the cluster down to the detection limit is not significantly different from the typical IMFs in the solar metallicity. From the fraction of stars with NIR excesses, a low disk fraction (<10%) in the cluster with a relatively young age is suggested, as we had previously proposed.

  9. Contribution of Doñana Wetlands to Carbon Sequestration

    PubMed Central

    Morris, Edward P.; Flecha, Susana; Figuerola, Jordi; Costas, Eduardo; Navarro, Gabriel; Ruiz, Javier; Rodriguez, Pablo; Huertas, Emma

    2013-01-01

    Inland and transitional aquatic systems play an important role in global carbon (C) cycling. Yet, the C dynamics of wetlands and floodplains are poorly defined and field data is scarce. Air-water CO2 fluxes in the wetlands of Doñana Natural Area (SW Spain) were examined by measuring alkalinity, pH and other physiochemical parameters in a range of water bodies during 2010–2011. Areal fluxes were calculated and, using remote sensing, an estimate of the contribution of aquatic habitats to gaseous transport was derived. Semi-permanent ponds adjacent to the large Guadalquivir estuary acted as mild CO2 sinks, whilst temporal wetlands were strong sources (−0.8 and 36.3, respectively). Fluxes in semi-permanent streams and ponds changed seasonally, acting as sources in spring-winter and mild sinks in autumn (16.7 and −1.2). Overall, Doñana's water bodies were a net annual source of CO2 (5.2). Up-scaling clarified the overwhelming contribution of seasonal flooding and allochthonous organic matter inputs in determining regional air-water gaseous transport (13.1). Nevertheless, this estimate is about six times lower than local marsh net primary production, suggesting the system acts as an annual net sink. Initial indications suggest longer hydroperiods may favour autochthonous C capture by phytoplankton. Direct anthropogenic impacts have reduced the hydroperiod in Doñana and this may be exacerbated by climate change (less rainfall and more evaporation), suggesting potential for the modification of C sequestration. PMID:23977044

  10. High speed radiometric measurements of IED detonation fireballs

    NASA Astrophysics Data System (ADS)

    Spidell, Matthew T.; Gordon, J. Motos; Pitz, Jeremey; Gross, Kevin C.; Perram, Glen P.

    2010-04-01

    Continuum emission is predominant in fireball spectral phenomena and, in some demonstrated cases, fine detail in the temporal evolution of infrared spectral emissions can be used to estimate size and chemical composition of the device. Recent work indicates that a few narrow radiometric bands may reveal forensic information needed for the explosive discrimination and classification problem, representing an essential step in moving from "laboratory" measurements to a rugged, fieldable system. To explore phenomena not observable in previous experiments, a high speed (10 μs resolution) radiometer with four channels spanning the infrared spectrum observed the detonation of nine home made explosive (HME) devices in the < 100 lb class. Radiometric measurements indicate that the detonation fireball is well approximated as a single temperature blackbody at early time (0 < t ≲ 3 ms). The effective radius obtained from absolute intensity indicates fireball growth at supersonic velocity during this time. Peak fireball temperatures during this initial detonation range between 3000 and 3500 K. The initial temperature decay with time (t ≲ 10 ms) can be described by a simple phenomenological model based on radiative cooling. After this rapid decay, temperature exhibits a small, steady increase with time (10 ≲ t ≲ 50 ms), peaking somewhere between 1000 and 1500 K, likely the result of post-detonation combustion, before subsequent cooling back to ambient conditions. Radius derived from radiometric measurements can be described well (R2 > 0.98) using blast model functional forms, suggesting that energy release could be estimated from single-pixel radiometric detectors. Comparison of radiometer-derived fireball size with FLIR infrared imagery indicates the Planckian intensity size estimates are about a factor of two smaller than the physical extent of the fireball.
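
    A sketch of how a single-temperature (greybody) assumption lets two radiometric bands yield a temperature estimate by ratio (two-colour) pyrometry; the band-centre wavelengths and radiances below are synthetic assumptions, not values from this work.

      import numpy as np
      from scipy.optimize import brentq

      h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

      def planck(lam, T):
          """Planck spectral radiance B(lambda, T) in W m^-3 sr^-1."""
          return 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

      lam1, lam2 = 2.0e-6, 4.0e-6     # assumed band-centre wavelengths of two radiometer channels (m)
      T_true = 3200.0                 # synthetic fireball temperature (K) used to fabricate a "measurement"
      ratio_meas = planck(lam1, T_true) / planck(lam2, T_true)

      # Solve planck(lam1,T)/planck(lam2,T) = ratio_meas for T; a constant emissivity cancels in the ratio.
      T_fit = brentq(lambda T: planck(lam1, T) / planck(lam2, T) - ratio_meas, 500.0, 6000.0)
      print(f"colour temperature estimate: {T_fit:.0f} K")

    With the temperature fixed, the absolute band radiance scales with the emitting area, which is how an effective fireball radius can be inferred from single-pixel detectors.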

  11. Real-world resource use and costs of haemophilia A-related bleeding.

    PubMed

    Shrestha, A; Eldar-Lissai, A; Hou, N; Lakdawalla, D N; Batt, K

    2017-07-01

    Prophylaxis treatment is recommended for haemophilia patients, but associated real-world economic costs and potential cost-savings associated with improved disease management are not fully known. This study aimed to assess haemophilia A-related resource use and cost by treatment type (prophylaxis versus non-prophylaxis) and any associated cost-savings. Truven MarketScan Commercial claims data (2004-2012) were used to identify haemophilia A-related healthcare utilization, healthcare costs and patterns of prophylaxis and non-prophylaxis treatment among 6- to 64-year-old males. We estimated bleeding-related resource utilization and costs in three age groups (6-18, 19-44, 45-64) by treatment types and assessed the extent to which early initiation of prophylactic treatment can mitigate them. T-tests and ordinary least squares regressions were used to compare unadjusted and demographics-adjusted cost estimates. Among children, overall haemophilia- and bleeding-related non-pharmacy costs were substantially lower for patients receiving prophylaxis (haemophilia-related: $15,864 vs. $53,408; P < 0.001; bleeding-related: $696 vs. $2013, respectively; P = 0.04). Among younger adults (19-44), haemophilia-related non-pharmacy costs were lower for patients receiving prophylaxis ($22,028 vs. $56,311, respectively; P = 0.001). Among children, these savings fully offset the incremental pharmacy cost due to prophylaxis. Among younger adults, the savings offset approximately 34% of the incremental pharmacy cost. No differences were found for older adults (45-64). These results suggest that initiating prophylaxis earlier in life may reduce the healthcare costs of bleeding events and their long-term complications. Future studies should strive to collect more detailed information on disease severity and treatment protocols to improve estimates of disease burden. © 2017 John Wiley & Sons Ltd.

  12. Clinical Outcomes from Androgen Signaling-directed Therapy after Treatment with Abiraterone Acetate and Prednisone in Patients with Metastatic Castration-resistant Prostate Cancer: Post Hoc Analysis of COU-AA-302.

    PubMed

    Smith, Matthew R; Saad, Fred; Rathkopf, Dana E; Mulders, Peter F A; de Bono, Johann S; Small, Eric J; Shore, Neal D; Fizazi, Karim; Kheoh, Thian; Li, Jinhui; De Porre, Peter; Todd, Mary B; Yu, Margaret K; Ryan, Charles J

    2017-07-01

    In the COU-AA-302 trial, abiraterone acetate plus prednisone significantly increased overall survival for patients with chemotherapy-naïve metastatic castration-resistant prostate cancer (mCRPC). Limited information exists regarding response to subsequent androgen signaling-directed therapies following abiraterone acetate plus prednisone in patients with mCRPC. We investigated clinical outcomes associated with subsequent abiraterone acetate plus prednisone (55 patients) and enzalutamide (33 patients) in a post hoc analysis of COU-AA-302. Prostate-specific antigen (PSA) response was assessed. Median time to PSA progression was estimated using the Kaplan-Meier method. The PSA response rate (≥50% PSA decline, unconfirmed) was 44% and 67%, respectively. The median time to PSA progression was 3.9 mo (range 2.6-not estimable) for subsequent abiraterone acetate plus prednisone and 2.8 mo (range 1.8-not estimable) for subsequent enzalutamide. The majority of patients (68%) received intervening chemotherapy before subsequent abiraterone acetate plus prednisone or enzalutamide. While acknowledging the limitations of post hoc analyses and high censoring (>75%) in both treatment groups, these results suggest that subsequent therapy with abiraterone acetate plus prednisone or enzalutamide for patients who progressed on abiraterone acetate is associated with limited clinical benefit. This analysis showed limited clinical benefit for subsequent abiraterone acetate plus prednisone or enzalutamide in patients with metastatic castration-resistant prostate cancer following initial treatment with abiraterone acetate plus prednisone. This analysis does not support prioritization of subsequent abiraterone acetate plus prednisone or enzalutamide following initial therapy with abiraterone acetate plus prednisone. Copyright © 2017 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  13. Improving Operating Room Efficiency via Reduction and Standardization of Video-Assisted Thoracoscopic Surgery Instrumentation.

    PubMed

    Friend, Tynan H; Paula, Ashley; Klemm, Jason; Rosa, Mark; Levine, Wilton

    2018-05-28

    Being the economic powerhouses of most large medical centers, operating rooms (ORs) require the highest levels of teamwork, communication, and efficiency in order to optimize patient safety and reduce hospital waste. A major component of OR waste comes from unused surgical instrumentation: instruments that are frequently prepared for procedures but never touched by the surgical team still require a full reprocessing cycle at the conclusion of the case. Based on our own previous successes in the perioperative domain, in this work we detail an initiative that reduces surgical instrumentation waste in video-assisted thoracoscopic surgery (VATS) procedures by placing thoracotomy conversion instrumentation in a standby location and designing a specific instrument kit to be used solely for VATS cases. Our estimates suggest that this initiative will prevent at least 91,800 pounds of unnecessary surgical instrumentation from cycling through our ORs and reprocessing department annually, resulting in increased OR team communication without sacrificing the highest standard of patient safety.

  14. Wind-influenced projectile motion

    NASA Astrophysics Data System (ADS)

    Bernardo, Reginald Christian; Perico Esguerra, Jose; Day Vallejos, Jazmine; Jerard Canda, Jeff

    2015-03-01

    We solved the wind-influenced projectile motion problem with the same initial and final heights and obtained exact analytical expressions for the shape of the trajectory, range, maximum height, time of flight, time of ascent, and time of descent with the help of the Lambert W function. It turns out that the range and maximum horizontal displacement are not always equal. When launched at a critical angle, the projectile will return to its starting position. It turns out that a launch angle of 90° maximizes the time of flight, time of ascent, time of descent, and maximum height and that the launch angle corresponding to maximum range can be obtained by solving a transcendental equation. Finally, we expressed in a parametric equation the locus of points corresponding to maximum heights for projectiles launched from the ground with the same initial speed in all directions. We used the results to estimate how much a moderate wind can modify a golf ball’s range and suggested other possible applications.
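
    A sketch of the time-of-flight and range calculation for equal launch and landing heights, assuming linear (Stokes-type) drag relative to a steady horizontal wind so that the Lambert W function applies; the drag coefficient, launch parameters, and wind speed are illustrative, not values from the paper.

      import numpy as np
      from scipy.special import lambertw
      from scipy.optimize import brentq

      g = 9.81          # gravitational acceleration (m/s^2)
      k = 0.25          # assumed linear drag coefficient per unit mass (1/s)
      v0, theta = 40.0, np.radians(35.0)   # assumed launch speed (m/s) and angle
      w = 5.0           # assumed steady horizontal wind speed (m/s), positive = tailwind

      vx0, vy0 = v0 * np.cos(theta), v0 * np.sin(theta)
      A = vy0 + g / k   # auxiliary constant from the vertical solution

      # Time of flight T (equal launch/landing heights) solves A*(1 - exp(-k*T)) = g*T.
      # Substituting s = A - g*T turns it into the Lambert-W equation s*exp(-k*s/g) = A*exp(-k*A/g).
      arg = -(k * A / g) * np.exp(-k * A / g)
      flight_times = []
      for branch in (0, -1):
          s = -(g / k) * np.real(lambertw(arg, branch))
          T = (A - s) / g
          if T > 1e-9:                      # discard the trivial root T = 0
              flight_times.append(T)
      T_flight = max(flight_times)

      # Cross-check by root-finding on the vertical position y(t) directly.
      y = lambda t: (A / k) * (1.0 - np.exp(-k * t)) - g * t / k
      T_check = brentq(y, 1e-6, 4.0 * v0 / g)

      # A horizontal wind shifts the range but leaves the vertical motion (and T) unchanged.
      x_wind = w * T_flight + (vx0 - w) * (1.0 - np.exp(-k * T_flight)) / k
      x_still = vx0 * (1.0 - np.exp(-k * T_flight)) / k
      print(f"time of flight: {T_flight:.2f} s (root-finding check: {T_check:.2f} s)")
      print(f"range: {x_wind:.1f} m with a {w:+.0f} m/s wind, {x_still:.1f} m in still air")

    In this linear-drag model the ascent and descent times and the maximum height follow from the same exponential solutions, which is where closed-form Lambert-W expressions of the kind quoted above arise.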

  15. Lexical stress encoding in single word production estimated by event-related brain potentials.

    PubMed

    Schiller, Niels O

    2006-09-27

    An event-related brain potentials (ERPs) experiment was carried out to investigate the time course of lexical stress encoding in language production. Native speakers of Dutch viewed a series of pictures corresponding to bisyllabic names which were either stressed on the first or on the second syllable and made go/no-go decisions on the lexical stress location of those picture names. Behavioral results replicated a pattern that was observed earlier, i.e. faster button-press latencies to initial as compared to final stress targets. The electrophysiological results indicated that participants could make a lexical stress decision significantly earlier when picture names had initial than when they had final stress. Moreover, the present data suggest the time course of lexical stress encoding during single word form formation in language production. When word length is corrected for, the temporal interval for lexical stress encoding specified by the current ERP results falls into the time window previously identified for phonological encoding in language production.

  16. Simultaneous measurements of concentration and velocity in the Richtmyer-Meshkov instability

    NASA Astrophysics Data System (ADS)

    Reese, Dan; Ames, Alex; Noble, Chris; Oakley, Jason; Rothamer, David; Bonazza, Riccardo

    2017-11-01

    The Richtmyer-Meshkov instability (RMI) is studied experimentally in the Wisconsin Shock Tube Laboratory (WiSTL) using a broadband, shear layer initial condition at the interface between a helium-acetone mixture and argon. This interface (Atwood number A=0.7) is accelerated by either a M=1.6 or M=2.2 planar shock wave, and the development of the RMI is investigated through simultaneous planar laser-induced fluorescence (PLIF) and particle image velocimetry (PIV) measurements at the initial condition and four post-shock times. Three Reynolds stresses, the planar turbulent kinetic energy, and the Taylor microscale are calculated from the concentration and velocity fields. The external Reynolds number is estimated from the Taylor scale and the velocity statistics. The results suggest that the flow transitions to fully developed turbulence by the third post-shock time for the high Mach number case, while it may not have transitioned at the lower Mach number. The authors would like to acknowledge the support of the Department of Energy.
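    As a rough illustration of how the velocity statistics enter these estimates, the sketch below computes a Taylor microscale and a Taylor-scale Reynolds number from a 2-D velocity field using the common isotropic-turbulence relations; the field, grid spacing, and viscosity are synthetic placeholders, and the experimental post-processing may differ.

```python
# Minimal sketch: estimating a Taylor microscale and Taylor-scale Reynolds
# number from a 2-D PIV-like velocity field, using the common estimate
# lambda^2 = <u'^2> / <(du'/dx)^2>. Field, grid spacing and viscosity are
# synthetic placeholders, not experimental values.
import numpy as np

rng = np.random.default_rng(0)
dx = 1.0e-3                                  # PIV grid spacing [m] -- placeholder
nu = 1.5e-5                                  # kinematic viscosity [m^2/s] -- placeholder
u = rng.normal(0.0, 2.0, (256, 256))         # streamwise velocity field [m/s], synthetic

u_fluct = u - u.mean()                       # velocity fluctuations
u_rms = np.sqrt(np.mean(u_fluct**2))         # rms fluctuation
dudx = np.gradient(u_fluct, dx, axis=1)      # streamwise derivative

taylor_lambda = u_rms / np.sqrt(np.mean(dudx**2))    # Taylor microscale
re_lambda = u_rms * taylor_lambda / nu               # Taylor-scale Reynolds number
print(f"lambda = {taylor_lambda*1e3:.2f} mm, Re_lambda = {re_lambda:.0f}")
```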

  17. Degradation potentials of dissolved organic carbon (DOC) from thawed permafrost peat

    PubMed Central

    Panneer Selvam, Balathandayuthabani; Lapierre, Jean-François; Guillemette, Francois; Voigt, Carolina; Lamprecht, Richard E.; Biasi, Christina; Christensen, Torben R.; Martikainen, Pertti J.; Berggren, Martin

    2017-01-01

    Global warming can substantially affect the export of dissolved organic carbon (DOC) from peat-permafrost to aquatic systems. The direct degradability of such peat-derived DOC, however, is poorly constrained because previous permafrost thaw studies have mainly addressed mineral soil catchments or DOC pools that have already been processed in surface waters. We incubated peat cores from a palsa mire to compare an active layer and an experimentally thawed permafrost layer with regard to DOC composition and degradation potentials of pore water DOC. Our results show that DOC from the thawed permafrost layer had high initial degradation potentials compared with DOC from the active layer. In fact, the DOC that showed the highest bio- and photo-degradability, respectively, originated in the thawed permafrost layer. Our study sheds new light on the DOC composition of peat-permafrost directly upon thaw and suggests that past estimates of carbon-dioxide emissions from thawed peat permafrost may be biased as they have overlooked the initial mineralization potential of the exported DOC. PMID:28378792

  18. Elite athletes' estimates of the prevalence of illicit drug use: evidence for the false consensus effect.

    PubMed

    Dunn, Matthew; Thomas, Johanna O; Swift, Wendy; Burns, Lucinda

    2012-01-01

    The false consensus effect (FCE) is the tendency for people to assume that others share their attitudes and behaviours to a greater extent than they actually do. The FCE has been demonstrated for a range of health behaviours, including substance use. The study aimed to explore the relationship between elite athletes' engagement in recreational drug use and their consensus estimates (the FCE) and to determine whether those who engage in the behaviour overestimate the use of others around them. The FCE was investigated among 974 elite Australian athletes who were classified according to their drug use history. Participants tended to report that there was a higher prevalence of drug use among athletes in general compared with athletes in their sport, and these estimates appeared to be influenced by participants' drug use history. While overestimation of drug use by participants was not common, this overestimation also appeared to be influenced by athletes' drug use history. The results suggest that athletes who have a history of illicit drug use overestimate the prevalence of drug use among athletes. These findings may be helpful in the formulation of normative education initiatives. © 2011 Australasian Professional Society on Alcohol and other Drugs.

  19. Estimate of Shock-Hugoniot Adiabat of Liquids from Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Bouton, E.; Vidal, P.

    2007-12-01

    Shock states are generally obtained from shock velocity (D) and material velocity (u) measurements. In this paper, we propose a hydrodynamical method for estimating the (D-u) relation of Nitromethane from easily measured properties of the initial state. The method is based upon the differentiation of the Rankine-Hugoniot jump relations with the initial temperature considered as a variable and under the constraint of a unique nondimensional shock-Hugoniot. We then obtain an ordinary differential equation for the shock velocity D in the variable u. Upon integration, this method predicts the shock Hugoniot of liquid Nitromethane with a 5% accuracy for initial temperatures ranging from 250 K to 360 K.

  20. Evaluation of factors associated with second remission in dogs with lymphoma undergoing retreatment with a cyclophosphamide, doxorubicin, vincristine, and prednisone chemotherapy protocol: 95 cases (2000-2007).

    PubMed

    Flory, Andrea B; Rassnick, Kenneth M; Erb, Hollis N; Garrett, Laura D; Northrup, Nicole C; Selting, Kim A; Phillips, Brenda S; Locke, Jennifer E; Chretin, John D

    2011-02-15

    To evaluate factors associated with second remission in dogs with lymphoma retreated with a cyclophosphamide, doxorubicin, vincristine, and prednisone (CHOP) protocol after relapse following initial treatment with a first-line 6-month CHOP protocol. Retrospective case series. 95 dogs with lymphoma. Medical records were reviewed. Remission duration was estimated by use of the Kaplan-Meier method. Factors potentially associated with prognosis were examined. Median remission duration after the first-line CHOP protocol was 289 days (range, 150 to 1,457 days). Overall, 78% (95% confidence interval [CI], 69% to 86%) of dogs achieved a complete remission following retreatment, with a median second remission duration of 159 days (95% CI, 126 to 212 days). Duration of time off chemotherapy was associated with likelihood of response to retreatment; median time off chemotherapy was 140 days for dogs that achieved a complete remission after retreatment and 84 days for dogs that failed to respond to retreatment. Second remission duration was associated with remission duration after initial chemotherapy; median second remission duration for dogs with initial remission duration ≥ 289 days was 214 days (95% CI, 168 to 491 days), compared with 98 days (95% CI, 70 to 144 days) for dogs with initial remission duration < 289 days. Findings suggested that retreatment with the CHOP protocol can be effective in dogs with lymphoma that successfully complete an initial 6-month CHOP protocol.

  1. Impact of generic antiretroviral therapy (ART) and free ART programs on time to initiation of ART at a tertiary HIV care center in Chennai, India.

    PubMed

    Solomon, Sunil S; Lucas, Gregory M; Kumarasamy, Nagalingeswaran; Yepthomi, Tokugha; Balakrishnan, Pachamuthu; Ganesh, Aylur K; Anand, Santhanam; Moore, Richard D; Solomon, Suniti; Mehta, Shruti H

    2013-08-01

    Antiretroviral therapy (ART) access in the developing world has improved, but whether increased access has translated to more rapid treatment initiation among those who need it is unknown. We characterize time to ART initiation across three eras of ART availability in Chennai, India (1996-1999: pregeneric; 2000-2003: generic; 2004-2007: free rollout). Between 1996 and 2007, 11,171 patients registered for care at the YR Gaitonde Centre for AIDS Research and Education (YRGCARE), a tertiary HIV referral center in southern India. Of these, 5726 patients became eligible for ART during this period as per Indian guidelines for initiation of ART. Generalized gamma survival models were used to estimate relative times (RT) to ART initiation by calendar periods of eligibility. Time to initiation of ART among patients in Chennai, India was also compared to an HIV clinical cohort in Baltimore, USA. Median age of the YRGCARE patients was 34 years; 77% were male. The median CD4 at presentation was 140 cells/µl. After adjustment for demographics, CD4 and WHO stage, persons in the pregeneric era took 3.25 times longer (95% confidence interval [CI]: 2.53-4.17) to initiate ART versus the generic era and persons in the free rollout era initiated ART more rapidly than the generic era (RT: 0.73; 95% CI: 0.63-0.83). Adjusting for differences across centers, patients at YRGCARE took longer than patients in the Johns Hopkins Clinical Cohort (JHCC) to initiate ART in the pregeneric era (RT: 4.90; 95% CI: 3.37-7.13) but in the free rollout era, YRGCARE patients took only about a quarter of the time (RT: 0.31; 95% CI: 0.22-0.44). These data demonstrate the benefits of generic ART and government rollouts on time to initiation of ART in one developing country setting and suggest that access to ART may be comparable to developed country settings.

  2. Estimating mangrove in Florida: trials monitoring rare ecosystems

    Treesearch

    Mark J. Brown

    2015-01-01

    Mangrove species are keystone components in coastal ecosystems and are the interface between forest land and sea. Yet, estimates of their area have varied widely. Forest Inventory and Analysis (FIA) data from ground-based sample plots provide one estimate of the resource. Initial FIA estimates of the mangrove resource in Florida varied dramatically from those compiled...

  3. Correlates of human papillomavirus (HPV) vaccination initiation and completion among 18-26 year olds in the United States.

    PubMed

    Adjei Boakye, Eric; Lew, Daphne; Muthukrishnan, Meera; Tobo, Betelihem B; Rohde, Rebecca L; Varvares, Mark A; Osazuwa-Peters, Nosayaba

    2018-04-30

    To examine correlates of HPV vaccination uptake in a nationally representative sample of 18-26-year-old adults. Young adults aged 18-26 years were identified from the 2014 and 2015 National Health Interview Survey (n = 7588). Survey-weighted multivariable logistic regression models estimated sociodemographic factors associated with HPV vaccine initiation (≥1 dose) and completion (≥3 doses). Approximately 27% of study participants had initiated the HPV vaccine and 16% had completed the HPV vaccine. Participants were less likely to initiate the vaccine if they were men [(adjusted odds ratio) 0.19; (95% confidence interval) 0.16-0.23], had a high school diploma (0.40; 0.31-0.52) or less (0.46; 0.32-0.64) vs. college graduates, and were born outside the United States (0.52; 0.40-0.69). However, participants were more likely to initiate the HPV vaccine if they visited the doctor's office 1-5 times (2.09; 1.56-2.81), or ≥ 6 times (1.86; 1.48-2.34) within the last 12 months vs. no visits. Odds of completing HPV vaccine uptake followed the same pattern as initiation. After stratifying the study population by gender and foreign-born status, these variables remained statistically significant. In our nationally representative study, only one out of six 18-26 year olds completed the required vaccine doses. Men, individuals with high school or less education, and those born outside the United States were less likely to initiate and complete the HPV vaccination. Our findings suggest that it may be useful to develop targeted interventions to promote HPV vaccination among those in the catch-up age range.

  4. Dosing of Selective Serotonin Reuptake Inhibitors Among Children and Adults Before and After the FDA Black-Box Warning.

    PubMed

    Bushnell, Greta A; Stürmer, Til; Swanson, Sonja A; White, Alice; Azrael, Deborah; Pate, Virginia; Miller, Matthew

    2016-03-01

    Prior research evaluated various effects of the 2004 black-box warning by the U.S. Food and Drug Administration (FDA) on the risk of suicidality among children associated with use of antidepressants, but the warning's effect on dosing of antidepressants has not been evaluated. This study estimated whether the initial antidepressant dose prescribed decreased and the proportion of patients who augmented the dose on the second fill increased following the 2004 warning and its 2007 expansion to young adults. The study utilized the LifeLink Health Plan Claims Database. The study cohort consisted of commercially insured children (ages 5-17), young adults (18-24), and adults (25-64) who initiated a selective serotonin reuptake inhibitor (SSRI) (citalopram, fluoxetine, paroxetine, or sertraline) from January 1, 2000, to December 31, 2009. Dose per day was determined by days' supply, strength, and quantity dispensed. Initiation with a low dose and augmentation of >1 mg/day on the second prescription before and after the 2004 warning were considered. Of 51,948 children who initiated an SSRI, 15% initiated with a low dose before the 2004 warning compared with 31% after the warning; there was a smaller change among young adults (6 percentage points) and adults (3 percentage points). The overall increase in dose augmentations among children and young adults was driven by the increase in patients initiating with a low dose. The proportion of commercially insured children initiating an SSRI with a low dose was higher after the 2004 FDA warning on the risk of suicidality among children, suggesting improved prescribing practices surrounding SSRI dosing among children.

  5. Investigations of Stratosphere-Troposphere Exchange of Ozone Derived From MLS Observations

    NASA Technical Reports Server (NTRS)

    Olsen, Mark A.; Schoeberl, Mark R.; Ziemke, Jerry R.

    2006-01-01

    Daily high-resolution maps of stratospheric ozone have been constructed using observations by MLS combined with trajectory information. These fields are used to determine the extratropical stratosphere-troposphere exchange (STE) of ozone for the year 2005 using two diagnostic methods. The resulting two annual estimates compare well with past model- and observational-based estimates. Initial analyses of the seasonal characteristics indicate that significant STE of ozone in the polar regions occurs only during spring and early summer. We also examine evidence that the Antarctic ozone hole is responsible for a rapid decrease in the rate of ozone STE during the SH spring. Subtracting the high-resolution stratospheric ozone from OMI total column measurements creates a high-resolution tropospheric ozone residual (HTOR) product. The HTOR fields are compared to the spatial distribution of the ozone STE. We show that the mean tropospheric ozone maxima tend to occur near locations of significant ozone STE. This suggests that STE may be responsible for a significant fraction of many mean tropospheric ozone anomalies.

  6. Implications of Middle School Behavior Problems for High School Graduation and Employment Outcomes of Young Adults: Estimation of a Recursive Model.

    PubMed

    Karakus, Mustafa C; Salkever, David S; Slade, Eric P; Ialongo, Nicholas; Stuart, Elizabeth

    2012-01-01

    The potentially serious adverse impacts of behavior problems during adolescence on employment outcomes in adulthood provide a key economic rationale for early intervention programs. However, the extent to which lower educational attainment accounts for the total impact of adolescent behavior problems on later employment remains unclear. As an initial step in exploring this issue, we specify and estimate a recursive bivariate probit model that 1) relates middle school behavior problems to high school graduation and 2) models later employment in young adulthood as a function of these behavior problems and of high school graduation. Our model thus allows for both a direct effect of behavior problems on later employment as well as an indirect effect that operates via graduation from high school. Our empirical results, based on analysis of data from the NELS, suggest that the direct effects of externalizing behavior problems on later employment are not significant but that these problems have important indirect effects operating through high school graduation.
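    The following sketch shows one way such a recursive bivariate probit likelihood can be coded: a graduation equation, an employment equation that includes graduation as a regressor, and bivariate-normal errors with correlation rho. The data are synthetic and the two-equation specification is illustrative rather than the authors' NELS model; production analyses would typically rely on analytic gradients or a dedicated routine (e.g., Stata's biprobit).

```python
# Sketch of a recursive bivariate probit: y1 (high school graduation) depends on
# behavior problems; y2 (employment) depends on behavior problems and on y1.
# Errors are bivariate normal with correlation rho. Synthetic data; illustrative
# specification, not the authors' NELS model.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
n = 400
behav = rng.normal(size=n)                                         # behavior problems index
eps = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=n)
y1 = (0.5 - 0.6 * behav + eps[:, 0] > 0).astype(float)             # graduation
y2 = (0.2 - 0.1 * behav + 0.8 * y1 + eps[:, 1] > 0).astype(float)  # employment

def negloglik(theta):
    a1, b1, a2, b2, g, atanh_rho = theta
    rho = np.tanh(atanh_rho)                     # keeps rho inside (-1, 1)
    w1 = a1 + b1 * behav                         # graduation index
    w2 = a2 + b2 * behav + g * y1                # employment index, includes observed y1
    q1, q2 = 2 * y1 - 1, 2 * y2 - 1              # sign flips for the observed outcomes
    pts = np.column_stack([q1 * w1, q2 * w2])
    sign = q1 * q2                               # sets the sign of the correlation term
    ll = 0.0
    for s in (1.0, -1.0):
        m = sign == s
        if m.any():
            cov = [[1.0, s * rho], [s * rho, 1.0]]
            p = multivariate_normal.cdf(pts[m], mean=[0.0, 0.0], cov=cov)
            ll += np.sum(np.log(np.clip(p, 1e-12, 1.0)))
    return -ll

fit = minimize(negloglik, np.zeros(6), method="Nelder-Mead",
               options={"maxiter": 4000, "maxfev": 4000})
a1, b1, a2, b2, g, atanh_rho = fit.x
print("behavior problems -> graduation:", round(b1, 2))
print("direct effect on employment:", round(b2, 2), "| graduation -> employment:", round(g, 2))
```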

  7. Application of Reflected Global Navigation Satellite System (GNSS-R) Signals in the Estimation of Sea Roughness Effects in Microwave Radiometry

    NASA Technical Reports Server (NTRS)

    Voo, Justin K.; Garrison, James L.; Yueh, Simon H.; Grant, Michael S.; Fore, Alexander G.; Haase, Jennifer S.; Clauss, Bryan

    2010-01-01

    In February-March 2009 NASA JPL conducted an airborne field campaign using the Passive Active L-band System (PALS) and the Ku-band Polarimetric Scatterometer (PolSCAT) collecting measurements of brightness temperature and near surface wind speeds. Flights were conducted over a region of expected high-speed winds in the Atlantic Ocean, for the purposes of algorithm development for salinity retrievals. Wind speeds encountered were in the range of 5 to 25 m/s during the two-week deployment. The NASA-Langley GPS delay-mapping receiver (DMR) was also flown to collect GPS signals reflected from the ocean surface and generate post-correlation power vs. delay measurements. These data were used to estimate ocean surface roughness, and a strong correlation with brightness temperature was found. Initial results suggest that reflected GPS signals, using small low-power instruments, will provide an additional source of data for correcting brightness temperature measurements for the purpose of sea surface salinity retrievals.

  8. WIC in Your Neighborhood: New Evidence on the Impacts of Geographic Access to Clinics

    PubMed Central

    Rossin-Slater, Maya

    2013-01-01

    A large body of evidence indicates that conditions in-utero and health at birth matter for individuals’ long-run outcomes, suggesting potential value in programs aimed at pregnant women and young children. This paper uses a novel identification strategy and data from birth and administrative records over 2005–2009 to provide causal estimates of the effects of geographic access to the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). My empirical approach uses within-ZIP-code variation in WIC clinic presence together with maternal fixed effects, and accounts for the potential endogeneity of mobility, gestational-age bias, and measurement error in gestation. I find that access to WIC increases food benefit take-up, pregnancy weight gain, birth weight, and the probability of breastfeeding initiation at the time of hospital discharge. The estimated effects are strongest for mothers with a high school education or less, who are most likely eligible for WIC services. PMID:24043906

  9. Effects of practice on the Wechsler Adult Intelligence Scale-IV across 3- and 6-month intervals.

    PubMed

    Estevis, Eduardo; Basso, Michael R; Combs, Dennis

    2012-01-01

    A total of 54 participants (age M = 20.9; education M = 14.9; initial Full Scale IQ M = 111.6) were administered the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) at baseline and again either 3 or 6 months later. Scores on the Full Scale IQ, Verbal Comprehension, Working Memory, Perceptual Reasoning, Processing Speed, and General Ability Indices improved approximately 7, 5, 4, 5, 9, and 6 points, respectively, and increases were similar regardless of whether the re-examination occurred over 3- or 6-month intervals. Reliable change indices (RCI) were computed using the simple difference and bivariate regression methods, providing estimated base rates of change across time. The regression method provided more accurate estimates of reliable change than did the simple difference between baseline and follow-up scores. These findings suggest that prior exposure to the WAIS-IV results in significant score increments. These gains reflect practice effects instead of genuine intellectual changes, which may lead to errors in clinical judgment.
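    For readers unfamiliar with reliable change indices, the sketch below works through both approaches named above: the simple-difference RCI and a regression-based RCI that compares the retest score with the score predicted from baseline. The reliability, standard deviations, and scores are placeholder values, not the statistics reported in this study.

```python
# Sketch of the two reliable-change approaches mentioned above: the simple
# difference method (Jacobson-Truax style) and a regression-based method that
# predicts the retest score from baseline. Reliability, SDs and scores are
# placeholder values, not those reported in the study.
import numpy as np

sd_baseline = 15.0                   # normative SD of the index score -- placeholder
r_xx = 0.95                          # test-retest reliability -- placeholder
score_t1, score_t2 = 112.0, 119.0    # baseline and retest scores -- example values

# Simple-difference RCI: observed change divided by the SE of the difference
sem = sd_baseline * np.sqrt(1.0 - r_xx)
se_diff = np.sqrt(2.0) * sem
rci_simple = (score_t2 - score_t1) / se_diff

# Regression-based RCI: compare retest score to the score predicted from baseline
mean_t1, mean_t2, sd_t2 = 111.6, 118.0, 15.0     # placeholder group statistics
slope = r_xx * sd_t2 / sd_baseline               # regression of time-2 on time-1
predicted_t2 = mean_t2 + slope * (score_t1 - mean_t1)
se_est = sd_t2 * np.sqrt(1.0 - r_xx**2)          # standard error of estimate
rci_regression = (score_t2 - predicted_t2) / se_est

print(f"simple-difference RCI = {rci_simple:.2f}, regression RCI = {rci_regression:.2f}")
# |RCI| > 1.96 is commonly taken as change beyond measurement error alone.
```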

  10. Investigating the kinetics of the enzymatic depolymerization of polygalacturonic acid in continuous UF-membrane reactors.

    PubMed

    Gallifuoco, Alberto; Cantarella, Maria; Marucci, Mariagrazia

    2007-01-01

    A stirred tank membrane reactor is used to study the kinetics of polygalacturonic acid (PGA) enzymatic hydrolysis. The reactor operates in semicontinuous configuration: the native biopolymer is loaded at the initial time and the system is continuously fed with the buffer. The effect of retention time (from 101 to 142 min) and membrane molecular weight cutoff (from 1 to 30 kDa) on the rate of permeable oligomers production is investigated. Reaction products are clustered in two different classes, those sized below the membrane cutoff and those above. The reducing power measured in the permeate is used as an estimate of total product concentration. The characteristic breakdown times range from 40 to 100 min. The overall kinetics obeys a first-order law with a characteristic time estimated at 24 min. New mathematical data-handling procedures are developed and illustrated using the experimental data obtained. Finally, the body of experimental results provides useful indications (reactor productivity, breakdown induction period) for implementing the bioprocess at industrial scale.

  11. Modeling Evolution of the Chandeleur Barrier Islands, Southeastern Louisiana: Initial Exploration of a Possible Threshold Crossing

    NASA Astrophysics Data System (ADS)

    Moore, L. J.; List, J. H.; Williams, S. J.

    2007-12-01

    Airborne photographic and lidar observations of the 72 km-long Chandeleur Island arc in southeastern Louisiana since August 2005 indicate that large volumes of sediment were removed from the islands during and following Hurricane Katrina and suggest that a return to pre-storm island configuration may be unlikely. Others have suggested, based on recent field observations, that the southern portion of the Chandeleur Islands may be showing signs of becoming an inner shelf shoal. In contrast to these observations, plentiful sand has been observed in the nearshore farther to the north; based on this finding it has been suggested that at least the northern portion of the Chandeleur Islands may be poised for recovery. Given the range of observations, it is unclear if Hurricane Katrina initiated a threshold crossing in the Chandeleurs causing the subaerial, landward- migrating barrier islands to begin evolving as submerged sand shoals. If a threshold crossing has not yet occurred and the Chandeleurs do recover from the impact of Hurricane Katrina, it remains uncertain how imminent a threshold crossing may be. To better understand the potential future evolution of the Chandeleur Islands and to assess the combination of factors that are likely to cause a threshold crossing in this environment, a series of initial model experiments are being conducted using the morphological-behavior model GEOMBEST. This model simulates the evolution of coastal morphology and stratigraphy resulting from changes in relative sea level and sediment supply, and provides insight into how barriers evolve over time scales ranging from decades to millennia. Vibracore logs, geophysical records, bathymetric surveys, and lidar surveys provide data necessary to design the model domain, while sediment budget studies, estimates of sea-level rise rates, and measurements of shoreline change rates provide input and calibration parameters. Late Holocene model runs simulate the evolution of 42 km-long North Chandeleur Island as it migrated from the distal end of the St. Bernard Delta to its modern position. Building on the late Holocene simulation, we present a series of initial, multi-decadal forward model experiments that assess the combination of factors, including relative sea-level rise rates, sediment supply rates, and geologic framework, that are likely to initiate a threshold crossing in the Chandeleur Islands.

  12. Building an alternative fuel refueling network: How many stations are needed and where should they be placed?

    NASA Astrophysics Data System (ADS)

    Nicholas, Michael Anselm

    Gasoline stations are so numerous that the fear of running out of fuel is likely not a top concern among drivers. This may not be the case with the introduction of a new alternative fuel such as hydrogen or electricity. The next three chapters, originally written as peer-reviewed journal papers [1,2,3], examine the characteristics of refueling in today's gasoline network and compare these characteristics to hypothetical new alternative fuel networks. Together, they suggest that alternative fuel networks with many fewer stations than exist in the gasoline network could be acceptable to future consumers. This acceptability is measured in three ways. The first chapter examines the distance from home to the nearest station and finds that if alternative fuel stations were one-third as numerous as gasoline stations, the travel time to the nearest station would be virtually identical to that of gasoline stations. The results suggest that even for station networks numbering only one-twentieth the current number of outlets, the difference in travel time with respect to gasoline is relatively small. Acceptability was examined in the second chapter by analyzing the spatial refueling patterns of gasoline. This reveals that the volume of fuel sold is greater around the highways and that the route from home to the nearest highway entrance may account for a large portion of refueling. This suggests that the first alternative fuel stations could be sited along the highway near entrances and could provide acceptable access to fuel for those who use these highway entrances to access the wider region. Subsequent stations could be sited closer to the homes of customers. The third chapter estimates acceptability, measured in terms of initial vehicle purchase price, of refueling away from one's own town. A pilot survey using a map-based questionnaire was distributed to 20 respondents. Respondents chose ten station locations to enable travel to their most important destinations. The alternative fuel vehicle was then compared to the equivalent gasoline vehicle. The effect on initial purchase price of the vehicle is estimated when some or all of these stations are available. Single-vehicle households put a higher premium on station availability than multi-vehicle households.

  13. Impact of antiretroviral therapy on tuberculosis incidence among HIV-positive patients in high-income countries.

    PubMed

    del Amo, Julia; Moreno, Santiago; Bucher, Heiner C; Furrer, Hansjakob; Logan, Roger; Sterne, Jonathan; Pérez-Hoyos, Santiago; Jarrín, Inma; Phillips, Andrew; Lodi, Sara; van Sighem, Ard; de Wolf, Wolf; Sabin, Caroline; Bansi, Loveleen; Justice, Amy; Goulet, Joseph; Miró, José M; Ferrer, Elena; Meyer, Laurence; Seng, Rémonie; Toulomi, Giota; Gargalianos, Panagiotis; Costagliola, Dominique; Abgrall, Sophie; Hernán, Miguel A

    2012-05-01

    The lower tuberculosis incidence reported in human immunodeficiency virus (HIV)-positive individuals receiving combined antiretroviral therapy (cART) is difficult to interpret causally. Furthermore, the role of unmasking immune reconstitution inflammatory syndrome (IRIS) is unclear. We aim to estimate the effect of cART on tuberculosis incidence in HIV-positive individuals in high-income countries. The HIV-CAUSAL Collaboration consisted of 12 cohorts from the United States and Europe of HIV-positive, ART-naive, AIDS-free individuals aged ≥18 years with baseline CD4 cell count and HIV RNA levels followed up from 1996 through 2007. We estimated hazard ratios (HRs) for cART versus no cART, adjusted for time-varying CD4 cell count and HIV RNA level via inverse probability weighting. Of 65 121 individuals, 712 developed tuberculosis over 28 months of median follow-up (incidence, 3.0 cases per 1000 person-years). The HR for tuberculosis for cART versus no cART was 0.56 (95% confidence interval [CI], 0.44-0.72) overall, 1.04 (95% CI, 0.64-1.68) for individuals aged >50 years, and 1.46 (95% CI, 0.70-3.04) for people with a CD4 cell count of <50 cells/μL. Compared with people who had not started cART, HRs differed by time since cART initiation: 1.36 (95% CI, 0.98-1.89) for initiation <3 months ago and 0.44 (95% CI, 0.34-0.58) for initiation ≥3 months ago. Compared with people who had not initiated cART, HRs <3 months after cART initiation were 0.67 (95% CI, 0.38-1.18), 1.51 (95% CI, 0.98-2.31), and 3.20 (95% CI, 1.34-7.60) for people <35, 35-50, and >50 years old, respectively, and 2.30 (95% CI, 1.03-5.14) for people with a CD4 cell count of <50 cells/μL. Tuberculosis incidence decreased after cART initiation but not among people >50 years old or with CD4 cell counts of <50 cells/μL. Despite an overall decrease in tuberculosis incidence, the increased rate during 3 months of ART suggests unmasking IRIS.

  14. Estimating unbiased economies of scale of HIV prevention projects: a case study of Avahan.

    PubMed

    Lépine, Aurélia; Vassall, Anna; Chandrashekar, Sudha; Blanc, Elodie; Le Nestour, Alexis

    2015-04-01

    Governments and donors are investing considerable resources in HIV prevention in order to scale up these services rapidly. Given the current economic climate, providers of HIV prevention services increasingly need to demonstrate that these investments offer good 'value for money'. One of the primary routes to achieve efficiency is to take advantage of economies of scale (a reduction in the average cost of a health service as provision scales up), yet empirical evidence on economies of scale is scarce. Methodologically, the estimation of economies of scale is hampered by several statistical issues that prevent causal inference and make the estimation complex. In order to estimate unbiased economies of scale when scaling up HIV prevention services, we apply our analysis to one of the few HIV prevention programmes globally delivered at a large scale: the Indian Avahan initiative. We costed the project by collecting data from the 138 Avahan NGOs and the supporting partners in the first four years of its scale-up, between 2004 and 2007. We develop a parsimonious empirical model and apply a system Generalized Method of Moments (GMM) and fixed-effects Instrumental Variable (IV) estimators to estimate unbiased economies of scale. At the programme level, we find that, after controlling for the endogeneity of scale, the scale-up of Avahan has generated high economies of scale. Our findings suggest that average cost reductions per person reached are achievable when scaling up HIV prevention in low and middle income countries. Copyright © 2015 Elsevier Ltd. All rights reserved.
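    As a stylized illustration of the fixed-effects IV logic (though not the paper's system-GMM specification), the sketch below demeans a panel within NGOs to absorb fixed effects and then applies two-stage least squares by hand to estimate the elasticity of average cost with respect to scale; the data, the instrument, and the variable names are synthetic placeholders.

```python
# Minimal sketch of a fixed-effects IV (2SLS) estimate of economies of scale:
# regress log average cost per person reached on log scale, instrumenting scale
# because it may be endogenous. Data, instrument and variable names are
# synthetic placeholders; the paper's system-GMM specification is richer.
import numpy as np

rng = np.random.default_rng(2)
n_ngo, n_year = 138, 4
ngo = np.repeat(np.arange(n_ngo), n_year)
target_pop = rng.normal(size=n_ngo * n_year)     # hypothetical instrument (e.g. target population)
u = rng.normal(size=n_ngo * n_year)
log_scale = 1.0 + 0.8 * target_pop + 0.5 * u     # endogenous: correlated with the cost shock u
log_avg_cost = 3.0 - 0.4 * log_scale + u + 0.3 * rng.normal(size=n_ngo * n_year)

def within(x, groups):
    """Demean within each group (absorbs NGO fixed effects)."""
    out = x.astype(float).copy()
    for g in np.unique(groups):
        out[groups == g] -= out[groups == g].mean()
    return out

y = within(log_avg_cost, ngo)
x = within(log_scale, ngo)
z = within(target_pop, ngo)

# Stage 1: project the endogenous regressor on the instrument
x_hat = z * (z @ x) / (z @ z)
# Stage 2: regress demeaned log average cost on the fitted values
beta_iv = (x_hat @ y) / (x_hat @ x_hat)

print(f"estimated scale elasticity of average cost: {beta_iv:.2f}")
# A negative elasticity (average cost falls as scale rises) indicates economies of scale.
```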

  15. The efficacy of respondent-driven sampling for the health assessment of minority populations.

    PubMed

    Badowski, Grazyna; Somera, Lilnabeth P; Simsiman, Brayan; Lee, Hye-Ryeon; Cassel, Kevin; Yamanaka, Alisha; Ren, JunHao

    2017-10-01

    Respondent driven sampling (RDS) is a relatively new network sampling technique typically employed for hard-to-reach populations. Like snowball sampling, initial respondents or "seeds" recruit additional respondents from their network of friends. Under certain assumptions, the method promises to produce a sample independent from the biases that may have been introduced by the non-random choice of "seeds." We conducted a survey on health communication in Guam's general population using the RDS method, the first survey that has utilized this methodology in Guam. It was conducted in hopes of identifying a cost-efficient non-probability sampling strategy that could generate reasonable population estimates for both minority and general populations. RDS data was collected in Guam in 2013 (n=511) and population estimates were compared with 2012 BRFSS data (n=2031) and the 2010 census data. The estimates were calculated using the unweighted RDS sample and the weighted sample using RDS inference methods and compared with known population characteristics. The sample size was reached in 23 days, providing evidence that the RDS method is a viable, cost-effective data collection method, which can provide reasonable population estimates. However, the results also suggest that the RDS inference methods used to reduce bias, based on self-reported estimates of network sizes, may not always work. Caution is needed when interpreting RDS study findings. For a more diverse sample, data collection should not be conducted in just one location. Fewer questions about network estimates should be asked, and more careful consideration should be given to the kind of incentives offered to participants. Copyright © 2017. Published by Elsevier Ltd.

  16. How long do I have? Observational study on communication about life expectancy with advanced cancer patients.

    PubMed

    Henselmans, I; Smets, E M A; Han, P K J; de Haes, H C J C; Laarhoven, H W M van

    2017-10-01

    To examine how communication about life expectancy is initiated in consultations about palliative chemotherapy, and what prognostic information is presented. Patients with advanced cancer (n=41) with a median life expectancy <1 year and oncologists (n=6) and oncologists-in-training (n=7) meeting with them in consultations (n=62) to discuss palliative chemotherapy were included. Verbatim transcripts of audio-recorded consultations were analyzed using MAXqda10. Life expectancy was addressed in 19 of 62 of the consultations. In all cases, patients took the initiative, most often through direct questions. Estimates were provided in 12 consultations in various formats: the likelihood of experiencing a significant event, point estimates or general time scales of "months to years", often with an emphasis on the "years". The indeterminacy of estimates was consistently stressed. Their potential inadequacy was also regularly addressed, often by describing beneficial prognostic predictors for the specific patient. Oncologists did not address the reliability or precision of estimates. Oncologists did not initiate talk about life expectancy; they used various formats, emphasized the positive, and stressed the unpredictability, but not the ambiguity, of estimates. Prognostic communication should be part of the medical curriculum. Further research should address the effect of different formats of information provision. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Access to destinations : arterial data acquisition and network-wide travel time estimation (phase II).

    DOT National Transportation Integrated Search

    2010-03-01

    The objectives of this project were to (a) produce historic estimates of travel times on Twin-Cities arterials : for 1995 and 2005, and (b) develop an initial architecture and database that could, in the future, produce timely : estimates of arterial...

  18. Integrated carbon budget models for the Everglades terrestrial-coastal-oceanic gradient: Current status and needs for inter-site comparisons

    USGS Publications Warehouse

    Troxler, Tiffany G.; Gaiser, Evelyn; Barr, Jordan; Fuentes, Jose D.; Jaffe, Rudolf; Childers, Daniel L.; Collado-Vides, Ligia; Rivera-Monroy, Victor H.; Castañeda-Moya, Edward; Anderson, William; Chambers, Randy; Chen, Meilian; Coronado-Molina, Carlos; Davis, Stephen E.; Engel, Victor C.; Fitz, Carl; Fourqurean, James; Frankovich, Tom; Kominoski, John; Madden, Chris; Malone, Sparkle L.; Oberbauer, Steve F.; Olivas, Paulo; Richards, Jennifer; Saunders, Colin; Schedlbauer, Jessica; Scinto, Leonard J.; Sklar, Fred; Smith, Thomas J.; Smoak, Joseph M.; Starr, Gregory; Twilley, Robert; Whelan, Kevin

    2013-01-01

    Recent studies suggest that coastal ecosystems can bury significantly more C than tropical forests, indicating that continued coastal development and exposure to sea level rise and storms will have global biogeochemical consequences. The Florida Coastal Everglades Long Term Ecological Research (FCE LTER) site provides an excellent subtropical system for examining carbon (C) balance because of its exposure to historical changes in freshwater distribution and sea level rise and its history of significant long-term carbon-cycling studies. FCE LTER scientists used net ecosystem C balance and net ecosystem exchange data to estimate C budgets for riverine mangrove, freshwater marsh, and seagrass meadows, providing insights into the magnitude of C accumulation and lateral aquatic C transport. Rates of net C production in the riverine mangrove forest exceeded those reported for many tropical systems, including terrestrial forests, but there are considerable uncertainties around those estimates due to the high potential for gain and loss of C through aquatic fluxes. C production was approximately balanced between gain and loss in Everglades marshes; however, the contribution of periphyton increases uncertainty in these estimates. Moreover, while the approaches used for these initial estimates were informative, a resolved approach for addressing areas of uncertainty is critically needed for coastal wetland ecosystems. Once resolved, these C balance estimates, in conjunction with an understanding of drivers and key ecosystem feedbacks, can inform cross-system studies of ecosystem response to long-term changes in climate, hydrologic management, and other land use along coastlines

  19. Mars global digital dune database and initial science results

    USGS Publications Warehouse

    Hayward, R.K.; Mullins, K.F.; Fenton, L.K.; Hare, T.M.; Titus, T.N.; Bourke, M.C.; Colaprete, A.; Christensen, P.R.

    2007-01-01

    A new Mars Global Digital Dune Database (MGD3) constructed using Thermal Emission Imaging System (THEMIS) infrared (IR) images provides a comprehensive and quantitative view of the geographic distribution of moderate- to large-size dune fields (area >1 km²) that will help researchers to understand global climatic and sedimentary processes that have shaped the surface of Mars. MGD3 extends from 65°N to 65°S latitude and includes ~550 dune fields, covering ~70,000 km², with an estimated total volume of ~3,600 km³. This area, when combined with polar dune estimates, suggests moderate- to large-size dune field coverage on Mars may total ~800,000 km², ~6 times less than the total areal estimate of ~5,000,000 km² for terrestrial dunes. Where availability and quality of THEMIS visible (VIS) or Mars Orbiter Camera narrow-angle (MOC NA) images allow, we classify dunes and include dune slipface measurements, which are derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. For dunes located within craters, the azimuth from crater centroid to dune field centroid (referred to as dune centroid azimuth) is calculated and can provide an accurate method for tracking dune migration within smooth-floored craters. These indicators of wind direction are compared to output from a general circulation model (GCM). Dune centroid azimuth values generally correlate to regional wind patterns. Slipface orientations are less well correlated, suggesting that local topographic effects may play a larger role in dune orientation than regional winds. Copyright 2007 by the American Geophysical Union.

  20. Integrating Temperature-Dependent Life Table Data into a Matrix Projection Model for Drosophila suzukii Population Estimation

    PubMed Central

    Wiman, Nik G.; Walton, Vaughn M.; Dalton, Daniel T.; Anfora, Gianfranco; Burrack, Hannah J.; Chiu, Joanna C.; Daane, Kent M.; Grassi, Alberto; Miller, Betsey; Tochen, Samantha; Wang, Xingeng; Ioriatti, Claudio

    2014-01-01

    Temperature-dependent fecundity and survival data were integrated into a matrix population model to describe relative Drosophila suzukii Matsumura (Diptera: Drosophilidae) population increase and age structure based on environmental conditions. This novel modification of the classic Leslie matrix population model is presented as a way to examine how insect populations interact with the environment, and has application as a predictor of population density. For D. suzukii, we examined model implications for pest pressure on crops. As case studies, we examined model predictions in three small fruit production regions in the United States (US) and one in Italy. These production regions have distinctly different climates. In general, patterns of adult D. suzukii trap activity broadly mimicked seasonal population levels predicted by the model using only temperature data. Age structure of estimated populations suggests that trap and fruit infestation data are of limited value and are insufficient for model validation. Thus, we suggest alternative experiments for validation. The model is advantageous in that it provides stage-specific population estimation, which can potentially guide management strategies and provide unique opportunities to simulate stage-specific management effects such as insecticide applications or the effect of biological control on a specific life-stage. The two factors that drive initiation of the model are suitable temperatures (biofix) and availability of a suitable host medium (fruit). Although there are many factors affecting population dynamics of D. suzukii in the field, temperature-dependent survival and reproduction are believed to be the main drivers for D. suzukii populations. PMID:25192013
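    A stripped-down version of the matrix projection idea is sketched below: daily survival and fecundity are simple functions of temperature, a stage-structured matrix is rebuilt each day, and the population vector is projected forward from a biofix of a few adults. The stage structure, thermal response curves, and parameter values are illustrative placeholders, not the published D. suzukii life-table values.

```python
# Minimal sketch of a temperature-driven, Leslie-style matrix projection:
# daily survival and fecundity are simple functions of temperature, the matrix
# is rebuilt each day, and the stage-structured population vector is projected
# forward. Stage structure, response curves and parameters are illustrative
# placeholders, not published life-table values.
import numpy as np

def survival(temp_c):
    """Illustrative thermal performance curve for daily stage survival (0-1)."""
    return np.clip(1.0 - ((temp_c - 22.0) / 15.0) ** 2, 0.0, 0.98)

def fecundity(temp_c):
    """Illustrative eggs per adult per day, peaking near 22 C."""
    return max(0.0, 6.0 * (1.0 - ((temp_c - 22.0) / 10.0) ** 2))

def daily_matrix(temp_c):
    s, f = survival(temp_c), fecundity(temp_c)
    # Stages: egg/larva, pupa, adult. Sub-diagonal = maturation * survival,
    # diagonal = remaining in stage, first row = reproduction by adults.
    return np.array([
        [0.80 * s, 0.0,      f       ],
        [0.15 * s, 0.70 * s, 0.0     ],
        [0.0,      0.25 * s, 0.90 * s],
    ])

temps = 15.0 + 10.0 * np.sin(np.linspace(0, np.pi, 180))   # synthetic season of daily temperatures
pop = np.array([0.0, 0.0, 10.0])                           # biofix: start with 10 adults
history = []
for t in temps:
    pop = daily_matrix(t) @ pop
    history.append(pop.copy())

history = np.array(history)
peak_day = int(history.sum(axis=1).argmax())
print(f"peak total population on day {peak_day}: {history.sum(axis=1).max():.3g}")
print("stage fractions at peak:", np.round(history[peak_day] / history[peak_day].sum(), 2))
```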

  1. Estimating the 'consumer surplus' for branded versus standardised tobacco packaging.

    PubMed

    Gendall, Philip; Eckert, Christine; Hoek, Janet; Farley, Tessa; Louviere, Jordan; Wilson, Nick; Edwards, Richard

    2016-11-01

    Tobacco companies question whether standardised (or 'plain') packaging will change smokers' behaviour. We addressed this question by estimating how standardised packaging compared to a proven tobacco control intervention, price increases through excise taxes, thus providing a quantitative measure of standardised packaging's likely effect. We conducted an online study of 311 New Zealand smokers aged 18 years and above that comprised a willingness-to-pay task comparing a branded and a standardised pack at four different price levels, and a choice experiment. The latter used an alternative-specific design, where the alternatives were a branded pack or a standardised pack, with warning theme and price varied for each pack. Respondents had higher purchase likelihoods for the branded pack (with a 30% warning) than the standardised pack (with a 75% warning) at each price level tested, and, on average, were willing to pay approximately 5% more for a branded pack. The choice experiment produced a very similar estimate of 'consumer surplus' for a branded pack. However, the size of the 'consumer surplus' varied between warning themes and by respondents' demographic characteristics. These two experiments suggest standardised packaging and larger warning labels could have a similar overall effect on adult New Zealand smokers as a 5% tobacco price increase. The findings provide further evidence for the efficacy of standardised packaging, which focuses primarily on reducing youth initiation, and suggest this measure will also bring notable benefits to adult smokers. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  2. Foot placement relies on state estimation during visually guided walking.

    PubMed

    Maeda, Rodrigo S; O'Connor, Shawn M; Donelan, J Maxwell; Marigold, Daniel S

    2017-02-01

    As we walk, we must accurately place our feet to stabilize our motion and to navigate our environment. We must also achieve this accuracy despite imperfect sensory feedback and unexpected disturbances. In this study we tested whether the nervous system uses state estimation to beneficially combine sensory feedback with forward model predictions to compensate for these challenges. Specifically, subjects wore prism lenses during a visually guided walking task, and we used trial-by-trial variation in prism lenses to add uncertainty to visual feedback and induce a reweighting of this input. To expose altered weighting, we added a consistent prism shift that required subjects to adapt their estimate of the visuomotor mapping relationship between a perceived target location and the motor command necessary to step to that position. With added prism noise, subjects responded to the consistent prism shift with smaller initial foot placement error but took longer to adapt, compatible with our mathematical model of the walking task that leverages state estimation to compensate for noise. Much like when we perform voluntary and discrete movements with our arms, it appears our nervous system uses state estimation during walking to accurately guide our foot to the ground. Accurate foot placement is essential for safe walking. We used computational models and human walking experiments to test how our nervous system achieves this accuracy. We find that our control of foot placement beneficially combines sensory feedback with internal forward model predictions to accurately estimate the body's state. Our results match recent computational neuroscience findings for reaching movements, suggesting that state estimation is a general mechanism of human motor control. Copyright © 2017 the American Physiological Society.
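    The reweighting logic can be illustrated with a one-dimensional Kalman-filter sketch, shown below: a forward-model prediction of the target location is fused with noisy visual feedback, and raising the visual noise lowers the gain, so the planned foot placement responds less to a newly imposed consistent shift at first and takes more steps to converge to it. The parameters are synthetic and the model is a qualitative caricature of the study's analysis, not a reimplementation of it.

```python
# Minimal sketch of state estimation for foot placement: a scalar Kalman filter
# fuses a forward-model prediction of the target location with noisy visual
# feedback. Higher visual noise (as with variable prism lenses) lowers the gain,
# so the response to a consistent shift is smaller at first and convergence is
# slower. All parameters are synthetic placeholders.
import numpy as np

def track_shift(visual_noise_sd, shift=2.0, n_steps=80, process_sd=0.1, seed=3):
    rng = np.random.default_rng(seed)
    estimate, est_var = 0.0, 0.5 ** 2      # prior: target at the pre-shift location
    trajectory = []
    for _ in range(n_steps):
        est_var += process_sd ** 2                              # forward-model prediction step
        measurement = shift + rng.normal(0.0, visual_noise_sd)  # shifted, noisy visual sample
        gain = est_var / (est_var + visual_noise_sd ** 2)       # weight given to vision
        estimate += gain * (measurement - estimate)             # fuse prediction and vision
        est_var *= (1.0 - gain)
        trajectory.append(estimate)                             # planned foot-placement offset
    return np.array(trajectory)

for sd in (0.3, 1.5):
    traj = track_shift(visual_noise_sd=sd)
    steps = int(np.argmax(traj > 0.9 * 2.0)) + 1                # steps to reach 90% of the shift
    print(f"visual noise {sd}: first-step response {traj[0]:.2f}, ~{steps} steps to 90% adaptation")
```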

  3. Tracing children's vocabulary development from preschool through the school-age years: An 8-year longitudinal study

    PubMed Central

    Kang, Cuiping; Liu, Hongyun; Zhang, Yuping; McBride-Chang, Catherine; Tardif, Twila; Li, Hong; Liang, Weilan; Zhang, Zhixiang; Shu, Hua

    2014-01-01

    In this 8-year longitudinal study, we traced the vocabulary growth of Chinese children, explored potential precursors of vocabulary knowledge, and investigated how vocabulary growth predicted future reading skills. Two hundred sixty-four (264) native Chinese children from Beijing were measured on a variety of reading and language tasks over 8 years. Between the ages of 4 to 10 years, they were administered tasks of vocabulary and related cognitive skills. At age 11, comprehensive reading skills, including character recognition, reading fluency, and reading comprehension were examined. Individual differences in vocabulary developmental profiles were estimated using the intercept-slope cluster method. Vocabulary development was then examined in relation to later reading outcomes. Three subgroups of lexical growth were classified, namely high-high (with a large initial vocabulary size and a fast growth rate), low-high (with a small initial vocabulary size and a fast growth rate) and low-low (with a small initial vocabulary size and a slow growth rate) groups. Low-high and low-low groups were distinguishable mostly through phonological skills, morphological skills and other reading-related cognitive skills. Childhood vocabulary development (using intercept and slope) explained subsequent reading skills. Findings suggest that language-related and reading-related cognitive skills differ among groups with different developmental trajectories of vocabulary, and the initial size and growth rate of vocabulary may be two predictors for later reading development. PMID:24962559

  4. Waterbird use of catfish ponds and migratory bird habitat initiative wetlands in Mississippi

    USGS Publications Warehouse

    Feaga, James S.; Vilella, Francisco; Kaminski, Richard M.; Davis, J. Brian

    2015-01-01

    Aquaculture can provide important surrogate habitats for waterbirds. In response to the 2010 Deepwater Horizon oil spill, the National Resource Conservation Service enacted the Migratory Bird Habitat Initiative through which incentivized landowners provided wetland habitats for migrating waterbirds. Diversity and abundance of waterbirds in six production and four idled aquaculture facilities in the Mississippi Alluvial Valley were estimated during the winters of 2011–2013. Wintering waterbirds exhibited similar densities on production (i.e., ∼22 birds/ha) and idled (i.e., ∼20 birds/ha) sites. A total of 42 species were found using both types of aquaculture wetlands combined, but there was considerable departure in bird guilds occupying the two wetland types. The primary users of production ponds were diving and dabbling ducks and American coots. However, idled ponds, with varying water depths (e.g., mudflats to 20 cm) and diverse emergent vegetation-water interspersion, attracted over 30 species of waterbirds and, on average, had more species of waterbirds from fall through early spring than catfish production ponds. Conservation through the Migratory Bird Habitat Initiative was likely responsible for this difference. Our results suggest production and idled Migratory Bird Habitat Initiative aquaculture impoundments produced suitable conditions for various waterbird species and highlight the importance of conservation programs on private lands that promote diversity in vegetation structure and water depths to enhance waterbird diversity.

  5. An improved procedure for El Nino forecasting: Implications for predictability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, D.; Zebiak, S.E.; Cane, M.A.

    A coupled ocean-atmosphere data assimilation procedure yields improved forecasts of El Nino for the 1980s compared with previous forecasting procedures. As in earlier forecasts with the same model, no oceanic data were used, and only wind information was assimilated. The improvement is attributed to the explicit consideration of air-sea interaction in the initialization. These results suggest that El Nino is more predictable than previously estimated, but that predictability may vary on decadal or longer time scales. This procedure also eliminates the well-known spring barrier to El Nino prediction, which implies that it may not be intrinsic to the real climate system. 24 refs., 5 figs., 1 tab.

  6. Transition of planar Couette flow at infinite Reynolds numbers.

    PubMed

    Itano, Tomoaki; Akinaga, Takeshi; Generalis, Sotos C; Sugihara-Seki, Masako

    2013-11-01

    An outline of the state space of planar Couette flow at high Reynolds numbers (Re < 10^5) is investigated via a variety of efficient numerical techniques. It is verified from nonlinear analysis that the lower branch of the hairpin vortex state (HVS) asymptotically approaches the primary (laminar) state with increasing Re. It is also predicted that the lower branch of the HVS at high Re belongs to the stability boundary that initiates a transition to turbulence, and that one of the unstable manifolds of the lower branch of HVS lies on the boundary. These facts suggest that the HVS may provide a criterion for estimating the minimum perturbation that triggers transition to turbulent states in the infinite-Re limit.

  7. Accountable care organizations: financial advantages of larger hospital organizations.

    PubMed

    Camargo, Rodrigo; Camargo, Thaisa; Deslich, Stacie; Paul, David P; Coustasse, Alberto

    2014-01-01

    Accountable care organizations (ACOs) are groups of providers who agree to accept the responsibility for elevating the health status of a defined group of patients, with the goal of enabling people to take charge of their health and enroll in shared decision making with providers. The large initial investment required (estimated at $1.8 million) to develop an ACO implies that the participation of large health care organizations, especially hospitals and health systems, is required for success. Findings of this study suggest that ACOs based in a larger hospital organization are more likely to meet Centers for Medicare and Medicaid Services criteria for formation because of financial and structural assets of those entities.

  8. Tackling the child malnutrition problem: from what and why to how much and how.

    PubMed

    McLachlan, Milla

    2006-12-01

    There is strong economic evidence to invest in improving the nutritional status of young children, yet programs remain underresourced. Returns on investment in child nutrition in terms of improved health, better education outcomes and increased productivity are substantial, and cost estimates for effective programs are in the range of $2.8 to $5.3 billion. These amounts are modest when compared with total international development assistance or current spending on luxury goods in wealthy nations. New initiatives to redefine nutrition science and to apply innovative problem-solving technologies to the global nutrition problem suggest that steps are being taken to accelerate progress toward a malnutrition-free world.

  9. Lack of consensus in social systems

    NASA Astrophysics Data System (ADS)

    Benczik, I. J.; Benczik, S. Z.; Schmittmann, B.; Zia, R. K. P.

    2008-05-01

    We propose an exactly solvable model for the dynamics of voters in a two-party system. The opinion formation process is modeled on a random network of agents. The dynamical nature of interpersonal relations is also reflected in the model, as the connections in the network evolve with the dynamics of the voters. In the infinite time limit, an exact solution predicts the emergence of consensus, for arbitrary initial conditions. However, before consensus is reached, two different metastable states can persist for exponentially long times. One state reflects a perfect balancing of opinions, the other reflects a completely static situation. An estimate of the associated lifetimes suggests that lack of consensus is typical for large systems.
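    A small Monte Carlo sketch of voter-type dynamics on a co-evolving random network is given below; it is a numerical illustration for probing consensus times, not the exactly solvable model analyzed here, and the network size, connectivity, and rewiring rate are arbitrary.

```python
# Small Monte Carlo sketch of voter-type dynamics on a random network with
# co-evolving connections: at each step a random node copies a random
# neighbour's opinion, and pairs that already agree occasionally rewire.
# Parameters and update rule are arbitrary; this is not the exactly solvable
# model analyzed in the paper.
import random
import networkx as nx

def voter_run(n=100, p=0.08, rewire_prob=0.01, max_steps=500_000, seed=7):
    rng = random.Random(seed)
    g = nx.gnp_random_graph(n, p, seed=seed)
    nodes = list(g.nodes())
    opinion = {node: rng.choice([-1, 1]) for node in nodes}
    magnetization = sum(opinion.values())
    for step in range(max_steps):
        if abs(magnetization) == n:                   # consensus reached
            return step
        u = rng.choice(nodes)
        nbrs = list(g.neighbors(u))
        if not nbrs:
            continue
        v = rng.choice(nbrs)
        if opinion[u] != opinion[v]:
            magnetization += opinion[v] - opinion[u]  # u adopts v's opinion
            opinion[u] = opinion[v]
        elif rng.random() < rewire_prob:
            # evolving interpersonal relations: agreeing pair drops the tie,
            # and u links to a randomly chosen other node instead
            g.remove_edge(u, v)
            w = rng.choice(nodes)
            if w != u:
                g.add_edge(u, w)
    return None                                       # no consensus within the budget

steps = voter_run()
if steps is None:
    print("no consensus within the step budget (long-lived mixed state)")
else:
    print(f"consensus reached after {steps} single-node updates")
```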

  10. Cost and coverage: implications of the McCain plan to restructure health insurance.

    PubMed

    Buchmueller, Thomas; Glied, Sherry A; Royalty, Anne; Swartz, Katherine

    2008-01-01

    Senator John McCain's (R-AZ) health plan would eliminate the current tax exclusion of employer payments for health coverage, replace the exclusion with a refundable tax credit for those who purchase coverage, and encourage Americans to move to a national market for nongroup insurance. Middle-range estimates suggest that initially this change will have little impact on the number of uninsured people, although within five years this number will likely grow as the value of the tax credit falls relative to rising health care costs. Moving toward a relatively unregulated nongroup market will tend to raise costs, reduce the generosity of benefits, and leave people with fewer consumer protections.

  11. Association between vascular access creation and deceleration of estimated glomerular filtration rate decline in late-stage chronic kidney disease patients transitioning to end-stage renal disease.

    PubMed

    Sumida, Keiichi; Molnar, Miklos Z; Potukuchi, Praveen K; Thomas, Fridtjof; Lu, Jun Ling; Ravel, Vanessa A; Soohoo, Melissa; Rhee, Connie M; Streja, Elani; Yamagata, Kunihiro; Kalantar-Zadeh, Kamyar; Kovesdy, Csaba P

    2017-08-01

    Prior studies have suggested that arteriovenous fistula (AVF) or graft (AVG) creation may be associated with slowing of estimated glomerular filtration rate (eGFR) decline. It is unclear if this is attributable to the physiological benefits of a mature access on systemic circulation versus confounding factors. We examined a nationwide cohort of 3026 US veterans with advanced chronic kidney disease (CKD) transitioning to dialysis between 2007 and 2011 who had a pre-dialysis AVF/AVG and had at least three outpatient eGFR measurements both before and after AVF/AVG creation. Slopes of eGFR were estimated using mixed-effects models adjusted for fixed and time-dependent confounders, and compared separately for the pre- and post-AVF/AVG period overall and in patients stratified by AVF/AVG maturation. In all, 3514 patients without AVF/AVG who started dialysis with a catheter served as comparators, using an arbitrary 6-month index date before dialysis initiation to assess change in eGFR slopes. Of the 3026 patients with AVF/AVG (mean age 67 years, 98% male, 75% diabetic), 71% had a mature AVF/AVG at dialysis initiation. eGFR decline accelerated in the last 6 months prior to dialysis in patients with a catheter (median, from -6.0 to -16.3 mL/min/1.73 m2/year, P < 0.001), while a significant deceleration of eGFR decline was seen after vascular access creation in those with AVF/AVG (median, from -5.6 to -4.1 mL/min/1.73 m2/year, P < 0.001). Findings were independent of AVF/AVG maturation status and were robust in adjusted models. The creation of pre-dialysis AVF/AVG appears to be associated with eGFR slope deceleration and, consequently, may delay the onset of dialysis initiation in advanced CKD patients. Published by Oxford University Press on behalf of ERA-EDTA 2016. This work is written by US Government employees and is in the public domain in the US.

  12. Degradation of SO2, NO2 and NH3 leading to formation of secondary inorganic aerosols: An environmental chamber study

    NASA Astrophysics Data System (ADS)

    Behera, Sailesh N.; Sharma, Mukesh

    2011-08-01

    We have examined the interactions of gaseous pollutants and primary aerosols that can produce secondary inorganic aerosols. The specific objective was to estimate degradation rates of the precursor gases (NH3, NO2 and SO2) responsible for formation of secondary inorganic aerosols. A Teflon-based outdoor environmental chamber facility (volume 12.5 m3) was built and checked for wall losses, leaks, solar transparency and ability to simulate photochemical reactions. The chamber was equipped with state-of-the-art instrumentation to monitor concentration-time profiles of precursor gases, ozone, and aerosol. A total of 14 experimental runs were carried out for estimating the degradation of precursor gases. The following initial conditions were maintained in the chamber: NO2 = 246 ± 104 ppb(v), NH3 = 548 ± 83 ppb(v), SO2 = 238 ± 107 ppb(v), O3 = 50 ± 11 ppb(v), PM2.5 aerosol = 283,438 ± 60,524 No./litre. The concentration-time profiles of the gases followed first-order decay and were used for estimating degradation rates (NO2 = 0.26 ± 0.15 h-1, SO2 = 0.31 ± 0.17 h-1, NH3 = 0.35 ± 0.21 h-1). We observed that the degradation rates showed a statistically significant positive correlation (at the 5% level of significance) with the initial PM2.5 levels in the chamber (coefficient of correlation: 0.63 for NO2, 0.62 for NH3 and 0.51 for SO2), suggesting that the existing aerosol surface could play a significant role in degradation of the precursor gases. One or more gaseous species can be adsorbed onto the existing particles and may undergo heterogeneous or homogeneous chemical transformation to produce secondary inorganic aerosols. Through correlation analysis, we also observed that the degradation rates of the precursor gases depended on the initial molar ratio (NH3)/(NO2 + SO2), indicative of ammonia-rich and ammonia-poor situations for the eventual production of ammonium salts.
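
    The abstract states that the concentration-time profiles followed first-order decay and were used to estimate degradation rates. As a hedged illustration, the sketch below fits a first-order rate constant k from ln(C) = ln(C0) - k t by linear regression; the concentration values are synthetic, not the chamber data.

      import numpy as np

      # Synthetic concentration-time profile (ppb vs hours), roughly first-order
      t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])        # hours
      c = np.array([246, 215, 186, 166, 144, 112, 86.0])        # ppb(v)

      # First-order decay: ln(C) = ln(C0) - k*t, so the negative slope of ln(C) vs t gives k
      slope, intercept = np.polyfit(t, np.log(c), 1)
      k = -slope
      print(f"estimated degradation rate k = {k:.2f} per hour")
      print(f"implied C0 = {np.exp(intercept):.0f} ppb(v)")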

  13. Estimating Soil Hydraulic Parameters using Gradient Based Approach

    NASA Astrophysics Data System (ADS)

    Rai, P. K.; Tripathi, S.

    2017-12-01

    The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from the forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations when the forward solution is obtained numerically. Gaussian process based approaches such as Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting up initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. The results show that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
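
    The paper extends Gaussian-process gradient matching (AGM) to the Richards equation; that machinery is beyond a short sketch, but the core idea of estimating parameters by matching data-derived gradients to the model right-hand side, rather than solving the equation forward, can be shown on a toy problem. The example below uses a smoothed polynomial fit in place of a Gaussian process and a linear decay ODE in place of the Richards equation, so it is only a conceptual stand-in with synthetic data.

      import numpy as np

      rng = np.random.default_rng(3)

      # Toy "observations" from dy/dt = -k*y with true k = 0.8
      k_true = 0.8
      t = np.linspace(0, 5, 40)
      y_obs = 2.0 * np.exp(-k_true * t) + rng.normal(0, 0.02, t.size)

      # Step 1: smooth the data (a polynomial fit stands in for a GP mean function)
      coeffs = np.polyfit(t, y_obs, 6)
      y_smooth = np.polyval(coeffs, t)

      # Step 2: differentiate the smoothed curve instead of solving the ODE forward
      dydt = np.gradient(y_smooth, t)

      # Step 3: choose k so the gradients match the model right-hand side f(y) = -k*y
      # (closed-form least squares of dydt = -k * y_smooth)
      k_hat = -np.sum(y_smooth * dydt) / np.sum(y_smooth ** 2)
      print(f"true k = {k_true}, gradient-matching estimate = {k_hat:.3f}")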

  14. Statistical distribution of time to crack initiation and initial crack size using service data

    NASA Technical Reports Server (NTRS)

    Heller, R. A.; Yang, J. N.

    1977-01-01

    Crack growth inspection data gathered during the service life of the C-130 Hercules airplane were used in conjunction with a crack propagation rule to estimate the distribution of crack initiation times and of initial crack sizes. A Bayesian statistical approach was used to calculate the fraction of undetected initiation times as a function of the inspection time and the reliability of the inspection procedure used.

  15. Journal: A Review of Some Tracer-Test Design Equations for Tracer-Mass Estimation and Sample Collection Frequency

    EPA Science Inventory

    The necessary tracer mass, the initial sample-collection time, and the subsequent sample-collection frequency are the three most difficult quantities to estimate for a proposed tracer test prior to conducting it. To facilitate tracer-mass estimation, 33 mass-estima...

  16. Estimation of Community Land Model parameters for an improved assessment of net carbon fluxes at European sites

    NASA Astrophysics Data System (ADS)

    Post, Hanna; Vrugt, Jasper A.; Fox, Andrew; Vereecken, Harry; Hendricks Franssen, Harrie-Jan

    2017-03-01

    The Community Land Model (CLM) contains many parameters whose values are uncertain and thus require careful estimation for model application at individual sites. Here we used Bayesian inference with the DiffeRential Evolution Adaptive Metropolis (DREAM(zs)) algorithm to estimate eight CLM v.4.5 ecosystem parameters using 1-year records of half-hourly net ecosystem CO2 exchange (NEE) observations at four central European sites with different plant functional types (PFTs). The posterior CLM parameter distributions of each site were estimated per individual season and on a yearly basis. These estimates were then evaluated using NEE data from an independent evaluation period and data from "nearby" FLUXNET sites about 600 km from the original sites. Latent variables (multipliers) were used to explicitly treat uncertainty in the initial carbon-nitrogen pools. The posterior parameter estimates were superior to their default values in their ability to track and explain the measured NEE data of each site. The seasonal parameter values reduced the bias in the simulated NEE values by more than 50% (averaged over all sites). The most consistent performance of CLM during the evaluation period was found for the posterior parameter values of the forest PFTs, and contrary to the C3-grass and C3-crop sites, the latent variables of the initial pools further enhanced the quality of fit. The carbon sink function of the forest PFTs significantly increased with the posterior parameter estimates. We thus conclude that land surface model predictions of carbon stocks and fluxes require careful consideration of uncertain ecological parameters and initial states.
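
    The study samples posterior parameter distributions with DREAM(zs). A full DREAM implementation is out of scope here, but a minimal random-walk Metropolis sampler applied to a toy one-parameter flux model gives the flavor of Bayesian parameter estimation against flux observations; everything below (model, data, prior bounds, proposal scale) is synthetic and illustrative, not CLM or DREAM(zs).

      import numpy as np

      rng = np.random.default_rng(7)

      # Toy "NEE" model: flux = -a * light + respiration, with unknown a
      light = rng.uniform(0, 1000, 200)
      a_true, resp, sigma = 0.02, 3.0, 1.0
      nee_obs = -a_true * light + resp + rng.normal(0, sigma, light.size)

      def log_post(a):
          if not (0.0 < a < 1.0):                      # uniform prior bounds (assumed)
              return -np.inf
          resid = nee_obs - (-a * light + resp)
          return -0.5 * np.sum((resid / sigma) ** 2)   # Gaussian log-likelihood

      # Random-walk Metropolis sampler
      a, lp = 0.5, log_post(0.5)
      samples = []
      for _ in range(20_000):
          a_new = a + rng.normal(0, 0.0005)
          lp_new = log_post(a_new)
          if np.log(rng.uniform()) < lp_new - lp:
              a, lp = a_new, lp_new
          samples.append(a)

      post = np.array(samples[5_000:])                 # discard burn-in
      print(f"posterior mean a = {post.mean():.4f} (true {a_true})")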

  17. Variation in abundance of Pacific Blue Mussel (Mytilus trossulus) in the Northern Gulf of Alaska, 2006-2015

    NASA Astrophysics Data System (ADS)

    Bodkin, James L.; Coletti, Heather A.; Ballachey, Brenda E.; Monson, Daniel H.; Esler, Daniel; Dean, Thomas A.

    2018-01-01

    Mussels are conspicuous and ecologically important components of nearshore marine communities around the globe. Pacific blue mussels (Mytilus trossulus) are common residents of intertidal habitats in protected waters of the North Pacific, serving as a conduit of primary production to a wide range of nearshore consumers including predatory invertebrates, sea ducks, shorebirds, sea otters, humans, and other terrestrial mammals. We monitored seven metrics of intertidal Pacific blue mussel abundance at five sites in each of three regions across the northern Gulf of Alaska: Katmai National Park and Preserve (Katmai) (2006-2015), Kenai Fjords National Park (Kenai Fjords) (2008-2015) and western Prince William Sound (WPWS) (2007-2015). Metrics included estimates of: % cover at two tide heights in randomly selected rocky intertidal habitat; and in selected mussel beds estimates of: the density of large mussels (≥ 20 mm); density of all mussels > 2 mm estimated from cores extracted from those mussel beds; bed size; and total abundance of large and all mussels, i.e. the product of density and bed size. We evaluated whether these measures of mussel abundance differed among sites or regions, whether mussel abundance varied over time, and whether temporal patterns in abundance were site specific or synchronous at regional or Gulf-wide spatial scales. We found that, for all metrics, mussel abundance varied on a site-by-site basis. After accounting for site differences, we found similar temporal patterns in several measures of abundance (both % cover metrics, large mussel density, large mussel abundance, and mussel abundance estimated from cores), in which abundance was initially high, declined significantly over several years, and subsequently recovered. Averaged across all sites, we documented declines of 84% in large mussel abundance through 2013 with recovery to 41% of initial abundance by 2015. These findings suggest that factors operating across the northern Gulf of Alaska were affecting mussel survival and subsequently abundance. In contrast, density of primarily small mussels obtained from cores (as an index of recruitment) varied markedly by site, but did not show meaningful temporal trends. We interpret this to indicate that settlement was driven by site-specific features rather than Gulf-wide factors. By extension, we hypothesize that the temporal changes in mussel abundance we observed were not a result of temporal variation in larval supply leading to variation in recruitment, but rather reflect mortality as a primary demographic factor driving mussel abundance. Our results highlight the need to better understand underlying mechanisms of change in mussels, as well as implications of that change to nearshore consumers.

  18. Variation in abundance of Pacific Blue Mussel (Mytilus trossulus) in the Northern Gulf of Alaska, 2006–2015

    USGS Publications Warehouse

    Bodkin, James L.; Coletti, Heather A.; Ballachey, Brenda E.; Monson, Daniel; Esler, Daniel N.; Dean, Thomas A.

    2017-01-01

    Mussels are conspicuous and ecologically important components of nearshore marine communities around the globe. Pacific blue mussels (Mytilus trossulus) are common residents of intertidal habitats in protected waters of the North Pacific, serving as a conduit of primary production to a wide range of nearshore consumers including predatory invertebrates, sea ducks, shorebirds, sea otters, humans, and other terrestrial mammals. We monitored seven metrics of intertidal Pacific blue mussel abundance at five sites in each of three regions across the northern Gulf of Alaska: Katmai National Park and Preserve (Katmai) (2006–2015), Kenai Fjords National Park (Kenai Fjords) (2008–2015) and western Prince William Sound (WPWS) (2007–2015). Metrics included estimates of: % cover at two tide heights in randomly selected rocky intertidal habitat; and in selected mussel beds estimates of: the density of large mussels (≥ 20 mm); density of all mussels > 2 mm estimated from cores extracted from those mussel beds; bed size; and total abundance of large and all mussels, i.e. the product of density and bed size. We evaluated whether these measures of mussel abundance differed among sites or regions, whether mussel abundance varied over time, and whether temporal patterns in abundance were site specific or synchronous at regional or Gulf-wide spatial scales. We found that, for all metrics, mussel abundance varied on a site-by-site basis. After accounting for site differences, we found similar temporal patterns in several measures of abundance (both % cover metrics, large mussel density, large mussel abundance, and mussel abundance estimated from cores), in which abundance was initially high, declined significantly over several years, and subsequently recovered. Averaged across all sites, we documented declines of 84% in large mussel abundance through 2013 with recovery to 41% of initial abundance by 2015. These findings suggest that factors operating across the northern Gulf of Alaska were affecting mussel survival and subsequently abundance. In contrast, density of primarily small mussels obtained from cores (as an index of recruitment) varied markedly by site, but did not show meaningful temporal trends. We interpret this to indicate that settlement was driven by site-specific features rather than Gulf-wide factors. By extension, we hypothesize that the temporal changes in mussel abundance we observed were not a result of temporal variation in larval supply leading to variation in recruitment, but rather reflect mortality as a primary demographic factor driving mussel abundance. Our results highlight the need to better understand underlying mechanisms of change in mussels, as well as implications of that change to nearshore consumers.

  19. Exploration of warm-up period in conceptual hydrological modelling

    NASA Astrophysics Data System (ADS)

    Kim, Kue Bum; Kwon, Hyun-Han; Han, Dawei

    2018-01-01

    One of the important issues in hydrological modelling is to specify the initial conditions of the catchment since it has a major impact on the response of the model. Although this issue should be a high priority among modelers, it has remained unaddressed by the community. The typical suggested warm-up period for the hydrological models has ranged from one to several years, which may lead to an underuse of data. The model warm-up is an adjustment process for the model to reach an 'optimal' state, where internal stores (e.g., soil moisture) move from the estimated initial condition to an 'optimal' state. This study explores the warm-up period of two conceptual hydrological models, HYMOD and IHACRES, in a southwestern England catchment. A series of hydrologic simulations were performed for different initial soil moisture conditions and different rainfall amounts to evaluate the sensitivity of the warm-up period. Evaluation of the results indicates that both initial wetness and rainfall amount affect the time required for model warm up, although it depends on the structure of the hydrological model. Approximately one and a half months are required for the model to warm up in HYMOD for our study catchment and climatic conditions. In addition, it requires less time to warm up under wetter initial conditions (i.e., saturated initial conditions). On the other hand, approximately six months is required for warm-up in IHACRES, and the wet or dry initial conditions have little effect on the warm-up period. Instead, the initial values that are close to the optimal value result in less warm-up time. These findings have implications for hydrologic model development, specifically in determining soil moisture initial conditions and warm-up periods to make full use of the available data, which is very important for catchments with short hydrological records.
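
    The study evaluates how long conceptual models (HYMOD, IHACRES) need to "forget" their initial soil moisture. Those models are not reproduced here; instead, the hedged sketch below runs a single linear-reservoir bucket model from two different initial stores under identical synthetic forcing and reports when the trajectories converge, which is one simple way to operationalize a warm-up period. The model structure, forcing, and tolerance are all illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(42)

      def run_bucket(s0, rain, pet, k=0.05, smax=150.0):
          """Daily linear-reservoir soil-moisture store (toy conceptual model)."""
          s, states = s0, []
          for p, e in zip(rain, pet):
              s = s + p                      # add rainfall (mm)
              et = min(e, s)                 # evapotranspiration limited by storage
              q = k * s                      # linear-reservoir runoff
              s = min(max(s - et - q, 0.0), smax)
              states.append(s)
          return np.array(states)

      days = 3 * 365
      rain = rng.gamma(shape=0.3, scale=10.0, size=days)   # synthetic daily rainfall (mm)
      pet = np.full(days, 2.0)                             # constant daily PET (mm)

      dry = run_bucket(s0=0.0, rain=rain, pet=pet)         # dry initial condition
      wet = run_bucket(s0=150.0, rain=rain, pet=pet)       # saturated initial condition

      converged = np.argmax(np.abs(dry - wet) < 0.1)       # first day the two runs agree
      print(f"warm-up period ~ {converged} days")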

  20. Control of storage-protein synthesis during seed development in pea (Pisum sativum L.).

    PubMed Central

    Gatehouse, J A; Evans, I M; Bown, D; Croy, R R; Boulter, D

    1982-01-01

    The tissue-specific syntheses of seed storage proteins in the cotyledons of developing pea (Pisum sativum L.) seeds have been demonstrated by estimates of their qualitative and quantitative accumulation by sodium dodecyl sulphate/polyacrylamide-gel electrophoresis and rocket immunoelectrophoresis respectively. Vicilin-fraction proteins initially accumulated faster than legumin, but whereas legumin was accumulated throughout development, different components of the vicilin fraction had their predominant periods of synthesis at different stages of development. The translation products in vitro of polysomes isolated from cotyledons at different stages of development reflected the synthesis in vivo of storage-protein polypeptides at corresponding times. The levels of storage-protein mRNA species during development were estimated by 'Northern' hybridization using cloned complementary-DNA probes. This technique showed that the levels of legumin and vicilin (47000-Mr precursors) mRNA species increased and decreased in agreement with estimated rates of synthesis of the respective polypeptides. The relative amounts of these messages, estimated by kinetic hybridization were also consistent. Legumin mRNA was present in leaf poly(A)+ RNA at less than one-thousandth of the level in cotyledon poly(A)+ (polyadenylated) RNA, demonstrating tissue-specific expression. Evidence is presented that storage-protein mRNA species are relatively long-lived, and it is suggested that storage-protein synthesis is regulated primarily at the transcriptional level. PMID:6897609

  1. Quantifying the transmission potential of pandemic influenza

    NASA Astrophysics Data System (ADS)

    Chowell, Gerardo; Nishiura, Hiroshi

    2008-03-01

    This article reviews quantitative methods to estimate the basic reproduction number of pandemic influenza, a key threshold quantity to help determine the intensity of interventions required to control the disease. Although it is difficult to assess the transmission potential of a probable future pandemic, historical epidemiologic data is readily available from previous pandemics, and as a reference quantity for future pandemic planning, mathematical and statistical analyses of historical data are crucial. In particular, because many historical records tend to document only the temporal distribution of cases or deaths (i.e. epidemic curve), our review focuses on methods to maximize the utility of time-evolution data and to clarify the detailed mechanisms of the spread of influenza. First, we highlight structured epidemic models and their parameter estimation method which can quantify the detailed disease dynamics including those we cannot observe directly. Duration-structured epidemic systems are subsequently presented, offering firm understanding of the definition of the basic and effective reproduction numbers. When the initial growth phase of an epidemic is investigated, the distribution of the generation time is key statistical information to appropriately estimate the transmission potential using the intrinsic growth rate. Applications of stochastic processes are also highlighted to estimate the transmission potential using similar data. Critically important characteristics of influenza data are subsequently summarized, followed by our conclusions to suggest potential future methodological improvements.
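
    A standard way to turn an initial exponential growth rate into a reproduction number, given a generation-time distribution, is the Euler-Lotka relation R = 1/M(-r), where M is the moment-generating function of the generation interval. The snippet below applies the closed form for a gamma-distributed generation time; the growth rate and generation-time values are illustrative, not figures from the article.

      def reproduction_number_gamma(r, mean_gt, sd_gt):
          """Euler-Lotka estimate of R for a gamma-distributed generation time.

          For a Gamma(shape k, scale theta) generation interval,
          1/M(-r) = (1 + r*theta)**k, with mean = k*theta and sd = sqrt(k)*theta.
          """
          shape = (mean_gt / sd_gt) ** 2
          scale = sd_gt ** 2 / mean_gt
          return (1.0 + r * scale) ** shape

      # Illustrative values: growth rate 0.15/day, generation time 3.0 +/- 1.5 days
      print(round(reproduction_number_gamma(r=0.15, mean_gt=3.0, sd_gt=1.5), 2))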

  2. Evanescent-wave particle velocimetry measurements of zeta-potentials in fused-silica microchannels.

    PubMed

    Cevheri, Necmettin; Yoda, Minami

    2013-07-01

    The wall ζ-potential ζ(w), the potential at the shear plane of the electric double layer, depends on the properties of the BGE solution such as the valence and type of electrolyte, the pH and the ionic strength. Most of the methods estimate ζ(w) from measurements of the EOF velocity magnitude ueo , usually spatially averaged over the entire capillary. In these initial studies, evanescent-wave particle velocimetry was used to measure ueo in steady EOF for a variety of monovalent aqueous solutions to evaluate the effect of small amounts of divalent cations, as well as the pH and ionic strength of BGE solutions. In brief, the magnitude of the EOF velocity of NaCl-NaOH and borate buffer-NaOH solutions was estimated from the measured velocities of radius α = 104 nm fluorescent polystyrene particles in 33 μm fused-silica microchannels. The particle ζ-potentials were measured separately using laser-Doppler micro-electrophoresis; ζ(w) was then determined from ueo. The results suggest that evanescent-wave particle velocimetry can be used to estimate ζ(w) for a variety of BGE solutions, and that it can be used in the future to estimate local wall ζ-potential, and hence spatial variations in ζ(w). © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
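
    The measurement principle is that the tracer particles move with the sum of the electroosmotic velocity and their own electrophoretic velocity, so the wall ζ-potential can be backed out once the particle ζ-potential is known independently. The sketch below applies the thin-double-layer (Helmholtz-Smoluchowski) relations using magnitudes only; the field strength, velocities, particle ζ-potential, and the use of the Smoluchowski limit for 100-nm particles are all illustrative assumptions, and sign conventions vary between texts.

      # Helmholtz-Smoluchowski sketch (magnitudes only; sign conventions vary).
      EPS0 = 8.854e-12      # vacuum permittivity, F/m
      eps = 78.5 * EPS0     # permittivity of water, F/m
      mu = 1.0e-3           # dynamic viscosity of water, Pa*s
      E = 5.0e3             # applied electric field, V/m (assumed)

      u_obs = 3.1e-4        # observed near-wall tracer speed, m/s (assumed)
      zeta_p = 0.030        # particle zeta-potential magnitude, V (assumed, from electrophoresis)

      u_ep = eps * zeta_p * E / mu      # particle electrophoretic speed (Smoluchowski limit)
      u_eo = u_obs - u_ep               # electroosmotic flow speed carried by the tracer
      zeta_w = mu * u_eo / (eps * E)    # wall zeta-potential magnitude

      print(f"u_ep = {u_ep:.2e} m/s, u_eo = {u_eo:.2e} m/s, zeta_w = {zeta_w * 1e3:.0f} mV")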

  3. Materials experiment carrier concepts definition study. Volume 3: Programmatics, part 2

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Project logic, schedule and funding information was derived to enable decisions to be made regarding implementation of MEC system development. A master schedule and cost and price estimates (ROM) were developed for a project that consists of development of an all-up MEC, its integration with payloads and its flight on one 90 day mission. In Part 2 of the study a simple initial MEC was defined to accommodate three MPS baseline payloads. The design of this initial MEC is illustrated. The project logic, detailed schedules, and ROM cost estimate relate to a project in which this initial MEC is developed, integrated with payloads and flown once for 180 days.

  4. Initial dynamic load estimates during configuration design

    NASA Technical Reports Server (NTRS)

    Schiff, Daniel

    1987-01-01

    This analysis includes the structural response to shock and vibration and evaluates the maximum deflections and material stresses and the potential for the occurrence of elastic instability, fatigue and fracture. The required computations are often performed by means of finite element analysis (FEA) computer programs in which the structure is simulated by a finite element model which may contain thousands of elements. The formulation of a finite element model can be time consuming, and substantial additional modeling effort may be necessary if the structure requires significant changes after initial analysis. Rapid methods for obtaining rough estimates of the structural response to shock and vibration are presented for the purpose of providing guidance during the initial mechanical design configuration stage.

  5. Estimate of shock-Hugoniot adiabat of liquids from hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouton, E.; Vidal, P.

    2007-12-12

    Shock states are generally obtained from shock velocity (D) and material velocity (u) measurements. In this paper, we propose a hydrodynamical method for estimating the (D-u) relation of Nitromethane from easily measured properties of the initial state. The method is based upon the differentiation of the Rankine-Hugoniot jump relations with the initial temperature considered as a variable and under the constraint of a unique nondimensional shock-Hugoniot. We then obtain an ordinary differential equation for the shock velocity D in the variable u. Upon integration, this method predicts the shock Hugoniot of liquid Nitromethane with a 5% accuracy for initial temperatures ranging from 250 K to 360 K.

  6. Estimating the Benefits of the Air Force Purchasing and Supply Chain Management Initiative

    DTIC Science & Technology

    2008-01-01

    sector, known as strategic sourcing. The Customer Relationship Management (CRM) initiative provides a single customer point of contact for all... Customer Relationship Management initiative. commodity council: a term used to describe a cross-functional sourcing group charged with formulating a... initiative has four major components, all based on commercial best practices (Gabreski, 2004): commodity councils, customer relationship management

  7. 78 FR 13677 - Disease, Disability, and Injury Prevention and Control Special Emphasis Panels (SEP): Initial Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-28

    ... announced below concerns Monitoring Cause-Specific School Absenteeism for Estimating Community Wide... received in response to ``Monitoring Cause-Specific School Absenteeism for Estimating Community Wide...

  8. Quick estimate of oil discovery from gas-condensate reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarem, A.M.

    1966-10-24

    A quick method of estimating the depletion performance of gas-condensate reservoirs is presented through graphical representations. The method is based on correlations reported in the literature and expresses recoverable liquid as a function of gas reserves, producing gas-oil ratio, and initial and final reservoir pressures. The amount of recoverable liquid reserves (RLR) under depletion conditions is estimated from an equation which is given, where the liquid reserves are in stock-tank barrels and the gas reserves are in Mcf, with the constant N calculated from one graphical representation by dividing the fractional oil recovery by the initial gas-oil ratio and multiplying by 10^6 for convenience. An equation is given for estimating the coefficient C. These factors (N and C) can be determined from the graphical representations. An example calculation is included.

  9. Automated startup of the MIT research reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwok, K.S.

    1992-01-01

    This summary describes the development, implementation, and testing of a generic method for performing automated startups of nuclear reactors described by space-independent kinetics under conditions of closed-loop digital control. The technique entails first obtaining a reliable estimate of the reactor's initial degree of subcriticality and then substituting that estimate into a model-based control law so as to permit a power increase from subcritical on a demanded trajectory. The estimation of subcriticality is accomplished by application of the perturbed reactivity method. The shutdown reactor is perturbed by the insertion of reactivity at a known rate. Observation of the resulting period permits determination of the initial degree of subcriticality. A major advantage to this method is that repeated estimates are obtained of the same quantity. Hence, statistical methods can be applied to improve the quality of the calculation.

  10. Exploring how nature and nurture affect the development of reading: An analysis of the Florida Twin Project on Reading

    PubMed Central

    Hart, Sara A.; Logan, Jessica A.R.; Soden-Hensler, Brooke; Kershaw, Sarah; Taylor, Jeanette; Schatschneider, Christopher

    2013-01-01

    Research on the development of reading skills through the primary school years has pointed to the importance of individual differences in initial ability as well as the growth of those skills. Additionally, it has been theorized that reading skills develop incrementally. The present study examined the genetic and environmental influences on two developmental models representing these parallel ideas, generalizing the findings to explore the processes of reading development. Participants were drawn from the Florida Twin Project on Reading, with a total of 2,370 twin pairs representative of the state of Florida. Twins' oral reading fluency scores from school progress monitoring records collected in the fall of grades 1 through 5 were used to model development. Results suggested that genetic influences on the development of reading are general, shared across the early school years, as well as novel, with new genetic influences introduced at each of the first three years of school. The shared environment estimates indicate a pattern of general influences only, suggesting environmental effects that are moderate and stable across development. PMID:23294149

  11. Initial retrieval sequence and blending strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pemwell, D.L.; Grenard, C.E.

    1996-09-01

    This report documents the initial retrieval sequence and the methodology used to select it. Waste retrieval, storage, pretreatment and vitrification were modeled for candidate single-shell tank retrieval sequences. Performance of the sequences was measured by a set of metrics (for example, high-level waste glass volume, relative risk and schedule). Computer models were used to evaluate estimated glass volumes, process rates, retrieval dates, and blending strategy effects. The models were based on estimates of component inventories and concentrations, sludge wash factors and timing, retrieval annex limitations, etc.

  12. Resource utilization for non-operative cervical radiculopathy: Management by surgeons versus non-surgeons.

    PubMed

    Chung, Sophie H; Bohl, Daniel D; Paul, Jonathan T; Rihn, Jeffrey A; Harrop, James S; Ghogawala, Zoher; Hilibrand, Alan S; Grauer, Jonathan N

    2017-07-01

    To compare the estimated resource utilization for non-operative treatment of cervical radiculopathy if managed by surgeons versus non-surgeons. A Cervical Spine Research Society-sponsored survey was administered at a national spine surgery conference to surgeons and non-surgeons, as classified above. The survey asked questions regarding resource utilization and perceived costs for the "average patient" with cervical radiculopathy managed non-operatively. Resource utilization and perceived costs were compared between surgeon and non-surgeon participants, and between private practice and academic and/or hybrid groups that combine academic and private practices. In total, 101 of the 125 conference attendees participated in the survey (return rate 80.8%; 60% of respondents were surgeons). Surgeon and non-surgeon estimates for duration of non-operative care did not differ (3.3 versus 4.2 months, p=0.071). Estimates also did not differ for the number of physical therapy visits (10.5 versus 10.5, p=0.983), cervical injections (1.4 versus 1.7, p=0.272), chiropractic visits (3.1 versus 3.7, p=0.583), or perceived days off from work (14.9 versus 16.3, p=0.816). The only difference identified was that surgeon estimates of the number of physician visits while providing non-operative care were lower than non-surgeon estimates (3.2 versus 4.0, p=0.018). In terms of estimated costs, surgeon and non-surgeon estimates were mostly similar; the only difference was that surgeon estimates for the total cost of physician visits per patient were lower than non-surgeon estimates ($382 versus $579, p=0.007). Surgeon estimates of the percent of their patients who go on to receive surgery within 6 months were higher than non-surgeon estimates (28.6% versus 18.8%, p=0.018). Similarly, surgeon estimates of the percent of their patients who go on to receive surgery within 2 years were higher than non-surgeon estimates (37.8% versus 24.8%, p=0.013). Academic/hybrid and private practice group resource utilization estimates and costs were also compared, and no significant differences were found in any comparison. Additionally, no significant differences were found between these groups for duration of non-operative care or for the estimates of the percent of patients who go on to receive surgery within 6 months or 2 years. These data suggest that patients with cervical radiculopathy managed by surgeons and those managed by non-surgeons have overall similar resource utilization during a non-operative trial. This suggests that relatively similar care is provided regardless of who initiates the non-operative trial (surgeon or non-surgeon). Although surgeons thought their patients more likely to undergo surgery following a non-operative trial, this may be a bias due to patient referral: specifically, surgeons may be more likely than non-surgeons to manage patients with more severe or longer-standing radiculopathy. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.

    PubMed

    Cobbs, Gary

    2012-08-16

    Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a selected part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single and double stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of the kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fit to observed qPCR data than other kinetic models present in the literature. They also give better estimates of initial target concentration. Model 1 was found to be slightly more robust than model 2, giving better estimates of initial target concentration when parameters were estimated for qPCR curves with very different initial target concentrations. Both models may be used to estimate the initial absolute concentration of target sequence when a standard curve is not available. It is argued that the kinetic approach to modeling and interpreting quantitative PCR data has the potential to give more precise estimates of the true initial target concentrations than other methods currently used for analysis of qPCR data. The two models presented here give a unified model of the qPCR process in that they explain the shape of the qPCR curve for a wide variety of initial target concentrations.
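
    For contrast with the kinetic equilibrium models described here, the simplest and most widely used alternative is the constant-efficiency exponential model, F_c = F0 * E^c, fitted to the exponential phase of the curve; the abstract argues the kinetic models outperform it. The sketch below fits that baseline model by log-linear regression on synthetic fluorescence data; it is not the authors' stepwise equilibrium model, and the cycle range and noise level are assumptions.

      import numpy as np

      rng = np.random.default_rng(11)

      # Synthetic exponential-phase fluorescence: F_c = F0 * E**c (with multiplicative noise)
      E_true, F0_true = 1.9, 1e-6
      cycles = np.arange(18, 27)                       # cycles within the exponential phase
      F = F0_true * E_true ** cycles * rng.lognormal(0, 0.03, cycles.size)

      # log F = log F0 + c * log E  ->  linear regression in cycle number
      slope, intercept = np.polyfit(cycles, np.log(F), 1)
      E_hat = np.exp(slope)
      F0_hat = np.exp(intercept)                       # proportional to the initial target amount

      print(f"estimated efficiency E = {E_hat:.2f}")
      print(f"estimated initial fluorescence F0 = {F0_hat:.2e} (true {F0_true:.0e})")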

  14. Pros and cons of estimating the reproduction number from early epidemic growth rate of influenza A (H1N1) 2009.

    PubMed

    Nishiura, Hiroshi; Chowell, Gerardo; Safan, Muntaser; Castillo-Chavez, Carlos

    2010-01-07

    In many parts of the world, the exponential growth rate of infections during the initial epidemic phase has been used to make statistical inferences on the reproduction number, R, a summary measure of the transmission potential for the novel influenza A (H1N1) 2009. The growth rate at the initial stage of the epidemic in Japan led to estimates for R in the range 2.0 to 2.6, capturing the intensity of the initial outbreak among school-age children in May 2009. An updated estimate of R that takes into account the epidemic data from 29 May to 14 July is provided. An age-structured renewal process is employed to capture the age-dependent transmission dynamics, jointly estimating the reproduction number, the age-dependent susceptibility and the relative contribution of imported cases to secondary transmission. Pitfalls in estimating epidemic growth rates are identified and used for scrutinizing and re-assessing the results of our earlier estimate of R. Maximum likelihood estimates of R using the data from 29 May to 14 July ranged from 1.21 to 1.35. The next-generation matrix, based on our age-structured model, predicts that only 17.5% of the population will experience infection by the end of the first pandemic wave. Our earlier estimate of R did not fully capture the population-wide epidemic in quantifying the next-generation matrix from the estimated growth rate during the initial stage of the pandemic in Japan. In order to quantify R from the growth rate of cases, it is essential that the selected model captures the underlying transmission dynamics embedded in the data. Exploring additional epidemiological information will be useful for assessing the temporal dynamics. Although the simple concept of R is more easily grasped by the general public than that of the next-generation matrix, the matrix incorporating detailed information (e.g., age-specificity) is essential for reducing the levels of uncertainty in predictions and for assisting public health policymaking. Model-based prediction and policymaking are best described by sharing fundamental notions of heterogeneous risks of infection and death with non-experts to avoid potential confusion and/or possible misuse of modelling results.

  15. Age of smoking initiation among adolescents in Africa.

    PubMed

    Veeranki, Sreenivas P; John, Rijo M; Ibrahim, Abdallah; Pillendla, Divya; Thrasher, James F; Owusu, Daniel; Ouma, Ahmed E O; Mamudu, Hadii M

    2017-01-01

    To estimate prevalence and identify correlates of the age of smoking initiation among adolescents in Africa. Data (n = 16,519) were obtained from nationally representative Global Youth Tobacco Surveys in nine West African countries. The study outcome was adolescents' age of smoking initiation, categorized into six groups: ≤7, 8 or 9, 10 or 11, 12 or 13, 14 or 15, and never-smoker. Explanatory variables included sex, parental or peer smoking behavior, exposure to tobacco industry promotions, and knowledge about the harms of smoking. Weighted multinomial logit models were used to determine the correlates associated with adolescents' age of smoking initiation. The age of smoking initiation was as early as ≤7 years; prevalence estimates ranged from 0.7% in Ghana for initiation at age 10 or 11 to 9.6% in Cote d'Ivoire for initiation at age 12 or 13. Being male, exposure to parental or peer smoking, and exposure to industry promotions were identified as significant correlates. West African policymakers should adopt a preventive approach consistent with the World Health Organization Framework Convention on Tobacco Control to prevent adolescents from initiating smoking and becoming regular smokers.

  16. Estimation of delays and other parameters in nonlinear functional differential equations

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Lamm, P. K. D.

    1983-01-01

    A spline-based approximation scheme for nonlinear nonautonomous delay differential equations is discussed. Convergence results (using dissipative type estimates on the underlying nonlinear operators) are given in the context of parameter estimation problems which include estimation of multiple delays and initial data as well as the usual coefficient-type parameters. A brief summary of some of the related numerical findings is also given.

  17. Impact of advanced pharmacy practice experience placement changes in colleges and schools of pharmacy.

    PubMed

    Duke, Lori J; Staton, April G; McCullough, Elizabeth S; Jain, Rahul; Miller, Mindi S; Lynn Stevenson, T; Fetterman, James W; Lynn Parham, R; Sheffield, Melody C; Unterwagner, Whitney L; McDuffie, Charles H

    2012-04-10

    To document the annual number of advanced pharmacy practice experience (APPE) placement changes for students across 5 colleges and schools of pharmacy, identify and compare initiating reasons, and estimate the associated administrative workload. Data collection occurred from finalization of the 2008-2009 APPE assignments through the last date of the APPE schedule. Internet-based customized tracking forms were used to categorize the initiating reason for each placement change and the administrative time required per change (0 to 120 minutes). APPE placement changes per institution varied from 14% to 53% of total assignments. Reasons for changes were: administrator initiated (20%), student initiated (23%), and site/preceptor initiated (57%). Total administrative time required varied across institutions from 3,130 to 22,750 minutes, while the average time per reassignment was 42.5 minutes. APPE placements are subject to high instability. Significant differences exist between public and private colleges and schools of pharmacy as to the number and type of APPE reassignments made and the associated workload estimates.

  18. Estimating the gravitational-wave content of initial-data sets for numerical relativity using the Beetle--Burko scalar

    NASA Astrophysics Data System (ADS)

    Burko, Lior M.

    2006-04-01

    The Beetle--Burko radiation scalar is a gauge independent, tetrad independent, and background independent quantity that depends only on the radiative degrees of freedom where the notion of radiation is incontrovertible, and can be computed from spatial data as is typical in numerical relativity simulations even for strongly dynamical spacetimes. We show that the Beetle--Burko radiation scalar can be used for estimating the gravitational-wave content of initial-data sets in numerical relativity, and can thus be useful for the construction of physically meaningful ones and the identification of ``junk'' data on the initial value surface. We apply this method to the case of a momentarily stationary black hole binary, and demonstrate how the Beetle--Burko scalar distinguishes between Misner and Brill--Lindquist initial data. The method, however, is robust, and is applicable to generic initial data sets. In addition to initial data sets, the Beetle--Burko radiation scalar is equally applicable to evolution data.

  19. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices.

    PubMed

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information.

  20. Characterization of Initial Parameter Information for Lifetime Prediction of Electronic Devices

    PubMed Central

    Li, Zhigang; Liu, Boying; Yuan, Mengxiong; Zhang, Feifei; Guo, Jiaqiang

    2016-01-01

    Newly manufactured electronic devices are subject to different levels of potential defects existing among the initial parameter information of the devices. In this study, a characterization of electromagnetic relays that were operated at their optimal performance with appropriate and steady parameter values was performed to estimate the levels of their potential defects and to develop a lifetime prediction model. First, the initial parameter information value and stability were quantified to measure the performance of the electronics. In particular, the values of the initial parameter information were estimated using the probability-weighted average method, whereas the stability of the parameter information was determined by using the difference between the extrema and end points of the fitting curves for the initial parameter information. Second, a lifetime prediction model for small-sized samples was proposed on the basis of both measures. Finally, a model for the relationship of the initial contact resistance and stability over the lifetime of the sampled electromagnetic relays was proposed and verified. A comparison of the actual and predicted lifetimes of the relays revealed a 15.4% relative error, indicating that the lifetime of electronic devices can be predicted based on their initial parameter information. PMID:27907188

  1. Quantifying the risks and benefits of efavirenz use in HIV-infected women of childbearing age in the United States

    PubMed Central

    Hsu, HE; Rydzak, CE; Cotich, KL; Wang, B; Sax, PE; Losina, E; Freedberg, KA; Goldie, SJ; Lu, Z; Walensky, RP

    2010-01-01

    Objectives We quantified the benefits (life expectancy gains) and harms (efavirenz-related teratogenicity) associated with using efavirenz in HIV-infected women of childbearing age in the United States. Methods We used data from the Women’s Interagency HIV Study in an HIV disease simulation model to estimate life expectancy in women who receive an efavirenz-based initial antiretroviral regimen compared with those who delay efavirenz use and receive a boosted protease inhibitor-based initial regimen. To estimate excess risk of teratogenic events with and without efavirenz exposure per 100,000 women, we incorporated literature-based rates of pregnancy, live births, and teratogenic events into a decision analytic model. We assumed a teratogenicity risk of 2.90 events/100 live births in women exposed to efavirenz during pregnancy and 2.68/100 live births in unexposed women. Results Survival for HIV-infected women who received an efavirenz-based initial antiretroviral therapy regimen was 0.89 years greater than for women receiving non-efavirenz-based initial therapy (28.91 vs. 28.02 years). The rate of teratogenic events was 77.26/100,000 exposed women, compared with 72.46/100,000 unexposed women. Survival estimates were sensitive to variations in treatment efficacy and AIDS-related mortality. Estimates of excess teratogenic events were most sensitive to pregnancy rates and number of teratogenic events/100 live births in efavirenz-exposed women. Conclusions Use of non-efavirenz-based initial antiretroviral therapy in HIV-infected women of childbearing age may reduce life expectancy gains from antiretroviral treatment, but may also prevent teratogenic events. Decision-making regarding efavirenz use presents a tradeoff between these two risks; this study can inform discussions between patients and health care providers. PMID:20561082

  2. Quantifying the risks and benefits of efavirenz use in HIV-infected women of childbearing age in the USA.

    PubMed

    Hsu, H E; Rydzak, C E; Cotich, K L; Wang, B; Sax, P E; Losina, E; Freedberg, K A; Goldie, S J; Lu, Z; Walensky, R P

    2011-02-01

    The aim of the study was to quantify the benefits (life expectancy gains) and risks (efavirenz-related teratogenicity) associated with using efavirenz in HIV-infected women of childbearing age in the USA. We used data from the Women's Interagency HIV Study in an HIV disease simulation model to estimate life expectancy in women who receive an efavirenz-based initial antiretroviral regimen compared with those who delay efavirenz use and receive a boosted protease inhibitor-based initial regimen. To estimate excess risk of teratogenic events with and without efavirenz exposure per 100,000 women, we incorporated literature-based rates of pregnancy, live births, and teratogenic events into a decision analytic model. We assumed a teratogenicity risk of 2.90 events/100 live births in women exposed to efavirenz during pregnancy and 2.68/100 live births in unexposed women. Survival for HIV-infected women who received an efavirenz-based initial antiretroviral therapy (ART) regimen was 0.89 years greater than for women receiving non-efavirenz-based initial therapy (28.91 vs. 28.02 years). The rate of teratogenic events was 77.26/100,000 exposed women, compared with 72.46/100,000 unexposed women. Survival estimates were sensitive to variations in treatment efficacy and AIDS-related mortality. Estimates of excess teratogenic events were most sensitive to pregnancy rates and number of teratogenic events/100 live births in efavirenz-exposed women. Use of non-efavirenz-based initial ART in HIV-infected women of childbearing age may reduce life expectancy gains from antiretroviral treatment, but may also prevent teratogenic events. Decision-making regarding efavirenz use presents a trade-off between these two risks; this study can inform discussions between patients and health care providers.

  3. Mechanisms of mercury removal by O 3 and OH in the atmosphere

    NASA Astrophysics Data System (ADS)

    Calvert, Jack G.; Lindberg, Steve E.

    The mechanisms of the reactions of gaseous Hg atoms with O3 and the OH radical are evaluated from current kinetic and enthalpy data. The reaction O3 + Hg → HgO + O2 is considered to be an unlikely pathway under atmospheric conditions. Considerations given here suggest that the reaction may occur with initial formation of a metastable HgO3 molecule that in laboratory experiments is the source of the HgO product observed to accumulate on the walls of the reactor (HgO3 → HgO(s) + O2). Laboratory studies of the gas phase reaction Hg + OH → HgOH (2) have been reported using relative rate measurements initiated by photodissociation of an organic nitrite in mixtures of Hg vapor with NO, air and various reference hydrocarbons. Computer simulations of this reaction system suggest that the use of reactive reference gases (e.g., cyclohexane) leads to the generation of significant ozone in these NOx-RH-air mixtures, and the resulting O3-Hg reaction can result in an over-estimate of the rate of reaction (2). Also, the apparent rate coefficients for reaction (2) are highly dependent on the assumed rate coefficients of its competing reactions: dissociation, HgOH → Hg + OH (3), and association of the HgOH molecule with other free radicals present in the system, HgOH + X → XHgOH (4), where X = OH, HO2, RO, RO2, NO, NO2. Reaction (4) competes successfully with HgOH decomposition under the laboratory conditions employed, and the kinetic measurements then relate to the rate-determining reaction Hg + OH → HgOH. However, the use of these laboratory measurements of k2 to determine the extent of Hg removal by OH in the troposphere will greatly over-estimate the importance of Hg removal by this reaction.

  4. Erosion of volcanic ocean islands: insights from modeling, topographic analyses, and cosmogenic exposure dating

    NASA Astrophysics Data System (ADS)

    Huppert, K.; Perron, J. T.; Ferrier, K.; Mukhopadhyay, S.; Rosener, M.; Douglas, M.

    2016-12-01

    With homogeneous bedrock, dramatic rainfall gradients, paleoshorelines, and datable remnant topography, volcanic ocean islands provide an exceptional natural experiment in landscape evolution. Analyses traversing gradients in island climate and bedrock age have the potential to advance our understanding of landscape evolution in a diverse range of continental settings. However, as small, conical, dominantly subsiding, and initially highly permeable landmasses, islands are unique, and it remains unclear how these properties influence their erosional history. We use a landscape evolution model and observations from the Hawaiian island of Kaua'i and other islands to characterize the topographic evolution of volcanic ocean islands. We present new measurements of helium-3 concentrations in detrital olivine from 20 rivers on Kaua'i. These measurements indicate that minimum erosion rates over the past 3 to 48 kyr are on average 2.6 times faster than erosion rates averaged over the past 3.9 to 4.4 Myr estimated from the volume of river canyons. This apparent acceleration of erosion rates on Kaua'i is consistent with observations on other islands; erosion rates estimated from the volume of river canyons on 31 islands worldwide, combined with observations of minimal incision on young island volcanoes, suggest a progressive increase in erosion rates over the first few million years of island landscape development. Using a landscape evolution model, we perform a set of experiments to quantify the contribution of subsidence, climate change, and initial geometry to changes in island erosion rates through time. We base these experiments on the evolution of Kaua'i, and we use measured erosion rates and the observed topography to calibrate the model. We find that progressive steepening of island topography by canyon incision drives an acceleration of erosion rates over time. Increases in mean channel and hillslope gradient with island age in the global compilation suggest this may be a general trend in the topographic evolution of volcanic ocean islands.

  5. Unveiling the diversification dynamics of Australasian predaceous diving beetles in the Cenozoic.

    PubMed

    Toussaint, Emmanuel F A; Condamine, Fabien L; Hawlitschek, Oliver; Watts, Chris H; Porch, Nick; Hendrich, Lars; Balke, Michael

    2015-01-01

    During the Cenozoic, Australia experienced major climatic shifts that have had dramatic ecological consequences for the modern biota. Mesic tropical ecosystems were progressively restricted to the coasts and replaced by arid-adapted floral and faunal communities. Whilst the role of aridification has been investigated in a wide range of terrestrial lineages, the response of freshwater clades remains poorly investigated. To gain insights into the diversification processes underlying a freshwater radiation, we studied the evolutionary history of the Australasian predaceous diving beetles of the tribe Hydroporini (147 described species). We used an integrative approach including the latest methods in phylogenetics, divergence time estimation, ancestral character state reconstruction, and likelihood-based methods of diversification rate estimation. Phylogenies and dating analyses were reconstructed with molecular data from seven genes (mitochondrial and nuclear) for 117 species (plus 12 outgroups). Robust and well-resolved phylogenies indicate a late Oligocene origin of Australasian Hydroporini. Biogeographic analyses suggest an origin in the East Coast region of Australia, and a dynamic biogeographic scenario implying dispersal events. The group successfully colonized the tropical coastal regions carved by rampant desertification, and also colonized groundwater ecosystems in Central Australia. Diversification rate analyses suggest that the ongoing aridification of Australia initiated in the Miocene contributed to a major wave of extinctions since the late Pliocene, probably attributable to increasing aridity, range contractions and seasonal disruptions resulting from Quaternary climatic changes. When comparing subterranean and epigean genera, our results show that contrasting mechanisms drove their diversification and therefore current diversity patterns. The Australasian Hydroporini radiation reflects a combination of processes that promoted both diversification, resulting from new ecological opportunities driven by initial aridification, and a subsequent loss of mesic-adapted diversity due to increasing aridity. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Study of solid rocket motors for a space shuttle booster. Volume 2, book 3: Cost estimating data

    NASA Technical Reports Server (NTRS)

    Vanderesch, A. H.

    1972-01-01

    Cost estimating data for the 156 inch diameter, parallel burn solid rocket propellant engine selected for the space shuttle booster are presented. The costing aspects on the baseline motor are initially considered. From the baseline, sufficient data is obtained to provide cost estimates of alternate approaches.

  7. IUS/TUG orbital operations and mission support study. Volume 5: Cost estimates

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The costing approach, methodology, and rationale utilized for generating cost data for composite IUS and space tug orbital operations are discussed. Summary cost estimates are given along with cost data initially derived for the IUS program and space tug program individually, and cost estimates for each work breakdown structure element.

  8. Alternatives to the Moving Average

    Treesearch

    Paul C. van Deusen

    2001-01-01

    There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...
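
    The default estimator referred to here is a 5-year moving average of annual inventory estimates. A hedged one-liner shows the computation on synthetic annual values; the window length matches the abstract, but the data are illustrative.

      import numpy as np

      annual = np.array([102.0, 98.0, 110.0, 105.0, 99.0, 120.0, 115.0])  # synthetic annual estimates

      # Trailing 5-year moving average (each value averages five consecutive annual estimates)
      window = 5
      moving_avg = np.convolve(annual, np.ones(window) / window, mode="valid")
      print(moving_avg)   # one estimate per year once 5 years of data are available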

  9. Acellular pertussis vaccines effectiveness over time: A systematic review, meta-analysis and modeling study

    PubMed Central

    Chit, Ayman; Zivaripiran, Hossein; Shin, Thomas; Lee, Jason K. H.; Tomovici, Antigona; Macina, Denis; Johnson, David R.; Decker, Michael D.; Wu, Jianhong

    2018-01-01

    Background: Acellular pertussis vaccine studies postulate that waning protection, particularly after the adolescent booster, is a major contributor to the increasing US pertussis incidence. However, these studies reported relative (i.e., vs a population given prior doses of pertussis vaccine), not absolute (i.e., vs a pertussis vaccine-naïve population) efficacy following the adolescent booster. We aim to estimate the absolute protection offered by acellular pertussis vaccines. Methods: We conducted a systematic review of acellular pertussis vaccine effectiveness (VE) publications. Studies had to comply with the US schedule, evaluate clinical outcomes, and report VE over discrete time points. VE after the 5-dose childhood series and after the adolescent sixth-dose booster were extracted separately and pooled. All relative VE estimates were transformed to absolute estimates. VE waning was estimated using meta-regression modeling. Findings: Three studies reported VE after the childhood series and four after the adolescent booster. All booster studies reported relative VE (vs an acellular pertussis vaccine-primed population). We estimate that initial childhood-series absolute VE is 91% (95% CI: 87% to 95%) and declines at 9.6% annually. Initial relative VE after adolescent boosting is 70% (95% CI: 54% to 86%) and declines at 45.3% annually. Initial absolute VE after adolescent boosting is 85% (95% CI: 84% to 86%) and declines at 11.7% (95% CI: 11.1% to 12.3%) annually. Interpretation: Acellular pertussis vaccine efficacy is initially high and wanes over time. Observational VE studies of boosting failed to recognize that they were measuring relative, not absolute, VE; the absolute VE in the boosted population is better than appreciated. PMID:29912887
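
    The relative-to-absolute transformation described above rests on the usual multiplicative risk identity, (1 - absolute VE) = (1 - relative VE) x (1 - VE of the comparison group). A minimal Python sketch follows, assuming an illustrative residual VE of 0.50 in the acellular-primed comparison group (an assumed figure, not one reported in the study).

      def absolute_ve(relative_ve, comparison_ve):
          # Risks multiply: (1 - aVE) = (1 - rVE) * (1 - cVE),
          # where cVE is the absolute VE of the vaccinated comparison group.
          return 1.0 - (1.0 - relative_ve) * (1.0 - comparison_ve)

      # With a relative booster VE of 0.70 and an assumed comparison-group VE of 0.50,
      # the implied absolute VE is 1 - 0.30 * 0.50 = 0.85.
      print(absolute_ve(0.70, 0.50))  # 0.85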

  10. Estimation of source location and ground impedance using a hybrid multiple signal classification and Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung

    2016-07-01

    A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating the ground impedance is examined in detail in the present study. A non-linear least-squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification method (MUSIC) is used to give the initial estimate of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processor for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy of the source height estimate. Further application of the Levenberg-Marquardt method, with the MUSIC results as initial inputs, significantly improves the accuracy of the source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
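
    The refinement step can be sketched generically in Python: take a coarse position (standing in for the MUSIC output) and polish it with Levenberg-Marquardt on a nonlinear least-squares residual. The toy residual below matches modelled and synthetic source-to-microphone ranges only; the paper's actual cost function also involves the ground-impedance model, and the array geometry here is invented for illustration.

      import numpy as np
      from scipy.optimize import least_squares

      # Hypothetical 4-microphone array (m) and a synthetic source used to fake measurements.
      mics = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 1.0], [0.0, 0.5, 1.0], [0.5, 0.5, 1.5]])
      true_src = np.array([3.0, 2.0, 1.2])

      def residuals(src, mics, measured_ranges):
          # Mismatch between modelled and measured source-to-microphone distances.
          return np.linalg.norm(mics - src, axis=1) - measured_ranges

      measured = np.linalg.norm(mics - true_src, axis=1) + 1e-3 * np.random.randn(len(mics))

      x0 = np.array([2.5, 2.5, 2.0])  # coarse initial estimate, standing in for the MUSIC output
      fit = least_squares(residuals, x0, args=(mics, measured), method="lm")
      print(fit.x)  # refined source position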

  11. Inertial sensor-based smoother for gait analysis.

    PubMed

    Suh, Young Soo

    2014-12-17

    An off-line smoother algorithm is proposed to estimate foot motion using an inertial sensor unit (three-axis gyroscopes and accelerometers) attached to a shoe. The smoother gives more accurate foot motion estimation than filter-based algorithms by using all of the sensor data rather than only the data available up to the current time. The algorithm consists of two parts. In the first part, a Kalman filter is used to obtain an initial foot motion estimate. In the second part, the error in the initial estimate is compensated using a smoother, where the problem is formulated as a quadratic optimization problem. An efficient solution of the quadratic optimization problem is given by exploiting its sparse structure. Experiments show that the proposed algorithm can estimate foot motion more accurately than a filter-based algorithm with reasonable computation time. In particular, there is significant improvement in the foot motion estimation when the foot is off the floor: the z-axis position squared error sum (total time: 3.47 s) while the foot is in the air is 0.0807 m2 for the Kalman filter and 0.0020 m2 for the proposed smoother.
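
    The paper poses the smoothing step as a sparse quadratic program over the whole trajectory; purely to illustrate the filter-then-smooth idea, the Python sketch below runs a generic forward Kalman filter followed by a Rauch-Tung-Striebel backward pass on a toy one-dimensional constant-velocity model (not the paper's foot-motion model; the noise settings are arbitrary).

      import numpy as np

      dt = 0.01
      F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
      H = np.array([[1.0, 0.0]])               # position-only measurement
      Q = 1e-4 * np.eye(2)                     # process noise covariance
      R = np.array([[1e-2]])                   # measurement noise covariance

      def kalman_rts(zs):
          x, P = np.zeros(2), np.eye(2)
          xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
          for z in zs:                         # forward pass: Kalman filter
              xp, Pp = F @ x, F @ P @ F.T + Q
              K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
              x = xp + K @ (np.atleast_1d(z) - H @ xp)
              P = (np.eye(2) - K @ H) @ Pp
              xs_f.append(x); Ps_f.append(P); xs_p.append(xp); Ps_p.append(Pp)
          xs_s, Ps_s = list(xs_f), list(Ps_f)
          for k in range(len(zs) - 2, -1, -1): # backward pass: RTS smoother
              C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
              xs_s[k] = xs_f[k] + C @ (xs_s[k + 1] - xs_p[k + 1])
              Ps_s[k] = Ps_f[k] + C @ (Ps_s[k + 1] - Ps_p[k + 1]) @ C.T
          return np.array(xs_s)

      zs = np.sin(np.linspace(0, 1, 100)) + 0.1 * np.random.randn(100)
      print(kalman_rts(zs)[:3])  # first few smoothed (position, velocity) states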

  12. Adaptive estimation of nonlinear parameters of a nonholonomic spherical robot using a modified fuzzy-based speed gradient algorithm

    NASA Astrophysics Data System (ADS)

    Roozegar, Mehdi; Mahjoob, Mohammad J.; Ayati, Moosa

    2017-05-01

    This paper deals with adaptive estimation of the unknown parameters and states of a pendulum-driven spherical robot (PDSR), which is a nonlinear-in-parameters (NLP) chaotic system with parametric uncertainties. Firstly, the mathematical model of the robot is derived by applying the Newton-Euler methodology for a system of rigid bodies. Then, based on the speed gradient (SG) algorithm, the states and unknown parameters of the robot are estimated online for different step-length gains and initial conditions. The estimated parameters are updated adaptively according to the error between estimated and true state values. Since the errors of the estimated states and parameters, as well as the convergence rates, depend significantly on the value of the step-length gain, this gain should be chosen optimally; hence, a heuristic fuzzy logic controller is employed to adjust the gain adaptively. Simulation results indicate that the proposed approach is encouraging for identification of this NLP chaotic system even when the initial conditions change and the uncertainties increase, making it suitable for implementation on a real robot.
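
    As a much-simplified illustration of the speed-gradient idea, the Python sketch below adapts a single parameter of a linear toy model by descending the instantaneous squared output error with a fixed gain; in the paper, the gain is itself adjusted online by a fuzzy logic controller and the plant is the full spherical-robot model rather than this toy system.

      import numpy as np

      def sg_estimate(xs, ys, gamma=0.5, theta0=0.0):
          # Speed-gradient-style update: theta <- theta - gamma * d(0.5*err^2)/d(theta).
          theta = theta0
          for x, y in zip(xs, ys):
              err = theta * x - y        # instantaneous output error of the toy model y = theta*x
              theta -= gamma * err * x   # gradient step; gamma is the step-length gain
          return theta

      rng = np.random.default_rng(0)
      xs = rng.uniform(-1.0, 1.0, 500)
      ys = 2.5 * xs + 0.01 * rng.standard_normal(500)  # "true" parameter is 2.5
      print(sg_estimate(xs, ys))  # converges near 2.5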

  13. Project Cost Estimation for Planning

    DOT National Transportation Integrated Search

    2010-02-26

    For Nevada Department of Transportation (NDOT), there are far too many projects that ultimately cost much more than initially planned. Because project nominations are linked to estimates of future funding and the analysis of system needs, the inaccur...

  14. Changes in Soil Carbon Storage After Cultivation

    DOE Data Explorer

    Mann, L. K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2004-01-01

    Previously published data from 625 paired soil samples were used to predict carbon in cultivated soil as a function of initial carbon content. A 30-cm sampling depth provided a less variable estimate (r2 = 0.9) of changes in carbon than a 15-cm sampling depth (r2 = 0.6). Regression analyses of changes in carbon storage in relation to years of cultivation confirmed that the greatest rates of change occurred in the first 20 y. An initial carbon effect was present in all analyses: soils very low in carbon tended to gain slight amounts of carbon after cultivation, but soils high in carbon lost at least 20% during cultivation. Carbon losses from most agricultural soils are estimated to average less than 20% of initial values or less than 1.5 kg/m2 within the top 30 cm. These estimates should not be applied to depths greater than 30 cm and would be improved with more bulk density information and equivalent sample volumes.
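
    A minimal sketch of the kind of regression described, using synthetic numbers rather than the study's 625 paired samples: fitting cultivated-soil carbon as a linear function of initial carbon reproduces the qualitative pattern of small gains at low initial carbon and proportionally larger losses at high initial carbon.

      import numpy as np

      # Synthetic, illustrative values (kg C per m^2 in the top 30 cm), not the study's data.
      initial_c    = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
      cultivated_c = np.array([1.05, 1.90, 3.40, 4.90, 6.30, 7.80])

      slope, intercept = np.polyfit(initial_c, cultivated_c, 1)
      print(f"cultivated C ~= {intercept:.2f} + {slope:.2f} * initial C")
      # A positive intercept with slope < 1 implies slight gains for carbon-poor soils
      # and losses of roughly 20% or more for carbon-rich soils.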

  15. The algorithm of motion blur image restoration based on PSF half-blind estimation

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ke; Lin, Zhe

    2011-08-01

    A novel motion-blur image restoration algorithm based on half-blind PSF estimation with the Hough transform is introduced, building on an analysis of the TDICCD camera's operating principle and addressing the restoration distortion that arises when the IBD algorithm uses a vertical uniform linear motion model as the initial PSF value. Firstly, the mathematical model of image degradation is established using prior information from multi-frame images; the two parameters that most strongly influence the PSF estimate (motion-blur length and angle) are then determined accordingly. Finally, the restored image is obtained by iteratively refining this initial PSF estimate in the Fourier domain. Experimental results show that the proposed algorithm not only effectively resolves the image distortion caused by relative motion between the TDICCD camera and moving objects, but also clearly restores the detail of the original image.
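
    The iterative half-blind scheme itself is not reproduced here; as a minimal illustration of the two parameters it estimates, the Python sketch below builds a linear motion-blur PSF from an assumed blur length and angle and applies a standard frequency-domain Wiener deconvolution as a stand-in for the Fourier-domain iteration described above.

      import numpy as np

      def motion_psf(length, angle_deg, size=21):
          # Linear motion-blur PSF from blur length (pixels) and blur angle (degrees).
          psf = np.zeros((size, size))
          c = size // 2
          t = np.deg2rad(angle_deg)
          for r in np.linspace(-length / 2.0, length / 2.0, 4 * size):
              x = int(round(c + r * np.cos(t)))
              y = int(round(c + r * np.sin(t)))
              if 0 <= x < size and 0 <= y < size:
                  psf[y, x] = 1.0
          return psf / psf.sum()

      def wiener_deconvolve(blurred, psf, k=1e-2):
          # Frequency-domain Wiener deconvolution with a flat noise-to-signal ratio k.
          # (The centred PSF introduces a fixed half-kernel translation, ignored here.)
          H = np.fft.fft2(psf, s=blurred.shape)
          G = np.fft.fft2(blurred)
          F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
          return np.real(np.fft.ifft2(F_hat))

      # Usage: restored = wiener_deconvolve(blurred_image, motion_psf(length=9, angle_deg=30))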

  16. In situ nano- to microscopic imaging and growth mechanism of electrochemical dissolution (e.g., corrosion) of a confined metal surface

    PubMed Central

    Merola, C.; Cheng, H.-W.; Schwenzfeier, K.; Kristiansen, K.; Chen, Y.-J.; Dobbs, H. A.; Valtiner, M.

    2017-01-01

    Reactivity in confinement is central to a wide range of applications and systems, yet it is notoriously difficult to probe reactions in confined spaces in real time. Using a modified electrochemical surface forces apparatus (EC-SFA) on confined metallic surfaces, we observe in situ nano- to microscale dissolution and pit formation (qualitatively similar to previous observation on nonmetallic surfaces, e.g., silica) in well-defined geometries in environments relevant to corrosion processes. We follow “crevice corrosion” processes in real time in different pH-neutral NaCl solutions and applied surface potentials of nickel (vs. Ag|AgCl electrode in solution) for the mica–nickel confined interface of total area ∼0.03 mm2. The initial corrosion proceeds as self-catalyzed pitting, visualized by the sudden appearance of circular pits with uniform diameters of 6–7 μm and depth ∼2–3 nm. At concentrations above 10 mM NaCl, pitting is initiated at the outer rim of the confined zone, while below 10 mM NaCl, pitting is initiated inside the confined zone. We compare statistical analysis of growth kinetics and shape evolution of individual nanoscale deep pits with estimates from macroscopic experiments to study initial pit growth and propagation. Our data and experimental techniques reveal a mechanism that suggests initial corrosion results in formation of an aggressive interfacial electrolyte that rapidly accelerates pitting, similar to crack initiation and propagation within the confined area. These results support a general mechanism for nanoscale material degradation and dissolution (e.g., crevice corrosion) of polycrystalline nonnoble metals, alloys, and inorganic materials within confined interfaces. PMID:28827338

  17. Retrospective Analysis of Dose Titration and Serum Testosterone Level Assessments in Patients Treated With Topical Testosterone.

    PubMed

    Muram, David; Kaltenboeck, Anna; Boytsov, Natalie; Hayes-Larson, Eleanor; Ivanova, Jasmina; Birnbaum, Howard G; Swindle, Ralph

    2015-11-01

    Patterns of care following topical testosterone agent (TTA) initiation are poorly understood. This study aimed to characterize care following TTA initiation and compare results between patients with and without a serum testosterone (T) assay within 30 days before and including TTA initiation. Adult men (N=4,146) initiating TTAs from January 1, 2011, to March 31, 2012, were identified from a commercially insured database. Patients were included if they initiated at recommended starting dose (RSD) and had ≥12 and ≥6 months of continuous eligibility preinitiation (baseline) and postinitiation (study period), respectively. Patients were stratified by preinitiation T assay. Maintenance dose attainment month was determined using unadjusted generalized estimating equations regression to compare dose relative to RSD month by month. Outcomes included maintenance dose attainment month, time to stopping of index TTA refills or a claim for nonindex testosterone replacement therapy (TRT), and proportion of patients with study period T assay or diagnosis of hypogonadism (HG) or another low testosterone condition, and were compared using chi-square and Wilcoxon rank-sum tests for categorical and continuous variables, respectively. Maintenance dose was attained in Month 4 postinitiation, at 115.2% of RSD. Approximately 46% of patients had a preinitiation T assay; these men were more likely to receive a diagnosis of HG or another low testosterone condition, to have a follow-up T assay, to continue treatment by filling a nonindex TRT, and less likely to stop refilling treatment with their index TTA. Differences in care following TTA initiation suggest that preinitiation T assays (i.e., guideline-based care) may be helpful in ensuring treatment benefits. © The Author(s) 2014.

  18. Understanding Predictors of Early Antenatal Care Initiation in Relationship to Timing of HIV Diagnosis in South Africa.

    PubMed

    Nattey, Cornelius; Jinga, Nelly; Mongwenyana, Constance; Mokhele, Idah; Mohomi, Given; Fox, Matthew P; Onoya, Dorina

    2018-06-01

    Effective prevention of mother-to-child transmission benefits from early presentation to antenatal care (ANC). It is, however, unclear whether a previous HIV diagnosis results in earlier initiation of ANC. We estimated the probability of early ANC initiation among women with a previous HIV-positive diagnosis compared to those who first tested for HIV during ANC, and explored determinants of early ANC among HIV-positive women. We conducted an analysis of a cross-sectional survey among 411 HIV-positive adult (>18 years) women who gave birth at midwife obstetrics units in Gauteng between October 2016 and May 2017. Predictors of early ANC (defined as initiating ANC at or before 14 weeks of gestation) were assessed with a multivariate log-binomial regression model. Overall, 51% (210) were diagnosed during pregnancy, with 89% (188) initiating antiretroviral therapy on the same day as diagnosis. There was no meaningful difference in the timing of ANC initiation between women with a previous HIV diagnosis and those diagnosed during pregnancy [adjusted risk ratio (aRR) = 1.2; 95% confidence interval (95% CI): 0.9-1.7]. Early ANC was predicted by planned pregnancy [aRR = 1.3; 95% CI: 1.1-1.7], parity of more than two children versus no children [aRR = 0.6; 95% CI: 0.2-0.9], and tuberculosis diagnosis [aRR = 2.9; 95% CI: 1.4-6.1]. Our results suggest the need for targeted interventions among HIV-positive women, improving the quality, content, and outreach of ANC services to enhance early ANC uptake and minimize mother-to-child transmission risk.

  19. Effect of Statin Use on Acute Kidney Injury Risk Following Coronary Artery Bypass Grafting

    PubMed Central

    Layton, J. Bradley; Kshirsagar, Abhijit V.; Simpson, Ross J.; Pate, Virginia; Funk, Michele Jonsson; Sturmer, Til; Brookhart, M. Alan

    2013-01-01

    Acute kidney injury (AKI) is a serious complication of cardiovascular surgery. While some non-experimental studies suggest statin use may reduce post-surgical AKI, methodological differences in study designs leave uncertainty regarding the reality or magnitude of the effect. We estimated the effect of pre-operative statin initiation on post-coronary artery bypass graft (CABG) AKI using an epidemiologic approach more closely simulating a randomized controlled trial in a large CABG patient population. We utilized healthcare claims from large, employer-based and Medicare insurance databases for the years 2000 – 2010. To minimize healthy user bias, we identified patients undergoing non-emergency CABG who either newly initiated a statin within 20 days prior to surgery or were unexposed for 200+ days prior to CABG. AKI was identified within 15 days following CABG. We calculated multivariable adjusted risk ratios (RR) and 95% confidence intervals (CI) with Poisson regression. Analyses were repeated using propensity score methods adjusted for clinical and healthcare utilization variables. We identified 17,077 CABG patients. Post-CABG AKI developed in 3.4% of statin initiators and 6.2% of non-initiators. After adjustment, we observed a protective effect of statin initiation on AKI (RR = 0.78, 95% CI 0.63, 0.96). This effect differed by age: ≥65 years, RR=0.91 (95% CI: 0.68, 1.20); <65 years, RR=0.62 (95% CI: 0.45, 0.86), although AKI was more common in the older age group (7.7 vs. 4.0%). In conclusion, statin initiation immediately prior to CABG may modestly reduce the risk of post-operative AKI, particularly in younger CABG patients. PMID:23273532
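
    A minimal sketch of Poisson regression used to obtain adjusted risk ratios for a binary outcome, assuming the statsmodels GLM interface; the variable names and synthetic data are invented and do not correspond to the claims databases, and the robust covariance is a common companion to this approach rather than something stated in the abstract.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Hypothetical patient rows; names are illustrative only.
      rng = np.random.default_rng(1)
      n = 2000
      df = pd.DataFrame({
          "statin_initiator": rng.integers(0, 2, n),
          "age_65plus": rng.integers(0, 2, n),
      })
      base_risk = 0.04 + 0.03 * df["age_65plus"]
      df["aki"] = rng.binomial(1, base_risk * np.where(df["statin_initiator"] == 1, 0.8, 1.0))

      # Poisson regression on a binary outcome yields adjusted risk ratios;
      # a robust (sandwich) covariance keeps the standard errors honest.
      fit = smf.glm("aki ~ statin_initiator + age_65plus", data=df,
                    family=sm.families.Poisson()).fit(cov_type="HC0")
      print(np.exp(fit.params))  # exponentiated coefficients = adjusted risk ratios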

  20. Outcomes of HIV-positive patients with cryptococcal meningitis in the Americas.

    PubMed

    Crabtree Ramírez, B; Caro Vega, Y; Shepherd, B E; Le, C; Turner, M; Frola, C; Grinsztejn, B; Cortes, C; Padgett, D; Sterling, T R; McGowan, C C; Person, A

    2017-10-01

    Cryptococcal meningitis (CM) is associated with substantial mortality in HIV-infected patients. Optimal timing of antiretroviral therapy (ART) in persons with CM represents a clinical challenge, and the burden of CM in Latin America has not been well described. Studies suggest that early ART initiation is associated with higher mortality, but data from the Americas are scarce. HIV-infected adults with CM who were in care between 1985 and 2014 at participating sites in Latin America (the Caribbean, Central and South America network; CCASAnet) and at the Vanderbilt Comprehensive Care Clinic (VCCC) were included. Survival probabilities were estimated. The risk of death when initiating ART within the first 2 weeks after CM diagnosis versus initiating between 2 and 8 weeks was assessed using dynamic marginal structural models adjusting for site, age, sex, year of CM, CD4 count, and route of HIV transmission. 340 patients were included (Argentina 58, Brazil 138, Chile 28, Honduras 27, Mexico 34, VCCC 55) and 142 (42%) died during the observation period. Among the 151 patients diagnosed with CM before ART initiation, 56 (37%) died, compared with 86 (45%) of the 189 diagnosed after ART initiation (p=0.14). Patients diagnosed with CM after ART had a higher risk of death (p=0.03, log-rank test). The probability of survival was not statistically different between patients who started ART within 2 weeks of CM (7/24, 29%) and those initiating between 2 and 8 weeks (14/53, 26%) (p=0.96), potentially due to lack of power. In this large Latin American cohort, patients with CM had very high mortality rates, especially those diagnosed after ART initiation. This study reflects the overwhelming burden of CM in HIV-infected patients in Latin America. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
