SOLE: Online Analysis of Southern FIA Data
Michael P. Spinney; Paul C. Van Deusen; Francis A. Roesch
2006-01-01
The Southern On Line Estimator (SOLE) is a flexible modular software program for analyzing U.S. Department of Agriculture Forest Service Forest Inventory and Analysis data. SOLE produces statistical tables, figures, maps, and portable document format reports based on user-selected area and variables. SOLE's Java-based graphical user interface is easy to use, and its R-...
Challenges of working with FIADB17 data: the SOLE experience
Michael Spinney; Paul Van Deusen
2007-01-01
The Southern On Line Estimator (SOLE) is an Internet-based Forest Inventory and Analysis (FIA) data analysis tool. SOLE is based on data downloaded from the publicly available FIA database (FIADB) and summarized by plot condition. The tasks of downloading, processing, and summarizing FIADB data require specialized expertise in inventory theory and data manipulation....
SOLE: enhanced FIA data analysis capabilities
Michael Spinney; Paul Van Deusen
2009-01-01
The Southern On Line Estimator (SOLE) is an Internet-based annual Forest Inventory and Analysis (FIA) data analysis tool developed cooperatively by the National Council for Air and Stream Improvement and the Forest Service, U.S. Department of Agriculture's Forest Inventory and Analysis program at the Southern Research Station. Recent development of SOLE has...
NASA Astrophysics Data System (ADS)
Sabatini, Laura; Bullo, Marianna; Cariani, Alessia; Celić, Igor; Ferrari, Alice; Guarniero, Ilaria; Leoni, Simone; Marčeta, Bojan; Marcone, Alessandro; Polidori, Piero; Raicevich, Saša; Tinti, Fausto; Vrgoč, Nedo; Scarcella, Giuseppe
2018-07-01
In the Adriatic Sea, two cryptic species of sole coexist: the common sole and the Egyptian sole. Soles are one of the most valuable demersal fishery resources in the Adriatic Sea, so correct species identification is crucial in order to perform stock assessments and implement effective management measures based on reliable and accurate data. In this study, specimens collected during fishery-independent and fishery-dependent activities in the Adriatic were analyzed and identified by coupling morphological and genetic approaches. A comparison of these two methods for sole species identification was carried out to assess the most effective, accurate and practical diagnostic morphological key-character(s). Results showed that external characters, in particular features of the posterior dorsal and anal fins, are valid and accurate morphological markers. Based on these traits, a practical identification key for the two sibling species was proposed. Moreover, it was possible to estimate the extent of the error due to species misidentification introduced in the common sole stock assessment carried out in the Northern-central Adriatic Sea (GSA17). A 5% bias in the correct identification of common sole specimens was detected. However, this bias was shown not to affect the common sole stock assessment. Moreover, the genetic profiling of the Adriatic common sole allowed estimation of genetic diversity and assessment of population structure. Significant divergence between common soles inhabiting the eastern part of the Southern Adriatic Sea and those collected from the other areas of the basin was confirmed. Therefore, the occurrence of genetically differentiated subpopulations supports the need to implement independent stock assessments and management measures.
ERIC Educational Resources Information Center
Hickson, Stephen; Reed, W. Robert; Sander, Nicholas
2012-01-01
This study investigates the degree to which grades based solely on constructed-response (CR) questions differ from grades based solely on multiple-choice (MC) questions. If CR questions are to justify their higher costs, they should produce different grade outcomes than MC questions. We use a data set composed of thousands of observations on…
Tsuka, T; Murahata, Y; Azuma, K; Osaki, T; Ito, N; Okamoto, Y; Imagawa, T
2014-10-01
Computed tomography (CT) was performed on 800 untrimmed claws (400 inner claws and 400 outer claws) of 200 pairs of bovine hindlimbs to investigate the relationships between dorsal wall length and sole thickness, and between dorsal wall length and the relative rotation angle of distal phalanx-to-sole surface (S-D angle). Sole thickness was 3.8 and 4.0 mm at the apex of the inner claws and outer claws, respectively, with dorsal wall lengths <70 mm. These sole thickness values were less than the critical limit of 5 mm, which is associated with a softer surface following thinning of the soles. A sole thickness of 5 mm at the apex was estimated to correlate with dorsal wall lengths of 72.1 and 72.7 mm for the inner and outer claws, respectively. Sole thickness was 6.1 and 6.4 mm at the apex of the inner and outer claws, respectively, with dorsal wall lengths of 75 mm. These sole thickness values were less than the recommended sole thickness of 7 mm based on the protective function of the soles. A sole thickness >7 mm at the apex was estimated to correlate with a dorsal wall length of 79.8 and 78.4 mm for the inner and outer claws, respectively. The S-D angles were recorded as anteversions of 2.9° and 4.7° for the inner and outer claws, respectively, with a dorsal wall length of 75 mm. These values indicate that the distal phalanx is likely to have rotated naturally forward toward the sole surface. The distal phalanx rotated backward to the sole surface at 3.2° and 7.6° for inner claws with dorsal wall lengths of 90-99 and ≥100 mm, respectively; and at 3.5° for outer claws with a dorsal wall length ≥100 mm. Dorsal wall lengths of 85.7 and 97.2 mm were estimated to correlate with a parallel positional relationship of the distal phalanx to the sole surface in the inner and outer claws, respectively. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Health Care Utilization and Expenditures Attributable to Cigar Smoking Among US Adults, 2000-2015.
Wang, Yingning; Sung, Hai-Yen; Yao, Tingting; Lightwood, James; Max, Wendy
Cigar use in the United States is a growing public health concern because of its increasing popularity. We estimated health care utilization and expenditures attributable to cigar smoking among US adults aged ≥35. We analyzed data on 84 178 adults using the 2000, 2005, 2010, and 2015 National Health Interview Surveys. We estimated zero-inflated Poisson (ZIP) regression models on hospital nights, emergency department (ED) visits, physician visits, and home-care visits as a function of tobacco use status and other covariates, with tobacco use status categorized as current sole cigar smokers (ie, smoke cigars only), current poly cigar smokers (smoke cigars and smoke cigarettes or use smokeless tobacco), former sole cigar smokers (used to smoke cigars only), former poly cigar smokers (used to smoke cigars and smoke cigarettes or use smokeless tobacco), other tobacco users (ever smoked cigarettes and used smokeless tobacco but not cigars), and never tobacco users (never smoked cigars, smoked cigarettes, or used smokeless tobacco). We calculated health care utilization attributable to current and former sole cigar smoking based on the estimated ZIP models, and then we calculated total health care expenditures attributable to cigar smoking. Current and former sole cigar smoking was associated with excess annual utilization of 72 137 hospital nights, 32 748 ED visits, and 420 118 home-care visits. Annual health care expenditures attributable to sole cigar smoking were $284 million ($625 per sole cigar smoker), and total annual health care expenditures attributable to sole and poly cigar smoking were $1.75 billion. Comprehensive tobacco control policies and interventions are needed to reduce cigar smoking and the associated health care burden.
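To make the modeling step above concrete, here is a minimal sketch, in Python with statsmodels, of a zero-inflated Poisson regression on simulated data; the smoking-status indicator, coefficients and sample values are illustrative assumptions, not the study's data or code.

# Sketch of a ZIP regression and an "attributable utilization" calculation on
# simulated data (illustrative only; variable names and values are assumed).
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 5000
sole_cigar = rng.integers(0, 2, n)          # 1 = current sole cigar smoker (assumed indicator)
age = rng.uniform(35, 85, n)

# Simulate zero-inflated counts: structural zeros plus a Poisson component.
p_zero = 0.6
lam = np.exp(-1.0 + 0.3 * sole_cigar + 0.01 * (age - 60))
y = np.where(rng.random(n) < p_zero, 0, rng.poisson(lam))

X = sm.add_constant(np.column_stack([sole_cigar, age]))
zip_model = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1)), inflation='logit')
res = zip_model.fit(method='bfgs', maxiter=200, disp=False)

# Attributable utilization: predicted visits minus the counterfactual with the
# smoking indicator set to zero, summed over smokers.
X_cf = X.copy()
X_cf[:, 1] = 0
excess = res.predict(X, exog_infl=np.ones((n, 1))) - res.predict(X_cf, exog_infl=np.ones((n, 1)))
print("excess visits attributable to sole cigar smoking:", excess[sole_cigar == 1].sum())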
A critique of age estimation using attrition as the sole indicator.
Ball, J
2002-12-01
The age determination of skeletal remains has been carried out using anthropological examination of the remaining bones and dentition. The aging of the dentition is based on attrition which, if physiological, will correlate with age. Occasionally the only material available is a single tooth or a few teeth, or in the case of a living person, teeth in situ. In certain cases microscopic examination of the teeth may not be possible and the age estimation is then often determined by the degree of attrition associated with the tooth. In more recent times the causes of attrition have involved other factors such as bruxism, diet, environment and medication. The weaknesses and limitations of age estimation by examination of dental attrition as the sole indicator of age are highlighted.
John R. Brooks
2007-01-01
A technique for estimating stand average dominant height based solely on field inventory data is investigated. Using only 45.0919 percent of the largest trees per acre in the diameter distribution resulted in estimates of average dominant height that were within 4.3 feet of the actual value, when averaged over stands of very different structure and history. Cubic foot...
Helmersson-Karlqvist, Johanna; Ärnlöv, Johan; Larsson, Anders
2016-10-01
Decreased glomerular filtration rate (GFR) is an important cardiovascular risk factor, but estimated GFR (eGFR) may differ depending on whether it is based on creatinine or cystatin C. A combined creatinine/cystatin C equation has recently been shown to best estimate GFR; however, the benefits of using the combined equation for risk prediction in routine clinical care have been less studied. This study compares mortality risk prediction by eGFR using the combined creatinine/cystatin C equation (CKD-EPI), a sole creatinine equation (CKD-EPI) and a sole cystatin C equation (CAPA), respectively, using assays that are traceable to international calibrators. All patients analysed for both creatinine and cystatin C from the same blood sample tube (n = 13,054) during 2005-2007 in Uppsala University Hospital Laboratory were divided into eGFR risk categories >60, 30-60 and <30 mL/min/1.73 m² by each eGFR equation. During follow-up (median 4.6 years), 4398 participants died, of which 1396 deaths were due to cardiovascular causes. Reduced eGFR was significantly associated with death as assessed by all eGFR equations. The net reclassification improvement (NRI) for the combination equation compared with the sole creatinine equation was 0.10 (p < 0.001) for all-cause mortality and 0.08 (p < 0.001) for cardiovascular mortality, indicating improved reclassification. In contrast, NRI for the combination equation, compared with the sole cystatin C equation, was -0.06 (p < 0.001) for all-cause mortality and -0.02 (p = 0.032) for cardiovascular mortality, indicating a worsened reclassification. In routine clinical care, cystatin C-based eGFR was more closely associated with mortality compared with both creatinine-based eGFR and creatinine/cystatin C-based eGFR. © The European Society of Cardiology 2016.
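For readers unfamiliar with the statistic reported above, the category-based net reclassification improvement comparing a new risk model with a reference model is conventionally written (standard form, not a formula taken from this paper) as

NRI = [P(up | event) - P(down | event)] + [P(down | non-event) - P(up | non-event)],

where "up" and "down" denote movement to a higher or lower eGFR risk category under the new model and the events are the deaths of interest; positive values indicate improved reclassification.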
On-Line Analysis of Southern FIA Data
Michael P. Spinney; Paul C. Van Deusen; Francis A. Roesch
2006-01-01
The Southern On-Line Estimator (SOLE) is a web-based FIA database analysis tool designed with an emphasis on modularity. The Java-based user interface is simple and intuitive to use and the R-based analysis engine is fast and stable. Each component of the program (data retrieval, statistical analysis and output) can be individually modified to accommodate major...
Subduction starts by stripping slabs
NASA Astrophysics Data System (ADS)
Soret, Mathieu; Agard, Philippe; Dubacq, Benoît; Prigent, Cécile; Plunder, Alexis; Yamato, Philippe; Guillot, Stéphane
2017-04-01
Metamorphic soles correspond to tectonic slices welded beneath most large-scale ophiolites. These slivers of oceanic crust, metamorphosed up to granulite-facies conditions, are interpreted as having formed during the first My of intra-oceanic subduction through heat transfer from the incipient mantle wedge towards the top of the subducting plate. Our study reappraises the formation of metamorphic soles through detailed field and petrological work on three classical key sections across the Semail ophiolite (Oman and United Arab Emirates). Geothermobarometry and thermodynamic modelling show that metamorphic soles do not record a continuous temperature gradient, as expected from simple heating by the upper plate or by shear heating and proposed by previous studies. The upper, high-temperature metamorphic sole is subdivided into at least two units, testifying to the stepwise formation, detachment and accretion of successive slices from the downgoing slab to the mylonitic base of the ophiolite. Estimated peak pressure-temperature conditions through the metamorphic sole are, from top to bottom, 850 °C - 1 GPa, 725 °C - 0.8 GPa and 530 °C - 0.5 GPa. These estimates appear constant within each unit but are separated by gaps of 100 to 200 °C and 0.2 GPa. Despite being separated by hundreds of kilometres below the Semail ophiolite and having contrasting locations with respect to the ophiolite ridge axis, the metamorphic soles show no evidence for significant petrological variations along strike. These constraints allow us to refine the tectonic-petrological model for the genesis of metamorphic soles, formed through the stepwise stacking of several homogeneous slivers of oceanic crust and its sedimentary cover. Metamorphic soles result not so much from downward heat transfer (ironing effect) as from progressive metamorphism during strain localization and cooling of the plate interface. The successive thrusts are the result of rheological contrasts between the sole (initially part of the subducting slab) and the peridotite above as the plate interface progressively cools down. These findings have implications for the thickness, the scale and the coupling state at the plate interface during the early history of subduction/obduction systems.
48 CFR 9905.506-60 - Illustrations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... base. In a proposal for a covered contract, it estimates the allocable expenses based solely on the..., it has a 5-month transitional “fiscal year.” The same 5-month period must be used as the transitional cost accounting period; it may not be combined as provided in 9905.506-50(f), because the transitional...
48 CFR 9905.506-60 - Illustrations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... base. In a proposal for a covered contract, it estimates the allocable expenses based solely on the..., it has a 5-month transitional “fiscal year.” The same 5-month period must be used as the transitional cost accounting period; it may not be combined as provided in 9905.506-50(f), because the transitional...
Croué, Iola; Fikse, Freddy; Johansson, Kjell; Carlén, Emma; Thomas, Gilles; Leclerc, Hélène; Ducrocq, Vincent
2017-10-01
Claw lesions are one of the most important health issues in dairy cattle. Although the frequency of claw lesions depends greatly on herd management, the frequency can be lowered through genetic selection. A genetic evaluation could be developed based on trimming records collected by claw trimmers; however, not all cows present in a herd are usually selected by the breeder to be trimmed. The objectives of this study were to investigate the importance of the preselection of cows for trimming, to account for this preselection, and to estimate genetic parameters of claw health traits. The final data set contained 25,511 trimming records of French Holstein cows. Analyzed claw lesion traits were digital dermatitis, heel horn erosion, interdigital hyperplasia, sole hemorrhage circumscribed, sole hemorrhage diffused, sole ulcer, and white line fissure. All traits were analyzed as binary traits in a multitrait linear animal model. Three scenarios were considered: including only trimmed cows in a 7-trait model (scenario 1); or trimmed cows and contemporary cows not trimmed but present at the time of a visit (considering that nontrimmed cows were healthy) in a 7-trait model (scenario 2); or trimmed cows and contemporary cows not trimmed but present at the time of a visit (considering lesion records for trimmed cows only), in an 8-trait model, including a 0/1 trimming status trait (scenario 3). For scenario 3, heritability estimates ranged from 0.02 to 0.09 on the observed scale. Genetic correlations clearly revealed 2 groups of traits (digital dermatitis, heel horn erosion, and interdigital hyperplasia on the one hand, and sole hemorrhage circumscribed, sole hemorrhage diffused, sole ulcer, and white line fissure on the other hand). Heritabilities on the underlying scale did not vary much depending on the scenario: the effect of the preselection of cows for trimming on the estimation of heritabilities appeared to be negligible. However, including untrimmed cows as healthy caused bias in the estimation of genetic correlations. The use of a trimming status trait to account for preselection appears promising, as it allows consideration of the exhaustive population of cows present at the time a trimmer visited a farm without causing bias in genetic parameters. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Becker, Johanna; Reist, Martin; Steiner, Adrian
2014-04-01
This study assessed the attitudes of personnel involved in therapeutic claw trimming of dairy cattle in Switzerland towards pain associated with sole ulcers and their treatment. Data from 77 farmers, 32 claw trimmers, and 137 cattle veterinarians were used. A large range of factors were associated with whether the respondents thought that anaesthesia during the treatment of sole ulcers was beneficial; these included year of graduation, work experience, attitude to costs of analgesia, perception of competition between veterinarians and claw trimmers, estimation of pain level associated with treatment, estimated sensitivity of dairy cows to pain, knowledge of the obligation to provide analgesia, and whether the respondent thought lesion size and occurrence of defensive behaviour by the cow were important. Respondents' estimation of the pain level associated with sole ulcer treatment was linked to frequency of therapeutic claw trimming, age, farmers' income, estimated knowledge of the benefits of analgesia, and estimated sensitivity of dairy cows to pain. The latter factor was associated with profession, frequency of therapeutic claw trimming, capability of pain recognition, opinion on the benefits of analgesia, knowledge of the obligation to provide analgesia, and self-estimation of the ability to recognise pain. Improving the knowledge of personnel involved in therapeutic claw trimming with regard to pain in dairy cows and how to alleviate it is crucial if management of pain associated with treatment of sole ulcer and the welfare of lame cows are to be optimised. Copyright © 2014 Elsevier Ltd. All rights reserved.
Sean P. Healey; Paul L. Patterson; Sassan Saatchi; Michael A. Lefsky; Andrew J. Lister; Elizabeth A. Freeman; Gretchen G. Moisen
2012-01-01
Light Detection and Ranging (LiDAR) returns from the spaceborne Geoscience Laser Altimeter (GLAS) sensor may offer an alternative to solely field-based forest biomass sampling. Such an approach would rely upon model-based inference, which can account for the uncertainty associated with using modeled, instead of field-collected, measurements. Model-based methods have...
78 FR 53336 - List of Fisheries for 2013
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-29
... provided on the LOF are solely used for descriptive purposes and will not be used in determining future... this information to determine whether the fishery can be classified on the LOF based on quantitative... does not have a quantitative estimate of the number of mortalities and serious injuries of pantropical...
Oman metamorphic sole formation reveals early subduction dynamics
NASA Astrophysics Data System (ADS)
Soret, Mathieu; Agard, Philippe; Dubacq, Benoît; Plunder, Alexis; Ildefonse, Benoît; Yamato, Philippe; Prigent, Cécile
2016-04-01
Metamorphic soles correspond to m- to ~500 m-thick tectonic slices welded beneath most of the large-scale ophiolites. They typically show a steep inverted metamorphic structure where the pressure and temperature conditions of crystallization increase upward (from 500±100°C at 0.5±0.2 GPa to 800±100°C at 1.0±0.2 GPa), with isograds subparallel to the contact with the overlying ophiolitic peridotite. The proportion of mafic rocks in metamorphic soles also increases from the bottom (metasediment-rich) to the top (approaching the ophiolite peridotites). These soles are interpreted as the result of heat transfer from the incipient mantle wedge toward the nascent slab (associated with large-scale fluid transfer and possible shear heating) during the first My of intra-oceanic subduction (as indicated by radiometric ages). Metamorphic soles therefore provide major constraints on early subduction dynamics (i.e., thermal structure, fluid migration and rheology along the nascent slab interface). We present a detailed structural and petrological study of the metamorphic sole from 4 major cross-sections along the Oman ophiolite. We show precise pressure-temperature estimates obtained by pseudosection modelling and EBSD measurements performed on both the garnet-bearing and garnet-free high-grade sole. Results allow quantification of the micro-scale deformation and highlight differences in pressure-temperature-deformation conditions between the 4 different locations, showing that the inverted metamorphic gradient through the sole is not continuous in all locations. Based on these new constraints, we suggest a new tectonic-petrological model for the formation of metamorphic soles below ophiolites. This model involves the stacking of several homogeneous slivers of oceanic crust leading to the present-day structure of the sole. In this view, these thrusts are the result of rheological contrasts between the sole and the peridotite as the plate interface progressively cools down. These slivers later underwent several stages of retrogression (partly mediated by ascending fluids from the slab) from amphibolite- to prehnite/pumpellyite-facies conditions.
Improving comfort of shoe sole through experiments based on CAD-FEM modeling.
Franciosa, Pasquale; Gerbino, Salvatore; Lanzotti, Antonio; Silvestri, Luca
2013-01-01
It has been reported that comfort is second only to style as a key aspect in purchasing footwear. One of the most important components of footwear is the shoe sole, whose design is based on many factors such as foot shape/size, perceived comfort and materials. The present paper focuses on the parametric analysis of a shoe sole to improve the perceived comfort. The sensitivity of geometric and material design factors on comfort degree was investigated by combining real experimental tests and CAD-FEM simulations. The correlation between perceived comfort and physical responses, such as plantar pressures, was estimated by conducting real tests. Four different conditions were analyzed: subjects wearing three commercially available shoes and in a barefoot condition. For each condition, subjects expressed their perceived comfort score. Plantar pressures were also monitored using plantar sensors. Once given such a correlation, a parametric FEM model of the footwear was developed. In order to better simulate contact at the plantar surface, a detailed FEM model of the foot was also generated from CT scan images. Lastly, a fractional factorial design array was applied to study the sensitivity of different sets of design factors on comfort degree. The findings of this research showed that the sole thickness and its material highly influence perceived comfort. In particular, softer materials and thicker soles contribute to increasing the degree of comfort. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
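As a hedged illustration of the first estimator discussed above (not the paper's algorithm), the following Python sketch chooses the kernel scaling factor solely from a random sample, here by leave-one-out cross-validated likelihood over a small grid:

# Gaussian kernel density estimate with a data-driven scaling factor
# (minimal sketch; grid values and sample are illustrative assumptions).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(-2, 0.7, 150), rng.normal(1, 1.2, 150)])

def loo_log_likelihood(data, bw):
    # Sum of log densities at each point, estimated from all other points.
    ll = 0.0
    for i in range(len(data)):
        kde = gaussian_kde(np.delete(data, i), bw_method=bw)
        ll += np.log(kde(data[i])[0])
    return ll

bandwidths = [0.1, 0.2, 0.3, 0.5, 0.8]
best_bw = max(bandwidths, key=lambda bw: loo_log_likelihood(sample, bw))
kde = gaussian_kde(sample, bw_method=best_bw)
print("selected scaling factor:", best_bw, "density at 0:", kde(0.0)[0])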
NASA Astrophysics Data System (ADS)
Soret, Mathieu; Agard, Philippe; Dubacq, Benoît; Hirth, Greg; Yamato, Philippe; Ildefonse, Benoît; Prigent, Cécile
2016-04-01
Metamorphic soles correspond to m to ~500 m thick highly strained metamorphic rock units found beneath mylonitic banded peridotites at the base of large-scale ophiolites, as exemplified in Oman. Metamorphic soles are mainly composed of metabasalts deriving from the downgoing oceanic lithosphere and metamorphosed up to granulite-facies conditions by heat transfer from the mantle wedge. Pressure-temperature peak conditions are usually estimated at 1.0±0.2 GPa and 800±100°C. The absence of HP-LT metamorphism overprint implies that metamorphic soles have been formed and exhumed during subduction infancy. In this view, metamorphic soles were strongly deformed during their accretion to the mantle wedge (corresponding, now, to the base of the ophiolite). Therefore, metamorphic soles and banded peridotites are direct witnesses of the dynamics of early subduction zones, in terms of thermal structure, fluid migration and rheology evolution across the nascent slab interface. Based on fieldwork and EBSD analyses, we present a detailed (micro-) structural study performed on samples coming from the Sumeini window, the better-preserved cross-section of the metamorphic sole of Oman. Large differences are found in the deformation (CPO, grain size, aspect ratio) of clinopyroxene, amphibole and plagioclase, related to mineralogical changes linked with the distance to the peridotite contact (e.g., hardening due to the appearance of garnet and clinopyroxene). To model the incipient slab interface in laboratory, we carried out 5 hydrostatic annealing and simple-shear experiments on Griggs solid-medium apparatus. Deformation experiments were conducted at axial strain rates of 10-6 s-1. Fine-grained amphibolite was synthetized by adding 1 wt.% water to a (Mid-Ocean Ridge) basalt powder as a proxy for the metamorphic sole (amphibole + plagioclase + clinopyroxene ± garnet assemblage). To synthetize garnet, 2 experiments were carried out in hydrostatic conditions and with deformation at 800°C with confining pressure of 2 GPa. Another simple-shear experiment has been carried out at 800°C and 1 GPa with fined-grained natural garnet. With the aim of mimicking the early slab interface (between the metamorphic sole and banded peridotites at the base of the ophiolite), 2 simple-shear deformation experiments with 2 layers have been carried out at 800°C and confining pressure of 1 GPa. The bottom layer was made of hydrated basalt powder and the top layer was made of olivine. Fined-grained garnet-free amphibolite is significantly weaker than dunite but the appearance of harder minerals in the amphibolite (garnet and clinopyroxene) has major implications on its rheological evolution. These results allow liking field observations of strain localization at the interface to the metamorphic sole formation.
Seo, Songwon; Lee, Dal Nim; Jin, Young Woo; Lee, Won Jin; Park, Sunhoo
2018-05-11
Risk projection models estimating the lifetime cancer risk from radiation exposure are generally based on exposure dose, age at exposure, attained age, gender and study-population-specific factors such as baseline cancer risks and survival rates. Because such models have mostly been based on the Life Span Study cohort of Japanese atomic bomb survivors, the baseline risks and survival rates in the target population should be considered when applying the cancer risk. The survival function used in the risk projection models that are commonly used in the radiological protection field to estimate the cancer risk from medical or occupational exposure is based on all-cause mortality. Thus, it may not be accurate for estimating the lifetime risk of high-incidence but not life-threatening cancer with a long-term survival rate. Herein, we present the lifetime attributable risk (LAR) estimates of all solid cancers except thyroid cancer, thyroid cancer, and leukemia except chronic lymphocytic leukemia in South Korea for lifetime exposure to 1 mGy per year using the cancer-free survival function, as recently applied in the Fukushima health risk assessment by the World Health Organization. Compared with the estimates of LARs using an overall survival function solely based on all-cause mortality, the LARs of all solid cancers except thyroid cancer, and thyroid cancer evaluated using the cancer-free survival function, decreased by approximately 13% and 1% for men and 9% and 5% for women, respectively. The LAR of leukemia except chronic lymphocytic leukemia barely changed for either gender owing to the small absolute difference between its incidence and mortality. Given that many cancers have a high curative rate and low mortality rate, using a survival function solely based on all-cause mortality may cause an overestimation of the lifetime risk of cancer incidence. The lifetime fractional risk was robust against the choice of survival function.
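A sketch of the quantity being compared, written in the generic form used in risk-projection work (the abstract does not give the exact model components):

LAR(e) = \int_{e+L}^{a_max} M(D, e, a) [S(a)/S(e)] da,

where M(D, e, a) is the excess risk at attained age a for dose D received at age e, L is a latency period, and S is the survival function. Replacing all-cause survival with cancer-free survival makes S(a)/S(e) smaller, which is why the LARs of high-incidence, low-mortality cancers decrease in the comparison above.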
Characteristics of Rural Communities with a Sole, Independently Owned Pharmacy.
Nattinger, Matthew; Ullrich, Fred; Mueller, Keith J
2015-04-01
Prior RUPRI Center policy briefs have described the role of rural pharmacies in providing many essential clinical services (in addition to prescription and nonprescription medications), such as blood pressure monitoring, immunizations, and diabetes counseling, and the adverse effects of Medicare Part D negotiated networks on the financial viability of rural pharmacies.1 Because rural pharmacies play such a broad role in health care delivery, pharmacy closures can sharply reduce access to essential health care services in rural and underserved communities. These closures are of particular concern in rural areas served by a sole, independently owned pharmacy (i.e., a pharmacy unaffiliated with a chain or franchise). This policy brief characterizes the population of rural areas served by a sole, independently owned pharmacy. Dependent on a sole pharmacy, these areas are at highest risk to lose access to many essential clinical services. Key Findings. (1) In 2014 over 2.7 million people lived in 663 rural communities served by a sole, independently owned pharmacy. (2) More than one-quarter of these residents (27.9 percent) were living below 150 percent of the federal poverty level. (3) Based on estimates from 2012, a substantial portion of the residents of these areas were dependent on public insurance (i.e., Medicare and/or Medicaid, 20.5 percent) or were uninsured (15.0 percent). (4) If the sole, independent retail pharmacy in these communities were to close, the next closest retail pharmacy would be over 10 miles away for a majority of rural communities (69.7 percent).
Estimation of Untracked Geosynchronous Population from Short-Arc Angles-Only Observations
NASA Technical Reports Server (NTRS)
Healy, Liam; Matney, Mark
2017-01-01
Telescope observations of the geosynchronous regime will observe two basic types of objects: objects related to geosynchronous earth orbit (GEO) satellites, and objects in highly elliptical geosynchronous transfer orbits (GTO). Because telescopes only measure angular rates, GTO objects can occasionally mimic the motion of GEO objects over short arcs. A GEO census based solely on short-arc telescope observations may be affected by these "interlopers". A census that includes multiple angular rates can obtain an accurate statistical estimate of the GTO population, which can then be used to correct the estimate of the geosynchronous earth orbit population.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayer, J.
Based on a compilation of three estimation approaches, the total nationwide population of wild pigs in the United States numbers approximately 6.3 million animals, with that total estimate ranging from 4.4 up to 11.3 million animals. The majority of these numbers (99 percent), which were encompassed by ten states (i.e., Alabama, Arkansas, California, Florida, Georgia, Louisiana, Mississippi, Oklahoma, South Carolina and Texas), were based on defined estimation methodologies (e.g., density estimates correlated to the total potential suitable wild pig habitat statewide, statewide harvest percentages, statewide agency surveys regarding wild pig distribution and numbers). In contrast to the pre-1990 estimates, none of these more recent efforts, collectively encompassing 99 percent of the total, were based solely on anecdotal information or speculation. To that end, one can defensibly state that the wild pigs found in the United States number in the millions of animals, with the nationwide population estimated to arguably vary from about four million up to about eleven million individuals.
An approximate spin design criterion for monoplanes, 1 May 1939
NASA Technical Reports Server (NTRS)
Seidman, O.; Donlan, C. J.
1976-01-01
An approximate empirical criterion, based on the projected side area and the mass distribution of the airplane, was formulated. The British results were analyzed and applied to American designs. A simpler design criterion, based solely on the type and the dimensions of the tail, was developed; it is useful in a rapid estimation of whether a new design is likely to comply with the minimum requirements for safety in spinning.
Planetary Probe Entry Atmosphere Estimation Using Synthetic Air Data System
NASA Technical Reports Server (NTRS)
Karlgaard, Chris; Schoenenberger, Mark
2017-01-01
This paper develops an atmospheric state estimator based on inertial acceleration and angular rate measurements combined with an assumed vehicle aerodynamic model. The approach utilizes the full navigation state of the vehicle (position, velocity, and attitude) to recast the vehicle aerodynamic model to be a function solely of the atmospheric state (density, pressure, and winds). Force and moment measurements are based on vehicle sensed accelerations and angular rates. These measurements are combined with an aerodynamic model and a Kalman-Schmidt filter to estimate the atmospheric conditions. The new method is applied to data from the Mars Science Laboratory mission, which landed the Curiosity rover on the surface of Mars in August 2012. The results of the new estimation algorithm are compared with results from a Flush Air Data Sensing algorithm based on onboard pressure measurements on the vehicle forebody. The comparison indicates that the new proposed estimation method provides estimates consistent with the air data measurements, without the use of pressure measurements. Implications for future missions such as the Mars 2020 entry capsule are described.
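The core inversion idea above (recasting the aerodynamic model as a function of atmospheric state) can be sketched without the Kalman-Schmidt machinery. The following Python snippet, with assumed capsule-like numbers rather than mission values, recovers density from sensed drag deceleration and the navigated relative velocity:

# Hedged illustration of the density inversion (not the mission code or the
# Kalman-Schmidt filter): invert an assumed drag model for freestream density.
import numpy as np

def density_from_drag(a_drag, v_rel, mass, cd, area):
    """Freestream density implied by sensed drag deceleration [kg/m^3]."""
    qbar = mass * abs(a_drag) / (cd * area)   # dynamic pressure, Pa
    return 2.0 * qbar / v_rel**2

# Illustrative numbers only (roughly capsule-like values, assumed).
print(density_from_drag(a_drag=85.0, v_rel=5000.0, mass=2400.0, cd=1.45, area=15.9))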
Geothermal Life Cycle Calculator
Sullivan, John
2014-03-11
This calculator is a handy tool for interested parties to estimate two key life cycle metrics, fossil energy consumption (Etot) and greenhouse gas emission (ghgtot) ratios, for geothermal electric power production. It is based solely on data developed by Argonne National Laboratory for DOE’s Geothermal Technologies office. The calculator permits the user to explore the impact of a range of key geothermal power production parameters, including plant capacity, lifetime, capacity factor, geothermal technology, well numbers and depths, field exploration, and others on the two metrics just mentioned. Estimates of variations in the results are also available to the user.
Watson, K.; Hummer-Miller, S.
1981-01-01
A method based solely on remote sensing data has been developed to estimate those meteorological effects which are required for thermal-inertia mapping. It assumes that the atmospheric fluxes are spatially invariant and that the solar, sky, and sensible heat fluxes can be approximated by a simple mathematical form. Coefficients are determined by a least-squares method, fitting observational data to our thermal model. A comparison between field measurements and the model-derived flux shows the type of agreement which can be achieved. An analysis of the limitations of the method is also provided. © 1981.
Acoustic Impact of Short-Term Ocean Variability in the Okinawa Trough
2010-01-20
nature run: Generalized Digital Environment Model (GDEM) 3.0 climatology [1], Modular Ocean Data Assimilation System (MODAS) synthetic profiles [2], Navy...potentially preferred for a particular class of applications, and thus a possible source of sound speed for estimates of acoustic transmission. Three, GDEM, MODAS, and NCODA, are statistical products, and the other three are dynamic forecasts from NCOM. GDEM is a climatology based solely on historical
The Brazilian Air Force Health System: Workforce-Needs Estimation Using System Dynamics
2009-03-01
workforce in the system. 3. Non-intervention: This forecast provides a potential scenario of workforce numbers, based solely on actual numbers derived from...present knowledge and actions taken under the assumption that no unexpected interventions will occur. It is a red flag that guides future decisions...represented as a distribution. Bartholomew (1974) establishes a stochastic model of manpower systems as a probabilistic description of the
Yield estimation of corn with multispectral data and the potential of using imaging spectrometers
NASA Astrophysics Data System (ADS)
Bach, Heike
1997-05-01
In the frame of the Special Yield Estimation, a regular procedure conducted for the European Union to more accurately estimate agricultural yield, a project was conducted for the State Ministry for Rural Environment, Food and Forestry of Baden-Wuerttemberg (Germany) to test remote sensing data with advanced yield formation models for accuracy and timeliness of yield estimation of corn. The methodology employed uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant-production model is used for yield prediction. Based solely on 4 LANDSAT-derived estimates and daily meteorological data, the grain yield of corn stands was determined for 1995. The modelled yield was compared with results independently gathered within the Special Yield Estimation for 23 test fields in the Upper Rhine Valley. The agreement between LANDSAT-based estimates and the Special Yield Estimation shows a relative error of 2.3 percent. The comparison of the results for single fields shows that six weeks before harvest the grain yield of single corn fields was estimated with a mean relative accuracy of 13 percent using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications hyperspectral sensors show great potential to further enhance the results of yield prediction with remote sensing.
NASA Astrophysics Data System (ADS)
Bach, Heike
1998-07-01
In order to test remote sensing data with advanced yield formation models for accuracy and timeliness of yield estimation of corn, a project was conducted for the State Ministry for Rural Environment, Food, and Forestry of Baden-Württemberg (Germany). This project was carried out during the course of the `Special Yield Estimation', a regular procedure conducted for the European Union, to more accurately estimate agricultural yield. The methodology employed uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant-production-model is used for yield prediction. Based solely on four LANDSAT-derived estimates (between May and August) and daily meteorological data, the grain yield of corn fields was determined for 1995. The modelled yields were compared with results gathered independently within the Special Yield Estimation for 23 test fields in the upper Rhine valley. The agreement between LANDSAT-based estimates (six weeks before harvest) and Special Yield Estimation (at harvest) shows a relative error of 2.3%. The comparison of the results for single fields shows that six weeks before harvest, the grain yield of corn was estimated with a mean relative accuracy of 13% using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications hyperspectral sensors show great potential to further enhance the results for yield prediction with remote sensing.
Developmental and Individual Differences in Pure Numerical Estimation
ERIC Educational Resources Information Center
Booth, Julie L.; Siegler, Robert S.
2006-01-01
The authors examined developmental and individual differences in pure numerical estimation, the type of estimation that depends solely on knowledge of numbers. Children between kindergarten and 4th grade were asked to solve 4 types of numerical estimation problems: computational, numerosity, measurement, and number line. In Experiment 1,…
An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.
2012-01-01
A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
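A minimal numpy sketch of the underdetermined setup described above (not the NASA implementation): with more health parameters than sensors, a reduced tuning vector is estimated and then transformed back to the health-parameter space through a transformation matrix, here simply a pseudo-inverse standing in for the optimally designed one.

# Underdetermined estimation sketch: estimate q (few parameters), map to h (many).
# All matrices and noise levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_health, n_tuners = 10, 4                  # more unknowns than measurements
H = rng.normal(size=(n_tuners, n_health))   # assumed sensor-influence matrix
h_true = rng.normal(size=n_health)          # true deterioration/fault effects
z = H @ h_true + 0.01 * rng.normal(size=n_tuners)   # sensed residuals

# Choose a transformation V* mapping tuners back to health-parameter space;
# a pseudo-inverse is used here purely for illustration.
V_star = np.linalg.pinv(H)                               # (n_health x n_tuners)
q_hat = np.linalg.lstsq(H @ V_star, z, rcond=None)[0]    # stands in for the Kalman estimate
h_hat = V_star @ q_hat                                   # transformed health-parameter estimate
print("estimated vs true (first 3):", h_hat[:3], h_true[:3])

Because the mapping is not invertible, h_hat smears the true effects across several health parameters, which is consistent with the abstract's observation that fault isolation based solely on the estimated health parameters is not ideal.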
Lian, Meifei; Zhao, Kai; Feng, Yunzhi; Yao, Qian
The reliability of combining natural teeth and implants in one removable prosthesis is controversial. This systematic review was conducted to evaluate the prognosis of combined tooth/implant-supported double-crown-retained removable dental prostheses (DCR-RDPs) and to compare them with solely implant-supported prostheses with a minimum observation period of 3 years. Electronic database (PubMed, Embase, Central, and SCI) and manual searches up to August 2016 were conducted to identify human clinical studies on tooth/implant-supported DCR-RDPs. Literature selection and data extraction were accomplished by two independent reviewers. Meta-analyses of survival and complication rates were performed separately for combined tooth/implant-supported and solely implant-supported DCR-RDPs. Among the initially identified 366 articles, 17 were included in a quantitative analysis. The estimated overall cumulative survival rate (CSR) for implants in combined tooth/implant-supported DCR-RDPs was 98.72% (95% confidence interval [95% CI]: 96.98% to 99.82%), and that for implants in solely implant-supported DCR-RDPs was 98.83% (95% CI: 97.45% to 99.75%). The summary CSR for abutment teeth was 92.96% (95% CI: 85.38% to 98.12%). Double-crown-retained dentures with both abutment types showed high CSRs, most of which were approximately 100%. Regarding prosthetic maintenance treatment, the estimated incidence for patients treated with combined tooth/implant-supported RDPs was 0.164 (95% CI: 0.089 to 0.305) per patient per year (T/P/Y) and that for patients restored with solely implant-supported RDPs was 0.260 (95% CI: 0.149 to 0.454) T/P/Y. Based on four studies with combined tooth/implant-supported DCR-RDPs, no intrusion phenomena were encountered. Subject to the limitations of the present review, combining remaining teeth and implants in DCR-RDPs is a reliable and predictable treatment modality for partially edentulous patients. Comparable high survival rates and minor biologic or technical complications are observed for combined tooth/implant-supported and solely implant-supported DCR-RDPs. Due to the heterogeneity of the included studies, the results must be interpreted with caution.
Pradhan, Abani K; Ivanek, Renata; Gröhn, Yrjö T; Bukowski, Robert; Geornaras, Ifigenia; Sofos, John N; Wiedmann, Martin
2010-04-01
The objective of this study was to estimate the relative risk of listeriosis-associated deaths attributable to Listeria monocytogenes contamination in ham and turkey formulated without and with growth inhibitors (GIs). Two contamination scenarios were investigated: (i) prepackaged deli meats with contamination originating solely from manufacture at a frequency of 0.4% (based on reported data) and (ii) retail-sliced deli meats with contamination originating solely from retail at a frequency of 2.3% (based on reported data). Using a manufacture-to-consumption risk assessment with product-specific growth kinetic parameters (i.e., lag phase and exponential growth rate), reformulation with GIs was estimated to reduce human listeriosis deaths linked to ham and turkey by 2.8- and 9-fold, respectively, when contamination originated at manufacture and by 1.9- and 2.8-fold, respectively, for products contaminated at retail. Contamination originating at retail was estimated to account for 76 and 63% of listeriosis deaths caused by ham and turkey, respectively, when all products were formulated without GIs and for 83 and 84% of listeriosis deaths caused by ham and turkey, respectively, when all products were formulated with GIs. Sensitivity analyses indicated that storage temperature was the most important factor affecting the estimation of per annum relative risk. Scenario analyses suggested that reducing storage temperature in home refrigerators to consistently below 7 degrees C would greatly reduce the risk of human listeriosis deaths, whereas reducing storage time appeared to be less effective. Overall, our data indicate a critical need for further development and implementation of effective control strategies to reduce L. monocytogenes contamination at the retail level.
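A hedged sketch of the growth-kinetics ingredient referred to above (lag phase plus exponential growth rate), with placeholder parameter values rather than the study's product-specific estimates:

# Log10 growth during storage with a lag phase, capped at a maximum density.
# Parameter values are placeholders, not the study's estimates.
import numpy as np

def log10_count(t_days, n0_log10, lag_days, egr_log10_per_day, nmax_log10=8.0):
    growth = np.clip(t_days - lag_days, 0.0, None) * egr_log10_per_day
    return np.minimum(n0_log10 + growth, nmax_log10)

# Same storage time, without vs with a growth inhibitor (longer lag, slower growth).
print(log10_count(14, n0_log10=0.0, lag_days=2.0, egr_log10_per_day=0.5))   # no GI
print(log10_count(14, n0_log10=0.0, lag_days=10.0, egr_log10_per_day=0.1))  # with GI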
Marginal estimator for the aberrations of a space telescope by phase diversity
NASA Astrophysics Data System (ADS)
Blanc, Amandine; Mugnier, Laurent; Idier, Jérôme
2017-11-01
In this communication, we propose a novel method for estimating the aberrations of a space telescope from phase diversity data. The images recorded by such a telescope can be degraded by optical aberrations due to design, fabrication or misalignments. Phase diversity is a technique that allows the estimation of aberrations. The only estimator found in the relevant literature is based on a joint estimation of the aberrated phase and the observed object. We recall this approach and study the behavior of this joint estimator by means of simulations. We propose a novel marginal estimator of the sole phase. It is obtained by integrating the observed object out of the problem; indeed, this object is a nuisance parameter in our problem. This drastically reduces the number of unknowns and provides better asymptotic properties. This estimator is implemented and its properties are validated by simulation. Its performance is equal to or even better than that of the joint estimator for the same computing cost.
Estimating Missing Unit Process Data in Life Cycle Assessment Using a Similarity-Based Approach.
Hou, Ping; Cai, Jiarui; Qu, Shen; Xu, Ming
2018-05-01
In life cycle assessment (LCA), collecting unit process data from the empirical sources (i.e., meter readings, operation logs/journals) is often costly and time-consuming. We propose a new computational approach to estimate missing unit process data solely relying on limited known data based on a similarity-based link prediction method. The intuition is that similar processes in a unit process network tend to have similar material/energy inputs and waste/emission outputs. We use the ecoinvent 3.1 unit process data sets to test our method in four steps: (1) dividing the data sets into a training set and a test set; (2) randomly removing certain numbers of data in the test set indicated as missing; (3) using similarity-weighted means of various numbers of most similar processes in the training set to estimate the missing data in the test set; and (4) comparing estimated data with the original values to determine the performance of the estimation. The results show that missing data can be accurately estimated when less than 5% data are missing in one process. The estimation performance decreases as the percentage of missing data increases. This study provides a new approach to compile unit process data and demonstrates a promising potential of using computational approaches for LCA data compilation.
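The following Python sketch, with made-up flow vectors rather than ecoinvent data, illustrates the similarity-weighted estimation idea: a missing exchange is filled with the similarity-weighted mean of the same exchange in the k most similar processes, where similarity is computed from the flows that are known:

# Similarity-based imputation of one missing unit-process flow (illustrative sketch).
import numpy as np

def impute(target, candidates, missing_idx, k=5):
    """target: 1-D flow vector with np.nan at missing_idx;
    candidates: 2-D array of complete unit-process vectors (rows = processes)."""
    known = np.ones(target.size, dtype=bool)
    known[missing_idx] = False
    t = target[known]
    c = candidates[:, known]
    sims = (c @ t) / (np.linalg.norm(c, axis=1) * np.linalg.norm(t) + 1e-12)  # cosine similarity
    top = np.argsort(sims)[-k:]                      # k most similar processes
    w = sims[top]
    return float(np.dot(w, candidates[top, missing_idx]) / (w.sum() + 1e-12))

# Toy example with made-up flow vectors.
cands = np.abs(np.random.default_rng(3).normal(size=(50, 8)))
proc = cands[0].copy()
proc[4] = np.nan
print("estimated missing flow:", impute(proc, cands[1:], missing_idx=4))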
Real-Time Radar-Based Tracking and State Estimation of Multiple Non-Conformant Aircraft
NASA Technical Reports Server (NTRS)
Cook, Brandon; Arnett, Timothy; Macmann, Owen; Kumar, Manish
2017-01-01
In this study, a novel solution for automated tracking of multiple unknown aircraft is proposed. Many current methods use transponders to self-report state information and augment track identification. While conformant aircraft typically report transponder information to alert surrounding aircraft of its state, vehicles may exist in the airspace that are non-compliant and need to be accurately tracked using alternative methods. In this study, a multi-agent tracking solution is presented that solely utilizes primary surveillance radar data to estimate aircraft state information. Main research challenges include state estimation, track management, data association, and establishing persistent track validity. In an effort to realize these challenges, techniques such as Maximum a Posteriori estimation, Kalman filtering, degree of membership data association, and Nearest Neighbor Spanning Tree clustering are implemented for this application.
Baranski, Przemyslaw; Strumillo, Pawel
2012-01-01
The paper presents an algorithm for estimating a pedestrian location in an urban environment. The algorithm is based on the particle filter and uses different data sources: a GPS receiver, inertial sensors, probability maps and a stereo camera. Inertial sensors are used to estimate a relative displacement of a pedestrian. A gyroscope estimates a change in the heading direction. An accelerometer is used to count a pedestrian's steps and their lengths. The so-called probability maps help to limit GPS inaccuracy by imposing constraints on pedestrian kinematics, e.g., it is assumed that a pedestrian cannot cross buildings, fences etc. This limits position inaccuracy to ca. 10 m. Incorporation of depth estimates derived from a stereo camera that are compared to the 3D model of an environment has enabled further reduction of positioning errors. As a result, for 90% of the time, the algorithm is able to estimate a pedestrian location with an error smaller than 2 m, compared to an error of 6.5 m for a navigation based solely on GPS.
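A hedged sketch of one filter update in the spirit of the approach above (not the authors' implementation): particles are propagated with a step length and heading from inertial sensing, particles violating the probability map are discarded, and the remainder are reweighted against a noisy GPS fix.

# One particle-filter update with a map constraint and a GPS likelihood
# (all geometry, noise levels and the building footprint are assumptions).
import numpy as np

rng = np.random.default_rng(4)
N = 1000
particles = rng.normal([0.0, 0.0], 3.0, size=(N, 2))   # x, y in metres
weights = np.full(N, 1.0 / N)

def inside_building(p):                     # stand-in for the probability map
    return (p[:, 0] > 5.0) & (p[:, 0] < 15.0) & (p[:, 1] > 5.0) & (p[:, 1] < 15.0)

step, heading = 0.7, np.deg2rad(30)         # step length and heading from inertial sensors
particles += step * np.array([np.cos(heading), np.sin(heading)])
particles += rng.normal(0, 0.1, size=particles.shape)   # motion noise

weights[inside_building(particles)] = 0.0   # map constraint: pedestrians cannot cross buildings
gps = np.array([1.0, 0.5])                  # noisy GPS fix, sigma ~ 6.5 m assumed
weights *= np.exp(-np.sum((particles - gps) ** 2, axis=1) / (2 * 6.5 ** 2))
weights /= weights.sum()

estimate = weights @ particles              # weighted mean position
print("estimated position:", estimate)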
Increasing precision of turbidity-based suspended sediment concentration and load estimates.
Jastram, John D; Zipper, Carl E; Zelazny, Lucian W; Hyer, Kenneth E
2010-01-01
Turbidity is an effective tool for estimating and monitoring suspended sediments in aquatic systems. Turbidity can be measured in situ remotely and at fine temporal scales as a surrogate for suspended sediment concentration (SSC), providing opportunity for a more complete record of SSC than is possible with physical sampling approaches. However, there is variability in turbidity-based SSC estimates and in sediment loadings calculated from those estimates. This study investigated the potential to improve turbidity-based SSC, and by extension the resulting sediment loading estimates, by incorporating hydrologic variables that can be monitored remotely and continuously (typically 15-min intervals) into the SSC estimation procedure. On the Roanoke River in southwestern Virginia, hydrologic stage, turbidity, and other water-quality parameters were monitored with in situ instrumentation; suspended sediments were sampled manually during elevated turbidity events; samples were analyzed for SSC and physical properties including particle-size distribution and organic C content; and rainfall was quantified by geologic source area. The study identified physical properties of the suspended-sediment samples that contribute to SSC estimation variance and hydrologic variables that explained variability of those physical properties. Results indicated that the inclusion of any of the measured physical properties in turbidity-based SSC estimation models reduces unexplained variance. Further, the use of hydrologic variables to represent these physical properties, along with turbidity, resulted in a model, relying solely on data collected remotely and continuously, that estimated SSC with less variance than a conventional turbidity-based univariate model, allowing a more precise estimate of sediment loading. Modeling results are consistent with known mechanisms governing sediment transport in hydrologic systems.
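A minimal sketch on simulated data (not the Roanoke River record) of the comparison described above: a univariate turbidity-only regression versus a multivariate model that adds continuously monitored hydrologic variables such as stage and rainfall:

# Compare residual variance of univariate vs multivariate log-linear SSC models
# (simulated data; coefficients and noise levels are illustrative assumptions).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 400
turbidity = rng.lognormal(3.0, 0.8, n)
stage = rng.lognormal(0.5, 0.4, n)
rain = rng.exponential(5.0, n)
# Simulated SSC in which hydrologic conditions (proxies for particle properties) also matter.
ssc = np.exp(0.9 * np.log(turbidity) + 0.4 * np.log(stage) + 0.02 * rain
             + rng.normal(0, 0.25, n))

y = np.log(ssc)
X_uni = np.log(turbidity).reshape(-1, 1)
X_multi = np.column_stack([np.log(turbidity), np.log(stage), rain])

for name, X in [("turbidity only", X_uni), ("turbidity + hydrology", X_multi)]:
    model = LinearRegression().fit(X, y)
    resid = y - model.predict(X)
    print(name, "residual variance:", resid.var())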
NASA Astrophysics Data System (ADS)
Sun, Jia; Shi, Shuo; Yang, Jian; Du, Lin; Gong, Wei; Chen, Biwu; Song, Shalei
2018-01-01
Leaf biochemical constituents provide useful information about major ecological processes. As a fast and nondestructive method, remote sensing techniques are critical for inferring leaf biochemistry via models. The PROSPECT model has been widely applied in retrieving leaf traits by providing hemispherical reflectance and transmittance. However, the process of measuring both reflectance and transmittance can be time-consuming and laborious. In contrast to using the reflectance spectrum alone in PROSPECT model inversion, which has been adopted by many researchers, this study proposes to use the transmission spectrum alone, with the increasing availability of the latter through various remote sensing techniques. We then analyzed the performance of PROSPECT model inversion with (1) only the transmission spectrum, (2) only reflectance, and (3) both reflectance and transmittance, using synthetic datasets (with varying levels of random noise and systematic noise) and two experimental datasets (LOPEX and ANGERS). The results show that (1) PROSPECT-5 model inversion based solely on the transmission spectrum is viable, with results generally better than those based solely on the reflectance spectrum; and (2) leaf dry matter can be better estimated using only transmittance or reflectance than with both reflectance and transmittance spectra.
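A hedged sketch of inversion from transmittance alone. It assumes some Python implementation of PROSPECT-5 is available as a function prospect5_transmittance(N, cab, car, cw, cm) returning a spectrum; that name is hypothetical, not a real package call. The inversion itself is ordinary bounded least squares on the transmission spectrum, shown here with a synthetic stand-in model so the code runs end to end:

# Retrieve leaf parameters by fitting a leaf optical model to a measured
# transmission spectrum (sketch; bounds and starting values are assumptions).
import numpy as np
from scipy.optimize import least_squares

def invert_from_transmittance(t_measured, prospect5_transmittance):
    def residuals(p):
        N, cab, car, cw, cm = p
        return prospect5_transmittance(N, cab, car, cw, cm) - t_measured
    x0 = np.array([1.5, 40.0, 8.0, 0.01, 0.009])          # typical leaf values (assumed)
    bounds = ([1.0, 0.0, 0.0, 1e-4, 1e-4],
              [3.0, 100.0, 25.0, 0.05, 0.05])
    sol = least_squares(residuals, x0, bounds=bounds)
    return dict(zip(["N", "Cab", "Car", "Cw", "Cm"], sol.x))

# Toy check with a synthetic stand-in model (NOT PROSPECT): confirms the
# inversion machinery runs end to end.
wl = np.linspace(400, 2500, 211)
def toy_model(N, cab, car, cw, cm):
    return 0.5 / N * np.exp(-(cab + car) / 100.0 - (cw + cm) * wl / 50.0)
t_obs = toy_model(1.8, 50.0, 10.0, 0.012, 0.01)
print(invert_from_transmittance(t_obs, toy_model))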
Metamorphic sole formation reveals plate interface rheology during early subduction
NASA Astrophysics Data System (ADS)
Mathieu, S.; Agard, P.; Dubacq, B.; Plunder, A.; Prigent, C.
2015-12-01
Metamorphic soles are m- to ~500 m-thick tectonic slices welded beneath most large ophiolites. They correspond to highly to mildly deformed portions of oceanic lithosphere metamorphosed at amphibolite- to granulite-facies peak conditions. Metamorphic soles are interpreted as formed ≤1-2 Ma after intra-oceanic subduction initiation by heat transfer from the hot, incipient mantle wedge to the underthrusting lower plate. Their early accretion and exhumation together with the future ophiolite implies at least one jump of the subduction plate interface from above to below the metamorphic sole. Metamorphic soles thus represent one of the few remnants of the very early evolution of the subduction plate interface and provide major constraints on the thermal structure and the effective rheology of the crust and mantle along the nascent slab interface. We herein present a detailed structural and petrological description of the Oman and Turkey metamorphic soles. Both soles present a steep inverted metamorphic structure, with isograds subparallel to the peridotite contact, in which the proportion of mafic rocks and the pressure and temperature conditions increase upward. They comprise, as do most metamorphic soles worldwide, two main units: (1) a high-grade unit adjacent to the overlying peridotite composed of granulitized to amphibolized metabasalts, with rare metasedimentary interlayers (~800±100°C at 10±2 kbar), and (2) a low-grade greenschist-facies unit composed of metasedimentary rocks with rare metatuffs (~500±100°C at 5±2 kbar). We provide for the first time refined P-T peak condition estimates by means of pseudosection modelling, and maximum temperature constraints for the Oman low-grade sole by Raman thermometry. In order to quantify micro-scale deformation through the sole, we also present EBSD data on the Oman garnet-bearing and garnet-free high-grade sole. With these new constraints, we finally propose a new conceptual mechanical model for metamorphic sole formation. This model excludes the presence of a continuous inverted metamorphic gradient through the sole but implies the stacking of several homogeneous slivers to constitute the present structure of the sole. These successive thrusts are the result of rheological changes as the plate interface progressively cools.
Real-time estimation of horizontal gaze angle by saccade integration using in-ear electrooculography
2018-01-01
The manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, we used an algorithm that calculates absolute eye gaze angle via statistical analysis of detected saccades. The estimated eye positions of the new algorithm were still noisy. However, the performance in terms of Pearson product-moment correlation coefficients was significantly better than the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for light-weight and portable horizontal eye gaze angle estimation suitable for a broad range of applications, for instance, hearing aids that steer the directivity of microphones in the direction of the user's eye gaze. PMID:29304120
Hládek, Ľuboš; Porr, Bernd; Brimijoin, W Owen
2018-01-01
The manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, we used an algorithm that calculates absolute eye gaze angle via statistical analysis of detected saccades. The estimated eye positions of the new algorithm were still noisy. However, the performance in terms of Pearson product-moment correlation coefficients was significantly better than the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for light-weight and portable horizontal eye gaze angle estimation suitable for a broad range of applications, for instance, hearing aids that steer the directivity of microphones in the direction of the user's eye gaze.
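As an illustration of the saccade-integration idea, the sketch below detects saccades from the EOG velocity, accumulates their amplitudes into a running angle and re-references the result statistically; the gain, threshold and the "median gaze is straight ahead" rule are assumptions, not the published algorithm.

```python
# Simplified gaze-angle estimate by saccade integration from single-channel EOG.
import numpy as np

def gaze_from_eog(eog_uV, fs, gain_deg_per_uV=0.05, vel_thresh_deg_s=300.0):
    vel = np.gradient(eog_uV) * fs                              # uV/s
    is_saccade = np.abs(vel) * gain_deg_per_uV > vel_thresh_deg_s
    gaze = np.zeros_like(eog_uV)
    angle = 0.0
    for i in range(1, len(eog_uV)):
        if is_saccade[i]:
            # accumulate only saccadic steps, ignoring slow drift between them
            angle += (eog_uV[i] - eog_uV[i - 1]) * gain_deg_per_uV
        gaze[i] = angle
    # statistical re-referencing: assume gaze is straight ahead (0 deg) on average
    return gaze - np.median(gaze)
```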
Determining Level of Service for Multilane Median Opening Zone
NASA Astrophysics Data System (ADS)
Ali, Paydar; Johnnie, Ben-Edigbe
2017-08-01
The road system is a capital-intensive investment, requiring a thorough schematic framework and funding. Roads are built to provide an intrinsic quality of service that satisfies road users. Roads that provide good service are expected to deliver operational performance consistent with their design specifications. Level of service and cumulative percentile speed distribution methods have been used in previous studies to estimate the quality of multilane highway service. Whilst the level of service approach relies on the speed/flow curve, the cumulative percentile speed distribution is based solely on speed. These estimation methods were used in studies carried out in Johor, Malaysia. The aim of the studies was to ascertain the extent of speed reduction caused by midblock U-turn facilities as well as to verify which estimation method is more reliable. At selected sites, road segments for both directional flows were divided into free-flow and midblock zones. Traffic volume, speed and vehicle type data for each zone were collected continuously for six weeks. Both estimation methods confirmed that speed reduction would be caused by midblock U-turn facilities. However, the level of service method suggested that the quality of service would improve from level F to E or D in the midblock zone in spite of the speed reduction. Level of service was responding to the traffic volume reduction at the midblock U-turn facility, not to the travel speed reduction. The studies concluded that since level of service is more responsive to traffic volume reduction than to travel speed, it cannot be solely relied upon when assessing the quality of multilane highway service.
3D Indoor Positioning of UAVs with Spread Spectrum Ultrasound and Time-of-Flight Cameras
Aguilera, Teodoro
2017-01-01
This work proposes the use of a hybrid acoustic and optical indoor positioning system for the accurate 3D positioning of Unmanned Aerial Vehicles (UAVs). The acoustic module of this system is based on a Time-Code Division Multiple Access (T-CDMA) scheme, where the sequential emission of five spread spectrum ultrasonic codes is performed to compute the horizontal vehicle position following a 2D multilateration procedure. The optical module is based on a Time-Of-Flight (TOF) camera that provides an initial estimation for the vehicle height. A recursive algorithm programmed on an external computer is then proposed to refine the estimated position. Experimental results show that the proposed system can increase the accuracy of a solely acoustic system by 70–80% in terms of positioning mean square error. PMID:29301211
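A brief sketch of the 2D multilateration step under the assumption that the ToF camera supplies the height: the horizontal position is recovered by least squares from ranges to beacons at known coordinates. Beacon positions and the solver choice are illustrative, not those of the described system.

```python
# Recover (x, y) from ultrasonic ranges to known beacons, with z from the ToF camera.
import numpy as np
from scipy.optimize import least_squares

def multilaterate_xy(beacons_xyz, ranges, z_tof):
    def residuals(p):
        pos = np.array([p[0], p[1], z_tof])
        return np.linalg.norm(beacons_xyz - pos, axis=1) - ranges
    guess = beacons_xyz[:, :2].mean(axis=0)        # start at the beacon centroid
    return least_squares(residuals, guess).x

# toy check with five illustrative beacons
beacons = np.array([[0, 0, 2.5], [5, 0, 2.5], [5, 5, 2.5], [0, 5, 2.5], [2.5, 2.5, 3.0]])
true_pos = np.array([1.8, 3.2, 1.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1)
print(multilaterate_xy(beacons, ranges, z_tof=1.0))   # ~[1.8, 3.2]
```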
Survival and harvest-related mortality of white-tailed deer in Massachusetts
Mcdonald, John E.; DeStefano, Stephen; Gaughan, Christopher; Mayer, Michael; Woytek, William A.; Christensen, Sonja; Fuller, Todd K.
2011-01-01
We monitored 142 radiocollared adult (≥1.0 yr old) white-tailed deer (Odocoileus virginianus) in 3 study areas of Massachusetts, USA, to estimate annual survival and mortality due to legal hunting. We then applied these rates to deer harvest information to estimate deer population trends over time, and compared these to trends derived solely from harvest data estimates. Estimated adult female survival rates were similar (0.82–0.86), and uniformly high, across 3 management zones in Massachusetts that differed in landscape composition, human density, and harvest regulations. Legal hunting accounted for 16–29% of all adult female mortality. Estimated adult male survival rates varied from 0.55 to 0.79, and legal hunting accounted for 40–75% of all mortality. Use of composite hunting mortality rates produced realistic estimates for adult deer populations in 2 zones, but not for the third, where estimation was hindered by regulatory restrictions on antlerless deer harvest. In addition, the population estimates we calculated were generally higher than those derived from population reconstruction, likely due to relatively low harvest pressure. Legal harvest may not be the dominant form of deer mortality in developed landscapes; thus, estimates of populations or trends that rely solely on harvest data will likely be underestimates.
Matsuzaki, Ryosuke; Tachikawa, Takeshi; Ishizuka, Junya
2018-03-01
Accurate simulations of carbon fiber-reinforced plastic (CFRP) molding are vital for the development of high-quality products. However, such simulations are challenging and previous attempts to improve the accuracy of simulations by incorporating the data acquired from mold monitoring have not been completely successful. Therefore, in the present study, we developed a method to accurately predict various CFRP thermoset molding characteristics based on data assimilation, a process that combines theoretical and experimental values. The degree of cure as well as temperature and thermal conductivity distributions during the molding process were estimated using both temperature data and numerical simulations. An initial numerical experiment demonstrated that the internal mold state could be determined solely from the surface temperature values. A subsequent numerical experiment to validate this method showed that estimations based on surface temperatures were highly accurate in the case of degree of cure and internal temperature, although predictions of thermal conductivity were more difficult.
The social cost of rheumatoid arthritis in Italy: the results of an estimation exercise.
Turchetti, G; Bellelli, S; Mosca, M
2014-03-14
The objective of this study is to estimate the mean annual social cost per adult person and the total social cost of rheumatoid arthritis (RA) in Italy. A literature review was performed by searching primary economic studies on adults in order to collect cost data on RA in Italy over the last decade. The review results were merged with data from institutional sources to estimate the social cost of RA in Italy, following the methodological steps of cost-of-illness analysis. The mean annual social cost of RA was €13,595 per adult patient in Italy. Affecting 259,795 persons, RA generates a social cost of €3.5 billion in Italy. Non-medical direct costs and indirect costs represent the main cost items (48% and 31%, respectively) of the total social cost of RA in Italy. Based on these results, it appears evident that assessing the economic burden of RA solely on the basis of direct medical costs gives a limited view of the phenomenon.
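As a quick consistency check of the reported total (taking the quoted per-patient cost and prevalence at face value), the figures multiply out as follows:

\[
13{,}595~\text{EUR/patient} \times 259{,}795~\text{patients} \;\approx\; 3.53\times 10^{9}~\text{EUR} \;\approx\; \text{EUR}~3.5~\text{billion}.
\]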
Frahm, Ken Steffen; Mørch, Carsten Dahl; Grill, Warren M; Andersen, Ole Kæseler
2013-09-01
During electrocutaneous stimulations, variation in skin properties across locations can lead to differences in neural activation. However, little focus has been given to the effect of different skin thicknesses on neural activation. Electrical stimulation was applied to six sites across the sole of the foot. The intensities used were two and four times perception threshold. The subjects (n = 8) rated the perception quality and intensity using the McGill Pain Questionnaire and a visual analog scale (VAS). A finite element model was developed and combined with the activation function (AF) to estimate neural activation. Electrical stimulation was perceived as significantly less sharp at the heel compared to all other sites, except one site in the forefoot (logistic regression, p < 0.05). The VAS scores were significantly higher in the arch than at the heel (RM ANOVA, p < 0.05). The model showed that the AF was between 91 and 231 % higher at the five other sites than at the heel. The differences in perception across the sole of the foot indicated that the CNS received different inputs depending on the stimulus site. The lower AF at the heel indicated that the skin thicknesses could contribute to the perceived differences.
PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins
Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude
2015-01-01
Predicting protein pocket's ability to bind drug-like molecules with high affinity, i.e. druggability, is of major interest in the target identification phase of drug discovery. Therefore, pocket druggability investigations represent a key step of compound clinical progression projects. Currently computational druggability prediction models are attached to one unique pocket estimation method despite pocket estimation uncertainties. In this paper, we propose ‘PockDrug-Server’ to predict pocket druggability, efficient on both (i) estimated pockets guided by the ligand proximity (extracted by proximity to a ligand from a holo protein structure) and (ii) estimated pockets based solely on protein structure information (based on amino atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results using different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, thus efficient using apo pockets that are challenging to estimate. It clearly distinguishes druggable from less druggable pockets using different estimation methods and outperformed recent druggability models for apo pockets. It can be carried out from one or a set of apo/holo proteins using different pocket estimation methods proposed by our web server or from any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr. PMID:25956651
PockDrug-Server: a new web server for predicting pocket druggability on holo and apo proteins.
Hussein, Hiba Abi; Borrel, Alexandre; Geneix, Colette; Petitjean, Michel; Regad, Leslie; Camproux, Anne-Claude
2015-07-01
Predicting protein pocket's ability to bind drug-like molecules with high affinity, i.e. druggability, is of major interest in the target identification phase of drug discovery. Therefore, pocket druggability investigations represent a key step of compound clinical progression projects. Currently computational druggability prediction models are attached to one unique pocket estimation method despite pocket estimation uncertainties. In this paper, we propose 'PockDrug-Server' to predict pocket druggability, efficient on both (i) estimated pockets guided by the ligand proximity (extracted by proximity to a ligand from a holo protein structure) and (ii) estimated pockets based solely on protein structure information (based on amino atoms that form the surface of potential binding cavities). PockDrug-Server provides consistent druggability results using different pocket estimation methods. It is robust with respect to pocket boundary and estimation uncertainties, thus efficient using apo pockets that are challenging to estimate. It clearly distinguishes druggable from less druggable pockets using different estimation methods and outperformed recent druggability models for apo pockets. It can be carried out from one or a set of apo/holo proteins using different pocket estimation methods proposed by our web server or from any pocket previously estimated by the user. PockDrug-Server is publicly available at: http://pockdrug.rpbs.univ-paris-diderot.fr. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Kohoutek, Tobias K.; Mautz, Rainer; Wegner, Jan D.
2013-01-01
We present a novel approach for autonomous location estimation and navigation in indoor environments using range images and prior scene knowledge from a GIS database (CityGML). What makes this task challenging is the arbitrary relative spatial relation between the GIS and the Time-of-Flight (ToF) range camera, further complicated by a markerless configuration. We propose to estimate the camera's pose solely based on matching of GIS objects and their detected location in image sequences. We develop a coarse-to-fine matching strategy that is able to match point clouds without any initial parameters. Experiments with a state-of-the-art ToF point cloud show that our proposed method delivers an absolute camera position with decimeter accuracy, which is sufficient for many real-world applications (e.g., collision avoidance). PMID:23435055
NASA Astrophysics Data System (ADS)
Garcin, Matthieu
2017-10-01
Hurst exponents depict the long memory of a time series. For human-dependent phenomena, as in finance, this feature may vary over time. This justifies modelling the dynamics by multifractional Brownian motions, which are consistent with time-dependent Hurst exponents. We improve the existing literature on estimating time-dependent Hurst exponents by proposing a smooth estimate obtained by variational calculus. This method is very general and not restricted to the sole Hurst framework. It is globally more accurate and easier to apply than other existing non-parametric estimation techniques. Besides, in the field of Hurst exponents, it makes it possible to make forecasts based on the estimated multifractional Brownian motion. The application to high-frequency foreign exchange markets (GBP, CHF, SEK, USD, CAD, AUD, JPY, CNY and SGD, all against EUR) shows significantly good forecasts. When the Hurst exponent is higher than 0.5, which indicates a long-memory feature, the accuracy is higher.
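For illustration only, a simple sliding-window estimator of a time-varying Hurst exponent based on the order-2 structure function E|X(t+τ)−X(t)|² ∝ τ^{2H}; this is not the variational-calculus smoothing proposed in the paper, and the window and lag choices are assumptions.

```python
# Windowed Hurst estimate via the order-2 structure function of an fBm-like path.
import numpy as np

def hurst_window(x, lags=(1, 2, 4, 8, 16)):
    lags = np.asarray(lags)
    sf = [np.mean(np.abs(x[l:] - x[:-l]) ** 2) for l in lags]   # structure function
    slope, _ = np.polyfit(np.log(lags), np.log(sf), 1)          # slope = 2H
    return slope / 2.0

def rolling_hurst(x, win=512, step=64):
    centers, H = [], []
    for start in range(0, len(x) - win, step):
        H.append(hurst_window(x[start:start + win]))
        centers.append(start + win // 2)
    return np.array(centers), np.array(H)
```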
Macisaac, R J; Tsalamandris, C; Thomas, M C; Premaratne, E; Panagiotopoulos, S; Smith, T J; Poon, A; Jenkins, M A; Ratnaike, S I; Power, D A; Jerums, G
2006-07-01
We compared the predictive performance of a GFR based on serum cystatin C levels with commonly used creatinine-based methods in subjects with diabetes. In a cross-sectional study of 251 consecutive clinic patients, the mean reference (plasma clearance of (99m)Tc-diethylene-triamine-penta-acetic acid) GFR (iGFR) was 88+/-2 ml min(-1) 1.73 m(-2). A regression equation describing the relationship between iGFR and 1/cystatin C levels was derived from a test population (n=125) to allow for the estimation of GFR by cystatin C (eGFR-cystatin C). The predictive performance of eGFR-cystatin C, the Modification of Diet in Renal Disease 4 variable formula (MDRD-4) and Cockcroft-Gault (C-G) formulas were then compared in a validation population (n=126). There was no difference in renal function (ml min(-1) 1.73 m(-2)) as measured by iGFR (89.2+/-3.0), eGFR-cystatin C (86.8+/-2.5), MDRD-4 (87.0+/-2.8) or C-G (92.3+/-3.5). All three estimates of renal function had similar precision and accuracy. Estimates of GFR based solely on serum cystatin C levels had the same predictive potential when compared with the MDRD-4 and C-G formulas.
Sensor data security level estimation scheme for wireless sensor networks.
Ramos, Alex; Filho, Raimir Holanda
2015-01-19
Due to their increasing dissemination, wireless sensor networks (WSNs) have become the target of more and more sophisticated attacks, even capable of circumventing both attack detection and prevention mechanisms. This may cause WSN users, who totally trust these security mechanisms, to think that a sensor reading is secure, even when an adversary has corrupted it. For that reason, a scheme capable of estimating the security level (SL) that these mechanisms provide to sensor data is needed, so that users can be aware of the actual security state of this data and can make better decisions on its use. However, existing security estimation schemes proposed for WSNs fully ignore detection mechanisms and analyze solely the security provided by prevention mechanisms. In this context, this work presents the sensor data security estimator (SDSE), a new comprehensive security estimation scheme for WSNs. SDSE is designed for estimating the sensor data security level based on security metrics that analyze both attack prevention and detection mechanisms. In order to validate our proposed scheme, we have carried out extensive simulations that show the high accuracy of SDSE estimates.
Sensor Data Security Level Estimation Scheme for Wireless Sensor Networks
Ramos, Alex; Filho, Raimir Holanda
2015-01-01
Due to their increasing dissemination, wireless sensor networks (WSNs) have become the target of more and more sophisticated attacks, even capable of circumventing both attack detection and prevention mechanisms. This may cause WSN users, who totally trust these security mechanisms, to think that a sensor reading is secure, even when an adversary has corrupted it. For that reason, a scheme capable of estimating the security level (SL) that these mechanisms provide to sensor data is needed, so that users can be aware of the actual security state of this data and can make better decisions on its use. However, existing security estimation schemes proposed for WSNs fully ignore detection mechanisms and analyze solely the security provided by prevention mechanisms. In this context, this work presents the sensor data security estimator (SDSE), a new comprehensive security estimation scheme for WSNs. SDSE is designed for estimating the sensor data security level based on security metrics that analyze both attack prevention and detection mechanisms. In order to validate our proposed scheme, we have carried out extensive simulations that show the high accuracy of SDSE estimates. PMID:25608215
NASA Astrophysics Data System (ADS)
Gong, L.
2013-12-01
Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, as well as model uncertainties. A new purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface, but also on hydrological characteristics of the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well. Those multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based; therefore it does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied to both un-gauged basins and un-gauged periods with uncertainty estimation.
A Bayesian Framework for Human Body Pose Tracking from Depth Image Sequences
Zhu, Youding; Fujimura, Kikuo
2010-01-01
This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the large number of degrees of freedom in human body movements from a depth image sequence is challenging due to the need to resolve the depth ambiguity caused by self-occlusions and the difficulty to recover from tracking failure. Human body poses could be estimated through model fitting using dense correspondences between depth data and an articulated human model (local optimization method). Although it usually achieves a high accuracy due to dense correspondences, it may fail to recover from tracking failure. Alternately, human pose may be reconstructed by detecting and tracking human body anatomical landmarks (key-points) based on low-level depth image analysis. While this method (key-point based method) is robust and recovers from tracking failure, its pose estimation accuracy depends solely on image-based localization accuracy of key-points. To address these limitations, we present a flexible Bayesian framework for integrating pose estimation results obtained by methods based on key-points and local optimization. Experimental results are shown and performance comparison is presented to demonstrate the effectiveness of the proposed approach. PMID:22399933
Robust pupil center detection using a curvature algorithm
NASA Technical Reports Server (NTRS)
Zhu, D.; Moore, S. T.; Raphan, T.; Wall, C. C. (Principal Investigator)
1999-01-01
Determining the pupil center is fundamental for calculating eye orientation in video-based systems. Existing techniques are error prone and not robust because eyelids, eyelashes, corneal reflections or shadows in many instances occlude the pupil. We have developed a new algorithm which utilizes curvature characteristics of the pupil boundary to eliminate these artifacts. Pupil center is computed based solely on points related to the pupil boundary. For each boundary point, a curvature value is computed. Occlusion of the boundary induces characteristic peaks in the curvature function. Curvature values for normal pupil sizes were determined and a threshold was found which together with heuristics discriminated normal from abnormal curvature. Remaining boundary points were fit with an ellipse using a least squares error criterion. The center of the ellipse is an estimate of the pupil center. This technique is robust and accurately estimates pupil center with less than 40% of the pupil boundary points visible.
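A compact sketch of the described pipeline: curvature is computed along the pupil boundary, points whose curvature departs sharply from the smooth elliptical baseline (occlusion artifacts) are discarded, and an ellipse is fitted to the remainder by least squares. The outlier threshold is an illustrative assumption, not the heuristic threshold determined in the paper.

```python
# Curvature-based rejection of occluded boundary points followed by ellipse fitting.
import numpy as np

def boundary_curvature(x, y):
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def ellipse_center(x, y):
    # least-squares conic fit: a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    A = np.column_stack([x**2, x * y, y**2, x, y])
    a, b, c, d, e = np.linalg.lstsq(A, np.ones_like(x), rcond=None)[0]
    # centre is where the conic gradient vanishes
    return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])

def pupil_center(x, y, curv_thresh=3.0):
    k = boundary_curvature(x, y)
    keep = np.abs(k - np.median(k)) < curv_thresh * np.std(k)   # drop occlusion peaks
    return ellipse_center(x[keep], y[keep])
```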
Microseismic Velocity Imaging of the Fracturing Zone
NASA Astrophysics Data System (ADS)
Zhang, H.; Chen, Y.
2015-12-01
Hydraulic fracturing of low-permeability reservoirs can induce microseismic events during fracture development. For this reason, microseismic monitoring using sensors on the surface or in boreholes has been widely used to delineate fracture spatial distribution and to understand fracturing mechanisms. It is often the case that the stimulated reservoir volume (SRV) is determined solely from microseismic locations. However, it is known that some stages of fracture development may be associated with long-period long-duration events rather than microseismic events. In addition, because microseismic events are essentially weak and different sources of noise exist during monitoring, some microseismic events cannot be detected and thus located. Therefore the estimation of the SRV is biased if it is determined solely by microseismic locations. With the existence of fluids and fractures, the seismic velocity of reservoir layers decreases. Based on this fact, we have developed a near-real-time seismic velocity tomography method to characterize velocity changes associated with the fracturing process. The method is based on the double-difference seismic tomography algorithm and images the fracturing zone where microseismic events occur by using differential arrival times from microseismic event pairs. To take into account varying data distribution for different fracking stages, the method solves the velocity model in the wavelet domain so that different scales of model features can be obtained according to the data distribution. We have applied this real-time tomography method to both acoustic emission data from a lab experiment and microseismic data from a downhole microseismic monitoring project for a shale gas hydraulic fracturing treatment. The tomography results from the lab data clearly show the velocity changes associated with different rock fracturing stages. For the field data application, microseismic events are located in low-velocity anomalies. By combining low-velocity anomalies with microseismic events, we can better estimate the SRV.
Loving, Kathryn A.; Lin, Andy; Cheng, Alan C.
2014-01-01
Advances reported over the last few years and the increasing availability of protein crystal structure data have greatly improved structure-based druggability approaches. However, in practice, nearly all druggability estimation methods are applied to protein crystal structures as rigid proteins, with protein flexibility often not directly addressed. The inclusion of protein flexibility is important in correctly identifying the druggability of pockets that would be missed by methods based solely on the rigid crystal structure. These include cryptic pockets and flexible pockets often found at protein-protein interaction interfaces. Here, we apply an approach that uses protein modeling in concert with druggability estimation to account for light protein backbone movement and protein side-chain flexibility in protein binding sites. We assess the advantages and limitations of this approach on widely-used protein druggability sets. Applying the approach to all mammalian protein crystal structures in the PDB results in identification of 69 proteins with potential druggable cryptic pockets. PMID:25079060
Sensor-Data Fusion for Multi-Person Indoor Location Estimation
2017-01-01
We consider the problem of estimating the location of people as they move and work in indoor environments. More specifically, we focus on the scenario where one of the persons of interest is unable or unwilling to carry a smartphone, or any other “wearable” device, which frequently arises in caregiver/cared-for situations. We consider the case of indoor spaces populated with anonymous binary sensors (Passive Infrared motion sensors) and eponymous wearable sensors (smartphones interacting with Estimote beacons), and we propose a solution to the resulting sensor-fusion problem. Using a data set with sensor readings collected from one-person and two-person sessions engaged in a variety of activities of daily living, we investigate the relative merits of relying solely on anonymous sensors, solely on eponymous sensors, or on their combination. We examine how the lack of synchronization across different sensing sources impacts the quality of location estimates, and discuss how it could be mitigated without resorting to device-level mechanisms. Finally, we examine the trade-off between the sensors’ coverage of the monitored space and the quality of the location estimates. PMID:29057812
Sensor-Data Fusion for Multi-Person Indoor Location Estimation.
Mohebbi, Parisa; Stroulia, Eleni; Nikolaidis, Ioanis
2017-10-18
We consider the problem of estimating the location of people as they move and work in indoor environments. More specifically, we focus on the scenario where one of the persons of interest is unable or unwilling to carry a smartphone, or any other "wearable" device, which frequently arises in caregiver/cared-for situations. We consider the case of indoor spaces populated with anonymous binary sensors (Passive Infrared motion sensors) and eponymous wearable sensors (smartphones interacting with Estimote beacons), and we propose a solution to the resulting sensor-fusion problem. Using a data set with sensor readings collected from one-person and two-person sessions engaged in a variety of activities of daily living, we investigate the relative merits of relying solely on anonymous sensors, solely on eponymous sensors, or on their combination. We examine how the lack of synchronization across different sensing sources impacts the quality of location estimates, and discuss how it could be mitigated without resorting to device-level mechanisms. Finally, we examine the trade-off between the sensors' coverage of the monitored space and the quality of the location estimates.
Perceived object stability depends on multisensory estimates of gravity.
Barnett-Cowan, Michael; Fleming, Roland W; Singh, Manish; Bülthoff, Heinrich H
2011-04-27
How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information. In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity). Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.
Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng
2015-01-01
The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function. PMID:27346982
Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng
2016-06-01
The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of the propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function.
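A sketch of one member of this calibration family, exponential tilting (entropy balancing): weights on one group are chosen so that its weighted covariate means match a target, for example the combined-sample means. The dual formulation and variable names below are illustrative; the paper's class also covers empirical likelihood and generalized regression weights, and the treatment-effect estimate would contrast outcomes weighted this way in the treated and control groups.

```python
# Exponential-tilting calibration weights that balance covariate means to a target.
import numpy as np
from scipy.optimize import minimize

def entropy_balance_weights(X_group, target_means):
    X = np.asarray(X_group, dtype=float)
    m = np.asarray(target_means, dtype=float)

    def dual(lam):  # convex dual; its minimiser enforces the moment balance
        return np.log(np.exp(X @ lam).sum()) - lam @ m

    lam = minimize(dual, np.zeros(X.shape[1]), method="BFGS").x
    w = np.exp(X @ lam)
    return w / w.sum()   # weighted mean of X with these weights ~= target_means
```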
Impacts of different types of measurements on estimating unsaturated flow parameters
NASA Astrophysics Data System (ADS)
Shi, Liangsheng; Song, Xuehang; Tong, Juxiu; Zhu, Yan; Zhang, Qiuru
2015-05-01
This paper assesses the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
Impacts of Different Types of Measurements on Estimating Unsaturatedflow Parameters
NASA Astrophysics Data System (ADS)
Shi, L.
2015-12-01
This study evaluates the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information to infer the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
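A minimal sketch of the EnKF parameter-update step underlying this kind of assimilation; the forward model, observation operator and error model are placeholders rather than the study's configuration.

```python
# One EnKF analysis step: correct the parameter ensemble from the cross-covariance
# between parameters and predicted observations (head, water content, groundwater level).
import numpy as np

def enkf_update(theta_ens, y_pred_ens, y_obs, obs_err_std, rng):
    # theta_ens: (n_ens, n_param) parameter ensemble
    # y_pred_ens: (n_ens, n_obs) observations predicted by the forward model
    dT = theta_ens - theta_ens.mean(axis=0)
    dY = y_pred_ens - y_pred_ens.mean(axis=0)
    n = len(theta_ens)
    C_ty = dT.T @ dY / (n - 1)                                  # param/obs cross-covariance
    C_yy = dY.T @ dY / (n - 1) + np.diag(np.full(len(y_obs), obs_err_std**2))
    K = C_ty @ np.linalg.inv(C_yy)                              # Kalman gain
    y_perturbed = y_obs + rng.normal(0.0, obs_err_std, size=y_pred_ens.shape)
    return theta_ens + (y_perturbed - y_pred_ens) @ K.T
```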
Model Effects on GLAS-Based Regional Estimates of Forest Biomass and Carbon
NASA Technical Reports Server (NTRS)
Nelson, Ross
2008-01-01
ICESat/GLAS waveform data are used to estimate biomass and carbon on a 1.27 million sq km study area, the Province of Quebec, Canada, below treeline. The same input data sets and sampling design are used in conjunction with four different predictive models to estimate total aboveground dry forest biomass and forest carbon. The four models include nonstratified and stratified versions of a multiple linear model where either biomass or (the square root of) biomass serves as the dependent variable. The use of different models in Quebec introduces differences in Provincial biomass estimates of up to 0.35 Gt (range 4.942±0.28 Gt to 5.29±0.36 Gt). The results suggest that if different predictive models are used to estimate regional carbon stocks in different epochs, e.g., y2005, y2015, one might mistakenly infer an apparent aboveground carbon "change" of, in this case, 0.18 Gt, or approximately 7% of the aboveground carbon in Quebec, due solely to the use of different predictive models. These findings argue for model consistency in future LiDAR-based carbon monitoring programs. Regional biomass estimates from the four GLAS models are compared to ground estimates derived from an extensive network of 16,814 ground plots located in southern Quebec. Stratified models proved to be more accurate and precise than either of the two nonstratified models tested.
Impact of Vaccination on 14 High-Risk HPV Type Infections: A Mathematical Modelling Approach
Vänskä, Simopekka; Auranen, Kari; Leino, Tuija; Salo, Heini; Nieminen, Pekka; Kilpi, Terhi; Tiihonen, Petri; Apter, Dan; Lehtinen, Matti
2013-01-01
The development of high-risk human papillomavirus (hrHPV) infection to cervical cancer is a complicated process. We considered solely hrHPV infections, thus avoiding the confounding effects of disease progression, screening, and treatments. To analyse hrHPV epidemiology and to estimate the overall impact of vaccination against infections with hrHPVs, we developed a dynamic compartmental transmission model for single and multiple infections with 14 hrHPV types. The infection-related parameters were estimated using population-based sexual behaviour and hrHPV prevalence data from Finland. The analysis disclosed the important role of persistent infections in hrHPV epidemiology, provided further evidence for a significant natural immunity, and demonstrated the dependence of transmission probability estimates on the model structure. The model predicted that vaccinating girls at 80% coverage will result in a 55% reduction in the overall hrHPV prevalence and a higher 65% reduction in the prevalence of persistent hrHPV infections in females. In males, the reduction will be 42% in the hrHPV prevalence solely by the herd effect from the 80% coverage in girls. If such high coverage among girls is not reached, it is still possible to reduce the female hrHPV prevalence indirectly by the herd effect if also boys are included in the vaccination program. On the other hand, any herd effects in older unvaccinated cohorts were minor. Limiting the epidemiological model to infection yielded improved understanding of the hrHPV epidemiology and of mechanisms with which vaccination impacts on hrHPV infections. PMID:24009669
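For intuition only, a drastically simplified single-type SIS-style model with vaccination at entry; the actual model tracks 14 hrHPV types, multiple infections and both sexes, and the parameter values below are illustrative, not those estimated from the Finnish data.

```python
# Single-type SIS model with a fraction p of entrants vaccinated (immune).
import numpy as np
from scipy.integrate import solve_ivp

def sis_vacc(t, y, beta, gamma, mu, p):
    S, I = y
    dS = mu * (1.0 - p) - beta * S * I + gamma * I - mu * S   # recovered return to S
    dI = beta * S * I - (gamma + mu) * I
    return [dS, dI]

params = dict(beta=1.2, gamma=0.8, mu=0.05, p=0.8)            # per-year, illustrative
sol = solve_ivp(sis_vacc, (0, 100), [0.95, 0.05], args=tuple(params.values()),
                dense_output=True)
print("long-run infection prevalence:", sol.y[1, -1])
```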
Bayesian Estimation of the Spatially Varying Completeness Magnitude of Earthquake Catalogs
NASA Astrophysics Data System (ADS)
Mignan, A.; Werner, M.; Wiemer, S.; Chen, C.; Wu, Y.
2010-12-01
Assessing the completeness magnitude Mc of earthquake catalogs is an essential prerequisite for any seismicity analysis. We employ a simple model to compute Mc in space, based on the proximity to seismic stations in a network. We show that a relationship of the form Mcpred(d) = ad^b+c, with d the distance to the 5th nearest seismic station, fits the observations well. We then propose a new Mc mapping approach, the Bayesian Magnitude of Completeness (BMC) method, based on a 2-step procedure: (1) a spatial resolution optimization to minimize spatial heterogeneities and uncertainties in Mc estimates and (2) a Bayesian approach that merges prior information about Mc based on the proximity to seismic stations with locally observed values weighted by their respective uncertainties. This new methodology eliminates most weaknesses associated with current Mc mapping procedures: the radius that defines which earthquakes to include in the local magnitude distribution is chosen according to an objective criterion and there are no gaps in the spatial estimation of Mc. The method solely requires the coordinates of seismic stations. Here, we investigate the Taiwan Central Weather Bureau (CWB) earthquake catalog by computing a Mc map for the period 1994-2010.
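A sketch of the two BMC ingredients as described: fitting Mc_pred(d) = a*d^b + c against the distance to the 5th-nearest station, then merging that prior with a locally observed Mc by inverse-variance weighting. The starting values and the exact form of the merge are assumptions of this sketch.

```python
# Fit the distance-based Mc prior and merge it with local observations.
import numpy as np
from scipy.optimize import curve_fit

def mc_model(d, a, b, c):
    return a * d**b + c

def fit_prior(d_obs, mc_obs):
    popt, _ = curve_fit(mc_model, d_obs, mc_obs, p0=(0.1, 1.0, 1.0))
    return popt                                   # (a, b, c)

def bayes_merge(mc_prior, sig_prior, mc_local, sig_local):
    w1, w2 = 1.0 / sig_prior**2, 1.0 / sig_local**2
    return (w1 * mc_prior + w2 * mc_local) / (w1 + w2)
```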
Eidem, Ingvild; Vangen, Siri; Henriksen, Tore; Vollset, Stein E; Hanssen, Kristian F; Joner, Geir; Stene, Lars C
2014-08-01
To study differences in ultrasound-based compared to menstrual-based term estimation in women with type 1 diabetes. Nationwide register study. Norway. Deliveries in Norway 1999-2004 by women registered in the Norwegian Childhood Diabetes Registry (n = 342) and the background population (n = 307 248), with data on both ultrasound-based and menstrual-based gestational age notified in the Birth Registry of Norway. Births with major malformations were excluded. Linkage of two nationwide registries, the Medical Birth Registry of Norway and the Norwegian Childhood Diabetes Registry. Estimated gestational age at delivery based on routine second trimester ultrasound measurements and last menstrual period. In women with type 1 diabetes, the distribution of gestational age at delivery was shifted considerably towards a lower gestational age when using second trimester ultrasound data for estimation, compared with last menstrual period data. The difference between the two estimation methods was larger among women with type 1 diabetes, although also evident in the general population. One in four women with diabetes and a certain last menstrual period date had their ultrasound-calculated term postponed 1 week or more, while one in 10 had it postponed 2 weeks or more. Corresponding numbers in the background population were one in five and one in 20. We found a systematic postponement of ultrasound-based compared with menstrual-based term estimation in women with type 1 diabetes. Relying solely on routine ultrasound-based term calculation for delivery decision may imply a risk of going beyond an optimal pregnancy length. © 2014 Nordic Federation of Societies of Obstetrics and Gynecology.
Fletcher, E; Carmichael, O; Decarli, C
2012-01-01
We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer's disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions.
Fletcher, E.; Carmichael, O.; DeCarli, C.
2013-01-01
We propose a template-based method for correcting field inhomogeneity biases in magnetic resonance images (MRI) of the human brain. At each algorithm iteration, the update of a B-spline deformation between an unbiased template image and the subject image is interleaved with estimation of a bias field based on the current template-to-image alignment. The bias field is modeled using a spatially smooth thin-plate spline interpolation based on ratios of local image patch intensity means between the deformed template and subject images. This is used to iteratively correct subject image intensities which are then used to improve the template-to-image deformation. Experiments on synthetic and real data sets of images with and without Alzheimer’s disease suggest that the approach may have advantages over the popular N3 technique for modeling bias fields and narrowing intensity ranges of gray matter, white matter, and cerebrospinal fluid. This bias field correction method has the potential to be more accurate than correction schemes based solely on intrinsic image properties or hypothetical image intensity distributions. PMID:23365843
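Schematically, one iteration of the described scheme can be written as below, where T∘φ^{(k)} is the template deformed by the current B-spline transform, N(x) is a local patch, the overline denotes a patch intensity mean and S a smooth (thin-plate spline) interpolation; the notation here is ours, not the authors'.

\[
B^{(k)}(x) \;\approx\; \mathcal{S}\!\left[\frac{\overline{I\!\left(\mathcal{N}(x)\right)}}{\overline{\left(T\circ\varphi^{(k)}\right)\!\left(\mathcal{N}(x)\right)}}\right],
\qquad
I^{(k+1)}(x) \;=\; \frac{I(x)}{B^{(k)}(x)},
\]

after which the deformation φ^{(k+1)} is re-estimated from the corrected image I^{(k+1)}.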
Perceptual constancy in auditory perception of distance to railway tracks.
De Coensel, Bert; Nilsson, Mats E; Berglund, Birgitta; Brown, A L
2013-07-01
Distance to a sound source can be accurately estimated solely from auditory information. With a sound source such as a train that is passing by at a relatively large distance, the most important auditory information for the listener for estimating its distance consists of the intensity of the sound, spectral changes in the sound caused by air absorption, and the motion-induced rate of change of intensity. However, these cues are relative because prior information/experience of the sound source (its source power, its spectrum and the typical speed at which it moves) is required for such distance estimates. This paper describes two listening experiments that allow investigation of further prior contextual information taken into account by listeners, viz., whether they are indoors or outdoors. Asked to estimate the distance to the track of a railway, it is shown that listeners assessing sounds heard inside the dwelling based their distance estimates on the expected train passby sound level outdoors rather than on the passby sound level actually experienced indoors. This form of perceptual constancy may have consequences for the assessment of annoyance caused by railway noise.
Muscle parameters estimation based on biplanar radiography.
Dubois, G; Rouch, P; Bonneau, D; Gennisson, J L; Skalli, W
2016-11-01
The evaluation of muscle and joint forces in vivo is still a challenge. Musculoskeletal models are used to compute forces based on movement analysis. Most of them are built either from a scaled generic model based on cadaver measurements, which provides a low level of personalization, or from magnetic resonance images, which provide a personalized model in the lying position. This study proposed an original two-step method to obtain a subject-specific musculoskeletal model in 30 min, based solely on biplanar X-rays. First, the subject-specific 3D geometry of bones and skin envelopes was reconstructed from biplanar X-ray radiography. Then, 2200 corresponding control points were identified between a reference model and the subject-specific X-ray model. Finally, the shape of 21 lower limb muscles was estimated using a non-linear transformation between the control points in order to fit the muscle shape of the reference model to the X-ray model. Twelve musculoskeletal models were reconstructed and compared to their reference. The muscle volume was not accurately estimated, with a standard deviation (SD) ranging from 10 to 68%. However, this method provided an accurate estimation of the muscle line of action, with an SD of the length difference lower than 2% and a positioning error lower than 20 mm. The moment arm was also well estimated, with SD lower than 15% for most muscles, which was significantly better than the scaled generic model for most muscles. This method opens the way to a quick modeling approach for gait analysis based on biplanar radiography.
Pronk, Anjoeka; Stewart, Patricia A.; Coble, Joseph B.; Katki, Hormuzd A.; Wheeler, David C.; Colt, Joanne S.; Baris, Dalsu; Schwenn, Molly; Karagas, Margaret R.; Johnson, Alison; Waddell, Richard; Verrill, Castine; Cherala, Sai; Silverman, Debra T.; Friesen, Melissa C.
2012-01-01
Objectives Professional judgment is necessary to assess occupational exposure in population-based case-control studies; however, the assessments lack transparency and are time-consuming to perform. To improve transparency and efficiency, we systematically applied decision rules to the questionnaire responses to assess diesel exhaust exposure in the New England Bladder Cancer Study, a population-based case-control study. Methods 2,631 participants reported 14,983 jobs; 2,749 jobs were administered questionnaires (‘modules’) with diesel-relevant questions. We applied decision rules to assign exposure metrics based solely on the occupational history responses (OH estimates) and based on the module responses (module estimates); we combined the separate OH and module estimates (OH/module estimates). Each job was also reviewed one at a time to assign exposure (one-by-one review estimates). We evaluated the agreement between the OH, OH/module, and one-by-one review estimates. Results The proportion of exposed jobs was 20–25% for all jobs, depending on approach, and 54–60% for jobs with diesel-relevant modules. The OH/module and one-by-one review had moderately high agreement for all jobs (κw=0.68–0.81) and for jobs with diesel-relevant modules (κw=0.62–0.78) for the probability, intensity, and frequency metrics. For exposed subjects, the Spearman correlation statistic was 0.72 between the cumulative OH/module and one-by-one review estimates. Conclusions The agreement seen here may represent an upper level of agreement because the algorithm and one-by-one review estimates were not fully independent. This study shows that applying decision-based rules can reproduce a one-by-one review, increase transparency and efficiency, and provide a mechanism to replicate exposure decisions in other studies. PMID:22843440
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanders, M.S.; Downing, J.V.
1982-10-01
Based on laboratory and preliminary field tests of off-the-shelf steel-toed rubber boots, a molded sole design was developed to provide increased traction over conventional calendered-sole miners' boots. The pattern provided sharp edges perpendicular to both lateral and fore-aft slip vectors. The sole was designed to reduce mud caking. An instep lace-up capability was added to better secure the foot inside the boot. A 5-month field evaluation compared the prototype boots to the boots the participants usually wear.
A new scenario-based approach to damage detection using operational modal parameter estimates
NASA Astrophysics Data System (ADS)
Hansen, J. B.; Brincker, R.; López-Aenlle, M.; Overgaard, C. F.; Kloborg, K.
2017-09-01
In this paper a vibration-based damage localization and quantification method, based on natural frequencies and mode shapes, is presented. The proposed technique is inspired by a damage assessment methodology based solely on the sensitivity of mass-normalized, experimentally determined mode shapes. The present method differs by being based on modal data extracted by means of Operational Modal Analysis (OMA), combined with a reasonable Finite Element (FE) representation of the test structure, and implemented in a scenario-based framework. Besides a review of the basic methodology, this paper addresses fundamental theoretical as well as practical considerations that are crucial to the applicability of a given vibration-based damage assessment configuration. Lastly, the technique is demonstrated on an experimental test case using automated OMA. Both the numerical study and the experimental test case presented in this paper are restricted to perturbations concerning mass change.
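Since the perturbations considered are mass changes and the mode shapes are mass-normalized, the standard first-order sensitivity relation that such methods build on can be written as follows (a textbook perturbation result, not the paper's full scenario-based formulation), with λ_i = (2πf_i)² and φ_i the mass-normalized mode shapes:

\[
\Delta\lambda_i \;\approx\; -\,\lambda_i\,\boldsymbol{\phi}_i^{\mathsf{T}}\,\Delta\mathbf{M}\,\boldsymbol{\phi}_i,
\qquad
\frac{\Delta f_i}{f_i} \;\approx\; -\,\tfrac{1}{2}\,\boldsymbol{\phi}_i^{\mathsf{T}}\,\Delta\mathbf{M}\,\boldsymbol{\phi}_i,
\]

so that a hypothesized mass-change scenario ΔM can be checked against the measured frequency and mode-shape changes.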
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Lee, Jina; Lefantzi, Sophia
2013-09-01
The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. The limited nature of the measured data leads to a severely underdetermined estimation problem. If the estimation is performed at fine spatial resolutions, it can also be computationally expensive. In order to enable such estimations, advances are needed in the spatial representation of ffCO2 emissions, scalable inversion algorithms and the identification of observables to measure. To that end, we investigate parsimonious spatial parameterizations of ffCO2 emissions which can be used in atmospheric inversions. We devise and test three random field models, based on wavelets, Gaussian kernels and covariance structures derived from easily observed proxies of human activity. In doing so, we constructed a novel inversion algorithm, based on compressive sensing and sparse reconstruction, to perform the estimation. We also address scalable ensemble Kalman filters as an inversion mechanism and quantify the impact of Gaussian assumptions inherent in them. We find that the assumption does not impact the estimates of mean ffCO2 source strengths appreciably, but a comparison with Markov chain Monte Carlo estimates shows significant differences in the variance of the source strengths. Finally, we study whether the very different spatial natures of biogenic and ffCO2 emissions can be used to estimate them, in a disaggregated fashion, solely from CO2 concentration measurements, without extra information from products of incomplete combustion, e.g., CO. We find that this is possible during the winter months, though the errors can be as large as 50%.
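A minimal sketch of the sparse-reconstruction idea: emission source strengths are expressed in a sparsifying basis and recovered from concentration observations by an L1-penalized fit. The transport operator H, basis W and penalty value are placeholders, not the study's operators or algorithm.

```python
# Recover gridded emissions from concentration data via an L1-penalized inversion.
import numpy as np
from sklearn.linear_model import Lasso

def estimate_sources(y_obs, H, W, alpha=1e-3):
    # y_obs: concentration measurements; H: transport/observation operator;
    # W: columns are basis functions (e.g. wavelet-like fields) for the emissions.
    A = H @ W                                   # combined operator acting on coefficients
    coef = Lasso(alpha=alpha, fit_intercept=False, max_iter=50_000).fit(A, y_obs).coef_
    return W @ coef                             # back to gridded emission estimates
```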
A general method for the quantitative assessment of mineral pigments.
Ares, M C Zurita; Fernández, J M
2016-01-01
A general method for the estimation of mineral pigment contents in different bases has been proposed using a sole set of calibration curves (one for each pigment), calculated for a white standard base, so that elaborating patterns for each base used is not necessary. The method can be used in different bases and its validity has even been proved in strongly tinted bases. The method consists of a novel procedure that combines diffuse reflectance spectroscopy, second derivatives and the Kubelka-Munk function. This technique proved to be at least one order of magnitude more sensitive than X-ray diffraction for colored compounds, since it allowed the determination of the pigment amount in colored samples containing 0.5 wt% of pigment that was not detected by X-ray diffraction. The method can be used to estimate the concentration of mineral pigments in a wide variety of either natural or artificial materials, since it does not require the calculation of each pigment pattern in every base. This fact could have important industrial consequences, as the proposed method would be more convenient, faster and cheaper. Copyright © 2015 Elsevier B.V. All rights reserved.
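A short sketch of the quantification chain as described (diffuse reflectance, Kubelka-Munk remission function, second derivative, calibration against the white standard base); the linear calibration step and the point at which the derivative is taken are assumptions of this sketch rather than the paper's exact procedure.

```python
# Kubelka-Munk transform, second-derivative spectrum and a linear calibration read-out.
import numpy as np

def kubelka_munk(R):
    R = np.clip(R, 1e-6, 1.0)                    # guard against division by zero
    return (1.0 - R) ** 2 / (2.0 * R)            # remission function F(R)

def second_derivative(spectrum, wavelengths):
    return np.gradient(np.gradient(spectrum, wavelengths), wavelengths)

def pigment_content(R_sample, wavelengths, band_nm, slope, intercept):
    d2 = second_derivative(kubelka_munk(R_sample), wavelengths)
    signal = d2[np.argmin(np.abs(wavelengths - band_nm))]   # value at the diagnostic band
    return slope * signal + intercept            # calibration built on the white base
```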
Attard, Catherine R M; Beheregaray, Luciano B; Möller, Luciana M
2018-05-01
There has been remarkably little attention to using the high resolution provided by genotyping-by-sequencing (i.e., RADseq and similar methods) for assessing relatedness in wildlife populations. A major hurdle is the genotyping error, especially allelic dropout, often found in this type of data that could lead to downward-biased, yet precise, estimates of relatedness. Here, we assess the applicability of genotyping-by-sequencing for relatedness inferences given its relatively high genotyping error rate. Individuals of known relatedness were simulated under genotyping error, allelic dropout and missing data scenarios based on an empirical ddRAD data set, and their true relatedness was compared to that estimated by seven relatedness estimators. We found that an estimator chosen through such analyses can circumvent the influence of genotyping error, with the estimator of Ritland (Genetics Research, 67, 175) shown to be unaffected by allelic dropout and to be the most accurate when there is genotyping error. We also found that the choice of estimator should not rely solely on the strength of correlation between estimated and true relatedness as a strong correlation does not necessarily mean estimates are close to true relatedness. We also demonstrated how even a large SNP data set with genotyping error (allelic dropout or otherwise) or missing data still performs better than a perfectly genotyped microsatellite data set of tens of markers. The simulation-based approach used here can be easily implemented by others on their own genotyping-by-sequencing data sets to confirm the most appropriate and powerful estimator for their data. © 2017 John Wiley & Sons Ltd.
Omura, Y
1994-01-01
Accuracy of the widely used organ representation areas, currently used in different schools of foot and hand reflexology was evaluated using Bi-Digital O-Ring test resonance phenomenon. Our previous study indicated that mapping organ representation areas of the tongue using Bi-Digital O-Ring Test resonance phenomenon between 2 identical substances often provided more reliable clinical information for both diagnosis and treatment than the 2 widely used, but crude, traditional schools of Chinese tongue diagnosis. This same method was applied for the mapping of the organ representation areas on the feet and hands. We succeeded in mapping the following areas on human feet: 1) Middle (3rd) toe on the sole side represents the following starting from the tip: A) Head, B) Face with eye, ear, nose, and mouth (1st Digit) C) Neck and organs within the neck (narrow band of space between 1st crease after the 1st digit and crease at the junction of the beginning of the sole); 2) 2nd and 4th toe represent upper extremities, the beginning tip being fingers and hands. The crease at the base of these toes represents the shoulder. The 2nd toe represents right upper extremity, and the 4th toe represents left upper extremity; 3) 1st and 5th toes in both the right and left feet represent lower extremities with the tip being the toes and soles of feet. The crease at the base of these toes represents the inguinal area. The 1st toe of each foot represents right lower extremity, and 5th toe represents left lower extremity. The sole of the foot is divided into the following 3 distinctive sections. 1) Upper (1st) section represents organs in the chest cavity including 2 thymus glands, trachea, 2 lungs, with the heart between them, and with the esophagus appearing as a narrow band outside of the lung near and below the 1st and 2nd toe depending upon the individual. Chest section occupies the first 1/3 to 1/5 (on a relatively long foot) of the entire sole. The boundary between the chest and G.I. system can be approximately estimated by extending the length of the entire toe or up to 25% longer to the sole, but it can be accurately determined using a diaphragm tissue microscope slide as a reference control substance. 2) Middle (2nd) section represents Gastro-Intestinal system, including lower end of the esophagus, liver, stomach, spleen, gall bladder, pancreas, duodenum, jejunum, ileum, appendix, colon, and anus.(ABSTRACT TRUNCATED AT 400 WORDS)
Past observable dynamics of a continuously monitored qubit
NASA Astrophysics Data System (ADS)
García-Pintos, Luis Pedro; Dressel, Justin
2017-12-01
Monitoring a quantum observable continuously in time produces a stochastic measurement record that noisily tracks the observable. For a classical process, such noise may be reduced to recover an average signal by minimizing the mean squared error between the noisy record and a smooth dynamical estimate. We show that for a monitored qubit, this usual procedure returns unusual results. While the record seems centered on the expectation value of the observable during causal generation, examining the collected past record reveals that it better approximates a moving-mean Gaussian stochastic process centered at a distinct (smoothed) observable estimate. We show that this shifted mean converges to the real part of a generalized weak value in the time-continuous limit without additional postselection. We verify that this smoothed estimate minimizes the mean squared error even for individual measurement realizations. We go on to show that if a second observable is weakly monitored concurrently, then that second record is consistent with the smoothed estimate of the second observable based solely on the information contained in the first observable record. Moreover, we show that such a smoothed estimate made from incomplete information can still outperform estimates made using full knowledge of the causal quantum state.
Alternative evaluation metrics for risk adjustment methods.
Park, Sungchul; Basu, Anirban
2018-06-01
Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.
Two NextGen Air Safety Tools: An ADS-B Equipped UAV and a Wake Turbulence Estimator
NASA Astrophysics Data System (ADS)
Handley, Ward A.
Two air safety tools are developed in the context of the FAA's NextGen program. The first tool addresses the alarming increase in the frequency of near-collisions between manned and unmanned aircraft by equipping a common hobby class UAV with an ADS-B transponder that broadcasts its position, speed, heading and unique identification number to all local air traffic. The second tool estimates and outputs the location of dangerous wake vortex corridors in real time based on the ADS-B data collected and processed using a custom software package developed for this project. The TRansponder based Position Information System (TRAPIS) consists of data packet decoders, an aircraft database, Graphical User Interface (GUI) and the wake vortex extension application. Output from TRAPIS can be visualized in Google Earth and alleviates the problem of pilots being left to imagine where invisible wake vortex corridors are based solely on intuition or verbal warnings from ATC. The result of these two tools is the increased situational awareness, and hence safety, of human pilots in the National Airspace System (NAS).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Jun-Kai; Gong, Zi-Zhen; Zhang, Tian
Down-regulation of intestinal P-glycoprotein (P-gp) by soybean oil-based lipid emulsion (SOLE) may cause elevated intestinal permeability of lipopolysaccharide (LPS) in patients with total parenteral nutrition, but the appropriate preventative treatment is currently limited. Recently, sodium butyrate (NaBut) has been demonstrated to regulate the expression of P-gp. Therefore, this study aimed to address whether treatment with NaBut could attenuate SOLE-induced increase in intestinal permeability of LPS by modulation of P-gp in vitro. Caco-2 cells were exposed to SOLE with or without NaBut. SOLE-induced down-regulation of P-gp was significantly attenuated by co-incubation with NaBut. Nuclear recruitment of FOXO 3a in response to NaBut was involved in P-gp regulation. Transport studies revealed that SOLE-induced increase in permeability of LPS was significantly attenuated by co-incubation with NaBut. Collectively, our results suggested that NaBut may be a potentially useful medication to prevent SOLE-induced increase in intestinal permeability of LPS. - Highlights: • Caco-2 cells were used as models for studying parenteral nutrition in vitro. • NaBut restored SOLE-induced down-regulation of P-gp in Caco-2 cells. • Regulation of P-gp by NaBut was mediated via nuclear recruitment of FOXO 3a. • NaBut modulated the permeability of LPS by P-gp function, not barrier function.
Influence of Spatial and Chromatic Noise on Luminance Discrimination.
Miquilini, Leticia; Walker, Natalie A; Odigie, Erika A; Guimarães, Diego Leite; Salomão, Railson Cruz; Lacerda, Eliza Maria Costa Brito; Cortes, Maria Izabel Tentes; de Lima Silveira, Luiz Carlos; Fitzgerald, Malinda E C; Ventura, Dora Fix; Souza, Givago Silva
2017-12-05
Pseudoisochromatic figures are designed to base discrimination of a chromatic target from a background solely on chromatic differences. This is accomplished by the introduction of luminance and spatial noise, thereby eliminating these two dimensions as cues. The inverse rationale can also be applied to luminance discrimination, if spatial and chromatic noise are used to mask those cues. In the current study, luminance contrast thresholds were estimated using a novel stimulus based on chromatic and spatial noise to mask the use of these cues in a luminance discrimination task. This was accomplished by presenting stimuli composed of a mosaic of randomly colored circles. A Landolt-C target differed from the background only in luminance. Luminance contrast thresholds were estimated for different chromatic noise saturation conditions and compared to thresholds estimated using the same target in a non-mosaic stimulus. Moreover, the influence of the chromatic content of the noise on the luminance contrast threshold was also investigated. Luminance contrast thresholds depended on the chromatic noise strength: they were 10-fold higher than thresholds estimated from the non-mosaic stimulus, but were independent of the colour space location in which the noise was modulated. The present study introduces a new method to investigate luminance vision intended for both basic science and clinical applications.
NASA Astrophysics Data System (ADS)
Wilderbuer, Thomas; Stockhausen, William; Bond, Nicholas
2013-10-01
This study provides a retrospective analysis of the relationship between physical oceanography, biology and recruitment of three Eastern Bering Sea flatfish stocks: flathead sole (Hippoglossoides elassodon), northern rock sole (Lepidopsetta polyxystra), and arrowtooth flounder (Atheresthes stomias) during the period 1978-2005. Stock assessment model estimates of recruitment and spawning stock size indicate that temporal patterns in productivity are consistent with decadal scale (or shorter) patterns in climate variability, which may influence marine survival during the early life history phases. Density-dependence (through spawning stock size) was statistically significant in a Ricker stock-recruit model of flatfish recruitment that included environmental terms. Wind-driven advection of northern rock sole and flathead sole larvae to favorable nursery grounds was found to coincide with years of above-average recruitment. Ocean forcing of Bristol Bay surface waters during springtime was mostly on-shelf (eastward) during the 1980s and again in the early 2000s, but was off-shelf (westerly) during the 1990s, corresponding with periods of good and poor recruitment, respectively. Finally, the Arctic Oscillation was found to be an important indicator of arrowtooth flounder productivity. Model results were applied to IPCC (Intergovernmental Panel on Climate Change) future springtime wind scenarios to predict the future impact of climate on northern rock sole productivity and indicated that a moderate future increase in recruitment might be expected because the climate trends favor on-shelf transport but that density-dependence will dampen this effect such that northern rock sole abundance will not be substantially affected by climate change.
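The environmentally modified Ricker stock-recruit relationship referred to above can be written, in one common form, as R = αS·exp(−βS + γE), where S is spawning stock and E an environmental index. A short illustration follows; the functional form with a single covariate and all parameter values are assumptions for illustration, not the fitted model of the study.

```python
# Ricker stock-recruit curve with a single environmental covariate:
#   R = alpha * S * exp(-beta * S + gamma * E)
# Parameter values here are made up for illustration only.
import numpy as np

def ricker(S, E, alpha=4.0, beta=1e-3, gamma=0.5):
    """Recruitment as a function of spawning stock S and environmental index E."""
    return alpha * S * np.exp(-beta * S + gamma * E)

S = np.linspace(0, 3000, 7)          # spawning stock (arbitrary units)
print(ricker(S, E=+1.0))             # favourable (e.g., on-shelf transport) year
print(ricker(S, E=-1.0))             # unfavourable year
```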
From Pressure to Path: Barometer-based Vehicle Tracking
Ho, Bo-Jhang; Martin, Paul; Swaminathan, Prashanth; Srivastava, Mani
2017-01-01
Pervasive mobile devices have enabled countless context- and location-based applications that facilitate navigation, life-logging, and more. As we build the next generation of smart cities, it is important to leverage the rich sensing modalities that these numerous devices have to offer. This work demonstrates how mobile devices can be used to accurately track driving patterns based solely on pressure data collected from the device’s barometer. Specifically, by correlating pressure time-series data against topographic elevation data and road maps for a given region, a centralized computer can estimate the likely paths through which individual users have driven, providing an exceptionally low-power method for measuring driving patterns of a given individual or for analyzing group behavior across multiple users. This work also brings to bear a more nefarious side effect of pressure-based path estimation: a mobile application can, without consent and without notifying the user, use pressure data to accurately detect an individual’s driving behavior, compromising both user privacy and security. We further analyze the ability to predict driving trajectories in terms of the variance in barometer pressure and geographical elevation, demonstrating cases in which more than 80% of paths can be accurately predicted. PMID:29503981
From Pressure to Path: Barometer-based Vehicle Tracking.
Ho, Bo-Jhang; Martin, Paul; Swaminathan, Prashanth; Srivastava, Mani
2015-11-01
Pervasive mobile devices have enabled countless context- and location-based applications that facilitate navigation, life-logging, and more. As we build the next generation of smart cities, it is important to leverage the rich sensing modalities that these numerous devices have to offer. This work demonstrates how mobile devices can be used to accurately track driving patterns based solely on pressure data collected from the device's barometer. Specifically, by correlating pressure time-series data against topographic elevation data and road maps for a given region, a centralized computer can estimate the likely paths through which individual users have driven, providing an exceptionally low-power method for measuring driving patterns of a given individual or for analyzing group behavior across multiple users. This work also brings to bear a more nefarious side effect of pressure-based path estimation: a mobile application can, without consent and without notifying the user, use pressure data to accurately detect an individual's driving behavior, compromising both user privacy and security. We further analyze the ability to predict driving trajectories in terms of the variance in barometer pressure and geographical elevation, demonstrating cases in which more than 80% of paths can be accurately predicted.
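The core of such pressure-to-elevation matching can be sketched in a few lines: convert the barometric record to relative elevation with a standard hypsometric relation and correlate it against candidate road elevation profiles. The candidate profiles, reference pressure, and temperature below are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: convert barometric pressure to relative elevation and pick the
# candidate road profile whose elevation series correlates best with it.
# Candidate profiles and reference values are illustrative assumptions.
import numpy as np

def pressure_to_elevation(p_hpa, p_ref_hpa=1013.25, T_kelvin=288.15):
    """Approximate elevation (m) from pressure via the hypsometric formula."""
    Rd, g = 287.05, 9.80665
    return (Rd * T_kelvin / g) * np.log(p_ref_hpa / p_hpa)

rng = np.random.default_rng(1)
true_profile = 100 + 30 * np.sin(np.linspace(0, 3 * np.pi, 200))       # m
pressure = 1013.25 * np.exp(-true_profile / 8434.0) + 0.02 * rng.normal(size=200)

candidates = {
    "route_A": true_profile,
    "route_B": 100 + 30 * np.cos(np.linspace(0, 3 * np.pi, 200)),
}

est_elev = pressure_to_elevation(pressure)
scores = {name: np.corrcoef(est_elev, prof)[0, 1] for name, prof in candidates.items()}
print(max(scores, key=scores.get), scores)
```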
Accurate aging of juvenile salmonids using fork lengths
Sethi, Suresh; Gerken, Jonathon; Ashline, Joshua
2017-01-01
Juvenile salmon life history strategies, survival, and habitat interactions may vary by age cohort. However, aging individual juvenile fish using scale reading is time-consuming and can be error prone. Fork length data are routinely measured while sampling juvenile salmonids. We explore the performance of aging juvenile fish based solely on fork length data, using finite Gaussian mixture models to describe multimodal size distributions and estimate optimal age-discriminating length thresholds. Fork length-based ages are compared against a validation set of juvenile coho salmon, Oncorhynchus kisutch, aged by scales. Results for juvenile coho salmon indicate greater than 95% accuracy can be achieved by aging fish using length thresholds estimated from mixture models. Highest accuracy is achieved when aged fish are compared to length thresholds generated from samples from the same drainage, time of year, and habitat type (lentic versus lotic), although relatively high aging accuracy can still be achieved when thresholds are extrapolated to fish from populations in different years or drainages. Fork length-based aging thresholds are applicable for taxa for which multiple age cohorts coexist sympatrically. Where applicable, the method of aging individual fish is relatively quick to implement and can avoid ager interpretation bias common in scale-based aging.
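A minimal sketch of the mixture-model step, assuming a two-cohort length distribution; the simulated lengths and component count are illustrative, not the study's data.

```python
# Sketch: fit a finite Gaussian mixture to fork lengths and derive an
# age-discriminating length threshold where the two component densities cross.
# Simulated lengths and the two-component assumption are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.stats import norm

rng = np.random.default_rng(42)
lengths = np.concatenate([rng.normal(55, 6, 300),      # age-0 fish (mm)
                          rng.normal(90, 9, 200)])     # age-1 fish (mm)

gmm = GaussianMixture(n_components=2, random_state=0).fit(lengths.reshape(-1, 1))
means = gmm.means_.ravel()
sds = np.sqrt(gmm.covariances_.ravel())
weights = gmm.weights_

# Threshold = length between the two means where the weighted densities are equal.
grid = np.linspace(means.min(), means.max(), 2000)
dens = [w * norm.pdf(grid, m, s) for w, m, s in zip(weights, means, sds)]
threshold = grid[np.argmin(np.abs(dens[0] - dens[1]))]
print(f"age-discriminating length threshold: {threshold:.1f} mm")
```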
Visscher, Peter M; Goddard, Michael E
2015-01-01
Heritability is a population parameter of importance in evolution, plant and animal breeding, and human medical genetics. It can be estimated using pedigree designs and, more recently, using relationships estimated from markers. We derive the sampling variance of the estimate of heritability for a wide range of experimental designs, assuming that estimation is by maximum likelihood and that the resemblance between relatives is solely due to additive genetic variation. We show that well-known results for balanced designs are special cases of a more general unified framework. For pedigree designs, the sampling variance is inversely proportional to the variance of relationship in the pedigree and it is proportional to 1/N, whereas for population samples it is approximately proportional to 1/N², where N is the sample size. Variation in relatedness is a key parameter in the quantification of the sampling variance of heritability. Consequently, the sampling variance is high for populations with large recent effective population size (e.g., humans) because this causes low variation in relationship. However, even using human population samples, low sampling variance is possible with high N. Copyright © 2015 by the Genetics Society of America.
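In the notation of this abstract, the stated scaling of the sampling variance of the heritability estimate can be summarized as follows (a restatement of the proportionalities above, not a derivation):

\[
\operatorname{Var}\!\left(\hat{h}^{2}\right)\;\propto\;\frac{1}{N\,\operatorname{Var}(r)}\ \ \text{(pedigree designs)},\qquad
\operatorname{Var}\!\left(\hat{h}^{2}\right)\;\propto\;\frac{1}{N^{2}}\ \ \text{(population samples)},
\]

where \(N\) is the sample size and \(\operatorname{Var}(r)\) is the variance of relationship among the sampled individuals.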
Monthly AOD maps combining strengths of remote sensing products
NASA Astrophysics Data System (ADS)
Kinne, Stefan
2010-05-01
The mid-visible aerosol optical depth (AOD) is the most prominent property to quantify aerosol amount in the atmospheric column. Almost all aerosol retrievals of satellite sensors provide estimates for this property, however, often with limited success. As sensors differ in capabilities, individual retrievals have local and regional strengths and weaknesses. Focusing on individual retrieval strengths, a satellite-based AOD composite has been constructed. Hereby, the performance of every retrieval has been assessed in statistical comparisons to ground-based sun-photometry, which provides highly accurate references, though only at a few globally distributed monitoring sites. Based on these comparisons, which consider bias as well as spatial patterns and seasonality, the regionally best performing satellite AOD products are combined. The resulting remote sensing AOD composite provides a general reference for the spatial and temporal AOD distribution on an (almost) global basis - solely tied to sensor data.
Molybdenum target specifications for cyclotron production of 99mTc based on patient dose estimates.
Hou, X; Tanguay, J; Buckley, K; Schaffer, P; Bénard, F; Ruth, T J; Celler, A
2016-01-21
In response to the recognized fragility of reactor-produced (99)Mo supply, direct production of (99m)Tc via (100)Mo(p,2n)(99m)Tc reaction using medical cyclotrons has been investigated. However, due to the existence of other Molybdenum (Mo) isotopes in the target, in parallel with (99m)Tc, other technetium (Tc) radioactive isotopes (impurities) will be produced. They will be incorporated into the labeled radiopharmaceuticals and result in increased patient dose. The isotopic composition of the target and beam energy are main factors that determine production of impurities, thus also dose increases. Therefore, they both must be considered when selecting targets for clinical (99m)Tc production. Although for any given Mo target, the patient dose can be predicted based on complicated calculations of production yields for each Tc radioisotope, it would be very difficult to reverse these calculations to specify target composition based on dosimetry considerations. In this article, a relationship between patient dosimetry and Mo target composition is studied. A simple and easy algorithm for dose estimation, based solely on the knowledge of target composition and beam energy, is described. Using this algorithm, the patient dose increase due to every Mo isotope that could be present in the target is estimated. Most importantly, a technique to determine Mo target composition thresholds that would meet any given dosimetry requirement is proposed.
Molybdenum target specifications for cyclotron production of 99mTc based on patient dose estimates
NASA Astrophysics Data System (ADS)
Hou, X.; Tanguay, J.; Buckley, K.; Schaffer, P.; Bénard, F.; Ruth, T. J.; Celler, A.
2016-01-01
In response to the recognized fragility of reactor-produced 99Mo supply, direct production of 99mTc via 100Mo(p,2n)99mTc reaction using medical cyclotrons has been investigated. However, due to the existence of other Molybdenum (Mo) isotopes in the target, in parallel with 99mTc, other technetium (Tc) radioactive isotopes (impurities) will be produced. They will be incorporated into the labeled radiopharmaceuticals and result in increased patient dose. The isotopic composition of the target and beam energy are main factors that determine production of impurities, thus also dose increases. Therefore, they both must be considered when selecting targets for clinical 99mTc production. Although for any given Mo target, the patient dose can be predicted based on complicated calculations of production yields for each Tc radioisotope, it would be very difficult to reverse these calculations to specify target composition based on dosimetry considerations. In this article, a relationship between patient dosimetry and Mo target composition is studied. A simple and easy algorithm for dose estimation, based solely on the knowledge of target composition and beam energy, is described. Using this algorithm, the patient dose increase due to every Mo isotope that could be present in the target is estimated. Most importantly, a technique to determine Mo target composition thresholds that would meet any given dosimetry requirement is proposed.
Waldegrave, Charles; King, Peter; Maniapoto, Maria; Tamasese, Taimalieutu Kiwi; Parsons, Tafaoimalo Loudeen; Sullivan, Ginny
2016-12-01
This study reports findings and policy recommendations from a research project that applied a relational resilience framework to a study of 60 sole parent families in New Zealand, with approximately equal numbers of Māori, Pacific, and European (White) participants. The sole parent families involved were already known to be resilient and the study focused on identifying the relationships and strategies underlying the achievement and maintenance of their resilience. The study was carried out to provide an evidence base for the development and implementation of policies and interventions to both support sole parent families who have achieved resilience and assist those who struggle to do so. The three populations shared many similarities in their pathways to becoming sole parents and the challenges they faced as sole parents. The coping strategies underlying their demonstrated resilience were also broadly similar, but the ways in which they were carried out did vary in a manner that particularly reflected cultural practices in terms of their reliance upon extended family-based support or support from outside the family. The commonalities support the appropriateness of the common conceptual framework used, whereas the differences underline the importance of developing nuanced policy responses that take into account cultural differences between the various populations to which policy initiatives are directed. © 2016 Family Process Institute.
Novel Approaches for Estimating Human Exposure to Air Pollutants
Numerous health studies have used measurements from a few central-site ambient monitors to characterize air pollution exposures. Relying solely on central-site ambient monitors does not account for the spatial heterogeneity of ambient air pollution patterns, the temporal varia...
Stochastic determination of matrix determinants
NASA Astrophysics Data System (ADS)
Dorn, Sebastian; Enßlin, Torsten A.
2015-07-01
Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, linear operations—matrices—acting on the data are often not accessible directly but are only represented indirectly in form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, there is no stochastic estimate for its determinant. We introduce a probing method for the logarithm of a determinant of a linear operator. Our method rests upon a reformulation of the log-determinant by an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.
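One matrix-free way to realize such a probing estimate, shown here as a sketch rather than the authors' exact reformulation, uses the identity ln det A = ∫₀¹ Tr[(A − I)(I + t(A − I))⁻¹] dt for a symmetric positive-definite operator A, with the trace estimated by Rademacher probes and the solves done by conjugate gradients so that only matrix-vector products are needed.

```python
# Sketch of a stochastic log-determinant estimate for a symmetric
# positive-definite operator A, using the identity
#   ln det A = \int_0^1 Tr[(A - I) (I + t(A - I))^{-1}] dt,
# Hutchinson-style trace probing, and matrix-free conjugate-gradient solves.
# This illustrates the probing idea, not the authors' exact reformulation.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n = 200
B = rng.normal(size=(n, n))
A = B @ B.T / n + np.eye(n)          # SPD test operator (dense here only to check)

def apply_A(v):                       # the only access to A we allow ourselves
    return A @ v

def logdet_probe(apply_A, n, n_probes=30, n_quad=20):
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    nodes = 0.5 * (nodes + 1.0)       # map Gauss-Legendre nodes to (0, 1)
    weights = 0.5 * weights
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)              # Rademacher probe
        for t, wt in zip(nodes, weights):
            op = LinearOperator((n, n),
                                matvec=lambda v, t=t: v + t * (apply_A(v) - v))
            x, _ = cg(op, z, atol=1e-10)                 # solve (I + t(A-I)) x = z
            total += wt * z @ (apply_A(x) - x)           # z^T (A - I) x
    return total / n_probes

print("stochastic estimate:", logdet_probe(apply_A, n))
print("exact ln det:       ", np.linalg.slogdet(A)[1])
```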
NASA Astrophysics Data System (ADS)
Gold, Lukas; Bach, Tobias; Virsik, Wolfgang; Schmitt, Angelika; Müller, Jana; Staab, Torsten E. M.; Sextl, Gerhard
2017-03-01
For electrically powered applications such as consumer electronics, and especially for electric vehicles, a precise state-of-charge estimation for their lithium-ion batteries is desired to reduce aging, e.g. by avoiding detrimental states-of-charge. Today, this estimation is performed by battery management systems that solely rely on charge bookkeeping and cell voltage measurements. In the present work we introduce a new, physical probe for the state-of-charge based on ultrasonic transmission. Within the simple experimental setup, raised cosine pulses are applied to lithium-ion battery pouch cells; the transmitted signals are sensitive to changes in porosity of the graphite anode during charging/discharging and, therefore, to the state-of-charge. The underlying physical principle can be related to Biot's theory of wave propagation in fluid-saturated porous media, together with scattering by boundary layers inside the cell.
Stochastic determination of matrix determinants.
Dorn, Sebastian; Ensslin, Torsten A
2015-07-01
Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, linear operations-matrices-acting on the data are often not accessible directly but are only represented indirectly in form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, there is no stochastic estimate for its determinant. We introduce a probing method for the logarithm of a determinant of a linear operator. Our method rests upon a reformulation of the log-determinant by an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 2 2011-10-01 2011-10-01 false Determination of the hospital-specific rate for inpatient operating costs for sole community hospitals based on a Federal fiscal year 1996 base period. 412.77 Section 412.77 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM PROSPECTIVE...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 2 2013-10-01 2013-10-01 false Determination of the hospital-specific rate for inpatient operating costs for sole community hospitals based on a Federal fiscal year 2006 base period. 412.78 Section 412.78 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM PROSPECTIVE...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 2 2013-10-01 2013-10-01 false Determination of the hospital-specific rate for inpatient operating costs for sole community hospitals based on a Federal fiscal year 1996 base period. 412.77 Section 412.77 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM PROSPECTIVE...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 2 2012-10-01 2012-10-01 false Determination of the hospital-specific rate for inpatient operating costs for sole community hospitals based on a Federal fiscal year 1996 base period. 412.77 Section 412.77 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM PROSPECTIVE...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 2 2014-10-01 2014-10-01 false Determination of the hospital-specific rate for inpatient operating costs for sole community hospitals based on a Federal fiscal year 1996 base period. 412.77 Section 412.77 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM PROSPECTIVE...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 2 2011-10-01 2011-10-01 false Determination of the hospital-specific rate for inpatient operating costs for sole community hospitals based on a Federal fiscal year 2006 base period. 412.78 Section 412.78 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM PROSPECTIVE...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 2 2012-10-01 2012-10-01 false Determination of the hospital-specific rate for inpatient operating costs for sole community hospitals based on a Federal fiscal year 2006 base period. 412.78 Section 412.78 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM PROSPECTIVE...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 2 2014-10-01 2014-10-01 false Determination of the hospital-specific rate for inpatient operating costs for sole community hospitals based on a Federal fiscal year 2006 base period. 412.78 Section 412.78 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM PROSPECTIVE...
Steinbach, Sarah M L; Sturgess, Christopher P; Dunning, Mark D; Neiger, Reto
2015-06-01
Assessment of renal function by means of plasma clearance of a suitable marker has become standard procedure for estimation of glomerular filtration rate (GFR). Sinistrin, a polyfructan solely cleared by the kidney, is often used for this purpose. Pharmacokinetic modeling using adequate software is necessary to calculate the disappearance rate and half-life of sinistrin. The purpose of this study was to describe the use of a Microsoft Excel-based add-in program to calculate plasma sinistrin clearance, as well as additional pharmacokinetic parameters such as transfer rates (k), half-life (t1/2) and volume of distribution (Vss) for sinistrin in dogs with varying degrees of renal function. Copyright © 2015 Elsevier Ltd. All rights reserved.
Geometry-based pressure drop prediction in mildly diseased human coronary arteries.
Schrauwen, J T C; Wentzel, J J; van der Steen, A F W; Gijsen, F J H
2014-06-03
Pressure drop (Δp) estimations in human coronary arteries have several important applications, including determination of appropriate boundary conditions for CFD and estimation of fractional flow reserve (FFR). In this study a Δp prediction was made based on geometrical features derived from patient-specific imaging data. Twenty-two mildly diseased human coronary arteries were imaged with computed tomography and intravascular ultrasound. Each artery was modelled in three consecutive steps: from straight to tapered, to stenosed, to curved model. CFD was performed to compute the additional Δp in each model under steady flow for a wide range of Reynolds numbers. The correlations between the added geometrical complexity and additional Δp were used to compute a predicted Δp. This predicted Δp based on geometry was compared to CFD results. The mean Δp calculated with CFD was 855±666Pa. Tapering and curvature added significantly to the total Δp, accounting for 31.4±19.0% and 18.0±10.9% respectively at Re=250. Using tapering angle, maximum area stenosis and angularity of the centerline, we were able to generate a good estimate for the predicted Δp with a low mean but high standard deviation: average error of 41.1±287.8Pa at Re=250. Furthermore, the predicted Δp was used to accurately estimate FFR (r=0.93). The effect of the geometric features was determined and the pressure drop in mildly diseased human coronary arteries was predicted quickly based solely on geometry. This pressure drop estimation could serve as a boundary condition in CFD to model the impact of distal epicardial vessels. Copyright © 2014 Elsevier Ltd. All rights reserved.
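Once a pressure drop has been predicted, its translation into an FFR estimate can follow the definition FFR ≈ (Pa − Δp)/Pa, with Pa the mean aortic (proximal) pressure. The sketch below is illustrative only; the aortic pressure value is an assumption, not a quantity from the study.

```python
# Sketch: translate a predicted pressure drop into an FFR estimate,
#   FFR ≈ (Pa - Δp) / Pa.
# The aortic pressure and Δp values below are illustrative only.
def ffr_from_pressure_drop(delta_p_pa, aortic_pressure_pa=100 * 133.322):
    """Fractional flow reserve from a predicted trans-lesion pressure drop (Pa)."""
    return (aortic_pressure_pa - delta_p_pa) / aortic_pressure_pa

print(round(ffr_from_pressure_drop(855.0), 3))   # using the mean Δp reported above
```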
Comparison of Climatological Planetary Boundary Layer Depth Estimates Using the GEOS-5 AGCM
NASA Technical Reports Server (NTRS)
Mcgrath-Spangler, Erica Lynn; Molod, Andrea M.
2014-01-01
Planetary boundary layer (PBL) processes, including those influencing the PBL depth, control many aspects of weather and climate, and accurate models of these processes are important for forecasting changes in the future. However, evaluation of model estimates of PBL depth is difficult because no consensus on the PBL depth definition currently exists and various methods for estimating this parameter can give results that differ by hundreds of meters or more. In order to facilitate comparisons between the Goddard Earth Observing System (GEOS-5) and other modeling and observational systems, seven PBL depth estimation methods are used to produce PBL depth climatologies and are evaluated and compared here. All seven methods evaluate the same atmosphere, so all differences are related solely to the definition chosen. These methods depend on the scalar diffusivity, bulk and local Richardson numbers, and the diagnosed horizontal turbulent kinetic energy (TKE). Results are aggregated by climate class in order to allow broad generalizations. The various PBL depth estimations give similar midday results, with some exceptions. One method based on horizontal turbulent kinetic energy produces deeper PBL depths in the winter associated with winter storms. In warm, moist conditions, the method based on a bulk Richardson number gives results that are shallower than those given by the methods based on the scalar diffusivity. The impact of turbulence driven by radiative cooling at cloud top is most significant during the evening transition and along several regions across the oceans, and methods sensitive to this cooling produce deeper PBL depths where it is most active. Additionally, Richardson number-based methods collapse better at night than methods that depend on the scalar diffusivity. This feature potentially affects tracer transport.
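One common bulk-Richardson-number formulation used by such methods is Ri_b(z) = g (θv(z) − θv,s) z / (θv,s (u² + v²)), with the PBL top taken as the lowest level where Ri_b exceeds a critical value (often around 0.25). A sketch follows; the profile values and the critical value are invented for illustration and are not the GEOS-5 configuration.

```python
# Sketch: PBL depth from a bulk Richardson number profile,
#   Ri_b(z) = g * (thetav(z) - thetav_s) * z / (thetav_s * (u^2 + v^2)),
# taking the PBL top as the first level where Ri_b exceeds a critical value.
# The profile and critical value below are invented for illustration.
import numpy as np

g, ri_crit = 9.81, 0.25
z      = np.array([  50., 150., 300., 500., 800., 1200., 1800.])            # m
thetav = np.array([300.0, 300.2, 300.4, 300.7, 301.5, 303.5, 306.0])        # K
u      = np.array([  2.0,  3.0,  4.0,  5.0,  6.0,  6.5,  7.0])              # m/s
v      = np.zeros_like(u)

ri_b = g * (thetav - thetav[0]) * z / (thetav[0] * (u**2 + v**2))
above = np.where(ri_b > ri_crit)[0]
pbl_depth = z[above[0]] if above.size else z[-1]
print("bulk-Ri PBL depth:", pbl_depth, "m")
```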
Estimation of laceration length by emergency department personnel.
Bourne, Christina L; Jenkins, M Adams; Brewer, Kori L
2014-11-01
Documentation and billing for laceration repair involves a description of wound length. We designed this study to test the hypothesis that emergency department (ED) personnel can accurately estimate wound lengths without the aid of a measuring device. This was a single-center prospective observational study performed in an academic ED. Seven wounds of varying lengths were simulated by creating lacerations on purchased pigs' ears and feet. We asked healthcare providers, defined as nurses and physicians working in the ED, to estimate the length of each wound by visual inspection. Length estimates were given in centimeters (cm) and inches. Estimated lengths were considered correct if the estimate was within 0.5 cm or 0.2 inches of the actual length. We calculated the differences between estimated and actual laceration lengths for each laceration and compared the accuracy of physicians to nurses using an unpaired t-test. Thirty-two physicians (nine faculty and 23 residents) and 16 nurses participated. All subjects tended to overestimate in cm and inches. Physicians were able to estimate laceration length within 0.5 cm 36% of the time and within 0.2 inches 29% of the time. Physicians were more accurate at estimating wound lengths than nurses in both cm and inches. Both physicians and nurses were more accurate at estimating shorter lengths (<5.0 cm) than longer (>5.0 cm). ED personnel are often unable to accurately estimate wound length in either cm or inches and tend to overestimate laceration lengths when based solely on visual inspection.
Bromaghin, Jeffrey F.; Budge, Suzanne M.; Thiemann, Gregory W.; Rode, Karyn D.
2017-01-01
Knowledge of animal diets provides essential insights into their life history and ecology, although diet estimation is challenging and remains an active area of research. Quantitative fatty acid signature analysis (QFASA) has become a popular method of estimating diet composition, especially for marine species. A primary assumption of QFASA is that constants called calibration coefficients, which account for the differential metabolism of individual fatty acids, are known. In practice, however, calibration coefficients are not known, but rather have been estimated in feeding trials with captive animals of a limited number of model species. The impossibility of verifying the accuracy of feeding trial derived calibration coefficients to estimate the diets of wild animals is a foundational problem with QFASA that has generated considerable criticism. We present a new model that allows simultaneous estimation of diet composition and calibration coefficients based only on fatty acid signature samples from wild predators and potential prey. Our model performed almost flawlessly in four tests with constructed examples, estimating both diet proportions and calibration coefficients with essentially no error. We also applied the model to data from Chukchi Sea polar bears, obtaining diet estimates that were more diverse than estimates conditioned on feeding trial calibration coefficients. Our model avoids bias in diet estimates caused by conditioning on inaccurate calibration coefficients, invalidates the primary criticism of QFASA, eliminates the need to conduct feeding trials solely for diet estimation, and consequently expands the utility of fatty acid data to investigate aspects of ecology linked to animal diets.
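The basic QFASA estimation step (before the extension to simultaneously estimated calibration coefficients) can be sketched as a constrained fit of diet proportions that best reproduce the calibration-corrected predator signature from a prey signature library. The signatures, calibration coefficients, and least-squares distance below are made-up illustrative assumptions; QFASA implementations typically use other distance measures.

```python
# Sketch of the basic QFASA step: find diet proportions (summing to 1) whose
# mixture of prey fatty-acid signatures best matches the predator signature.
# Signatures, calibration coefficients, and the distance are illustrative.
import numpy as np
from scipy.optimize import minimize

prey = np.array([[0.50, 0.30, 0.20],      # prey species A signature
                 [0.20, 0.50, 0.30],      # prey species B
                 [0.10, 0.20, 0.70]])     # prey species C
cal = np.array([1.0, 0.9, 1.1])           # calibration coefficients (assumed known here)
predator = np.array([0.34, 0.33, 0.33])

def objective(pi):
    mix = cal * (pi @ prey)
    mix = mix / mix.sum()                 # renormalize the corrected mixture
    return np.sum((predator - mix) ** 2)  # simple least-squares distance

x0 = np.full(3, 1 / 3)
res = minimize(objective, x0, bounds=[(0, 1)] * 3,
               constraints={"type": "eq", "fun": lambda p: p.sum() - 1})
print("estimated diet proportions:", np.round(res.x, 3))
```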
Widespread Nanoparticle-Assay Interference: Implications for Nanotoxicity Testing
Ong, Kimberly J.; MacCormack, Tyson J.; Clark, Rhett J.; Ede, James D.; Ortega, Van A.; Felix, Lindsey C.; Dang, Michael K. M.; Ma, Guibin; Fenniri, Hicham; Veinot, Jonathan G. C.; Goss, Greg G.
2014-01-01
The evaluation of engineered nanomaterial safety has been hindered by conflicting reports demonstrating differential degrees of toxicity with the same nanoparticles. The unique properties of these materials increase the likelihood that they will interfere with analytical techniques, which may contribute to this phenomenon. We tested the potential for: 1) nanoparticle intrinsic fluorescence/absorbance, 2) interactions between nanoparticles and assay components, and 3) the effects of adding both nanoparticles and analytes to an assay, to interfere with the accurate assessment of toxicity. Silicon, cadmium selenide, titanium dioxide, and helical rosette nanotubes each affected at least one of the six assays tested, resulting in either substantial over- or under-estimations of toxicity. Simulation of realistic assay conditions revealed that interference could not be predicted solely by interactions between nanoparticles and assay components. Moreover, the nature and degree of interference cannot be predicted solely based on our current understanding of nanomaterial behaviour. A literature survey indicated that ca. 95% of papers from 2010 using biochemical techniques to assess nanotoxicity did not account for potential interference of nanoparticles, and this number had not substantially improved in 2012. We provide guidance on avoiding and/or controlling for such interference to improve the accuracy of nanotoxicity assessments. PMID:24618833
Diagnoses and characteristics of autism spectrum disorders in children with Prader-Willi syndrome.
Dykens, Elisabeth M; Roof, Elizabeth; Hunt-Hawkins, Hailee; Dankner, Nathan; Lee, Evon Batey; Shivers, Carolyn M; Daniell, Christopher; Kim, Soo-Jeong
2017-01-01
A small percentage of people with autism spectrum disorders (ASD) have alterations in chromosome 15q11.2-q3, the critical region for Prader-Willi syndrome (PWS). Data are limited, however, on the rates and characteristics of ASD in PWS. Previous estimates of ASD in PWS (25 to 41%) are questionable as they are based solely on autism screeners given to parents. Inaccurate diagnoses of ASD in PWS can mislead intervention and future research. One hundred forty-six children and youth with PWS aged 4 to 21 years (M = 11) were assessed with the Autism Diagnostic Observation Schedule-2 (ADOS-2). An expert clinical team made best-estimate ASD diagnoses based on ADOS-2 videotapes, calibrated severity scores, and children's developmental histories and indices of current functioning. Children were also administered the Kaufman Brief Intelligence Test-2, and parents completed the Repetitive Behavior Scale-Revised and Vineland Adaptive Behavior Scales. Scores were compared across children with PWS + ASD versus PWS only. The performance of an ASD screener, the Social Communication Questionnaire (SCQ), and the ADOS-2 were evaluated in relation to best-estimate diagnoses. Best-estimate diagnoses of ASD were made in 18 children, or 12.3% of the sample, and the majority of them had the maternal uniparental disomy (mUPD) PWS genetic subtype. Compared to the PWS-only group, children with PWS + ASD had lower verbal and composite IQs and adaptive daily living and socialization skills, as well as elevated stereotypies and restricted interests. Regardless of ASD status, compulsivity and insistence on sameness in routines or events were seen in 76-100% of children and were robustly correlated with lower adaptive functioning. The SCQ yielded a 29-49% chance that screen-positive cases will indeed have ASD. The ADOS-2 had higher sensitivity, specificity and predictive values. Communication problems were seen in children who were ADOS-2 positive but deemed not to have ASD by the clinical team. Autism screeners should not be the sole index of probable ASD in PWS; children need to be directly observed and evaluated. Compulsivity and insistence on sameness are salient in PWS and likely impede adaptive functioning. Most children with PWS only evidenced sub-threshold problems in social interactions that could signal risks for other psychopathologies.
26 CFR 20.2208-1 - Certain residents of possessions considered citizens of the United States.
Code of Federal Regulations, 2010 CFR
2010-04-01
... solely by reason of his being a citizen of such possession or by reason of his birth or residence within... examples set forth in § 20.2209-1: Example. A, a citizen of the United States by reason of his birth in the... United States citizenship is based on birth in the United States and is not based solely on being a...
Decker, Johannes H.; Otto, A. Ross; Daw, Nathaniel D.; Hartley, Catherine A.
2016-01-01
Theoretical models distinguish two decision-making strategies that have been formalized in reinforcement-learning theory. A model-based strategy leverages a cognitive model of potential actions and their consequences to make goal-directed choices, whereas a model-free strategy evaluates actions based solely on their reward history. Research in adults has begun to elucidate the psychological mechanisms and neural substrates underlying these learning processes and factors that influence their relative recruitment. However, the developmental trajectory of these evaluative strategies has not been well characterized. In this study, children, adolescents, and adults performed a sequential reinforcement-learning task that enables estimation of model-based and model-free contributions to choice. Whereas a model-free strategy was evident in choice behavior across all age groups, evidence of a model-based strategy only emerged during adolescence and continued to increase into adulthood. These results suggest that recruitment of model-based valuation systems represents a critical cognitive component underlying the gradual maturation of goal-directed behavior. PMID:27084852
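The distinction can be made concrete with a toy two-stage task: a model-free learner updates action values from experienced rewards alone, whereas a model-based learner combines a learned transition model with second-stage values. The task structure, probabilities, and learning rates below are illustrative assumptions, not the paradigm used in the study.

```python
# Toy illustration of model-free vs. model-based action values in a
# two-stage task.  Task structure, probabilities, and learning rates are
# illustrative assumptions, not the study's paradigm.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.7, 0.3],          # P(second-stage state | first-stage action)
              [0.3, 0.7]])
reward_prob = np.array([0.8, 0.2]) # P(reward | second-stage state)

alpha = 0.2
q_mf = np.zeros(2)                 # model-free first-stage values
q_stage2 = np.zeros(2)             # learned second-stage state values

for _ in range(1000):
    a = rng.integers(2)                          # explore first-stage actions
    s2 = rng.choice(2, p=P[a])                   # transition
    r = float(rng.random() < reward_prob[s2])    # binary reward
    q_stage2[s2] += alpha * (r - q_stage2[s2])
    q_mf[a] += alpha * (r - q_mf[a])             # model-free: reward history only

q_mb = P @ q_stage2                # model-based: transition model x stage-2 values
print("model-free Q:", np.round(q_mf, 2), " model-based Q:", np.round(q_mb, 2))
```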
NASA Astrophysics Data System (ADS)
Tan, Z.; Zhuang, Q.; Henze, D. K.; Frankenberg, C.; Dlugokencky, E. J.; Sweeney, C.; Turner, A. J.
2015-12-01
Understanding CH4 emissions from wetlands and lakes is critical for the estimation of the Arctic carbon balance under fast-warming climatic conditions. To date, our knowledge about these two CH4 sources is almost solely built on the upscaling of discontinuous measurements in limited areas to the whole region. Many studies have indicated that the controls of CH4 emissions from wetlands and lakes, including soil moisture, lake morphology and substrate content and quality, are notoriously heterogeneous; thus the accuracy of those simple estimates could be questionable. Here we apply a high spatial resolution atmospheric inverse model (nested-grid GEOS-Chem Adjoint) over the Arctic by integrating SCIAMACHY and NOAA/ESRL CH4 measurements to constrain the CH4 emissions estimated with process-based wetland and lake biogeochemical models. Our modeling experiments using different wetland CH4 emission schemes and satellite and surface measurements show that the total amount of CH4 emitted from the Arctic wetlands is well constrained, but the spatial distribution of CH4 emissions is sensitive to priors. For CH4 emissions from lakes, our high-resolution inversion shows that the models overestimate CH4 emissions in Alaskan coastal lowlands and East Siberian lowlands. Our study also indicates that the precision and coverage of measurements need to be improved to achieve more accurate high-resolution estimates.
Assessment of atmospheric mercury emissions in Finland
Mukherjee; Melanen; Ekqvist; Verta
2000-10-02
This paper is part of the study of atmospheric emissions of heavy metals conducted by the Finnish Environment Institute in collaboration with the Technical Research Centre of Finland (VTT) under the umbrella of the Finnish Ministry of the Environment. The scope of our study is limited solely to anthropogenic mercury that is emitted directly to the atmosphere. This article addresses emission factors and trends of atmospheric mercury emissions during the 1990s and is based mainly on the database of the Finnish Environmental Administration. In addition, data based on the measurements taken by the VTT regarding emission factors have been used to estimate emissions of mercury from the incineration of waste. The study indicates that the total emission of mercury has decreased from 1140 kg in 1990 to 620 kg in 1997, while industrial and energy production have been on the increase simultaneously. The 45% emission reduction is due to improved gas cleaning equipment, process changes, automation, the installation of flue gas desulfurization process in coal-fired power plants and strict pollution control laws. In the past, some authors have estimated a higher mercury emission in Finland. In this study, it is also observed that there are no big changes in the quality of raw materials. Estimated emission factors can be of great help to management for estimating mercury emissions and also its risk assessment.
Jerszurki, Daniela; Souza, Jorge L. M.; Silva, Lucas C. R.
2017-01-01
The development of new reference evapotranspiration (ETo) methods holds significant promise for improving our quantitative understanding of climatic impacts on water loss from the land to the atmosphere. To address the challenge of estimating ETo in tropical and subtropical regions where direct measurements are scarce, we tested a new method based on geographical patterns of extraterrestrial radiation (Ra) and atmospheric water potential (Ψair). Our approach consisted of generating daily estimates of ETo across several climate zones in Brazil–as a model system–which we compared with standard EToPM (Penman-Monteith) estimates. In contrast with EToPM, the simplified method (EToMJS) relies solely on Ψair calculated from widely available air temperature (°C) and relative humidity (%) data, which combined with Ra data resulted in reliable estimates of equivalent evaporation (Ee) and ETo. We used regression analyses of Ψair vs EToPM and Ee vs EToPM to calibrate the EToMJS(Ψair) and EToMJS estimates from 2004 to 2014 and between seasons and climatic zones. Finally, we evaluated the performance of the new method based on the coefficient of determination (R2) and correlation (R), index of agreement “d”, mean absolute error (MAE) and mean ratio (MR). This evaluation confirmed the suitability of the EToMJS method for application in tropical and subtropical regions, where the climatic information needed for the standard EToPM calculation is absent. PMID:28658324
Jerszurki, Daniela; Souza, Jorge L M; Silva, Lucas C R
2017-01-01
The development of new reference evapotranspiration (ETo) methods holds significant promise for improving our quantitative understanding of climatic impacts on water loss from the land to the atmosphere. To address the challenge of estimating ETo in tropical and subtropical regions where direct measurements are scarce, we tested a new method based on geographical patterns of extraterrestrial radiation (Ra) and atmospheric water potential (Ψair). Our approach consisted of generating daily estimates of ETo across several climate zones in Brazil-as a model system-which we compared with standard EToPM (Penman-Monteith) estimates. In contrast with EToPM, the simplified method (EToMJS) relies solely on Ψair calculated from widely available air temperature (°C) and relative humidity (%) data, which combined with Ra data resulted in reliable estimates of equivalent evaporation (Ee) and ETo. We used regression analyses of Ψair vs EToPM and Ee vs EToPM to calibrate the EToMJS(Ψair) and EToMJS estimates from 2004 to 2014 and between seasons and climatic zones. Finally, we evaluated the performance of the new method based on the coefficient of determination (R2) and correlation (R), index of agreement "d", mean absolute error (MAE) and mean ratio (MR). This evaluation confirmed the suitability of the EToMJS method for application in tropical and subtropical regions, where the climatic information needed for the standard EToPM calculation is absent.
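The atmospheric water potential used by such a simplified method can be computed from air temperature and relative humidity alone, via the standard thermodynamic relation Ψair = (R T / Vw) ln(RH/100). A sketch follows; the constants are standard physical values, and the example inputs are arbitrary rather than taken from the study.

```python
# Sketch: atmospheric water potential from air temperature and relative
# humidity via  Psi_air = (R * T / Vw) * ln(RH / 100).
# Constants are standard; the example inputs are arbitrary.
import numpy as np

def psi_air_mpa(temp_c, rh_percent):
    """Atmospheric water potential (MPa) from T (deg C) and RH (%)."""
    R = 8.314          # gas constant, J mol-1 K-1
    Vw = 1.8e-5        # partial molal volume of water, m3 mol-1
    T = temp_c + 273.15
    return (R * T / Vw) * np.log(rh_percent / 100.0) / 1e6   # Pa -> MPa

print(round(psi_air_mpa(25.0, 60.0), 1))   # roughly -70 MPa at 25 deg C, 60% RH
```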
Mover Position Detection for PMTLM Based on Linear Hall Sensors through EKF Processing
Yan, Leyang; Zhang, Hui; Ye, Peiqing
2017-01-01
Accurate mover position is vital for a permanent magnet tubular linear motor (PMTLM) control system. In this paper, two linear Hall sensors are utilized to detect the mover position. However, Hall sensor signals contain third-order harmonics, creating errors in mover position detection. To filter out the third-order harmonics, a signal processing method based on the extended Kalman filter (EKF) is presented. The limitation of conventional processing method is first analyzed, and then EKF is adopted to detect the mover position. In the EKF model, the amplitude of the fundamental component and the percentage of the harmonic component are taken as state variables, and they can be estimated based solely on the measured sensor signals. Then, the harmonic component can be calculated and eliminated. The proposed method has the advantages of faster convergence, better stability and higher accuracy. Finally, experimental results validate the effectiveness and superiority of the proposed method. PMID:28383505
Multisubstrate biodegradation kinetics of naphthalene, phenanthrene, and pyrene mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guha, S.; Peters, C.A.; Jaffe, P.R.
Biodegradation kinetics of naphthalene, phenanthrene and pyrene were studied in sole-substrate systems, and in binary and ternary mixtures to examine substrate interactions. The experiments were conducted in aerobic batch aqueous systems inoculated with a mixed culture that had been isolated from soils contaminated with polycyclic aromatic hydrocarbons (PAHs). Monod kinetic parameters and yield coefficients for the individual compounds were estimated from substrate depletion and CO2 evolution rate data in sole-substrate experiments. In all three binary mixture experiments, biodegradation kinetics were comparable to the sole-substrate kinetics. In the ternary mixture, biodegradation of naphthalene was inhibited and the biodegradation rates of phenanthrene and pyrene were enhanced. A multisubstrate form of the Monod kinetic model was found to adequately predict substrate interactions in the binary and ternary mixtures using only the parameters derived from sole-substrate experiments. Numerical simulations of biomass growth kinetics explain the observed range of behaviors in PAH mixtures. In general, the biodegradation rates of the more degradable and abundant compounds are reduced due to competitive inhibition, but enhanced biodegradation of the more recalcitrant PAHs occurs due to simultaneous biomass growth on multiple substrates. In PAH-contaminated environments, substrate interactions may be very large due to additive effects from the large number of compounds present.
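One common multisubstrate Monod form couples the compounds competitively through shared biomass, as sketched below with invented parameter values; this illustrates the general model structure rather than the parameters or exact equations of the study.

```python
# Sketch of a multisubstrate Monod model with competitive coupling through
# shared biomass, in one common form:
#   dS_i/dt = -(mu_i / Y_i) * X * S_i / (K_i * (1 + sum_{j!=i} S_j / K_j) + S_i)
#   dX/dt   =  sum_i mu_i * X * S_i / (K_i * (1 + sum_{j!=i} S_j / K_j) + S_i)
# Parameter values are invented for illustration, not those of the study.
import numpy as np
from scipy.integrate import solve_ivp

mu = np.array([0.5, 0.2, 0.05])   # 1/h, e.g. naphthalene, phenanthrene, pyrene
K  = np.array([1.0, 0.3, 0.05])   # half-saturation constants, mg/L
Y  = np.array([0.5, 0.5, 0.5])    # yields, mg biomass / mg substrate

def rhs(t, y):
    S, X = y[:3], y[3]
    inhib = np.array([1 + sum(S[j] / K[j] for j in range(3) if j != i)
                      for i in range(3)])
    rate = mu * X * S / (K * inhib + S)       # growth rate on each substrate
    return np.concatenate([-rate / Y, [rate.sum()]])

sol = solve_ivp(rhs, (0, 100), [5.0, 1.0, 0.1, 0.2], max_step=0.5)
print("final substrate concentrations:", np.round(sol.y[:3, -1], 4))
```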
High Assurance Human-Centric Decision Systems
2013-05-01
of the human operator who is multitasking in this situation. Crandall, Cummings, and Mitchell [7], [8] have introduced "fan-out" models to estimate...planning in multitasking contexts. In the future, we will study extensions of our cognitive model. Currently, the cognitive model is focused solely
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2013 CFR
2013-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
Estimating floodwater depths from flood inundation maps and topography
Cohen, Sagy; Brakenridge, G. Robert; Kettner, Albert; Bates, Bradford; Nelson, Jonathan M.; McDonald, Richard R.; Huang, Yu-Fen; Munasinghe, Dinuke; Zhang, Jiaqi
2018-01-01
Information on flood inundation extent is important for understanding societal exposure, water storage volumes, flood wave attenuation, future flood hazard, and other variables. A number of organizations now provide flood inundation maps based on satellite remote sensing. These data products can efficiently and accurately provide the areal extent of a flood event, but do not provide floodwater depth, an important attribute for first responders and damage assessment. Here we present a new methodology and a GIS-based tool, the Floodwater Depth Estimation Tool (FwDET), for estimating floodwater depth based solely on an inundation map and a digital elevation model (DEM). We compare the FwDET results against water depth maps derived from hydraulic simulation of two flood events, a large-scale event for which we use a medium-resolution input layer (10 m) and a small-scale event for which we use a high-resolution (LiDAR; 1 m) input. Further testing is performed for two inundation maps with a number of challenging features that include a narrow valley, a large reservoir, and an urban setting. The results show that FwDET can accurately calculate floodwater depth for diverse flooding scenarios but also leads to considerable bias in locations where the inundation extent does not align well with the DEM. In these locations, manual adjustment or higher spatial resolution input is required.
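A minimal sketch of the core idea (not the published FwDET code): assign each flooded cell the DEM elevation of its nearest inundation-boundary cell as a local water-surface elevation, then subtract the cell's own elevation to obtain depth.

```python
import numpy as np
from scipy import ndimage

def floodwater_depth(dem, flood_mask):
    """dem: 2-D array of ground elevations (m);
    flood_mask: boolean array, True where the inundation map shows water.
    Returns estimated water depths (m), zero outside the flood."""
    # Boundary cells: flooded cells adjacent to at least one dry cell
    dry = ~flood_mask
    boundary = flood_mask & ndimage.binary_dilation(dry)
    # For every cell, find the indices of the nearest boundary cell
    _, (iy, ix) = ndimage.distance_transform_edt(~boundary, return_indices=True)
    # Local water-surface elevation = DEM elevation of the nearest boundary cell
    wse = dem[iy, ix]
    return np.where(flood_mask, np.maximum(wse - dem, 0.0), 0.0)

# Toy example: a tilted plane flooded below elevation 5 m
dem = np.tile(np.arange(10.0), (10, 1))
mask = dem < 5.0
print(floodwater_depth(dem, mask).max())   # deepest cell of the toy flood
```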
NASA Astrophysics Data System (ADS)
Cave, Robert J.; Newton, Marshall D.
1996-01-01
A new method for the calculation of the electronic coupling matrix element for electron transfer processes is introduced and results for several systems are presented. The method can be applied to ground and excited state systems and can be used in cases where several states interact strongly. Within the set of states chosen it is a non-perturbative treatment, and can be implemented using quantities obtained solely in terms of the adiabatic states. Several applications based on quantum chemical calculations are briefly presented. Finally, since quantities for adiabatic states are the only input to the method, it can also be used with purely experimental data to estimate electron transfer matrix elements.
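For the two-state case, the coupling can be written entirely in terms of adiabatic quantities (vertical energy gap, transition dipole moment, and difference of adiabatic dipole moments along the charge-transfer axis), a form commonly associated with the generalized Mulliken-Hush treatment. A sketch, assuming consistent (e.g., atomic) units:

```python
import math

def gmh_coupling(delta_e12, mu12, dmu12):
    """Two-state generalized Mulliken-Hush electronic coupling.
    delta_e12 : adiabatic vertical energy gap E2 - E1
    mu12      : adiabatic transition dipole moment along the charge-transfer axis
    dmu12     : difference of adiabatic state dipole moments along the same axis
    All quantities in consistent (e.g., atomic) units."""
    return abs(mu12) * abs(delta_e12) / math.sqrt(dmu12 ** 2 + 4.0 * mu12 ** 2)

# Example with illustrative numbers (atomic units)
print(gmh_coupling(delta_e12=0.10, mu12=0.50, dmu12=3.0))
```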
BFEE: A User-Friendly Graphical Interface Facilitating Absolute Binding Free-Energy Calculations.
Fu, Haohao; Gumbart, James C; Chen, Haochuan; Shao, Xueguang; Cai, Wensheng; Chipot, Christophe
2018-03-26
Quantifying protein-ligand binding has attracted the attention of both theorists and experimentalists for decades. Many methods for estimating binding free energies in silico have been reported in recent years. Proper use of the proposed strategies requires, however, adequate knowledge of the protein-ligand complex, the mathematical background for deriving the underlying theory, and time for setting up the simulations, bookkeeping, and postprocessing. Here, to minimize human intervention, we propose a toolkit aimed at facilitating the accurate estimation of standard binding free energies using a geometrical route, coined the binding free-energy estimator (BFEE), and introduce it as a plug-in of the popular visualization program VMD. Benefitting from recent developments in new collective variables, BFEE can be used to generate the simulation input files based solely on the structure of the complex. Once the simulations are completed, BFEE can also be utilized to perform the post-treatment of the free-energy calculations, allowing the absolute binding free energy to be estimated directly from the one-dimensional potentials of mean force in simulation outputs. The minimal amount of human intervention required during the whole process, combined with the ergonomic graphical interface, makes BFEE a very effective and practical tool for the end-user.
Fluorine Abundances in AGB Carbon Stars: New Results?
NASA Astrophysics Data System (ADS)
Abia, C.; de Laverny, P.; Recio-Blanco, A.; Domínguez, I.; Cristallo, S.; Straniero, O.
2009-09-01
A recent reanalysis of the fluorine abundance in three Galactic Asymptotic Giant Branch (AGB) carbon stars (TX Psc, AQ Sgr and R Scl) by Abia et al. (2009) results in estimates of fluorine abundances systematically lower by ~0.8 dex on average with respect to the sole previous estimates by Jorissen, Smith & Lambert (1992). The new F abundances are in better agreement with the predictions of full-network stellar models of low-mass (<3 M⊙) AGB stars.
Estimating Adolescent Risk for Hearing Loss Based on Data From a Large School-Based Survey
Verschuure, Hans; van der Ploeg, Catharina P. B.; Brug, Johannes; Raat, Hein
2010-01-01
Objectives. We estimated whether and to what extent a group of adolescents were at risk of developing permanent hearing loss as a result of voluntary exposure to high-volume music, and we assessed whether such exposure was associated with hearing-related symptoms. Methods. In 2007, 1512 adolescents (aged 12–19 years) in Dutch secondary schools completed questionnaires about their music-listening behavior and whether they experienced hearing-related symptoms after listening to high-volume music. We used their self-reported data in conjunction with published average sound levels of music players, discotheques, and pop concerts to estimate their noise exposure, and we compared that exposure to our own “loosened” (i.e., less strict) version of current European safety standards for occupational noise exposure. Results. About half of the adolescents exceeded safety standards for occupational noise exposure. About one third of the respondents exceeded safety standards solely as a result of listening to MP3 players. Hearing symptoms that occurred after using an MP3 player or going to a discotheque were associated with exposure to high-volume music. Conclusions. Adolescents often exceeded current occupational safety standards for noise exposure, highlighting the need for specific safety standards for leisure-time noise exposure. PMID:20395587
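A sketch of the kind of equal-energy bookkeeping such comparisons rest on: converting a day of listening episodes into an 8-hour equivalent continuous sound level and comparing it with an assumed exposure limit. The sound levels and the 80 dB(A) limit below are placeholders, not the study's loosened standard:

```python
import math

def leq_8h(episodes):
    """Equal-energy 8-hour equivalent continuous sound level, LEX,8h.
    episodes: list of (level_dBA, duration_hours) pairs for one day."""
    t0 = 8.0
    energy = sum(t * 10 ** (level / 10.0) for level, t in episodes)
    return 10.0 * math.log10(energy / t0)

# Illustrative day: 2 h of MP3 player at 89 dB(A) plus 3 h at a venue at 100 dB(A)
day = [(89.0, 2.0), (100.0, 3.0)]
lex = leq_8h(day)
print(round(lex, 1), "dB(A);", "exceeds an 80 dB(A) limit" if lex > 80 else "within limit")
```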
AMS of the Minor Plutonium Isotopes
NASA Astrophysics Data System (ADS)
Steier, P.; Hrnecek, E.; Priller, A.; Quinto, F.; Srncik, M.; Wallner, A.; Wallner, G.; Winkler, S.
2013-01-01
VERA, the Vienna Environmental Research Accelerator, is especially equipped for the measurement of actinides, and performs a growing number of measurements on environmental samples. While AMS is not the optimum method for each particular plutonium isotope, the possibility to measure 239Pu, 240Pu, 241Pu, 242Pu and 244Pu on the same AMS sputter target is a great simplification. We have obtained a first result on the global fallout value of 244Pu/239Pu = (5.7 ± 1.0) × 10-5 based on soil samples from Salzburg prefecture, Austria. Furthermore, we suggest using the 242Pu/240Pu ratio as an estimate of the initial 241Pu/239Pu ratio, which allows dating of the time of irradiation based solely on Pu isotopes. We have checked the validity of this estimate using literature data, simulations, and environmental samples from soil from the Salzburg prefecture (Austria), from the shut down Garigliano Nuclear Power Plant (Sessa Aurunca, Italy) and from the Irish Sea near the Sellafield nuclear facility. The maximum deviation of the estimated dates from the expected ages is 6 years, while relative dating of material from the same source seems to be possible with a precision of less than 2 years. Additional information carried by the minor plutonium isotopes may allow further improvements of the precision of the method.
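A sketch of the dating idea suggested above, assuming the measured 242Pu/240Pu ratio approximates the initial 241Pu/239Pu ratio and that 239Pu decay is negligible over these time scales; the only nuclear constant used is an approximate 241Pu half-life of 14.3 years:

```python
import math

T_HALF_PU241 = 14.3  # years, approximate 241Pu half-life

def irradiation_age(ratio_241_239_now, ratio_242_240):
    """Years since irradiation, assuming the measured 242Pu/240Pu ratio is a
    proxy for the initial 241Pu/239Pu ratio and 239Pu decay is negligible."""
    lam = math.log(2.0) / T_HALF_PU241
    return math.log(ratio_242_240 / ratio_241_239_now) / lam

# Illustrative numbers only
print(round(irradiation_age(ratio_241_239_now=0.002, ratio_242_240=0.02), 1), "years")
```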
USDA-ARS?s Scientific Manuscript database
Watershed models typically are evaluated solely through comparison of in-stream water and nutrient fluxes with measured data using established performance criteria, whereas processes and responses within the interior of the watershed that govern these global fluxes often are neglected. Due to the l...
On-line estimation of suspended solids in biological reactors of WWTPs using a Kalman observer.
Beltrán, S; Irizar, I; Monclús, H; Rodríguez-Roda, I; Ayesa, E
2009-01-01
The total amount of solids in Wastewater Treatment Plants (WWTPs) and their distribution among the different elements and lines play a crucial role in the stability, performance and operational costs of the process. However, an accurate prediction of the evolution of solids concentration in the different elements of a WWTP is not a straightforward task. This paper presents the design, development and validation of a generic Kalman observer for the on-line estimation of solids concentration in the tank reactors of WWTPs. The proposed observer is based on the fact that the information about the evolution of the total amount of solids in the plant can be supplied by the available on-line Suspended Solids (SS) analysers, while their distribution can be simultaneously estimated from the hydraulic pattern of the plant. The proposed observer has been applied to the on-line estimation of SS in the reactors of a pilot-scale Membrane Bio-Reactor (MBR). The results obtained have shown that the experimental information supplied by a sole on-line SS analyser located in the first reactor of the pilot plant, in combination with updated information about internal flow rates data, has been able to give a reasonable estimation of the evolution of the SS concentration in all the tanks.
Levin, S G; Young, R W; Stohler, R L
1992-11-01
This paper presents an estimate of the median lethal dose for humans exposed to total-body irradiation and not subsequently treated for radiation sickness. The median lethal dose was estimated from calculated doses to young adults who were inside two reinforced concrete buildings that remained standing in Nagasaki after the atomic detonation. The individuals in this study, none of whom have previously had calculated doses, were identified from a detailed survey done previously. Radiation dose to the bone marrow, which was taken as the critical radiation site, was calculated for each individual by the Engineering Physics and Mathematics Division of the Oak Ridge National Laboratory using a new three-dimensional discrete-ordinates radiation transport code that was developed and validated for this study using the latest site geometry, radiation yield, and spectra data. The study cohort consisted of 75 individuals who either survived > 60 d or died between the second and 60th d postirradiation due to radiation injury, without burns or other serious injury. Median lethal dose estimates were calculated using both logarithmic (2.9 Gy) and linear (3.4 Gy) dose scales. Both calculations, which met statistical validity tests, support previous estimates of the median lethal dose based solely on human data, which cluster around 3 Gy.
Ding, Xiaorong; Yan, Bryan P; Zhang, Yuan-Ting; Liu, Jing; Zhao, Ni; Tsang, Hon Ki
2017-09-14
Cuffless techniques enable continuous blood pressure (BP) measurement in an unobtrusive manner and thus have the potential to revolutionize conventional cuff-based approaches. This study extends the pulse transit time (PTT) based cuffless BP measurement method by introducing a new indicator, the photoplethysmogram (PPG) intensity ratio (PIR). The performance of the models with PTT and PIR was comprehensively evaluated in comparison with six models that are based on PTT alone. The validation was conducted on 33 subjects with and without hypertension, at rest and under various maneuvers with induced BP changes, and over an extended calibration interval. The results showed that, compared to the PTT models, the proposed methods achieved better accuracy for each subject group at rest and over a 24-hour calibration interval. Although the BP estimation errors under dynamic maneuvers and over the extended calibration interval increased significantly for all methods, the proposed methods still outperformed the compared methods in the latter situation. These findings suggest that an additional BP-related indicator other than PTT has added value for improving the accuracy of cuffless BP measurement. This study also offers insights into future research in cuffless BP measurement for tracking dynamic BP changes and over extended periods of time.
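One published formulation of a PTT-plus-PIR model tracks diastolic pressure with PIR and pulse pressure with PTT after a single cuff calibration; the sketch below follows that general form with illustrative calibration values and may differ in detail from the models evaluated here:

```python
def bp_from_ptt_pir(ptt, pir, calib):
    """Cuffless BP estimate combining pulse transit time (PTT) and the
    PPG intensity ratio (PIR), calibrated against one cuff reading.
    calib: dict with keys ptt0, pir0, sbp0, dbp0 from the calibration instant."""
    dbp = calib["dbp0"] * calib["pir0"] / pir                          # DBP tracked by PIR
    pp = (calib["sbp0"] - calib["dbp0"]) * (calib["ptt0"] / ptt) ** 2  # pulse pressure tracked by PTT
    return dbp + pp, dbp   # (SBP, DBP)

calib = {"ptt0": 0.25, "pir0": 1.8, "sbp0": 118.0, "dbp0": 75.0}
print(bp_from_ptt_pir(ptt=0.22, pir=1.9, calib=calib))
```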
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced.
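A small numerical sketch of the quantity under discussion: for a univariate model y = Xb + Zu + e with one random effect, the prediction error variance-covariance matrix of the predicted breeding values is the random-effect block of the inverse mixed-model coefficient matrix scaled by the residual variance. The toy design below is illustrative only:

```python
import numpy as np

def pev_matrix(X, Z, Ainv, sigma2_e, sigma2_u):
    """Prediction error variance-covariance matrix of BLUP(u) for
    y = Xb + Zu + e with var(u) = A*sigma2_u and var(e) = I*sigma2_e."""
    lam = sigma2_e / sigma2_u
    top = np.hstack([X.T @ X, X.T @ Z])
    bottom = np.hstack([Z.T @ X, Z.T @ Z + lam * Ainv])
    C = np.vstack([top, bottom])          # mixed-model coefficient matrix
    Cinv = np.linalg.inv(C)
    q = Z.shape[1]
    return sigma2_e * Cinv[-q:, -q:]      # random-effect block

# Toy example: 2 contemporary groups (fixed), 3 animals (random, unrelated so A = I)
X = np.array([[1, 0], [1, 0], [0, 1], [0, 1.]])
Z = np.array([[1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1.]])
print(pev_matrix(X, Z, np.eye(3), sigma2_e=1.0, sigma2_u=0.5))
```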
Vergara, P; Fargallo, J A; Martínez-Padilla, J
2015-01-01
Knowledge of the genetic basis of sexual ornaments is essential to understand their evolution through sexual selection. Although carotenoid-based ornaments have been instrumental in the study of sexual selection, given the inability of animals to synthesize carotenoids de novo, they are generally assumed to be influenced solely by environmental variation. However, very few studies have directly estimated the role of genes and the environment in shaping variation in carotenoid-based traits. Using long-term individual-based data, we here explore the evolutionary potential of a dynamic, carotenoid-based ornament (namely skin coloration) in male and female common kestrels. We first estimate the amount of genetic variation underlying variation in hue, chroma and brightness. After correcting for sex differences, the chroma of the orange-yellow eye-ring coloration was significantly heritable (h2 ± SE = 0.40 ± 0.17), whereas neither hue (h2 = 0) nor brightness (h2 = 0.02) was heritable. Second, we estimate the strength and shape of selection acting upon chromatic (hue and chroma) and achromatic (brightness) variation and show positive and negative directional selection on female but not male chroma and hue, respectively, whereas brightness was unrelated to fitness in both sexes. This suggests that different components of carotenoid-based signals may show different evolutionary dynamics. Overall, we show that carotenoid-based coloration is a complex and multifaceted trait. If we are to gain a better understanding of the processes responsible for the generation and maintenance of variation in carotenoid-based coloration, these complexities need to be taken into account. © 2014 European Society for Evolutionary Biology.
Zimmerman, Guthrie S.; Sauer, John; Boomer, G. Scott; Devers, Patrick K.; Garrettson, Pamela R.
2017-01-01
The U.S. Fish and Wildlife Service (USFWS) uses data from the North American Breeding Bird Survey (BBS) to assist in monitoring and management of some migratory birds. However, BBS analyses provide indices of population change rather than estimates of population size, precluding their use in developing abundance-based objectives and limiting applicability to harvest management. Wood Ducks (Aix sponsa) are important harvested birds in the Atlantic Flyway (AF) that are difficult to detect during aerial surveys because they prefer forested habitat. We integrated Wood Duck count data from a ground-plot survey in the northeastern U.S. with AF-wide BBS, banding, parts collection, and harvest data to derive estimates of population size for the AF. Overlapping results between the smaller-scale intensive ground-plot survey and the BBS in the northeastern U.S. provided a means for scaling BBS indices to the breeding population size estimates. We applied these scaling factors to BBS results for portions of the AF lacking intensive surveys. Banding data provided estimates of annual survival and harvest rates; the latter, when combined with parts-collection data, provided estimates of recruitment. We used the harvest data to estimate fall population size. Our estimates of breeding population size and variability from the integrated population model (N̄ = 0.99 million, SD = 0.04) were similar to estimates of breeding population size based solely on data from the AF ground-plot surveys and the BBS (N̄ = 1.01 million, SD = 0.04) from 1998 to 2015. Integrating BBS data with other data provided reliable population size estimates for Wood Ducks at a scale useful for harvest and habitat management in the AF, and allowed us to derive estimates of important demographic parameters (e.g., seasonal survival rates, sex ratio) that were not directly informed by data.
Assessing Potential Additional PFAS Retention Processes in the Subsurface
NASA Astrophysics Data System (ADS)
Brusseau, M. L.
2017-12-01
Understanding the transport and fate of per- and poly-fluorinated alkyl substances (PFASs) in the subsurface is critical for accurate risk assessments and design of effective remedial actions. Current conceptual and mathematical models are based on an assumption that solid-phase adsorption is the sole source of retention for PFASs. However, additional retention processes may be relevant for PFAS compounds in vadose-zone systems and in source zones that contain trapped immiscible organic liquids. These include adsorption at the air-water interface, partitioning to the soil atmosphere, adsorption at the NAPL-water interface, and absorption by NAPL. A multi-process retention model is proposed to account for these potential additional sources of PFAS retardation. An initial assessment of the relative magnitudes and significance of these retention processes was conducted for three representative PFASs, perfluorooctanoic acid (PFOA), perfluorooctane sulfonate (PFOS), and 8:2 fluorotelomer alcohol (FTOH). Data collected from the literature were used to determine measured or estimated values for the relevant distribution coefficients, which were in turn used to calculate retardation factors for a representative porous medium. Adsorption at the air-water interface was shown to be a primary source of retention for PFOA and PFOS, contributing approximately 80% of total retardation. Adsorption to NAPL-water interfaces and absorption by bulk NAPL were also shown to be significant sources of retention for PFOS and PFOA. The latter process was the predominant source of retention for 8:2 FTOH, contributing 98% of total retardation. These results indicate that we may anticipate significant retention of PFASs by these additional processes. In such cases, retardation of PFASs in source areas may be significantly greater than what is typically estimated based on the standard assumption of solid-phase adsorption as the sole retention mechanism. This has significant ramifications for accurate determination of the migration potential and magnitude of mass flux to groundwater, as well as for calculations of contaminant mass residing in source zones.
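A sketch of an additive multi-process retardation factor of the kind proposed above: the familiar solid-phase sorption term plus terms for air-water interfacial adsorption, NAPL-water interfacial adsorption, bulk-NAPL absorption and soil-gas partitioning; the symbols and the example values are illustrative assumptions, not coefficients from the study:

```python
def pfas_retardation(theta_w, rho_b, Kd, Aaw=0.0, Kaw=0.0,
                     Anw=0.0, Kia_nw=0.0, theta_n=0.0, Knw=0.0,
                     theta_a=0.0, H=0.0):
    """Additive multi-process retardation factor (illustrative form):
      solid-phase sorption      rho_b * Kd
      air-water interface       Kaw * Aaw     (Aaw = interfacial area per volume)
      NAPL-water interface      Kia_nw * Anw
      NAPL bulk partitioning    Knw * theta_n
      soil-gas partitioning     H * theta_a   (H = dimensionless Henry constant)
    all normalized by the volumetric water content theta_w."""
    return 1.0 + (rho_b * Kd + Kaw * Aaw + Kia_nw * Anw
                  + Knw * theta_n + H * theta_a) / theta_w

# Illustrative vadose-zone values (not measurements from the study)
print(pfas_retardation(theta_w=0.15, rho_b=1.6, Kd=0.1,
                       Aaw=200.0, Kaw=0.002, theta_a=0.25, H=1e-5))
```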
NASA Astrophysics Data System (ADS)
Hurtado, C.; Bailey, C.; Visokay, L.; Scharf, A.
2017-12-01
The Semail ophiolite is the world's largest and best-exposed ophiolite sequence; however, the processes associated with both oceanic detachment and later emplacement onto the Arabian continental margin remain enigmatic. This study examines the upper mantle section of the ophiolite, its associated metamorphic sole, and the autochthonous strata beneath the ophiolite at two locations in northern Oman. Our purpose is to understand the structural history of ophiolite emplacement and evaluate the deformation kinematics of faulted and sheared rocks in the metamorphic sole. At Wadi Hawasina, the base of the ophiolite is defined by a 5- to 15-m thick zone of penetratively-serpentinized mylonitic peridotite. Kinematic indicators record top-to-the-SW (reverse) sense of shear with a triclinic deformation asymmetry. An inverted metamorphic grade is preserved in the 300- to 500-m thick metamorphic sole that is thrust over deep-water sedimentary rocks of the Hawasina Group. The study site near Buwah, in the northern Jebel Nakhl culmination, contains a N-to-S progression of mantle peridotite, metamorphic sole, and underlying Jurassic carbonates. Liswanite crops out in NW-SE trending linear ridges in the peridotite. The metamorphic sole includes well-foliated quartzite, metachert, and amphibolite. Kinematic evidence indicates that the liswanite and a serpentinized mélange experienced top-to-the-north (normal) sense of shear. Two generations of E-W striking, N-dipping normal faults separate the autochthonous sequence from the metamorphic sole, and also cut out significant sections of the metamorphic sole. Fabric analysis reveals that the metamorphic sole experienced flattening strain (K<0.2) that accumulated during pure shear-dominated general shear (Wk<0.4). Normal faulting and extension at the Buwah site indicate that post-ophiolite deformation is significant in the Jebel Akhdar and Jebel Nakhl culminations.
Joint amalgamation of most parsimonious reconciled gene trees
Scornavacca, Celine; Jacox, Edwin; Szöllősi, Gergely J.
2015-01-01
Motivation: Traditionally, gene phylogenies have been reconstructed solely on the basis of molecular sequences; this, however, often does not provide enough information to distinguish between statistically equivalent relationships. To address this problem, several recent methods have incorporated information on the species phylogeny in gene tree reconstruction, leading to dramatic improvements in accuracy. Probabilistic methods are able to estimate all model parameters but are computationally expensive, whereas parsimony methods, which are generally more computationally efficient, require a prior estimate of parameters and of the statistical support. Results: Here, we present the Tree Estimation using Reconciliation (TERA) algorithm, a parsimony-based, species-tree-aware method for gene tree reconstruction based on a scoring scheme combining duplication, transfer and loss costs with an estimate of the sequence likelihood. TERA explores all reconciled gene trees that can be amalgamated from a sample of gene trees. Using a large-scale simulated dataset, we demonstrate that TERA achieves the same accuracy as the corresponding probabilistic method while being faster, and outperforms other parsimony-based methods in both accuracy and speed. Running TERA on a set of 1099 homologous gene families from complete cyanobacterial genomes, we find that incorporating knowledge of the species tree results in a two-thirds reduction in the number of apparent transfer events. Availability and implementation: The algorithm is implemented in our program TERA, which is freely available from http://mbb.univ-montp2.fr/MBB/download_sources/16__TERA. Contact: celine.scornavacca@univ-montp2.fr, ssolo@angel.elte.hu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25380957
[Economic impact of chronic, acute and global malnutrition in Peru].
Alcázar, Lorena; Ocampo, Diego; Huamán-Espino, Lucio; Pablo Aparco, Juan
2013-01-01
To estimate the economic impact of chronic, acute and global malnutrition in Peru. Through an econometric model, this study estimated the economic impact of child malnutrition over two time horizons (incidental retrospective and prospective) for 2011, considering malnutrition-associated costs in health, education and productivity for the Peruvian economy. The information used combines data from the Demographic and Family Health Survey, the National Household Survey, the 2007 Census of Population and Housing, and public budget information, as well as estimates of the risks a child is exposed to due to malnutrition during the first years of life. Nationwide, under the retrospective perspective, the cost of child malnutrition in 2011 was 10,999 million soles, equal to 2.2% of GDP for that year. Prospective costs nationwide, for those who in 2011 were 0 to 59 months old, reached 4,505 million soles and represented 0.9% of GDP in 2011. In both cases, most of the cost stems from productivity losses. Moreover, malnutrition affects the Andean and jungle regions much more strongly. The economic impact of child malnutrition represents a significant percentage of GDP, so it is necessary to continue investing equitably in its prevention through interventions of proven effectiveness.
Should fatty acid signature proportions sum to 1 for diet estimation?
Bromaghin, Jeffrey F.; Budge, Suzanne M.; Thiemann, Gregory W.
2016-01-01
Knowledge of predator diets, including how diets might change through time or differ among predators, provides essential insights into their ecology. Diet estimation therefore remains an active area of research within quantitative ecology. Quantitative fatty acid signature analysis (QFASA) is an increasingly common method of diet estimation. QFASA is based on a data library of prey signatures, which are vectors of proportions summarizing the fatty acid composition of lipids, and diet is estimated as the mixture of prey signatures that most closely approximates a predator’s signature. Diets are typically estimated using proportions from a subset of all fatty acids that are known to be solely or largely influenced by diet. Given the subset of fatty acids selected, the current practice is to scale their proportions to sum to 1.0. However, scaling signature proportions has the potential to distort the structural relationships within a prey library and between predators and prey. To investigate that possibility, we compared the practice of scaling proportions with two alternatives and found that the traditional scaling can meaningfully bias diet estimators under some conditions. Two aspects of the prey types that contributed to a predator’s diet influenced the magnitude of the bias: the degree to which the sums of unscaled proportions differed among prey types and the identifiability of prey types within the prey library. We caution investigators against the routine scaling of signature proportions in QFASA.
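A minimal sketch of the QFASA estimation step: find the convex mixture of prey mean signatures closest to the predator signature, here with a Kullback-Leibler-type distance and an SLSQP solver; the published QFASA procedure additionally applies calibration coefficients and other refinements not shown:

```python
import numpy as np
from scipy.optimize import minimize

def qfasa_diet(predator_sig, prey_sigs):
    """Estimate diet proportions by matching the predator fatty acid signature
    with a convex combination of prey mean signatures.
    predator_sig: (n_fa,) proportions; prey_sigs: (n_prey, n_fa) proportions."""
    n_prey = prey_sigs.shape[0]
    eps = 1e-10

    def distance(p):
        mix = p @ prey_sigs + eps
        return np.sum(predator_sig * np.log((predator_sig + eps) / mix))  # KL-type divergence

    x0 = np.full(n_prey, 1.0 / n_prey)
    res = minimize(distance, x0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * n_prey,
                   constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
    return res.x

# Toy example with 3 prey types and 4 fatty acids
prey = np.array([[0.50, 0.30, 0.10, 0.10],
                 [0.10, 0.20, 0.40, 0.30],
                 [0.25, 0.25, 0.25, 0.25]])
predator = 0.6 * prey[0] + 0.4 * prey[1]
print(np.round(qfasa_diet(predator, prey), 2))   # roughly [0.6, 0.4, 0.0]
```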
2014-01-01
Background: While the neural and mechanical effects of whole-nerve cutaneous stimulation on human locomotion have been previously studied, there is less information about effects evoked by activation of discrete skin regions on the sole of the foot. Electrical stimulation of discrete foot regions evokes position-modulated patterns of cutaneous reflexes in muscles acting at the ankle during standing, but data during walking are lacking. Methods: Non-noxious electrical stimulation was delivered to five discrete locations on the sole of the foot (heel, and medial and lateral sites on the midfoot and forefoot) during treadmill walking. EMG activity from muscles acting at the hip, knee and ankle was recorded along with movement at these three joints. Additionally, three force-sensing resistors measuring continuous force changes were placed at the heel and at the medial and lateral aspects of the right foot sole. All data were sorted based on stimulus occurrence into twelve step-cycle phases before being averaged together within a phase for subsequent analysis. Results: The results demonstrate statistically significant dynamic changes in reflex amplitudes, kinematics and foot sole pressures that are site-specific and phase-dependent. The general trends demonstrate responses producing decreased underfoot pressure at the site of stimulation. Conclusions: The responses to stimulation of discrete locations on the foot sole evoke a kind of "sensory steering" that may promote balance and maintenance of locomotion through the modulation of limb loading and foot placement. These results have implications for using sensory stimulation as a therapeutic modality during gait retraining (e.g. after stroke) as well as for footwear design and implementation of foot sole contact surfaces during gait. PMID:25202452
Rodríguez-Entrena, Macario; Schuberth, Florian; Gelhard, Carsten
2018-01-01
Structural equation modeling using partial least squares (PLS-SEM) has become a mainstream modeling approach in various disciplines. Nevertheless, prior literature still lacks practical guidance on how to properly test for differences between parameter estimates. Whereas existing techniques such as parametric and non-parametric approaches in PLS multi-group analysis allow only the assessment of differences between parameters that are estimated for different subpopulations, the study at hand introduces a technique that also allows assessing whether two parameter estimates that are derived from the same sample are statistically different. To illustrate this advancement to PLS-SEM, we particularly refer to a reduced version of the well-established technology acceptance model.
De Campeneere, S; Fiems, L O; Van de Voorde, G; Vanacker, J M; Boucque, C V; Demeyer, D I
1999-01-01
Characteristics from the 8th rib cut (chemical composition, tissue composition after dissection, specific gravity (SG) and m. longissimus thoracis (LT) composition), collected on 17 Belgian Blue double-muscled fattening bulls, were used to generate equations for predicting chemical carcass composition. Carcass composition was best predicted from chemical analysis of the 8th rib cut and the empty body weight (EBW) of the bull. Carcass chemical fat content (CCF, kg) was predicted from the 8th rib cut fat content (ether extract, 8RF, kg) by the following regression: CCF = 1.94 + 27.37 8RF (R2 = 0.957, RSD = 9.89%). A higher coefficient was found for carcass water (CCW, kg) predicted from 8RF and EBW: CCW = -2.26 + 0.28 EBW - 34.28 8RF (R2 = 0.997, RSD = 1.48%). No parameter was found to improve the prediction of carcass protein (CCP, kg) from EBW alone: CCP = -0.86 + 0.08 EBW (R2 = 0.992, RSD = 2.61%). Prediction equations based solely on LT composition had low R2 values of between 0.38 and 0.67, whereas no significant equations were found using SG. However, equations based on EBW had R2 values between 0.78 and 0.99. Chemical components of the 8th rib cut in combination with EBW are most useful in predicting the chemical composition of the carcass of Belgian Blue double-muscled bulls.
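The reported prediction equations, written out as a small helper with the coefficients copied from the abstract (inputs and outputs in kg):

```python
def predict_carcass_composition(rib_fat_kg, empty_body_weight_kg):
    """Prediction equations reported for Belgian Blue double-muscled bulls.
    rib_fat_kg: ether-extract fat in the 8th rib cut (8RF, kg);
    empty_body_weight_kg: EBW (kg). Returns (fat, water, protein) in kg."""
    ccf = 1.94 + 27.37 * rib_fat_kg                                  # carcass chemical fat
    ccw = -2.26 + 0.28 * empty_body_weight_kg - 34.28 * rib_fat_kg   # carcass water
    ccp = -0.86 + 0.08 * empty_body_weight_kg                        # carcass protein
    return ccf, ccw, ccp

print(predict_carcass_composition(rib_fat_kg=0.9, empty_body_weight_kg=600.0))
```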
Ectotherm thermal stress and specialization across altitude and latitude.
Buckley, Lauren B; Miller, Ethan F; Kingsolver, Joel G
2013-10-01
Gradients of air temperature, radiation, and other climatic factors change systematically but differently with altitude and latitude. We explore how these factors combine to produce altitudinal and latitudinal patterns of body temperature, thermal stress, and seasonal overlap that differ markedly from patterns based solely on air temperature. We use biophysical models to estimate body temperature as a function of an organism's phenotype and environmental conditions (air and surface temperatures and radiation). Using grasshoppers as a case study, we compare mean body temperatures and the incidence of thermal extremes along altitudinal gradients both under past and current climates. Organisms at high elevation can experience frequent thermal stress despite generally cooler air temperatures due to high levels of solar radiation. Incidences of thermal stress have increased more rapidly than have increases in mean conditions due to recent climate change. Increases in air temperature have coincided with shifts in cloudiness and solar radiation, which can exacerbate shifts in body temperature. We compare altitudinal thermal gradients and their seasonality between tropical and temperate mountains to ask whether mountain passes pose a greater physiological barrier in the tropics (Janzen's hypothesis). We find that considering body temperature rather than air temperature generally increases the amount of overlap in thermal conditions along gradients in elevation and thus decreases the physiological barrier posed by tropical mountains. Our analysis highlights the limitations of predicting thermal stress based solely on air temperatures, and the importance of considering how phenotypes influence body temperatures.
NASA Astrophysics Data System (ADS)
Rioux, Matthew; Garber, Joshua; Bauer, Ann; Bowring, Samuel; Searle, Michael; Kelemen, Peter; Hacker, Bradley
2016-10-01
The Semail (Oman-United Arab Emirates) and other Tethyan-type ophiolites are underlain by a sole consisting of greenschist- to granulite-facies metamorphic rocks. As preserved remnants of the underthrust plate, sole exposures can be used to better understand the formation and obduction of ophiolites. Early models envisioned that the metamorphic sole of the Semail ophiolite formed as a result of thrusting of the hot ophiolite lithosphere over adjacent oceanic crust during initial emplacement; however, calculated pressures from granulite-facies mineral assemblages in the sole suggest the metamorphic rocks formed at >35 km depth, and are too high to be explained by the currently preserved thickness of ophiolite crust and mantle (up to 15-20 km). We have used high-precision U-Pb zircon dating to study the formation and evolution of the metamorphic sole at two well-studied localities. Our previous research and new results show that the ophiolite crust formed from 96.12-95.50 Ma. Our new dates from the Sumeini and Wadi Tayin sole localities indicate peak metamorphism at 96.16 and 94.82 Ma (±0.022 to 0.035 Ma), respectively. The dates from the Sumeini sole locality show for the first time that the metamorphic rocks formed either prior to or during formation of the ophiolite crust, and were later juxtaposed with the base of the ophiolite. These data, combined with existing geochemical constraints, are best explained by formation of the ophiolite in a supra-subduction zone setting, with metamorphism of the sole rocks occurring in a subducted slab. The 1.3 Ma difference between the Wadi Tayin and Sumeini dates indicates that, in contrast to current models, the highest-grade rocks at different sole localities underwent metamorphism, and may have returned up the subduction channel, at different times.
Since the late 1950's, more than 750 million tons of toxic wastes have been discarded in an estimated 30,000 to 50,000 hazardous waste sites (HWS). The uncontrolled discarding of chemical wastes creates the potential for risks to human health. Utilizing the National Priorities Listin...
ERIC Educational Resources Information Center
Sandy, Robert; Elliott, Robert R.
2005-01-01
Long-term illness (LTI) is a more prevalent workplace risk than fatal accidents but there is virtually no evidence for compensating differentials for a broad measure of LTI. In 1990 almost 3.4 percent of the U.K. adult population suffered from a LTI caused solely by their working conditions. This paper provides the first estimates of compensating…
Mathematical model for Trametes versicolor growth in submerged cultivation.
Tisma, Marina; Sudar, Martina; Vasić-Racki, Durda; Zelić, Bruno
2010-08-01
Trametes versicolor is a white-rot fungus known as a producer of extracellular enzymes such as laccase, manganese peroxidase, and lignin peroxidase. The production of these enzymes requires detailed knowledge of the growth characteristics and physiology of the fungus. Submerged cultivations of T. versicolor on glucose, fructose, and sucrose as sole carbon sources were performed in shake flasks. Sucrose hydrolysis catalyzed by whole cells of T. versicolor was treated as a one-step enzymatic reaction described by Michaelis-Menten kinetics. Kinetic parameters of the invertase-catalyzed sucrose hydrolysis were estimated (K_m = 7.99 g dm(-3) and V_m = 0.304 h(-1)). The Monod model was used to describe the kinetics of T. versicolor growth on glucose and fructose as sole carbon sources. Growth-associated model parameters were estimated from experimental results obtained in independent experiments (mu_max,G = 0.14 h(-1), K_S,G = 8.06 g dm(-3), mu_max,F = 0.37 h(-1) and K_S,F = 54.8 g dm(-3)). The developed mathematical model is in good agreement with the experimental results.
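A sketch of how the reported kinetic structure could be assembled into one model: Michaelis-Menten hydrolysis of sucrose into glucose and fructose followed by Monod growth on each sugar, using the parameter estimates quoted above. Interpreting V_m as biomass-specific, the 0.53 g/g hydrolysis stoichiometry and the yield coefficients are assumptions of this sketch, not values from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameter estimates quoted in the abstract
Km, Vm = 7.99, 0.304            # sucrose hydrolysis: g dm-3, h-1 (taken as biomass-specific)
muG, KG = 0.14, 8.06            # growth on glucose:  h-1, g dm-3
muF, KF = 0.37, 54.8            # growth on fructose: h-1, g dm-3
Y_G = Y_F = 0.4                 # assumed biomass yields (g/g), not from the abstract

def rhs(t, y):
    suc, glc, frc, X = np.clip(y, 0.0, None)
    hydrolysis = Vm * X * suc / (Km + suc)        # Michaelis-Menten, whole-cell invertase
    mu_g = muG * glc / (KG + glc)                 # Monod growth on glucose
    mu_f = muF * frc / (KF + frc)                 # Monod growth on fructose
    dsuc = -hydrolysis
    dglc = 0.53 * hydrolysis - mu_g * X / Y_G     # 1 g sucrose -> ~0.53 g glucose + 0.53 g fructose
    dfrc = 0.53 * hydrolysis - mu_f * X / Y_F
    dX = (mu_g + mu_f) * X
    return [dsuc, dglc, dfrc, dX]

sol = solve_ivp(rhs, (0.0, 120.0), [10.0, 0.0, 0.0, 0.1], max_step=0.5)
print(sol.y[:, -1])   # final sucrose, glucose, fructose and biomass concentrations
```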
Geomorphic Flood Area (GFA): a DEM-based tool for flood susceptibility mapping at large scales
NASA Astrophysics Data System (ADS)
Manfreda, S.; Samela, C.; Albano, R.; Sole, A.
2017-12-01
Flood hazard and risk mapping over large areas is a critical issue. Recently, many researchers have been working toward global-scale mapping, encountering several difficulties, above all the lack of data and the implementation costs. In data-scarce environments, a preliminary and cost-effective floodplain delineation can be performed using geomorphic methods (e.g., Manfreda et al., 2014). We carried out several years of research on this topic, proposing a morphologic descriptor named the Geomorphic Flood Index (GFI) (Samela et al., 2017) and developing a Digital Elevation Model (DEM)-based procedure able to identify flood-susceptible areas. The procedure exhibited high accuracy in several test sites in Europe, the United States and Africa (Manfreda et al., 2015; Samela et al., 2016, 2017) and has recently been implemented in a QGIS plugin named the Geomorphic Flood Area (GFA) tool. The tool automatically computes the GFI and turns it into a linear binary classifier capable of detecting flood-prone areas. To train this classifier, an inundation map derived using hydraulic models for a small portion of the basin is required (the minimum is 2% of the river basin's area). In this way, the GFA tool allows the classification of flood-prone areas to be extended across the entire basin. We are also defining a simplified procedure for the estimation of the river depth, which may be helpful for large-scale analyses to approximately evaluate the expected flood damages in the surrounding areas. References: Manfreda, S., Nardi, F., Samela, C., Grimaldi, S., Taramasso, A. C., Roth, G., & Sole, A. (2014). Investigation on the use of geomorphic approaches for the delineation of flood prone areas. J. Hydrol., 517, 863-876. Manfreda, S., Samela, C., Gioia, A., Consoli, G., Iacobellis, V., Giuzio, L., & Sole, A. (2016). Flood-prone areas assessment using linear binary classifiers based on flood maps obtained from 1D and 2D hydraulic models. Nat. Hazards, 79(2), 735-754. Samela, C., Manfreda, S., Paola, F. D., Giugni, M., Sole, A., & Fiorentino, M. (2016). DEM-based approaches for the delineation of flood-prone areas in an ungauged basin in Africa. J. Hydrol. Eng., 06015010. Samela, C., Troy, T. J., & Manfreda, S. (2017). Geomorphic classifiers for flood-prone areas delineation for data-scarce environments. Adv. Water Resour., 102, 13-28.
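A sketch of the geomorphic flood index underlying the tool, as described in Samela et al. (2017): compare a scaling-law estimate of the water depth in the nearest stream element, h_r ≈ a·A_r^n with A_r its contributing area, against the elevation difference H between the point and that element, and take GFI = ln(h_r/H). The coefficient and exponent below are placeholders to be calibrated:

```python
import numpy as np

def geomorphic_flood_index(contrib_area_stream, elev_point, elev_stream,
                           a=0.01, n=0.3):
    """GFI = ln(h_r / H).
    h_r: hydraulic-scaling estimate of water depth in the nearest stream element
         (h_r = a * A_r**n, A_r = upstream contributing area of that element, m2);
    H:   elevation difference between the point and the stream element (m).
    a and n are placeholder calibration constants. Larger (less negative) GFI
    means more flood-prone; thresholding GFI gives the linear binary classifier."""
    h_r = a * contrib_area_stream ** n
    H = np.maximum(elev_point - elev_stream, 1e-3)
    return np.log(h_r / H)

# Illustrative call: 5 km2 contributing area, point 2 m above the nearest stream cell
print(geomorphic_flood_index(contrib_area_stream=5e6, elev_point=102.0, elev_stream=100.0))
```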
Global solar wind variations over the last four centuries.
Owens, M J; Lockwood, M; Riley, P
2017-01-31
The most recent "grand minimum" of solar activity, the Maunder minimum (MM, 1650-1710), is of great interest both for understanding the solar dynamo and providing insight into possible future heliospheric conditions. Here, we use nearly 30 years of output from a data-constrained magnetohydrodynamic model of the solar corona to calibrate heliospheric reconstructions based solely on sunspot observations. Using these empirical relations, we produce the first quantitative estimate of global solar wind variations over the last 400 years. Relative to the modern era, the MM shows a factor 2 reduction in near-Earth heliospheric magnetic field strength and solar wind speed, and up to a factor 4 increase in solar wind Mach number. Thus solar wind energy input into the Earth's magnetosphere was reduced, resulting in a more Jupiter-like system, in agreement with the dearth of auroral reports from the time. The global heliosphere was both smaller and more symmetric under MM conditions, which has implications for the interpretation of cosmogenic radionuclide data and resulting total solar irradiance estimates during grand minima.
O'Gorman, Thomas W
2018-05-01
In the last decade, it has been shown that an adaptive testing method could be used, along with the Robbins-Monro search procedure, to obtain confidence intervals that are often narrower than traditional confidence intervals. However, these confidence interval limits require a great deal of computation and some familiarity with stochastic search methods. We propose a method for estimating the limits of confidence intervals that uses only a few tests of significance. We compare these limits to those obtained by a lengthy Robbins-Monro stochastic search and find that the proposed method is nearly as accurate as the Robbins-Monro search. Adaptive confidence intervals that are produced by the proposed method are often narrower than traditional confidence intervals when the distributions are long-tailed, skewed, or bimodal. Moreover, the proposed method of estimating confidence interval limits is easy to understand, because it is based solely on the p-values from a few tests of significance.
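A sketch of the underlying idea of obtaining confidence limits by inverting tests: search for the hypothesized shift at which the two-sided p-value equals alpha. A one-sample t-test and plain bisection stand in here for the adaptive test and the more economical search used in the paper:

```python
import numpy as np
from scipy import stats

def ci_by_test_inversion(x, alpha=0.05, n_bisect=30):
    """Confidence limits found by inverting a test: each limit is the shift
    delta at which the two-sided p-value of H0: mu = delta equals alpha."""
    def pval(delta):
        return stats.ttest_1samp(x, popmean=delta).pvalue

    xbar, half = np.mean(x), 10.0 * np.std(x, ddof=1)
    limits = []
    for sign in (-1.0, 1.0):
        lo, hi = xbar, xbar + sign * half       # p(lo) ~ 1, p(hi) ~ 0
        for _ in range(n_bisect):
            mid = 0.5 * (lo + hi)
            if pval(mid) > alpha:
                lo = mid
            else:
                hi = mid
        limits.append(0.5 * (lo + hi))
    return tuple(sorted(limits))

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=30)   # skewed data
print(ci_by_test_inversion(sample))
```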
Evidence for S(IV) compounds other than dissolved SO2 in precipitation
NASA Astrophysics Data System (ADS)
Chapman, E. G.
1986-12-01
Preliminary results from a study characterizing S(IV) compounds in wintertime precipitation samples indicate that bisulfite ion is not the primary form of S(IV), as previously believed. By employing a differencing technique that permits estimation of both dissolved SO2 (SO2(aq)) and non-SO2(aq) compound concentrations, it was found that, on average, more than 60 percent of the total S(IV) is present in a form other than dissolved SO2. Formaldehyde analyses on selected samples suggest that the most likely form of the S(IV) is hydroxymethanesulfonate, although other aldehyde-S(IV) adducts may also be present. The non-SO2 compounds represented a significant portion of the total sulfur concentrations present in the samples analyzed, with contributions ranging from 1.2 to 27 percent. Because of the stability and oxidation resistance of these S(IV) compounds, sulfur deposition estimates that are based solely on sulfate measurements are undoubtedly low, especially for wintertime events. The study underscores the importance of S(IV) compounds in atmospheric scavenging processes.
How toxic is coal ash? A laboratory toxicity case study
Sherrard, Rick M.; Carriker, Neil; Greeley, Jr., Mark Stephen
2014-12-08
Under a consent agreement among the Environmental Protection Agency (EPA) and proponents both for and against stricter regulation, EPA is to issue a new coal ash disposal rule by the end of 2014. Laboratory toxicity investigations often yield conservative estimates of toxicity because many standard test species are more sensitive than resident species, and thus could provide information useful to the rule-making. However, few laboratory studies of coal ash toxicity are available; most studies reported in the literature are based solely on field investigations. In this paper, we describe a broad range of toxicity studies conducted for the Tennessee Valley Authority (TVA) Kingston ash spill, results of which help provide additional perspective on the toxicity of coal ash.
Shock Formation and Energy Dissipation of Slow Magnetosonic Waves in Coronal Plumes
NASA Technical Reports Server (NTRS)
Cuntz, M.; Suess, S. T.
2003-01-01
We study the shock formation and energy dissipation of slow magnetosonic waves in coronal plumes. The wave parameters and the spreading function of the plumes, as well as the base magnetic field strength, are given by empirical constraints mostly from SOHO/UVCS. Our models show that shock formation occurs at low coronal heights, i.e., within 1.3 Rsun, depending on the model parameters. In addition, following analytical estimates, we show that the scale height of energy dissipation by the shocks ranges between 0.15 and 0.45 Rsun. This implies that shock heating by slow magnetosonic waves is relevant at most heights, even though this type of wave is apparently not the sole operating energy supply mechanism.
Galactic and solar radiation exposure to aircrew during a solar cycle.
Lewis, B J; Bennett, L G I; Green, A R; McCall, M J; Ellaschuk, B; Butler, A; Pierre, M
2002-01-01
An on-going investigation using a tissue-equivalent proportional counter (TEPC) has been carried out to measure the ambient dose equivalent rate of the cosmic radiation exposure of aircrew during a solar cycle. A semi-empirical model has been derived from these data to allow for the interpolation of the dose rate for any global position. The model has been extended to an altitude of up to 32 km with further measurements made on board aircraft and several balloon flights. The effects of changing solar modulation during the solar cycle are characterised by correlating the dose rate data to different solar potential models. Through integration of the dose-rate function over a great circle flight path or between given waypoints, a Predictive Code for Aircrew Radiation Exposure (PCAIRE) has been further developed for estimation of the route dose from galactic cosmic radiation exposure. This estimate is provided in units of ambient dose equivalent as well as effective dose, based on E/H*(10) scaling functions as determined from transport code calculations with LUIN and FLUKA. This experimentally based treatment has also been compared with the CARI-6 and EPCARD codes that are derived solely from theoretical transport calculations. Using TEPC measurements taken aboard the International Space Station, ground based neutron monitoring, GOES satellite data and transport code analysis, an empirical model has been further proposed for estimation of aircrew exposure during solar particle events. This model has been compared to results obtained during recent solar flare events.
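A sketch of the route-dose integration step only (not PCAIRE itself): sample points along the great circle between waypoints and average an assumed dose-rate function over the flight time; dose_rate_uSv_per_h below is a crude placeholder for the semi-empirical model:

```python
import math

def great_circle_points(lat1, lon1, lat2, lon2, n):
    """n+1 points along the great circle between two waypoints (degrees)."""
    p1 = [math.radians(v) for v in (lat1, lon1)]
    p2 = [math.radians(v) for v in (lat2, lon2)]
    d = math.acos(math.sin(p1[0]) * math.sin(p2[0]) +
                  math.cos(p1[0]) * math.cos(p2[0]) * math.cos(p2[1] - p1[1]))
    pts = []
    for i in range(n + 1):
        f = i / n
        a = math.sin((1 - f) * d) / math.sin(d)
        b = math.sin(f * d) / math.sin(d)
        x = a * math.cos(p1[0]) * math.cos(p1[1]) + b * math.cos(p2[0]) * math.cos(p2[1])
        y = a * math.cos(p1[0]) * math.sin(p1[1]) + b * math.cos(p2[0]) * math.sin(p2[1])
        z = a * math.sin(p1[0]) + b * math.sin(p2[0])
        pts.append((math.degrees(math.atan2(z, math.hypot(x, y))),
                    math.degrees(math.atan2(y, x))))
    return pts

def dose_rate_uSv_per_h(lat, lon, alt_km):
    """Placeholder dose-rate model: increases with altitude and latitude."""
    return 0.3 * alt_km / 11.0 * (1.0 + 10.0 * abs(math.sin(math.radians(lat))))

def route_dose(lat1, lon1, lat2, lon2, alt_km, flight_hours, n=100):
    pts = great_circle_points(lat1, lon1, lat2, lon2, n)
    rates = [dose_rate_uSv_per_h(lat, lon, alt_km) for lat, lon in pts]
    return sum(rates) / len(rates) * flight_hours   # uSv

print(round(route_dose(43.7, -79.6, 51.5, -0.5, alt_km=11.0, flight_hours=7.0), 1), "uSv")
```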
Physiome-model-based state-space framework for cardiac deformation recovery.
Wong, Ken C L; Zhang, Heye; Liu, Huafeng; Shi, Pengcheng
2007-11-01
To more reliably recover cardiac information from noise-corrupted, patient-specific measurements, it is essential to employ meaningful constraining models and adopt appropriate optimization criteria to couple the models with the measurements. Although biomechanical models have been extensively used for myocardial motion recovery with encouraging results, the passive nature of such constraints limits their ability to fully account for the deformation caused by active forces of the myocytes. To overcome such limitations, we propose to adopt a cardiac physiome model as the prior constraint for cardiac motion analysis. The cardiac physiome model comprises an electric wave propagation model, an electromechanical coupling model, and a biomechanical model, which are connected through a cardiac system dynamics for a more complete description of the macroscopic cardiac physiology. Embedded within a multiframe state-space framework, the uncertainties of the model and the patient's measurements are systematically dealt with to arrive at optimal cardiac kinematic estimates and possibly beyond. Experiments have been conducted to compare our proposed cardiac-physiome-model-based framework with the solely biomechanical-model-based framework. The results show that our proposed framework recovers more accurate cardiac deformation from synthetic data and obtains more sensible estimates from real magnetic resonance image sequences. With the active components introduced by the cardiac physiome model, cardiac deformations recovered from patient's medical images are more physiologically plausible.
A Hybrid Probabilistic Model for Unified Collaborative and Content-Based Image Tagging.
Zhou, Ning; Cheung, William K; Qiu, Guoping; Xue, Xiangyang
2011-07-01
The increasing availability of large quantities of user contributed images with labels has provided opportunities to develop automatic tools to tag images to facilitate image search and retrieval. In this paper, we present a novel hybrid probabilistic model (HPM) which integrates low-level image features and high-level user provided tags to automatically tag images. For images without any tags, HPM predicts new tags based solely on the low-level image features. For images with user provided tags, HPM jointly exploits both the image features and the tags in a unified probabilistic framework to recommend additional tags to label the images. The HPM framework makes use of the tag-image association matrix (TIAM). However, since the number of images is usually very large and user-provided tags are diverse, TIAM is very sparse, thus making it difficult to reliably estimate tag-to-tag co-occurrence probabilities. We developed a collaborative filtering method based on nonnegative matrix factorization (NMF) for tackling this data sparsity issue. Also, an L1 norm kernel method is used to estimate the correlations between image features and semantic concepts. The effectiveness of the proposed approach has been evaluated using three databases containing 5,000 images with 371 tags, 31,695 images with 5,587 tags, and 269,648 images with 5,018 tags, respectively.
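A sketch of the collaborative-filtering component using a generic scikit-learn NMF on a toy tag-image association matrix; this illustrates the low-rank smoothing idea, not the authors' exact formulation:

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy tag-image association matrix (rows = images, cols = tags), very sparse
rng = np.random.default_rng(0)
TIAM = (rng.random((50, 20)) < 0.1).astype(float)

# Low-rank factorization smooths over sparsity: TIAM ~ W @ H
model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(TIAM)
H = model.components_
scores = W @ H                       # dense tag-relevance scores

image_id = 3
already = set(np.flatnonzero(TIAM[image_id]))
ranked = [t for t in np.argsort(-scores[image_id]) if t not in already]
print("recommended tags for image", image_id, ":", ranked[:5])
```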
Mid-term fire danger index based on satellite imagery and ancillary geographic data
NASA Astrophysics Data System (ADS)
Stefanidou, A.; Dragozi, E.; Tompoulidou, M.; Stepanidou, L.; Grigoriadis, D.; Katagis, T.; Stavrakoudis, D.; Gitas, I.
2017-09-01
Fire danger forecast constitutes one of the most important components of integrated fire management since it provides crucial information for efficient pre-fire planning, alertness and timely response to a possible fire event. The aim of this work is to develop an index capable of accurately predicting fire danger on a mid-term basis. The methodology that is currently under development is based on an innovative approach that employs dry fuel spatial connectivity as well as biophysical and topological variables for the reliable prediction of fire danger. More specifically, the estimation of the dry fuel connectivity is based on a previously proposed automated procedure, implemented in R software, that uses Moderate Resolution Imaging Spectroradiometer (MODIS) time series data. Dry fuel connectivity estimates are then combined with other ancillary data, such as fuel type and proximity to roads, to generate the proposed mid-term fire danger index. The innovation of the proposed index, which will be evaluated by comparison to historical fire data, lies in the fact that its calculation depends almost solely on the availability of satellite data. Finally, it should be noted that the index is developed within the framework of the National Observatory of Forest Fires (NOFFi) project.
Decker, Johannes H; Otto, A Ross; Daw, Nathaniel D; Hartley, Catherine A
2016-06-01
Theoretical models distinguish two decision-making strategies that have been formalized in reinforcement-learning theory. A model-based strategy leverages a cognitive model of potential actions and their consequences to make goal-directed choices, whereas a model-free strategy evaluates actions based solely on their reward history. Research in adults has begun to elucidate the psychological mechanisms and neural substrates underlying these learning processes and factors that influence their relative recruitment. However, the developmental trajectory of these evaluative strategies has not been well characterized. In this study, children, adolescents, and adults performed a sequential reinforcement-learning task that enabled estimation of model-based and model-free contributions to choice. Whereas a model-free strategy was apparent in choice behavior across all age groups, a model-based strategy was absent in children, became evident in adolescents, and strengthened in adults. These results suggest that recruitment of model-based valuation systems represents a critical cognitive component underlying the gradual maturation of goal-directed behavior. © The Author(s) 2016.
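A sketch of the standard way model-based and model-free action values are combined in analyses of such tasks: a single weight w mixes the two value estimates before a softmax choice, and a larger fitted w indicates stronger model-based control. The transition matrix, values and weights below are illustrative:

```python
import numpy as np

def hybrid_choice_probs(q_mf, q_mb, w, beta):
    """Mixture of model-based and model-free first-stage action values,
    passed through a softmax. w in [0, 1]: 0 = purely model-free."""
    q = w * q_mb + (1.0 - w) * q_mf
    e = np.exp(beta * (q - q.max()))
    return e / e.sum()

# Model-free values from reward history; model-based values computed from a
# learned transition model T[a, s'] and second-stage state values v2[s'].
q_mf = np.array([0.55, 0.40])
T = np.array([[0.7, 0.3],       # action 0 -> mostly state A
              [0.3, 0.7]])      # action 1 -> mostly state B
v2 = np.array([0.2, 0.8])
q_mb = T @ v2                   # expected second-stage value under the model

for w in (0.0, 0.5, 1.0):       # illustrative weights, from model-free to model-based
    print(w, np.round(hybrid_choice_probs(q_mf, q_mb, w, beta=5.0), 2))
```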
Ghosh, Jo Kay C.; Wilhelm, Michelle; Su, Jason; Goldberg, Daniel; Cockburn, Myles; Jerrett, Michael; Ritz, Beate
2012-01-01
Few studies have examined associations of birth outcomes with toxic air pollutants (air toxics) in traffic exhaust. This study included 8,181 term low birth weight (LBW) children and 370,922 term normal-weight children born between January 1, 1995, and December 31, 2006, to women residing within 5 miles (8 km) of an air toxics monitoring station in Los Angeles County, California. Additionally, land-use-based regression (LUR)-modeled estimates of levels of nitric oxide, nitrogen dioxide, and nitrogen oxides were used to assess the influence of small-area variations in traffic pollution. The authors examined associations with term LBW (≥37 weeks’ completed gestation and birth weight <2,500 g) using logistic regression adjusted for maternal age, race/ethnicity, education, parity, infant gestational age, and gestational age squared. Odds of term LBW increased 2%–5% (95% confidence intervals ranged from 1.00 to 1.09) per interquartile-range increase in LUR-modeled estimates and monitoring-based air toxics exposure estimates in the entire pregnancy, the third trimester, and the last month of pregnancy. Models stratified by monitoring station (to investigate air toxics associations based solely on temporal variations) resulted in 2%–5% increased odds per interquartile-range increase in third-trimester benzene, toluene, ethyl benzene, and xylene exposures, with some confidence intervals containing the null value. This analysis highlights the importance of both spatial and temporal contributions to air pollution in epidemiologic birth outcome studies. PMID:22586068
Method for a detailed measurement of image intensity nonuniformity in magnetic resonance imaging.
Wang, Deming; Doddrell, David M
2005-04-01
In magnetic resonance imaging (MRI), the MR signal intensity can vary spatially and this spatial variation is usually referred to as MR intensity nonuniformity. Although the main source of intensity nonuniformity arises from B1 inhomogeneity of the coil acting as a receiver and/or transmitter, geometric distortion also alters the MR signal intensity. It is useful on some occasions to have these two different sources measured and analyzed separately. In this paper, we present a practical method for a detailed measurement of the MR intensity nonuniformity. This method is based on the same three-dimensional geometric phantom that was recently developed for a complete measurement of the geometric distortion in MR systems. With this method, the contribution to the intensity nonuniformity from the geometric distortion can be estimated, thus providing a mechanism for estimating the intensity nonuniformity that reflects solely the spatial characteristics arising from B1. Additionally, a comprehensive scheme for characterization of the intensity nonuniformity based on the new measurement method is proposed. To demonstrate the method, the intensity nonuniformity in a 1.5 T Sonata MR system was measured and is used to illustrate the main features of the method.
NASA Astrophysics Data System (ADS)
Sujatha, N.; Anand, B. S. Suresh; Nivetha, K. Bala; Narayanamurthy, V. B.; Seshadri, V.; Poddar, R.
2015-07-01
Light-based diagnostic techniques provide a minimally invasive way for selective biomarker estimation when tissues transform from a normal to a malignant state. Spectroscopic techniques based on diffuse reflectance characterize the changes in tissue hemoglobin/oxygenation levels during the tissue transformation process. Recent clinical investigations have shown that changes in tissue oxygenation and microcirculation are observed in diabetic subjects in the initial and progressive stages. In this pilot study, we discuss the potential of diffuse reflectance spectroscopy (DRS) in the visible (Vis) range to differentiate the skin microcirculatory hemoglobin levels between normal and advanced diabetic subjects with and without neuropathy. Average concentration of hemoglobin as well as hemoglobin oxygen saturation within the probed tissue volume is estimated for a total of four different sites in the foot sole. The results indicate a statistically significant decrease in average total hemoglobin and increase in hemoglobin oxygen saturation levels for diabetic foot compared with a normal foot. The present study demonstrates the ability of reflectance spectroscopy in the Vis range to determine and differentiate the changes in tissue hemoglobin and hemoglobin oxygen saturation levels in normal and diabetic subjects.
NASA Astrophysics Data System (ADS)
Epstein, R.; Rosenberg, M. J.; Solodov, A. A.; Myatt, J. F.; Regan, S. P.; Seka, W.; Hohenberger, M.; Barrios, M. A.; Moody, J. D.
2015-11-01
The Mn/Co isoelectronic emission-line ratio from a microdot source in planar CH foil targets was measured to infer the electron temperature (Te) in the ablating plasma during two-plasmon-decay experiments at the National Ignition Facility (NIF). We examine the systematic uncertainty in the Te estimate based on the temperature and density sensitivities of the line ratio in conjunction with plausible density constraints, and its contribution to the total Te estimate uncertainty. The potential advantages of alternative microdot elements (e.g., Ti/Cr and Sc/V) are considered. The microdot mass was selected to provide ample line strength while minimizing the effect of self-absorption on the line emission, which is of particular concern, given the narrow linewidths of mid-Z emitters at subcritical electron densities. Atomic line-formation theory and detailed atomic-radiative simulations show that the straightforward interpretation of the isoelectronic ratio solely in terms of its temperature independence remains valid with lines of moderate optical thickness (up to ~ 10) at line center. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
NASA Astrophysics Data System (ADS)
Colomo-Palacios, Ricardo; Jiménez-López, Diego; García-Crespo, Ángel; Blanco-Iglesias, Borja
eLearning educative processes are a challenge for educative institutions and education professionals. In an environment in which learning resources are being produced, catalogued and stored using innovative ways, SOLE provides a platform in which exam questions can be produced supported by Web 2.0 tools, catalogued and labeled via semantic web and stored and distributed using eLearning standards. This paper presents SOLE, a social network for sharing exam questions particularized for the software engineering domain, based on semantics and built using semantic web and eLearning standards, such as the IMS Question and Test Interoperability specification 2.1.
Ehrlich, Yael; Regev, Lior; Kerem, Zohar; Boaretto, Elisabetta
2017-01-01
The age of living massive olive trees is often assumed to be between hundreds and even thousands of years. These estimations are usually based on the girth of the trunk and an extrapolation based on a theoretical annual growth rate. It is difficult to objectively verify these claims, as a monumental tree may not be cut down for analysis of its cross-section. In addition, the inner and oldest part of the trunk in olive trees usually rots, precluding the possibility of carrying out radiocarbon analysis of material from the first years of life of the tree. In this work we present a cross-section of an olive tree, previously estimated to be hundreds of years old, which was cut down post-mortem in 2013. The cross-section was radiocarbon dated at numerous points following the natural growth pattern, which was made possible to observe by viewing the entire cross-section. Annual growth rate values were calculated and compared between different radii. The cross-section also revealed a nearly independent segment of growth, which would clearly offset any estimations based solely on girth calculations. Multiple piths were identified, indicating the beginning of branching within the trunk. Different radii were found to have comparable growth rates, resulting in similar estimates dating the piths to the 19th century. The estimated ages of the piths represent a terminus ante quem for the age of the tree, as these are piths of separate branches. However, the tree is likely not many years older than the dated piths, and certainly not centuries older. The oldest radiocarbon-datable material in this cross-section was less than 200 years old, which is in agreement with most other radiocarbon dates of internal wood from living olive trees, rarely older than 300 years.
Sado, Tetsuya; Hahn, Christoph; Byrkjedal, Ingvar; Miya, Masaki
2016-01-01
The family Opisthoproctidae (barreleyes) constitutes one of the most peculiar looking and unknown deep-sea fish groups in terms of taxonomy and specialized adaptations. All the species in the family are united by the possession of tubular eyes, with one distinct lineage exhibiting also drastic shortening of the body. Two new species of the mesopelagic opisthoproctid mirrorbelly genus Monacoa are described based on pigmentation patterns of the “sole”—a unique vertebrate structure used in the reflection and control of bioluminescence in most short-bodied forms. Different pigmentation patterns of the soles, previously noted as intraspecific variations based on preserved specimens, are here shown to be species-specific and likely used for communication in addition to counter-illumination of down-welling sunlight. The genus Monacoa is resurrected from Opisthoproctus based on extensive morphological synapomorphies pertaining to the anal fin and snout. Doubling the species diversity within sole-bearing opisthoproctids, including recognition of two genera, is unambiguously supported by mitogenomic DNA sequence data. Regular fixation with formalin and alcohol preservation is shown to be problematic concerning the retention of species-specific pigmentation patterns. Examination or photos of fresh material before formalin fixation is shown to be paramount for correct species recognition of sole-bearing opisthoproctids—a relatively unknown issue concerning species diversity in the deep-sea pelagic realm. PMID:27508419
Optimizing footwear for older people at risk of falls.
Menant, Jasmine C; Steele, Julie R; Menz, Hylton B; Munro, Bridget J; Lord, Stephen R
2008-01-01
Footwear influences balance and the subsequent risk of slips, trips, and falls by altering somatosensory feedback to the foot and ankle and modifying frictional conditions at the shoe/floor interface. Walking indoors barefoot or in socks and walking indoors or outdoors in high-heel shoes have been shown to increase the risk of falls in older people. Other footwear characteristics such as heel collar height, sole hardness, and tread and heel geometry also influence measures of balance and gait. Because many older people wear suboptimal shoes, maximizing safe shoe use may offer an effective fall prevention strategy. Based on findings of a systematic literature review, older people should wear shoes with low heels and firm slip-resistant soles both inside and outside the home. Future research should investigate the potential benefits of tread sole shoes for preventing slips and whether shoes with high collars or flared soles can enhance balance when challenging tasks are undertaken.
Liegl, Gregor; Wahl, Inka; Berghöfer, Anne; Nolte, Sandra; Pieh, Christoph; Rose, Matthias; Fischer, Felix
2016-03-01
To investigate the validity of a common depression metric in independent samples. We applied a common metrics approach based on item-response theory for measuring depression to four German-speaking samples that completed the Patient Health Questionnaire (PHQ-9). We compared the PHQ item parameters reported for this common metric to reestimated item parameters derived from fitting a generalized partial credit model solely to the PHQ-9 items. We calibrated the new model on the same scale as the common metric using two approaches (estimation with shifted prior and Stocking-Lord linking). By fitting a mixed-effects model and using Bland-Altman plots, we investigated the agreement between latent depression scores resulting from the different estimation models. We found different item parameters across samples and estimation methods. Although differences in latent depression scores between different estimation methods were statistically significant, these were clinically irrelevant. Our findings provide evidence that it is possible to estimate latent depression scores by using the item parameters from a common metric instead of reestimating and linking a model. The use of common metric parameters is simple, for example, using a Web application (http://www.common-metrics.org) and offers a long-term perspective to improve the comparability of patient-reported outcome measures. Copyright © 2016 Elsevier Inc. All rights reserved.
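The core idea above, scoring respondents with fixed item parameters from a common metric rather than refitting the model, can be sketched as follows. The generalized partial credit probabilities and the EAP quadrature are standard; the discrimination and step-difficulty values below are placeholders, not the published PHQ-9 common-metric parameters, and this is not the algorithm of the cited web application.

```python
# Hedged sketch: EAP scoring under a generalized partial credit model with
# *fixed* item parameters (the "use the common metric" idea).
import numpy as np

def gpcm_probs(theta, a, b):
    """Category probabilities for one item at ability theta.
    a: discrimination; b: array of step difficulties (len = n_categories - 1)."""
    z = np.concatenate(([0.0], np.cumsum(a * (theta - b))))  # cumulative logits
    ez = np.exp(z - z.max())
    return ez / ez.sum()

def eap_score(responses, a_params, b_params, nodes=61):
    """Expected a posteriori theta given responses and fixed item parameters."""
    thetas = np.linspace(-4, 4, nodes)
    prior = np.exp(-0.5 * thetas**2)              # standard normal prior
    like = np.ones_like(thetas)
    for x, a, b in zip(responses, a_params, b_params):
        like *= np.array([gpcm_probs(t, a, b)[x] for t in thetas])
    post = like * prior
    return np.sum(thetas * post) / np.sum(post)

# three hypothetical 4-category items (responses coded 0-3)
a_params = [1.2, 0.9, 1.5]
b_params = [np.array([-1.0, 0.0, 1.0]),
            np.array([-0.5, 0.5, 1.5]),
            np.array([-1.5, -0.5, 0.5])]
print(round(eap_score([2, 1, 3], a_params, b_params), 3))
```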
Diallo, Aboubacar; Zhao, Yu-Long; Wang, He; Li, Sha-Sha; Ren, Chuan-Qing; Liu, Qun
2012-11-16
An efficient synthesis of substituted benzenes via a base-catalyzed [3 + 3] aerobic oxidative aromatization of α,β-unsaturated carbonyl compounds with dimethyl glutaconate was reported. All the reactions were carried out under mild, metal-free conditions to afford the products in high to excellent yields with molecular oxygen as the sole oxidant and water as the sole byproduct. Furthermore, a more convenient tandem [3 + 2 + 1] aerobic oxidative aromatization reaction was developed through the in situ generation of the α,β-unsaturated carbonyl compounds from aldehydes and ketones.
Matching Real and Synthetic Panoramic Images Using a Variant of Geometric Hashing
NASA Astrophysics Data System (ADS)
Li-Chee-Ming, J.; Armenakis, C.
2017-05-01
This work demonstrates an approach to automatically initialize a visual model-based tracker, and recover from lost tracking, without prior camera pose information. These approaches are commonly referred to as tracking-by-detection. Previous tracking-by-detection techniques used either fiducials (i.e. landmarks or markers) or the object's texture. The main contribution of this work is the development of a tracking-by-detection algorithm that is based solely on natural geometric features. A variant of geometric hashing, a model-to-image registration algorithm, is proposed that searches for a matching panoramic image from a database of synthetic panoramic images captured in a 3D virtual environment. The approach identifies corresponding features between the matched panoramic images. The corresponding features are to be used in a photogrammetric space resection to estimate the camera pose. The experiments apply this algorithm to initialize a model-based tracker in an indoor environment using the 3D CAD model of the building.
How economics can further the success of ecological restoration.
Iftekhar, Md Sayed; Polyakov, Maksym; Ansell, Dean; Gibson, Fiona; Kay, Geoffrey M
2017-04-01
Restoration scientists and practitioners have recently begun to include economic and social aspects in the design and investment decisions for restoration projects. With few exceptions, ecological restoration studies that include economics focus solely on evaluating costs of restoration projects. However, economic principles, tools, and instruments can be applied to a range of other factors that affect project success. We considered the relevance of applying economics to address 4 key challenges of ecological restoration: assessing social and economic benefits, estimating overall costs, project prioritization and selection, and long-term financing of restoration programs. We found it is uncommon to consider all types of benefits (such as nonmarket values) and costs (such as transaction costs) in restoration programs. Total benefit of a restoration project can be estimated using market prices and various nonmarket valuation techniques. Total cost of a project can be estimated using methods based on property or land-sale prices, such as hedonic pricing method and organizational surveys. Securing continuous (or long-term) funding is also vital to accomplishing restoration goals and can be achieved by establishing synergy with existing programs, public-private partnerships, and financing through taxation. © 2016 Society for Conservation Biology.
Revisiting the Table 2 fallacy: A motivating example examining preeclampsia and preterm birth.
Bandoli, Gretchen; Palmsten, Kristin; Chambers, Christina D; Jelliffe-Pawlowski, Laura L; Baer, Rebecca J; Thompson, Caroline A
2018-05-21
A "Table Fallacy," as coined by Westreich and Greenland, reports multiple adjusted effect estimates from a single model. This practice, which remains common in published literature, can be problematic when different types of effect estimates are presented together in a single table. The purpose of this paper is to quantitatively illustrate this potential for misinterpretation with an example estimating the effects of preeclampsia on preterm birth. We analysed a retrospective population-based cohort of 2 963 888 singleton births in California between 2007 and 2012. We performed a modified Poisson regression to calculate the total effect of preeclampsia on the risk of PTB, adjusting for previous preterm birth. pregnancy alcohol abuse, maternal education, and maternal socio-demographic factors (Model 1). In subsequent models, we report the total effects of previous preterm birth, alcohol abuse, and education on the risk of PTB, comparing and contrasting the controlled direct effects, total effects, and confounded effect estimates, resulting from Model 1. The effect estimate for previous preterm birth (a controlled direct effect in Model 1) increased 10% when estimated as a total effect. The risk ratio for alcohol abuse, biased due to an uncontrolled confounder in Model 1, was reduced by 23% when adjusted for drug abuse. The risk ratio for maternal education, solely a predictor of the outcome, was essentially unchanged. Reporting multiple effect estimates from a single model may lead to misinterpretation and lack of reproducibility. This example highlights the need for careful consideration of the types of effects estimated in statistical models. © 2018 John Wiley & Sons Ltd.
Discriminating bot accounts based solely on temporal features of microblog behavior
NASA Astrophysics Data System (ADS)
Pan, Junshan; Liu, Ying; Liu, Xiang; Hu, Hanping
2016-05-01
As the largest microblog service in China, Sina Weibo has attracted numerous automated applications (known as bots) due to its popularity and open architecture. We classify the active users from Sina Weibo into human, bot-based and hybrid groups based solely on the study of temporal features of their posting behavior. The anomalous burstiness parameter and time-interval entropy value are exploited to characterize automation. We also reveal different behavior patterns among the three types of users regarding their reposting ratio, daily rhythm and active days. Our findings may help Sina Weibo manage a better community and should be considered for dynamic models of microblog behaviors.
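The two temporal features mentioned above can be computed directly from posting timestamps. In the hedged sketch below, the burstiness parameter follows the familiar (sigma - mu)/(sigma + mu) form of the inter-event-time distribution, which we assume is the quantity intended, and the entropy binning is an illustrative choice rather than the paper's exact procedure.

```python
# Hedged sketch: burstiness and inter-event-time entropy from timestamps (seconds).
import numpy as np

def temporal_features(timestamps, n_bins=20):
    t = np.sort(np.asarray(timestamps, dtype=float))
    iet = np.diff(t)                          # inter-event times
    mu, sigma = iet.mean(), iet.std()
    burstiness = (sigma - mu) / (sigma + mu)  # -1 regular ... +1 bursty
    counts, _ = np.histogram(iet, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    entropy = -np.sum(p * np.log2(p))         # low entropy suggests automation
    return burstiness, entropy

# a very regular (bot-like) poster vs. a noisier (human-like) one
bot = np.arange(0, 3600 * 24, 1800)                        # every 30 minutes
human = np.cumsum(np.random.default_rng(2).exponential(1800, size=48))
print(temporal_features(bot), temporal_features(human))
```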
Mayhew, C; Quinlan, M
1999-01-01
Outsourcing has become increasingly widespread throughout industrialized societies over the past 20 years. Accompanying this has been a renewed growth in home-based work, sometimes using new technologies (telework) but also entailing a re-emergence of old forms, such as clothing outwork, used extensively 100 years ago. A growing body of research indicates that changes to work organization associated with outsourcing adversely affect occupational health and safety (OHS), both for outsourced workers and for those working alongside them. This study assessed the OHS implications of the shift to home-based workers in the Australian clothing industry by systematically comparing the OHS experiences of 100 factory-based workers and 100 outworkers. The level of self-reported injury was over three times higher among outworkers than factory-based workers undertaking similar tasks. The most significant factor explaining this difference was the payment system. All outworkers were paid solely by the piece, whereas factory workers were paid either under a time plus production bonus system or solely on a time basis. While the incidence of injury was far higher among outworkers, factory-based workers paid under an incentive system reported more injuries than those paid solely on a time basis. Increasing injury was correlated with piecework payment systems.
Gorman, Emma; Leyland, Alastair H.; McCartney, Gerry; Katikireddi, Srinivasa Vittal; Rutherford, Lisa; Graham, Lesley; Robinson, Mark
2017-01-01
Background and aims: Analytical approaches to addressing survey non-participation bias typically use only demographic information to improve estimates. We applied a novel methodology which uses health information from data linkage to adjust for non-representativeness. We illustrate the method by presenting adjusted alcohol consumption estimates for Scotland. Design: Data on consenting respondents to the Scottish Health Surveys (SHeSs) 1995-2010 were linked confidentially to routinely collected hospital admission and mortality records. Synthetic observations representing non-respondents were created using general population data. Multiple imputation was performed to compute adjusted alcohol estimates given a range of assumptions about the missing data. Adjusted estimates of mean weekly consumption were additionally calibrated to per-capita alcohol sales data. Setting: Scotland. Participants: 13 936 male and 18 021 female respondents to the SHeSs 1995-2010, aged 20-64 years. Measurements: Weekly alcohol consumption, non-, binge- and problem-drinking. Findings: Initial adjustment for non-response resulted in estimates of mean weekly consumption that were elevated by up to 17.8% [26.5 units (18.6-34.4)] compared with corrections based solely on socio-demographic data [22.5 (17.7-27.3)]; other drinking behaviour estimates were little changed. Under more extreme assumptions the overall difference was up to 53%, and calibrating to sales estimates resulted in up to 88% difference. Increases were especially pronounced among males in deprived areas. Conclusions: The use of routinely collected health data to reduce bias arising from survey non-response resulted in higher alcohol consumption estimates among working-age males in Scotland, with less impact for females. This new method of bias reduction can be generalized to other surveys to improve estimates of alternative harmful behaviours. PMID:28276110
NASA Astrophysics Data System (ADS)
Zhang, X.; Anagnostou, E. N.
2016-12-01
This research contributes to the improvement of high resolution satellite applications in tropical regions with mountainous topography. Such mountainous regions are usually covered by sparse networks of in-situ observations while quantitative precipitation estimation from satellite sensors exhibits strong underestimation of heavy orographically enhanced storm events. To address this issue, our research applies a satellite error correction technique based solely on high-resolution numerical weather predictions (NWP). Our previous work has demonstrated the accuracy of this method in two mid-latitude mountainous regions (Zhang et al. 2013*1, Zhang et al. 2016*2), while the current research focuses on a comprehensive evaluation in three tropical mountainous regions: Colombia, Peru and Taiwan. In addition, two different satellite precipitation products, NOAA Climate Prediction Center morphing technique (CMORPH) and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS), are considered. The study includes a large number of heavy precipitation events (68 events over the three regions) in the period 2004 to 2012. The NWP-based adjustments of the two satellite products are contrasted to their corresponding gauge-adjusted post-processing products. Preliminary results show that the NWP-based adjusted CMORPH product is consistently improved relative to both original and gauge-adjusted precipitation products for all regions and storms examined. The improvement of the PERSIANN-CCS product is less significant and less consistent relative to the CMORPH performance improvements from the NWP-based adjustment. *1 Zhang, Xinxuan, Emmanouil N. Anagnostou, Maria Frediani, Stavros Solomos, and George Kallos. "Using NWP simulations in satellite rainfall estimation of heavy precipitation events over mountainous areas." Journal of Hydrometeorology 14, no. 6 (2013): 1844-1858. *2 Zhang, Xinxuan, Emmanouil N. Anagnostou, and Humberto Vergara. "Hydrologic Evaluation of NWP-Adjusted CMORPH Estimates of Hurricane-Induced Precipitation in the Southern Appalachians." Journal of Hydrometeorology 17.4 (2016): 1087-1099.
Developmental and individual differences in pure numerical estimation.
Booth, Julie L; Siegler, Robert S
2006-01-01
The authors examined developmental and individual differences in pure numerical estimation, the type of estimation that depends solely on knowledge of numbers. Children between kindergarten and 4th grade were asked to solve 4 types of numerical estimation problems: computational, numerosity, measurement, and number line. In Experiment 1, kindergartners and 1st, 2nd, and 3rd graders were presented problems involving the numbers 0-100; in Experiment 2, 2nd and 4th graders were presented problems involving the numbers 0-1,000. Parallel developmental trends, involving increasing reliance on linear representations of numbers and decreasing reliance on logarithmic ones, emerged across different types of estimation. Consistent individual differences across tasks were also apparent, and all types of estimation skill were positively related to math achievement test scores. Implications for understanding of mathematics learning in general are discussed. Copyright 2006 APA, all rights reserved.
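The analysis contrast at the heart of the study above, increasing reliance on linear rather than logarithmic representations, amounts to comparing linear and log fits to number-line estimates. The sketch below does this on made-up, child-like compressive data; the numbers are illustrative only.

```python
# Hedged sketch: compare linear vs. logarithmic fits to number-line estimates.
import numpy as np

presented = np.array([3, 7, 15, 33, 52, 64, 81, 96], dtype=float)
estimated = np.array([25, 41, 54, 67, 74, 80, 88, 93], dtype=float)  # toy data

def r_squared(x, y):
    coef = np.polyfit(x, y, 1)            # ordinary least squares line
    resid = y - np.polyval(coef, x)
    return 1 - resid.var() / y.var()

r2_linear = r_squared(presented, estimated)
r2_log = r_squared(np.log(presented), estimated)
print(round(r2_linear, 3), round(r2_log, 3))  # the log fit should win here
```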
Llull, Rosa Maria; Garí, Mercè; Canals, Miquel; Rey-Maquieira, Teresa; Grimalt, Joan O
2017-10-01
The present study reports total mercury (THg) and methylmercury (MeHg) concentrations in 32 different lean fish species from the Western Mediterranean Sea, with a special focus on the Balearic Islands. The concentrations of THg ranged between 0.05 mg/kg ww and 3.1 mg/kg ww (mean 0.41 mg/kg ww). A considerable number of the fish species most frequently consumed by the Spanish population exceed the maximum levels proposed by the European legislation when they originate from the Mediterranean Sea, such as dusky grouper (100% of the examined specimens), common dentex (65%), conger (45%), common sole (38%), hake (26%) and angler (15%), among others. The estimated weekly intakes (EWI) in children (7-12 years of age) and adults from the Spanish population (2.7 µg/kg bw and 2.1 µg/kg bw, respectively) for a population consuming only Mediterranean fish were below the provisional tolerable weekly intake (PTWI) of THg established by EFSA in 2012, 4 µg/kg bw. However, the equivalent estimations for methylmercury, involving a PTWI of 1.3 µg/kg bw, were two times higher in children and above 50% in adults. For hake, sole, angler and dusky grouper, the most frequently consumed fish, the estimated weekly intakes in both children and adults were below the maximum levels accepted. These intakes correspond to maximum potential estimations because fish of non-Mediterranean origin is often consumed by the Spanish population, including the one from the Balearic Islands. Copyright © 2017 Elsevier Inc. All rights reserved.
Signal location using generalized linear constraints
NASA Astrophysics Data System (ADS)
Griffiths, Lloyd J.; Feldman, D. D.
1992-01-01
This report has presented a two-part method for estimating the directions of arrival of uncorrelated narrowband sources when there are arbitrary phase errors and angle independent gain errors. The signal steering vectors are estimated in the first part of the method; in the second part, the arrival directions are estimated. It should be noted that the second part of the method can be tailored to incorporate additional information about the nature of the phase errors. For example, if the phase errors are known to be caused solely by element misplacement, the element locations can be estimated concurrently with the DOA's by trying to match the theoretical steering vectors to the estimated ones. Simulation results suggest that, for general perturbation, the method can resolve closely spaced sources under conditions for which a standard high-resolution DOA method such as MUSIC fails.
BCI's domestic automotive replacement battery shipments by channel of distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-07-01
Thirteen manufacturing members, of Battery Council International, supplied the shipment figures contained in this report. This Channel of Distribution continues to account for an estimated 97% of the domestic replacement battery industry. This report indicates solely domestic replacement battery shipments for the following battery classifications: passenger car & light commercial; heavy duty commercial; special tractor; marine; general utility and golf car.
Ion distribution effects of turbulence on a kinetic auroral arc model
NASA Technical Reports Server (NTRS)
Cornwall, J. M.; Chiu, Y. T.
1982-01-01
An inverted-V auroral arc structure plasma-kinetic model is extended to phenomenologically include the effects of electrostatic turbulence, with k-parallel/k-perpendicular being much less than unity. It is shown that, unless plasma sheet ions are very much more energetic than the electrons, anomalous resistivity is not a large contributor to parallel electrostatic potential drops, since the support of the observed potential drop requires a greater dissipation of energy than can be provided by the plasma sheet. Wave turbulence can, however, be present, with the ion cyclotron turbulence levels suggested by the ion resonance broadening saturation mechanism of Dum and Dupree (1970) being comparable to those observed on auroral field lines. The diffusion coefficient and net growth rate are much smaller than estimates based solely on local plasma properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, S.; Gross, R.; Goble, W
The safety integrity level (SIL) of equipment used in safety instrumented functions is determined by the average probability of failure on demand (PFDavg) computed at the time of periodic inspection and maintenance, i.e., the time of proof testing. The computation of PFDavg is generally based solely on predictions or estimates of the assumed constant failure rate of the equipment. However, PFDavg is also affected by maintenance actions (or lack thereof) taken by the end user. This paper shows how maintenance actions can affect the PFDavg of spring operated pressure relief valves (SOPRV) and how these maintenance actions may be accounted for in the computation of the PFDavg metric. The method provides a means for quantifying the effects of changes in maintenance practices and shows how these changes impact plant safety.
NASA Astrophysics Data System (ADS)
Gajek, Z.
2004-05-01
The electronic properties of the actinide ions in the series of semi-conducting, antiferromagnetic compounds: dioxides, AnO2 and oxychalcogenides, AnOY, where An=U, Np and Y=S, Se, are re-examined from the point of view of the consistency of the crystal field (CF) model. The discussion is based on the supposition that the effective metal-ligand interaction solely determines the net CF effect in non-metallic compounds. The main question we address here is, whether a reliable, consistent description of the CF effect in terms of the intrinsic parameters can be achieved for this particular family of compounds. Encouraging calculations reported previously for the AnO2 and UOY series serve as a reference data in the present estimation of electronic structure parameters for neptunium oxychalcogenides.
Indirect land use change and biofuel policy
NASA Astrophysics Data System (ADS)
Kocoloski, Matthew; Griffin, W. Michael; Matthews, H. Scott
2009-09-01
Biofuel debates often focus heavily on carbon emissions, with parties arguing for (or against) biofuels solely on the basis of whether the greenhouse gas emissions of biofuels are less than (or greater than) those of gasoline. Recent studies argue that land use change leads to significant greenhouse gas emissions, making some biofuels more carbon intensive than gasoline. We argue that evaluating the suitability and utility of biofuels or any alternative energy source within the limited framework of plus and minus carbon emissions is too narrow an approach. Biofuels have numerous impacts, and policy makers should seek compromises rather than relying solely on carbon emissions to determine policy. Here, we estimate that cellulosic ethanol, despite having potentially higher life cycle CO2 emissions (including from land use) than gasoline, would still be cost-effective at a CO2 price of $80 per ton or less, well above estimated CO2 mitigation costs for many alternatives. As an example of the broader approach to biofuel policy, we suggest the possibility of using the potential cost reductions of cellulosic ethanol relative to gasoline to balance out additional carbon emissions resulting from indirect land use change as an example of ways in which policies could be used to arrive at workable solutions.
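The break-even logic behind the cost-effectiveness claim above is simple arithmetic: divide the cost advantage per unit of fuel energy by the extra emissions per unit of fuel energy. The numbers in the sketch below are purely hypothetical placeholders (not taken from the paper), chosen only so the break-even lands near the figure quoted above.

```python
# Hedged sketch: break-even CO2 price for a cheaper but more carbon-intensive fuel.
cost_saving_per_gj = 2.0        # $/GJ cheaper than gasoline (assumed)
extra_emissions_per_gj = 0.025  # t CO2-e/GJ extra, incl. land use (assumed)

breakeven_price = cost_saving_per_gj / extra_emissions_per_gj  # $/t CO2-e
print(breakeven_price)  # the fuel stays cost-effective below this CO2 price
```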
Potential sites of compression of tibial nerve branches in foot: a cadaveric and imaging study.
Ghosh, Sanjib Kumar; Raheja, Shashi; Tuli, Anita
2013-09-01
Hypertrophy of abductor hallucis muscle is one of the reported causes of compression of tibial nerve branches in the foot, resulting in tarsal tunnel syndrome. In this study, we dissected the foot (including the sole) of 120 lower limbs in 60 human cadavers (45 males and 15 females), aged between 45 and 70 years, to analyze the possible impact of abductor hallucis muscle in compression neuropathy of tibial nerve branches. We identified five areas in the foot where tibial nerve branches could be compressed by abductor hallucis. Our findings regarding three of these areas were substantiated by clinical evidence from ultrasonography of the ankle and sole region, conducted in the affected foot of 120 patients (82 males and 38 females), aged between 42 and 75 years, who were referred for evaluation of pain and/or swelling in the medial side of the ankle joint with or without associated heel and/or sole pain. We also assessed whether estimation of parameters for the muscle size could identify patients at risk of having nerve compression due to abductor hallucis muscle hypertrophy. The interclass correlation coefficient for dorso-plantar thickness of abductor hallucis muscle was 0.84 (95% CI, 0.63-0.92) and that of medio-lateral width was 0.78 (95% CI, 0.62-0.88) in the imaging study, suggesting both are reliable parameters of the muscle size. Receiver operating characteristic curve analysis showed that, if ultrasonographic estimation of dorso-plantar thickness is >12.8 mm and medio-lateral width > 30.66 mm in patients with symptoms of nerve compression in the foot, abductor hallucis muscle hypertrophy associated compression neuropathy may be suspected. Copyright © 2012 Wiley Periodicals, Inc.
Improving Quality of Shoe Soles Product using Six Sigma
NASA Astrophysics Data System (ADS)
Jesslyn Wijaya, Athalia; Trusaji, Wildan; Akbar, Muhammad; Ma’ruf, Anas; Irianto, Dradjad
2018-03-01
A manufacturer in Bandung produces various rubber-based products, e.g. trim, rice rollers, and shoe soles. After penetrating the shoe sole market, the manufacturer met customers with tight quality control requirements. Based on past data, the defect level of this product was 18.08%, which caused the manufacturer losses of time and money. A quality improvement effort was carried out using the six sigma method, which comprises the phases of define, measure, analyse, improve, and control (DMAIC). In the define phase, the problem and the object of the study were defined. The Delphi method was also used in this phase to identify critical factors. In the measure phase, the existing process stability and sigma quality level were measured. A fishbone diagram and failure mode and effect analysis (FMEA) were used in the next phase to analyse the root causes and determine the priority issues. The improve phase consisted of designing alternative improvement strategies using the 5W1H method. Several improvement efforts were identified, i.e. (i) modifying the design of the hanging rack, (ii) creating a Pantone colour book and check sheets, (iii) providing a pedestrian line at the compound department, (iv) buying a stopwatch, and (v) modifying the shoe sole dies. Some control strategies for continuous improvement were proposed, such as SOPs or a reward and punishment system.
Parents' work patterns and adolescent mental health.
Dockery, Alfred; Li, Jianghong; Kendall, Garth
2009-02-01
Previous research demonstrates that non-standard work schedules undermine the stability of marriage and reduce family cohesiveness. Limited research has investigated the effects of parents working non-standard schedules on children's health and wellbeing and no published Australian studies have addressed this important issue. This paper contributes to bridging this knowledge gap by focusing on adolescents aged 15-20 years and by including sole parent families which have been omitted in previous research, using panel data from the Household, Income and Labour Dynamics in Australia Survey. Multilevel linear regression models are estimated to analyse the association between parental work schedules and hours of work and measures of adolescents' mental health derived from the SF-36 Health Survey. Evidence of negative impacts of parents working non-standard hours upon adolescent wellbeing is found to exist primarily within sole parent families.
Features of the incorporation of single and double based powders within emulsion explosives
NASA Astrophysics Data System (ADS)
Ribeiro, J. B.; Mendes, R.; Tavares, B.; Louro, C.
2014-05-01
In this work, features of the thermal and detonation behaviour of compositions resulting from the mixture of single- and double-based powders within ammonium nitrate based emulsion explosives are shown. Those features are portrayed through results of thermodynamic-equilibrium calculations of the detonation velocity, a chemical compatibility assessment through differential thermal analysis (DTA) and thermogravimetric analysis (TGA), the experimental determination of the detonation velocity, and a comparative evaluation of the shock sensitivity using a modified version of the "gap test". DTA/TGA results for the compositions and for the individual components overlap until the beginning of the thermal decomposition, which indicates the absence of formation of any new chemical species and hence the compatibility of the components of the compositions. After the beginning of the thermal decomposition, the rate of mass loss is much higher for the compositions with powder than for the emulsion explosive alone. Both theoretical and experimental values of the detonation velocity have been shown to be higher for the powdered compositions than for the emulsion explosive alone. Shock sensitivity assessments indicated a slightly higher sensitivity for the compositions with double-based powder than for the single-based compositions or the emulsion alone.
Metamorphic sole genesis at the base of ophiolite nappes: Insights from numerical models
NASA Astrophysics Data System (ADS)
Yamato, Philippe; Agard, Philippe; Duretz, Thibault
2015-04-01
Obduction emplaces oceanic lithosphere on top of continental lithosphere. Although a number of studies have focused on this enigmatic process, the initial stages of obduction remain poorly understood. Field, petrological, and geochronological data reveal that during the first stages of the obduction (i.e., during the first 1-2 Myrs) an HT-LP metamorphic sole (~700-800 °C and ~1 GPa) is systematically welded at the base of ophiolite nappes. However, the reason why such welding of the ophiolite soles occurs at these particular P-T conditions, and only at the onset of obduction, is still an open issue. The aim of this study is to explore the conditions required to explain the genesis of metamorphic soles. For this, we employ two-dimensional numerical modelling, constrained by the wealth of available data from the Oman ophiolite. We first present a thermo-kinematic model in which the velocity field is prescribed in order to simulate obduction initiation. The heat advection-diffusion equation is solved at each time step. The model is intentionally kept simple in order to control each parameter (e.g., convergence rate, dip angle, thermal age) and to test its influence on the resulting P-T conditions obtained through time along the obduction interface. Results show that the key factor allowing the formation of metamorphic soles is the age of the oceanic lithosphere involved. Moreover, we speculate that the reason why metamorphic soles are always welded at the same P-T conditions is due to the fact that, at these particular conditions, strength jumps occur within the oceanic lithosphere. These jumps lead to changes in strain localisation and allow the spalling of oceanic crust and its juxtaposition to the ophiolite nappe. This hypothesis is further tested using thermo-mechanical models in which the obduction initiates dynamically (only initial and boundary conditions are prescribed). The interplay between the temperature evolution and the mechanical behaviour is then discussed.
Model-Based and Model-Free Pavlovian Reward Learning: Revaluation, Revision and Revelation
Dayan, Peter; Berridge, Kent C.
2014-01-01
Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation. PMID:24647659
NASA Astrophysics Data System (ADS)
Min, Kyoungwon; Farah, Annette E.; Lee, Seung Ryeol; Lee, Jong Ik
2017-01-01
Shock conditions of Martian meteorites provide crucial information about ejection dynamics and original features of the Martian rocks. To better constrain equilibrium shock temperatures (Tequi-shock) of Martian meteorites, we investigated (U-Th)/He systematics of moderately-shocked (Zagami) and intensively shocked (ALHA77005) Martian meteorites. Multiple phosphate aggregates from Zagami and ALHA77005 yielded overall (U-Th)/He ages of 92.2 ± 4.4 Ma (2σ) and 8.4 ± 1.2 Ma, respectively. These ages correspond to fractional losses of 0.49 ± 0.03 (Zagami) and 0.97 ± 0.01 (ALHA77005), assuming that the ejection-related shock event at ∼3 Ma is solely responsible for diffusive helium loss since crystallization. For He diffusion modeling, the diffusion domain radius is estimated based on detailed examination of fracture patterns in phosphates using a scanning electron microscope. For Zagami, the diffusion domain radius is estimated to be ∼2-9 μm, which is generally consistent with calculations from isothermal heating experiments (1-4 μm). For ALHA77005, the diffusion domain radius of ∼4-20 μm is estimated. Using the newly constrained (U-Th)/He data, diffusion domain radii, and other previously estimated parameters, the conductive cooling models yield Tequi-shock estimates of 360-410 °C and 460-560 °C for Zagami and ALHA77005, respectively. According to the sensitivity test, the estimated Tequi-shock values are relatively robust to input parameters. The Tequi-shock estimates for Zagami are more robust than those for ALHA77005, primarily because Zagami yielded an intermediate fHe value (0.49) compared to ALHA77005 (0.97). For the less intensively shocked Zagami, the He diffusion-based Tequi-shock estimates (this study) are significantly higher than expected from previously reported Tpost-shock values. For intensively shocked ALHA77005, the two independent approaches yielded generally consistent results. Using two other examples of previously studied Martian meteorites (ALHA84001 and Los Angeles), we compared Tequi-shock and Tpost-shock estimates. For intensively shocked meteorites (ALHA77005, Los Angeles), the He diffusion-based approach yields Tequi-shock values that are slightly higher than, or consistent with, those estimated from Tpost-shock, and the discrepancy between the two methods increases as the intensity of shock increases. The reason for the discrepancy between the two methods, particularly for less-intensively shocked meteorites (Zagami, ALHA84001), remains to be resolved, but we prefer the He diffusion-based approach because its Tequi-shock estimates are relatively robust to input parameters.
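The fractional-loss quantity used above can be illustrated with the standard series solution for degassing of an initially uniform sphere held at a fixed temperature for a fixed duration. This is only a sketch of that textbook expression; the study's conductive cooling models integrate diffusivity over a full time-temperature history, and the frequency factor and activation energy below are placeholders, not the phosphate He diffusion parameters used in the paper.

```python
# Hedged sketch: fractional He loss from a uniform sphere for an isothermal pulse.
import numpy as np

R = 8.314          # J/mol/K
D0_over_a2 = 1e6   # 1/s, placeholder pre-exponential factor D0/a^2 (assumed)
Ea = 120e3         # J/mol, placeholder activation energy (assumed)

def fractional_loss(T_kelvin, duration_s, n_terms=200):
    """f = 1 - (6/pi^2) * sum_n (1/n^2) exp(-n^2 pi^2 D t / a^2)."""
    Dt_a2 = D0_over_a2 * np.exp(-Ea / (R * T_kelvin)) * duration_s
    n = np.arange(1, n_terms + 1)
    return 1 - (6 / np.pi**2) * np.sum(np.exp(-n**2 * np.pi**2 * Dt_a2) / n**2)

for T in (600, 800, 1000):     # trial temperatures in K
    print(T, round(fractional_loss(T, duration_s=3600.0), 3))
```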
Behavioral correlates of heart rates of free-living Greater White-fronted Geese
Ely, Craig R.; Ward, D.H.; Bollinger, K.S.
1999-01-01
We simultaneously monitored the heart rate and behavior of nine free-living Greater White-fronted Geese (Anser albifrons) on their wintering grounds in northern California. Heart rates of wild geese were monitored via abdominally-implanted radio transmitters with electrodes that received electrical impulses of the heart and emitted a radio signal with each ventricular contraction. Post-operative birds appeared to behave normally, readily rejoining flocks and flying up to 15 km daily from night-time roost sites to feed in surrounding agricultural fields. Heart rates varied significantly among individuals and among behaviors, and ranged from less than 100 beats per minute (BPM) during resting, to over 400 BPM during flight. Heart rates varied from 80 to 140 BPM during non-strenuous activities such as walking, feeding, and maintenance activities, to about 180 BPM when birds became alert, and over 400 BPM when birds were startled, even if they did not take flight. Postflight heart rate recovery time averaged < 10 sec. During agonistic encounters, heart rate exceeded 400 BPM; heart rates during social interactions were not predictable solely from postures, as heart rates were context-dependent, and were highest in initial encounters among individuals. Instantaneous measures of physiological parameters, such as heart rate, are often better indicators of the degree of response to external stimuli than visual observations and can be used to improve estimates of energy expenditure based solely on activity data.
NASA Astrophysics Data System (ADS)
Cluzel, Dominique; Jourdan, Fred; Meffre, Sébastien; Maurizot, Pierre; Lesimple, Stéphane
2012-06-01
Amphibolite lenses that locally crop out below the serpentinite sole at the base of the ophiolite of New Caledonia (termed Peridotite Nappe) recrystallized in the high-temperature amphibolite facies and thus sharply contrast with blueschists and eclogites of the Eocene metamorphic complex. Amphibolites mostly display the geochemical features of MORB with a slight Nb depletion and thus are similar to the youngest (Late Paleocene-Eocene) BABB components of the allochthonous Poya Terrane. Thermochronological data from hornblende (40Ar/39Ar), zircon, and sphene (U-Pb) suggest that these mafic rocks recrystallized at ~56 Ma. Using various geothermobarometers provides a rough estimate of peak recrystallization conditions of ~0.5 GPa at ~800-950°C. The thermal gradient inferred from the metamorphic assemblage (~60°C km-1), geometrical relationships, and geochemical similarity suggest that these mafic rocks belong to the oceanic crust of the lower plate of the subduction/obduction system and recrystallized when they subducted below young and hot oceanic lithosphere. They were detached from the down-going plate and finally thrust onto unmetamorphosed Poya Terrane basalts. This and the occurrence of slab melts at ~53 Ma suggest that subduction inception occurred at or near to the spreading ridge of the South Loyalty Basin at ~56 Ma.
Loukas, Constantinos; Lahanas, Vasileios; Georgiou, Evangelos
2013-12-01
Despite the popular use of virtual and physical reality simulators in laparoscopic training, the educational potential of augmented reality (AR) has not received much attention. A major challenge is the robust tracking and three-dimensional (3D) pose estimation of the endoscopic instrument, which are essential for achieving interaction with the virtual world and for realistic rendering when the virtual scene is occluded by the instrument. In this paper we propose a method that addresses these issues, based solely on visual information obtained from the endoscopic camera. Two different tracking algorithms are combined for estimating the 3D pose of the surgical instrument with respect to the camera. The first tracker creates an adaptive model of a colour strip attached to the distal part of the tool (close to the tip). The second algorithm tracks the endoscopic shaft, using a combined Hough-Kalman approach. The 3D pose is estimated with perspective geometry, using appropriate measurements extracted by the two trackers. The method has been validated on several complex image sequences for its tracking efficiency, pose estimation accuracy and applicability in AR-based training. Using a standard endoscopic camera, the absolute average error of the tip position was 2.5 mm for working distances commonly found in laparoscopic training. The average error of the instrument's angle with respect to the camera plane was approximately 2°. The results are also supplemented by video segments of laparoscopic training tasks performed in a physical and an AR environment. The experiments yielded promising results regarding the potential of applying AR technologies for laparoscopic skills training, based on a computer vision framework. The issue of occlusion handling was adequately addressed. The estimated trajectory of the instruments may also be used for surgical gesture interpretation and assessment. Copyright © 2013 John Wiley & Sons, Ltd.
Modifying Bagnold's Sediment Transport Equation for Use in Watershed-Scale Channel Incision Models
NASA Astrophysics Data System (ADS)
Lammers, R. W.; Bledsoe, B. P.
2016-12-01
Destabilized stream channels may evolve through a sequence of stages, initiated by bed incision and followed by bank erosion and widening. Channel incision can be modeled using Exner-type mass balance equations, but model accuracy is limited by the accuracy and applicability of the selected sediment transport equation. Additionally, many sediment transport relationships require significant data inputs, limiting their usefulness in data-poor environments. Bagnold's empirical relationship for bedload transport is attractive because it is based on stream power, a relatively straightforward parameter to estimate using remote sensing data. However, the equation is also dependent on flow depth, which is more difficult to measure or estimate for entire drainage networks. We recast Bagnold's original sediment transport equation using specific discharge in place of flow depth. Using a large dataset of sediment transport rates from the literature, we show that this approach yields similar predictive accuracy as other stream power based relationships. We also explore the applicability of various critical stream power equations, including Bagnold's original, and support previous conclusions that these critical values can be predicted well based solely on sediment grain size. In addition, we propagate error in these sediment transport equations through channel incision modeling to compare the errors associated with our equation to alternative formulations. This new version of Bagnold's bedload transport equation has utility for channel incision modeling at larger spatial scales using widely available and remote sensing data.
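A generic excess-stream-power bedload calculation in the spirit of the recast relation described above is sketched below. The specific stream power per unit bed area follows from density, gravity, specific discharge, and slope; the critical-power form, the coefficient k, and the exponent are placeholders, not the fitted values or the authors' exact recast equation.

```python
# Hedged sketch: excess-stream-power bedload transport using specific discharge.
rho, g = 1000.0, 9.81      # water density (kg/m^3), gravitational acceleration (m/s^2)

def specific_stream_power(q, slope):
    """Stream power per unit bed area (W/m^2); q is specific discharge (m^2/s)."""
    return rho * g * q * slope

def critical_stream_power(d50):
    """Placeholder grain-size-based threshold (W/m^2); d50 in m. Illustrative only."""
    return 0.1 * (d50 * 1000.0) ** 1.5

def bedload_flux(q, slope, d50, k=0.01, exponent=1.5):
    """Bedload transport rate per unit width as a power of excess stream power."""
    excess = max(specific_stream_power(q, slope) - critical_stream_power(d50), 0.0)
    return k * excess ** exponent

print(bedload_flux(q=2.0, slope=0.002, d50=0.02))
```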
The Mathematical Bases for Qualitative Reasoning
1990-01-01
...without use of mathematical formalisms, but solely in terms of ordinary language. A good deal of such qualitative reasoning makes implicit use of the properties of ordinal variables and... Fahrenheit or Celsius temperature on either day. If we are considering an equation connecting two variables, y = f(x), we...
NASA Astrophysics Data System (ADS)
Kuang, Ye; Zhao, Chun Sheng; Zhao, Gang; Tao, Jiang Chuan; Xu, Wanyun; Ma, Nan; Bian, Yu Xuan
2018-05-01
Water condensed on ambient aerosol particles plays significant roles in atmospheric environment, atmospheric chemistry and climate. Before now, no instruments were available for real-time monitoring of ambient aerosol liquid water contents (ALWCs). In this paper, a novel method is proposed to calculate ambient ALWC based on measurements of a three-wavelength humidified nephelometer system, which measures aerosol light scattering coefficients and backscattering coefficients at three wavelengths under dry state and different relative humidity (RH) conditions, providing measurements of the light scattering enhancement factor f(RH). The proposed ALWC calculation method includes two steps: the first step is the estimation of the dry state total volume concentration of ambient aerosol particles, Va(dry), with a machine learning method (a random forest model) based on measurements of the dry nephelometer. The estimated Va(dry) agrees well with the measured one. The second step is the estimation of the volume growth factor Vg(RH) of ambient aerosol particles due to water uptake, using f(RH) and the Ångström exponent. The ALWC is calculated from the estimated Va(dry) and Vg(RH). To validate the new method, the ambient ALWC calculated from measurements of the humidified nephelometer system during the Gucheng campaign was compared with the ambient ALWC calculated from the ISORROPIA thermodynamic model using aerosol chemistry data. A good agreement was achieved, with a slope and intercept of 1.14 and -8.6 µm³ cm⁻³ (r² = 0.92), respectively. The advantage of this new method is that the ambient ALWC can be obtained solely based on measurements of a three-wavelength humidified nephelometer system, facilitating the real-time monitoring of the ambient ALWC and promoting the study of aerosol liquid water and its role in atmospheric chemistry, secondary aerosol formation and climate change.
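The final combination step of the two-step procedure above can be sketched compactly: once Va(dry) has been predicted (here with a stand-in random forest on synthetic dry-nephelometer channels) and a volume growth factor Vg(RH) has been inferred from f(RH), the liquid water volume follows by subtraction. The training data, the assumed Vg value, and the mapping from f(RH) to Vg(RH) are placeholders, not the paper's parameterization.

```python
# Hedged sketch: ALWC = Va(dry) * (Vg(RH) - 1), with a stand-in regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
# synthetic training data: dry scattering/backscattering at 3 wavelengths -> Va(dry)
X_train = rng.uniform(10, 400, size=(500, 6))
y_train = 0.3 * X_train[:, 1] + rng.normal(0, 5, 500)     # synthetic target
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

x_now = rng.uniform(10, 400, size=(1, 6))
va_dry = rf.predict(x_now)[0]          # predicted dry volume (um^3 cm^-3)
vg_rh = 1.8                            # assumed volume growth factor at this RH
alwc = va_dry * (vg_rh - 1.0)          # aerosol liquid water content
print(round(alwc, 1))
```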
Global solar wind variations over the last four centuries
Owens, M. J.; Lockwood, M.; Riley, P.
2017-01-01
The most recent “grand minimum” of solar activity, the Maunder minimum (MM, 1650–1710), is of great interest both for understanding the solar dynamo and providing insight into possible future heliospheric conditions. Here, we use nearly 30 years of output from a data-constrained magnetohydrodynamic model of the solar corona to calibrate heliospheric reconstructions based solely on sunspot observations. Using these empirical relations, we produce the first quantitative estimate of global solar wind variations over the last 400 years. Relative to the modern era, the MM shows a factor 2 reduction in near-Earth heliospheric magnetic field strength and solar wind speed, and up to a factor 4 increase in solar wind Mach number. Thus solar wind energy input into the Earth’s magnetosphere was reduced, resulting in a more Jupiter-like system, in agreement with the dearth of auroral reports from the time. The global heliosphere was both smaller and more symmetric under MM conditions, which has implications for the interpretation of cosmogenic radionuclide data and resulting total solar irradiance estimates during grand minima. PMID:28139769
Goff, Ben M; Moore, Kenneth J; Fales, Steven L; Pedersen, Jeffery F
2011-06-01
Sorghum [Sorghum bicolor (L.) Moench] has been shown to contain the cyanogenic glycoside dhurrin, which is responsible for the disorder known as prussic acid poisoning in livestock. The current standard method for estimating hydrogen cyanide (HCN) uses spectrophotometry to measure the aglycone, p-hydroxybenzaldehyde (p-HB), after hydrolysis. Errors may occur due to the inability of this method to solely estimate the absorbance of p-HB at a given wavelength. The objective of this study was to compare the use of gas chromatography (GC) and near infrared spectroscopy (NIRS) methods, along with a spectrophotometry method, to estimate the potential for prussic acid (HCNp) of sorghum and sudangrasses over three stages of maturity. It was shown that the GC produced higher HCNp estimates than the spectrophotometer for the grain sorghums, but lower concentrations for the sudangrass. Based on what is known about the analytical process of each method, the GC data are likely closer to the true HCNp concentrations of the forages. Both the GC and spectrophotometry methods yielded robust equations with the NIRS method; however, using GC as the calibration method resulted in more accurate and repeatable estimates. The HCNp values obtained from using the GC quantification method are believed to be closer to the actual values of the forage, and use of this method will provide a more accurate and easily automated means of quantifying prussic acid. Copyright © 2011 Society of Chemical Industry.
Shutdown of the Federal Government: Causes, Processes, and Effects
2013-09-25
has an ability to borrow to finance its obligations. As a result, the federal government would need to rely solely on incoming revenues to finance... loss of tourism revenues to local communities; and closure of national museums and monuments (reportedly with an estimated loss of 2 million... revenues and "carryover" funds from
RAiSE III: 3C radio AGN energetics and composition
NASA Astrophysics Data System (ADS)
Turner, Ross J.; Shabala, Stanislav S.; Krause, Martin G. H.
2018-03-01
Kinetic jet power estimates based exclusively on observed monochromatic radio luminosities are highly uncertain due to confounding variables and a lack of knowledge about some aspects of the physics of active galactic nuclei (AGNs). We propose a new methodology to calculate the jet powers of the largest, most powerful radio sources based on combinations of their size, lobe luminosity, and shape of their radio spectrum; this approach avoids the uncertainties encountered by previous relationships. The outputs of our model are calibrated using hydrodynamical simulations and tested against independent X-ray inverse-Compton measurements. The jet powers and lobe magnetic field strengths of radio sources are found to be recovered using solely the lobe luminosity and spectral curvature, enabling the intrinsic properties of unresolved high-redshift sources to be inferred. By contrast, the radio source ages cannot be estimated without knowledge of the lobe volumes. The monochromatic lobe luminosity alone is incapable of accurately estimating the jet power or source age without knowledge of the lobe magnetic field strength and size, respectively. We find that, on average, the lobes of the Third Cambridge Catalogue of Radio Sources (3C) have magnetic field strengths approximately a factor three lower than the equipartition value, inconsistent with equal energy in the particles and the fields at the 5σ level. The particle content of 3C radio lobes is discussed in the context of complementary observations; we do not find evidence favouring an energetically dominant proton population.
Andres, R. J. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Boden, T. A. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Marland, G. [Appalachian State University, Boone, NC (USA)
2010-01-01
The 2010 version of this database presents a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2007. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional, and national annual estimates for 1751 through 2007 were published earlier (Boden et al. 2010). Those national, annual CO2 emission estimates were based on statistics about fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption, and trade data, using the methods of Marland and Rotty (1984). The national annual estimates were combined with gridded 1° data on political units and 1984 human populations to create the new gridded CO2 emission time series. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mixes are uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in fossil-fuel CO2 emissions over time are apparent for most areas.
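As an illustration of the population-proxy disaggregation described above, the sketch below distributes a national emission total over grid cells in proportion to population. It is not the CDIAC code; the function and array names are hypothetical.

```python
import numpy as np

def grid_national_emissions(national_total_MtC: float,
                            population_grid: np.ndarray,
                            country_mask: np.ndarray) -> np.ndarray:
    """Distribute a national CO2 total over grid cells in proportion to population.

    population_grid: per-cell population counts; country_mask: boolean array marking
    the cells of the political unit. The per-capita uniformity assumption means each
    cell receives national_total * (cell population / national population).
    """
    pop_in_country = np.where(country_mask, population_grid, 0.0)
    total_pop = pop_in_country.sum()
    if total_pop == 0:
        return np.zeros_like(population_grid, dtype=float)
    return national_total_MtC * pop_in_country / total_pop

# Toy example: a 2x2 "country" emitting 10 MtC with unequal population.
pop = np.array([[100.0, 300.0], [0.0, 0.0]])
mask = np.array([[True, True], [False, False]])
print(grid_national_emissions(10.0, pop, mask))  # [[2.5, 7.5], [0., 0.]]
```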
Andres, R. J. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Boden, T. A. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Marland, G. [Appalachian State University, Boone, NC (USA)
2013-01-01
The 2013 version of this database presents a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2010. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional, and national annual estimates for 1751 through 2010 were published earlier (Boden et al. 2013). Those national, annual CO2 emission estimates were based on statistics about fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption, and trade data, using the methods of Marland and Rotty (1984). The national annual estimates were combined with gridded 1° data on political units and 1984 human populations to create the new gridded CO2 emission time series. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mixes are uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in fossil-fuel CO2 emissions over time are apparent for most areas.
Andres, R. J. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Boden, T. A. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Marland, G. [Appalachian State University, Boone, NC (USA)
2015-01-01
The 2015 version of this database presents a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2011. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional, and national annual estimates for 1751 through 2011 were published earlier (Boden et al. 2015). Those national, annual CO2 emission estimates were based on statistics about fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption, and trade data, using the methods of Marland and Rotty (1984). The national annual estimates were combined with gridded 1° data on political units and 1984 human populations to create the new gridded CO2 emission time series. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mixes are uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in fossil-fuel CO2 emissions over time are apparent for most areas.
Andres, R. J. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Boden, T. A. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Marland, G. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA)
2011-01-01
The 2011 version of this database presents a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2008. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional, and national annual estimates for 1751 through 2008 were published earlier (Boden et al. 2011). Those national, annual CO2 emission estimates were based on statistics about fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption, and trade data, using the methods of Marland and Rotty (1984). The national annual estimates were combined with gridded 1° data on political units and 1984 human populations to create the new gridded CO2 emission time series. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mixes are uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in fossil-fuel CO2 emissions over time are apparent for most areas.
Andres, R. J. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Boden, T. A. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Marland, G. [Appalachian State University, Boone, NC (USA)
2012-01-01
The 2012 version of this database presents a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2009. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional, and national annual estimates for 1751 through 2009 were published earlier (Boden et al. 2012). Those national, annual CO2 emission estimates were based on statistics about fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption, and trade data, using the methods of Marland and Rotty (1984). The national annual estimates were combined with gridded 1° data on political units and 1984 human populations to create the new gridded CO2 emission time series. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mixes are uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in fossil-fuel CO2 emissions over time are apparent for most areas.
Andres, R. J. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Boden, T. A. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Marland, G. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA)
2009-01-01
The 2009 version of this database presents a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2006. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional, and national annual estimates for 1751 through 2006 were published earlier (Boden et al. 2009). Those national, annual CO2 emission estimates were based on statistics about fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption, and trade data, using the methods of Marland and Rotty (1984). The national annual estimates were combined with gridded 1° data on political units and 1984 human populations to create the new gridded CO2 emission time series. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mixes are uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in fossil-fuel CO2 emissions over time are apparent for most areas.
Andres, R. J. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Boden, T. A. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA)
2016-01-01
The 2016 version of this database presents a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2013. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional, and national annual estimates for 1751 through 2013 were published earlier (Boden et al. 2016). Those national, annual CO2 emission estimates were based on statistics about fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption, and trade data, using the methods of Marland and Rotty (1984). The national annual estimates were combined with gridded 1° data on political units and 1984 human populations to create the new gridded CO2 emission time series. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mixes are uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in fossil-fuel CO2 emissions over time are apparent for most areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brenkert, A.L.; Andres, R.J.; Marland, G.
1997-03-01
Data sets of one degree latitude by one degree longitude carbon dioxide (CO2) emissions in units of thousand metric tons of carbon (C) per year from anthropogenic sources have been produced for 1950, 1960, 1970, 1980 and 1990. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional and national annual estimates for 1950 through 1992 were published previously. Those national, annual CO2 emission estimates were based on statistics on fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption and trade data, using the methods of Marland and Rotty. The national annual estimates were combined with gridded one-degree data on political units and 1984 human populations to create the new gridded CO2 emission data sets. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mix is uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in emissions over time are apparent for most areas.
NASA Astrophysics Data System (ADS)
Visacro, Silverio; Guimaraes, Miguel; Murta Vale, Maria Helena
2017-12-01
First and subsequent return strokes' striking distances (SDs) were determined for negative cloud-to-ground flashes from high-speed videos exhibiting the development of positive and negative leaders and the pre-return stroke phase of currents measured along a short tower. In order to improve the results, a new criterion was used for the initiation and propagation of the sustained upward connecting leader, consisting of a 4 A continuous current threshold. An advanced approach developed from the combined use of this criterion and a reverse propagation procedure, which considers the calculated propagation speeds of the leaders, was applied and revealed that SDs determined solely from the first video frame showing the upward leader can be significantly underestimated. An original approach was proposed for a rough estimate of first strokes' SD using solely records of current. This approach combines the 4 A criterion and a representative composite three-dimensional propagation speed of 0.34 × 10⁶ m/s for the leaders over the last 300 m of propagated distance. SDs determined with this approach were shown to be consistent with those of the advanced procedure. This approach was applied to determine the SD of 17 first return strokes of negative flashes measured at MCS, covering a wide peak-current range, from 18 to 153 kA. The estimated SDs exhibit very high dispersion and reveal great differences in relation to the SDs estimated for subsequent return strokes and strokes in triggered lightning.
Win-Win for Wind and Wildlife: A Vision to Facilitate Sustainable Development
Kiesecker, Joseph M.; Evans, Jeffrey S.; Fargione, Joe; Doherty, Kevin; Foresman, Kerry R.; Kunz, Thomas H.; Naugle, Dave; Nibbelink, Nathan P.; Niemuth, Neal D.
2011-01-01
Wind energy offers the potential to reduce carbon emissions while increasing energy independence and bolstering economic development. However, wind energy has a larger land footprint per Gigawatt (GW) than most other forms of energy production, making appropriate siting and mitigation particularly important. Species that require large unfragmented habitats and those known to avoid vertical structures are particularly at risk from wind development. Developing energy on disturbed lands rather than placing new developments within large and intact habitats would reduce cumulative impacts to wildlife. The U.S. Department of Energy estimates that it will take 241 GW of terrestrial based wind development on approximately 5 million hectares to reach 20% electricity production for the U.S. by 2030. We estimate there are ∼7,700 GW of potential wind energy available across the U.S., with ∼3,500 GW on disturbed lands. In addition, a disturbance-focused development strategy would avert the development of ∼2.3 million hectares of undisturbed lands while generating the same amount of energy as development based solely on maximizing wind potential. Wind subsidies targeted at favoring low-impact developments and creating avoidance and mitigation requirements that raise the costs for projects impacting sensitive lands could improve public value for both wind energy and biodiversity conservation. PMID:21533285
How much work-related injury and illness is missed by the current national surveillance system?
Rosenman, Kenneth D; Kalush, Alice; Reilly, Mary Jo; Gardiner, Joseph C; Reeves, Mathew; Luo, Zhewui
2006-04-01
We sought to estimate the undercount in the existing national surveillance system of occupational injuries and illnesses. Adhering to the strict confidentiality rules of the U.S. Bureau of Labor Statistics, we matched the companies and individuals who reported work-related injuries and illnesses to the Bureau in 1999, 2000, and 2001 in Michigan with companies and individuals reported in four other Michigan databases: workers' compensation, the OSHA Annual Survey, the OSHA Integrated Management Information System, and the Occupational Disease Report. We performed capture-recapture analysis to estimate the number of cases missed by the combined systems. We calculated that the current national surveillance system did not include 61% (up to 68% with capture-recapture analysis) of the work-related injuries and illnesses that occurred annually in Michigan. This was true for injuries alone (60% and 67%) and illnesses alone (66% and 69%), respectively. The current national system for work-related injuries and illnesses markedly underestimates the magnitude of these conditions. A more comprehensive system, such as the one developed for traumatic workplace fatalities, that is not solely dependent on employer-based data sources is needed to better guide decision-making and evaluation of public health programs to reduce work-related conditions.
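For readers unfamiliar with capture-recapture, the following is a minimal two-source sketch using the Chapman estimator with hypothetical counts; the study itself combined more than two sources and used more elaborate models that allow for source dependence.

```python
def chapman_estimate(n1: int, n2: int, m: int) -> float:
    """Two-source capture-recapture (Chapman) estimate of the total number of cases.

    n1, n2: cases found by each surveillance source; m: cases found by both.
    Simplified illustration only; multi-source studies typically use log-linear models.
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts: one source finds 4000 cases, another 6000, with 1500 in both.
n_hat = chapman_estimate(4000, 6000, 1500)
observed = 4000 + 6000 - 1500  # cases appearing in at least one source
print(round(n_hat), round(1 - observed / n_hat, 2))  # estimated total and fraction missed
```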
Radiation Resistance of the U(Al, Si)3 Alloy: Ion-Induced Disordering
Yaniv, Gili; Horak, Pavel; Vacik, Jiri; Mykytenko, Natalia; Rafailov, Gennady; Dahan, Itzchak; Fuks, David; Kiv, Arik
2018-01-01
During the exploitation of nuclear reactors, various U-Al based ternary intermetallides are formed in the fuel-cladding interaction layer. Structure and physical properties of these intermetallides determine the radiation resistance of cladding and, ultimately, the reliability and lifetime of the nuclear reactor. In the current research, the U(Al, Si)3 composition was studied as a potential constituent of an interaction layer. The phase content of the alloy of interest was ordered U(Al, Si)3, the structure of which was reported earlier, and pure Al (constituting less than 20 vol % of the alloy). This alloy was investigated prior to and after irradiation by Ar ions at 30 keV. The irradiation was performed on transmission electron microscopy (TEM, JEOL, Japan) samples, which were characterized before and after the irradiation process. Irradiation induced disorder accompanied by stress relief. Furthermore, it was found that there is a dose threshold for disordering of the crystalline matter in the irradiated region. Irradiation at doses equal to or higher than this threshold resulted in an almost solely disordered phase. Using the program “Stopping and Range of Ions in Matter” (SRIM), the parameters of penetration of Ar ions into the irradiated samples were estimated. Based on these estimations, the dose threshold for ion-induced disordering of the studied material was assessed. PMID:29393870
van der Put, Claudia E
2014-06-01
Estimating the risk for recidivism is important for many areas of the criminal justice system. In the present study, the Youth Actuarial Risk Assessment Tool (Y-ARAT) was developed for juvenile offenders based solely on police records, with the aim of estimating the risk of general recidivism among large groups of juvenile offenders by police officers without clinical expertise. On the basis of the Y-ARAT, juvenile offenders are classified into five risk groups based on (combinations of) 10 variables, including different types of incidents in which the juvenile was a suspect, total number of incidents in which the juvenile was a suspect, total number of other incidents, total number of incidents in which co-occupants at the youth's address were suspects, gender, and age at first incident. The Y-ARAT was developed on a sample of 2,501 juvenile offenders and validated on another sample of 2,499 juvenile offenders, showing moderate predictive accuracy (area under the receiver-operating-characteristic curve = .73), with little variation between the construction and validation samples. The predictive accuracy of the Y-ARAT was considered sufficient to justify its use as a screening instrument for the police. © The Author(s) 2013.
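The reported predictive accuracy is an area under the ROC curve; a minimal sketch of computing it on a hypothetical validation set is shown below (scikit-learn assumed; the scores and outcomes are synthetic, not Y-ARAT data).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical validation data: a recidivism indicator and an actuarial-style
# risk score (here just a noisy function of the outcome for illustration).
recidivated = rng.integers(0, 2, size=2499)
risk_score = recidivated * 0.8 + rng.normal(0, 1, size=2499)

# AUC: the probability that a randomly chosen recidivist scores higher
# than a randomly chosen non-recidivist.
print(round(roc_auc_score(recidivated, risk_score), 2))
```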
Win-win for wind and wildlife: a vision to facilitate sustainable development.
Kiesecker, Joseph M; Evans, Jeffrey S; Fargione, Joe; Doherty, Kevin; Foresman, Kerry R; Kunz, Thomas H; Naugle, Dave; Nibbelink, Nathan P; Niemuth, Neal D
2011-04-13
Wind energy offers the potential to reduce carbon emissions while increasing energy independence and bolstering economic development. However, wind energy has a larger land footprint per Gigawatt (GW) than most other forms of energy production, making appropriate siting and mitigation particularly important. Species that require large unfragmented habitats and those known to avoid vertical structures are particularly at risk from wind development. Developing energy on disturbed lands rather than placing new developments within large and intact habitats would reduce cumulative impacts to wildlife. The U.S. Department of Energy estimates that it will take 241 GW of terrestrial based wind development on approximately 5 million hectares to reach 20% electricity production for the U.S. by 2030. We estimate there are ∼7,700 GW of potential wind energy available across the U.S., with ∼3,500 GW on disturbed lands. In addition, a disturbance-focused development strategy would avert the development of ∼2.3 million hectares of undisturbed lands while generating the same amount of energy as development based solely on maximizing wind potential. Wind subsidies targeted at favoring low-impact developments and creating avoidance and mitigation requirements that raise the costs for projects impacting sensitive lands could improve public value for both wind energy and biodiversity conservation.
Radiation Resistance of the U(Al, Si)₃ Alloy: Ion-Induced Disordering.
Meshi, Louisa; Yaniv, Gili; Horak, Pavel; Vacik, Jiri; Mykytenko, Natalia; Rafailov, Gennady; Dahan, Itzchak; Fuks, David; Kiv, Arik
2018-02-02
During the exploitation of nuclear reactors, various U-Al based ternary intermetallides are formed in the fuel-cladding interaction layer. Structure and physical properties of these intermetallides determine the radiation resistance of cladding and, ultimately, the reliability and lifetime of the nuclear reactor. In the current research, the U(Al, Si)₃ composition was studied as a potential constituent of an interaction layer. The phase content of the alloy of interest was ordered U(Al, Si)₃, the structure of which was reported earlier, and pure Al (constituting less than 20 vol % of the alloy). This alloy was investigated prior to and after irradiation by Ar ions at 30 keV. The irradiation was performed on transmission electron microscopy (TEM, JEOL, Japan) samples, which were characterized before and after the irradiation process. Irradiation induced disorder accompanied by stress relief. Furthermore, it was found that there is a dose threshold for disordering of the crystalline matter in the irradiated region. Irradiation at doses equal to or higher than this threshold resulted in an almost solely disordered phase. Using the program "Stopping and Range of Ions in Matter" (SRIM), the parameters of penetration of Ar ions into the irradiated samples were estimated. Based on these estimations, the dose threshold for ion-induced disordering of the studied material was assessed.
Parameter Heterogeneity In Breast Cancer Cost Regressions – Evidence From Five European Countries
Banks, Helen; Campbell, Harry; Douglas, Anne; Fletcher, Eilidh; McCallum, Alison; Moger, Tron Anders; Peltola, Mikko; Sveréus, Sofia; Wild, Sarah; Williams, Linda J.; Forbes, John
2015-01-01
We investigate parameter heterogeneity in breast cancer 1‐year cumulative hospital costs across five European countries as part of the EuroHOPE project. The paper aims to explore whether conditional mean effects provide a suitable representation of the national variation in hospital costs. A cohort of patients with a primary diagnosis of invasive breast cancer (ICD‐9 codes 174 and ICD‐10 C50 codes) is derived using routinely collected individual breast cancer data from Finland, the metropolitan area of Turin (Italy), Norway, Scotland and Sweden. Conditional mean effects are estimated by ordinary least squares for each country, and quantile regressions are used to explore heterogeneity across the conditional quantile distribution. Point estimates based on conditional mean effects provide a good approximation of treatment response for some key demographic and diagnostic specific variables (e.g. age and ICD‐10 diagnosis) across the conditional quantile distribution. For many policy variables of interest, however, there is considerable evidence of parameter heterogeneity that is concealed if decisions are based solely on conditional mean results. The use of quantile regression methods reinforces the need to look beyond an average effect, given the greater recognition that breast cancer is a complex disease reflecting patient heterogeneity. © 2015 The Authors. Health Economics Published by John Wiley & Sons Ltd. PMID:26633866
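A minimal sketch of the contrast drawn here between conditional mean and conditional quantile effects, using statsmodels on synthetic cost data; the variable names and data are hypothetical, not EuroHOPE data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical cost data with a heterogeneous treatment response: the effect of
# `policy_var` grows in the upper tail, which a conditional mean would conceal.
n = 2000
age = rng.uniform(30, 85, n)
policy_var = rng.integers(0, 2, n)
cost = 5000 + 40 * age + policy_var * rng.gamma(2.0, 1500, n) + rng.normal(0, 800, n)
df = pd.DataFrame({"cost": cost, "age": age, "policy_var": policy_var})

ols_fit = smf.ols("cost ~ age + policy_var", df).fit()
print("OLS effect:", round(ols_fit.params["policy_var"], 1))
for q in (0.25, 0.5, 0.9):
    qr_fit = smf.quantreg("cost ~ age + policy_var", df).fit(q=q)
    print(f"q={q} effect:", round(qr_fit.params["policy_var"], 1))
```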
Hardiman, Nigel; Dietz, Kristina Charlotte; Bride, Ian; Passfield, Louis
2017-01-01
Land managers of natural areas are under pressure to balance demands for increased recreation access with protection of the natural resource. Unintended dispersal of seeds by visitors to natural areas has high potential for weedy plant invasions, with initial seed attachment an important step in the dispersal process. Although walking and mountain biking are popular nature-based recreation activities, there are few studies quantifying propensity for seed attachment and transport rate on boot soles and none for bike tires. Attachment and transport rate can potentially be affected by a wide range of factors for which field testing can be time-consuming and expensive. We pilot tested a sampling methodology for measuring seed attachment and transport rate in a soil matrix carried on boot soles and bike tires traversing a known quantity and density of a seed analog (beads) over different distances and soil conditions. We found that the percentage attachment rate on boot soles was much lower overall than previously reported, but that boot soles had a higher propensity for seed attachment than bike tires in almost all conditions. We believe our methodology offers a cost-effective option for researchers seeking to manipulate and test effects of different influencing factors on these two dispersal vectors.
NASA Astrophysics Data System (ADS)
Hardiman, Nigel; Dietz, Kristina Charlotte; Bride, Ian; Passfield, Louis
2017-01-01
Land managers of natural areas are under pressure to balance demands for increased recreation access with protection of the natural resource. Unintended dispersal of seeds by visitors to natural areas has high potential for weedy plant invasions, with initial seed attachment an important step in the dispersal process. Although walking and mountain biking are popular nature-based recreation activities, there are few studies quantifying propensity for seed attachment and transport rate on boot soles and none for bike tires. Attachment and transport rate can potentially be affected by a wide range of factors for which field testing can be time-consuming and expensive. We pilot tested a sampling methodology for measuring seed attachment and transport rate in a soil matrix carried on boot soles and bike tires traversing a known quantity and density of a seed analog (beads) over different distances and soil conditions. We found that the percentage attachment rate on boot soles was much lower overall than previously reported, but that boot soles had a higher propensity for seed attachment than bike tires in almost all conditions. We believe our methodology offers a cost-effective option for researchers seeking to manipulate and test effects of different influencing factors on these two dispersal vectors.
Wilson, Bethany J; Nicholas, Frank W; James, John W; Wade, Claire M; Tammen, Imke; Raadsma, Herman W; Castle, Kao; Thomson, Peter C
2012-01-01
Canine Hip Dysplasia (CHD) is a common, painful and debilitating orthopaedic disorder of dogs with a partly genetic, multifactorial aetiology. Worldwide, potential breeding dogs are evaluated for CHD using radiographically based screening schemes such as the nine ordinally-scored British Veterinary Association Hip Traits (BVAHTs). The effectiveness of selective breeding based on screening results requires that a significant proportion of the phenotypic variation is caused by the presence of favourable alleles segregating in the population. This proportion, the heritability, was measured in a cohort of 13,124 Australian German Shepherd Dogs born between 1976 and 2005, displaying phenotypic variation for BVAHTs, using ordinal, linear and binary mixed models fitted by a Restricted Maximum Likelihood method. Heritability estimates for the nine BVAHTs ranged from 0.14-0.24 (ordinal models), 0.14-0.25 (linear models) and 0.12-0.40 (binary models). Heritability for the summed BVAHT phenotype was 0.30 ± 0.02. The presence of heritable variation demonstrates that selection based on BVAHTs has the potential to improve BVAHT scores in the population. Assuming a genetic correlation between BVAHT scores and CHD-related pain and dysfunction, the welfare of Australian German Shepherds can be improved by continuing to consider BVAHT scores in the selection of breeding dogs; however, because heritability values are only moderate in magnitude, the accuracy and effectiveness of selection could be improved by the use of Estimated Breeding Values in preference to solely phenotype-based selection of breeding animals.
Anderson, R W G; Hutchinson, T P
2009-03-01
The motivation for this paper is the high rate of inappropriate child restraint selection in cars that is apparent in published surveys of child restraint use, and the question of how public health messages promoting child restraints might respond. Advice has increasingly been given solely according to the child's weight, while many parents do not know the weight of their children. A common objection to promoting restraint use based on the age of the child is the imprecision of such advice, given the variation in the size of children, but the magnitude of the misclassification such advice would produce has never been estimated. This paper presents a method for estimating the misclassification of children by weight when advice is posed in terms of age, and applies it to detailed child growth data published by the Centers for Disease Control and Prevention. In Australia, guidelines instructing all parents to promote their children from an infant restraint to a forward-facing child seat at 6 months, and then to a belt-positioning booster at 4 years, would mean that 5% of all children under the age of 6 years would be using a restraint not suited to their weight. Coordination of age-based advice and the weight ranges chosen for the Australian Standard on child restraints could reduce this level of misclassification to less than 1%. The general method developed may also be applied to other aspects of restraint design that are more directly relevant to good restraint fit.
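A rough sketch of the kind of calculation described here: assuming weight at a given age is approximately lognormal with an illustrative median and coefficient of variation (these numbers are placeholders, not CDC growth-chart values), the fraction of children falling outside a restraint's rated weight range can be computed as follows.

```python
from math import log, sqrt
from scipy.stats import norm

def fraction_outside(median_kg: float, cv: float, lo_kg: float, hi_kg: float) -> float:
    """Fraction of children whose weight falls outside [lo_kg, hi_kg], assuming
    weight at a given age is lognormal with the stated median and coefficient of
    variation (illustrative parameters only)."""
    sigma = sqrt(log(1.0 + cv ** 2))
    z_lo = (log(lo_kg) - log(median_kg)) / sigma
    z_hi = (log(hi_kg) - log(median_kg)) / sigma
    return norm.cdf(z_lo) + (1.0 - norm.cdf(z_hi))

# E.g., promoting to a forward-facing seat rated 8-18 kg at 6 months of age,
# with an assumed median weight of 7.9 kg and a 12% coefficient of variation:
print(round(fraction_outside(7.9, 0.12, 8.0, 18.0), 3))
```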
Invited review: Genetics and claw health: Opportunities to enhance claw health by genetic selection.
Heringstad, B; Egger-Danner, C; Charfeddine, N; Pryce, J E; Stock, K F; Kofler, J; Sogstad, A M; Holzhauer, M; Fiedler, A; Müller, K; Nielsen, P; Thomas, G; Gengler, N; de Jong, G; Ødegård, C; Malchiodi, F; Miglior, F; Alsaaod, M; Cole, J B
2018-06-01
Routine recording of claw health status at claw trimming of dairy cattle has been established in several countries, providing valuable data for genetic evaluation. In this review, we examine issues related to genetic evaluation of claw health; discuss data sources, trait definitions, and data validation procedures; and present a review of genetic parameters, possible indicator traits, and status of genetic and genomic evaluations for claw disorders. Different sources of data and traits can be used to describe claw health. Severe cases of claw disorders can be identified by veterinary diagnoses. Data from lameness and locomotion scoring, activity information from sensors, and feet and leg conformation traits are used as auxiliary traits. The most reliable and comprehensive information is data from regular hoof trimming. In genetic evaluation, claw disorders are usually defined as binary traits, based on whether or not the claw disorder was present (recorded) at least once during a defined time period. The traits can be specific disorders, composite traits, or overall claw health. Data validation and editing criteria are needed to ensure reliable data at the trimmer, herd, animal, and record levels. Different strategies have been chosen, reflecting differences in herd sizes, data structures, management practices, and recording systems among countries. Heritabilities of the most commonly analyzed claw disorders based on data from routine claw trimming were generally low, with ranges of linear model estimates from 0.01 to 0.14, and threshold model estimates from 0.06 to 0.39. Estimated genetic correlations among claw disorders varied from -0.40 to 0.98. The strongest genetic correlations were found among sole hemorrhage (SH), sole ulcer (SU), and white line disease (WL), and between digital/interdigital dermatitis (DD/ID) and heel horn erosion (HHE). Genetic correlations between DD/ID and HHE on the one hand and SH, SU, or WL on the other hand were, in most cases, low. Although some of the studies were based on relatively few records and the estimated genetic parameters had large standard errors, there was, with some exceptions, consistency among studies. Several studies have evaluated the potential of different data sources for use in breeding. The use of hoof trimming data is recommended for maximization of genetic gain, although auxiliary traits, such as locomotion score and some conformation traits, may be valuable for increasing the reliability of genetic evaluations. Routine genetic evaluation of direct claw health has been implemented in the Netherlands (2010); Denmark, Finland, and Sweden (joint Nordic evaluation; 2011); and Norway (2014), and other countries plan to implement evaluations in the near future. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
NASA Astrophysics Data System (ADS)
Farhadi, Leila; Entekhabi, Dara; Salvucci, Guido
2016-04-01
In this study, we develop and apply a mapping estimation capability for key unknown parameters that link the surface water and energy balance equations. The method is applied to the Gourma region in West Africa. The accuracy of the estimation method at point scale was previously examined using flux tower data. In this study, the capability is scaled to be applicable with remotely sensed data products and hence allow mapping. Parameters of the system are estimated through a process that links atmospheric forcing (precipitation and incident radiation), surface states, and unknown parameters. Based on conditional averaging of land surface temperature and moisture states, respectively, a single objective function is posed that measures moisture and temperature-dependent errors solely in terms of observed forcings and surface states. This objective function is minimized with respect to parameters to identify evapotranspiration and drainage models and estimate water and energy balance flux components. The uncertainty of the estimated parameters (and associated statistical confidence limits) is obtained through the inverse of Hessian of the objective function, which is an approximation of the covariance matrix. This calibration-free method is applied to the mesoscale region of Gourma in West Africa using multiplatform remote sensing data. The retrievals are verified against tower-flux field site data and physiographic characteristics of the region. The focus is to find the functional form of the evaporative fraction dependence on soil moisture, a key closure function for surface and subsurface heat and moisture dynamics, using remote sensing data.
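A minimal sketch of the generic estimation pattern described here: minimize a single objective over the unknown parameters and approximate the parameter covariance from the inverse Hessian. The toy objective and data below are placeholders, not the paper's surface water and energy balance formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Toy stand-in for the paper's objective: sum of squared misfits between an
# "observed" quantity and a two-parameter model, minimized over the parameters.
x = rng.uniform(0.05, 0.45, 200)                      # e.g., soil moisture states
true_params = np.array([0.9, 12.0])
obs = true_params[0] * (1 - np.exp(-true_params[1] * x)) + rng.normal(0, 0.02, x.size)

def objective(p):
    model = p[0] * (1 - np.exp(-p[1] * x))
    return np.sum((obs - model) ** 2)

res = minimize(objective, x0=np.array([0.5, 5.0]), method="BFGS")

# Approximate parameter covariance from the inverse Hessian, scaled by the residual
# variance (a common least-squares approximation, not the paper's exact recipe).
dof = x.size - res.x.size
cov = res.hess_inv * 2.0 * res.fun / dof
print(res.x, np.sqrt(np.diag(cov)))  # estimates and approximate standard errors
```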
Waller, Kylie Anne; Dickinson, Jan E; Hart, Roger J
2017-08-01
Increasingly couples are travelling overseas to access assisted reproductive technology, known as cross border reproductive care, although the incidence, pregnancy outcomes and healthcare costs are unknown. To determine obstetric and neonatal outcomes for multiple pregnancies conceived through fertility treatment overseas, and estimate cost of these pregnancies to the health system. Retrospective study of women receiving care for a multiple gestation between July 2013 and June 2015 at Western Australia's sole tertiary obstetric hospital, where conception was by overseas fertility treatment. Obstetric and neonatal outcomes were recorded and cost estimates calculated. Of 11 710 births, 422 were multiple pregnancies. Thirty-seven pregnancies were conceived with fertility treatment, with 11 (29.7%) conceived overseas. Median antenatal clinic attendances, ultrasound examinations, and fetal assessments for the overseas fertility cases were six, 10, and nine, respectively. The gestational age at delivery ranged from 30 to 38 weeks (median 34 + 1). Median neonatal admission duration was 18 days (range 0-47). Cost for obstetric care was estimated between $170 000 and $216 000, and cost of neonatal care was estimated as $810 000, giving a combined total cost of between $980 000 and $1 026 000. At the sole tertiary obstetric centre in WA, approximately one-third of all multiple pregnancies conceived with fertility treatment resulted from treatment overseas. The Australian healthcare cost for these 11 women and their infants exceeded $1 000 000. This study suggests that overseas fertility treatment has a significant health-related cost to the mother and infant, and the local healthcare system. © 2017 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
Naranjo, Ramon C.; Niswonger, Richard G.; Stone, Mark; Davis, Clinton; McKay, Alan
2012-01-01
We describe an approach for calibrating a two-dimensional (2-D) flow model of hyporheic exchange using observations of temperature and pressure to estimate hydraulic and thermal properties. A longitudinal 2-D heat and flow model was constructed for a riffle-pool sequence to simulate flow paths and flux rates for variable discharge conditions. A uniform random sampling approach was used to examine the solution space and identify optimal values at local and regional scales. We used a regional sensitivity analysis to examine the effects of parameter correlation and nonuniqueness commonly encountered in multidimensional modeling. The results from this study demonstrate the ability to estimate hydraulic and thermal parameters using measurements of temperature and pressure to simulate exchange and flow paths. Examination of the local parameter space provides the potential for refinement of zones that are used to represent sediment heterogeneity within the model. The results indicate vertical hydraulic conductivity was not identifiable solely using pressure observations; however, a distinct minimum was identified using temperature observations. The measured temperature and pressure and estimated vertical hydraulic conductivity values indicate the presence of a discontinuous low-permeability deposit that limits the vertical penetration of seepage beneath the riffle, whereas there is a much greater exchange where the low-permeability deposit is absent. Using both temperature and pressure to constrain the parameter estimation process provides the lowest overall root-mean-square error as compared to using solely temperature or pressure observations. This study demonstrates the benefits of combining continuous temperature and pressure for simulating hyporheic exchange and flow in a riffle-pool sequence. Copyright 2012 by the American Geophysical Union.
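A minimal sketch of uniform random sampling calibration against an RMSE objective, with a placeholder function standing in for the 2-D heat-and-flow model; the parameter names, bounds, and "observations" are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(params, forcing):
    """Placeholder for the 2-D heat-and-flow model; returns simulated temperature.
    This is an arbitrary smooth function so the sketch runs end to end."""
    k_v, porosity = params
    return 12.0 + 4.0 * np.tanh(k_v * forcing) * porosity

forcing = np.linspace(0, 3, 50)
observed_T = 12.0 + 4.0 * np.tanh(0.8 * forcing) * 0.35  # synthetic "observations"

# Uniform random search over parameter bounds, keeping the lowest-RMSE sample.
bounds = np.array([[0.01, 2.0],    # vertical hydraulic conductivity (arbitrary units)
                   [0.10, 0.5]])   # porosity
samples = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5000, 2))
rmse = np.array([np.sqrt(np.mean((simulate(p, forcing) - observed_T) ** 2))
                 for p in samples])

best = samples[rmse.argmin()]
print("best parameters:", best, "RMSE:", rmse.min())
```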
Impact of TRMM and SSM/I-derived Precipitation and Moisture Data on the GEOS Global Analysis
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Zhang, Sara Q.; daSilva, Arlindo M.; Olson, William S.
1999-01-01
Current global analyses contain significant errors in primary hydrological fields such as precipitation, evaporation, and related cloud and moisture in the tropics. The Data Assimilation Office at NASA's Goddard Space Flight Center has been exploring the use of space-based rainfall and total precipitable water (TPW) estimates to constrain these hydrological parameters in the Goddard Earth Observing System (GEOS) global data assimilation system. We present results showing that assimilating the 6-hour averaged rain rates and TPW estimates from the Tropical Rainfall Measuring Mission (TRMM) and Special Sensor Microwave/Imager (SSM/I) instruments improves not only the precipitation and moisture estimates but also reduces state-dependent systematic errors in key climate parameters directly linked to convection, such as the outgoing longwave radiation, clouds, and the large-scale circulation. The improved analysis also improves short-range forecasts beyond 1 day, but the impact is relatively modest compared with improvements in the time-averaged analysis. The study shows that, in the presence of biases and other errors of the forecast model, improving the short-range forecast is not necessarily a prerequisite for improving the assimilation as a climate data set. The full impact of a given type of observation on the assimilated data set should not be measured solely in terms of forecast skills.
NASA Astrophysics Data System (ADS)
Ombadi, Mohammed; Nguyen, Phu; Sorooshian, Soroosh
2017-12-01
Intensity Duration Frequency (IDF) curves are essential for the resilient design of infrastructure. Since their earlier development, IDF relationships have been derived using precipitation records from rain gauge stations. However, with the recent advancement in satellite observation of precipitation, which provides near-global coverage and high spatiotemporal resolution, it is worth investigating whether the relatively short record of satellite rainfall can be used to generate robust IDF relationships. Such satellite-based IDF curves can address the paucity of this information in developing countries. The few studies that have used satellite precipitation data in IDF development have mainly focused on merging satellite and gauge precipitation. In this study, however, IDF curves have been derived solely from satellite observations using PERSIANN-CDR (Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks-Climate Data Record). The unique PERSIANN-CDR attributes of high spatial resolution (0.25°×0.25°), daily temporal resolution, and a record dating back to 1983 allow for the investigation at fine resolution. The results are compared over most of the contiguous United States against NOAA Atlas 14. The impact of using different methods of sampling, distribution estimators, and regionalization on the resulting relationships is investigated. The main challenges in estimating robust and accurate IDF curves from satellite observations are also highlighted.
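As a simple illustration of how IDF-type return levels can be estimated from a short record, the sketch below fits a Gumbel distribution to synthetic annual maxima; the study's actual choices of sampling, distribution estimators, and regionalization differ, and the data here are not PERSIANN-CDR values.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(4)

# Hypothetical series of annual maximum daily rainfall (mm/day), e.g. as could be
# extracted from roughly 30 years of daily fields at one grid cell.
annual_max = rng.gumbel(45.0, 12.0, 33)

# Fit a Gumbel distribution to the annual maxima (one common choice; GEV and
# regional approaches are alternatives discussed in IDF work).
loc, scale = gumbel_r.fit(annual_max)

# Intensity for selected return periods T: the (1 - 1/T) quantile.
for T in (2, 10, 25, 100):
    print(T, "yr:", round(gumbel_r.ppf(1 - 1 / T, loc, scale), 1), "mm/day")
```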
Díaz García F; Arenas; Martínez Catalán JR; González del Tánago J; Dunning
1999-09-01
Analysis of the Careón Unit in the Ordenes Complex (northwest Iberian Massif) has supplied relevant data concerning the existence of a Paleozoic oceanic lithosphere, probably related to the Rheic realm, and the early subduction-related events that were obscured along much of the Variscan belt by subsequent collision tectonics. The ophiolite consists of serpentinized harzburgite and dunite in the lower section and a crustal section made up of coarse-grained and pegmatitic gabbros. An Early Devonian zircon age (395 ± 2 Ma, U-Pb) was obtained in a leucocratic gabbro. The whole section was intruded by numerous diabasic gabbro dikes. Convergence processes took place shortly afterward, giving rise to a mantle-rooted synthetic thrust system, with some coeval igneous activity. Garnet amphibolite, developed in metamorphic soles, was found discontinuously attached to the thrust fault. The soles graded downward to epidote-amphibolite facies metabasite and were partially retrogressed to greenschist facies conditions. Thermobarometric estimations carried out at a metamorphic sole (T ≈ 650 °C; P ≈ 11.5 kbar) suggested that imbrications developed in a subduction setting, and regional geology places this subduction in the context of an early Variscan accretionary wedge. Subduction and imbrication of oceanic lithosphere was followed by underthrusting of the Gondwana continental margin.
Cabrera, Jaime A; Molina, Eduardo; González, Tania; Armenteras, Dolors
2016-12-01
Telemetry based on Global Positioning Systems (GPS) makes it possible to gather large quantities of information at a very fine scale and to work with species that were impossible to study in the past. When working with GPS telemetry, storing data on board can be more desirable than relying solely on satellite-transmitted data, because of the increase in the number of locations available for analysis. Nonetheless, the uncertainty in retrieving the collar unit makes satellite-transmitted technologies worth taking into account. Therefore, differences between store-on-board (SoB) and satellite-transmitted (IT) data sets need to be considered. Differences between SoB and IT data collected from two lowland tapirs (Tapirus terrestris) were explored by calculating home range areas with three different methods: the Minimum Convex Polygon (MCP), the Fixed Kernel Density Estimator (KDE) and Brownian Bridges (BB). Results showed that SoB and IT data sets for the same individual were similar, with fix success ranging from 63% to 85%, respectively, and horizontal errors of 16 m to 17 m. Depending on the total number of locations available for each individual, the estimated home ranges differed by between 2.7% and 79.3% for the 50% probability contour and between 9.9% and 61.8% for the 95% probability contour. These differences imply variations in the spatial coincidence of the estimated home ranges. We conclude that IT data are not a good option for estimating home range areas if the collar settings have not been designed specifically for this use. Nonetheless, geographical representations based on IT data can be of great help in identifying areas of use, in locating the collar for retrieval at the end of the field season, and as an approximate backup when collars disappear.
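For the MCP estimator mentioned above, a minimal sketch using scipy's convex hull on synthetic fixes (not the tapir data) shows how home range areas from a dense store-on-board set and a sparser transmitted subset can be compared.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(5)

# Hypothetical GPS fixes (projected coordinates in metres) for one animal,
# standing in for the store-on-board and satellite-transmitted data sets.
sob_fixes = rng.normal(0, 800, size=(1200, 2))
it_fixes = sob_fixes[rng.choice(1200, size=300, replace=False)]  # sparser subset

def mcp_area_ha(points: np.ndarray) -> float:
    """100% Minimum Convex Polygon area in hectares.
    For 2-D input, ConvexHull.volume is the polygon area."""
    return ConvexHull(points).volume / 10_000.0

print("SoB MCP:", round(mcp_area_ha(sob_fixes), 1), "ha")
print("IT  MCP:", round(mcp_area_ha(it_fixes), 1), "ha")
```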
NASA Astrophysics Data System (ADS)
Lobuglio, Joseph N.; Characklis, Gregory W.; Serre, Marc L.
2007-03-01
Sparse monitoring data and error inherent in water quality models make the identification of waters not meeting regulatory standards uncertain. Additional monitoring can be implemented to reduce this uncertainty, but it is often expensive. These costs are currently a major concern, since developing total maximum daily loads, as mandated by the Clean Water Act, will require assessing tens of thousands of water bodies across the United States. This work uses the Bayesian maximum entropy (BME) method of modern geostatistics to integrate water quality monitoring data together with model predictions to provide improved estimates of water quality in a cost-effective manner. This information includes estimates of uncertainty and can be used to aid probabilistic-based decisions concerning the status of a water (i.e., impaired or not impaired) and the level of monitoring needed to characterize the water for regulatory purposes. This approach is applied to the Catawba River reservoir system in western North Carolina as a means of estimating seasonal chlorophyll a concentration. Mean concentration and confidence intervals for chlorophyll a are estimated for 66 reservoir segments over an 11-year period (726 values) based on 219 measured seasonal averages and 54 model predictions. Although the model predictions had a high degree of uncertainty, integration of modeling results via BME methods reduced the uncertainty associated with chlorophyll estimates compared with estimates made solely with information from monitoring efforts. Probabilistic predictions of future chlorophyll levels on one reservoir are used to illustrate the cost savings that can be achieved by less extensive and rigorous monitoring methods within the BME framework. While BME methods have been applied in several environmental contexts, employing these methods as a means of integrating monitoring and modeling results, as well as application of this approach to the assessment of surface water monitoring networks, represent unexplored areas of research.
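BME integrates monitoring and model information in a full space-time framework; the sketch below shows only the simplest special case, an inverse-variance combination of one observation-based and one model-based estimate, with hypothetical chlorophyll numbers. It illustrates why combining the two sources reduces uncertainty, not the BME method itself.

```python
def precision_weighted(mean_obs, var_obs, mean_model, var_model):
    """Inverse-variance combination of a monitoring estimate and a model prediction.
    A much-simplified stand-in for BME data fusion (no space-time covariance)."""
    w_obs, w_model = 1.0 / var_obs, 1.0 / var_model
    mean = (w_obs * mean_obs + w_model * mean_model) / (w_obs + w_model)
    var = 1.0 / (w_obs + w_model)
    return mean, var

# Hypothetical seasonal chlorophyll-a (ug/L): sparse monitoring gives 12 +/- 4,
# the water-quality model gives 18 +/- 8 (standard deviations).
mean, var = precision_weighted(12.0, 4.0 ** 2, 18.0, 8.0 ** 2)
print(round(mean, 1), round(var ** 0.5, 1))  # combined estimate and its (smaller) sd
```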
THE EFFECTS OF MAINTENANCE ACTIONS ON THE PFDavg OF SPRING OPERATED PRESSURE RELIEF VALVES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, S.; Gross, R.
2014-04-01
The safety integrity level (SIL) of equipment used in safety instrumented functions is determined by the average probability of failure on demand (PFDavg) computed at the time of periodic inspection and maintenance, i.e., the time of proof testing. The computation of PFDavg is generally based solely on predictions or estimates of the assumed constant failure rate of the equipment. However, PFDavg is also affected by maintenance actions (or lack thereof) taken by the end user. This paper shows how maintenance actions can affect the PFDavg of spring operated pressure relief valves (SOPRV) and how these maintenance actions may be accounted for in the computation of the PFDavg metric. The method provides a means for quantifying the effects of changes in maintenance practices and shows how these changes impact plant safety.
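For context, the constant-failure-rate baseline referred to here is the textbook single-channel approximation PFDavg ≈ λ_DU · TI / 2. The sketch below shows how stretching the proof-test interval changes it, with an assumed failure rate; the paper's maintenance-dependent corrections go beyond this simple formula.

```python
def pfd_avg(lambda_du_per_hr: float, proof_test_interval_hr: float) -> float:
    """Simplified 1oo1 average probability of failure on demand: lambda_DU * TI / 2.
    Textbook constant-failure-rate approximation only; the paper's point is that
    real maintenance practice modifies this value."""
    return lambda_du_per_hr * proof_test_interval_hr / 2.0

lam = 1.0e-6          # assumed dangerous undetected failure rate, per hour
one_year = 8760.0
for years in (1, 2, 5):  # effect of stretching the proof-test interval
    print(years, "yr interval -> PFDavg =", f"{pfd_avg(lam, years * one_year):.2e}")
```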
The Effects of Maintenance Actions on the PFDavg of Spring Operated Pressure Relief Valves
Harris, S.; Gross, R.; Goble, W; ...
2015-12-01
The safety integrity level (SIL) of equipment used in safety instrumented functions is determined by the average probability of failure on demand (PFDavg) computed at the time of periodic inspection and maintenance, i.e., the time of proof testing. The computation of PFDavg is generally based solely on predictions or estimates of the assumed constant failure rate of the equipment. However, PFDavg is also affected by maintenance actions (or lack thereof) taken by the end user. This paper shows how maintenance actions can affect the PFDavg of spring operated pressure relief valves (SOPRV) and how these maintenance actions may be accounted for in the computation of the PFDavg metric. The method provides a means for quantifying the effects of changes in maintenance practices and shows how these changes impact plant safety.
Global daily reference evapotranspiration modeling and evaluation
Senay, G.B.; Verdin, J.P.; Lietzow, R.; Melesse, Assefa M.
2008-01-01
Accurate and reliable evapotranspiration (ET) datasets are crucial in regional water and energy balance studies. Due to the complex instrumentation requirements, actual ET values are generally estimated from reference ET values by adjustment factors using coefficients for water stress and vegetation conditions, commonly referred to as crop coefficients. Until recently, the modeling of reference ET has been solely based on important weather variables collected from weather stations that are generally located in selected agro-climatic locations. Since 2001, the National Oceanic and Atmospheric Administration’s Global Data Assimilation System (GDAS) has been producing six-hourly climate parameter datasets that are used to calculate daily reference ET for the whole globe at 1-degree spatial resolution. The U.S. Geological Survey Center for Earth Resources Observation and Science has been producing daily reference ET (ETo) since 2001, and it has been used in a variety of operational hydrological models for drought and streamflow monitoring all over the world. With the increasing availability of local station-based reference ET estimates, we evaluated the GDAS-based reference ET estimates using data from the California Irrigation Management Information System (CIMIS). Daily CIMIS reference ET estimates from 85 stations were compared with GDAS-based reference ET at different spatial and temporal scales using five-year daily data from 2002 through 2006. Despite the large difference in spatial scale (point vs. ∼100 km grid cell) between the two datasets, the correlations between station-based ET and GDAS-ET were very high, exceeding 0.97 on a daily basis and rising to more than 0.99 at time scales of more than 10 days. Both the temporal and spatial correspondences in trend/pattern and magnitudes between the two datasets were satisfactory, suggesting the reliability of using GDAS parameter-based reference ET for regional water and energy balance studies in many parts of the world. While the study revealed the potential of GDAS ETo for large-scale hydrological applications, site-specific use of GDAS ETo in complex hydro-climatic regions such as coastal areas and rugged terrain may require the application of bias correction and/or disaggregation of the GDAS ETo using downscaling techniques.
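A minimal sketch of the scale-dependent comparison described above: correlating two synthetic daily ET series after averaging over progressively longer windows. The data are illustrative, not CIMIS or GDAS values; the point is only that aggregation raises the correlation by averaging out day-to-day noise.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical daily reference ET series (mm/day): a station record and a coarse
# gridded estimate that tracks the same seasonal signal with extra noise.
days = np.arange(5 * 365)
station = 3.0 + 2.5 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 0.6, days.size)
gridded = station + rng.normal(0, 0.8, days.size)

def corr_at_scale(a, b, window):
    """Pearson correlation after averaging both series over non-overlapping windows."""
    n = (a.size // window) * window
    a_m = a[:n].reshape(-1, window).mean(axis=1)
    b_m = b[:n].reshape(-1, window).mean(axis=1)
    return np.corrcoef(a_m, b_m)[0, 1]

for w in (1, 10, 30):
    print(f"{w:>2}-day means: r = {corr_at_scale(station, gridded, w):.3f}")
```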
Lin, Yuan-Pin; Yang, Yi-Hsuan; Jung, Tzyy-Ping
2014-01-01
Electroencephalography (EEG)-based emotion classification during music listening has gained increasing attention due to its promise of potential applications such as musical affective brain-computer interface (ABCI), neuromarketing, music therapy, and implicit multimedia tagging and triggering. However, music is an ecologically valid and complex stimulus that conveys certain emotions to listeners through compositions of musical elements. Using EEG signals alone to distinguish emotions remains challenging. This study aimed to assess the applicability of a multimodal approach by leveraging the EEG dynamics and acoustic characteristics of musical contents for the classification of emotional valence and arousal. To this end, this study adopted machine-learning methods to systematically elucidate the roles of the EEG and music modalities in the emotion modeling. The empirical results suggested that when whole-head EEG signals were available, the inclusion of musical contents did not improve the classification performance. The obtained performance of 74–76% using solely the EEG modality was statistically comparable to that using the multimodal approach. However, if EEG dynamics were only available from a small set of electrodes (likely the case in real-life applications), the music modality would play a complementary role and augment the EEG results from around 61–67% in valence classification and from around 58–67% in arousal classification. The musical timbre appeared to replace less-discriminative EEG features and led to improvements in both valence and arousal classification, whereas musical loudness contributed specifically to the arousal classification. The present study not only provided principles for constructing an EEG-based multimodal approach, but also revealed fundamental insights into the interplay of brain activity and musical contents in emotion modeling.
Lin, Yuan-Pin; Yang, Yi-Hsuan; Jung, Tzyy-Ping
2014-01-01
Electroencephalography (EEG)-based emotion classification during music listening has gained increasing attention due to its promise of potential applications such as musical affective brain-computer interface (ABCI), neuromarketing, music therapy, and implicit multimedia tagging and triggering. However, music is an ecologically valid and complex stimulus that conveys certain emotions to listeners through compositions of musical elements. Using EEG signals alone to distinguish emotions remains challenging. This study aimed to assess the applicability of a multimodal approach by leveraging the EEG dynamics and acoustic characteristics of musical contents for the classification of emotional valence and arousal. To this end, this study adopted machine-learning methods to systematically elucidate the roles of the EEG and music modalities in the emotion modeling. The empirical results suggested that when whole-head EEG signals were available, the inclusion of musical contents did not improve the classification performance. The obtained performance of 74–76% using solely the EEG modality was statistically comparable to that using the multimodal approach. However, if EEG dynamics were only available from a small set of electrodes (likely the case in real-life applications), the music modality would play a complementary role and augment the EEG results from around 61–67% in valence classification and from around 58–67% in arousal classification. The musical timbre appeared to replace less-discriminative EEG features and led to improvements in both valence and arousal classification, whereas musical loudness contributed specifically to the arousal classification. The present study not only provided principles for constructing an EEG-based multimodal approach, but also revealed fundamental insights into the interplay of brain activity and musical contents in emotion modeling. PMID:24822035
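The multimodal approach described above is essentially feature-level fusion followed by a classifier. The sketch below shows the general idea on synthetic data: concatenating EEG-derived features with acoustic features and comparing cross-validated accuracy against EEG-only features. It is not the study's pipeline; the feature dimensions, classifier, and data are all assumptions for illustration.

```python
# Schematic sketch (synthetic data, not the study's pipeline): feature-level
# fusion of EEG band-power features with acoustic features for binary
# valence classification, compared against EEG-only features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials = 200
y = rng.integers(0, 2, n_trials)                              # valence labels
eeg = rng.normal(0, 1, (n_trials, 14)) + y[:, None] * 0.3     # e.g., few-channel band power
audio = rng.normal(0, 1, (n_trials, 10)) + y[:, None] * 0.4   # e.g., timbre/loudness features

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
acc_eeg = cross_val_score(clf, eeg, y, cv=5).mean()
acc_fused = cross_val_score(clf, np.hstack([eeg, audio]), y, cv=5).mean()
print(f"EEG only: {acc_eeg:.2f}  EEG+music: {acc_fused:.2f}")
```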
Evaluation of ground motion scaling methods for analysis of structural systems
O'Donnell, A. P.; Beltsar, O.A.; Kurama, Y.C.; Kalkan, E.; Taflanidis, A.A.
2011-01-01
Ground motion selection and scaling comprises undoubtedly the most important component of any seismic risk assessment study that involves time-history analysis. Ironically, this is also the single parameter with the least guidance provided in current building codes, resulting in the use of mostly subjective choices in design. The relevant research to date has been primarily on single-degree-of-freedom systems, with only a few studies using multi-degree-of-freedom systems. Furthermore, the previous research is based solely on numerical simulations with no experimental data available for the validation of the results. By contrast, the research effort described in this paper focuses on an experimental evaluation of selected ground motion scaling methods based on small-scale shake-table experiments of re-configurable linear-elastic and nonlinear multi-story building frame structure models. Ultimately, the experimental results will lead to the development of guidelines and procedures to achieve reliable demand estimates from nonlinear response history analysis in seismic design. In this paper, an overview of this research effort is discussed and preliminary results based on linear-elastic dynamic response are presented. © ASCE 2011.
Economic Assessment of Supercritical CO2 Extraction of Waxes as Part of a Maize Stover Biorefinery.
Attard, Thomas M; McElroy, Con Robert; Hunt, Andrew J
2015-07-31
To date, limited work has focused on assessing the economic viability of scCO2 extraction to obtain waxes as part of a biorefinery. This work estimates the economic costs for wax extraction from maize stover. The cost of manufacture (COM) for maize stover wax extraction was found to be €88.89 per kg of wax, with the fixed capital investment (FCI) and utility costs (CUT) contributing significantly to the COM. However, this value is based solely on scCO2 extraction of waxes and does not take into account the downstream processing of the biomass following extraction. The cost of extracting wax from maize stover can be reduced by utilizing pelletized leaves and combusting the residual biomass to generate electricity. This would lead to an overall cost of €10.87 per kg of wax (based on 27% combustion efficiency for electricity generation) and €4.56 per kg of wax (based on 43% combustion efficiency for electricity generation). A sensitivity analysis study showed that utility costs (cost of electricity) had the greatest effect on the COM.
Economic Assessment of Supercritical CO2 Extraction of Waxes as Part of a Maize Stover Biorefinery
Attard, Thomas M.; McElroy, Con Robert; Hunt, Andrew J.
2015-01-01
To date limited work has focused on assessing the economic viability of scCO2 extraction to obtain waxes as part of a biorefinery. This work estimates the economic costs for wax extraction from maize stover. The cost of manufacture (COM) for maize stover wax extraction was found to be €88.89 per kg of wax, with the fixed capital investment (FCI) and utility costs (CUT) contributing significantly to the COM. However, this value is based solely on scCO2 extraction of waxes and does not take into account the downstream processing of the biomass following extraction. The cost of extracting wax from maize stover can be reduced by utilizing pelletized leaves and combusting the residual biomass to generate electricity. This would lead to an overall cost of €10.87 per kg of wax (based on 27% combustion efficiency for electricity generation) and €4.56 per kg of wax (based on 43% combustion efficiency for electricity generation). A sensitivity analysis study showed that utility costs (cost of electricity) had the greatest effect on the COM. PMID:26263976
Remote Diagnosis of Nitrogen Status in Winter Oilseed Rape
NASA Astrophysics Data System (ADS)
Liu, S.
2016-12-01
Winter oilseed rape is one of the most important oilseed crops in the world. Compared with cereal crops, it requires a high amount of nitrogen (N) supply, but it is also characterized by low N use efficiency. The N nutrition index (NNI), defined as the ratio of the actual plant N concentration (PNC) to the critical PNC at a given biomass level, has been widely used to diagnose plant N status and to aid in optimizing N fertilization. However, traditional techniques to determine NNI in the lab are time-consuming and expensive. Remote sensing provides a promising approach for large-scale and rapid monitoring and diagnosis of crop N status. In this study, we conducted a field experiment on winter oilseed rape with eight fertilization treatments in the 2014 and 2015 growing seasons. PNC, dry mass, and canopy spectra were measured during the different growth stages of winter oilseed rape. The N dilution curve was developed from the measurements, and NNI was computed and analyzed for the different treatments and growth stages. For the same treatment, NNI decreased as more leaves developed. Two methods were applied to remotely estimate NNI for winter oilseed rape: (1) NNI was estimated directly with vegetation indices (VIs) derived from canopy spectra; (2) the actual PNC and the critical PNC at the given biomass level were estimated separately with different types of VIs, and NNI was then computed from the two estimates. We found that VIs based solely on bands in the visible region provided the most accurate estimates of PNC. Estimating NNI directly with VIs performed better than estimating the actual PNC and the critical PNC separately.
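The NNI definition used above is a simple ratio built on a critical N dilution curve of the form Nc = a·W^(-b). The sketch below illustrates that calculation; the coefficients a and b are placeholders for illustration, not the curve fitted in the study.

```python
# Minimal sketch with placeholder coefficients (a, b are illustrative, not the
# curve fitted in the study): the nitrogen nutrition index as the ratio of
# measured plant N concentration to the critical concentration Nc = a * W**(-b).

def critical_pnc(biomass_t_ha, a=4.5, b=0.25):
    """Critical plant N concentration (%) at shoot biomass W (t DM/ha)."""
    # Below about 1 t/ha the critical concentration is usually held constant.
    w = max(biomass_t_ha, 1.0)
    return a * w ** (-b)

def nni(measured_pnc, biomass_t_ha):
    """Nitrogen nutrition index; <1 indicates N deficiency, >1 surplus."""
    return measured_pnc / critical_pnc(biomass_t_ha)

print(f"NNI = {nni(measured_pnc=3.2, biomass_t_ha=2.5):.2f}")
```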
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fabrikant, J.I.
1981-05-01
The current knowledge of the carcinogenic effect of radiation in man is considered. The discussion is restricted to dose-incidence data in humans, particularly to certain of those epidemiological studies of human populations that are used most frequently for risk estimation for low-dose radiation carcinogenesis in man. Emphasis is placed solely on those surveys concerned with nuclear explosions and medical exposures. (ACR)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laves, Kevin S.; Loeb, Susan C.
2006-01-01
ABSTRACT.—It is commonly assumed that population estimates derived from trapping small mammals are accurate and unbiased or that estimates derived from different capture methods are comparable. We captured southern flying squirrels (Glaucomys volans) using two methods to study their effect on red-cockaded woodpecker (Picoides borealis) reproductive success. Southern flying squirrels were captured at and removed from 30 red-cockaded woodpecker cluster sites during March to July 1994 and 1995 using Sherman traps placed in a grid encompassing a red-cockaded woodpecker nest tree and by hand from red-cockaded woodpecker cavities. Totals of 195 (1994) and 190 (1995) red-cockaded woodpecker cavities were examined at least three times each year. Trappability of southern flying squirrels in Sherman traps was significantly greater in 1995 (1.18%; 22,384 trap nights) than in 1994 (0.42%; 20,384 trap nights), and capture rate of southern flying squirrels in cavities was significantly greater in 1994 (22.7%; 502 cavity inspections) than in 1995 (10.8%; 555 cavity inspections). However, more southern flying squirrels were captured per cavity inspection than per Sherman trap night in both years. Male southern flying squirrels were more likely to be captured from cavities than in Sherman traps in 1994, but not in 1995. Both male and female juveniles were more likely to be captured in cavities than in traps in both years. In 1994 males in reproductive condition were more likely to be captured in cavities than in traps and in 1995 we captured significantly more reproductive females in cavities than in traps. Our data suggest that population estimates based solely on one trapping method may not represent true population size or structure of southern flying squirrels.
Shao, Zhenfeng; Zhang, Linjing
2016-01-01
Estimation of forest aboveground biomass is critical for regional carbon policies and sustainable forest management. Passive optical remote sensing and active microwave remote sensing both play an important role in the monitoring of forest biomass. However, optical spectral reflectance is saturated in relatively dense vegetation areas, and microwave backscattering is significantly influenced by the underlying soil when the vegetation coverage is low. Both of these conditions decrease the estimation accuracy of forest biomass. A new optical and microwave integrated vegetation index (VI) was proposed based on observations from both field experiments and satellite (Landsat 8 Operational Land Imager (OLI) and RADARSAT-2) data. According to the difference in interaction between the multispectral reflectance and microwave backscattering signatures with biomass, the combined VI (COVI) was designed using the weighted optical optimized soil-adjusted vegetation index (OSAVI) and microwave horizontally transmitted and vertically received signal (HV) to overcome the disadvantages of both data types. The performance of the COVI was evaluated by comparison with those of the sole optical data, Synthetic Aperture Radar (SAR) data, and the simple combination of independent optical and SAR variables. The most accurate performance was obtained by the models based on the COVI and optical and microwave optimal variables excluding OSAVI and HV, in combination with a random forest algorithm and the largest number of reference samples. The results also revealed that the predictive accuracy depended highly on the statistical method and the number of sample units. The validation indicated that this integrated method of determining the new VI is a good synergistic way to combine both optical and microwave information for the accurate estimation of forest biomass. PMID:27338378
NASA Astrophysics Data System (ADS)
Roosjen, Peter P. J.; Brede, Benjamin; Suomalainen, Juha M.; Bartholomeus, Harm M.; Kooistra, Lammert; Clevers, Jan G. P. W.
2018-04-01
In addition to single-angle reflectance data, multi-angular observations can be used as an additional information source for the retrieval of properties of an observed target surface. In this paper, we studied the potential of multi-angular reflectance data for the improvement of leaf area index (LAI) and leaf chlorophyll content (LCC) estimation by numerical inversion of the PROSAIL model. The potential for improvement of LAI and LCC was evaluated for both measured data and simulated data. The measured data were collected on 19 July 2016 by a frame camera mounted on an unmanned aerial vehicle (UAV) over a potato field, where eight experimental plots of 30 × 30 m were designed with different fertilization levels. Dozens of viewing angles, covering the hemisphere up to around 30° from nadir, were obtained by a large forward and sideways overlap of collected images. Simultaneously with the UAV flight, in situ measurements of LAI and LCC were performed. Inversion of the PROSAIL model was done based on nadir data and based on multi-angular data collected by the UAV. Inversion based on the multi-angular data performed slightly better than inversion based on nadir data, indicated by a decrease in RMSE from 0.70 m²/m² (nadir) to 0.65 m²/m² (multi-angular) for the estimation of LAI, and from 17.35 to 17.29 μg/cm² for the estimation of LCC. In addition to inversions based on measured data, we simulated several datasets at different multi-angular configurations and compared the accuracy of the inversions of these datasets with the inversion based on data simulated at the nadir position. In general, the results based on simulated (synthetic) data indicated that the most accurate estimates were obtained when more viewing angles, better-distributed viewing angles, and viewing angles extending to larger zenith angles were available for inversion. Interestingly, spectra simulated at the multi-angular sampling configurations captured by the UAV platform (view zenith angles up to 30°) already yielded a substantial improvement compared to spectra simulated solely at the nadir position. The results of this study show that the estimation of LAI and LCC by numerical inversion of the PROSAIL model can be improved when multi-angular observations are introduced. However, for the potato crop, PROSAIL inversion for measured data only showed moderate accuracy and slight improvements.
Modelling larval dispersal dynamics of common sole (Solea solea) along the western Iberian coast
NASA Astrophysics Data System (ADS)
Tanner, Susanne E.; Teles-Machado, Ana; Martinho, Filipe; Peliz, Álvaro; Cabral, Henrique N.
2017-08-01
Individual-based coupled physical-biological models have become the standard tool for studying ichthyoplankton dynamics and assessing fish recruitment. Here, common sole (Solea solea L.), a flatfish of high commercial importance in Europe was used to evaluate transport of eggs and larvae and investigate the connectivity between spawning and nursery areas along the western Iberian coast as spatio-temporal variability in dispersal and recruitment patterns can result in very strong or weak year-classes causing large fluctuations in stock size. A three-dimensional particle tracking model coupled to Regional Ocean Modelling System model was used to investigate variability of sole larvae dispersal along the western Iberian coast over a five-year period (2004-2009). A sensitivity analysis evaluating: (1) the importance of diel vertical migrations of larvae and (2) the size of designated recruitment areas was performed. Results suggested that connectivity patterns of sole larvae dispersal and their spatio-temporal variability are influenced by the configuration of the coast with its topographical structures and thus the suitable recruitment area available as well as the wind-driven mesoscale circulation along the Iberian coast.
The competition between thermal contraction and differentiation in the stress history of the moon
NASA Astrophysics Data System (ADS)
Kirk, Randolph L.; Stevenson, David J.
1989-09-01
The stress history of the moon is discussed, taking into consideration the effects of thermal contraction and differentiation. The amount of expansion caused by extracting basalt from undifferentiated lunar material is estimated taking account of the uncertainty in the knowledge of the appropriate compositions, and the resulting estimate of the expansion is used to compare the relative importance of the thermal and differentiation effects in the moon's volumetric history. The results of calculations show that differentiation is likely to be of major importance and, thus, thermal expansion is not the sole possible contributor to evolutionary changes in the lunar radius.
Harrington, J M; McBride, D I; Sorahan, T; Paddle, G M; van Tongeren, M
1997-01-01
OBJECTIVE: To investigate whether the risks of mortality from brain cancer are related to occupational exposure to magnetic fields. METHODS: A total of 112 cases of primary brain cancer (1972-91) were identified from a cohort of 84,018 male and female employees of the (then) Central Electricity Generating Board and its privatised successor companies. Individual cumulative occupational exposures to magnetic fields were estimated by linking available computerised job history data with magnetic field measurements collected over 675 person-workshifts. Estimated exposure histories of the case workers were compared with those of 654 control workers drawn from the cohort (nested case-control study), by means of conditional logistic regression. RESULTS: For exposure assessments based on arithmetic means, the risk of mortality from brain cancer for subjects with an estimated cumulative exposure to magnetic fields of 5.4-13.4 microT.y v subjects with lower exposures (0.0-5.3 microT.y) was 1.04 (95% confidence interval (95% CI) 0.60 to 1.80). The corresponding relative risk in subjects with higher exposures (> or = 13.5 microT.y) was 0.95 (95% CI 0.54 to 1.69). There was no indication of a positive trend for cumulative exposure and risk of mortality from brain cancer either when the analysis used exposure assessments based on geometric means or when the analysis was restricted to exposures received within five years of the case diagnosis (or corresponding period for controls). CONCLUSIONS: Although the exposure categorisation was based solely on recent observations, the study findings do not support the hypothesis that the risk of brain cancer is associated with occupational exposure to magnetic fields. PMID:9072027
Schneider, Markus; Rosam, Mathias; Glaser, Manuel; Patronov, Atanas; Shah, Harpreet; Back, Katrin Christiane; Daake, Marina Angelika; Buchner, Johannes; Antes, Iris
2016-10-01
Substrate binding to Hsp70 chaperones is involved in many biological processes, and the identification of potential substrates is important for a comprehensive understanding of these events. We present a multi-scale pipeline for an accurate, yet efficient prediction of peptides binding to the Hsp70 chaperone BiP by combining sequence-based prediction with molecular docking and MMPBSA calculations. First, we measured the binding of 15mer peptides from known substrate proteins of BiP by peptide array (PA) experiments and performed an accuracy assessment of the PA data by fluorescence anisotropy studies. Several sequence-based prediction models were fitted using this and other peptide binding data. A structure-based position-specific scoring matrix (SB-PSSM) derived solely from structural modeling data forms the core of all models. The matrix elements are based on a combination of binding energy estimations, molecular dynamics simulations, and analysis of the BiP binding site, which led to new insights into the peptide binding specificities of the chaperone. Using this SB-PSSM, peptide binders could be predicted with high selectivity even without training of the model on experimental data. Additional training further increased the prediction accuracies. Subsequent molecular docking (DynaDock) and MMGBSA/MMPBSA-based binding affinity estimations for predicted binders allowed the identification of the correct binding mode of the peptides as well as the calculation of nearly quantitative binding affinities. The general concept behind the developed multi-scale pipeline can readily be applied to other protein-peptide complexes with linearly bound peptides, for which sufficient experimental binding data for the training of classical sequence-based prediction models is not available. Proteins 2016; 84:1390-1407. © 2016 Wiley Periodicals, Inc.
Solari, Lely; Gutiérrez, Alfonso; Suárez, Carmen; Jave, Oswaldo; Castillo, Edith; Yale, Gloria; Ascencios, Luis; Quispe, Neyda; Valencia, Eddy; Suárez, Víctor
2011-01-01
This study evaluated the costs of three methods for the diagnosis of drug susceptibility in tuberculosis (MODS, GRIESS, and Genotype MTBDR plus®) and compared the cost per case of multidrug-resistant tuberculosis (MDR TB) diagnosed with each in four epidemiologic groups in Peru. On the basis of programmatic figures, we divided the population into four groups: new cases from Lima/Callao, new cases from other provinces, previously treated patients from Lima/Callao, and previously treated patients from other provinces. We calculated the cost of each test with the standard methodology of the Ministry of Health, from the perspective of the health system. Finally, we calculated the cost per patient diagnosed with MDR TB for each epidemiologic group. The estimated costs per test for MODS, GRIESS, and Genotype MTBDR plus® were 14.83, 15.51, and 176.41 nuevos soles, respectively (the local currency; 1 nuevo sol = 0.36 US dollars as of August 2011). The cost per patient diagnosed with GRIESS and MODS was lower than 200 nuevos soles in three of the four groups. The costs per diagnosed MDR TB case were higher than 2,000 nuevos soles with Genotype MTBDR plus® in the two groups of new patients, and lower than 1,000 nuevos soles in the groups of previously treated patients. In high-prevalence groups, such as previously treated patients, the cost per MDR TB diagnosis with the three evaluated tests was low; however, the cost with the molecular test in the low-prevalence groups was high. The use of molecular tests should be optimized in high-prevalence areas.
Lin, Yi-Jia; Lee, Shih-Chi; Chang, Chao-Chin; Liu, Tsung-Han
2018-01-01
This study is aimed at determining the effects of midsole thickness on movement characteristic during side cutting movement. Fifteen athletes performed side-step cutting while wearing shoes with varying midsole thicknesses. Temporal-spatial and ground reaction force variables as well as foot and ankle frontal kinematics were used to describe breaking and propulsive movement characteristics and modulation strategies. Regardless of midsole thickness, temporal-spatial variables and breaking and propulsive force during side cutting were statistically unchanged. Significantly greater peaks of ankle inversion and plantarflexion with a thicker sole and greater midtarsal pronation with a thinner sole were observed. Current results demonstrated that hypotheses formed solely based on material testing were insufficient to understand the adaptations in human movement because of the redundancy of the neuromusculoskeletal system. Participants were able to maintain temporal-spatial performance during side cutting while wearing shoes with midsoles of varying thicknesses. Increased pronation for a thinner sole might help reduce the force of impact but might be associated with an increased risk of excessive stress on soft tissue. Increased peak of ankle inversion and plantarflexion for a thicker sole may be unfavorable for the stability of ankle joint. Information provided in human movement testing is crucial for understanding factors associated with movement characteristics and injury and should be considered in the future development of shoe design. PMID:29854000
Livingston, Michael; Dietze, Paul; Ferris, Jason; Pennay, Darren; Hayes, Linda; Lenton, Simon
2013-03-16
Telephone surveys based on samples of landline telephone numbers are widely used to measure the prevalence of health risk behaviours such as smoking, drug use and alcohol consumption. An increasing number of households are relying solely on mobile telephones, creating a potential bias for population estimates derived from landline-based sampling frames which do not incorporate mobile phone numbers. Studies in the US have identified significant differences between landline and mobile telephone users in smoking and alcohol consumption, but there has been little work in other settings or focussed on illicit drugs. This study examined Australian prevalence estimates of cannabis use, tobacco smoking and risky alcohol consumption based on samples selected using a dual-frame (mobile and landline) approach. Respondents from the landline sample were compared both to the overall mobile sample (including respondents who had access to a landline) and specifically to respondents who lived in mobile-only households. Bivariate comparisons were complemented with multivariate logistic regression models, controlling for the effects of basic demographic variables. The landline sample reported much lower prevalence of tobacco use, cannabis use and alcohol consumption than the mobile samples. Once demographic variables were adjusted for, there were no significant differences between the landline and mobile respondents on any of the alcohol measures examined. In contrast, the mobile samples had significantly higher rates of cannabis and tobacco use, even after adjustment. Weighted estimates from the dual-frame sample were generally higher than the landline sample across all substances, but only significantly higher for tobacco use. Landline telephone surveys in Australia are likely to substantially underestimate the prevalence of tobacco smoking by excluding potential respondents who live in mobile-only households. In contrast, estimates of alcohol consumption and cannabis use from landline surveys are likely to be broadly accurate, once basic demographic weighting is undertaken.
Judging hardness of an object from the sounds of tapping created by a white cane.
Nunokawa, K; Seki, Y; Ino, S; Doi, K
2014-01-01
The white cane plays a vital role in the independent mobility of the visually impaired. The recognition of target attributes through contact with a white cane is an important function. We have conducted research to obtain fundamental knowledge concerning the exploration methods used to perceive the hardness of an object through contact with a white cane. This research has allowed us to examine methods that enhance accuracy in the perception of objects, as well as the materials and structures of a white cane. Previous research suggests that both auditory and tactile information from the white cane must be considered when judging an object's hardness. This experimental study examined the ability of people to perceive the hardness of an object solely through the tapping sounds of a white cane (i.e., auditory information), using a method of magnitude estimation. Two types of sounds were used to estimate hardness: 1) the playback of recorded tapping sounds and 2) the sounds produced on-site by tapping. Three types of handgrips were used to create different sounds when tapping on an object with a cane. The participants in this experiment were five sighted university students wearing eye masks and two totally blind students who walk independently with a white cane. The results showed that both the sighted university students and the totally blind participants were able to accurately judge the hardness of an object solely from the auditory information provided by a white cane. For the blind participants, different handgrips significantly influenced the accuracy of their estimation of an object's hardness.
From conservative to reactive transport under diffusion-controlled conditions
NASA Astrophysics Data System (ADS)
Babey, Tristan; de Dreuzy, Jean-Raynald; Ginn, Timothy R.
2016-05-01
We assess the possibility to use conservative transport information, such as that contained in transit time distributions, breakthrough curves and tracer tests, to predict nonlinear fluid-rock interactions in fracture/matrix or mobile/immobile conditions. Reference simulated data are given by conservative and reactive transport simulations in several diffusive porosity structures differing by their topological organization. Reactions include nonlinear kinetically controlled dissolution and desorption. Effective Multi-Rate Mass Transfer models (MRMT) are calibrated solely on conservative transport information without pore topology information and provide concentration distributions on which effective reaction rates are estimated. Reference simulated reaction rates and effective reaction rates evaluated by MRMT are compared, as well as characteristic desorption and dissolution times. Although not exactly equal, these indicators remain very close whatever the porous structure, differing at most by 0.6% and 10% for desorption and dissolution. At early times, this close agreement arises from the fine characterization of the diffusive porosity close to the mobile zone that controls fast mobile-diffusive exchanges. At intermediate to late times, concentration gradients are strongly reduced by diffusion, and reactivity can be captured by a very limited number of rates. We conclude that effective models calibrated solely on conservative transport information like MRMT can accurately estimate monocomponent kinetically controlled nonlinear fluid-rock interactions. Their relevance might extend to more advanced biogeochemical reactions because of the good characterization of conservative concentration distributions, even by parsimonious models (e.g., MRMT with 3-5 rates). We propose a methodology to estimate reactive transport from conservative transport in mobile-immobile conditions.
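To make the parsimonious MRMT idea concrete, the sketch below forward-simulates a closed (batch) mobile-immobile system with three exchange rates: each immobile zone relaxes toward the mobile concentration at its own rate, weighted by a capacity ratio. The rates and capacities are hypothetical, and the calibration step described in the abstract is not reproduced here.

```python
# Illustrative sketch (not the paper's calibrated models): forward simulation
# of a three-rate multi-rate mass transfer (MRMT) system in a closed batch
# mobile-immobile setting.
import numpy as np
from scipy.integrate import solve_ivp

alpha = np.array([1e-1, 1e-2, 1e-3])   # exchange rates (1/s), hypothetical
beta = np.array([0.2, 0.3, 0.5])       # immobile capacity ratios, hypothetical

def rhs(t, y):
    cm, cim = y[0], y[1:]
    dcim = alpha * (cm - cim)           # each immobile zone relaxes toward cm
    dcm = -np.sum(beta * dcim)          # conserves cm + sum(beta * cim)
    return np.concatenate(([dcm], dcim))

y0 = np.concatenate(([1.0], np.zeros(3)))   # tracer initially only in the mobile zone
sol = solve_ivp(rhs, (0.0, 5000.0), y0, method="LSODA", dense_output=True)
t = np.array([1.0, 10.0, 100.0, 1000.0, 5000.0])
print(np.round(sol.sol(t)[0], 4))           # mobile concentration through time
```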
NASA Astrophysics Data System (ADS)
Izett, Jonathan G.; Fennel, Katja
2018-02-01
Rivers deliver large amounts of terrestrially derived materials (such as nutrients, sediments, and pollutants) to the coastal ocean, but a global quantification of the fate of this delivery is lacking. Nutrients can accumulate on shelves, potentially driving high levels of primary production with negative consequences like hypoxia, or be exported across the shelf to the open ocean where impacts are minimized. Global biogeochemical models cannot resolve the relatively small-scale processes governing river plume dynamics and cross-shelf export; instead, river inputs are often parameterized assuming an "all or nothing" approach. Recently, Sharples et al. (2017), https://doi.org/10.1002/2016GB005483 proposed the SP number—a dimensionless number relating the estimated size of a plume as a function of latitude to the local shelf width—as a simple estimator of cross-shelf export. We extend their work, which is solely based on theoretical and empirical scaling arguments, and address some of its limitations using a numerical model of an idealized river plume. In a large number of simulations, we test whether the SP number can accurately describe export in unforced cases and with tidal and wind forcings imposed. Our numerical experiments confirm that the SP number can be used to estimate export and enable refinement of the quantitative relationships proposed by Sharples et al. We show that, in general, external forcing has only a weak influence compared to latitude and derive empirical relationships from the results of the numerical experiments that can be used to estimate riverine freshwater export to the open ocean.
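The abstract describes the SP number only as the ratio of an estimated, latitude-dependent plume width to the local shelf width; the published formula is not reproduced here. As an illustration only, the sketch below assumes the plume width scales with a multiple of the baroclinic Rossby radius, so that lower latitudes (smaller Coriolis parameter) give wider plumes and larger SP; every parameter value is an assumption.

```python
# Illustration only: a schematic plume-scale / shelf-width ratio, assuming the
# plume width scales with a multiple of the baroclinic Rossby radius. This is a
# stand-in for the SP number, not the published formula.
import numpy as np

OMEGA = 7.2921e-5  # Earth's rotation rate (rad/s)

def sp_number(lat_deg, shelf_width_m, g_prime=0.1, h_plume=10.0, n_radii=4.0):
    """Dimensionless plume-scale / shelf-width ratio (schematic)."""
    f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))    # Coriolis parameter
    rossby_radius = np.sqrt(g_prime * h_plume) / abs(f)
    return n_radii * rossby_radius / shelf_width_m

# Low-latitude plumes are wider (small f), so for the same shelf width they are
# more likely to reach the shelf edge (values above ~1 suggest export).
for lat in (5.0, 20.0, 45.0):
    print(f"lat {lat:>4.1f} deg  SP ~ {sp_number(lat, shelf_width_m=50e3):.2f}")
```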
The effect of aborting ongoing movements on end point position estimation.
Itaguchi, Yoshihiro; Fukuzawa, Kazuyoshi
2013-11-01
The present study investigated the impact of motor commands to abort ongoing movement on position estimation. Participants carried out visually guided reaching movements on a horizontal plane with their eyes open. By setting a mirror above their arm, however, they could not see the arm, only the start and target points. They estimated the position of their fingertip based solely on proprioception after their reaching movement was stopped before reaching the target. The participants stopped reaching as soon as they heard an auditory cue or were mechanically prevented from moving any further by an obstacle in their path. These reaching movements were carried out at two different speeds (fast or slow). It was assumed that additional motor commands to abort ongoing movement were required and that their magnitude was high, low, and zero, in the auditory-fast condition, the auditory-slow condition, and both the obstacle conditions, respectively. There were two main results. (1) When the participants voluntarily stopped a fast movement in response to the auditory cue (the auditory-fast condition), they showed more underestimates than in the other three conditions. This underestimate effect was positively related to movement velocity. (2) An inverted-U-shaped bias pattern as a function of movement distance was observed consistently, except in the auditory-fast condition. These findings indicate that voluntarily stopping fast ongoing movement created a negative bias in the position estimate, supporting the idea that additional motor commands or efforts to abort planned movement are involved with the position estimation system. In addition, spatially probabilistic inference and signal-dependent noise may explain the underestimate effect of aborting ongoing movement.
cBathy: A robust algorithm for estimating nearshore bathymetry
Plant, Nathaniel G.; Holman, Rob; Holland, K. Todd
2013-01-01
A three-part algorithm is described and tested to provide robust bathymetry maps based solely on long time series observations of surface wave motions. The first phase consists of frequency-dependent characterization of the wave field, in which dominant frequencies are estimated by Fourier transform while corresponding wave numbers are derived from spatial gradients in cross-spectral phase over analysis tiles that can be small, allowing high spatial resolution. Coherent spatial structures at each frequency are extracted by frequency-dependent empirical orthogonal function (EOF) analysis. In phase two, depths are found that best fit weighted sets of frequency-wave number pairs. These are subsequently smoothed in time in phase three using a Kalman filter that fills gaps in coverage and objectively averages new estimates of variable quality with prior estimates. Objective confidence intervals are returned. Tests at Duck, NC, using 16 surveys collected over 2 years showed a bias and root-mean-square (RMS) error of 0.19 and 0.51 m, respectively, but errors were largest near the offshore limits of analysis (roughly 500 m from the camera) and near the steep shoreline, where analysis tiles mix information from waves, swash, and static dry sand. Performance was excellent for small waves but degraded somewhat with increasing wave height. Sand bars and their small-scale alongshore variability were well resolved. A single ground truth survey from a dissipative, low-sloping beach (Agate Beach, OR) showed similar errors over a region that extended several kilometers from the camera and reached depths of 14 m. Vector wave number estimates can also be incorporated into data assimilation models of nearshore dynamics.
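The core of the depth-fitting step implied above is inversion of the linear surface-gravity-wave dispersion relation, ω² = g·k·tanh(k·h), for depth h given an observed frequency-wavenumber pair. The sketch below shows only that single inversion (which has a closed form), not the full cBathy pipeline of weighting, EOF analysis, or Kalman smoothing.

```python
# Sketch of the core depth-inversion step (not the full cBathy pipeline):
# invert the linear dispersion relation omega**2 = g*k*tanh(k*h) for depth h.
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def depth_from_dispersion(omega, k):
    """Depth (m) from angular frequency omega (rad/s) and wavenumber k (rad/m)."""
    ratio = omega**2 / (G * k)
    if ratio >= 1.0:
        return np.inf   # deep-water limit; depth is unconstrained
    return np.arctanh(ratio) / k

# Example: a 10 s swell with a 70 m wavelength implies water roughly 5-6 m deep.
omega = 2 * np.pi / 10.0
k = 2 * np.pi / 70.0
print(f"h = {depth_from_dispersion(omega, k):.1f} m")
```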
Herrera, Pedro Javier; Pajares, Gonzalo; Guijarro, Maria; Ruz, José J.; Cruz, Jesús M.; Montes, Fernando
2009-01-01
This paper describes a novel feature-based stereovision matching process based on a pair of omnidirectional images in forest stands acquired with a stereovision sensor equipped with fish-eye lenses. The stereo analysis problem consists of the following steps: image acquisition, camera modelling, feature extraction, image matching and depth determination. Once the depths of significant points on the trees are obtained, the growing stock volume can be estimated by considering the geometrical camera modelling, which is the final goal. The key steps are feature extraction and image matching, and this paper is devoted solely to these two steps. At the first stage, a segmentation process extracts the trunks, which are the regions used as features, where each feature is identified through a set of attributes or properties useful for matching. In the second step, the features are matched based on the application of four well-known matching constraints: epipolar, similarity, ordering and uniqueness. The combination of the segmentation and matching processes for this specific kind of sensor makes the main contribution of the paper. The method is tested with satisfactory results and compared against the human expert criterion. PMID:22303134
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC.
Child welfare directors in 19 states and juvenile justice officials in 30 counties estimated that in fiscal year 2001 parents placed over 12,700 children into the child welfare or juvenile justice systems so that these children could receive mental health services. Neither the child welfare nor the juvenile justice system was designed to serve…
Afari-Dwamena, Nana Ama; Li, Ji; Chen, Rusan; Feinleib, Manning; Lamm, Steven H.
2016-01-01
Background. To examine whether the US EPA (2010) lung cancer risk estimate derived from the high arsenic exposures (10–934 µg/L) in southwest Taiwan accurately predicts the US experience from low arsenic exposures (3–59 µg/L). Methods. Analyses have been limited to US counties solely dependent on underground sources for their drinking water supply with median arsenic levels of ≥3 µg/L. Results. Cancer risks (slopes) were found to be indistinguishable from zero for males and females. The addition of arsenic level did not significantly increase the explanatory power of the models. Stratified, or categorical, analysis yielded relative risks that hover about 1.00. The unit risk estimates were nonpositive and not significantly different from zero, and the maximum (95% UCL) unit risk estimates for lung cancer were lower than those in US EPA (2010). Conclusions. These data do not demonstrate an increased risk of lung cancer associated with median drinking water arsenic levels in the range of 3–59 µg/L. The upper-bound estimates of the risks are lower than the risks predicted from the SW Taiwan data and do not support those predictions. These results are consistent with a recent metaregression that indicated no increased lung cancer risk for arsenic exposures below 100–150 µg/L. PMID:27382373
Stine, O C; Smith, K D
1990-01-01
The effects of mutation, migration, random drift, and selection on the change in frequency of the alleles associated with Huntington disease, porphyria variegata, and lipoid proteinosis have been assessed in the Afrikaner population of South Africa. Although admixture cannot be completely discounted, it was possible to exclude migration and new mutation as major sources of changes in the frequency of these alleles by limiting analyses to pedigrees descendant from founding families. Calculations which overestimated the possible effect of random drift demonstrated that drift did not account for the observed changes in gene frequencies. Therefore these changes must have been caused by natural selection, and a coefficient of selection was estimated for each trait. For the rare, dominant, deleterious allele associated with Huntington disease, the coefficient of selection was estimated to be .34, indicating that this allele has a selective disadvantage, contrary to some recent studies. For the presumed dominant and probably deleterious allele associated with porphyria variegata, the coefficient of selection lies between .07 and .02. The coefficient of selection for the rare, clinically recessive allele associated with lipoid proteinosis was estimated to be .07. Calculations based on a model system indicate that the observed decrease in allele frequency cannot be explained solely on the basis of selection against the homozygote. Thus, this may be an example of a pleiotropic gene which has a dominant effect in terms of selection even though its known clinical effect is recessive. PMID:2137963
Stine, O C; Smith, K D
1990-03-01
The effects of mutation, migration, random drift, and selection on the change in frequency of the alleles associated with Huntington disease, porphyria variegata, and lipoid proteinosis have been assessed in the Afrikaner population of South Africa. Although admixture cannot be completely discounted, it was possible to exclude migration and new mutation as major sources of changes in the frequency of these alleles by limiting analyses to pedigrees descendant from founding families. Calculations which overestimated the possible effect of random drift demonstrated that drift did not account for the observed changes in gene frequencies. Therefore these changes must have been caused by natural selection, and a coefficient of selection was estimated for each trait. For the rare, dominant, deleterious allele associated with Huntington disease, the coefficient of selection was estimated to be .34, indicating that this allele has a selective disadvantage, contrary to some recent studies. For the presumed dominant and probably deleterious allele associated with porphyria variegata, the coefficient of selection lies between .07 and .02. The coefficient of selection for the rare, clinically recessive allele associated with lipoid proteinosis was estimated to be .07. Calculations based on a model system indicate that the observed decrease in allele frequency cannot be explained solely on the basis of selection against the homozygote. Thus, this may be an example of a pleiotropic gene which has a dominant effect in terms of selection even though its known clinical effect is recessive.
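The abstract reports coefficients of selection inferred from observed changes in allele frequency. As a standard textbook illustration of that logic (not the authors' pedigree-based calculation), the sketch below applies the deterministic recursion for selection against a dominant deleterious allele and grid-searches the selection coefficient that best reproduces an observed frequency change; the numbers used are hypothetical.

```python
# Minimal sketch (not the authors' pedigree-based method): per-generation change
# in frequency of a dominant deleterious allele under selection coefficient s,
# and a crude estimate of s from an observed change in allele frequency.

def next_freq(p, s):
    """Allele frequency after one generation of selection against a dominant allele."""
    q = 1.0 - p
    mean_fitness = 1.0 - s * (p * p + 2.0 * p * q)   # AA and Aa have fitness 1 - s
    return p * (1.0 - s) / mean_fitness

def estimate_s(p0, p_t, generations, grid=10001):
    """Grid-search the s whose trajectory best matches the observed endpoint."""
    best_s, best_err = 0.0, float("inf")
    for i in range(grid):
        s = i / (grid - 1)
        p = p0
        for _ in range(generations):
            p = next_freq(p, s)
        err = abs(p - p_t)
        if err < best_err:
            best_s, best_err = s, err
    return best_s

# Hypothetical numbers: allele frequency halves over two generations.
print(f"estimated s = {estimate_s(p0=1e-3, p_t=5e-4, generations=2):.3f}")
```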
Spungen, Judith H; MacMahon, Shaun; Leigh, Jessica; Flannery, Brenna; Kim, Grace; Chirtel, Stuart; Smegal, Deborah
2018-04-05
A dietary exposure assessment was conducted for 3-monochloropropane-1,2-diol (3-MCPD) esters (3-MCPDE) and glycidyl esters (GE) in infant formulas available for consumption in the U.S. 3-MCPDE and GE are food contaminants generated during the deodorization of refined edible oils, which are used in infant formulas and other foods. 3-MCPDE and GE are of potential toxicological concern because these compounds are metabolized to free 3-MCPD and free glycidol in rodents, and may have the same metabolic fate in humans. Free 3-MCPD and free glycidol have been found to cause adverse effects in rodents. Dietary exposures to 3-MCPDE and GE from consumption of infant formulas are of particular interest because formulas are the sole or primary food source for some infants. In this analysis, U.S. Food and Drug Administration (FDA) data on 3-MCPDE and GE concentrations (as 3-MCPD and glycidol equivalents, respectively) in a small convenience sample of infant formulas were used to estimate exposures from consumption of formula by infants 0–6 months of age. 3-MCPDE and GE exposures based on mean concentrations in all formulas were estimated at 7–10 µg/kg bw/day and 2 µg/kg bw/day, respectively. Estimated mean exposures from consumption of formulas produced by individual manufacturers ranged from 1–14 µg/kg bw/day for 3-MCPDE, and from 1–3 µg/kg for GE.
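The underlying exposure arithmetic is a simple ratio: concentration in prepared formula times daily formula intake, divided by body weight. The sketch below illustrates it with hypothetical input values; it is not FDA's assessment and the intake and body-weight figures are assumptions.

```python
# Back-of-the-envelope sketch (illustrative values, not FDA's assessment):
# dietary exposure = contaminant concentration in prepared formula
# * daily formula intake / body weight.

def exposure_ug_per_kg_bw_day(conc_ug_per_kg, intake_kg_per_day, body_weight_kg):
    return conc_ug_per_kg * intake_kg_per_day / body_weight_kg

# Hypothetical inputs for a young infant: 0.8 kg of prepared formula per day,
# 5 kg body weight, 50 ug/kg 3-MCPD equivalents in prepared formula.
print(f"{exposure_ug_per_kg_bw_day(50.0, 0.8, 5.0):.1f} ug/kg bw/day")
```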
Intelligent visual localization of wireless capsule endoscopes enhanced by color information.
Dimas, George; Spyrou, Evaggelos; Iakovidis, Dimitris K; Koulaouzidis, Anastasios
2017-10-01
Wireless capsule endoscopy (WCE) is performed with a miniature swallowable endoscope enabling the visualization of the whole gastrointestinal (GI) tract. One of the most challenging problems in WCE is the localization of the capsule endoscope (CE) within the GI lumen. Contemporary, radiation-free localization approaches are mainly based on the use of external sensors and transit time estimation techniques, with practically low localization accuracy. Latest advances for the solution of this problem include localization approaches based solely on visual information from the CE camera. In this paper we present a novel visual localization approach based on an intelligent, artificial neural network, architecture which implements a generic visual odometry (VO) framework capable of estimating the motion of the CE in physical units. Unlike the conventional, geometric, VO approaches, the proposed one is adaptive to the geometric model of the CE used; therefore, it does not require any prior knowledge about and its intrinsic parameters. Furthermore, it exploits color as a cue to increase localization accuracy and robustness. Experiments were performed using a robotic-assisted setup providing ground truth information about the actual location of the CE. The lowest average localization error achieved is 2.70 ± 1.62 cm, which is significantly lower than the error obtained with the geometric approach. This result constitutes a promising step towards the in-vivo application of VO, which will open new horizons for accurate local treatment, including drug infusion and surgical interventions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dioxin inhalation doses from wood combustion in indoor cookfires
NASA Astrophysics Data System (ADS)
Northcross, Amanda L.; Katharine Hammond, S.; Canuz, Eduardo; Smith, Kirk R.
2012-03-01
Approximately 3 billion people worldwide rely on solid biomass fuels for household cooking and space heating, and approximately 50-60% use wood, often indoors in poorly ventilated conditions. Daily exposure to high concentrations of smoke from cookstoves inside kitchens is particularly large for women cooks and their small children. The smoke from burning wood fuel contains hundreds of toxic compounds, including dioxins and furans, some of the most toxic compounds known to science. Health effects of exposure to dioxins include reproductive and developmental problems, damage to the immune system, interference with hormones, and cancer. This study measured concentrations of dioxins and furans in a typical Guatemalan village home during open cookfires. Measured concentrations averaged 0.32 ± 0.07 ng m-3 over 31 fires. A Monte Carlo simulation was conducted using parameter estimates based on 8 years of research experience in the study area. The estimated total daily intakes of 17 particle-phase dioxins and furans for women, a 5-year-old child, and a 6-month-old infant were 1.2 (S.D. = 0.4), 1.7 (S.D. = 0.7), and 2.0 (S.D. = 0.5), respectively. Based solely on inhalation of particle-phase dioxins in woodsmoke from an open cooking fire, an estimated 46% of infants, 3% of women, and 26% of 5-year-old children have a total daily intake (TDI) that exceeds the WHO TDI guideline for dioxins and furans. These values may be underestimates, as they do not include gas-phase concentrations or ingestion of dioxins and furans through food, which is the largest route of exposure in the developed world.
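A Monte Carlo intake estimate of the kind mentioned above samples uncertain inputs and propagates them through a dose equation. The sketch below shows the structure only: the concentration distribution uses the measured mean and standard deviation, but the inhalation rate, time near the fire, body weight, and the resulting units are all illustrative assumptions, not the study's parameter estimates.

```python
# Schematic Monte Carlo sketch (distributions are illustrative, not the study's
# parameters): daily inhaled dose = air concentration * inhalation rate * time
# in kitchen / body weight, sampled over uncertain inputs.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
conc_ng_m3 = rng.normal(0.32, 0.07, n).clip(min=0)       # measured mean +/- SD (ng/m3)
inhalation_m3_h = rng.normal(0.5, 0.1, n).clip(min=0)    # hypothetical adult rate
hours_exposed = rng.triangular(2, 4, 8, n)               # hypothetical time near fire
body_weight_kg = rng.normal(55, 8, n).clip(min=30)       # hypothetical adult weight

dose_pg_kg_day = conc_ng_m3 * 1000 * inhalation_m3_h * hours_exposed / body_weight_kg
print(f"median dose = {np.median(dose_pg_kg_day):.1f} pg/kg bw/day "
      f"(95th pct {np.percentile(dose_pg_kg_day, 95):.1f})")
```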
NASA Astrophysics Data System (ADS)
Rackow, Thomas; Wesche, Christine; Timmermann, Ralph; Hellmer, Hartmut H.; Juricke, Stephan; Jung, Thomas
2017-04-01
We present a simulation of Antarctic iceberg drift and melting that includes small, medium-sized, and giant tabular icebergs with a realistic size distribution. For the first time, an iceberg model is initialized with a set of nearly 7000 observed iceberg positions and sizes around Antarctica. The study highlights the necessity to account for larger and giant icebergs in order to obtain accurate melt climatologies. We simulate drift and lateral melt using iceberg-draft averaged ocean currents, temperature, and salinity. A new basal melting scheme, originally applied in ice shelf melting studies, uses in situ temperature, salinity, and relative velocities at an iceberg's bottom. Climatology estimates of Antarctic iceberg melting based on simulations of small (≤2.2 km), "small-to-medium-sized" (≤10 km), and small-to-giant icebergs (including icebergs >10 km) exhibit differential characteristics: successive inclusion of larger icebergs leads to a reduced seasonality of the iceberg meltwater flux and a shift of the mass input to the area north of 58°S, while less meltwater is released into the coastal areas. This suggests that estimates of meltwater input solely based on the simulation of small icebergs introduce a systematic meridional bias; they underestimate the northward mass transport and are, thus, closer to the rather crude treatment of iceberg melting as coastal runoff in models without an interactive iceberg model. Future ocean simulations will benefit from the improved meridional distribution of iceberg melt, especially in climate change scenarios where the impact of iceberg melt is likely to increase due to increased calving from the Antarctic ice sheet.
NASA Astrophysics Data System (ADS)
Rackow, Thomas; Wesche, Christine; Timmermann, Ralph; Hellmer, Hartmut H.; Juricke, Stephan; Jung, Thomas
2017-04-01
We present a simulation of Antarctic iceberg drift and melting that includes small (<2.2 km), medium-sized, and giant tabular icebergs with lengths of more than 10 km. The model is initialized with a realistic size distribution obtained from satellite observations. Our study highlights the necessity to account for larger and giant icebergs in order to obtain accurate melt climatologies. Taking iceberg modeling a step further, we simulate drift and melting using iceberg-draft averaged ocean currents, temperature, and salinity. A new basal melting scheme, originally applied in ice shelf melting studies, uses in situ temperature, salinity, and relative velocities at an iceberg's keel. The climatology estimates of Antarctic iceberg melting based on simulations of small, 'small-to-medium'-sized, and small-to-giant icebergs (including icebergs > 10 km) exhibit differential characteristics: successive inclusion of larger icebergs leads to a reduced seasonality of the iceberg meltwater flux and a shift of the mass input to the area north of 58°S, while less meltwater is released into the coastal areas. This suggests that estimates of meltwater input solely based on the simulation of small icebergs introduce a systematic meridional bias; they underestimate the northward mass transport and are, thus, closer to the rather crude treatment of iceberg melting as coastal runoff in models without an interactive iceberg model. Future ocean simulations will benefit from the improved meridional distribution of iceberg melt, especially in climate change scenarios where the impact of iceberg melt is likely to increase due to increased calving from the Antarctic ice sheet.
Skagen, Susan K.; Granfors, Diane A.; Melcher, Cynthia P.
2008-01-01
Conservation challenges enhance the need for quantitative information on dispersed bird populations in extensive landscapes, for techniques to monitor populations and assess environmental effects, and for conservation strategies at appropriate temporal and spatial scales. By estimating population sizes of shorebirds in the U.S. portion of the prairie pothole landscape in central North America, where most migrating shorebirds exhibit a highly dispersed spatial pattern, we determined that the region may play a vital role in the conservation of shorebirds. During northward and southward migration, 7.3 million shorebirds (95% CI: 4.3–10.3 million) and 3.9 million shorebirds (95% CI: 1.7–6.0 million) stopped to rest and refuel in the study area; inclusion of locally breeding species increases the estimates by 0.1 million and 0.07 million shorebirds, respectively. Seven species of calidridine sandpipers, including Semipalmated Sandpipers (Calidris pusilla), White-rumped Sandpipers (C. fuscicollis), and Stilt Sandpipers (C. himantopus), constituted 50% of northbound migrants in our study area. We present an approach to population estimation and monitoring, based on stratified random selection of townships as sample units, that is well suited to 11 migratory shorebird species. For extensive and dynamic wetland systems, we strongly caution against a monitoring program based solely on repeated counts of known stopover sites with historically high numbers of shorebirds. We recommend refinements in methodology to address sample-size requirements and potential sources of bias so that our approach may form the basis of a rigorous migration monitoring program in this and other prairie wetland regions.
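The population estimates quoted above come from a stratified random sample of townships. The sketch below shows the generic stratified estimator (stratum totals summed, with a finite-population-corrected variance for a 95% confidence interval); all stratum counts, means, and standard deviations are made-up numbers, not the study's data.

```python
# Generic stratified-random-sampling estimator (numbers are made up, not the
# study's data): total = sum over strata of N_h * mean_h, with a 95% CI.
import numpy as np

# Per-stratum inputs: number of townships N_h, sampled townships n_h,
# sample mean and sample SD of shorebirds counted per township.
strata = [
    # (N_h, n_h, mean_h, sd_h) -- hypothetical values
    (400, 30, 120.0, 300.0),
    (250, 25, 800.0, 1500.0),
    (150, 20, 3000.0, 6000.0),
]

total, var = 0.0, 0.0
for N_h, n_h, mean_h, sd_h in strata:
    total += N_h * mean_h
    fpc = 1.0 - n_h / N_h                      # finite population correction
    var += N_h**2 * fpc * sd_h**2 / n_h

half_width = 1.96 * np.sqrt(var)
print(f"estimate = {total:,.0f}  95% CI +/- {half_width:,.0f}")
```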
Changes in Cirrus Cloudiness and their Relationship to Contrails
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Ayers, J. Kirk; Palikonda, Rabindra; Doelling, David R.; Schumann, Ulrich; Gierens, Klaus
2001-01-01
Condensation trails, or contrails, formed in the wake of high-altitude aircraft have long been suspected of causing the formation of additional cirrus cloud cover. More cirrus is possible because 10-20% of the atmosphere at typical commercial flight altitudes is clear but ice-saturated. Since they can affect the radiation budget like natural cirrus clouds of equivalent optical depth and microphysical properties, contrail-generated cirrus clouds are another potential source of anthropogenic influence on climate. Initial estimates of contrail radiative forcing (CRF) were based on linear contrail coverage and optical depths derived from a limited number of satellite observations. Assuming that such estimates are accurate, they can be considered as the minimum possible CRF because contrails often develop into cirrus clouds unrecognizable as contrails. These anthropogenic cirrus are not likely to be identified as contrails from satellites and would, therefore, not contribute to estimates of contrail coverage. The mean lifetime and coverage of spreading contrails relative to linear contrails are needed to fully assess the climatic effect of contrails, but are difficult to measure directly. However, the maximum possible impact can be estimated using the relative trends in cirrus coverage over regions with and without air traffic. In this paper, the upper bound of CRF is derived by first computing the change in cirrus coverage over areas with heavy air traffic relative to that over the remainder of the globe, assuming that the difference between the two trends is due solely to contrails. This difference is normalized to the corresponding linear contrail coverage for the same regions to obtain an average spreading factor. The maximum contrail-cirrus coverage, estimated as the product of the spreading factor and the linear contrail coverage, is then used in the radiative model to estimate the maximum potential CRF for current air traffic.
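The upper-bound bookkeeping described above reduces to a few lines of arithmetic: take the excess cirrus trend over high-traffic regions, attribute all of it to contrails, divide by the linear contrail coverage there to get a spreading factor, and then apply that factor to linear contrail coverage elsewhere. The numbers in the sketch below are placeholders, not the paper's values.

```python
# Arithmetic sketch of the upper-bound reasoning above (all numbers are
# placeholders, not the paper's values).

trend_traffic = 0.010         # cirrus cover change over air-traffic regions
trend_elsewhere = 0.002       # cirrus cover change over the rest of the globe
linear_cover_traffic = 0.005  # mean linear contrail coverage in the traffic regions

excess_cirrus = trend_traffic - trend_elsewhere          # assumed due solely to contrails
spreading_factor = excess_cirrus / linear_cover_traffic  # contrail-cirrus per linear contrail

# Apply the spreading factor to a (placeholder) global-mean linear contrail coverage
# to obtain the maximum contrail-cirrus coverage used in the radiative calculation.
global_linear_cover = 0.001
max_global_contrail_cirrus = spreading_factor * global_linear_cover

print(f"spreading factor = {spreading_factor:.1f}, "
      f"max contrail-cirrus coverage = {max_global_contrail_cirrus:.4f}")
```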
ERIC Educational Resources Information Center
Jones, Tiffany
2014-01-01
States are increasingly funding higher education institutions based on their performance or outcomes instead of relying solely on student enrollment to determine funding formulas. Performance Funding (also called Performance-Based and Outcomes-Based Funding) policies provide state support to public colleges and universities based on outcome…
NASA Astrophysics Data System (ADS)
Emel'yanenko, V. V.; Naroenkov, S. A.
2018-01-01
At the beginning of this century, the SOHO space observatory discovered near-Sun comets with perihelion distances q ≈ 0.05 AU, which remained observable over several close encounters with the Sun. This became one of the surprises in studying the small bodies of the Solar System. Currently, there are objects that have already been observed in four (342P) and five (321P, 322P, and 323P) apparitions. In the present work, estimates of nongravitational effects are obtained for these objects based on the pair-wise linkage of the apparitions. The calculations show that the observations of these objects are poorly reproduced if solely gravitational forces are considered. The magnitude of the nongravitational effects in the semimajor axis changes noticeably with time. The motion of all comets is significantly affected by the components of the nongravitational forces that are perpendicular to the orbital plane.
Distance-weighted city growth.
Rybski, Diego; García Cantú Ros, Anselmo; Kropp, Jürgen P
2013-04-01
Urban agglomerations exhibit complex emergent features of which Zipf's law, i.e., a power-law size distribution, and fractality may be regarded as the most prominent ones. We propose a simplistic model for the generation of city-like structures which is solely based on the assumption that growth is more likely to take place close to inhabited space. The model involves one parameter, an exponent determining how strongly the attraction decays with distance. In addition, the model is run iteratively so that existing clusters can grow (together) and new ones can emerge. The model is capable of reproducing the size distribution and the fractality of the boundary of the largest cluster. Although the power-law distribution depends on both the imposed exponent and the iteration, the fractality seems to be independent of the former and depends only on the latter. Analyzing land-cover data, we estimate the parameter value γ≈2.5 for Paris and its surroundings.
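A minimal sketch of the distance-weighted growth rule described above: empty cells are settled with probability proportional to d**(-gamma), where d is the distance to the nearest inhabited cell. For brevity this sketch adds one cell per iteration rather than letting whole clusters grow and merge as in the paper, and the grid size, exponent, and number of steps are arbitrary choices, so it is an illustration of the attraction rule rather than the authors' exact model.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

rng = np.random.default_rng(0)
L, gamma, steps = 200, 2.5, 5000     # grid size, decay exponent, cells to add
occ = np.zeros((L, L), dtype=bool)
occ[L // 2, L // 2] = True           # single seed cluster

for _ in range(steps):
    d = distance_transform_edt(~occ)          # distance to nearest inhabited cell
    w = np.zeros_like(d)
    w[~occ] = d[~occ] ** -gamma               # attraction decays as d**-gamma
    w /= w.sum()
    occ.flat[rng.choice(L * L, p=w.ravel())] = True   # settle one new cell
```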
Poupin, Joseph; Corbari, Laure
2016-11-13
A preliminary assessment of the deep-sea Decapoda is proposed for Guadeloupe Island based solely on high definition macro photographs taken during the KARUBENTHOS 2015 Expedition to the Island (R/V Antea, 7-29 June 2015). Overall, 190 species are recognized, several of which are depicted with their fresh color for the first time. Previous records in the Lesser Antilles are documented and the geographic distribution of the species in these Islands is given. The historical contribution of the steamer Blake (1878-1879) in the Lesser Antilles is emphasized. All species inventoried during KARUBENTHOS 2015 were already reported in the western Atlantic but 34 of them are new records for the Lesser Antilles and 116 are reported for the first time from Guadeloupe Island. This preliminary inventory is estimated to include about 38% of the deep-sea Decapoda potentially occurring around Guadeloupe Island.
Overdiagnosis in breast cancer screening: The impact of study design and calculations.
Lynge, Elsebeth; Beau, Anna-Belle; Christiansen, Peer; von Euler-Chelpin, My; Kroman, Niels; Njor, Sisse; Vejborg, Ilse
2017-07-01
Overdiagnosis in breast cancer screening is an important issue. A recent study from Denmark concluded that one in three breast cancers diagnosed in screening areas in women aged 50-69 years was overdiagnosed. The purpose of this short communication was to disentangle the study's methodology in order to evaluate the soundness of this conclusion. We found that the use of absolute differences as opposed to ratios, the sole focus on non-advanced tumours, and the crude allocation of tumours and person-years by screening history for women aged 70-84 years all contributed to the very high estimate of overdiagnosis. Screening affects cohorts of screened women. Danish registers allow very accurate mapping of the fate of every woman. We should be past the phase where studies of overdiagnosis are based on the fixed age groups from routine statistics. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
High-resolution behavioral mapping of electric fishes in Amazonian habitats.
Madhav, Manu S; Jayakumar, Ravikrishnan P; Demir, Alican; Stamper, Sarah A; Fortune, Eric S; Cowan, Noah J
2018-04-11
The study of animal behavior has been revolutionized by sophisticated methodologies that identify and track individuals in video recordings. Video recording of behavior, however, is challenging for many species and habitats including fishes that live in turbid water. Here we present a methodology for identifying and localizing weakly electric fishes on the centimeter scale with subsecond temporal resolution based solely on the electric signals generated by each individual. These signals are recorded with a grid of electrodes and analyzed using a two-part algorithm that identifies the signals from each individual fish and then estimates the position and orientation of each fish using Bayesian inference. Interestingly, because this system involves eavesdropping on electrocommunication signals, it permits monitoring of complex social and physical interactions in the wild. This approach has potential for large-scale non-invasive monitoring of aquatic habitats in the Amazon basin and other tropical freshwater systems.
Shale Gas Boom or Bust? Estimating US and Global Economically Recoverable Resources
NASA Astrophysics Data System (ADS)
Brecha, R. J.; Hilaire, J.; Bauer, N.
2014-12-01
One of the most disruptive energy system technological developments of the past few decades is the rapid expansion of shale gas production in the United States. Because the changes have been so rapid there are great uncertainties as to the impacts of shale production for medium- and long-term energy and climate change mitigation policies. A necessary starting point for incorporating shale resources into modeling efforts is to understand the size of the resource, how much is technically recoverable (TRR), and finally, how much is economically recoverable (ERR) at a given cost. To assess production costs of shale gas, we combine top-down data with detailed bottom-up information. Studies solely based on top-down approaches do not adequately account for the heterogeneity of shale gas deposits and are unlikely to appropriately estimate extraction costs. We design an expedient bottom-up method based on publicly available US data to compute the levelized costs of shale gas extraction. Our results indicate the existence of economically attractive areas but also reveal a dramatic cost increase as lower-quality reservoirs are exploited. Extrapolating results for the US to the global level, our best estimate suggests that, at a cost of 6 US$/GJ, only 39% of the technically recoverable resources reported in top-down studies should be considered economically recoverable. This estimate increases to about 77% when considering optimistic TRR and estimated ultimate recovery parameters but could be lower than 12% for more pessimistic parameters. The current lack of information on the heterogeneity of shale gas deposits as well as on the development of future production technologies leads to significant uncertainties regarding recovery rates and production costs. Much of this uncertainty may be inherent, but for energy system planning purposes, with or without climate change mitigation policies, it is crucial to recognize the full ranges of recoverable quantities and costs.
Information's role in the estimation of chaotic signals
NASA Astrophysics Data System (ADS)
Drake, Daniel Fred
1998-11-01
Researchers have proposed several methods designed to recover chaotic signals from noise-corrupted observations. While the methods vary, their qualitative performance does not: in low levels of noise all methods effectively recover the underlying signal; in high levels of noise no method can recover the underlying signal to any meaningful degree of accuracy. Of the methods proposed to date, all represent sub-optimal estimators. So: Is the inability to recover the signal in high noise levels simply a consequence of estimator sub-optimality? Or is estimator failure actually a manifestation of some intrinsic property of chaos itself? These questions are answered by deriving an optimal estimator for a class of chaotic systems and noting that it, too, fails in high levels of noise. An exact, closed-form expression for the estimator is obtained for a class of chaotic systems whose signals are solutions to a set of linear (but noncausal) difference equations. The existence of this linear description circumvents the difficulties normally encountered when manipulating the nonlinear (but causal) expressions that govern chaotic behavior. The reason why even the optimal estimator fails to recover underlying chaotic signals in high levels of noise has its roots in information theory. At such noise levels, the mutual information linking the corrupted observations to the underlying signal is essentially nil, reducing the estimator to a simple guessing strategy based solely on a priori statistics. Entropy, long the common bond between information theory and dynamical systems, is actually one aspect of a far more complete characterization of information sources: the rate distortion function. Determining the rate distortion function associated with the class of chaotic systems considered in this work provides bounds on estimator performance in high levels of noise. Finally, a slight modification of the linear description leads to a method of synthesizing on limited precision platforms "pseudo-chaotic" sequences that mimic true chaotic behavior to any finite degree of precision and duration. The use of such a technique in spread-spectrum communications is considered.
Parameterising User Uptake in Economic Evaluations: The role of discrete choice experiments.
Terris-Prestholt, Fern; Quaife, Matthew; Vickerman, Peter
2016-02-01
Model-based economic evaluations of new interventions have shown that user behaviour (uptake) is a critical driver of overall impact achieved. However, early economic evaluations, prior to introduction, often rely on assumed levels of uptake based on expert opinion or uptake of similar interventions. In addition to the likely uncertainty surrounding these uptake assumptions, they also do not allow for uptake to be a function of product, intervention, or user characteristics. This letter proposes using uptake projections from discrete choice experiments (DCE) to better parameterize uptake and substitution in cost-effectiveness models. A simple impact model is developed and illustrated using an example from the HIV prevention field in South Africa. Comparison between the conventional approach and the DCE-based approach shows that, in our example, DCE-based impact predictions varied by up to 50% from conventional estimates and provided far more nuanced projections. In the absence of observed uptake data and to model the effect of variations in intervention characteristics, DCE-based uptake predictions are likely to greatly improve models parameterizing uptake solely based on expert opinion. This is particularly important for global and national level decision making around introducing new and probably more expensive interventions, particularly where resources are most constrained. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd.
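The letter argues for replacing expert-opinion uptake assumptions with DCE-derived choice probabilities. A standard way to turn DCE utility coefficients into predicted uptake and substitution is the multinomial logit model sketched below; the coefficient names and values, the alternatives, and the downstream impact calculation are hypothetical illustrations rather than the authors' model.

```python
import numpy as np

def choice_probabilities(utilities):
    """Multinomial-logit shares implied by DCE utility estimates."""
    expu = np.exp(utilities - np.max(utilities))   # max-shift for numerical stability
    return expu / expu.sum()

# hypothetical DCE coefficients: alternative-specific constant, cost, efficacy
beta = {"asc": 0.4, "cost": -0.08, "efficacy": 0.03}

def utility(asc, cost, efficacy):
    return beta["asc"] * asc + beta["cost"] * cost + beta["efficacy"] * efficacy

# alternatives: new product, existing product, opt-out (utility normalized to 0)
U = np.array([utility(1, 15.0, 90), utility(1, 5.0, 70), 0.0])
uptake = choice_probabilities(U)              # predicted uptake and substitution
infections_averted = 1000 * uptake[0] * 0.6   # uptake feeds a toy impact model
```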
Estimating Per-Pixel GPP of the Contiguous USA Directly from MODIS EVI Data
NASA Astrophysics Data System (ADS)
Rahman, A. F.; Sims, D. A.; El-Masri, B. Z.; Cordova, V. D.
2005-12-01
We estimated gross primary production (GPP) of the contiguous USA using enhanced vegetation index (EVI) data from NASA's moderate resolution imaging spectroradiometer (MODIS). Based on recently published values of correlation coefficients between EVI and GPP of North American vegetation types, we derived GPP maps of the contiguous USA for 2001-2004, which included one La Niña year and three moderate El Niño years. The product was a truly per-pixel GPP estimate (named E-GPP), in contrast to the pseudo-continuous MOD17, the standard MODIS GPP product. We compared E-GPP with fine-scale experimental GPP data and MOD17 estimates from three Bigfoot experimental sites, and also with MOD17 estimates for the whole contiguous USA for the above-mentioned four years. For each of the 7 by 7 km Bigfoot experimental sites, E-GPP was able to track the primary production activity during the green-up period while MOD17 failed to do so. The E-GPP estimates during the peak production season were similar to those from Bigfoot and MOD17 for most vegetation types except for the deciduous types, where they were lower. Annual E-GPP of the Bigfoot sites compared well with Bigfoot experimental GPP (r = 0.71) and MOD17 (r = 0.78). But for the contiguous USA for 2001-2004, annual E-GPP showed disagreement with MOD17 in both magnitude and seasonal trends for deciduous forests and grasslands. In this study we explored the reasons for this mismatch between E-GPP and MOD17 and also analyzed the uncertainties in E-GPP across multiple spatial scales. Our results show that E-GPP, based on a simple regression model, can work as a robust alternative to MOD17 for large-area annual GPP estimation. The relative advantages of E-GPP are that it is truly per-pixel, solely dependent on remotely sensed data that is routinely available from NASA, easy to compute, and has the potential of being used as an operational product.
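The E-GPP product described above maps EVI to GPP through published biome-specific linear relationships. The sketch below shows the per-pixel form of such a calculation; the coefficients and the toy EVI tile are hypothetical and not the values used in the study.

```python
import numpy as np

# hypothetical biome-specific regression coefficients (GPP = a * EVI + b),
# standing in for the published EVI-GPP correlations the study relies on
coeffs = {
    "deciduous_forest": (28.0, -2.0),   # gC m-2 per composite period per unit EVI
    "grassland":        (20.0, -1.0),
}

def egpp(evi, biome):
    """Per-pixel GPP estimate directly from MODIS EVI for one composite period."""
    a, b = coeffs[biome]
    return np.clip(a * np.asarray(evi) + b, 0.0, None)   # GPP cannot be negative

evi_scene = np.array([[0.21, 0.35], [0.48, 0.60]])        # toy 2x2 EVI tile
print(egpp(evi_scene, "grassland"))
```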
Technology management: a perspective on system support, procurement, and replacement planning.
Dickerson, M L; Jackson, M E
1992-01-01
The escalating costs associated with medical technology present a host of challenges for the hospital clinical engineering department. As service and support costs comprise ever larger portions of a system's life cycle cost, innovative management of service provider mix and mechanisms can provide substantial savings in operating expenses. In addition to full-service contracts, the use of demand service and independents has become commonplace. Medical equipment maintenance insurance programs provide yet another service alternative, combining the flexibility of demand service with the safety of a capped budget. These programs have gained acceptance among hospitals as their providers have become more focused on the healthcare market and its many needs. In view of the long-term cost impact surrounding technology procurement, the authors recommend that hospitals refine system evaluation methodologies and develop more comprehensive techniques directed at capital equipment replacement planning. One replacement planning approach, based on an estimation of system value changes, is described and illustrated using data collected through client consultations. Although the validity of this method has not been demonstrated, it represents a simplified approach to life cycle cost analysis and is intended to provide a standard method by which system replacement planning may be quantified. As a departure from system devaluation based solely on depreciation, this method estimates prospective system values derived from anticipated operations and maintenance costs, projected revenue, and the availability of new technology.
Modeling receptor kinetics in the analysis of survival data for organophosphorus pesticides.
Jager, Tjalling; Kooijman, Sebastiaan A L M
2005-11-01
Acute ecotoxicological tests usually focus on survival at a standardized exposure time. However, LC50s decrease in time in a manner that depends both on the chemical and on the organism. DEBtox is an existing approach to analyze toxicity data in time, based on hazard modeling (the internal concentration increases the probability of dying). However, certain chemicals elicit their response through (irreversible) interaction with a specific receptor, such as inhibition of acetylcholinesterase (AChE). Effects therefore do not solely depend on the actual internal concentration, but also on its (recent) past. In this paper, the DEBtox method is extended with a simple mechanistic model to deal with receptor interactions. We analyzed data from the literature for organophosphorus pesticides in guppies, fathead minnows, and springtails. Overall, the observed survival patterns do not clearly differ from those of chemicals with a less-specific mode of action. However, using the receptor model, the resulting parameter estimates are easier to interpret in terms of underlying mechanisms and reveal similarities between the various pesticides. We observed that the no-effect concentration estimated from the receptor model is basically identical to the value from standard DEBtox, illustrating the robustness of this summary statistic.
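As a rough illustration of the receptor-interaction extension described above, the sketch below couples one-compartment toxicokinetics to irreversible receptor (AChE) inactivation and lets the hazard rate depend on the inhibited fraction above a no-effect level rather than on the internal concentration itself. All rate constants are hypothetical and the model is a simplification, not the authors' DEBtox parameterization.

```python
import numpy as np

def survival(c_water, t, ke=0.5, ki=0.2, inh0=0.1, kill_rate=2.0, dt=0.01):
    """Survival over time for a constant exposure, with mortality driven by
    the (irreversible) inhibited receptor fraction rather than by the
    internal concentration itself."""
    times = np.arange(0.0, t + dt, dt)
    ci, active, hazard_int = 0.0, 1.0, 0.0
    surv = []
    for _ in times:
        ci += ke * (c_water - ci) * dt                   # one-compartment uptake
        active += -ki * ci * active * dt                 # irreversible AChE inhibition
        inhibited = 1.0 - active
        hazard = kill_rate * max(0.0, inhibited - inh0)  # no-effect level inh0
        hazard_int += hazard * dt                        # Euler integration of hazard
        surv.append(np.exp(-hazard_int))
    return times, np.array(surv)

times, s = survival(c_water=1.0, t=4.0)   # 4-day exposure at unit concentration
```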
Development of California Public Health Goals (PHGs) for chemicals in drinking water.
Howd, R A; Brown, J P; Morry, D W; Wang, Y Y; Bankowska, J; Budroe, J D; Campbell, M; DiBartolomeis, M J; Faust, J; Jowa, L; Lewis, D; Parker, T; Polakoff, J; Rice, D W; Salmon, A G; Tomar, R S; Fan, A M
2000-01-01
As part of a program for evaluation of environmental contaminants in drinking water, risk assessments are being conducted to develop Public Health Goals (PHGs) for chemicals in drinking water, based solely on public health considerations. California's Safe Drinking Water Act of 1996 mandated the development of PHGs for over 80 chemicals by 31 December 1999. The law allowed these levels to be set higher or lower than federal maximum contaminant levels (MCLs), including a level of zero if data are insufficient to determine a specific level. The estimated safe levels and toxicological rationale for the first 26 of these chemicals are described here. The chemicals include alachlor, antimony, benzo[a]pyrene, chlordane, copper, cyanide, dalapon, 1,2-dichlorobenzene, 1,4-dichlorobenzene, 2,4-D, diethylhexylphthalate, dinoseb, endothall, ethylbenzene, fluoride, glyphosate, lead, nitrate, nitrite, oxamyl, pentachlorophenol, picloram, trichlorofluoromethane, trichlorotrifluoroethane, uranium and xylene(s). These risk assessments are to be considered by the State of California in revising and developing state MCLs for chemicals in drinking water (which must not exceed federal MCLs). The estimates are also notable for incorporation or consideration of newer guidelines and principles for risk assessment extrapolations.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-13
...NMFS is prohibiting directed fishing for arrowtooth flounder, flathead sole, rex sole, deep-water flatfish, and shallow-water flatfish in the Western Regulatory Area of the Gulf of Alaska (GOA). This action is necessary to limit incidental catch of Pacific ocean perch by vessels fishing for arrowtooth flounder, flathead sole, rex sole, deep-water flatfish, and shallow-water flatfish in the Western Regulatory Area of the GOA.
Palmoplantar Dermatoses- A Clinical Study of 300 Cases
Rajashekhar, Nadiga; Gejje, Somashekar
2016-01-01
Introduction Dermatoses affecting palms and soles are among the most difficult of all dermatological therapeutic problems. Many previous studies have focused on the specific diseases of palmoplantar dermatoses. However, none of them have included a comprehensive study of palmoplantar dermatoses. Aims: To study the epidemiological aspects like age distribution, sex distribution, the dermatoses affecting the palms & soles and the frequency of involvement of palms, soles or both palms & soles, in patients with palmoplantar dermatoses. Materials and Methods This cross-sectional study was conducted in the Department of Dermatology between October 2011 and September 2013. The first 300 cases attending the department of dermatology primarily with complaints pertaining to palms and soles were enrolled in the study. After taking consent, a detailed history and clinical examination pertaining to the aim of the study were recorded and analysed, including inspection of the morphology and distribution of lesions and palpation of any swelling. Direct microscopic examination of scrapings, wet mounted with 10% potassium hydroxide, was done for cases with scaly lesions. For those who had a pustule, Gram staining was done. Patch testing using the Indian Standard Battery Series was done for cases of eczema. A biopsy sample was taken when a diagnosis could not be arrived at clinically, and was subjected to histopathological examination. Results In our study of 300 patients with palmoplantar dermatoses, 164 were females and 136 were males, the ratio observed being 1.2:1. The peak incidence was found in the age group 21-30 years, with 41 females (25%) and 35 males (25.7%). The most frequently affected individuals in this study were housewives (30%). The five most common palmoplantar dermatoses were palmoplantar psoriasis (20.7%), moniliasis (19%), palmoplantar hyperhidrosis (7%), keratolysis exfoliativa (6%) and pitted keratolysis (6%). The majority of patients had involvement of both palms and soles (44.3%), as compared to patients with involvement of only the palms (28%) or only the soles (27.3%). The commonest palmoplantar dermatosis with only palm involvement was keratolysis exfoliativa (16.7%), with only sole involvement it was moniliasis (41%), and with involvement of both palms and soles it was palmoplantar psoriasis (41.4%). Associated nail changes were seen in 80 cases (26.6%), with the maximum incidence in palmoplantar psoriasis (62.5%). Associated dermatological conditions were observed in 43 patients (14.3%). Conclusion Palmoplantar dermatoses are frequently encountered in the dermatologic field. Further investigation with a wider and larger population is necessary to understand the epidemiology, based on which accurate diagnosis and proper treatment could be achieved. PMID:27656539
Enhanced ID Pit Sizing Using Multivariate Regression Algorithm
NASA Astrophysics Data System (ADS)
Krzywosz, Kenji
2007-03-01
EPRI is funding a program to enhance and improve the reliability of inside diameter (ID) pit sizing for balance-of-plant heat exchangers, such as condensers and component cooling water heat exchangers. More traditional approaches to ID pit sizing involve the use of frequency-specific amplitudes or phase angles. The enhanced multivariate regression algorithm for ID pit depth sizing incorporates three simultaneous input parameters: frequency, amplitude, and phase angle. Calibration data sets consisting of machined pits of various rounded and elongated shapes and depths were acquired in the frequency range of 100 kHz to 1 MHz for stainless steel tubing having a nominal wall thickness of 0.028 inch. To add noise to the acquired data set, each test sample was rotated and test data acquired at the 3, 6, 9, and 12 o'clock positions. The ID pit depths were estimated using second-order and fourth-order regression functions, relying on normalized amplitude and phase angle information from multiple frequencies. Due to the unique damage morphology associated with microbiologically influenced ID pits, it was necessary to modify the algorithms based on the elongated calibration standard by relying on an algorithm developed solely from the destructive sectioning results. This paper presents the use of a transformed multivariate regression algorithm to estimate ID pit depths and compares the results with the traditional univariate phase angle analysis. Both estimates were then compared with the destructive sectioning results.
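A minimal sketch of the kind of multivariate regression used for ID pit depth sizing: a polynomial surface in frequency, amplitude, and phase angle fitted to calibration pits by least squares. The calibration values below are invented, and the example call uses a first-order surface simply to keep the toy data set overdetermined; the paper itself used second- and fourth-order functions on a much larger calibration set.

```python
import numpy as np

def design_matrix(freq, amp, phase, order=2):
    """Monomial terms in frequency, amplitude and phase angle up to `order`."""
    f, a, p = (np.asarray(v, float) for v in (freq, amp, phase))
    cols = [np.ones_like(f)]
    for d in range(1, order + 1):
        for i in range(d + 1):
            for j in range(d - i + 1):
                cols.append(f**i * a**j * p**(d - i - j))
    return np.column_stack(cols)

def fit_pit_depth(freq, amp, phase, depth, order=2):
    """Least-squares fit of calibration pit depths to a polynomial surface
    in (frequency, amplitude, phase); returns a predictor function."""
    X = design_matrix(freq, amp, phase, order)
    coef, *_ = np.linalg.lstsq(X, np.asarray(depth, float), rcond=None)
    return lambda f, a, p: design_matrix(f, a, p, order) @ coef

# hypothetical calibration pits: (frequency in MHz, normalized amplitude,
# phase angle in degrees) -> fractional through-wall depth
predict = fit_pit_depth([0.1, 0.3, 0.6, 1.0, 0.3, 0.6],
                        [0.8, 1.5, 2.4, 3.1, 1.1, 1.9],
                        [40, 75, 110, 150, 60, 95],
                        [0.2, 0.4, 0.6, 0.8, 0.3, 0.5],
                        order=1)
print(predict([0.6], [2.0], [100]))   # predicted fractional through-wall depth
```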
Rapid gait termination: effects of age, walking surfaces and footwear characteristics.
Menant, Jasmine C; Steele, Julie R; Menz, Hylton B; Munro, Bridget J; Lord, Stephen R
2009-07-01
The aim of this study was to systematically investigate the influence of various walking surfaces and footwear characteristics on the ability to terminate gait rapidly in 10 young and 26 older people. Subjects walked at a self-selected speed in eight randomized shoe conditions (standard versus elevated heel, soft sole, hard sole, high-collar, flared sole, bevelled heel and tread sole) on three surfaces: control, irregular and wet. In response to an audible cue, subjects were required to stop as quickly as possible in three out of eight walking trials in each condition. Time to last foot contact, total stopping time, stopping distance, number of steps to stop, step length and step width post-cue and base of support length at total stop were calculated from kinematic data collected using two CODA scanner units. The older subjects took more time and a longer distance to last foot contact and were more frequently classified as using a three-or-more-step stopping strategy compared to the young subjects. The wet surface impeded gait termination, as indicated by greater total stopping time and stopping distance. Subjects required more time to terminate gait in the soft sole shoes compared to the standard shoes. In contrast, the high-collar shoes reduced total stopping time on the wet surface. These findings suggest that older adults have more difficulty terminating gait rapidly than their younger counterparts and that footwear is likely to influence whole-body stability during challenging postural tasks on wet surfaces.
Effect of ABCD transformations on beam paraxiality.
Vaveliuk, Pablo; Martinez-Matos, Oscar
2011-12-19
The limits of the paraxial approximation for a laser beam under ABCD transformations are established through the relationship between a parameter characterizing the beam paraxiality, the paraxial estimator, and the beam second-order moments. The applicability of such an estimator is extended to an optical system composed of optical elements such as mirrors and lenses and sections of free space, which completes the analysis previously performed for free-space propagation alone. As an example, the paraxiality of a system composed of free space and a spherical thin lens under the propagation of Hermite-Gauss and Laguerre-Gauss modes is established. The results show that the paraxial approximation fails for a certain feasible range of values of the main parameters. In this sense, the paraxial estimator is a useful tool for monitoring the limits of paraxial optics theory under ABCD transformations.
Influence of eye micromotions on spatially resolved refractometry
NASA Astrophysics Data System (ADS)
Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Osipova, Irina Y.
2001-01-01
The influence of eye micromotions on the accuracy of estimating Zernike coefficients from eye transverse aberration measurements was investigated. By computer modeling, the following eye aberrations were examined: defocusing, primary astigmatism, spherical aberration of the 3rd and 5th orders, as well as their combinations. It was determined that the standard deviation of estimated Zernike coefficients is proportional to the standard deviation of angular eye movements. Eye micromotions cause estimation errors in the Zernike coefficients of aberrations that are present and produce spurious Zernike coefficients for aberrations absent in the eye. When solely defocusing is present, the biggest errors caused by eye micromotions are obtained for aberrations like coma and astigmatism. In comparison with other aberrations, spherical aberration of the 3rd and 5th orders evokes the greatest increase in the standard deviation of other Zernike coefficients.
NASA Astrophysics Data System (ADS)
Katayama, Satoshi; Yamamoto, Masayuki; Gorie, Shigeaki
2010-11-01
We developed an ageing methodology and examined the age composition of three flatfish stocks inhabiting the Seto Inland Sea, Japan. Ages were difficult to determine for three-lined tongue sole (Cynoglossus abbreviatus) and ridged-eye flounder (Pleuronichthys cornutus) because the first-year annulus ring was often indistinct; therefore, we used the directional change in otolith growth to distinguish it. Sectioning and etching methods were powerful tools for identifying annual checks for red tongue sole (Cynoglossus joyneri). Using these ageing methods, we determined age-length relationships and growth curves. The age composition of the populations studied and of the landings showed that a large proportion of the latter consisted of individuals under the mean age of sexual maturity, thereby reducing the percent spawning potential ratio (%SPR) to ≈20% for all species. These findings suggest that fishing pressure on immature fish is leading to overfishing of these flatfish stocks.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-03
.... 0910131363-0087-02] RIN 0648-XW74 Fisheries of the Exclusive Economic Zone Off Alaska; Rock Sole, Flathead... participating in the Amendment 80 limited access fishery in the Bering Sea and Aleutian Islands management area... the trawl rock sole, flathead sole, and ``other flatfish'' fishery category by vessels participating...
NASA Astrophysics Data System (ADS)
Abedini, M. J.; Nasseri, M.; Burn, D. H.
2012-04-01
In any geostatistical study, an important consideration is the choice of an appropriate, repeatable, and objective search strategy that controls the nearby samples to be included in the location-specific estimation procedure. Almost all geostatistical software available in the market puts the onus on the user to supply search strategy parameters in a heuristic manner. These parameters are solely controlled by geographical coordinates that are defined for the entire area under study, and the user has no guidance as to how to choose these parameters. The main thesis of the current study is that the selection of search strategy parameters has to be driven by data—both the spatial coordinates and the sample values—and cannot be chosen beforehand. For this purpose, a genetic-algorithm-based ordinary kriging with moving neighborhood technique is proposed. The search capability of a genetic algorithm is exploited to search the feature space for appropriate, either local or global, search strategy parameters. Radius of circle/sphere and/or radii of standard or rotated ellipse/ellipsoid are considered as the decision variables to be optimized by GA. The superiority of GA-based ordinary kriging is demonstrated through application to the Wolfcamp Aquifer piezometric head data. Assessment of numerical results showed that definition of search strategy parameters based on both geographical coordinates and sample values improves cross-validation statistics when compared with that based on geographical coordinates alone. In the case of a variable search neighborhood for each estimation point, optimization of local search strategy parameters for an elliptical support domain—the orientation of which is dictated by anisotropic axes—via GA was able to capture the dynamics of piezometric head in west Texas/New Mexico in an efficient way.
NASA Astrophysics Data System (ADS)
Bloom, A. Anthony; Bowman, Kevin W.; Lee, Meemong; Turner, Alexander J.; Schroeder, Ronny; Worden, John R.; Weidner, Richard; McDonald, Kyle C.; Jacob, Daniel J.
2017-06-01
Wetland emissions remain one of the principal sources of uncertainty in the global atmospheric methane (CH4) budget, largely due to poorly constrained process controls on CH4 production in waterlogged soils. Process-based estimates of global wetland CH4 emissions and their associated uncertainties can provide crucial prior information for model-based top-down CH4 emission estimates. Here we construct a global wetland CH4 emission model ensemble for use in atmospheric chemical transport models (WetCHARTs version 1.0). Our 0.5° × 0.5° resolution model ensemble is based on satellite-derived surface water extent and precipitation reanalyses, nine heterotrophic respiration simulations (eight carbon cycle models and a data-constrained terrestrial carbon cycle analysis) and three temperature dependence parameterizations for the period 2009-2010; an extended ensemble subset based solely on precipitation and the data-constrained terrestrial carbon cycle analysis is derived for the period 2001-2015. We incorporate the mean of the full and extended model ensembles into GEOS-Chem and compare the model against surface measurements of atmospheric CH4; the model performance (site-level and zonal mean anomaly residuals) compares favourably against published wetland CH4 emissions scenarios. We find that uncertainties in carbon decomposition rates and the wetland extent together account for more than 80 % of the dominant uncertainty in the timing, magnitude and seasonal variability in wetland CH4 emissions, although uncertainty in the temperature CH4 : C dependence is a significant contributor to seasonal variations in mid-latitude wetland CH4 emissions. The combination of satellite, carbon cycle models and temperature dependence parameterizations provides a physically informed structural a priori uncertainty that is critical for top-down estimates of wetland CH4 fluxes. Specifically, our ensemble can provide enhanced information on the prior CH4 emission uncertainty and the error covariance structure, as well as a means for using posterior flux estimates and their uncertainties to quantitatively constrain the biogeochemical process controls of global wetland CH4 emissions.
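The ensemble structure described above (wetland extent multiplied by heterotrophic respiration and a temperature dependence, under a global scale factor) can be sketched as below. The scale factor, Q10 values, reference temperature, and toy grid values are hypothetical placeholders, not the published WetCHARTs v1.0 parameters.

```python
import numpy as np

def wetland_ch4_member(area_frac, resp_h, temp_c, q10=2.0, scale=0.03, t_ref=15.0):
    """One ensemble member: emissions = scale * wetland extent * heterotrophic
    respiration * Q10-type temperature factor (all gridded fields)."""
    temp_factor = q10 ** ((np.asarray(temp_c) - t_ref) / 10.0)
    return scale * np.asarray(area_frac) * np.asarray(resp_h) * temp_factor

# toy 2-cell example: extent fraction, respiration (gC m-2 d-1), temperature (degC)
members = [wetland_ch4_member([0.2, 0.05], [3.0, 1.5], [22.0, 8.0], q10=q)
           for q in (1.5, 2.0, 3.0)]        # vary the temperature dependence
ensemble_mean = np.mean(members, axis=0)
ensemble_spread = np.std(members, axis=0)   # structural prior uncertainty
```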
EM Bias-Correction for Ice Thickness and Surface Roughness Retrievals over Rough Deformed Sea Ice
NASA Astrophysics Data System (ADS)
Li, L.; Gaiser, P. W.; Allard, R.; Posey, P. G.; Hebert, D. A.; Richter-Menge, J.; Polashenski, C. M.
2016-12-01
Very rough, ridged sea ice accounts for a significant percentage of the total ice area and an even larger percentage of the total volume. Commonly used radar altimeter surface detection techniques are empirical in nature and work well only over level/smooth sea ice. Rough sea ice surfaces can modify the return waveforms, resulting in significant electromagnetic (EM) bias in the estimated surface elevations, and thus large errors in the ice thickness retrievals. To understand and quantify such sea ice surface roughness effects, a combined EM rough surface and volume scattering model was developed to simulate radar returns from the rough sea ice 'layer cake' structure. A waveform matching technique was also developed to fit observed waveforms to a physically based waveform model and subsequently correct the roughness-induced EM bias in the estimated freeboard. This new EM Bias Corrected (EMBC) algorithm was able to better retrieve surface elevations and estimate the surface roughness parameter simultaneously. In situ data from multi-instrument airborne and ground campaigns were used to validate the ice thickness and surface roughness retrievals. For the surface roughness retrievals, we applied this EMBC algorithm to coincident LiDAR/radar measurements collected during a CryoSat-2 underflight by the NASA IceBridge missions. Results show that not only does the waveform model fit the measured radar waveform very well, but also the roughness parameters derived independently from the LiDAR and radar data agree very well for both level and deformed sea ice. For sea ice thickness retrievals, validation based on in situ data from the coordinated CRREL/NRL field campaign demonstrates that the physically based EMBC algorithm performs fundamentally better than the empirical algorithm over very rough deformed sea ice, suggesting that sea ice surface roughness effects can be modeled and corrected based solely on the radar return waveforms.
NASA Astrophysics Data System (ADS)
Dierking, Jan; Morat, Fabien; Letourneur, Yves; Harmelin-Vivien, Mireille
2012-06-01
The commercially important marine flatfish common sole (Solea solea) facultatively uses NW Mediterranean lagoons as nurseries. To assess the imprint left by the lagoonal passage, muscle carbon (C) and nitrogen (N) isotope values of S. solea juveniles caught in Mauguio lagoon in spring (shortly after arrival from the sea) and in autumn (before the return to the sea) were compared with values of juveniles from adjacent coastal marine nurseries. In addition, in the lagoon, sole otolith stable isotope (C and oxygen (O)) and elemental (11 elements) composition in spring and autumn, and the stable isotope composition (C and N) of organic matter sources in autumn, were determined. Overall, our data indicate that a distinct lagoonal signature existed. Specifically, lagoon soles showed a strong enrichment in muscle tissue 15N (>6‰) compared to their coastal relatives, likely linked to sewage inputs (see below), and a depletion in 13C (1-2‰), indicative of the higher importance of 13C-depleted terrestrial particulate organic matter (POM) in the lagoon compared to coastal nurseries. In addition, over the time spent in the lagoon, sole otolith δ13C and δ18O values and otolith elemental composition changed significantly. Analysis of the lagoon sole foodweb based on C and N isotopes placed sediment POM at the base. Seagrasses, formerly common but in decline in Mauguio lagoon, played a minor role in the detritus cycle. The very strong 15N enrichment of the entire foodweb (+7 to +11‰) compared to little-impacted lagoons and coastal areas testified to important human sewage inputs. Regarding the S. solea migration, the analysis of fast-growing, high-turnover muscle tissue and of metabolically inert, slower-growing otoliths indicated that soles arrived at least several weeks prior to capture in spring, and that no migrations took place in summer. In the autumn, the high muscle δ15N value acquired in Mauguio lagoon would be a good marker of recent return to the sea, whereas altered otolith δ18O values and elemental ratios hold promise as long-term markers. The combination of several complementary tracers from muscle and otoliths may present the chance to distinguish between fish from specific lagoons and coastal nurseries in the future.
Effect of rocker-soled shoes on parameters of knee joint load in knee osteoarthritis.
Madden, Elizabeth G; Kean, Crystal O; Wrigley, Tim V; Bennell, Kim L; Hinman, Rana S
2015-01-01
This study evaluated the immediate effects of rocker-soled shoes on parameters of the knee adduction moment (KAM) and pain in individuals with knee osteoarthritis (OA). Three-dimensional gait analysis was performed on 30 individuals (mean (SD): age, 61 (7) yr; 15 (50%) male) with radiographic and symptomatic knee OA under three walking conditions in a randomized order: i) wearing rocker-soled shoes (Skechers Shape-ups), ii) wearing non-rocker-soled shoes (ASICS walking shoes), and iii) barefoot. Peak KAM and KAM angular impulse were measured as primary indicators of knee load distribution. Secondary measures included the knee flexion moment (KFM) and knee pain during walking. Peak KAM was significantly lower when wearing the rocker-soled shoes compared with that when wearing the non-rocker-soled shoes (mean difference (95% confidence interval), -0.27 (-0.42 to -0.12) N·m/BW × Ht%; P < 0.001). Post hoc tests revealed no significant difference in KAM impulse between rocker-soled and non-rocker-soled shoe conditions (P = 0.13). Both peak KAM and KAM impulse were significantly higher during both shoe conditions compared with those during the barefoot condition (P < 0.001). There were no significant differences in KFM (P = 0.36) or knee pain (P = 0.89) between conditions. Rocker-soled shoes significantly reduced peak KAM when compared with non-rocker-soled shoes, without a concomitant change in KFM, and thus may potentially reduce medial knee joint loading. However, KAM parameters in the rocker-soled shoes remained significantly higher than those during barefoot walking. Wearing rocker-soled shoes did not have a significant immediate effect on walking pain. Further research is required to evaluate whether rocker-soled shoes can influence symptoms and progression of knee OA with prolonged wear.
NASA Technical Reports Server (NTRS)
Hatfield, Glen S.; Hark, Frank; Stott, James
2016-01-01
Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account system integration risks such as those attributable to manufacturing and assembly. These sources often dominate component level risk. While consequence of failure is often understood, using predicted values in a risk model to estimate the probability of occurrence may underestimate the actual risk. Managers and decision makers use the probability of occurrence to influence the determination whether to accept the risk or require a design modification. The actual risk threshold for acceptance may not be fully understood due to the absence of system level test data or operational data. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.
Development and validation of a short-lag spatial coherence theory for photoacoustic imaging
NASA Astrophysics Data System (ADS)
Graham, Michelle T.; Lediju Bell, Muyinatu A.
2018-02-01
We previously derived spatial coherence theory to be implemented for studying theoretical properties of Short-Lag Spatial Coherence (SLSC) beamforming applied to photoacoustic images. In this paper, our newly derived theoretical equation is evaluated to generate SLSC images of a point target and a 1.2 mm-diameter target, along with corresponding lateral profiles. We compared SLSC images simulated solely based on our theory to SLSC images created after beamforming acoustic channel data from k-Wave simulations of a 1.2 mm-diameter disc target. This process was repeated for a point target, and the full width at half maximum of the signal amplitude was measured to estimate the resolution of each imaging system. Resolution as a function of lag was comparable for the first 10% of the receive aperture (i.e., the short-lag region), after which resolution measurements diverged by a maximum of 1 mm between the two types of simulated images. These results indicate the potential for both simulation methods to be utilized as independent resources to study coherence-based photoacoustic beamformers when imaging point-like targets.
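For context, the sketch below implements the conventional SLSC beamformer value for a single pixel: normalized spatial correlations between receive channels are computed at each element separation (lag) over a small axial kernel and summed over the short-lag region. This is the standard SLSC formulation rather than the newly derived theoretical equation evaluated in the paper, and the aperture size, kernel length, and maximum lag are arbitrary.

```python
import numpy as np

def slsc_pixel(channel_data, max_lag):
    """Short-lag spatial coherence for one image pixel.

    channel_data: 2-D array (n_elements, n_depth_samples) of focused
    (delayed) channel data inside a small axial kernel around the pixel.
    """
    n_elem, _ = channel_data.shape
    slsc = 0.0
    for m in range(1, max_lag + 1):                 # the "short-lag" region
        r_m = []
        for i in range(n_elem - m):
            a, b = channel_data[i], channel_data[i + m]
            denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
            if denom > 0:
                r_m.append(np.sum(a * b) / denom)   # normalized correlation at lag m
        slsc += np.mean(r_m)
    return slsc

# toy example: 64 receive elements, 16-sample kernel, lags up to ~10% of aperture
rng = np.random.default_rng(1)
data = rng.standard_normal((64, 16)) + 1.0          # crude coherent component
print(slsc_pixel(data, max_lag=6))
```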
Comparison of segmentation algorithms for fluorescence microscopy images of cells.
Dima, Alden A; Elliott, John T; Filliben, James J; Halter, Michael; Peskin, Adele; Bernal, Javier; Kociolek, Marcin; Brady, Mary C; Tang, Hai C; Plant, Anne L
2011-07-01
The analysis of fluorescence microscopy of cells often requires the determination of cell edges. This is typically done using segmentation techniques that separate the cell objects in an image from the surrounding background. This study compares segmentation results from nine different segmentation techniques applied to two different cell lines and five different sets of imaging conditions. Significant variability in the results of segmentation was observed that was due solely to differences in imaging conditions or applications of different algorithms. We quantified and compared the results with a novel bivariate similarity index metric that evaluates the degree of underestimating or overestimating a cell object. The results show that commonly used threshold-based segmentation techniques are less accurate than k-means clustering with multiple clusters. Segmentation accuracy varies with imaging conditions that determine the sharpness of cell edges and with geometric features of a cell. Based on this observation, we propose a method that quantifies cell edge character to provide an estimate of how accurately an algorithm will perform. The results of this study will assist the development of criteria for evaluating interlaboratory comparability. Published 2011 Wiley-Liss, Inc.
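The exact bivariate similarity index used in the study is not reproduced here; the sketch below shows one plausible bivariate view of segmentation error, reporting separately the fraction of a reference cell object that a segmentation misses (underestimation) and the fraction of the segmented object lying outside the reference (overestimation). The masks and sizes are toy examples.

```python
import numpy as np

def under_over_estimates(segmented, reference):
    """Bivariate view of segmentation error for one cell object: fraction of the
    reference object missed (underestimation) and fraction of the segmented
    object lying outside it (overestimation); (0, 0) is a perfect match."""
    seg, ref = np.asarray(segmented, bool), np.asarray(reference, bool)
    missed = np.logical_and(ref, ~seg).sum() / ref.sum()
    extra = np.logical_and(seg, ~ref).sum() / seg.sum()
    return missed, extra

seg = np.zeros((50, 50), bool); seg[10:30, 10:32] = True   # toy segmentation mask
ref = np.zeros((50, 50), bool); ref[12:32, 10:30] = True   # toy reference mask
print(under_over_estimates(seg, ref))
```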
Zubrick, Stephen R; Hafekost, Jennifer; Johnson, Sarah E; Lawrence, David; Saw, Suzy; Sawyer, Michael; Ainley, John; Buckingham, William J
2016-09-01
To (1) estimate the lifetime and 12-month prevalence of suicidal behaviours in Australian young people aged 12-17 years, (2) describe their co-morbidity with mental illness and (3) describe the co-variation of these estimates with social and demographic variables. A national random sample of children aged 4-17 years was recruited in 2013-2014. The response rate to the survey was 55% with 6310 parents and carers of eligible households participating. In addition, of the 2967 young people aged 11-17 years in these households, 89% (2653) of the 12- to 17-year-olds completed a self-report questionnaire that included questions about suicidal behaviour. In any 12-month period, about 2.4% or 41,400 young people would have made a suicide attempt. About 7.5% of 12- to 17-year-olds report having suicidal ideation, 5.2% making a plan and less than 1% (0.6%) receiving medical treatment for an attempt. The presence of a mental disorder shows the largest significant association with lifetime and 12-month suicidal behaviour, along with age, gender, sole parent family status and poor family functioning. Of young people with a major depressive disorder, 19.7% reported making a suicide attempt within the previous 12 months. There are also significant elevations in the proportions of young people reporting suicidal behaviour who have anxiety and conduct disorders. Mental disorders should be a leading intervention point for suicide prevention both in the primary health sector and in the mental health sector specifically. The associations examined here also suggest that efforts to assist sole parent and/or dysfunctional families would be worthy areas in which to target these efforts. © The Royal Australian and New Zealand College of Psychiatrists 2016.
Su, Ri-Qi; Wang, Wen-Xu; Wang, Xiao; Lai, Ying-Cheng
2016-01-01
Given a complex geospatial network with nodes distributed in a two-dimensional region of physical space, can the locations of the nodes be determined and their connection patterns be uncovered based solely on data? We consider the realistic situation where time series/signals can be collected from a single location. A key challenge is that the signals collected are necessarily time delayed, due to the varying physical distances from the nodes to the data collection centre. To meet this challenge, we develop a compressive-sensing-based approach enabling reconstruction of the full topology of the underlying geospatial network and more importantly, accurate estimate of the time delays. A standard triangularization algorithm can then be employed to find the physical locations of the nodes in the network. We further demonstrate successful detection of a hidden node (or a hidden source or threat), from which no signal can be obtained, through accurate detection of all its neighbouring nodes. As a geospatial network has the feature that a node tends to connect with geophysically nearby nodes, the localized region that contains the hidden node can be identified. PMID:26909187
49 CFR 172.400a - Exceptions from labeling.
Code of Federal Regulations, 2011 CFR
2011-10-01
... TABLE, SPECIAL PROVISIONS, HAZARDOUS MATERIALS COMMUNICATIONS, EMERGENCY RESPONSE INFORMATION, TRAINING....1 (poisonous) if the toxicity of the material is based solely on the corrosive destruction of tissue...
49 CFR 172.400a - Exceptions from labeling.
Code of Federal Regulations, 2010 CFR
2010-10-01
... TABLE, SPECIAL PROVISIONS, HAZARDOUS MATERIALS COMMUNICATIONS, EMERGENCY RESPONSE INFORMATION, TRAINING....1 (poisonous) if the toxicity of the material is based solely on the corrosive destruction of tissue...
Code of Federal Regulations, 2013 CFR
2013-07-01
... accreditation, based solely on the newness of the institution. (g) Medical college admission test. A nationally standardized examination, administered by the American Medical College Testing Program, which is designed to...
Code of Federal Regulations, 2012 CFR
2012-07-01
... accreditation, based solely on the newness of the institution. (g) Medical college admission test. A nationally standardized examination, administered by the American Medical College Testing Program, which is designed to...
Code of Federal Regulations, 2014 CFR
2014-07-01
... accreditation, based solely on the newness of the institution. (g) Medical college admission test. A nationally standardized examination, administered by the American Medical College Testing Program, which is designed to...
Spectroscopic and Photometric Observations of a Five-Magnitude Flare Event on UV Ceti
1992-05-12
...temporal resolutions. A strong violet continuum is seen which cannot be reproduced solely with a thermal bremsstrahlung spectrum. The energy emitted by the flare in the U band is approximately 5.0 × 10^31 ergs. The corresponding total flare energy in white light is estimated to be 1.2 × 10^32 ergs.
Abookire, Alisa A.; Norcross, Brenda L.
1998-01-01
Three transects in Kachemak Bay, Alaska, were sampled in September 1994, May and August 1995, and February, May, and August 1996. Juvenile flathead sole, Hippoglossoides elassodon, and rock sole, Pleuronectes bilineatus, were the most abundant flatfishes, comprising 65-85% of all flatfishes captured at any period. Collections of fish and sediments were made at regular depth contour intervals of 10 m. Habitat distribution was described by depth at 10 m increments and sediment percent weights of gravel, sand, and mud. Year-round habitat of flathead sole age-0 was primarily from 40 to 60 m, and age-1 habitat was primarily from 40 to 80 m. Summer habitat of rock sole age-0 and -1 was from 10 to 30 m, and in winter they moved offshore to depths of up to 150 m. Both age classes of flathead sole were most abundant on mixed mud sediments, while age-1 were also in high abundance on muddy sand sediments. Rock sole age-0 and -1 were most abundant on sand, though age-1 were also found on a variety of sediments both finer and coarser grained than sand. Flathead sole and rock sole had distinctive depth and sediment habitats. When habitat overlap occurred between the species, it was most often due to rock sole moving offshore in the winter. Abundances were not significantly different among seasons for age-1 flatfishes.
Short communication: Genetic characterization of digital cushion thickness.
Oikonomou, G; Banos, G; Machado, V; Caixeta, L; Bicalho, R C
2014-01-01
Dairy cow lameness is a serious animal welfare issue. It is also a significant cause of economic losses, reducing reproductive efficiency and milk production and increasing culling rates. The digital cushion is a complex structure composed mostly of adipose tissue located underneath the distal phalanx and has recently been phenotypically associated with incidence of claw horn disruption lesions (CHDL); namely, sole ulcers and white line disease. The objective of this study was to characterize digital cushion thickness genetically and to investigate its association with body condition score (BCS), locomotion score (LOCO), CHDL, and milk production. Data were collected from 1 large closely monitored commercial dairy farm located in upstate New York; 923 dairy cows were used. Before trimming, the following data were collected by a member of the research team: BCS, cow height measurement, and LOCO. Presence or not of CHDL (sole ulcer or white line disease, or both) was recorded at trimming. Immediately after the cows were hoof trimmed, they underwent digital sonographic B-mode examination for the measurement of digital cushion thickness. Factors such as parity number, stage of lactation, calving date, mature-equivalent 305-d milk yield (ME305MY), and pedigree information were obtained from the farm's dairy management software (DairyCOMP 305; Valley Agricultural Software, Tulare, CA). Univariate animal models were used to obtain variance component estimations for each studied trait (CHDL, BCS, digital cushion thickness average (DCTA), LOCO, height, and ME305MY) and a 6-variate analysis was conducted to estimate the genetic, residual, and phenotypic correlations between the studied traits. The heritability estimate of DCTA was 0.33±0.09, and a statistically significant genetic correlation was estimated between DCTA and CHDL (-0.60±0.29). Of the other genetic correlations, significant estimates were derived for BCS with LOCO (-0.49±0.19) and ME305MY (-0.48±0.20). Digital cushion thickness is moderately heritable and genetically strongly correlated with CHDL. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
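Under the univariate animal model described above, the reported heritability is the ratio of additive genetic variance to total phenotypic variance. The one-liner below shows that calculation with hypothetical variance components chosen only to reproduce a value near the reported 0.33.

```python
def heritability(var_additive, var_residual):
    """Narrow-sense heritability from animal-model variance components."""
    return var_additive / (var_additive + var_residual)

# hypothetical variance components for digital cushion thickness
print(heritability(var_additive=0.013, var_residual=0.026))   # ~0.33
```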
Spectral Classes for FAA's Integrated Noise Model Version 6.0.
DOT National Transportation Integrated Search
1999-12-07
The starting point in any empirical model such as the Federal Aviation Administration's (FAA) Integrated Noise Model (INM) is a reference data base. In Version 5.2 and in previous versions, the reference data base consisted solely of a set of no...
Detection and Composition of Bacterial Communities in Waters using RNA-based Methods
In recent years, microbial water quality assessments have shifted from solely relying on pure culture-based methods to monitoring bacterial groups of interest using molecular assays such as PCR and qPCR. Furthermore, coupling next generation sequencing technologies with ribosomal...
NASA Astrophysics Data System (ADS)
Agard, P.; Yamato, P.; Soret, M.; Prigent, C.; Guillot, S.; Plunder, A.; Dubacq, B.; Chauvet, A.; Monié, P.
2016-10-01
Subduction infancy corresponds to the first few million years following subduction initiation, when slabs start their descent into the mantle. It coincides with the transient (yet systematic) transfer of material from the top of the slab to the upper plate, as witnessed by metamorphic soles welded beneath obducted ophiolites. Combining structure-lithology-pressure-temperature-time data from metamorphic soles with flow laws derived from experimental rock mechanics, this study highlights two main successive rheological switches across the subduction interface (mantle wedge vs. basalts, then mantle wedge vs. sediments; at ∼800 °C and ∼600 °C, respectively), during which interplate mechanical coupling is maximized by the existence of transiently similar rheologies across the plate contact. We propose that these rheological switches hinder slab penetration and are responsible for slicing the top of the slab and welding crustal pieces (high- then low-temperature metamorphic soles) to the base of the mantle wedge during subduction infancy. This mechanism has implications for the rheological properties of the crust and mantle (and for transient episodes of accretion/exhumation of HP-LT rocks in mature subduction systems) and highlights the role of fluids in enabling subduction to overcome the early resistance to slab penetration.
A self-modifying cellular automaton model of historical urbanization in the San Francisco Bay area
Clarke, K.C.; Hoppen, S.; Gaydos, L.
1997-01-01
In this paper we describe a cellular automaton (CA) simulation model developed to predict urban growth as part of a project for estimating the regional and broader impact of urbanization on the San Francisco Bay area's climate. The rules of the model are more complex than those of a typical CA and involve the use of multiple data sources, including topography, road networks, and existing settlement distributions, and their modification over time. In addition, the control parameters of the model are allowed to self-modify: that is, the CA adapts itself to the circumstances it generates, in particular, during periods of rapid growth or stagnation. In addition, the model was written to allow the accumulation of probabilistic estimates based on Monte Carlo methods. Calibration of the model has been accomplished by the use of historical maps to compare model predictions of urbanization, based solely upon the distribution in year 1900, with observed data for years 1940, 1954, 1962, 1974, and 1990. The complexity of this model has made calibration a particularly demanding step. Lessons learned about the methods, measures, and strategies developed to calibrate the model may be of use in other environmental modeling contexts. With the calibration complete, the model is being used to generate a set of future scenarios for the San Francisco Bay area along with their probabilities based on the Monte Carlo version of the model. Animated dynamic mapping of the simulations will be used to allow visualization of the impact of future urban growth.
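A highly simplified sketch of the self-modification idea described above: urbanization spreads to neighbours of urban cells with a probability set by a growth coefficient, and the coefficient itself is damped during boom years and boosted during stagnation. The real model additionally uses topography, road networks, several interacting growth rules, and Monte Carlo accumulation; the grid, thresholds, and adjustment factors below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(2)

def grow_one_year(urban, spread_coeff):
    """Spread urbanization to the four neighbours of existing urban cells with a
    probability set by the current spread coefficient."""
    ny, nx = urban.shape
    new = urban.copy()
    for y, x in zip(*np.nonzero(urban)):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            yy, xx = y + dy, x + dx
            if 0 <= yy < ny and 0 <= xx < nx and rng.random() < spread_coeff:
                new[yy, xx] = True
    return new

urban = np.zeros((80, 80), bool)
urban[40, 40] = True                 # seed settlement circa 1900
coeff = 0.05
for year in range(1900, 1990):
    before = urban.sum()
    urban = grow_one_year(urban, coeff)
    growth_rate = (urban.sum() - before) / before
    # self-modification: damp the coefficient during booms, boost it in stagnation
    if growth_rate > 0.10:
        coeff *= 0.95
    elif growth_rate < 0.01:
        coeff *= 1.05
```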
Kaspar, Mathias; Fette, Georg; Güder, Gülmisal; Seidlmayer, Lea; Ertl, Maximilian; Dietrich, Georg; Greger, Helmut; Puppe, Frank; Störk, Stefan
2018-04-17
Heart failure is the predominant cause of hospitalization and amongst the leading causes of death in Germany. However, accurate estimates of prevalence and incidence are lacking. Reported figures originating from different information sources are compromised by factors like economic reasons or documentation quality. We implemented a clinical data warehouse that integrates various information sources (structured parameters, plain text, data extracted by natural language processing) and enables reliable approximations to the real number of heart failure patients. Performance of ICD-based diagnosis in detecting heart failure was compared across the years 2000-2015 with (a) advanced definitions based on algorithms that integrate various sources of the hospital information system, and (b) a physician-based reference standard. Applying these methods for detecting heart failure in inpatients revealed that relying on ICD codes resulted in a marked underestimation of the true prevalence of heart failure, ranging from 44% in the validation dataset to 55% (single year) and 31% (all years) in the overall analysis. Percentages changed over the years, indicating secular changes in coding practice and efficiency. Performance was markedly improved using search and permutation algorithms from the initial expert-specified query (F1 score of 81%) to the computer-optimized query (F1 score of 86%) or, alternatively, optimizing precision or sensitivity depending on the search objective. Estimating prevalence of heart failure using ICD codes as the sole data source yielded unreliable results. Diagnostic accuracy was markedly improved using dedicated search algorithms. Our approach may be transferred to other hospital information systems.
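For readers unfamiliar with the reported metrics, the F1 scores above combine the precision and recall of a query against the physician-based reference standard. A minimal sketch, with made-up patient identifiers rather than the study's data:

```python
def precision_recall_f1(predicted_ids, reference_ids):
    """Compare patients flagged by a query against a reference standard."""
    predicted, reference = set(predicted_ids), set(reference_ids)
    tp = len(predicted & reference)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# illustrative, made-up patient identifiers
query_hits = {"p01", "p02", "p03", "p05", "p08"}
reference = {"p01", "p02", "p04", "p05", "p06", "p08"}
print(precision_recall_f1(query_hits, reference))
```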
Code of Federal Regulations, 2012 CFR
2012-10-01
... contract action results from acceptance of a proposal under the Small Business Innovation Development Act... proposal and acceptance is based solely upon the unique capability of the source to perform the particular...
Code of Federal Regulations, 2013 CFR
2013-10-01
... contract action results from acceptance of a proposal under the Small Business Innovation Development Act... proposal and acceptance is based solely upon the unique capability of the source to perform the particular...
NASA Technical Reports Server (NTRS)
Didwall, E. M.
1981-01-01
Low latitude magnetic field variations (magnetic storms) caused by large fluctuations in the equatorial ring current were derived from magnetic field magnitude data obtained by OGO 2, 4, and 6 satellites over an almost 5 year period. Analysis procedures consisted of (1) separating the disturbance field into internal and external parts relative to the surface of the Earth; (2) estimating the response function which related to the internally generated magnetic field variations to the external variations due to the ring current; and (3) interpreting the estimated response function using theoretical response functions for known conductivity profiles. Special consideration is given to possible ocean effects. A temperature profile is proposed using conductivity temperature data for single crystal olivine. The resulting temperature profile is reasonable for depths below 150-200 km, but is too high for shallower depths. Apparently, conductivity is not controlled solely by olivine at shallow depths.
Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver
2013-01-01
Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production. PMID:23844144
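The coupling of carcass counts with an activity index can be illustrated with a much simpler stand-in for the authors' mixture model: nightly carcass counts treated as Poisson with mean proportional to acoustic bat activity, and the proportionality (collision) rate estimated by maximum likelihood. Once calibrated, activity alone predicts expected collisions, mirroring the idea described above. All numbers are hypothetical, and searcher efficiency and carcass persistence are ignored.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# hypothetical data: nightly acoustic bat activity and carcasses found the next morning
activity = np.array([12, 5, 30, 8, 22, 3, 17, 25])
carcasses = np.array([1, 0, 3, 0, 2, 0, 1, 2])

def neg_log_lik(rate):
    """Poisson likelihood with expected kills = rate * activity (constant terms dropped)."""
    mu = rate * activity
    return -np.sum(carcasses * np.log(mu) - mu)

fit = minimize_scalar(neg_log_lik, bounds=(1e-6, 1.0), method="bounded")
rate = fit.x
print(f"estimated collisions per activity unit: {rate:.3f}")
# once calibrated, activity alone predicts the expected collision count
print("predicted kills for a night with activity 40:", rate * 40)
```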
Comparison of three methods of calculating strain in the mouse ulna in exogenous loading studies.
Norman, Stephanie C; Wagner, David W; Beaupre, Gary S; Castillo, Alesha B
2015-01-02
Axial compression of mouse limbs is commonly used to induce bone formation in a controlled, non-invasive manner. Determination of peak strains caused by loading is central to interpreting results. Load-strain calibration is typically performed using uniaxial strain gauges attached to the diaphyseal, periosteal surface of a small number of sacrificed animals. Strain is measured as the limb is loaded to a range of physiological loads known to be anabolic to bone. The load-strain relationship determined by this subgroup is then extrapolated to a larger group of experimental mice. This method of strain calculation requires the challenging process of strain gauging very small bones which is subject to variability in placement of the strain gauge. We previously developed a method to estimate animal-specific periosteal strain during axial ulnar loading using an image-based computational approach that does not require strain gauges. The purpose of this study was to compare the relationship between load-induced bone formation rates and periosteal strain at ulnar midshaft using three different methods to estimate strain: (A) Nominal strain values based solely on load-strain calibration; (B) Strains calculated from load-strain calibration, but scaled for differences in mid-shaft cross-sectional geometry among animals; and (C) An alternative image-based computational method for calculating strains based on beam theory and animal-specific bone geometry. Our results show that the alternative method (C) provides comparable correlation between strain and bone formation rates in the mouse ulna relative to the strain gauge-dependent methods (A and B), while avoiding the need to use strain gauges. Published by Elsevier Ltd.
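Method (C) rests on beam theory applied to animal-specific cross-sectional geometry. A minimal sketch of that kind of calculation, for an idealized elliptical mid-shaft section loaded eccentrically along the bone axis, is given below; the load, modulus, and geometry values are illustrative, not the study's.

```python
import math

def midshaft_strain(load_N, a_mm, b_mm, offset_mm, E_GPa=20.0):
    """Axial plus bending strain at the periosteal surface of an idealized
    elliptical cross-section under an eccentric axial load.

    a_mm      : semi-axis in the bending direction (neutral axis to periosteal surface)
    b_mm      : semi-axis along the neutral axis
    offset_mm : eccentricity of the load relative to the section centroid
    """
    area = math.pi * a_mm * b_mm                   # cross-sectional area, mm^2
    I = math.pi * a_mm**3 * b_mm / 4.0             # second moment about the neutral axis, mm^4
    E = E_GPa * 1e3                                # elastic modulus in MPa (N/mm^2)
    axial = load_N / (area * E)
    bending = load_N * offset_mm * a_mm / (I * E)  # outer-fibre strain at distance a_mm
    return axial + bending

# illustrative numbers only (roughly mouse-ulna scale)
strain = midshaft_strain(load_N=2.0, a_mm=0.6, b_mm=0.4, offset_mm=0.8)
print(f"estimated periosteal strain: {strain * 1e6:.0f} microstrain")
```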
NASA Astrophysics Data System (ADS)
Zhang, X.; Anagnostou, E. N.; Schwartz, C. S.
2017-12-01
Satellite precipitation products tend to have significant biases over complex terrain. Our research investigates a statistical approach for satellite precipitation adjustment based solely on numerical weather simulations. This approach has been evaluated in two mid-latitude (Zhang et al. 2013*1, Zhang et al. 2016*2) and three tropical mountainous regions by using the WRF model to adjust two high-resolution satellite products: i) National Oceanic and Atmospheric Administration (NOAA) Climate Prediction Center morphing technique (CMORPH) and ii) Global Satellite Mapping of Precipitation (GSMaP). Results show the adjustment effectively reduces the satellite underestimation of high rain rates, which provides a solid proof-of-concept for continuing research of NWP-based satellite correction. In this study we investigate the feasibility of using NCAR Real-time Ensemble Forecasts*3 for adjusting near-real-time satellite precipitation datasets over complex terrain areas in the Continental United States (CONUS) such as the Olympic Peninsula, the California coastal mountain ranges, the Rocky Mountains, and the southern Appalachians. The research will focus on flood-inducing storms that occurred from May 2015 to December 2016 and four satellite precipitation products (CMORPH, GSMaP, PERSIANN-CCS and IMERG). The error correction performance evaluation will be based on comparisons against the gauge-adjusted Stage IV precipitation data. *1 Zhang, Xinxuan, et al. "Using NWP simulations in satellite rainfall estimation of heavy precipitation events over mountainous areas." Journal of Hydrometeorology 14.6 (2013): 1844-1858. *2 Zhang, Xinxuan, et al. "Hydrologic Evaluation of NWP-Adjusted CMORPH Estimates of Hurricane-Induced Precipitation in the Southern Appalachians." Journal of Hydrometeorology 17.4 (2016): 1087-1099. *3 Schwartz, Craig S., et al. "NCAR's experimental real-time convection-allowing ensemble prediction system." Weather and Forecasting 30.6 (2015): 1645-1654.
NASA Astrophysics Data System (ADS)
Closas, Pau; Guillamon, Antoni
2017-12-01
This paper deals with the problem of inferring the signals and parameters that cause neural activity to occur. While the ultimate challenge is to unveil the brain's connectivity, here we focus on a microscopic view of the problem, where single neurons (potentially connected to a network of peers) are at the core of our study. The sole observations available are noisy, sampled voltage traces obtained from intracellular recordings. We design algorithms and inference methods using the tools provided by stochastic filtering that allow a probabilistic interpretation and treatment of the problem. Using particle filtering, we are able to reconstruct traces of voltages and estimate the time course of auxiliary variables. By extending the algorithm through PMCMC methodology, we are able to estimate hidden physiological parameters as well, such as intrinsic conductances or reversal potentials. Last, but not least, the method is applied to estimate synaptic conductances arriving at a target cell, thus reconstructing the synaptic excitatory/inhibitory input traces. Notably, the performance of these estimations achieves the theoretical lower bounds even in spiking regimes.
NASA Astrophysics Data System (ADS)
Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng
2018-02-01
Complex networks hosting binary-state dynamics arise in a variety of contexts. In spite of previous works, to fully reconstruct the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data based reverse engineering of complex networked systems.
The Change in Oceanic O2 Inventory Associated with Recent Global Warming
NASA Technical Reports Server (NTRS)
Keeling, Ralph; Garcia, Hernan
2002-01-01
Ocean general circulation models predict that global warming may cause a decrease in the oceanic O2 inventory and an associated O2 outgassing. An independent argument is presented here in support of this prediction based on observational evidence of the ocean's biogeochemical response to natural warming. On time scales from seasonal to centennial, natural O2 flux/heat flux ratios are shown to occur in a range of 2 to 10 nmol O2 per Joule of warming, with larger ratios typically occurring at higher latitudes and over longer time scales. The ratios are several times larger than would be expected solely from the effect of heating on the O2 solubility, indicating that most of the O2 exchange is biologically mediated through links between heating and stratification. The change in oceanic O2 inventory through the 1990s is estimated to be 0.3-0.4 × 10^14 mol O2 per year, based on scaling the observed anomalous long-term ocean warming by natural O2 flux/heating ratios and allowing for uncertainty due to decadal variability. Implications are discussed for carbon budgets based on observed changes in the atmospheric O2/N2 ratio and on observed changes in ocean dissolved inorganic carbon.
77 FR 12930 - Federal Acquisition Regulation: Socioeconomic Program Parity
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-02
... on May 6, 2011, reinstating the Rule of Two. C. Sole Source Dollar Thresholds Vary Among the... all socioeconomic programs had the same sole source dollar threshold. Response: The sole source dollar... business socioeconomic contracting program to utilize. D. Sole Source Authority Under the SDVOSB Program...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Innovation Development Act of 1982 (Pub. L. 97-219); (8) The proposed contract action results from the... unsolicited research proposal and acceptance is based solely upon the unique capability of the source to...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Innovation Development Act of 1982 (Pub. L. 97-219); (8) The proposed contract action results from the... unsolicited research proposal and acceptance is based solely upon the unique capability of the source to...
NASA Astrophysics Data System (ADS)
Misra, Gaurav; Izadi, Maziar; Sanyal, Amit; Scheeres, Daniel
2016-04-01
The effects of dynamical coupling between the rotational (attitude) and translational (orbital) motion of spacecraft near small Solar System bodies is investigated. This coupling arises due to the weak gravity of these bodies, as well as solar radiation pressure. The traditional approach assumes a point-mass spacecraft model to describe the translational motion of the spacecraft, while the attitude motion is considered to be completely decoupled from the translational motion. The model used here to describe the rigid-body spacecraft dynamics includes the non-uniform rotating gravity field of the small body up to second degree and order along with the attitude dependent terms, solar tide, and solar radiation pressure. This model shows that the second degree and order gravity terms due to the small body affect the dynamics of the spacecraft to the same extent as the orbit-attitude coupling due to the primary gravity (zeroth order) term. Variational integrators are used to simulate the dynamics of both the rigid spacecraft and the point mass. The small bodies considered here are modeled after Near-Earth Objects (NEO) 101955 Bennu, and 25143 Itokawa, and are assumed to be triaxial ellipsoids with uniform density. Differences in the numerically obtained trajectories of a rigid spacecraft and a point mass are then compared, to illustrate the impact of the orbit-attitude coupling on spacecraft dynamics in proximity of small bodies. Possible implications on the performance of model-based spacecraft control and on the station-keeping budget, if the orbit-attitude coupling is not accounted for in the model of the dynamics, are also discussed. An almost globally asymptotically stable motion estimation scheme based solely on visual/optical feedback that estimates the relative motion of the asteroid with respect to the spacecraft is also obtained. This estimation scheme does not require a model of the dynamics of the asteroid, which makes it perfectly suited for asteroids whose properties are not well known.
Bonino, Angela Yarnell; Leibold, Lori J
2017-01-23
Collecting reliable behavioral data from toddlers and preschoolers is challenging. As a result, there are significant gaps in our understanding of human auditory development for these age groups. This paper describes an observer-based procedure for measuring hearing sensitivity with a two-interval, two-alternative forced-choice paradigm. Young children are trained to perform a play-based, motor response (e.g., putting a block in a bucket) whenever they hear a target signal. An experimenter observes the child's behavior and makes a judgment about whether the signal was presented during the first or second observation interval; the experimenter is blinded to the true signal interval, so this judgment is based solely on the child's behavior. These procedures were used to test 2 to 4 year-olds (n = 33) with no known hearing problems. The signal was a 1,000 Hz warble tone presented in quiet, and the signal level was adjusted to estimate a threshold corresponding to 71%-correct detection. A valid threshold was obtained for 82% of children. These results indicate that the two-interval procedure is both feasible and reliable for use with toddlers and preschoolers. The two-interval, observer-based procedure described in this paper is a powerful tool for evaluating hearing in young children because it guards against response bias on the part of the experimenter.
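The 71%-correct threshold mentioned above is the convergence point of a standard two-down, one-up adaptive track. A minimal sketch, driven by a simulated observer whose psychometric function and step sizes are assumed rather than taken from the study:

```python
import random

def simulate_trial(level_dB, true_threshold=25.0, slope=0.15):
    """Simulated 2AFC observer: P(correct) rises from 0.5 towards 1 with level."""
    p = 0.5 + 0.5 / (1.0 + pow(10, -slope * (level_dB - true_threshold)))
    return random.random() < p

def two_down_one_up(start=50.0, step=4.0, reversals_needed=8):
    level, correct_run, direction, reversals = start, 0, 0, []
    while len(reversals) < reversals_needed:
        if simulate_trial(level):
            correct_run += 1
            if correct_run == 2:              # two correct in a row -> lower the level
                correct_run = 0
                if direction == +1:           # switched from going up to going down
                    reversals.append(level)
                direction = -1
                level -= step
        else:                                 # any miss -> raise the level
            correct_run = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals[-6:]) / 6.0          # threshold = mean of the last reversals

random.seed(1)
print(f"estimated threshold: {two_down_one_up():.1f} dB")
```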
NASA Astrophysics Data System (ADS)
Lin, Pei-Sheng; Rosset, Denis; Zhang, Yanbao; Bancal, Jean-Daniel; Liang, Yeong-Cherng
2018-03-01
The device-independent approach to physics is one where conclusions are drawn directly from the observed correlations between measurement outcomes. In quantum information, this approach allows one to make strong statements about the properties of the underlying systems or devices solely via the observation of Bell-inequality-violating correlations. However, since one can only perform a finite number of experimental trials, statistical fluctuations necessarily accompany any estimation of these correlations. Consequently, an important gap remains between the many theoretical tools developed for the asymptotic scenario and the experimentally obtained raw data. In particular, a physical and concurrently practical way to estimate the underlying quantum distribution has so far remained elusive. Here, we show that the natural analogs of the maximum-likelihood estimation technique and the least-square-error estimation technique in the device-independent context result in point estimates of the true distribution that are physical, unique, computationally tractable, and consistent. They thus serve as sound algorithmic tools allowing one to bridge the aforementioned gap. As an application, we demonstrate how such estimates of the underlying quantum distribution can be used to provide, in certain cases, trustworthy estimates of the amount of entanglement present in the measured system. In stark contrast to existing approaches to device-independent parameter estimations, our estimation does not require the prior knowledge of any Bell inequality tailored for the specific property and the specific distribution of interest.
Financial analysis of technology acquisition using fractionated lasers as a model.
Jutkowitz, Eric; Carniol, Paul J; Carniol, Alan R
2010-08-01
Ablative fractional lasers are among the most advanced and costly devices on the market. Yet, there is a dearth of published literature on the cost and potential return on investment (ROI) of such devices. The objective of this study was to provide a methodological framework for physicians to evaluate ROI. To facilitate this analysis, we conducted a case study on the potential ROI of eight ablative fractional lasers. In the base case analysis, a 5-year lease and a 3-year lease were assumed as the purchase option with a $0 down payment and 3-month payment deferral. In addition to lease payments, service contracts, labor cost, and disposables were included in the total cost estimate. Revenue was estimated as price per procedure multiplied by total number of procedures in a year. Sensitivity analyses were performed to account for variability in model assumptions. Based on the assumptions of the model, all lasers had higher ROI under the 5-year lease agreement compared with that for the 3-year lease agreement. When comparing results between lasers, those with lower operating and purchase cost delivered a higher ROI. Sensitivity analysis indicates the model is most sensitive to purchase method. If physicians opt to purchase the device rather than lease, they can significantly enhance ROI. ROI analysis is an important tool for physicians who are considering making an expensive device acquisition. However, physicians should not rely solely on ROI and must also consider the clinical benefits of a laser. (c) Thieme Medical Publishers.
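The structure of the ROI calculation described above (revenue from procedures set against lease payments, service, labor, and disposables) can be sketched as follows; every dollar figure is a placeholder, not one of the study's inputs.

```python
def annual_roi(procedures_per_year, price_per_procedure,
               monthly_lease, service_contract, labor_cost, disposable_per_case):
    """Simple first-year ROI: (revenue - total costs) / total costs."""
    revenue = procedures_per_year * price_per_procedure
    costs = (12 * monthly_lease + service_contract + labor_cost
             + procedures_per_year * disposable_per_case)
    return (revenue - costs) / costs

# placeholder figures for an ablative fractional laser under a 5-year lease
roi = annual_roi(procedures_per_year=120, price_per_procedure=1000,
                 monthly_lease=2500, service_contract=8000,
                 labor_cost=15000, disposable_per_case=50)
print(f"first-year ROI: {roi:.1%}")
```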
Joint kinematic calculation based on clinical direct kinematic versus inverse kinematic gait models.
Kainz, H; Modenese, L; Lloyd, D G; Maine, S; Walsh, H P J; Carty, C P
2016-06-14
Most clinical gait laboratories use the conventional gait analysis model. This model uses a computational method called Direct Kinematics (DK) to calculate joint kinematics. In contrast, musculoskeletal modelling approaches use Inverse Kinematics (IK) to obtain joint angles. IK allows additional analysis (e.g. muscle-tendon length estimates), which may provide valuable information for clinical decision-making in people with movement disorders. The twofold aims of the current study were: (1) to compare joint kinematics obtained by a clinical DK model (Vicon Plug-in-Gait) with those produced by a widely used IK model (available with the OpenSim distribution), and (2) to evaluate the difference in joint kinematics that can be solely attributed to the different computational methods (DK versus IK), anatomical models and marker sets by using MRI based models. Eight children with cerebral palsy were recruited and presented for gait and MRI data collection sessions. Differences in joint kinematics up to 13° were found between the Plug-in-Gait and the gait 2392 OpenSim model. The majority of these differences (94.4%) were attributed to differences in the anatomical models, which included different anatomical segment frames and joint constraints. Different computational methods (DK versus IK) were responsible for only 2.7% of the differences. We recommend using the same anatomical model for kinematic and musculoskeletal analysis to ensure consistency between the obtained joint angles and musculoskeletal estimates. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rebetsky, Yu. L.; Sim, L. A.; Kozyrev, A. A.
2017-07-01
The paper discusses how crustal horizontal compressive stresses can grow beyond those of the standard gravitational state at the elastic stage, and can even come to exceed the vertical stress equal to the lithostatic pressure. We consider a variant of excess horizontal compression related to internal lithospheric processes occurring in the crust of orogens, shields, and plates. The vertical ascending movements driven by these processes at the sole of the crust or the lithosphere, together with the concomitant exogenic processes, give rise to denudation and, in particular, to erosion of the surfaces of the forming uplifts. The residual stresses of the gravitational stress state in the upper crust of the Kola Peninsula have been estimated for the first time. These calculations are based on the volume of sediments that have been deposited in Arctic seas since the Mesozoic. The data point to a possible level of residual horizontal compressive stresses of up to 90 MPa in near-surface crustal units. This estimate is consistent with the results of in situ measurements that have been carried out at the Mining Institute of the Kola Science Center, Russian Academy of Sciences (RAS), for over 40 years. The horizontal stress gradient with depth can be forecast using our concept of the genesis of horizontal overpressure, and this forecasting is important for studying the formation of endogenic deposits.
Estimating the abundance of mouse populations of known size: promises and pitfalls of new methods
Conn, P.B.; Arthur, A.D.; Bailey, L.L.; Singleton, G.R.
2006-01-01
Knowledge of animal abundance is fundamental to many ecological studies. Frequently, researchers cannot determine true abundance, and so must estimate it using a method such as mark-recapture or distance sampling. Recent advances in abundance estimation allow one to model heterogeneity with individual covariates or mixture distributions and to derive multimodel abundance estimators that explicitly address uncertainty about which model parameterization best represents truth. Further, it is possible to borrow information on detection probability across several populations when data are sparse. While promising, these methods have not been evaluated using mark-recapture data from populations of known abundance, and thus far have largely been overlooked by ecologists. In this paper, we explored the utility of newly developed mark-recapture methods for estimating the abundance of 12 captive populations of wild house mice (Mus musculus). We found that mark-recapture methods employing individual covariates yielded satisfactory abundance estimates for most populations. In contrast, model sets with heterogeneity formulations consisting solely of mixture distributions did not perform well for several of the populations. We show through simulation that a higher number of trapping occasions would have been necessary to achieve good estimator performance in this case. Finally, we show that simultaneous analysis of data from low abundance populations can yield viable abundance estimates.
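As an entry point to the covariate- and mixture-based estimators evaluated above, the sketch below implements the much simpler Chapman bias-corrected Lincoln-Petersen estimator for a two-occasion mark-recapture study; the counts are made up, not taken from the mouse populations in the paper.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator for two occasions.

    n1 : animals caught and marked on occasion 1
    n2 : animals caught on occasion 2
    m2 : marked animals among the occasion-2 catch
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
           / ((m2 + 1) ** 2 * (m2 + 2)))
    return n_hat, var ** 0.5

# made-up counts for a single hypothetical mouse population
n_hat, se = chapman_estimate(n1=28, n2=25, m2=14)
print(f"abundance estimate: {n_hat:.1f} (SE {se:.1f})")
```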
31 CFR 800.223 - Solely for the purpose of passive investment.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 31 Money and Finance:Treasury 3 2011-07-01 2011-07-01 false Solely for the purpose of passive..., ACQUISITIONS, AND TAKEOVERS BY FOREIGN PERSONS Definitions § 800.223 Solely for the purpose of passive investment. Ownership interests are held or acquired solely for the purpose of passive investment if the...
First Ecological Study of the Bawean Warty Pig (Sus blouchi), One of the Rarest Pigs on Earth.
Rademaker, Mark; Meijaard, Erik; Semiadi, Gono; Blokland, Simen; Neilson, Eric W; Rode-Margono, Eva Johanna
2016-01-01
The Bawean warty pig (Sus blouchi) is an endemic pig species confined to the 192 km² island of Bawean in the Java Sea, Indonesia. Due to a lack of quantitative ecological research, understanding of its natural history and conservation requirements has so far been based solely on anecdotal information from interviews with local people and study of captive and museum specimens. In this study we provide the first assessment of population and habitat preferences for S. blouchi by using camera trapping. From 4 November 2014 to 8 January 2015, we placed camera traps at 100 locations in the forested protected areas on Bawean. In 690.31 camera days (16567.45 hours) we captured 92 independent videos showing S. blouchi. Variation in S. blouchi trapping rates with cumulative trap effort stabilized after 500 camera days. An important outcome is that, in contrast to the suggestion of previous assessments, only S. blouchi was detected and no S. scrofa was found, which excludes hybridization threats. We fitted a Random Encounter Model, which does not require the identification of individual animals, to our camera-trapping data and estimated 172-377 individuals to be present on the island. Activity patterns and habitat data indicate that S. blouchi is mainly nocturnal and prefers community forests and areas near forest borders. In addition, we found a positive relationship between S. blouchi occupancy, distance to nearest border, litter depth and tree density in the highest ranking occupancy models. Although these relationships proved non-significant based on model averaging, their presence in the top ranking models suggests that these covariables do play a role in predicting S. blouchi occurrence on Bawean. The estimated proportion of sites occupied reached 58%. Based on our results, especially the estimation of the population size and area of occupancy, we determine that the species is Endangered according to the IUCN/SSC Red List criteria.
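The Random Encounter Model mentioned above estimates density from the trapping rate together with animal movement and camera detection-zone parameters (Rowcliffe et al. 2008). A minimal sketch is shown below; the day range, detection radius, and detection angle are illustrative assumptions, so the printed density is not the study's 172-377 estimate.

```python
import math

def rem_density(captures, camera_days, day_range_km, radius_km, angle_rad):
    """Random Encounter Model: D = (y/t) * pi / (v * r * (2 + theta))."""
    trap_rate = captures / camera_days                  # detections per camera-day
    return trap_rate * math.pi / (day_range_km * radius_km * (2 + angle_rad))

# trap rate from the abstract; movement and detection-zone values are placeholders
density = rem_density(captures=92, camera_days=690.31,
                      day_range_km=2.0, radius_km=0.010, angle_rad=math.radians(40))
print(f"estimated density: {density:.1f} individuals per km^2")
```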
Code of Federal Regulations, 2010 CFR
2010-01-01
... Administrative Enforcement Proceedings § 7.32 Appeal. (a) Right of appeal. Within ten days after receipt by... shall be based solely on the hearing record or those portions thereof cited by the parties to limit the...
19 CFR 354.14 - Initial decision.
Code of Federal Regulations, 2010 CFR
2010-04-01
... submissions. The initial decision will be based solely on evidence received into the record, and the pleadings... sanctions to impose, the presiding official or the Deputy Under Secretary will consider the nature of the...
Using a Mindfulness-Based Procedure in the Community: Translating Research to Practice
ERIC Educational Resources Information Center
Adkins, Angela D.; Singh, Ashvind N.; Winton, Alan S. W.; McKeegan, Gerald F.; Singh, Judy
2010-01-01
Maladaptive behaviors, such as aggressive and disruptive behaviors, are a significant risk factor for maintaining community placement by individuals with intellectual disabilities. When experienced researchers provide training to individuals with intellectual disabilities on a mindfulness-based strategy, "Meditation on the Soles of the Feet," the…
48 CFR 515.209-70 - Examination of records by GSA clause.
Code of Federal Regulations, 2010 CFR
2010-10-01
... payments based on cost, or guaranteed loan. (3) Contain a price warranty or price reduction clause. (4.... (5) Include an economic price adjustment clause where the adjustment is not based solely on an... property, compliance with the price reduction clause). Counsel and the Assistant Inspector General—Auditing...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-25
... directly in investments comprising or otherwise based on any combination of futures contracts, options on futures contracts, forward contracts, swap contracts, commodities and/or securities rather than solely in... investments comprising or otherwise based on any combination of futures contracts, options on futures...
10 CFR 503.34 - Inability to comply with applicable environmental requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... environmental compliance of the facility, including an analysis of its ability to meet applicable standards and... will be based solely on an analysis of the petitioner's capacity to physically achieve applicable... exemption. All such analysis must be based on accepted analytical techniques, such as air quality modeling...
10 CFR 503.34 - Inability to comply with applicable environmental requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... environmental compliance of the facility, including an analysis of its ability to meet applicable standards and... will be based solely on an analysis of the petitioner's capacity to physically achieve applicable... exemption. All such analysis must be based on accepted analytical techniques, such as air quality modeling...
Schram, Edward; Bierman, Stijn; Teal, Lorna R.; Haenen, Olga; van de Vis, Hans; Rijnsdorp, Adriaan D.
2013-01-01
Dover sole (Solea solea) is an obligate ectotherm with a natural thermal habitat ranging from approximately 5 to 27°C. Thermal optima for growth lie in the range of 20 to 25°C. More precise information on thermal optima for growth is needed for cost-effective Dover sole aquaculture. The main objective of this study was to determine the optimal growth temperature of juvenile Dover sole (Solea solea) and in addition to test the hypothesis that the final preferendum equals the optimal growth temperature. Temperature preference was measured in a circular preference chamber for Dover sole acclimated to 18, 22 and 28°C. Optimal growth temperature was measured by rearing Dover sole at 19, 22, 25 and 28°C. The optimal growth temperature resulting from this growth experiment was 22.7°C for Dover sole with a size between 30 to 50 g. The temperature preferred by juvenile Dover sole increases with acclimation temperature and exceeds the optimal temperature for growth. A final preferendum could not be detected. Although a confounding effect of behavioural fever on temperature preference could not be entirely excluded, thermal preference and thermal optima for physiological processes seem to be unrelated in Dover sole. PMID:23613837
Reliability based design including future tests and multiagent approaches
NASA Astrophysics Data System (ADS)
Villanueva, Diane
The initial stages of reliability-based design optimization involve the formulation of objective functions and constraints, and building a model to estimate the reliability of the design with quantified uncertainties. However, even experienced hands often overlook important objective functions and constraints that affect the design. In addition, uncertainty reduction measures, such as tests and redesign, are often not considered in reliability calculations during the initial stages. This research considers two areas that concern the design of engineering systems: 1) the trade-off of the effect of a test and post-test redesign on reliability and cost and 2) the search for multiple candidate designs as insurance against unforeseen faults in some designs. In this research, a methodology was developed to estimate the effect of a single future test and post-test redesign on reliability and cost. The methodology uses assumed distributions of computational and experimental errors with re-design rules to simulate alternative future test and redesign outcomes to form a probabilistic estimate of the reliability and cost for a given design. Further, it was explored how modeling a future test and redesign provides a company an opportunity to balance development costs versus performance by simultaneously designing the design and the post-test redesign rules during the initial design stage. The second area of this research considers the use of dynamic local surrogates, or surrogate-based agents, to locate multiple candidate designs. Surrogate-based global optimization algorithms often require search in multiple candidate regions of design space, expending most of the computation needed to define multiple alternate designs. Thus, focusing on solely locating the best design may be wasteful. We extended adaptive sampling surrogate techniques to locate multiple optima by building local surrogates in sub-regions of the design space to identify optima. The efficiency of this method was studied, and the method was compared to other surrogate-based optimization methods that aim to locate the global optimum using two two-dimensional test functions, a six-dimensional test function, and a five-dimensional engineering example.
Idiosyncratic risk in the Dow Jones Eurostoxx50 Index
NASA Astrophysics Data System (ADS)
Daly, Kevin; Vo, Vinh
2008-07-01
Recent evidence by Campbell et al. [J.Y. Campbell, M. Lettau, B.G. Malkiel, Y. Xu, Have individual stocks become more volatile? An empirical exploration of idiosyncratic risk, The Journal of Finance (February) (2001)] shows an increase in firm-level volatility and a decline in the correlation among stock returns in the US. In relation to the Euro-Area stock markets, we find that both aggregate firm-level volatility and average stock market correlation have trended upwards. We estimate a linear model of the market risk-return relationship nested in an EGARCH(1, 1)-M model for conditional second moments. We then show that traditional estimates of the conditional risk-return relationship, which use ex-post excess returns as the conditioning information set, lead to joint tests of the theoretical model (usually the ICAPM) and of the Efficient Market Hypothesis in its strong form. To overcome this problem we propose alternative measures of expected market risk based on implied volatility extracted from traded option prices and we discuss the conditions under which implied volatility depends solely on expected risk. We then regress market excess returns on lagged market implied variance computed from implied market volatility to estimate the relationship between expected market excess returns and expected market risk. We investigate whether, as predicted by the ICAPM, the expected market risk is the main factor in explaining the market risk premium and the latter is independent of aggregate idiosyncratic risk.
Ebrahimifar, Jafar; Allahyari, Hossein
2017-01-01
The parasitoid wasp Eretmocerus delhiensis (Hymenoptera, Aphelinidae) is a thelytokous and synovigenic parasitoid. To evaluate E. delhiensis as a biocontrol agent in greenhouses, its killing efficiency through parasitism and host-feeding was studied. Killing efficiency can be compared by estimating functional response parameters. Laboratory experiments were performed under controlled conditions to evaluate the functional response of E. delhiensis at eight densities (2, 4, 8, 16, 32, 64, 100, and 120 third-stage nymphs) of Trialeurodes vaporariorum (Hemiptera, Aleyrodidae) on two host plants: tomato and prickly lettuce. Maximum likelihood estimates from logistic regression analysis revealed a type II functional response on both host plants, and the type of functional response was not affected by host plant. Rogers' model was used to fit the data. The attack rate (a) of E. delhiensis was 0.0286 and 0.0144 per hour on tomato and 0.0434 and 0.0170 per hour on prickly lettuce for parasitism and host-feeding, respectively. Furthermore, the estimated handling times (Th) were 0.4911 and 1.4453 h on tomato and 0.5713 and 1.5001 h on prickly lettuce for parasitism and host-feeding, respectively. Based on 95% confidence intervals, functional response parameters differed significantly between the host plants solely for parasitism. The results of this study open new insights into host-parasitoid interactions, but further investigation is needed before this parasitoid is used for the management and reduction of greenhouse whitefly. PMID:28423420
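Rogers' type II model accounts for host depletion during the exposure period and can be evaluated in closed form with the Lambert W function. The sketch below uses the attack rate and handling time reported above for parasitism on tomato; the exposure time T is assumed to be 24 h, since the experimental duration is not stated in the abstract.

```python
import numpy as np
from scipy.special import lambertw

def rogers_type_ii(N0, a, Th, T):
    """Rogers' random predator equation (type II with prey depletion),
    solved in closed form: Ne = N0 - W(a*Th*N0*exp(-a*(T - Th*N0))) / (a*Th).
    """
    w = lambertw(a * Th * N0 * np.exp(-a * (T - Th * N0)))
    return N0 - w.real / (a * Th)

# attack rate and handling time for parasitism on tomato (per hour); 24 h exposure assumed
a, Th, T = 0.0286, 0.4911, 24.0
for N0 in (2, 8, 32, 120):
    print(N0, round(rogers_type_ii(N0, a, Th, T), 2))
```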
A Foot-Arch Parameter Measurement System Using a RGB-D Camera.
Chun, Sungkuk; Kong, Sejin; Mun, Kyung-Ryoul; Kim, Jinwook
2017-08-04
The conventional method of measuring foot-arch parameters is highly dependent on the measurer's skill level, so accurate measurements are difficult to obtain. To solve this problem, we propose an autonomous geometric foot-arch analysis platform that is capable of capturing the sole of the foot and yields three foot-arch parameters: arch index (AI), arch width (AW) and arch height (AH). The proposed system captures 3D geometric and color data on the plantar surface of the foot in a static standing pose using a commercial RGB-D camera. It detects the region of the foot surface in contact with the footplate by applying clustering and Markov random field (MRF)-based image segmentation methods. The system computes the foot-arch parameters by analyzing the 2/3D shape of the contact region. Validation experiments were carried out to assess the accuracy and repeatability of the system. The average errors for AI, AW, and AH estimation on 99 datasets collected from 11 subjects over 3 days were -0.17%, 0.95 mm, and 0.52 mm, respectively. Reliability and statistical analyses of the estimated foot-arch parameters, the robustness to changes in the weights used in the MRF, and the processing time were also evaluated to show the feasibility of the system.
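Of the three parameters, the arch index is conventionally defined as the area of the middle third of the toeless footprint divided by its total area. The sketch below computes that ratio from a binary contact mask with the heel-to-toe axis aligned to image rows; it follows the standard definition and is not necessarily the authors' exact implementation.

```python
import numpy as np

def arch_index(contact_mask):
    """Arch index from a binary footprint mask (rows run heel -> toes).

    AI = area of the middle third of the toeless footprint / total toeless area.
    The toe region is assumed to have been excluded from the mask already.
    """
    rows = np.where(contact_mask.any(axis=1))[0]
    heel, tip = rows.min(), rows.max()
    length = tip - heel + 1
    third = length // 3
    mid = contact_mask[heel + third: heel + 2 * third]
    return mid.sum() / contact_mask.sum()

# tiny synthetic footprint: wide heel and forefoot, narrow mid-foot
mask = np.zeros((30, 12), dtype=bool)
mask[0:10, 2:10] = True      # heel
mask[10:20, 4:7] = True      # mid-foot (arch region)
mask[20:30, 2:11] = True     # forefoot
print(f"arch index: {arch_index(mask):.2f}")
```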
The continuum of hydroclimate variability in western North America during the last millennium
Ault, Toby R.; Cole, Julia E.; Overpeck, Jonathan T.; Pederson, Gregory T.; St. George, Scott; Otto-Bliesner, Bette; Woodhouse, Connie A.; Deser, Clara
2013-01-01
The distribution of climatic variance across the frequency spectrum has substantial importance for anticipating how climate will evolve in the future. Here we estimate power spectra and power laws (β) from instrumental, proxy, and climate model data to characterize the hydroclimate continuum in western North America (WNA). We test the significance of our estimates of spectral densities and β against the null hypothesis that they reflect solely the effects of local (non-climate) sources of autocorrelation at the monthly timescale. Although tree-ring based hydroclimate reconstructions are generally consistent with this null hypothesis, values of β calculated from long moisture-sensitive chronologies (as opposed to reconstructions), and from other types of hydroclimate proxies, exceed null expectations. We therefore argue that there is more low-frequency variability in hydroclimate than monthly autocorrelation alone can generate. Coupled model results archived as part of the Coupled Model Intercomparison Project 5 (CMIP5) are consistent with the null hypothesis and appear unable to generate variance in hydroclimate commensurate with paleoclimate records. Consequently, at decadal to multidecadal timescales there is more variability in instrumental and proxy data than in the models, suggesting that the risk of prolonged droughts under climate change may be underestimated by CMIP5 simulations of the future.
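A power-law exponent β of the kind estimated above can be obtained by regressing log periodogram power on log frequency. A minimal sketch, using a synthetic AR(1) series as a stand-in for a monthly hydroclimate index:

```python
import numpy as np

def spectral_slope(x, dt=1.0):
    """Estimate beta in S(f) ~ f**(-beta) from a least-squares fit of
    log periodogram power against log frequency (zero frequency excluded)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    spec = np.abs(np.fft.rfft(x)) ** 2
    freq = np.fft.rfftfreq(len(x), d=dt)
    f, s = freq[1:], spec[1:]
    slope, _ = np.polyfit(np.log(f), np.log(s), 1)
    return -slope

# synthetic AR(1) series as a stand-in for a monthly hydroclimate index
rng = np.random.default_rng(42)
x = np.zeros(4096)
for t in range(1, len(x)):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
print(f"estimated beta: {spectral_slope(x):.2f}")
```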
Development of mapped stress-field boundary conditions based on a Hill-type muscle model.
Cardiff, P; Karač, A; FitzPatrick, D; Flavin, R; Ivanković, A
2014-09-01
Forces generated in the muscles and tendons actuate the movement of the skeleton. Accurate estimation and application of these musculotendon forces in a continuum model is not a trivial matter. Frequently, musculotendon attachments are approximated as point forces; however, accurate estimation of local mechanics requires a more realistic application of musculotendon forces. This paper describes the development of mapped Hill-type muscle models as boundary conditions for a finite volume model of the hip joint, where the calculated muscle fibres map continuously between attachment sites. The applied muscle forces are calculated using active Hill-type models, where input electromyography signals are determined from gait analysis. Realistic muscle attachment sites are determined directly from tomography images. The mapped muscle boundary conditions, implemented in a finite volume structural OpenFOAM (ESI-OpenCFD, Bracknell, UK) solver, are employed to simulate the mid-stance phase of gait using a patient-specific natural hip joint, and a comparison is performed with the standard point load muscle approach. It is concluded that physiological joint loading is not accurately represented by simplistic muscle point loading conditions; however, when contact pressures are of sole interest, simplifying assumptions with regard to muscular forces may be valid. Copyright © 2014 John Wiley & Sons, Ltd.
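The Hill-type models referred to above scale a maximum isometric force by activation, active force-length and force-velocity factors, plus a passive term. The sketch below uses generic textbook curve shapes and parameter values, which are assumptions rather than the authors' implementation.

```python
import math

def hill_muscle_force(activation, lm_norm, vm_norm, f_max=1000.0, pennation_rad=0.0):
    """Generic Hill-type musculotendon force (a sketch, not the paper's implementation).

    activation : neural activation in [0, 1] (e.g., processed EMG)
    lm_norm    : fibre length / optimal fibre length
    vm_norm    : fibre velocity / max shortening velocity (negative = shortening)
    """
    f_l = math.exp(-((lm_norm - 1.0) ** 2) / 0.45)        # active force-length (Gaussian)
    if vm_norm < 0:                                        # shortening limb of force-velocity
        f_v = max((1.0 + vm_norm) / (1.0 - 4.0 * vm_norm), 0.0)
    else:                                                  # lengthening limb, saturating near 1.5
        f_v = 1.0 + 0.5 * vm_norm / (vm_norm + 0.2)
    f_pas = max(0.0, (math.exp(5.0 * (lm_norm - 1.0)) - 1.0) / (math.exp(2.5) - 1.0))
    return f_max * (activation * f_l * f_v + f_pas) * math.cos(pennation_rad)

# illustrative call: 60% activation, slightly stretched fibre, slow shortening
print(hill_muscle_force(activation=0.6, lm_norm=1.05, vm_norm=-0.1))
```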
To what extent can biogenic SOA be controlled?
Carlton, Annmarie G; Pinder, Robert W; Bhave, Prakash V; Pouliot, George A
2010-05-01
The implicit assumption that biogenic secondary organic aerosol (SOA) is natural and can not be controlled hinders effective air quality management. Anthropogenic pollution facilitates transformation of naturally emitted volatile organic compounds (VOCs) to the particle phase, enhancing the ambient concentrations of biogenic secondary organic aerosol (SOA). It is therefore conceivable that some portion of ambient biogenic SOA can be removed by controlling emissions of anthropogenic pollutants. Direct measurement of the controllable fraction of biogenic SOA is not possible, but can be estimated through 3-dimensional photochemical air quality modeling. To examine this in detail, 22 CMAQ model simulations were conducted over the continental U.S. (August 15 to September 4, 2003). The relative contributions of five emitted pollution classes (i.e., NO(x), NH(3), SO(x), reactive non methane carbon (RNMC) and primary carbonaceous particulate matter (PCM)) on biogenic SOA were estimated by removing anthropogenic emissions of these pollutants, one at a time and all together. Model results demonstrate a strong influence of anthropogenic emissions on predicted biogenic SOA concentrations, suggesting more than 50% of biogenic SOA in the eastern U.S. can be controlled. Because biogenic SOA is substantially enhanced by controllable emissions, classification of SOA as biogenic or anthropogenic based solely on VOC origin is not sufficient to describe the controllable fraction.
Kirschner, Wolf; Dudenhausen, Joachim W; Henrich, Wolfgang
2016-04-01
Iron deficiency is highly incident in pregnancy and carries elevated risks of preterm birth and low birth weight. In our recent study, we found that 6% of participants had anemia, whereas between 39% and 47% showed iron deficiency without anemia. In many countries, prenatal care relies solely on hemoglobin (Hb) measurement. To date, gynecologists have had no indication to determine other markers (e.g., serum ferritin). As iron deficiency results from an imbalance between intake and loss of iron, our aim was to find out whether the risk of iron deficiency conditions can be estimated by a diet history protocol together with questionnaires on iron loss. We found that the risk of iron deficiency in the later gestational weeks (>=21) increased by a factor of five. Thus, additional diagnostics should already be performed in this group. Using the questionnaire as a screening instrument, we further estimated the probability of disease in terms of a positive likelihood ratio (LR+). The positive LR for the group below the 21st week of gestation is 1.9, which increases the post-test probability from 36% to 52%. Further research based on larger sample sizes will show whether these ratios can be increased further.
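The reported rise in post-test probability follows from the usual odds form of Bayes' theorem: convert the pre-test probability to odds, multiply by LR+, and convert back. The sketch below reproduces the stated change from about 36% to 52% with LR+ = 1.9.

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Update a pre-test probability with a likelihood ratio via odds."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

p = post_test_probability(pre_test_prob=0.36, likelihood_ratio=1.9)
print(f"post-test probability: {p:.0%}")   # about 52%, matching the abstract
```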
Shobeiri, Fatemeh; Manoucheri, Behnaz; Parsa, Parisa; Roshanaei, Ghodratolah
2017-06-01
Increased fatigue may lead to problems during pregnancy, delivery and the post-delivery period. Sole reflexology is the application of pressure to areas on the feet. Reflexology is generally relaxing and may be an effective way to alleviate fatigue and stress. To investigate the effect of counselling and sole reflexology on fatigue in pregnant women referred to the medical centers of Hamadan city, Iran. This study was a randomized clinical trial with three groups - Group A (counselling and reflexology), Group B (reflexology) and Group C (control) - with pre- and post-intervention assessment. A total of forty-two pregnant women were selected for each group. The measurement tool was a 30-question standard checklist for fatigue assessment. For all three groups, an explanatory session was held to obtain their written consent and conduct a pretest. The intervention included five education sessions, held twice a week, on reflexology in the form of counselling and sole reflexology. The groups were assessed immediately after the intervention. Data were analysed using IBM SPSS Statistics 20.0. To analyse the data, descriptive statistics, t-tests and repeated-measures ANOVA were used. In Group A and Group B, the mean score of fatigue severity after the intervention showed a significant decrease (p<0.05); furthermore, after the intervention, a significant difference was observed between the control and experimental groups in terms of fatigue severity (p<0.01). Based on the results of this study, counselling and sole reflexology significantly decreased fatigue in pregnant women. It is hoped that the results of this study can be used by all treatment groups and midwives for controlling fatigue and providing midwifery care for pregnant women.
Wilke, M; Rathmayer, M; Schenker, M; Schepp, W
2016-05-01
Neoplastic changes (low- or high-grade intraepithelial neoplasia (L- or HGIEN) or early cancer) in Barrett's esophagus are treated with various methods. This study compares clinical-economic aspects of sole stepwise radical endoscopic resection (SRER) with combination treatment using EMR (endoscopic mucosal resection) and RFA (radiofrequency ablation). Based on clinical data from a randomized controlled trial 1, we developed an economic model of treatment costs according to the German Hospital Remuneration System (G-DRG). Our calculation incorporated initial treatment costs and the cost of treating complications (both paid via G-DRG). Medically and economically, treatment with EMR + RFA has advantages over sole SRER treatment 1. The rate of successful complete resection or destruction of neoplastic intestinal metaplastic tissue is similar for both procedures. Acute complications (24 % with SRER vs. 13 % with EMR + RFA) and late complications (88 % with SRER vs. 13 % with EMR + RFA) are significantly more likely with sole SRER than with EMR + RFA. While SRER initially appears more cost-effective as a sole therapy, its costs rise significantly above those of EMR + RFA due to higher complication rates and the costs of subsequent procedures. Overall, the cost of treatment was € 13 272.11 in the SRER group and € 11 389.33 in the EMR + RFA group. The EMR + RFA group thus achieved a cost advantage of € 1882.78. The study shows that the treatment of neoplastic Barrett's esophagus with EMR + RFA is also appropriate in economic terms. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Astrophysics Data System (ADS)
Wilderbuer, T. K.; Hollowed, A. B.; Ingraham, W. J.; Spencer, P. D.; Conners, M. E.; Bond, N. A.; Walters, G. E.
2002-10-01
This paper provides a retrospective analysis of the relationships among physical oceanography, biology, and recruitment of three Eastern Bering Sea flatfish stocks: flathead sole (Hippoglossoides elassodon), northern rock sole (Lepidopsetta polyxystra), and arrowtooth flounder (Atheresthes stomias) for the period 1978-1996. Temporal trends in flatfish production in the Eastern Bering Sea are consistent with the hypothesis that decadal-scale climate variability influences marine survival during the early life history period. Density-dependence (spawning stock size) is statistically significant in a Ricker model of flatfish recruitment that includes environmental terms. Using an ocean surface current simulation model (OSCURS), wind-driven advection of flatfish larvae to favorable nursery grounds was also found to coincide with years of above-average recruitment. Ocean forcing of Bristol Bay surface waters during springtime was mostly shoreward (eastward) during the 1980s and seaward (westward) during the 1990s, corresponding with periods of good and poor recruitment, respectively. Distance from shore and water depth at the endpoint of 90-day drift periods (the estimated time of settlement) were also found to correspond with flatfish productivity.
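A minimal sketch of the kind of environmentally modified Ricker recruitment model named above, log-linearized and fitted by ordinary least squares. The spawner, recruit, and environmental-index series and the single covariate are illustrative assumptions, not the authors' data or exact formulation.

```python
import numpy as np

# Hypothetical spawner (S), recruit (R), and environmental index (E) series.
S = np.array([120., 150., 90., 200., 170., 110., 140., 180.])
R = np.array([300., 260., 210., 280., 250., 240., 310., 270.])
E = np.array([0.5, -0.2, 0.1, -0.8, 0.3, 0.6, 0.9, -0.4])   # e.g., a springtime advection index

# Ricker model with one environmental term: R = S * exp(a - b*S + c*E)
# Log-linearized: log(R/S) = a - b*S + c*E, which is linear in (a, b, c).
y = np.log(R / S)
X = np.column_stack([np.ones_like(S), -S, E])
(a, b, c), *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"a={a:.3f}, b={b:.4f}, c={c:.3f}")
# Predicted recruitment at the observed spawner levels and environmental states:
print(S * np.exp(a - b * S + c * E))
```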
Das, Jyotirmoy; Aggarwal, Amit; Aggarwal, Naresh Kumar
2010-01-01
Since the invention of pulse oximetry by Takuo Aoyagi in the early 1970s, its use has expanded beyond perioperative care into neonatal, paediatric and adult intensive care units (ICUs). Pulse oximetry is one of the most important advances in respiratory monitoring as its readings (SpO2) are used clinically as an indirect estimation of arterial oxygen saturation (SaO2). Sensors are frequently placed on the sole, palm, ear lobe or toes, in addition to the finger. On performing an extensive Medline search using the terms “accuracy of pulse oximetry” and “precision of pulse oximetry”, limited data were found in congenital heart disease patients in the immediate post-corrective stage. Also, there are no reports or comparative data on the reliability and precision of pulse oximetry when readings from five different sensor locations (viz. finger, palm, toe, sole and ear) are analysed simultaneously. To fill these lacunae in knowledge, we undertook the present study in 50 infants and children with cyanotic heart disease in the immediate post-corrective stage. PMID:21224970
Das, Jyotirmoy; Aggarwal, Amit; Aggarwal, Naresh Kumar
2010-11-01
Since the invention of pulse oximetry by Takuo Aoyagi in the early 1970s, its use has expanded beyond perioperative care into neonatal, paediatric and adult intensive care units (ICUs). Pulse oximetry is one of the most important advances in respiratory monitoring as its readings (SpO(2)) are used clinically as an indirect estimation of arterial oxygen saturation (SaO(2)). Sensors are frequently placed on the sole, palm, ear lobe or toes, in addition to the finger. On performing an extensive Medline search using the terms "accuracy of pulse oximetry" and "precision of pulse oximetry", limited data were found in congenital heart disease patients in the immediate post-corrective stage. Also, there are no reports or comparative data on the reliability and precision of pulse oximetry when readings from five different sensor locations (viz. finger, palm, toe, sole and ear) are analysed simultaneously. To fill these lacunae in knowledge, we undertook the present study in 50 infants and children with cyanotic heart disease in the immediate post-corrective stage.
Friend, Tynan H; Paula, Ashley; Klemm, Jason; Rosa, Mark; Levine, Wilton
2018-05-28
Being the economic powerhouses of most large medical centers, operating rooms (ORs) require the highest levels of teamwork, communication, and efficiency in order to optimize patient safety and reduce hospital waste. A major component of OR waste comes from unused surgical instrumentation; instruments that are frequently prepared for procedures but never touched by the surgical team still require a full reprocessing cycle at the conclusion of the case. Based on our own previous successes in the perioperative domain, in this work we detail an initiative that reduces surgical instrumentation waste in video-assisted thoracoscopic surgery (VATS) procedures by placing thoracotomy conversion instrumentation in a standby location and designing a specific instrument kit to be used solely for VATS cases. Our estimates suggest that this initiative will keep at least 91,800 pounds of unnecessary surgical instrumentation per year from cycling through our ORs and reprocessing department, improving OR team communication without sacrificing the highest standard of patient safety.
Huusom, Henrik; Strange, Niels
2008-04-01
The theoretical concept, "asset specificity," is applied to real data in the context of Danish nature conservation network planning in order to produce illustrative examples of an economic measure of the network's vulnerability to exogenous shocks to the species composition. Three different measures of asset specificity are quantified from the shadow value of eliminating a key species from the individual grid cells. This represents a novel approach and a different interpretation of the term, as it is conventionally used as a qualitative indicator in the transaction cost economics literature. Apart from supplementing existing cost measures with an indicator of risk associated with investments in protected areas, this study demonstrates how the estimation and interpretation of various asset specificity measures for geographical areas may qualify policy makers' choice of policy instrument in conservation planning. This differs from the more intuitive approach of basing policy instrument choice solely on the rarity of the species in a given area.
How to Be Proactive About Interference: Lessons From Animal Memory
Wright, Anthony A.; Katz, Jeffrey S.; Ma, Wei Ji
2015-01-01
Processes of proactive interference were explored using the pigeon as a model system of memory. This study shows that proactive interference extends back in time at least 16 trials (and as many minutes), revealing a continuum of interference and providing a framework for studying memory. Pigeons were tested in a delayed same/different task containing trial-unique pictures. On interference trials, sample pictures from previous trials reappeared as test pictures on different trials. Proactive-interference functions showed greatest interference from the most recent trial and with the longer of two delays (10 s vs. 1 s). These interference functions are accounted for by a time-estimation model based on signal detection theory. The model predicts that accuracy at test is determined solely by the ratio of the elapsed time since the offset of the current-trial sample to the elapsed time since the offset of the interfering sample. Implications for comparing memory of different species and different types of memory (e.g., familiarity vs. recollection) are discussed. PMID:22491142
How to be proactive about interference: lessons from animal memory.
Wright, Anthony A; Katz, Jeffrey S; Ma, Wei Ji
2012-05-01
Processes of proactive interference were explored using the pigeon as a model system of memory. This study shows that proactive interference extends back in time at least 16 trials (and as many minutes), revealing a continuum of interference and providing a framework for studying memory. Pigeons were tested in a delayed same/different task containing trial-unique pictures. On interference trials, sample pictures from previous trials reappeared as test pictures on different trials. Proactive-interference functions showed greatest interference from the most recent trial and with the longer of two delays (10 s vs. 1 s). These interference functions are accounted for by a time-estimation model based on signal detection theory. The model predicts that accuracy at test is determined solely by the ratio of the elapsed time since the offset of the current-trial sample to the elapsed time since the offset of the interfering sample. Implications for comparing memory of different species and different types of memory (e.g., familiarity vs. recollection) are discussed.
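A minimal sketch of the ratio-based prediction described above: the claim is that test accuracy depends only on the ratio of the elapsed time since the current-trial sample to the elapsed time since the interfering sample. The logistic link and its parameters below are illustrative assumptions, not the published signal-detection model.

```python
import numpy as np

def p_correct(t_current, t_interfering, k=1.5, bias=0.0):
    """Illustrative accuracy model: performance depends only on the ratio of
    elapsed times, mapped through an assumed logistic link."""
    ratio = t_current / t_interfering        # smaller ratio -> current sample relatively fresher
    x = -np.log(ratio) + bias                # fresher current sample -> larger evidence x
    return 1.0 / (1.0 + np.exp(-k * x))      # proportion correct

# Interference from the most recent trial (short elapsed interfering time)
# hurts more than interference from many trials back, at a fixed 10 s delay:
print(p_correct(t_current=10.0, t_interfering=30.0))    # recent interference
print(p_correct(t_current=10.0, t_interfering=960.0))   # distant interference
```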
The role of storage dynamics in annual wheat prices
NASA Astrophysics Data System (ADS)
Schewe, Jacob; Otto, Christian; Frieler, Katja
2017-05-01
Identifying the drivers of global crop price fluctuations is essential for estimating the risks of unexpected weather-induced production shortfalls and for designing optimal response measures. Here we show that with a consistent representation of storage dynamics, a simple supply-demand model can explain most of the observed variations in wheat prices over the last 40 yr solely based on time series of annual production and long term demand trends. Even the most recent price peaks in 2007/08 and 2010/11 can be explained by additionally accounting for documented changes in countries’ trade policies and storage strategies, without the need for external drivers such as oil prices or speculation across different commodity or stock markets. This underlines the critical sensitivity of global prices to fluctuations in production. The consistent inclusion of storage into a dynamic supply-demand model closes an important gap when it comes to exploring potential responses to future crop yield variability under climate and land-use change.
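A minimal sketch of a supply-demand market model with explicit carry-over storage, in the spirit of the mechanism described above. The isoelastic demand curve, stock-holding rule, and all numbers are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

def simulate_prices(production, demand_trend, p_ref=200.0, elasticity=-0.3,
                    storage_frac=0.15, s0=50.0):
    """Toy annual market: price comes from inverse demand applied to consumption,
    where consumption = production + stock release under a simple storage rule."""
    stocks, prices = s0, []
    for q, d in zip(production, demand_trend):
        available = q + stocks
        target_stock = storage_frac * d            # assumed stock-holding rule
        consumption = max(available - target_stock, 1e-6)
        stocks = available - consumption
        # Isoelastic inverse demand: p = p_ref * (consumption / demand_trend) ** (1/elasticity)
        prices.append(p_ref * (consumption / d) ** (1.0 / elasticity))
    return np.array(prices)

production = np.array([600, 610, 580, 640, 560, 620], dtype=float)   # annual production
demand     = np.array([600, 605, 610, 615, 620, 625], dtype=float)   # long-term demand trend
print(simulate_prices(production, demand).round(1))
```

Because stocks buffer small shortfalls, prices in this toy model spike only when a production shortfall is large relative to the carried-over storage, which is the qualitative behavior the abstract emphasizes.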
Gleason, Colin J.; Smith, Laurence C.
2014-01-01
Rivers provide critical water supply for many human societies and ecosystems, yet global knowledge of their flow rates is poor. We show that useful estimates of absolute river discharge (in cubic meters per second) may be derived solely from satellite images, with no ground-based or a priori information whatsoever. The approach works owing to discovery of a characteristic scaling law uniquely fundamental to natural rivers, here termed a river’s at-many-stations hydraulic geometry. A first demonstration using Landsat Thematic Mapper images over three rivers in the United States, Canada, and China yields absolute discharges agreeing to within 20–30% of traditional in situ gauging station measurements and good tracking of flow changes over time. Within such accuracies, the door appears open for quantifying river resources globally with repeat imaging, both retroactively and henceforth into the future, with strong implications for water resource management, food security, ecosystem studies, flood forecasting, and geopolitics. PMID:24639551
Sunkara, Adhira
2015-01-01
As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI: http://dx.doi.org/10.7554/eLife.04693.001 PMID:25693417
Perspectives About Personalization for mHealth Solutions Against Noise Pollution.
Kepplinger, Sara; Liebetrau, Judith; Clauss, Tobias; Pharow, Peter
2017-01-01
Noise harms environmental quality and can have negative effects on health and wellbeing. Providing silent areas and periods of rest is one way to improve the perceived environmental quality, but this is not easy to realize in day-to-day life. mHealth solutions that provide information about the sound of a certain area and its effect on humans could be supportive. However, as the perception of sound is highly subjective, predicting how an acoustic environment will be perceived is very difficult. This paper describes a course of action for developing an automatic estimation of an acoustic environment based solely on measured sound properties, and the challenges of this endeavor are explained in detail. Possible application areas in mHealth are identified and presented. This future-vision paper aims to draw attention to different ways of coping with noise pollution, either through personal behavior change or by using personalized data to reach more general applicability, for example through soundscape approaches.
Stem cell divisions, somatic mutations, cancer etiology, and cancer prevention.
Tomasetti, Cristian; Li, Lu; Vogelstein, Bert
2017-03-24
Cancers are caused by mutations that may be inherited, induced by environmental factors, or result from DNA replication errors (R). We studied the relationship between the number of normal stem cell divisions and the risk of 17 cancer types in 69 countries throughout the world. The data revealed a strong correlation (median = 0.80) between cancer incidence and normal stem cell divisions in all countries, regardless of their environment. The major role of R mutations in cancer etiology was supported by an independent approach, based solely on cancer genome sequencing and epidemiological data, which suggested that R mutations are responsible for two-thirds of the mutations in human cancers. All of these results are consistent with epidemiological estimates of the fraction of cancers that can be prevented by changes in the environment. Moreover, they accentuate the importance of early detection and intervention to reduce deaths from the many cancers arising from unavoidable R mutations. Copyright © 2017, American Association for the Advancement of Science.
Isolation and identification of efficient Egyptian malathion-degrading bacterial isolates.
Hamouda, S A; Marzouk, M A; Abbassy, M A; Abd-El-Haleem, D A; Shamseldin, Abdelaal
2015-03-01
Bacterial isolates degrading malathion were isolated from soil and agricultural waste water on the basis of their ability to grow on minimal salt media amended with malathion as a sole carbon source. The efficiencies of native Egyptian malathion-degrading bacterial isolates were investigated, and the study yielded nine highly effective malathion-degrading strains among 40 isolates. Strains were identified by partial sequencing of 16S rDNA. Comparative analysis of 16S rDNA sequences revealed that these bacteria are most similar to the genera Acinetobacter and Bacillus, and PCR-RFLP of 16S rDNA gave four different RFLP patterns among strains with the enzyme HinfI and two profiles with the enzyme HaeI. The degradation rate of malathion in liquid culture was estimated using gas chromatography. Bacterial strains could degrade more than 90% of the initial malathion concentration (1000 ppm) within 4 days. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Shirzaei, Manoochehr; Bürgmann, Roland
2018-01-01
The current global projections of future sea level rise are the basis for developing inundation hazard maps. However, contributions from spatially variable coastal subsidence have generally not been considered in these projections. We use synthetic aperture radar interferometric measurements and global navigation satellite system data to show subsidence rates of less than 2 mm/year along most of the coastal areas along San Francisco Bay. However, rates exceed 10 mm/year in some areas underlain by compacting artificial landfill and Holocene mud deposits. The maps estimating 100-year inundation hazards solely based on the projection of sea level rise from various emission scenarios underestimate the area at risk of flooding by 3.7 to 90.9%, compared with revised maps that account for the contribution of local land subsidence. Given ongoing land subsidence, we project that an area of 125 to 429 km2 will be vulnerable to inundation, as opposed to 51 to 413 km2 considering sea level rise alone. PMID:29536042
Gleason, Colin J; Smith, Laurence C
2014-04-01
Rivers provide critical water supply for many human societies and ecosystems, yet global knowledge of their flow rates is poor. We show that useful estimates of absolute river discharge (in cubic meters per second) may be derived solely from satellite images, with no ground-based or a priori information whatsoever. The approach works owing to discovery of a characteristic scaling law uniquely fundamental to natural rivers, here termed a river's at-many-stations hydraulic geometry. A first demonstration using Landsat Thematic Mapper images over three rivers in the United States, Canada, and China yields absolute discharges agreeing to within 20-30% of traditional in situ gauging station measurements and good tracking of flow changes over time. Within such accuracies, the door appears open for quantifying river resources globally with repeat imaging, both retroactively and henceforth into the future, with strong implications for water resource management, food security, ecosystem studies, flood forecasting, and geopolitics.
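The scaling relationships involved can be written compactly. At a single cross section, at-a-station hydraulic geometry relates width to discharge; the at-many-stations finding is that the station-specific coefficients along one river are themselves mutually constrained, which is what allows discharge to be inferred from width observations alone. A hedged summary (notation mine, not the paper's exact estimator):

```latex
% At-a-station hydraulic geometry at cross section i:
w_i = a_i \, Q^{b_i}
% At-many-stations hydraulic geometry (AMHG): along one river the coefficients
% are approximately log-linearly related,
\log a_i \approx \alpha + \beta \, b_i ,
% so repeated width observations w_i(t) at several sections over-determine the
% shared discharge series Q(t), which can be estimated by minimizing
\sum_{i,t} \bigl[ \log w_i(t) - \log a_i - b_i \log Q(t) \bigr]^2
% subject to the AMHG constraint above.
```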
13 CFR 134.102 - Jurisdiction of OHA.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Denial of program admission based solely on a negative finding as to social disadvantage, economic....html or through OHA's Web site http://www.sba.gov/oha) and subpart H of this part; (s) Appeals from...
Impact of cutting meat intake on hidden greenhouse gas emissions in an import-reliant city
NASA Astrophysics Data System (ADS)
Yau, Y. Y.; Thibodeau, B.; Not, C.
2018-06-01
Greenhouse gas emissions embodied in trade are a growing concern for the international community. Multiple studies have highlighted drawbacks in the territorial and production-based accounting of greenhouse gas emissions because it neglects emissions from the consumption of goods in trade. This creates weak carbon leakage and complicates international agreements on emissions regulations. Therefore, we estimated consumption-based emissions using input-output analysis and life cycle assessment to calculate the greenhouse gas emissions hidden in meat and dairy products in Hong Kong, a city predominantly reliant on imports. We found that emissions solely from meat and dairy consumption were higher than the city’s total greenhouse gas emissions using conventional production-based calculation. This implies that government reports underestimate more than half of the emissions, as 62% of emissions are embodied in international trade. The discrepancy emphasizes the need to transition climate targets and policy to consumption-based accounting. Furthermore, we have shown that dietary change from a meat-heavy diet to a diet in accordance with governmental nutrition guidelines could achieve a 67% reduction in livestock-related emissions, allowing Hong Kong to achieve the Paris Agreement targets for 2030. Consequently, we concluded that consumption-based accounting for greenhouse gas emissions is crucial to target the areas where emissions reduction is realistically achievable, especially for import-reliant cities like Hong Kong.
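A minimal sketch of the input-output step behind consumption-based accounting: total (direct plus supply-chain) emissions attributed to final demand follow from the Leontief inverse. The two-sector coefficients, demand, and emission intensities are illustrative assumptions, not the study's data.

```python
import numpy as np

# Technical coefficient matrix A (inter-industry requirements per unit output)
# and final demand y for a toy two-sector economy (illustrative values).
A = np.array([[0.10, 0.25],
              [0.05, 0.15]])
y = np.array([100.0, 80.0])        # final demand, e.g. household consumption
f = np.array([0.8, 2.5])           # direct emission intensity per unit output (kg CO2e / unit)

# Total output required to satisfy final demand: x = (I - A)^{-1} y
x = np.linalg.solve(np.eye(2) - A, y)

# Consumption-based emissions apply intensities to the whole supply chain;
# the second number applies them to the final-demand bundle only, for contrast.
consumption_based = f @ x
direct_only = f @ y
print(consumption_based, direct_only)
```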
5 CFR 550.143 - Bases for determining positions for which premium pay under § 550.141 is authorized.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Bases for determining positions for which... Standby Duty Pay § 550.143 Bases for determining positions for which premium pay under § 550.141 is... isolation, or solely because the employee lives on the grounds. (2) The hours during which the requirement...
5 CFR 550.143 - Bases for determining positions for which premium pay under § 550.141 is authorized.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Bases for determining positions for which... Standby Duty Pay § 550.143 Bases for determining positions for which premium pay under § 550.141 is... isolation, or solely because the employee lives on the grounds. (2) The hours during which the requirement...
Mechanical response tissue analyzer for estimating bone strength
NASA Technical Reports Server (NTRS)
Arnaud, Sara B.; Steele, Charles; Mauriello, Anthony
1991-01-01
One of the major concerns for extended space flight is weakness of the long bones of the legs, which are composed primarily of cortical bone and function to provide mechanical support. The strength of cortical bone is due to its complex structure, described simplistically as cylinders of parallel osteons composed of layers of mineralized collagen. The reduced mechanical stresses during space flight, or during immobilization of bone on Earth, reduce the mineral content and change the components of its matrix and structure so that its strength is reduced. Currently, the established clinical measures of bone strength are indirect. These measures are based on determinations of mineral density by means of radiography, photon absorptiometry, and quantitative computed tomography. While the mineral content of bone is essential to its strength, there is growing awareness of the limitations of this measurement as the sole predictor of fracture risk in metabolic bone diseases, especially osteoporosis. Other experimental methods in clinical trials that more directly evaluate the physical properties of bone, and do not require exposure to radiation, include ultrasound, acoustic emission, and low-frequency mechanical vibration. The last method can be considered a direct measure of the functional capacity of a long bone, since it quantifies the mechanical response to a stimulus delivered directly to the bone. A low-frequency vibration induces a response (impedance) curve with a minimum at the resonant frequency, which a few investigators use to evaluate the bone. An alternative approach, the method under consideration, is to use the response curve as the basis for determining the bone bending stiffness EI (where E is the intrinsic material property and I is the cross-sectional moment of inertia) and mass, fundamental mechanical properties of bone.
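For orientation, the link between a measured resonant frequency and bending stiffness EI can be illustrated with an idealized textbook case; the actual analyzer fits the full response curve rather than this simplification. For a simply supported uniform beam of length L and mass per unit length μ, the first resonant frequency gives EI directly:

```latex
f_1 \;=\; \frac{\pi}{2L^{2}}\sqrt{\frac{EI}{\mu}}
\qquad\Longrightarrow\qquad
EI \;=\; \mu\left(\frac{2L^{2}f_1}{\pi}\right)^{2}
```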
Neural Issues in the Control of Muscular Strength
ERIC Educational Resources Information Center
Kamen, Gary
2004-01-01
During the earliest stages of resistance exercise training, initial muscular strength gains occur too rapidly to be explained solely by muscle-based mechanisms. However, increases in surface-based EMG amplitude as well as motor unit discharge rate provide some insight to the existence of neural mechanisms in the earliest phases of resistance…
Cranford, James A.; Bohnert, Kipling M.; Perron, Brian E.; Bourque, Carrie; Ilgen, Mark
2016-01-01
Purpose To examine the prevalence and correlates of vaporization (i.e., “vaping”) as a route of cannabis administration in a sample of medical cannabis patients. Procedures Adults ages 21 and older (N = 1,485; M age = 45.1) who were seeking medical cannabis certification (either for the first time or as a renewal) at medical cannabis clinics in southern Michigan completed a screening assessment. Participants completed measures of route of cannabis administration, cannabis use, alcohol and other substance use. Findings An estimated 39% (n=511) of the sample reported past-month cannabis vaping, but vaping as the sole route of cannabis administration was rare. Specifically, only 30 participants (2.3% of the full sample and 5.9% of those who reported any vaping) indicated vaping as the sole route of cannabis administration. The majority (87.3%) of those who reported vaping also reported smoking (combustion) as a route of cannabis administration. Being younger than age 44, having more than a high school education, engaging in nonmedical stimulant use, being a returning medical cannabis patient, and greater frequency of cannabis use were associated with higher odds of vaping at the bivariate level and with all variables considered simultaneously. Conclusions Vaping appears to be relatively common among medical cannabis patients, but is seldom used as the sole route of cannabis administration. Results highlight the importance of monitoring trends in vaping and other substance use behaviors in this population and underscore the need for longitudinal research into the motives, correlates, and consequences of cannabis vaping in medical cannabis patients. PMID:27770657
Cranford, James A; Bohnert, Kipling M; Perron, Brian E; Bourque, Carrie; Ilgen, Mark
2016-12-01
To examine the prevalence and correlates of vaporization (i.e., "vaping") as a route of cannabis administration in a sample of medical cannabis patients. Adults ages 21 and older (N = 1485; M age = 45.1) who were seeking medical cannabis certification (either for the first time or as a renewal) at medical cannabis clinics in southern Michigan completed a screening assessment. Participants completed measures of route of cannabis administration, cannabis use, alcohol and other substance use. An estimated 39% (n=511) of the sample reported past-month cannabis vaping, but vaping as the sole route of cannabis administration was rare. Specifically, only 30 participants (2.3% of the full sample and 5.9% of those who reported any vaping) indicated vaping as the sole route of cannabis administration. The majority (87.3%) of those who reported vaping also reported smoking (combustion) as a route of cannabis administration. Being younger than age 44, having more than a high school education, engaging in nonmedical stimulant use, being a returning medical cannabis patient, and greater frequency of cannabis use were associated with higher odds of vaping at the bivariate level and with all variables considered simultaneously. Vaping appears to be relatively common among medical cannabis patients, but is seldom used as the sole route of cannabis administration. Results highlight the importance of monitoring trends in vaping and other substance use behaviors in this population and underscore the need for longitudinal research into the motives, correlates, and consequences of cannabis vaping in medical cannabis patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Stahnke, N; Liebscher, V; Staubach, C; Ziller, M
2013-11-01
The analysis of epidemiological field data from monitoring and surveillance systems (MOSSs) in wild animals is of great importance in order to evaluate the performance of such systems. By parameter estimation from MOSS data, conclusions about disease dynamics in the observed population can be drawn. To strengthen the analysis, the implementation of a maximum likelihood estimation is the main aim of our work. The new approach presented here is based on an underlying simple SIR (susceptible-infected-recovered) model for a disease scenario in a wildlife population. The three corresponding classes are assumed to govern the intensities (number of animals in the classes) of non-homogeneous Poisson processes. A sampling rate was defined which describes the process of data collection (for MOSSs). Further, the performance of the diagnostics was implemented in the model by a diagnostic matrix containing misclassification rates. Both descriptions of these MOSS parts were included in the Poisson process approach. For simulation studies, the combined model demonstrates its ability to validly estimate epidemiological parameters, such as the basic reproduction rate R0. These parameters will help the evaluation of existing disease control systems. They will also enable comparison with other simulation models. The model has been tested with data from a Classical Swine Fever (CSF) outbreak in wild boars (Sus scrofa scrofa L.) from a region of Germany (1999-2002). The results show that the hunting strategy as a sole control tool is insufficient to decrease the threshold for susceptible animals to eradicate the disease, since the estimated R0 confirms an ongoing epidemic of CSF. Copyright © 2013 Elsevier B.V. All rights reserved.
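A minimal sketch of the estimation idea described above: class sizes from a simple SIR model set the intensities of Poisson-distributed sample counts, thinned by a sampling rate and mixed by a diagnostic misclassification matrix, and the parameters are recovered by maximizing the Poisson likelihood. The discrete-time SIR step, sampling rate, misclassification matrix, and starting values are illustrative assumptions, not the published model or CSF data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

def sir_classes(beta, gamma, n0=1000.0, i0=10.0, steps=36):
    """Discrete-time SIR trajectory (monthly steps, assumed closed population)."""
    S, I, R_ = n0 - i0, i0, 0.0
    out = []
    for _ in range(steps):
        new_inf = beta * S * I / n0
        new_rec = gamma * I
        S, I, R_ = S - new_inf, I + new_inf - new_rec, R_ + new_rec
        out.append([S, I, R_])
    return np.array(out)

def neg_log_lik(params, counts, sampling_rate, M):
    beta, gamma = np.exp(params)                            # keep rates positive
    lam = sampling_rate * sir_classes(beta, gamma) @ M.T    # expected sample counts per observed class
    return -np.sum(poisson.logpmf(counts, np.maximum(lam, 1e-9)))

# Diagnostic matrix M: rows = observed class, columns = true class (assumed misclassification rates).
M = np.array([[0.95, 0.10, 0.02],
              [0.03, 0.85, 0.03],
              [0.02, 0.05, 0.95]])
rng = np.random.default_rng(1)
truth = 0.02 * sir_classes(0.6, 0.3) @ M.T                  # 2 % sampling rate
counts = rng.poisson(truth)

fit = minimize(neg_log_lik, x0=np.log([0.4, 0.2]), args=(counts, 0.02, M))
beta_hat, gamma_hat = np.exp(fit.x)
print(beta_hat, gamma_hat, "R0 ~", beta_hat / gamma_hat)
```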
Impact of Planetary Boundary Layer Depth on Climatological Tracer Transport in the GEOS-5 AGCM
NASA Astrophysics Data System (ADS)
McGrath-Spangler, E. L.; Molod, A.
2013-12-01
Planetary boundary layer (PBL) processes have large implications for tropospheric tracer transport since surface fluxes are diluted by the depth of the PBL through vertical mixing. However, no consensus on PBL depth definition currently exists and various methods for estimating this parameter can give results that differ by hundreds of meters or more. In order to facilitate comparisons between the Goddard Earth Observation System (GEOS-5) and other modeling and observational systems, seven PBL depth estimation methods are used to diagnose PBL depth and produce climatologies that are evaluated here. All seven methods evaluate a single atmosphere so differences are related solely to the definition chosen. PBL depths that are estimated using a Richardson number are shallower than those given by methods based on the scalar diffusivity during warm, moist conditions at midday and collapse to lower values at night. In GEOS-5, the PBL depth is used in the estimation of the turbulent length scale and so impacts vertical mixing. Changing the method used to determine the PBL depth for this length scale thus changes the tracer transport. Using a bulk Richardson number method instead of a scalar diffusivity method produces changes in the quantity of Saharan dust lofted into the free troposphere and advected to North America, with more surface dust in North America during boreal summer and less in boreal winter. Additionally, greenhouse gases are considerably impacted. During boreal winter, changing the PBL depth definition produces carbon dioxide differences of nearly 5 ppm over Siberia and gradients of about 5 ppm over 1000 km in Europe. PBL depth changes are responsible for surface carbon monoxide changes of 20 ppb or more over the biomass burning regions of Africa.
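A minimal sketch of one of the definitions compared above, the bulk Richardson number method: the PBL top is taken as the lowest level where the bulk Richardson number exceeds a critical value. The profile data and the critical value of 0.25 are illustrative assumptions.

```python
import numpy as np

def pbl_depth_bulk_richardson(z, theta_v, u, v, ri_crit=0.25, g=9.81):
    """Height where the bulk Richardson number first exceeds ri_crit.

    Rib(z) = g * z * (theta_v(z) - theta_v(z_sfc)) / (theta_v(z_sfc) * (u^2 + v^2))
    """
    thv0 = theta_v[0]
    wind2 = np.maximum(u**2 + v**2, 1e-6)
    rib = g * z * (theta_v - thv0) / (thv0 * wind2)
    above = np.where(rib > ri_crit)[0]
    if len(above) == 0:
        return z[-1]
    k = above[0]
    # Linear interpolation between the bracketing levels for a smoother estimate.
    frac = (ri_crit - rib[k - 1]) / (rib[k] - rib[k - 1])
    return z[k - 1] + frac * (z[k] - z[k - 1])

z       = np.array([10., 100., 300., 600., 900., 1200., 1500.])   # level heights (m)
theta_v = np.array([300.0, 300.1, 300.2, 300.4, 301.5, 303.0, 305.0])
u       = np.array([2., 4., 5., 6., 7., 8., 9.])
v       = np.array([0., 1., 1., 2., 2., 3., 3.])
print(pbl_depth_bulk_richardson(z, theta_v, u, v))
```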
The contribution of viral hepatitis to the burden of chronic liver disease in the United States.
Roberts, Henry W; Utuama, Ovie A; Klevens, Monina; Teshale, Eyasu; Hughes, Elizabeth; Jiles, Ruth
2014-03-01
Chronic liver disease (CLD) is increasingly recognized as a major public health problem. However, in the United States, there are few nationally representative data on the contribution of viral hepatitis as an etiology of CLD. We applied a previously used International Classification of Diseases, Ninth Revision, Clinical Modification-based definition of CLD cases to the National Ambulatory Medical Care Survey and National Hospital Ambulatory Medical Care Survey databases for 2006-2010. We estimated the mean number of CLD visits per year, prevalence ratio of visits by patient characteristics, and the percentage of CLD visits attributed to viral hepatitis and other selected etiologies. An estimated 6.0 billion ambulatory care visits occurred in the United States from 2006 to 2010, of which an estimated 25.8 million (0.43%) were CLD-related. Among adults aged 45-64 years, Medicaid and Medicare recipients were 3.9 (prevalence ratio (PR)=3.9, 95% confidence limit (CL; 2.8, 5.4)) and 2.3 (PR=2.3, 95% CL (1.6, 3.4)) times more likely to have a CLD-related ambulatory visit than those with private insurance, respectively. In the United States, from 2006 to 2010, an estimated 49.6% of all CLD-related ambulatory visits were attributed solely to viral hepatitis B and C diagnoses. In this unique application of health-care utilization data, we confirm that viral hepatitis is an important etiology of CLD in the United States, with hepatitis B and C contributing approximately one-half of the CLD burden. CLD ambulatory visits in the United States disproportionately occur among adults, aged 45-64 years, who are primarily minorities, men, and Medicare or Medicaid recipients.
NASA Astrophysics Data System (ADS)
Wilson, Barry T.; Knight, Joseph F.; McRoberts, Ronald E.
2018-03-01
Imagery from the Landsat Program has been used frequently as a source of auxiliary data for modeling land cover, as well as a variety of attributes associated with tree cover. With ready access to all scenes in the archive since 2008 due to the USGS Landsat Data Policy, new approaches to deriving such auxiliary data from dense Landsat time series are required. Several methods have previously been developed for use with finer temporal resolution imagery (e.g. AVHRR and MODIS), including image compositing and harmonic regression using Fourier series. The manuscript presents a study, using Minnesota, USA during the years 2009-2013 as the study area and timeframe. The study examined the relative predictive power of land cover models, in particular those related to tree cover, using predictor variables based solely on composite imagery versus those using estimated harmonic regression coefficients. The study used two common non-parametric modeling approaches (i.e. k-nearest neighbors and random forests) for fitting classification and regression models of multiple attributes measured on USFS Forest Inventory and Analysis plots using all available Landsat imagery for the study area and timeframe. The estimated Fourier coefficients developed by harmonic regression of tasseled cap transformation time series data were shown to be correlated with land cover, including tree cover. Regression models using estimated Fourier coefficients as predictor variables showed a two- to threefold increase in explained variance for a small set of continuous response variables, relative to comparable models using monthly image composites. Similarly, the overall accuracies of classification models using the estimated Fourier coefficients were approximately 10-20 percentage points higher than the models using the image composites, with corresponding individual class accuracies between six and 45 percentage points higher.
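A minimal sketch of harmonic regression on a per-pixel reflectance time series: a mean term plus annual and semi-annual sine/cosine terms fitted by ordinary least squares, yielding the Fourier coefficients used as predictor variables. The synthetic series and the number of harmonics are illustrative assumptions.

```python
import numpy as np

def harmonic_design(t, n_harmonics=2, period=365.25):
    """Design matrix [1, cos(2*pi*k*t/T), sin(2*pi*k*t/T)] for k = 1..n_harmonics."""
    cols = [np.ones_like(t, dtype=float)]
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k * t / period
        cols += [np.cos(w), np.sin(w)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 5 * 365.25, size=60))                # irregular acquisition dates over 5 years
signal = 0.4 + 0.15 * np.cos(2 * np.pi * t / 365.25 - 1.0)      # e.g., a tasseled cap component
obs = signal + rng.normal(0, 0.02, size=t.size)

X = harmonic_design(t, n_harmonics=2)
coef, *_ = np.linalg.lstsq(X, obs, rcond=None)                  # estimated Fourier coefficients
print(coef)                                                     # these become the predictor variables
```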
Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan
2015-11-01
Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition.
Moraes, Carolina Borsoi; Yang, Gyongseon; Kang, Myungjoo; Freitas-Junior, Lucio H.; Hansen, Michael A. E.
2014-01-01
We present a customized high-content (image-based) and high-throughput screening algorithm for the quantification of Trypanosoma cruzi infection in host cells. Based solely on DNA staining and single-channel images, the algorithm precisely segments and identifies the nuclei and cytoplasm of mammalian host cells as well as the intracellular parasites infecting the cells. The algorithm outputs statistical parameters including the total number of cells, the number of infected cells, the total number of parasites per image, the average number of parasites per infected cell, and the infection ratio (defined as the number of infected cells divided by the total number of cells). Accurate and precise estimation of these parameters allows quantification of both compound activity against parasites and compound cytotoxicity, thus eliminating the need for an additional toxicity assay and thereby reducing screening costs significantly. We validate the performance of the algorithm using two known drugs against T. cruzi: benznidazole and nifurtimox. We also checked the performance of the cell detection against manual inspection of the images. Finally, from the titration of the two compounds, we confirm that the algorithm provides the expected half maximal effective concentration (EC50) of the anti-T. cruzi activity. PMID:24503652
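A minimal sketch of the per-image statistics and of fitting an EC50 from a titration, assuming per-host-cell parasite counts are already available from segmentation; the counts, concentrations, and the four-parameter logistic form are illustrative assumptions, not the published pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def image_stats(parasites_per_cell):
    """Summary parameters for one image from per-host-cell parasite counts."""
    counts = np.asarray(parasites_per_cell)
    infected = counts > 0
    return {
        "total_cells": counts.size,
        "infected_cells": int(infected.sum()),
        "total_parasites": int(counts.sum()),
        "parasites_per_infected_cell": counts[infected].mean() if infected.any() else 0.0,
        "infection_ratio": infected.mean(),
    }

print(image_stats([0, 3, 0, 5, 1, 0, 0, 2]))

# Four-parameter logistic fit of normalized infection ratio vs. compound concentration.
def hill(c, bottom, top, ec50, slope):
    return bottom + (top - bottom) / (1.0 + (c / ec50) ** slope)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])        # uM, illustrative titration
resp = np.array([0.98, 0.95, 0.85, 0.60, 0.30, 0.10, 0.05])    # normalized infection ratio
popt, _ = curve_fit(hill, conc, resp, p0=[0.0, 1.0, 0.3, 1.0])
print("EC50 ~", popt[2])
```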
Perco, Paul; Heinzel, Andreas; Leierer, Johannes; Schneeberger, Stefan; Bösmüller, Claudia; Oberhuber, Rupert; Wagner, Silvia; Engler, Franziska; Mayer, Gert
2018-05-03
Donor organ quality affects long term outcome after renal transplantation. A variety of prognostic molecular markers is available, yet their validity often remains undetermined. A network-based molecular model reflecting donor kidney status based on transcriptomics data and molecular features reported in scientific literature to be associated with chronic allograft nephropathy was created. Significantly enriched biological processes were identified and representative markers were selected. An independent kidney pre-implantation transcriptomics dataset of 76 organs was used to predict estimated glomerular filtration rate (eGFR) values twelve months after transplantation using available clinical data and marker expression values. The best-performing regression model solely based on the clinical parameters donor age, donor gender, and recipient gender explained 17% of variance in post-transplant eGFR values. The five molecular markers EGF, CD2BP2, RALBP1, SF3B1, and DDX19B representing key molecular processes of the constructed renal donor organ status molecular model in addition to the clinical parameters significantly improved model performance (p-value = 0.0007) explaining around 33% of the variability of eGFR values twelve months after transplantation. Collectively, molecular markers reflecting donor organ status significantly add to prediction of post-transplant renal function when added to the clinical parameters donor age and gender.
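A minimal sketch of the model comparison described above: a clinical-only linear regression for 12-month eGFR versus a model that adds marker expression values, compared by explained variance and a nested-model F-test. The synthetic data, effect sizes, and the choice of an F-test are placeholders, not the authors' data or exact statistics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 76
donor_age  = rng.uniform(20, 70, n)
donor_male = rng.integers(0, 2, n).astype(float)
recip_male = rng.integers(0, 2, n).astype(float)
markers    = rng.normal(size=(n, 5))                    # stand-ins for 5 marker expression values
egfr = 90 - 0.6 * donor_age + 3 * donor_male + markers @ [4, -3, 2, -2, 1] + rng.normal(0, 8, n)

def fit_r2(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return r2, resid @ resid, X1.shape[1]

clinical = np.column_stack([donor_age, donor_male, recip_male])
r2_c, rss_c, p_c = fit_r2(clinical, egfr)
r2_f, rss_f, p_f = fit_r2(np.column_stack([clinical, markers]), egfr)

# F-test for the markers added in the nested comparison.
F = ((rss_c - rss_f) / (p_f - p_c)) / (rss_f / (n - p_f))
p_val = stats.f.sf(F, p_f - p_c, n - p_f)
print(f"clinical R2={r2_c:.2f}, clinical+markers R2={r2_f:.2f}, p={p_val:.4f}")
```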
Acoustic window planning for ultrasound acquisition.
Göbl, Rüdiger; Virga, Salvatore; Rackerseder, Julia; Frisch, Benjamin; Navab, Nassir; Hennersperger, Christoph
2017-06-01
Autonomous robotic ultrasound has recently gained considerable interest, especially for collaborative applications. Existing methods for acquisition trajectory planning are solely based on geometrical considerations, such as the pose of the transducer with respect to the patient surface. This work aims at establishing acoustic window planning to enable autonomous ultrasound acquisitions of anatomies with restricted acoustic windows, such as the liver or the heart. We propose a fully automatic approach for the planning of acquisition trajectories, which only requires information about the target region as well as existing tomographic imaging data, such as X-ray computed tomography. The framework integrates both geometrical and physics-based constraints to estimate the best ultrasound acquisition trajectories with respect to the available acoustic windows. We evaluate the developed method using virtual planning scenarios based on real patient data as well as for real robotic ultrasound acquisitions on a tissue-mimicking phantom. The proposed method yields superior image quality in comparison with a naive planning approach, while maintaining the necessary coverage of the target. We demonstrate that by taking image formation properties into account acquisition planning methods can outperform naive plannings. Furthermore, we show the need for such planning techniques, since naive approaches are not sufficient as they do not take the expected image quality into account.
NASA Technical Reports Server (NTRS)
Hatfield, Glen S.; Hark, Frank; Stott, James
2016-01-01
Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account risks attributable to manufacturing, assembly, and process controls, yet these sources often dominate component-level reliability and the probability of failure. While the consequences of failure are often understood in assessing risk, using predicted values in a risk model to estimate the probability of occurrence will likely underestimate the risk. Managers and decision makers often use the probability of occurrence in determining whether to accept the risk or require a design modification. Due to the absence of system-level test and operational data inherent in aerospace applications, the actual risk threshold for acceptance may not be appropriately characterized for decision-making purposes. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful for arriving at a more realistic quantification of risk prior to acceptance by a program.
Boulder-based wave hindcasting underestimates storm size
NASA Astrophysics Data System (ADS)
Kennedy, David; Woods, Josephine; Rosser, Nick; Hansom, James; Naylor, Larissa
2017-04-01
Large boulder-sized clasts represent an important archive of erosion and wave activity on the coast. From tropical coral reefs to eroding cliffs in the high latitudes, boulders have been used to hindcast the frequency and magnitude of cyclones and tsunami. Such reconstructions are based on the balance between the hydrodynamic forces acting on individual clasts and the counteracting resistive forces of friction and gravity. Here we test the three principal hindcasting relationships on nearly 1000 intertidal boulders in North Yorkshire, U.K., using a combination of field and airborne terrestrial LiDAR data. We quantify the predicted versus actual rates of movement and the degree to which local geomorphology can retard or accelerate transport. Actual clast movement is significantly less than predicted, regardless of boulder volume, shape or location. In situ cementation of clasts to the substrate by marine organisms, and the clustering of clasts, increases friction and thereby prevents transport. The implication is that boulders do not always provide a reliable estimate of wave height on the coast, and reliance solely on hindcasting relationships leads to an underprediction of the frequency and magnitude of past storm wave activity. The crucial need for process field studies to refine boulder transport models is thus demonstrated.
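The hindcasting relationships referred to above balance hydrodynamic forcing against resistance. In the simplest sliding case (one common formulation, not necessarily the exact equations tested here), transport requires the drag force to exceed the frictional resistance from submerged weight, which gives a minimum flow velocity of the form

```latex
\tfrac{1}{2}\,\rho_w C_d A\, u^{2} \;\ge\; \mu\,(\rho_s - \rho_w)\, g\, V
\qquad\Longrightarrow\qquad
u_{\min} = \sqrt{\frac{2\,\mu\,(\rho_s - \rho_w)\, g\, V}{\rho_w\, C_d\, A}},
```

where rho_s and rho_w are clast and water densities, V and A are clast volume and flow-facing area, C_d is a drag coefficient and mu a friction coefficient. Cementation and clustering effectively raise mu, consistent with the observed over-prediction of movement by relationships that ignore them.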
Toxicokinetic and Dosimetry Modeling Tools for Exposure ...
New technologies and in vitro testing approaches have been valuable additions to risk assessments that have historically relied solely on in vivo test results. Compared to in vivo methods, in vitro high throughput screening (HTS) assays are less expensive, faster and can provide mechanistic insights on chemical action. However, extrapolating from in vitro chemical concentrations to target tissue or blood concentrations in vivo is fraught with uncertainties, and modeling is dependent upon pharmacokinetic variables not measured in in vitro assays. To address this need, new tools have been created for characterizing, simulating, and evaluating chemical toxicokinetics. Physiologically-based pharmacokinetic (PBPK) models provide estimates of chemical exposures that produce potentially hazardous tissue concentrations, while tissue microdosimetry PK models relate whole-body chemical exposures to cell-scale concentrations. These tools rely on high-throughput in vitro measurements, and successful methods exist for pharmaceutical compounds that determine PK from limited in vitro measurements and chemical structure-derived property predictions. These high throughput (HT) methods provide a more rapid and less resource–intensive alternative to traditional PK model development. We have augmented these in vitro data with chemical structure-based descriptors and mechanistic tissue partitioning models to construct HTPBPK models for over three hundred environmental and pharmace
Deslauriers, David; Rosburg, Alex J.; Chipps, Steven R.
2017-01-01
We developed a foraging model for young fishes that incorporates handling and digestion rate to estimate daily food consumption. Feeding trials were used to quantify functional feeding response, satiation, and gut evacuation rate. Once parameterized, the foraging model was then applied to evaluate effects of prey type, prey density, water temperature, and fish size on daily feeding rate by age-0 (19–70 mm) pallid sturgeon (Scaphirhynchus albus). Prey consumption was positively related to prey density (for fish >30 mm) and water temperature, but negatively related to prey size and the presence of sand substrate. Model evaluation results revealed good agreement between observed estimates of daily consumption and those predicted by the model (r2 = 0.95). Model simulations showed that fish feeding on Chironomidae or Ephemeroptera larvae were able to gain mass, whereas fish feeding solely on zooplankton lost mass under most conditions. By accounting for satiation and digestive processes in addition to handling time and prey density, the model provides realistic estimates of daily food consumption that can prove useful for evaluating rearing conditions for age-0 fishes.
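A minimal sketch of the ingredients named above: a Holling type II functional response for encounter and handling, capped by satiation (gut capacity), with exponential gut evacuation between time steps. Parameter values are illustrative assumptions, not the fitted values for pallid sturgeon.

```python
def daily_consumption(prey_density, hours=24, attack=0.8, handling=0.05,
                      gut_capacity=0.08, evac_rate=0.25, prey_mass=0.001):
    """Hourly simulation of feeding limited by handling time and gut fullness.

    attack: volume searched per hour; handling: hours per prey item;
    gut_capacity: g; evac_rate: proportion of gut contents evacuated per hour."""
    gut, eaten = 0.0, 0.0
    for _ in range(hours):
        # Holling type II: prey items encountered and handled per hour.
        items = attack * prey_density / (1.0 + attack * handling * prey_density)
        intake = min(items * prey_mass, max(gut_capacity - gut, 0.0))  # satiation cap
        gut += intake
        eaten += intake
        gut *= (1.0 - evac_rate)                                       # gut evacuation
    return eaten                                                       # g prey per day

for density in (50, 200, 800):            # prey per unit search volume
    print(density, round(daily_consumption(density), 4))
```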
Wang, Chi-Yu; Chang, Chun-Kai; Chou, Chang-Yi; Wu, Chien-Ju; Chu, Tzi-Shiang; Chiao, Hao-Yu; Chen, Chun-Yu; Chen, Tim-Mo; Tzeng, Yuan-Sheng
2018-02-01
Plantar hyperkeratosis, such as corns and calluses, is common in older people and associated with pain, mobility impairment, and functional limitations. It usually develops on the palms, knees, or soles of feet, especially under the heels or balls. There are several treatment methods for plantar hyperkeratosis, such as salicylic acid plaster and scalpel debridement, and conservative modalities, such as using a shoe insert and properly fitting shoes. We present an effective method of reconstructing the wound after corn excision using a split-thickness sole skin graft (STSSG). We harvested the skin graft from the arch of the sole using the dermatome with a skin thickness of 14/1000th inches. Because the split-thickness skin graft, harvested from the sole arch near the distal sole, is much thicker than the split-thickness skin graft from the thigh, it is more resistant to weight and friction. The healed wound with STSSG coverage over the distal sole was intact, and the donor site over the sole arch had healed without complication during the outpatient follow-up, 3 months after surgery. The recovery time of STSSG for corn excision is shorter than that with traditional treatment. Therefore, STSSG can be a reliable alternative treatment for recurrent palmoplantar hyperkeratosis.
Submarine harbor navigation using image data
NASA Astrophysics Data System (ADS)
Stubberud, Stephen C.; Kramer, Kathleen A.
2017-01-01
The process of ingress and egress of a United States Navy submarine is a human-intensive process that requires numerous individuals to monitor the boat's position and watch for hazards. Sailors pass verbal information to the bridge, where it is processed manually. There is interest in using video imaging of the periscope view to more automatically provide navigation within harbors and other points of ingress and egress. In this paper, video-based navigation is examined as a target-tracking problem. While some image-processing methods claim to provide range information, the moving-platform problem and weather concerns, such as fog, reduce the effectiveness of these range estimates. The video-navigation problem then becomes an angle-only tracking problem. Angle-only tracking is known to be fraught with difficulties, because the unobservable space is not the null space. When a Kalman filter estimator is used to perform the tracking, significant errors can arise that could endanger the submarine. This work analyzes the performance of the Kalman filter when angle-only measurements are used to provide the target tracks. This paper addresses estimation unobservability and the minimal set of requirements needed to address it in this complex but real-world problem. Three major issues are addressed: knowledge of the navigation beacons'/landmarks' locations, the minimal number of beacons needed to maintain the course, and the update rates of the landmark angles as the periscope rotates and landmarks become obscured by blockage and weather. The goal is to address navigation to and from the dock while traversing the harbor channel according to maritime rules, relying solely on image-based data. The minimal number of beacons is considered. For this effort, the image correlation from frame to frame is assumed to be achieved perfectly. Variation in the update rates and the dropping of data due to rotation and obscuration are considered. The analysis is based on a simple straight-line channel entry to the dock, similar to a submarine entering the submarine port in San Diego.
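A minimal sketch of the bearing-only measurement update at the core of such a tracker: with known beacon positions, each observed bearing from the periscope constrains own-ship position, and an extended Kalman filter linearizes the arctangent measurement model. The geometry, noise levels, and beacon layout are illustrative assumptions.

```python
import numpy as np

def ekf_bearing_update(x, P, bearing, beacon, r_var=np.radians(0.5) ** 2):
    """EKF update of own-ship state x = [east, north, v_e, v_n] from one bearing
    (radians, measured clockwise from north) to a beacon at a known position."""
    dx, dy = beacon[0] - x[0], beacon[1] - x[1]
    pred = np.arctan2(dx, dy)                        # predicted bearing
    r2 = dx**2 + dy**2
    H = np.zeros((1, 4))
    H[0, 0] = -dy / r2                               # d(bearing)/d(east)
    H[0, 1] =  dx / r2                               # d(bearing)/d(north)
    innov = np.array([(bearing - pred + np.pi) % (2 * np.pi) - np.pi])   # wrap to [-pi, pi]
    S = H @ P @ H.T + r_var
    K = P @ H.T / S
    x = x + (K * innov).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.array([0.0, 0.0, 2.0, 0.0])                   # initial guess (m, m, m/s, m/s)
P = np.diag([100.0**2, 100.0**2, 1.0, 1.0])
beacons = [np.array([500.0, 800.0]), np.array([-300.0, 900.0]), np.array([200.0, -600.0])]
true_pos = np.array([40.0, -25.0])
for b in beacons:                                    # one pass of noiseless simultaneous bearings
    z = np.arctan2(b[0] - true_pos[0], b[1] - true_pos[1])
    x, P = ekf_bearing_update(x, P, z, b)
print(x[:2])                                         # estimated position moves toward true_pos
```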
Müller, Elisabeth; Schüssler, Walter; Horn, Harald; Lemmer, Hilde
2013-08-01
Potential aerobic biodegradation mechanisms of the widely used polar, low-adsorptive sulfonamide antibiotic sulfamethoxazole (SMX) were investigated in activated sludge at bench scale. The study focused on (i) SMX co-metabolism with acetate and ammonium nitrate and (ii) SMX utilization when present as the sole carbon and nitrogen source. With SMX adsorption being negligible, elimination was primarily based on biodegradation. Activated sludge was able to utilize SMX both as a carbon and/or nitrogen source. SMX biodegradation was enhanced when a readily degradable energy supply (acetate) was provided which fostered metabolic activity. Moreover, it was raised under nitrogen deficiency conditions. The mass balance for dissolved organic carbon showed an incomplete SMX mineralization with two scenarios: (i) with SMX as a co-substrate, 3-amino-5-methyl-isoxazole represented the main stable metabolite and (ii) SMX as sole carbon and nitrogen source possibly yielded hydroxyl-N-(5-methyl-1,2-oxazole-3-yl)benzene-1-sulfonamide as a further metabolite. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Papagiannis, P.; Azariadis, P.; Papanikos, P.
2017-10-01
Footwear is subject to bending and torsion deformations that affect comfort perception. Following review of Finite Element Analysis studies of sole rigidity and comfort, a three-dimensional, linear multi-material finite element sole model for quasi-static bending and torsion simulation, overcoming boundary and optimisation limitations, is described. Common footwear materials properties and boundary conditions from gait biomechanics are used. The use of normalised strain energy for product benchmarking is demonstrated along with comfort level determination through strain energy density stratification. Sensitivity of strain energy against material thickness is greater for bending than for torsion, with results of both deformations showing positive correlation. Optimization for a targeted performance level and given layer thickness is demonstrated with bending simulations sufficing for overall comfort assessment. An algorithm for comfort optimization w.r.t. bending is presented, based on a discrete approach with thickness values set in line with practical manufacturing accuracy. This work illustrates the potential of the developed finite element analysis applications to offer viable and proven aids to modern footwear sole design assessment and optimization.
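A minimal sketch of the discrete optimization idea mentioned above: candidate layer thicknesses are restricted to manufacturable increments, and the candidate whose simulated bending strain energy is closest to a target comfort level is selected. The strain-energy function below is only a stand-in for the finite element solve, and all numbers are illustrative assumptions.

```python
import numpy as np

def bending_strain_energy(thickness_mm):
    """Placeholder surrogate for the FEA bending simulation (illustrative only)."""
    return 12.0 / thickness_mm**1.5      # energy drops as the sole becomes thicker/stiffer

def choose_thickness(target_energy, t_min=4.0, t_max=14.0, step=0.5):
    candidates = np.arange(t_min, t_max + step, step)   # practical manufacturing increments
    energies = np.array([bending_strain_energy(t) for t in candidates])
    best = np.argmin(np.abs(energies - target_energy))
    return candidates[best], energies[best]

print(choose_thickness(target_energy=0.6))
```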
Transmyocardial revascularization by a 1000-watt CO2 laser: sole therapy (Abstract Only)
NASA Astrophysics Data System (ADS)
Crew, John R.; Dean, Marilyn; Jones, Reinold; Fisher, John C.
1993-05-01
The concept of transmyocardial revascularization (TMR), providing blood flow to the left heart muscle based on the reptilian heart model, has now been extended from an adjunctive procedure with coronary artery bypass to sole therapy. At Seton Hospital and Medical Center, TMR is now being performed for the first time in clinical trials in patients who have no other mechanism of perfusion, a history of either failed PTCA or coronary artery bypass, angina already under maximum medical therapy, and a demonstrable ischemic muscle target. The longevity of these laser-drilled holes and their reperfusion, shown by tomographic thallium, have been demonstrated previously, but the effectiveness of these channels for primary perfusion (sole therapy), apart from normal coronary bypass collateral supply, is under investigation. Phase I of the FDA study has been completed with 15 cases, and Phase II now includes three other beta test sites, along with alternative therapy in marginal cases, as the investigational format for the next 50 cases. More than 2 years of follow-up in the first 15 cases is presented.
Liu, Feng; Tian, Yu; Ding, Yi; Li, Zhipeng
2016-11-01
Wastewater primary sedimentation sludge was fermented to produce a fermentation liquid for use as a denitrification carbon source; the main components of the fermentation liquid were short-chain volatile fatty acids (SCVFAs). Acetic acid and propionic acid accounted for about 29.36% and 26.56% of the SCVFAs, respectively. The performance of the fermentation liquid, methanol, acetic acid, propionic acid and glucose used as the sole carbon source was compared. It was found that the denitrification rate with the fermentation liquid as carbon source was 0.17 mg NO3(-)-N per mg mixed liquor suspended solids per day, faster than with methanol, acetic acid, or propionic acid as the sole carbon source, and lower than with glucose as the sole carbon source. With the fermentation liquid as carbon source, the transient accumulation of nitrite was insignificant under different initial total nitrogen concentrations. Therefore, using the fermentation liquid for nitrogen removal could improve the denitrification rate and reduce nitrite accumulation in the denitrification process. Copyright © 2016 Elsevier Ltd. All rights reserved.
Howes, Rosalind E.; Piel, Frédéric B.; Patil, Anand P.; Nyangiri, Oscar A.; Gething, Peter W.; Dewi, Mewahyu; Hogg, Mariana M.; Battle, Katherine E.; Padilla, Carmencita D.; Baird, J. Kevin; Hay, Simon I.
2012-01-01
Background Primaquine is a key drug for malaria elimination. In addition to being the only drug active against the dormant relapsing forms of Plasmodium vivax, primaquine is the sole effective treatment of infectious P. falciparum gametocytes, and may interrupt transmission and help contain the spread of artemisinin resistance. However, primaquine can trigger haemolysis in patients with a deficiency in glucose-6-phosphate dehydrogenase (G6PDd). Poor information is available about the distribution of individuals at risk of primaquine-induced haemolysis. We present a continuous evidence-based prevalence map of G6PDd and estimates of affected populations, together with a national index of relative haemolytic risk. Methods and Findings Representative community surveys of phenotypic G6PDd prevalence were identified for 1,734 spatially unique sites. These surveys formed the evidence-base for a Bayesian geostatistical model adapted to the gene's X-linked inheritance, which predicted a G6PDd allele frequency map across malaria endemic countries (MECs) and generated population-weighted estimates of affected populations. Highest median prevalence (peaking at 32.5%) was predicted across sub-Saharan Africa and the Arabian Peninsula. Although G6PDd prevalence was generally lower across central and southeast Asia, rarely exceeding 20%, the majority of G6PDd individuals (67.5% median estimate) were from Asian countries. We estimated a G6PDd allele frequency of 8.0% (interquartile range: 7.4–8.8) across MECs, and 5.3% (4.4–6.7) within malaria-eliminating countries. The reliability of the map is contingent on the underlying data informing the model; population heterogeneity can only be represented by the available surveys, and important weaknesses exist in the map across data-sparse regions. Uncertainty metrics are used to quantify some aspects of these limitations in the map. Finally, we assembled a database of G6PDd variant occurrences to inform a national-level index of relative G6PDd haemolytic risk. Asian countries, where variants were most severe, had the highest relative risks from G6PDd. Conclusions G6PDd is widespread and spatially heterogeneous across most MECs where primaquine would be valuable for malaria control and elimination. The maps and population estimates presented here reflect potential risk of primaquine-associated harm. In the absence of non-toxic alternatives to primaquine, these results represent additional evidence to help inform safe use of this valuable, yet dangerous, component of the malaria-elimination toolkit. Please see later in the article for the Editors' Summary PMID:23152723
Howes, Rosalind E; Piel, Frédéric B; Patil, Anand P; Nyangiri, Oscar A; Gething, Peter W; Dewi, Mewahyu; Hogg, Mariana M; Battle, Katherine E; Padilla, Carmencita D; Baird, J Kevin; Hay, Simon I
2012-01-01
Primaquine is a key drug for malaria elimination. In addition to being the only drug active against the dormant relapsing forms of Plasmodium vivax, primaquine is the sole effective treatment of infectious P. falciparum gametocytes, and may interrupt transmission and help contain the spread of artemisinin resistance. However, primaquine can trigger haemolysis in patients with a deficiency in glucose-6-phosphate dehydrogenase (G6PDd). Poor information is available about the distribution of individuals at risk of primaquine-induced haemolysis. We present a continuous evidence-based prevalence map of G6PDd and estimates of affected populations, together with a national index of relative haemolytic risk. Representative community surveys of phenotypic G6PDd prevalence were identified for 1,734 spatially unique sites. These surveys formed the evidence-base for a Bayesian geostatistical model adapted to the gene's X-linked inheritance, which predicted a G6PDd allele frequency map across malaria endemic countries (MECs) and generated population-weighted estimates of affected populations. Highest median prevalence (peaking at 32.5%) was predicted across sub-Saharan Africa and the Arabian Peninsula. Although G6PDd prevalence was generally lower across central and southeast Asia, rarely exceeding 20%, the majority of G6PDd individuals (67.5% median estimate) were from Asian countries. We estimated a G6PDd allele frequency of 8.0% (interquartile range: 7.4-8.8) across MECs, and 5.3% (4.4-6.7) within malaria-eliminating countries. The reliability of the map is contingent on the underlying data informing the model; population heterogeneity can only be represented by the available surveys, and important weaknesses exist in the map across data-sparse regions. Uncertainty metrics are used to quantify some aspects of these limitations in the map. Finally, we assembled a database of G6PDd variant occurrences to inform a national-level index of relative G6PDd haemolytic risk. Asian countries, where variants were most severe, had the highest relative risks from G6PDd. G6PDd is widespread and spatially heterogeneous across most MECs where primaquine would be valuable for malaria control and elimination. The maps and population estimates presented here reflect potential risk of primaquine-associated harm. In the absence of non-toxic alternatives to primaquine, these results represent additional evidence to help inform safe use of this valuable, yet dangerous, component of the malaria-elimination toolkit. Please see later in the article for the Editors' Summary.
James S. Meadows; Daniel A. Skojac
2012-01-01
Stand quality management is a new management strategy in which thinning prescriptions are based solely on tree quality rather than a quantitative level of residual stand density. As long as residual density falls within fairly broad limits, prescriptions are based on tree quality alone. We applied four thinning prescriptions based on stand quality management, along...
Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira
2015-12-18
A common approach for calculating the retention time of a compound in GC starts from the thermodynamic properties ΔH, ΔS and ΔCp of the phase change (from mobile to stationary phase). These properties can be estimated from experimental retention time data, which leads to a non-linear regression problem for non-isothermal temperature programs. As shown in this work, the surface of the objective function (the approximation error criterion) over the thermodynamic parameters can be divided into three clearly defined regions, and the global optimum can lie in only one of them. The main contribution of this study is an algorithm that distinguishes the different regions of the error surface and its use for robust initialization of the estimation of the parameters ΔH, ΔS and ΔCp. Copyright © 2015 Elsevier B.V. All rights reserved.
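To make the regression concrete, the sketch below (Python, entirely synthetic numbers) fits ΔH and ΔS by non-linear least squares for the simpler isothermal case, where ln k = -ΔH/(RT) + ΔS/R - ln β and ΔCp can be neglected. The phase ratio β, the data and the starting values are assumptions; the paper's temperature-programmed problem additionally requires integrating k(T) over the program, which is where the three-region error surface and the robust initialization matter.

# Minimal sketch: fit ΔH and ΔS from isothermal retention factors, assuming
# ln k = -ΔH/(R*T) + ΔS/R - ln(beta).  Illustrative only; the paper treats the
# harder non-isothermal (temperature-programmed) case and also fits ΔCp.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314          # gas constant, J/(mol K)
beta = 250.0       # assumed column phase ratio

def ln_k(T, dH, dS):
    return -dH / (R * T) + dS / R - np.log(beta)

# Hypothetical isothermal measurements: temperatures (K) and retention factors
T_obs = np.array([373.0, 393.0, 413.0, 433.0, 453.0])
k_obs = np.array([12.4, 5.9, 3.1, 1.8, 1.1])

# Starting values matter in the general problem: the error surface has several
# regions and only one of them contains the global optimum.
p0 = (-4.0e4, -50.0)
(dH, dS), _ = curve_fit(ln_k, T_obs, np.log(k_obs), p0=p0)
print(f"ΔH ≈ {dH / 1000:.1f} kJ/mol, ΔS ≈ {dS:.1f} J/(mol K)")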
NASA Astrophysics Data System (ADS)
Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott
2017-09-01
We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model based solely on physically measured display characteristics, and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICT-CP), which consists of the PQ luminance non-linearity (ST2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter were investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated and we found that models based on the PQ non-linearity performed better.
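As a rough illustration of the model comparison (not the study's data or feature set), the following sketch fits a linear regression and an RBF-kernel SVM regressor to synthetic display-parameter/quality pairs and scores both with RMSE and Spearman correlation; the simulated dependence of quality on the parameters is an assumption made purely for the example.

# Synthetic comparison of linear regression vs. SVM regression for predicting
# a quality rating from display parameters, scored by RMSE and Spearman rho.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(100, 4000, n),     # maximum luminance (cd/m^2)
    rng.uniform(0.0001, 0.1, n),   # minimum luminance (cd/m^2)
    rng.uniform(0.7, 1.0, n),      # color gamut coverage
    rng.uniform(8, 12, n),         # bit depth
])
# Assumed (made-up) dependence of subjective quality on the parameters
y = (np.log10(X[:, 0]) - np.log10(X[:, 1]) + 3.0 * X[:, 2] + 0.2 * X[:, 3]
     + rng.normal(0.0, 0.3, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
models = {
    "linear regression": make_pipeline(StandardScaler(), LinearRegression()),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
    rho, _ = spearmanr(pred, y_te)
    print(f"{name}: RMSE = {rmse:.3f}, Spearman rho = {rho:.3f}")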
Santos-Neto, Guilherme da Cruz; Beasley, Colin Robert; Schneider, Horacio; Pimpão, Daniel Mansur; Hoeh, Walter Randolph; Simone, Luiz Ricardo Lopes de; Tagliaro, Claudia Helena
2016-07-01
The current phylogenetic framework for the South American Hyriidae is solely based on morphological data. However, freshwater bivalve morphology is highly variable due to both genetic and environmental factors. The present study used both mitochondrial (COI and 16S) and nuclear (18S-ITS1) sequences in molecular phylogenetic analyses of nine Neotropical species of Hyriidae, collected from 15 South American rivers, and sequences of hyriids from Australia and New Zealand obtained from GenBank. The present molecular findings support traditional taxonomic proposals, based on morphology, for the South American subfamily Hyriinae, currently divided into three tribes: Hyriini, Castaliini and Rhipidodontini. Phylogenetic trees based on COI nucleotide sequences revealed at least four geographical groups of Castalia ambigua: northeast Amazon (Piriá, Tocantins and Caeté rivers), central Amazon, including C. quadrata (Amazon and Aripuanã rivers), north (Trombetas river), and C. ambigua from Peru. Genetic distances suggest that some specimens may be cryptic species. Among the Hyriini, a total evidence data set generated phylogenetic trees indicating that Paxyodon syrmatophorus and Prisodon obliquus are more closely related, followed by Triplodon corrugatus. The molecular clock, based on COI, agreed with the fossil record of Neotropical hyriids. The ancestor of both Australasian and Neotropical Hyriidae is estimated to have lived around 225 million years ago. Copyright © 2016 Elsevier Inc. All rights reserved.
Tzamaloukas, Antonios H; Murata, Glen H; Piraino, Beth; Raj, Dominic S C; VanderJagt, Dorothy J; Bernardini, Judith; Servilla, Karen S; Sun, Yijuan; Glew, Robert H; Oreopoulos, Dimitrios G
2010-03-01
We identified factors that account for differences between lean body mass computed from creatinine kinetics (LBM(cr)) and from either body water (LBM(V)) or body mass index (LBM(BMI)) in patients on continuous peritoneal dialysis (CPD). We compared the LBM(cr) and LBM(V) or LBM(BMI) in hypothetical subjects and actual CPD patients. We studied 439 CPD patients in Albuquerque, Pittsburgh, and Toronto, with 925 clearance studies. Creatinine production was estimated using formulas derived in CPD patients. Body water (V) was estimated from anthropometric formulas. We calculated LBM(BMI) from a formula that estimates body composition based on body mass index. In hypothetical subjects, LBM values were calculated by varying the determinants of body composition (gender, diabetic status, age, weight, and height) one at a time, while the other determinants were kept constant. In actual CPD patients, multiple linear regression and logistic regression were used to identify factors associated with differences in the estimates of LBM (LBM(cr)
Muhsen, K; Anis, E; Rubinstein, U; Kassem, E; Goren, S; Shulman, L M; Ephros, M; Cohen, D
2018-01-01
The use of rotavirus pentavalent vaccine (RotaTeq ® ) as a sole vaccine within rotavirus universal immunization programmes remains limited. We examined the effectiveness of RotaTeq in preventing rotavirus gastroenteritis (RVGE) hospitalization in Israel, after the introduction of universal immunization against the disease. A test-negative case-control study included age-eligible children for universal RotaTeq immunization (aged 2-59 months, born in 2011-2015). Cases (n = 98) were patients who tested positive for rotavirus by immunochromatography; those who tested negative (n = 628) comprised the control group. Information on rotavirus immunization history was obtained through linkage with a national immunization registry. Vaccination status was compared between cases and controls, adjusted odds ratios (aORs) were obtained from logistic regression models, and vaccine effectiveness calculated as (1 - aOR)*100. Immunization with RotaTeq was less frequent in RVGE cases (73.5%) than in controls (90.1%), p < 0.001; this association persisted after controlling for potential confounders. Effectiveness of the complete vaccine series was estimated at 77% (95% confidence interval (CI): 49-90) in children aged 6-59 months, and 86% (95% CI: 65-94) in children aged 6-23 months; whereas for the incomplete series, the respective estimates were 72% (95% CI: 28-89) and 75% (95% CI: 30-91). Vaccine effectiveness was estimated at 79% (95% CI: 45-92) against G1P[8]-associated RVGE hospitalizations and 69% (95% CI: 11-89) against other genotype-RVGE hospitalizations. High effectiveness of RotaTeq as the sole rotavirus vaccine in a universal immunization programme was demonstrated in a high-income country. Although partial vaccination conferred protection, completing the vaccine series is warranted to maximize the benefit. Copyright © 2017 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
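For orientation, the effectiveness formula can be checked against the crude (unadjusted) proportions reported in the abstract; the study's 72-86% figures come from logistic regression models with confounder adjustment, so the crude value below is only indicative.

# Back-of-the-envelope check using the proportions reported in the abstract
# (crude, unadjusted; the paper's estimates are adjusted odds ratios, so exact
# agreement is not expected).
cases_vacc, cases_total = round(0.735 * 98), 98       # 73.5% of 98 RVGE cases vaccinated
ctrls_vacc, ctrls_total = round(0.901 * 628), 628     # 90.1% of 628 controls vaccinated

odds_cases = cases_vacc / (cases_total - cases_vacc)
odds_ctrls = ctrls_vacc / (ctrls_total - ctrls_vacc)
crude_or = odds_cases / odds_ctrls
ve = (1 - crude_or) * 100                             # VE = (1 - OR) * 100
print(f"crude OR = {crude_or:.2f}, crude VE ≈ {ve:.0f}%")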
Single Nucleotide Polymorphisms Predict Symptom Severity of Autism Spectrum Disorder
ERIC Educational Resources Information Center
Jiao, Yun; Chen, Rong; Ke, Xiaoyan; Cheng, Lu; Chu, Kangkang; Lu, Zuhong; Herskovits, Edward H.
2012-01-01
Autism is widely believed to be a heterogeneous disorder; diagnosis is currently based solely on clinical criteria, although genetic, as well as environmental, influences are thought to be prominent factors in the etiology of most forms of autism. Our goal is to determine whether a predictive model based on single-nucleotide polymorphisms (SNPs)…
13 CFR 124.513 - Under what circumstances can a joint venture be awarded an 8(a) contract?
Code of Federal Regulations, 2010 CFR
2010-01-01
...; and (ii)(A) For a procurement having a revenue-based size standard, the procurement exceeds half the... an employee-based size standard, the procurement exceeds $10 million; (2) For sole source and... the purpose of performing one or more specific 8(a) contracts. (2) A joint venture agreement is...
Code of Federal Regulations, 2010 CFR
2010-07-01
... current employee or prospective employee based solely on the analysis of a polygraph test chart or the refusal to take a polygraph test. (b) Analysis of a polygraph test chart or refusal to take a polygraph..., job performance, etc. may be used as a basis for employment decisions. Employment decisions based on...
Literature-Based Reading Series for Grades K-3: Are They Truly Literature?
ERIC Educational Resources Information Center
Donoghue, Mildred R.
In its current adoptions of reading series for the elementary grades, California has chosen literature-based programs rather than the more traditional basal readers. A study investigated whether the contents of such readers for grades K-3 truly consist solely of literary selections, since it is well-recognized that quality books carry heavy…
ERIC Educational Resources Information Center
Zunz, Sharyn J.; Ferguson, Nancy L.; Senter, Meredith
2005-01-01
School based efforts need to embrace a continuum of care model that moves beyond solely primary prevention to address the needs of students who are identified as substance dependent. The extent of this problem and barriers to program implementation are presented. Efforts to offer services through Student/Assistance Programs, School-based…
ERIC Educational Resources Information Center
Singh, Nirbhay N.; Lancioni, Giulio E.; Singh, Angela D. A.; Winton, Alan S. W.; Singh, Ashvind N. A.; Singh, Judy
2011-01-01
Children and adolescents with Asperger syndrome occasionally exhibit aggressive behavior against peers and parents. In a multiple baseline design across subjects, three adolescents with Asperger syndrome were taught to use a mindfulness-based procedure called "Meditation on the Soles of the Feet" to control their physical aggression in the family…
ERIC Educational Resources Information Center
Green, Terrance L.
2017-01-01
Purpose: To equitably transform urban schools of color and the neighborhoods where they are nested requires approaches that promote community equity and foster solidarity among a range of stakeholders. However, most school-community approaches solely focus on improving school-based outcomes and leave educational leaders with little guidance for…
Filling the Gap: The Use of Intentional and Incidental Need-Meeting Financial Aid
ERIC Educational Resources Information Center
Cheslock, John J.; Hughes, Rodney P.; Cardelle, Rachel Frick; Heller, Donald E.
2018-01-01
When measuring institutional aid awards that address financial need, some researchers count all awards distributed based upon need-based criteria while other researchers count any awards that meet need. The sole use of either measure will omit key information, so we present two new measures--intentional and incidental need-meeting aid--that can be…
Estimating annualized earthquake losses for the conterminous United States
Jaiswal, Kishor S.; Bausch, Douglas; Chen, Rui; Bouabid, Jawhar; Seligson, Hope
2015-01-01
We make use of the most recent National Seismic Hazard Maps (the years 2008 and 2014 cycles), updated census data on population, and economic exposure estimates of general building stock to quantify annualized earthquake loss (AEL) for the conterminous United States. The AEL analyses were performed using the Federal Emergency Management Agency's (FEMA) Hazus software, which facilitated a systematic comparison of the influence of the 2014 National Seismic Hazard Maps in terms of annualized loss estimates in different parts of the country. The losses from an individual earthquake could easily exceed many tens of billions of dollars, and the long-term averaged value of losses from all earthquakes within the conterminous U.S. has been estimated to be a few billion dollars per year. This study estimated nationwide losses to be approximately $4.5 billion per year (in 2012$), roughly 80% of which can be attributed to the States of California, Oregon and Washington. We document the change in estimated AELs arising solely from the change in the assumed hazard map. The change from the 2008 map to the 2014 map results in a 10 to 20% reduction in AELs for the highly seismic States of the Western United States, whereas the reduction is even more significant for Central and Eastern United States.
Manning, D P; Jones, C
2001-04-01
Research over a period of about 18 years has shown that a microcellular polyurethane known as AP66033 is the most slip-resistant safety footwear soling material on oily and wet surfaces. In recent years it has been replaced in commercially available footwear by a dual density polyurethane (DDP) which has a dense outer layer and a soft microcellular backing. This research programme has compared the slip resistance of AP66033 with DDP and some rubber solings. In addition, data were obtained on the effects of soling and floor roughness, and floor polish on slip resistance. Some data were also obtained for walking on ice. The coefficient of friction (CoF) of the solings was measured on 19 water wet surfaces in three conditions: (I) when the solings were new, (II) following abrasion to create maximum roughness and (III) after polishing. The CoF was measured on four oily surfaces after each of 11 abrasion or polishing treatments. The profound effects of the roughening of all soles and of floor roughness on the CoF were demonstrated for both wet and oily surfaces. The superior slip resistance of AP66033 was confirmed for oily and wet conditions; however, some rubbers not suitable for safety footwear achieved higher CoF values on wet floors. All of the floor polishes reduced the CoF of all floors when contaminated with water. The mean CoF of DDP solings was lower than the mean for AP66033 on wet and oily surfaces. No safety footwear soling provided adequate grip on dry ice and the CoF was reduced by water on the ice. A rubber used for rock climbing footwear was one of the most slip-resistant solings on wet surfaces in the laboratory but recorded the lowest CoF on ice. It is concluded that the incidence of occupational injuries caused by slipping could be reduced by the following: (A) returning to safety footwear soled with the microcellular polyurethane AP66033; (B) abrading all new and smooth footwear solings with a belt sanding machine coated with P100 grit; (C) avoiding the use of floor polish; (D) informing the general public about the poor slip resistance of ordinary footwear on ice and the lowering of slip resistance in cold weather.
NASA Astrophysics Data System (ADS)
Leakey, Chris D. B.; Attrill, Martin J.; Fitzsimons, Mark F.
2009-04-01
Estuaries are regarded as valuable nursery habitats for many commercially important marine fishes, potentially providing a thermal resource, refuge from predators and a source of abundant prey. To assess the extent of estuarine use by juvenile (0+) common sole (Solea solea), whiting (Merlangius merlangus) and European seabass (Dicentrarchus labrax) we: (1) developed techniques to distinguish between estuarine and coastally-caught juveniles using otolith chemistry; and (2) examined the accuracy with which multi-elemental signatures could re-classify juveniles to their region of collection. High-resolution solution-based inductively coupled plasma mass spectrometry (HR-SB-ICPMS) was used to quantify 32 elements within the juvenile otoliths; 14 elements occurred above detection limits for all samples. Some elemental distributions demonstrated clear differences between estuarine and coastally-caught fish. Multivariate analysis of the otolith chemistry data resulted in 95-100% re-classification accuracy to the region of collection. Estuarine and coastal signatures were most clearly defined for sole which, compared to bass and whiting, have low mobility and are less likely to move from estuarine to coastal habitats between larval settlement and later migration to adult stocks. Sole were the only species to reveal an energetic benefit associated with an estuarine juvenile phase. The physiological ability of bass to access upper estuarine regions was consistent with some elemental data, while the high mobility and restricted range of whiting resulted in less distinct otolith chemistries.
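The re-classification step can be illustrated with a short sketch using synthetic element ratios; linear discriminant analysis is chosen here only as a generic multivariate classifier and is not necessarily the study's method.

# Re-classification of individuals to their region of collection from otolith
# chemistry (synthetic data; element ratios and group means are invented).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(42)
n = 40
# Two of the detectable elements, e.g. hypothetical Sr/Ca and Ba/Ca ratios
estuarine = np.column_stack([rng.normal(2.0, 0.3, n), rng.normal(6.0, 1.0, n)])
coastal = np.column_stack([rng.normal(3.2, 0.3, n), rng.normal(2.5, 1.0, n)])
X = np.vstack([estuarine, coastal])
y = np.array(["estuary"] * n + ["coast"] * n)

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print(f"leave-one-out re-classification accuracy: {acc.mean():.0%}")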
Ibarra-Zatarain, Z; Fatsini, E; Rey, S; Chereguini, O; Martin, I; Rasines, I; Alcaraz, C; Duncan, N
2016-11-01
The aim of this work was to characterize stress coping styles of Senegalese sole ( Solea senegalensis ) juveniles and breeders and to select an operational behavioural screening test (OBST) that can be used by the aquaculture industry to classify and select between behavioural phenotypes in order to improve production indicators. A total of 61 juveniles and 59 breeders were subjected to five individual behavioural tests and two grouping tests. At the end of the individual tests, all animals were blood sampled in order to measure cortisol, glucose and lactate. Three tests (restraining, new environment and confinement) characterized the stress coping style behaviour of Senegalese sole juveniles and breeders and demonstrated inter-individual consistency. Further, the tests when incorporated into a principal components analysis (PCA) (i) identified two principal axes of personality traits: 'fearfulness-reactivity' and 'activity-exploration', (ii) were representative of the physiological axis of stress coping style, and (iii) were validated by established group tests. This study proposed for the first time three individual coping style tests that reliably represented proactive and reactive personalities of Senegalese sole juveniles and breeders. In addition, the three proposed tests met some basic operational criteria (rapid testing, no special equipment and easy to apply and interpret) that could prove attractive for fish farmers to identify fish with a specific behaviour that gives advantages in the culture system and that could be used to establish selection-based breeding programmes to improve domestication and production.
Fatsini, E.; Rey, S.; Chereguini, O.; Martin, I.; Rasines, I.; Duncan, N.
2016-01-01
The aim of this work was to characterize stress coping styles of Senegalese sole (Solea senegalensis) juveniles and breeders and to select an operational behavioural screening test (OBST) that can be used by the aquaculture industry to classify and select between behavioural phenotypes in order to improve production indicators. A total of 61 juveniles and 59 breeders were subjected to five individual behavioural tests and two grouping tests. At the end of the individual tests, all animals were blood sampled in order to measure cortisol, glucose and lactate. Three tests (restraining, new environment and confinement) characterized the stress coping style behaviour of Senegalese sole juveniles and breeders and demonstrated inter-individual consistency. Further, the tests when incorporated into a principal components analysis (PCA) (i) identified two principal axes of personality traits: ‘fearfulness-reactivity’ and ‘activity-exploration’, (ii) were representative of the physiological axis of stress coping style, and (iii) were validated by established group tests. This study proposed for the first time three individual coping style tests that reliably represented proactive and reactive personalities of Senegalese sole juveniles and breeders. In addition, the three proposed tests met some basic operational criteria (rapid testing, no special equipment and easy to apply and interpret) that could prove attractive for fish farmers to identify fish with a specific behaviour that gives advantages in the culture system and that could be used to establish selection-based breeding programmes to improve domestication and production. PMID:28018634
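For illustration only, the kind of dimensionality reduction described above can be sketched as follows; the test scores and loadings are simulated and the two axes are recovered by construction, so this does not reproduce the study's PCA.

# Toy PCA on made-up behavioural scores (restraining, new environment,
# confinement tests); loadings and labels are assumptions for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n_fish = 120
latent_reactivity = rng.normal(size=n_fish)   # hypothetical coping-style axis
latent_activity = rng.normal(size=n_fish)     # hypothetical activity-exploration axis
scores = np.column_stack([
    latent_reactivity + 0.3 * rng.normal(size=n_fish),   # restraining test
    latent_reactivity + 0.3 * rng.normal(size=n_fish),   # new environment test
    latent_activity + 0.3 * rng.normal(size=n_fish),     # confinement test
])

pca = PCA(n_components=2)
components = pca.fit_transform(StandardScaler().fit_transform(scores))
print("variance explained by the two axes:", np.round(pca.explained_variance_ratio_, 2))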
Quantitative assessment of the equine hoof using digital radiography and magnetic resonance imaging.
Grundmann, I N M; Drost, W T; Zekas, L J; Belknap, J K; Garabed, R B; Weisbrode, S E; Parks, A H; Knopp, M V; Maierl, J
2015-09-01
Evaluation of laminitis cases relies on radiographic measurements of the equine foot. Reference values have not been established for all layers of the foot. To establish normal hoof wall and sole measurements using digital radiography (DR) and magnetic resonance imaging (MRI) and to document tissue components present in the dorsal hoof wall and solar layers seen on DR. Prospective observational case-control study. Digital radiography and MRI were performed on 50 cadaver front feet from 25 horses subjected to euthanasia for nonlameness-related reasons. Four observers measured hoof wall (dorsal, lateral and medial) and sole thickness (sagittal, lateral and medial) using DR and magnetic resonance images. One observer repeated the measurements 3 times. Inter- and intraobserver correlation was assessed. Digital radiography and MRI measurements for the normal hoof wall and sole were established. Inter- and intraobserver pairwise Pearson's correlation for DR (r>0.98) and MRI measurements (r>0.99) was excellent. Based on MRI, the less radiopaque layer on DR is comprised of the stratum lamellatum and stratum reticulare. Normal DR and MRI measurements for the hoof wall and sole were established. On DR images, the less radiopaque layer of the foot observed corresponds to the critical tissues injured in laminitis, the strata lamellatum and reticulare. These reference measurements may be used by the clinician to detect soft-tissue changes in the laminitic equine foot and provide a foundation for future research determining changes in these measurements in horses with laminitis. © 2014 EVJ Ltd.
EPA Region 1 Sole Source Aquifers
This coverage contains boundaries of EPA-approved sole source aquifers. A sole source aquifer is defined as an aquifer designated as the sole or principal source of drinking water for a given aquifer service area; that is, an aquifer which is needed to supply 50% or more of the drinking water for the area and for which there are no reasonable alternative sources should the aquifer become contaminated. The aquifers were defined by an EPA hydrogeologist. Aquifer boundaries were then drafted by EPA onto 1:24000 USGS quadrangles. For the coastal sole source aquifers, the shoreline as it appeared on the quadrangle was used as a boundary. Delineated boundaries were then digitized into ARC/INFO.
ERIC Educational Resources Information Center
Farley, Dean E.
A study examined the treatment of sole community hospitals under the Tax Equity and Fiscal Responsibility Act of 1982 (TEFRA) and the Prospective Payment System (PPS) for Medicare as compared to the treatment of hospitals not designated as sole community hospitals under these same two policy guidelines. (A sole community hospital is defined as a…
NASA Astrophysics Data System (ADS)
Abookire, Alisa A.; Bailey, Kevin M.
2007-02-01
Dover sole ( Microstomus pacificus) and rex sole ( Glyptocephalus zachirus) are both commercially valuable, long-lived pleuronectids that are distributed widely throughout the North Pacific. While their ecology and life cycle have been described for southern stocks, few investigations have focused on these species at higher latitudes. We synthesized historical research survey data among critical developmental stages to determine the distribution of life cycle stages for both species in the northern Gulf of Alaska (GOA). Bottom trawl survey data from 1953 to 2004 (25 519 trawls) were used to characterize adult distribution during the non-spawning and spawning seasons, ichthyoplankton data from 1972 to 2003 (10 776 tows) were used to determine the spatial and vertical distribution of eggs and larvae, and small-meshed shrimp trawl survey data from 1972 to 2004 (6536 trawls) were used to characterize areas utilized by immature stages. During the non-spawning season, adult Dover sole and rex sole were widely distributed from the inner shelf to outer slope. While both species concentrated on the continental slope to spawn, Dover sole spawning areas were more geographically specific than rex sole. Although spawned in deep water, eggs of both species were found in surface waters near spawning areas. Dover sole larvae did not appear to have an organized migration from offshore spawning grounds toward coastal nursery areas, and our data indicated facultative settling to their juvenile habitat in winter. Rex sole larvae progressively moved cross-shelf toward shore as they grew from April to September, and larvae presumably settled in coastal nursery areas in the autumn. In contrast with studies in the southern end of their range, we found no evidence in the GOA that Dover or rex sole have pelagic larval stages longer than nine months; however, more sampling for large larvae is needed in winter offshore of the continental shelf as well as sampling for newly settled larvae over the shelf to verify an abbreviated pelagic larval stage for both species at the northern end of their range.
Real-time estimation and biofeedback of single-neuron firing rates using local field potentials
Hall, Thomas M.; Nazarpour, Kianoush; Jackson, Andrew
2014-01-01
The long-term stability and low-frequency composition of local field potentials (LFPs) offer important advantages for robust and efficient neuroprostheses. However, cortical LFPs recorded by multi-electrode arrays are often assumed to contain only redundant information arising from the activity of large neuronal populations. Here we show that multichannel LFPs in monkey motor cortex each contain a slightly different mixture of distinctive slow potentials that accompany neuronal firing. As a result, the firing rates of individual neurons can be estimated with surprising accuracy. We implemented this method in a real-time biofeedback brain–machine interface, and found that monkeys could learn to modulate the activity of arbitrary neurons using feedback derived solely from LFPs. These findings provide a principled method for monitoring individual neurons without long-term recording of action potentials. PMID:25394574
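The decoding idea can be sketched in a few lines: if a slow potential proportional to a neuron's smoothed firing rate leaks into several LFP channels with different weights, a linear model fitted on a training segment recovers the rate from the LFP mixture alone. The simulation below is entirely synthetic and stands in for the paper's real-time implementation.

# Toy linear decoding of a single neuron's firing rate from multichannel LFP.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
t = 5000                                        # number of time bins
# Slowly varying "true" firing rate (smoothed spike counts)
rate = np.convolve(rng.poisson(0.2, t), np.hanning(101), mode="same")
n_ch = 16
weights = rng.normal(0.5, 0.2, n_ch)            # per-channel coupling of the slow potential
lfp = rate[:, None] * weights + rng.normal(0.0, 1.0, (t, n_ch))   # LFP = signal + noise

train, test = slice(0, 4000), slice(4000, t)
decoder = Ridge(alpha=1.0).fit(lfp[train], rate[train])
pred = decoder.predict(lfp[test])
print(f"correlation between LFP-decoded and true rate: {np.corrcoef(pred, rate[test])[0, 1]:.2f}")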
Calculating Henry’s Constants of Charged Molecules Using SPARC
SPARC (SPARC Performs Automated Reasoning in Chemistry) is a computer program designed to model physical and chemical properties of molecules based solely on their chemical structure. SPARC uses a toolbox of mechanistic perturbation models to model intermolecular interactions. SPARC has ...
48 CFR 45.000 - Scope of part.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Government has acquired a lien or title solely because of partial, advance, progress, or performance-based... Section 45.000 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT... Government property to contractors, contractors' management and use of Government property, and reporting...
5 CFR 9901.515 - Competitive examining procedures.
Code of Federal Regulations, 2010 CFR
2010-01-01
....515 Section 9901.515 Administrative Personnel DEPARTMENT OF DEFENSE HUMAN RESOURCES MANAGEMENT AND LABOR RELATIONS SYSTEMS (DEPARTMENT OF DEFENSE-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF DEFENSE... status, or other prohibited criteria, and will be based solely on job-related factors. These policies...
Further analysis of a snowfall enhancement project in the Snowy Mountains of Australia
NASA Astrophysics Data System (ADS)
Manton, Michael J.; Peace, Andrew D.; Kemsley, Karen; Kenyon, Suzanne; Speirs, Johanna C.; Warren, Loredana; Denholm, John
2017-09-01
The first phase of the Snowy Precipitation Enhancement Research Project (SPERP-1) was a confirmatory experiment on winter orographic cloud seeding (Manton et al., 2011). Analysis of the data (Manton and Warren, 2011) found that a statistically significant impact of seeding could be obtained by removing any 5-hour experimental units (EUs) for which the amount of released seeding material was below a specified minimum. Analysis of the SPERP-1 data is extended in the present work by first considering the uncertainties in the measurement of precipitation and in the methodology. It is found that the estimation of the natural precipitation in the target area, based solely on the precipitation in the designated control area, is a significant source of uncertainty. A systematic search for optimal predictors shows that both the Froude number of the low-level flow across the mountains and the control precipitation should be used to estimate the natural precipitation. Applying the optimal predictors for the natural precipitation, statistically significant impacts are found using all EUs. This approach also supports a novel analysis of the sensitivity of seeding impacts to environmental variables, such as wind speed and cloud top temperature. The spatial distribution of seeding impact across the target is investigated. Building on the results of SPERP-1, phase 2 of the experiment (SPERP-2) ran from 2010 to 2013 with the target area extended to the north along the mountain ridges. Using the revised methodology, the seeding impacts in SPERP-2 are found to be consistent with those in SPERP-1, provided that the natural precipitation is estimated accurately.
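A schematic of the two-predictor estimation (synthetic numbers only): fit natural target-area precipitation on control-area precipitation and the Froude number using unseeded experimental units, then express the seeding impact as the ratio of observed to predicted precipitation in the seeded units.

# Illustrative estimation of seeding impact with two predictors of natural
# precipitation; all values, including the assumed 10% effect, are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 100
control = rng.gamma(2.0, 2.0, n)                  # control-area precipitation (mm / 5 h)
froude = rng.uniform(0.3, 1.5, n)                 # low-level Froude number
natural = 0.8 * control + 2.0 * froude + rng.normal(0.0, 0.5, n)
seeded = rng.random(n) < 0.5                      # randomized experimental units
target = natural * np.where(seeded, 1.1, 1.0)     # assume a 10% seeding effect

X = np.column_stack([control, froude])
model = LinearRegression().fit(X[~seeded], target[~seeded])
impact = target[seeded].sum() / model.predict(X[seeded]).sum()
print(f"estimated seeding impact: {(impact - 1) * 100:.1f}%")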
1996-01-01
We developed and evaluated a total toxic units modeling approach for predicting mean toxicity as measured in laboratory tests for Great Lakes sediments containing complex mixtures of environmental contaminants (e.g., polychlorinated biphenyls, polycyclic aromatic hydrocarbons, pesticides, chlorinated dioxins, and metals). The approach incorporates equilibrium partitioning and organic carbon control of bioavailability for organic contaminants and acid volatile sulfide (AVS) control for metals, and includes toxic equivalency for planar organic chemicals. A toxic unit is defined as the ratio of the estimated pore-water concentration of a contaminant to the chronic toxicity of that contaminant, as estimated by U.S. Environmental Protection Agency Ambient Water Quality Criteria (AWQC). The toxic unit models we developed assume complete additivity of contaminant effects, are completely mechanistic in form, and were evaluated without any a posteriori modification of either the models or the data from which the models were developed and against which they were tested. A linear relationship between total toxic units, which included toxicity attributable to both iron and un-ionized ammonia, accounted for about 88% of observed variability in mean toxicity; a quadratic relationship accounted for almost 94%. Exclusion of either bioavailability components (i.e., equilibrium partitioning control of organic contaminants and AVS control of metals) or iron from the model substantially decreased its ability to predict mean toxicity. A model based solely on un-ionized ammonia accounted for about 47% of the variability in mean toxicity. We found the toxic unit approach to be a viable method for assessing and ranking the relative potential toxicity of contaminated sediments.
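The bookkeeping behind the approach, and the linear versus quadratic fits, can be sketched as follows; the criteria, pore-water concentrations and mean toxicity values are invented for illustration.

# Total toxic units: TU_i = estimated pore-water concentration / chronic AWQC,
# summed over contaminants, then regressed against mean toxicity.
import numpy as np

awqc = {"PCB": 0.014, "PAH": 3.0, "Cd": 0.25}        # assumed chronic criteria, ug/L
porewater = [                                        # estimated pore-water conc., ug/L
    {"PCB": 0.01, "PAH": 0.8, "Cd": 0.05},
    {"PCB": 0.03, "PAH": 2.0, "Cd": 0.15},
    {"PCB": 0.08, "PAH": 5.0, "Cd": 0.40},
    {"PCB": 0.15, "PAH": 9.0, "Cd": 0.80},
    {"PCB": 0.30, "PAH": 15.0, "Cd": 1.50},
]
total_tu = np.array([sum(c[k] / awqc[k] for k in awqc) for c in porewater])
mean_tox = np.array([5.0, 15.0, 38.0, 60.0, 82.0])   # invented mean toxicity (%)

for degree in (1, 2):                                # linear and quadratic models
    coeffs = np.polyfit(total_tu, mean_tox, degree)
    pred = np.polyval(coeffs, total_tu)
    ss_res = np.sum((mean_tox - pred) ** 2)
    ss_tot = np.sum((mean_tox - mean_tox.mean()) ** 2)
    print(f"degree {degree}: R^2 = {1 - ss_res / ss_tot:.3f}")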
Malchiodi, F; Koeck, A; Mason, S; Christen, A M; Kelton, D F; Schenkel, F S; Miglior, F
2017-04-01
A national genetic evaluation program for hoof health could be achieved by using hoof lesion data collected directly by hoof trimmers. However, not all cows in the herds during the trimming period are always presented to the hoof trimmer. This preselection process may not be completely random, leading to erroneous estimations of the prevalence of hoof lesions in the herd and inaccuracies in the genetic evaluation. The main objective of this study was to estimate genetic parameters for individual hoof lesions in Canadian Holsteins by using an alternative cohort to consider all cows in the herd during the period of the hoof trimming sessions, including those that were not examined by the trimmer over the entire lactation. A second objective was to compare the estimated heritabilities and breeding values for resistance to hoof lesions obtained with threshold and linear models. Data were recorded by 23 hoof trimmers serving 521 herds located in Alberta, British Columbia, and Ontario. A total of 73,559 hoof-trimming records from 53,654 cows were collected between 2009 and 2012. Hoof lesions included in the analysis were digital dermatitis, interdigital dermatitis, interdigital hyperplasia, sole hemorrhage, sole ulcer, toe ulcer, and white line disease. All variables were analyzed as binary traits, as the presence or the absence of the lesions, using a threshold and a linear animal model. Two different cohorts were created: Cohort 1, which included only cows presented to hoof trimmers, and Cohort 2, which included all cows present in the herd at the time of hoof trimmer visit. Using a threshold model, heritabilities on the observed scale ranged from 0.01 to 0.08 for Cohort 1 and from 0.01 to 0.06 for Cohort 2. Heritabilities estimated with the linear model ranged from 0.01 to 0.07 for Cohort 1 and from 0.01 to 0.05 for Cohort 2. Despite a low heritability, the distribution of the sire breeding values showed large and exploitable variation among sires. Higher breeding values for hoof lesion resistance corresponded to sires with a higher prevalence of healthy daughters. The rank correlations between estimated breeding values ranged from 0.96 to 0.99 when predicted using either one of the 2 cohorts and from 0.94 to 0.99 when predicted using either a threshold or a linear model. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Kwicklis, Edward M.; Wolfsberg, Andrew V.; Stauffer, Philip H.; Walvoord, Michelle Ann; Sully, Michael J.
2006-01-01
Multiphase, multicomponent numerical models of long-term unsaturated-zone liquid and vapor movement were created for a thick alluvial basin at the Nevada Test Site to predict present-day liquid and vapor fluxes. The numerical models are based on recently developed conceptual models of unsaturated-zone moisture movement in thick alluvium that explain present-day water potential and tracer profiles in terms of major climate and vegetation transitions that have occurred during the past 10 000 yr or more. The numerical models were calibrated using borehole hydrologic and environmental tracer data available from a low-level radioactive waste management site located in a former nuclear weapons testing area. The environmental tracer data used in the model calibration includes tracers that migrate in both the liquid and vapor phases (δD, δ18O) and tracers that migrate solely as dissolved solutes (Cl), thus enabling the estimation of some gas-phase as well as liquid-phase transport parameters. Parameter uncertainties and correlations identified during model calibration were used to generate parameter combinations for a set of Monte Carlo simulations to more fully characterize the uncertainty in liquid and vapor fluxes. The calculated background liquid and vapor fluxes decrease as the estimated time since the transition to the present-day arid climate increases. However, on the whole, the estimated fluxes display relatively little variability because correlations among parameters tend to create parameter sets for which changes in some parameters offset the effects of others in the set. Independent estimates on the timing since the climate transition established from packrat midden data were essential for constraining the model calibration results. The study demonstrates the utility of environmental tracer data in developing numerical models of liquid- and gas-phase moisture movement and the importance of considering parameter correlations when using Monte Carlo analysis to characterize the uncertainty in moisture fluxes. © Soil Science Society of America.
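The role of parameter correlation can be illustrated with a toy Monte Carlo (all values invented): drawing negatively correlated parameters from a multivariate normal, as opposed to independent draws with the same variances, narrows the spread of a flux that depends on a compensating combination of the two.

# Toy Monte Carlo with correlated vs. independent parameter draws; the "flux"
# function is a stand-in for the unsaturated-zone model, not the real code.
import numpy as np

rng = np.random.default_rng(1)
mean = np.array([-1.0, 0.5])                   # e.g. log10(K_sat) and a shape parameter
cov_corr = np.array([[0.04, -0.018],           # correlation of about -0.9 from calibration
                     [-0.018, 0.01]])
cov_indep = np.diag(np.diag(cov_corr))         # same variances, no correlation

def flux(params):
    log_ksat, shape = params[:, 0], params[:, 1]
    return 10.0 ** (log_ksat + 2.0 * shape)    # compensating combination of parameters

for label, cov in [("correlated", cov_corr), ("independent", cov_indep)]:
    draws = rng.multivariate_normal(mean, cov, size=5000)
    q = flux(draws)
    q25, q75 = np.percentile(q, [25, 75])
    print(f"{label}: median flux {np.median(q):.2f}, IQR {q75 - q25:.2f}")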
Schuller, P; Walling, D E; Sepúlveda, A; Trumper, R E; Rouanet, J L; Pino, I; Castillo, A
2004-05-01
Intensification of agricultural production in south-central Chile since the 1970s has caused problems of increased soil erosion and associated soil degradation. These problems have prompted a shift from conventional tillage to no-till management practices. Faced with the need to establish the impact of this shift in soil management on rates of soil loss, the use of caesium-137 (137Cs) measurements has been explored. A novel procedure for using measurements of the 137Cs depth distribution to estimate rates of soil loss at a sampling point under the original conventional tillage and after the shift to no-till management has been developed. This procedure has been successfully applied to a study site at Buenos Aires farm near Carahue in the 9th region of Chile. The results obtained indicate that the shift from conventional tillage to no-till management has caused net rates of soil loss to decrease to about 40% of those existing under conventional tillage. This assessment of the impact of introducing no-till management at the study site must, however, be seen as provisional, since only a limited number of sampling points were used. A simplified procedure aimed at documenting the reduction in erosion rates at additional sampling points, based solely on measurements of the 137Cs inventory of bulk cores and the 137Cs activity in the upper part of the soil has been developed and successfully tested at the study site. Previous application of 137Cs measurements to estimate erosion rates has been limited to estimation of medium-term erosion rates during the period extending from the beginning of fallout receipt to the time of sampling. The procedures described in this paper, which permits estimation of the change in erosion rates associated with a shift in land management practices, must be seen as representing a novel application of 137Cs measurements in soil erosion investigations.
21 CFR 558.62 - Arsanilic acid.
Code of Federal Regulations, 2011 CFR
2011-04-01
... efficiency; improving pigmentation Withdraw 5 days before slaughter; as sole source of organic arsenic 015565... pigmentation. As erythromycin thiocyanate; withdraw 5 days before slaughter; as sole source of organic arsenic... pigmentation As erythromycin thiocyanate; withdraw 5 days before slaughter; as sole source of organic arsenic...
Micro Electro-Mechanical System (MEMS) Pressure Sensor for Footwear
Kholwadwala, Deepesh K.; Rohrer, Brandon R.; Spletzer, Barry L.; Galambos, Paul C.; Wheeler, Jason W.; Hobart, Clinton G.; Givler, Richard C.
2008-09-23
Footwear comprises a sole and a plurality of sealed cavities contained within the sole. The sealed cavities can be incorporated as deformable containers within an elastic medium, comprising the sole. A plurality of micro electro-mechanical system (MEMS) pressure sensors are respectively contained within the sealed cavity plurality, and can be adapted to measure static and dynamic pressure within each of the sealed cavities. The pressure measurements can provide information relating to the contact pressure distribution between the sole of the footwear and the wearer's environment.
Habitat Suitability Index Models: Juvenile English sole
Toole, Christopher L.; Barnhart, Roger A.; Onuf, Christopher P.
1987-01-01
English sole (Parophrys vetulus) is one of the major commercial groundfish species caught along the Pacific coast. Landings in the United States and Canada averaged 4,947 t/yr between 1975 and 1984, placing it third in importance among flatfish caught by Pacific coast trawlers (Pacific Marine Fisheries Commission 1985). Juvenile English sole are also among the most abundant fishes in many bays and estuaries along the Pacific (Westrheim 1955; Sopher 1974; Ambrose 1976; Rogers 1985). The English sole is not an important recreational species.
2007-06-15
the base-case, a series analysis can be performed by varying the various inputs to the network to examine the impact of potential changes to improve...successfully interrogated was the primary MOE. • Based solely on the cost benefit analysis, the RSTG found that the addition of an Unmanned Surface...cargo. The CBP uses a risk-based analysis and intelligence to pre-screen, assess and examine 100% of suspicious containers. The remaining cargo is
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varanasi, U.; Reichert, W.L.; Stein, J.E.
The 1-butanol adduct enhancement version of the 32P-postlabeling assay was used to measure the levels of hepatic DNA adducts in the marine flatfish, English sole (Parophrys vetulus), sampled from the Duwamish Waterway and Eagle Harbor, Puget Sound, WA, where they are exposed to high concentrations of sediment-associated chemical contaminants and exhibit an elevated prevalence of hepatic neoplasms. Hepatic DNA was also analyzed from English sole from a reference area (Useless Bay, WA) and from reference English sole treated with organic-solvent extracts of sediments from the two contaminated sites. Autoradiograms of thin-layer chromatograms of 32P-labeled hepatic DNA digests from English sole from the contaminated sites exhibited up to three diagonal radioactive zones, which were not present in autoradiograms of thin-layer chromatogram maps of 32P-labeled DNA digests from English sole from the reference site. These diagonal radioactive zones contained several distinct spots as well as what appeared to be multiple overlapping adduct spots. The levels (nmol of adducts/mol of nucleotides) of total DNA adducts for English sole from Duwamish Waterway and Eagle Harbor were 26 +/- 28 (DS) and 17 +/- 9.6, respectively. All autoradiograms of DNA from fish from the contaminated sites exhibited a diagonal radioactive zone where DNA adducts of chrysene, benzo(a)pyrene, and dibenz(a,h)anthracene, formed in vitro using English sole hepatic microsomes, were shown to chromatograph. English sole treated with extracts of the contaminated sediments had adduct profiles generally similar to those for English sole from the respective contaminated sites.
Yuki, Fuchigami; Rie, Ikeda; Miki, Kuzushima; Mitsuhiro, Wada; Naotaka, Kuroda; Kenichiro, Nakashima
2013-04-11
3,4-Methylenedioxymethamphetamine (MDMA) and methamphetamine often cause serious adverse effects (e.g., rhabdomyolysis and cardiac disease) following hyperthermia triggered by release of brain monoamines such as dopamine and serotonin. Therefore, evaluation of brain monoamine concentrations is useful to predict these drugs' risks in humans. This study aimed to evaluate risks of co-administration of MDMA and methamphetamine, both of which are abused frequently in Japan, based on drug distribution and monoamine levels in the rat brain. Rats were allocated to three groups: (1) sole MDMA administration (12 or 25 mg/kg, intraperitoneally), (2) sole methamphetamine administration (10 mg/kg, intraperitoneally) and (3) co-administration of MDMA (12 mg/kg, intraperitoneally) and methamphetamine (10 mg/kg, intraperitoneally). We monitored pharmacokinetic and pharmacodynamic variables for drugs and monoamines in the rat brain. The area under the concentration-time curve up to 600 min from drug administration (AUC₀₋₆₀₀) increased from 348.0 to 689.8 μg·min/L for MDMA and from 29.9 to 243.4 μM·min for dopamine in response to co-administration of methamphetamine and MDMA compared to sole MDMA (12 mg/kg) administration. After administration of methamphetamine alone or together with MDMA, the AUC₀₋₆₀₀ of methamphetamine was 401.8 and 671.1 μg·min/L, respectively, and the AUC₀₋₆₀₀ of dopamine was 159.9 and 243.4 μM·min. In conclusion, the brain had greater exposure to MDMA, methamphetamine and dopamine after co-administration of MDMA and methamphetamine than when these two drugs were given alone. This suggests that co-administration of MDMA with methamphetamine confers greater risk than sole administration and that adverse events of MDMA ingestion may increase when methamphetamine is co-administered. Copyright © 2013 Elsevier B.V. All rights reserved.
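For reference, AUC₀₋₆₀₀ is simply the area under the concentration-time curve over the first 600 min, usually obtained by the trapezoidal rule; the sketch below uses invented sampling times and concentrations, not the study's measurements.

# AUC(0-600) by the trapezoidal rule on hypothetical concentration-time data.
import numpy as np

t = np.array([0, 15, 30, 60, 120, 240, 360, 600])        # min after dosing
c = np.array([0.0, 0.4, 0.9, 1.4, 1.6, 1.1, 0.6, 0.2])   # brain drug conc., ug/L (made up)
auc = np.trapz(c, t)                                      # ug*min/L
print(f"AUC(0-600) ≈ {auc:.0f} ug*min/L")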
van Elk, Michiel; Matzke, Dora; Gronau, Quentin F.; Guan, Maime; Vandekerckhove, Joachim; Wagenmakers, Eric-Jan
2015-01-01
According to a recent meta-analysis, religious priming has a positive effect on prosocial behavior (Shariff et al., 2015). We first argue that this meta-analysis suffers from a number of methodological shortcomings that limit the conclusions that can be drawn about the potential benefits of religious priming. Next we present a re-analysis of the religious priming data using two different meta-analytic techniques. A Precision-Effect Testing–Precision-Effect-Estimate with Standard Error (PET-PEESE) meta-analysis suggests that the effect of religious priming is driven solely by publication bias. In contrast, an analysis using Bayesian bias correction suggests the presence of a religious priming effect, even after controlling for publication bias. These contradictory statistical results demonstrate that meta-analytic techniques alone may not be sufficiently robust to firmly establish the presence or absence of an effect. We argue that a conclusive resolution of the debate about the effect of religious priming on prosocial behavior – and about theoretically disputed effects more generally – requires a large-scale, preregistered replication project, which we consider to be the sole remedy for the adverse effects of experimenter bias and publication bias. PMID:26441741
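A minimal PET-PEESE sketch on simulated study-level data (not the religious-priming dataset): effect sizes are regressed on their standard errors (PET) or squared standard errors (PEESE) with inverse-variance weights, and the intercept is read as the publication-bias-adjusted effect.

# Simulate studies with a true effect of zero plus publication bias, then
# estimate the bias-adjusted effect as the PET and PEESE intercepts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
studies = []
while len(studies) < 40:
    n = rng.integers(20, 200)
    se = 2.0 / np.sqrt(n)                    # rough SE of a standardized effect
    d = rng.normal(0.0, se)                  # true effect = 0
    if d / se > 1.65 or rng.random() < 0.2:  # "significant" results published more often
        studies.append((d, se))
d, se = map(np.array, zip(*studies))

for name, predictor in [("PET", se), ("PEESE", se ** 2)]:
    fit = sm.WLS(d, sm.add_constant(predictor), weights=1.0 / se ** 2).fit()
    print(f"{name} intercept (bias-adjusted effect): {fit.params[0]:.3f}")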
The cost of different types of lameness in dairy cows calculated by dynamic programming.
Cha, E; Hertl, J A; Bar, D; Gröhn, Y T
2010-10-01
Traditionally, studies which placed a monetary value on the effect of lameness have calculated the costs at the herd level and rarely have they been specific to different types of lameness. These costs, calculated in earlier studies, are not particularly useful for farmers in making economically optimal decisions depending on individual cow characteristics. The objective of this study was to calculate the cost of different types of lameness at the individual cow level and thereby identify the optimal management decision for each of three representative lameness diagnoses. This model would provide a more informed decision-making process in lameness management for maximal economic profitability. We made modifications to an existing dynamic optimization and simulation model, studying the effects of various factors (incidence of lameness, milk loss, pregnancy rate and treatment cost) on the cost of different types of lameness. The average costs per case (US$) of sole ulcer, digital dermatitis and foot rot were 216.07, 132.96 and 120.70, respectively. It was recommended that 97.3% of foot rot cases, 95.5% of digital dermatitis cases and 92.3% of sole ulcer cases be treated. The main contributor to the total cost per case was milk loss (38%) for sole ulcer, treatment cost (42%) for digital dermatitis and the effect of decreased fertility (50%) for foot rot. This model affords versatility as it allows for parameters such as production costs, economic values and disease frequencies to be altered. Therefore, cost estimates are the direct outcome of the farm-specific parameters entered into the model. Thus, this model can provide farmers economically optimal guidelines specific to their individual cows suffering from different types of lameness. Copyright © 2010 Elsevier B.V. All rights reserved.
Simionescu, A.; Werner, N.; Urban, O.; ...
2015-09-24
We present the first measurements of the abundances of α-elements (Mg, Si, and S) extending out beyond the virial radius of a cluster of galaxies. Our results, based on Suzaku Key Project observations of the Virgo Cluster, show that the chemical composition of the intracluster medium is consistent with being constant on large scales, with a flat distribution of the Si/Fe, S/Fe, and Mg/Fe ratios as a function of radius and azimuth out to 1.4 Mpc (1.3 r200). Chemical enrichment of the intergalactic medium due solely to core-collapse supernovae (SNcc) is excluded with very high significance; instead, the measured metal abundance ratios are generally consistent with the solar value. The uniform metal abundance ratios observed today are likely the result of an early phase of enrichment and mixing, with both SNcc and SNe Ia contributing to the metal budget during the period of peak star formation activity at redshifts of 2–3. Furthermore, we estimate the ratio between the number of SNe Ia and the total number of supernovae enriching the intergalactic medium to be between 12% and 37%, broadly consistent with the metal abundance patterns in our own Galaxy or with the SN Ia contribution estimated for the cluster cores.
Reproductive biology and feeding habits of the prickly dogfish Oxynotus bruniensis.
Finucci, B; Bustamante, C; Jones, E G; Dunn, M R
2016-11-01
The reproductive biology and diet of prickly dogfish Oxynotus bruniensis, a deep-sea elasmobranch, endemic to the outer continental and insular shelves of southern Australia and New Zealand, and caught as by-catch in demersal fisheries, are described from specimens caught in New Zealand waters. A total of 53 specimens were obtained from research surveys and commercial fisheries, including juveniles and adults ranging in size from 33.5 to 75.6 cm total length (LT). Estimated size-at-maturity was 54.7 cm LT in males and 64.0 cm LT in females. Three gravid females (65.0, 67.5 and 71.2 cm LT) were observed, all with eight embryos. Size-at-birth was estimated to be 25-27 cm LT. Vitellogenesis was not concurrent with embryo development. Analysis of diet from stomach contents, including DNA identification of prey using the mitochondrial genes cox1 and nadh2, revealed that O. bruniensis preys exclusively on the egg capsules of holocephalans, potentially making it the only known elasmobranch with a diet reliant solely upon other chondrichthyans. Based on spatial overlap with deep-sea fisheries, a highly specialized diet, and reproductive characteristics representative of a low productivity fish, the commercial fisheries by-catch of O. bruniensis may put this species at relatively high risk of overfishing. © 2016 The Fisheries Society of the British Isles.
Finarelli, John A; Goswami, Anjali
2013-12-01
Reconstructing evolutionary patterns and their underlying processes is a central goal in biology. Yet many analyses of deep evolutionary histories assume that data from the fossil record is too incomplete to include, and rely solely on databases of extant taxa. Excluding fossil taxa assumes that character state distributions across living taxa are faithful representations of a clade's entire evolutionary history. Many factors can make this assumption problematic. Fossil taxa do not simply lead-up to extant taxa; they represent now-extinct lineages that can substantially impact interpretations of character evolution for extant groups. Here, we analyze body mass data for extant and fossil canids (dogs, foxes, and relatives) for changes in mean and variance through time. AIC-based model selection recovered distinct models for each of eight canid subgroups. We compared model fit of parameter estimates for (1) extant data alone and (2) extant and fossil data, demonstrating that the latter performs significantly better. Moreover, extant-only analyses result in unrealistically low estimates of ancestral mass. Although fossil data are not always available, reconstructions of deep-time organismal evolution in the absence of deep-time data can be highly inaccurate, and we argue that every effort should be made to include fossil data in macroevolutionary studies. © 2013 The Authors. Evolution published by Wiley Periodicals, Inc. on behalf of The Society for the Study of Evolution.
Maximized exoEarth candidate yields for starshades
NASA Astrophysics Data System (ADS)
Stark, Christopher C.; Shaklan, Stuart; Lisman, Doug; Cady, Eric; Savransky, Dmitry; Roberge, Aki; Mandell, Avi M.
2016-10-01
The design and scale of a future mission to directly image and characterize potentially Earth-like planets will be impacted, to some degree, by the expected yield of such planets. Recent efforts to increase the estimated yields, by creating observation plans optimized for the detection and characterization of Earth-twins, have focused solely on coronagraphic instruments; starshade-based missions could benefit from a similar analysis. Here we explore how to prioritize observations for a starshade given the limiting resources of both fuel and time, present analytic expressions to estimate fuel use, and provide efficient numerical techniques for maximizing the yield of starshades. We implemented these techniques to create an approximate design reference mission code for starshades and used this code to investigate how exoEarth candidate yield responds to changes in mission, instrument, and astrophysical parameters for missions with a single starshade. We find that a starshade mission operates most efficiently somewhere between the fuel- and exposure-time-limited regimes and, as a result, is less sensitive to photometric noise sources as well as parameters controlling the photon collection rate in comparison to a coronagraph. We produced optimistic yield curves for starshades, assuming our optimized observation plans are schedulable and future starshades are not thrust-limited. Given these yield curves, detecting and characterizing several dozen exoEarth candidates requires either multiple starshades or an η ≳ 0.3.
NASA Astrophysics Data System (ADS)
Reuveni, Y.; Leontiev, A.
2016-12-01
Using GPS satellite signals, we can study atmospheric processes and coupling mechanisms, which can help us understand the physical conditions in the upper atmosphere that might lead to, or act as proxies for, severe weather events such as extreme storms and flooding. GPS signals received by geodetic stations on the ground are multi-purpose and can also provide estimates of tropospheric zenith delays, which can be converted into mm-accuracy Precipitable Water Vapor (PWV) using collocated pressure and temperature measurements on the ground. Here, we present the use of Israel's geodetic GPS receiver network for extracting tropospheric zenith path delays combined with near Real Time (RT) METEOSAT-10 Water Vapor (WV) and surface temperature pixel intensity values (7.3 and 12.1 channels, respectively) in order to obtain absolute IWV (kg/m2) or PWV (mm) map distribution. The results show good agreement between the absolute values obtained from our triangulation strategy based solely on GPS Zenith Total Delays (ZTD) and METEOSAT-10 surface temperature data compared with available radiosonde-derived IWV/PWV absolute values. The presented strategy can provide unprecedented temporal and spatial IWV/PWV distribution, which is needed as part of the accurate and comprehensive initial conditions provided by upper-air observation systems at temporal and spatial resolutions consistent with the models assimilating them.
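For readers unfamiliar with the GPS-meteorology conversion mentioned above, the following minimal Python sketch shows how a zenith total delay is typically reduced to PWV: subtract a Saastamoinen-style hydrostatic delay computed from surface pressure, then scale the remaining wet delay by a Bevis-style dimensionless factor that depends on the water-vapour-weighted mean temperature Tm. The specific model choices and constant values are textbook assumptions, not the processing chain used in this study.

```python
import math

def zhd_saastamoinen(pressure_hpa, lat_deg, height_m):
    """Zenith hydrostatic delay (m) from surface pressure (Saastamoinen model)."""
    f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) - 2.8e-7 * height_m
    return 0.0022768 * pressure_hpa / f

def pwv_from_ztd(ztd_m, pressure_hpa, tm_kelvin, lat_deg=32.0, height_m=0.0):
    """Convert a GPS zenith total delay (m) to precipitable water vapour (mm)."""
    zwd = ztd_m - zhd_saastamoinen(pressure_hpa, lat_deg, height_m)  # zenith wet delay, m
    k2_prime = 22.1      # K/hPa, refractivity constant (literature value)
    k3 = 3.739e5         # K^2/hPa
    rv = 461.5           # J kg^-1 K^-1, specific gas constant of water vapour
    rho_w = 1000.0       # kg m^-3, density of liquid water
    # dimensionless factor Pi (~0.15 for Tm ~ 270 K); the 1e8 combines the 1e-6
    # refractivity scaling with the hPa units of the k constants
    pi_factor = 1.0e8 / (rho_w * rv * (k2_prime + k3 / tm_kelvin))
    return pi_factor * zwd * 1000.0  # m of delay -> mm of PWV

# example: ZTD = 2.40 m, surface pressure 1013 hPa, Tm = 275 K -> roughly 14 mm of PWV
print(pwv_from_ztd(2.40, 1013.0, 275.0))
```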
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simionescu, A.; Ichinohe, Y.; Werner, N.
2015-10-01
We present the first measurements of the abundances of α-elements (Mg, Si, and S) extending out beyond the virial radius of a cluster of galaxies. Our results, based on Suzaku Key Project observations of the Virgo Cluster, show that the chemical composition of the intracluster medium is consistent with being constant on large scales, with a flat distribution of the Si/Fe, S/Fe, and Mg/Fe ratios as a function of radius and azimuth out to 1.4 Mpc (1.3 r_200). Chemical enrichment of the intergalactic medium due solely to core-collapse supernovae (SNcc) is excluded with very high significance; instead, the measured metal abundance ratios are generally consistent with the solar value. The uniform metal abundance ratios observed today are likely the result of an early phase of enrichment and mixing, with both SNcc and SNe Ia contributing to the metal budget during the period of peak star formation activity at redshifts of 2–3. We estimate the ratio between the number of SNe Ia and the total number of supernovae enriching the intergalactic medium to be between 12% and 37%, broadly consistent with the metal abundance patterns in our own Galaxy or with the SN Ia contribution estimated for the cluster cores.
Opportunities and Challenges of Electric Vehicles Development in Mitigating Climate Change in China
NASA Astrophysics Data System (ADS)
Liu, R.; Li, M. H.; Zhang, H. N.
2017-10-01
As a developing country, China has also undergone noticeable climate change due to its increasing consumption of fossil fuels. The automotive market in China is estimated to be the world’s second-largest new automotive growth market. China is now capable of manufacturing cars entirely independently, which makes car prices more attractive to middle-income families. As one of the energy solutions, electric vehicle technologies cannot be considered solely from an environmental perspective: an energy system should contribute not only to sustainable development but also to environmental, social and economic goals.
Habitat selection of juvenile sole (Solea solea L.): Consequences for shoreface nourishment
NASA Astrophysics Data System (ADS)
Post, Marjolein H. M.; Blom, Ewout; Chen, Chun; Bolle, Loes J.; Baptist, Martin J.
2017-04-01
The shallow coastal zone is an essential nursery habitat for juvenile flatfish species such as sole (Solea solea L.). The increased frequency of shoreface nourishments along the coast is likely to affect this nursery function by altering important habitat conditions, including sediment grain size. Sediment preference of juvenile sole (41-91 mm) was studied in a circular preference chamber in order to understand the relationship between grain size and sole distribution. The preference tests were carried out at 11 °C and 20 °C to reflect seasonal influences. The juveniles showed a significant preference for finer sediments. This preference was neither length dependent (within the length range tested) nor affected by test temperature. Juvenile sole have a small home range and are not expected to move in response to unfavourable conditions. As a result, habitat alterations may have consequences for juvenile survival and subsequently for recruitment to adult populations. It is therefore important to carefully consider nourishment grain size characteristics to safeguard suitable nursery habitats for juvenile sole.
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.
2017-01-01
A color algebra refers to a system for computing sums and products of colors, analogous to additive and subtractive color mixtures. The difficulty addressed here is the fact that, because of metamerism, we cannot know with certainty the spectrum that produced a particular color solely on the basis of sensory data. Knowledge of the spectrum is not required to compute additive mixture of colors, but is critical for subtractive (multiplicative) mixture. Therefore, we cannot predict with certainty the multiplicative interactions between colors based solely on sensory data. There are two potential applications of a color algebra: first, to aid modeling phenomena of human visual perception, such as color constancy and transparency; and, second, to provide better models of the interactions of lights and surfaces for computer graphics rendering.
78 FR 63036 - Transmission Planning Reliability Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-23
... blend of specific quantitative and qualitative parameters for the permissible use of planned non... circumstances, Reliability Standard TPL-001-4 provides a blend of specific quantitative and qualitative... considerations, such as costs and alternatives, guards against a determination based solely on a quantitative...
Cronkite, Collingwood, Cliches, and Caca.
ERIC Educational Resources Information Center
Weingartner, Charles
1978-01-01
This author views the CBS report "Is Anyone Out There Learning?" as an attack on education based solely on ill-informed cliches and anachronistic assumptions, piously intoned by commentators Walter Cronkite and Charles Collingwood. All articles in this journal issue comment on this television program. (SJL)
48 CFR 45.600 - Scope of subpart.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Section 45.600 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT... property for which the Government has a lien or title solely as a result of advance, progress, or performance-based payments that have been liquidated. [72 FR 27389, May 15, 2007] ...
Popper's experiment and communication
NASA Astrophysics Data System (ADS)
Gerjuoy, Edward; Sessler, Andrew M.
2006-07-01
We comment on an analysis by Qureshi of an experiment proposed by Popper and show that an analysis based solely on conventional nonrelativistic quantum mechanics is sufficient to exclude the possibility of subluminal or superluminal communication. That is, local operations cannot be employed to transmit information.
Electrical Equipment of Electrical Stations and Substations,
1979-10-25
of Communist society. In 1921 he wrote: "The sole material base of socialism can be the large/coarse machine industry, capable of reorganizing and...produced with the aid of a special switching system, structurally/constructionally being part of the transformer itself. The transformers, supplied with this
ERIC Educational Resources Information Center
Singh, Nirbhay N.; Lancioni, Giulio E.; Winton, Alan S. W.; Singh, Ashvind N.; Adkins, Angela D.; Singh, Judy
2008-01-01
The effects of a mindfulness-based procedure, called "Meditation on the Soles of the Feet", were evaluated as a cognitive-behavioral intervention for physical aggression in 6 offenders with mild intellectual disabilities. They were taught a simple meditation technique that required them to shift their attention and awareness from the precursors of…
John W. Hanna; James T. Blodgett; Eric W. I. Pitman; Sarah M. Ashiglar; John E. Lundquist; Mee-Sook Kim; Amy L. Ross-Davis; Ned B. Klopfenstein
2014-01-01
As part of an ongoing project to predict Armillaria root disease in the Rocky Mountain zone, this project predicts suitable climate space (potential distribution) for A. solidipes in Wyoming and associated forest areas at risk to disease caused by this pathogen. Two bioclimatic models are being developed. One model is based solely on verified locations of A. solidipes...
ERIC Educational Resources Information Center
Perry, Pam
2010-01-01
The undergraduate business program rankings in USNWR are based solely on peer assessments from deans and associate deans of AACSB accredited U.S. business schools. Often these reputation-based rankings are discounted and likened to a beauty pageant because the process lacks transparent input data. In this study, ten deans and ten associate…
Activity-Based Management Accounting for DoD Depot Maintenance
1994-08-01
used to establish a management accounting system for the depots is described. The current accounting system does not provide the information to answer...nondirect costs are tied solely to direct labor hours. A possible alternative management accounting system uses Activity-Based Costing (ABC). ABC links...along with its probable benefits and costs. Accounting, Management accounting, Cost analysis, Depot maintenance cost.
Victoria A. Saab; Hugh D. W. Powell; Natasha B. Kotliar; Karen R. Newlon
2005-01-01
Information about avian responses to fire in the U.S. Rocky Mountains is based solely on studies of crown fires. However, fire management in this region is based primarily on studies of low-elevation ponderosa pine (Pinus ponderosa) forests maintained largely by frequent understory fires. In contrast to both of these trends, most Rocky Mountain...
Predicting Electron Population Characteristics in 2-D Using Multispectral Ground-Based Imaging
NASA Astrophysics Data System (ADS)
Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Jahn, Jorg-Micha
2018-01-01
Ground-based imaging and in situ sounding rocket data are compared to electron transport modeling for an active inverted-V type auroral event. The Ground-to-Rocket Electrodynamics-Electrons Correlative Experiment (GREECE) mission successfully launched from Poker Flat, Alaska, on 3 March 2014 at 11:09:50 UT and reached an apogee of approximately 335 km over the aurora. Multiple ground-based electron-multiplying charge-coupled device (EMCCD) imagers were positioned at Venetie, Alaska, and aimed toward magnetic zenith. The imagers observed the intensity of different auroral emission lines (427.8, 557.7, and 844.6 nm) at the magnetic foot point of the rocket payload. Emission line intensity data are correlated with electron characteristics measured by the GREECE onboard electron spectrometer. A modified version of the GLobal airglOW (GLOW) model is used to estimate precipitating electron characteristics based on optical emissions. GLOW predicted the electron population characteristics with 20% error given the observed spectral intensities within 10° of magnetic zenith. Predictions are within 30% of the actual values within 20° of magnetic zenith for inverted-V-type aurora. Therefore, it is argued that this technique can be used, at least in certain types of aurora, such as the inverted-V type presented here, to derive 2-D maps of electron characteristics. These can then be used to further derive 2-D maps of ionospheric parameters as a function of time, based solely on multispectral optical imaging data.
Sole Source Aquifers for NY and NJ
This layer shows the designated sole source aquifers of New York and New Jersey. A Sole Source Aquifer is an aquifer that supplies 50% or more of the drinking water for a given area and for which there are no reasonably available alternative sources should the aquifer become contaminated.
21 CFR 520.2088 - Roxarsone tablets.
Code of Federal Regulations, 2011 CFR
2011-04-01
... period. Withdraw 5 days before slaughter. Use as sole source of organic arsenic. (ii) Growing chickens—(a.... Withdraw 5 days before slaughter. Use as sole source of organic arsenic. (b)(1) Specifications. Each tablet... slaughter. Use as sole source of organic arsenic. (ii) [Reserved] (c)(1) Specifications. Each tablet...
Kinetic Assessment of Golf Shoe Outer Sole Design Features
Smith, Neal A.; Dyson, Rosemary J.
2009-01-01
This study assessed human kinetics in relation to golf shoe outer sole design features during the golf swing using a driver club by measuring both within the shoe, and beneath the shoe at the natural grass interface. Three different shoes were assessed: metal 7-spike shoe, alternative 7-spike shoe, and a flat soled shoe. In-shoe plantar pressure data were recorded using Footscan RS International pressure insoles and sampling at 500 Hz. Simultaneously ground reaction force at the shoe outer sole was measured using 2 natural grass covered Kistler force platforms and 1000 Hz data acquisition. Video recording of the 18 right-handed golfers at 200 Hz was undertaken while the golfer performed 5 golf shots with his own driver in each type of shoe. Front foot (nearest to shot direction) maximum vertical force and torque were greater than at the back foot, and there was no significant difference related to the shoe type. Wearing the metal spike shoe when using a driver was associated with more torque generation at the back foot (p < 0.05) than when the flat soled shoe was worn. Within shoe regional pressures differed significantly with golf shoe outer sole design features (p < 0.05). Comparison of the metal spike and alternative spike shoe results provided indications of the quality of regional traction on the outer sole. Potential golf shoe outer sole design features and traction were presented in relation to phases of the golf swing movement. Application of two kinetic measurement methods identified that moderated (adapted) muscular control of foot and body movement may be induced by golf shoe outer sole design features. Ground reaction force measures inform comparisons of overall shoe functional performance, and insole pressure measurements inform comparisons of the underfoot conditions induced by specific regions of the golf shoe outer sole. Key points: Assessments of within golf shoe pressures and beneath shoe forces at the natural grass interface were conducted during golf shots with a driver. Application of two kinetic measurement methods simultaneously identified that moderated (adapted) muscular control of the foot and body movement may be induced by golf shoe outer sole localised design features. Ground force measures inform overall shoe kinetic functional performance. Insole pressure measurement informs of underfoot conditions induced by localised specific regions of the golf outer sole. Significant differences in ground-shoe torque generation and insole regional pressures were identified when different golf shoes were worn. PMID:24149603
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manwaring, John, E-mail: manwaring.jd@pg.com; Rothe, Helga; Obringer, Cindy
Approaches to assess the role of absorption, metabolism and excretion of cosmetic ingredients that are based on the integration of different in vitro data are important for their safety assessment, specifically as it offers an opportunity to refine that safety assessment. In order to estimate systemic exposure (AUC) to aromatic amine hair dyes following typical product application conditions, skin penetration and epidermal and systemic metabolic conversion of the parent compound was assessed in human skin explants and human keratinocyte (HaCaT) and hepatocyte cultures. To estimate the amount of the aromatic amine that can reach the general circulation unchanged after passage through the skin the following toxicokinetically relevant parameters were applied: a) Michaelis–Menten kinetics to quantify the epidermal metabolism; b) the estimated keratinocyte cell abundance in the viable epidermis; c) the skin penetration rate; d) the calculated Mean Residence Time in the viable epidermis; e) the viable epidermis thickness and f) the skin permeability coefficient. In a next step, in vitro hepatocyte K_m and V_max values and whole liver mass and cell abundance were used to calculate the scaled intrinsic clearance, which was combined with liver blood flow and fraction of compound unbound in the blood to give hepatic clearance. The systemic exposure in the general circulation (AUC) was extrapolated using internal dose and hepatic clearance, and C_max was extrapolated (conservative overestimation) using internal dose and volume of distribution, indicating that appropriate toxicokinetic information can be generated based solely on in vitro data. For the hair dye, p-phenylenediamine, these data were found to be in the same order of magnitude as those published for human volunteers. Highlights: • An entirely in silico/in vitro approach to predict in vivo exposure to dermally applied hair dyes • Skin penetration and epidermal conversion assessed in human skin explants and HaCaT • Systemic metabolism was modeled using hepatocyte cultures. • Toxicokinetically relevant parameters were applied to estimate systemic exposure. • There was a good agreement between in vitro and in vivo data.
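To make the clearance arithmetic in this abstract concrete, here is a minimal Python sketch of the kind of in vitro-to-in vivo extrapolation described: Michaelis–Menten data scaled to a whole-liver intrinsic clearance, a well-stirred liver model for hepatic clearance, and AUC and Cmax estimated from the internal dose. The default hepatocellularity, liver mass and blood-flow values, and the choice of the well-stirred model, are common literature assumptions rather than the parameters reported by the authors.

```python
def scaled_intrinsic_clearance(vmax, km, conc, cells_per_g_liver=120e6, liver_mass_g=1800.0):
    """Whole-liver intrinsic clearance (L/h) from in vitro Michaelis-Menten data.
    vmax: nmol/min per 10^6 hepatocytes; km, conc: uM. Scaling defaults are illustrative."""
    cl_ml_min_per_million = vmax / (km + conc)                       # mL/min per 10^6 cells
    cl_int_ml_min = cl_ml_min_per_million * cells_per_g_liver * liver_mass_g / 1e6
    return cl_int_ml_min * 60.0 / 1000.0                             # mL/min -> L/h

def hepatic_clearance(cl_int_l_h, fu_blood, liver_blood_flow_l_h=90.0):
    """Well-stirred liver model: CLh = Q * fu * CLint / (Q + fu * CLint)."""
    q = liver_blood_flow_l_h
    return q * fu_blood * cl_int_l_h / (q + fu_blood * cl_int_l_h)

def systemic_exposure(internal_dose_mg, cl_h_l_h, vd_l):
    """AUC (mg*h/L) as internal dose / clearance; Cmax (mg/L) conservatively as dose / Vd."""
    return internal_dose_mg / cl_h_l_h, internal_dose_mg / vd_l

# illustrative numbers only, not measured values for p-phenylenediamine
cl_int = scaled_intrinsic_clearance(vmax=0.5, km=50.0, conc=1.0)
auc, cmax = systemic_exposure(2.0, hepatic_clearance(cl_int, fu_blood=0.6), vd_l=42.0)
```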
Seasonal productivity in a population of migratory songbirds: why nest data are not enough
Streby, Henry M.; Andersen, David E.
2011-01-01
Population models for many animals are limited by a lack of information regarding juvenile survival. In particular, studies of songbird reproductive output typically terminate with the success or failure of nests, despite the fact that adults spend the rest of the reproductive season rearing dependent fledglings. Unless fledgling survival does not vary, or varies consistently with nest productivity, conclusions about population dynamics based solely on nest data may be misleading. During 2007 and 2008, we monitored nests and used radio telemetry to monitor fledgling survival for a population of Ovenbirds (Seiurus aurocapilla) in a managed-forest landscape in north-central Minnesota, USA. In addition to estimating nest and fledgling survival, we modeled growth for population segments partitioned by proximity to edges of non-nesting cover types (regenerating clearcuts). Nest survival was significantly lower, but fledgling survival was significantly higher, in 2007 than in 2008. Despite higher nest productivity in 2008, seasonal productivity (number of young surviving to independence per breeding female) was higher in 2007. Proximity to clearcut edge did not affect nest productivity. However, fledglings from nests near regenerating sapling-dominated clearcuts (7–20 years since harvest) had higher daily survival (0.992 ± 0.005) than those from nests in interior forest (0.978 ± 0.006), which in turn had higher daily survival than fledglings from nests near shrub-dominated clearcuts (≤6 years since harvest; 0.927 ± 0.030) in 2007, with a similar but statistically non-significant trend in 2008. Our population growth models predicted growth rates that differed by 2–39% (mean = 25%) from simpler models in which we replaced our estimates of first-year survival with one-half adult annual survival (an estimate commonly used in songbird population growth models). We conclude that nest productivity is an inadequate measure of songbird seasonal productivity, and that results based exclusively on nest data can yield misleading conclusions about population growth and clearcut edge effects. We suggest that direct estimates of juvenile survival could provide more accurate information for the management and conservation of many animal taxa.
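The point about replacing measured first-year survival with one-half of adult survival can be illustrated with a simple female-based growth-rate calculation. The sketch below (Python) uses a generic birth-pulse formula and made-up parameter values; it is not the authors' Ovenbird model, only a demonstration of how the juvenile-survival assumption propagates into the growth rate lambda.

```python
def annual_growth_rate(adult_survival, seasonal_productivity, first_year_survival):
    """Simple female-based model: lambda = S_adult + (young per female / 2) * S_first_year.
    The factor of 0.5 assumes an even sex ratio among independent young."""
    return adult_survival + 0.5 * seasonal_productivity * first_year_survival

# illustrative values, not estimates from the study
s_adult = 0.62                   # adult annual survival
productivity = 2.8               # young raised to independence per breeding female
s_juv_measured = 0.35            # first-year survival built from telemetry-based fledgling survival
s_juv_assumed = s_adult / 2.0    # the common "half of adult survival" shortcut

print(annual_growth_rate(s_adult, productivity, s_juv_measured))  # ~1.11
print(annual_growth_rate(s_adult, productivity, s_juv_assumed))   # ~1.05
```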
31 CFR 543.301 - Arms or any related materiel.
Code of Federal Regulations, 2010 CFR
2010-07-01
... solely for humanitarian or protective use, and related technical assistance and training; (c) Supplies of... of arms and related materiel and technical training and assistance intended solely for support of or... technical assistance intended solely for the support of or use by the United Nations Operation in Côte d...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-02
... a sole source SDVOSB concern acquisition. The final rule contains language that more closely mirrors...-AL29 Federal Acquisition Regulation; FAR Case 2008-023, Clarification of Criteria for Sole Source...: Final rule. SUMMARY: The Civilian Agency Acquisition Council and the Defense Acquisition Regulations...
31 CFR 515.546 - Accounts of Cuban sole proprietorships.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 31 Money and Finance:Treasury 3 2011-07-01 2011-07-01 false Accounts of Cuban sole proprietorships. 515.546 Section 515.546 Money and Finance: Treasury Regulations Relating to Money and Finance... proprietorships. Specific licenses are issued unblocking sole proprietorships established under the laws of Cuba...
31 CFR 515.546 - Accounts of Cuban sole proprietorships.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Accounts of Cuban sole proprietorships. 515.546 Section 515.546 Money and Finance: Treasury Regulations Relating to Money and Finance... proprietorships. Specific licenses are issued unblocking sole proprietorships established under the laws of Cuba...
31 CFR 515.546 - Accounts of Cuban sole proprietorships.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 31 Money and Finance:Treasury 3 2013-07-01 2013-07-01 false Accounts of Cuban sole proprietorships. 515.546 Section 515.546 Money and Finance: Treasury Regulations Relating to Money and Finance... proprietorships. Specific licenses are issued unblocking sole proprietorships established under the laws of Cuba...
31 CFR 515.546 - Accounts of Cuban sole proprietorships.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 31 Money and Finance:Treasury 3 2014-07-01 2014-07-01 false Accounts of Cuban sole proprietorships. 515.546 Section 515.546 Money and Finance: Treasury Regulations Relating to Money and Finance... proprietorships. Specific licenses are issued unblocking sole proprietorships established under the laws of Cuba...
31 CFR 800.223 - Solely for the purpose of passive investment.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (Continued) OFFICE OF INVESTMENT SECURITY, DEPARTMENT OF THE TREASURY REGULATIONS PERTAINING TO MERGERS, ACQUISITIONS, AND TAKEOVERS BY FOREIGN PERSONS Definitions § 800.223 Solely for the purpose of passive... Board of Directors. The acquisition by Corporation A of a voting interest in Corporation B is not solely...
Performatively Queer: Sole Parent Postgraduates in the Australian Academy
ERIC Educational Resources Information Center
Hook, Genine A.
2015-01-01
This paper draws on research that considers how gender and agency influence the engagement of sole parent postgraduates within the Australian academy. I argue that parental care responsibilities critically influence participation in higher education for sole parents. I suggest that the gendered construct of caring for children is a feminine…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-20
... DEPARTMENT OF DEFENSE 32 CFR Part 199 [DoD-2010-HA-0072] RIN 0720-AB41 TRICARE; Reimbursement of Sole Community Hospitals and Adjustment to Reimbursement of Critical Access Hospitals; Correction... TRICARE; Reimbursement of Sole Community Hospitals and Adjustment to Reimbursement of Critical Access...
1989-09-01
flathead sole, rex sole, and rock sole all showed indications of blood worm infestations. One liver tumor was found in a rex sole during spring in the ZSF...concentrations mainly in invertebrates; some ...trations (.01 ppb) in waters (from 10x to 420x reference) in fish livers; rarely in fish of Puget Sound central...Eagle Harbor, and Sinclair fish livers, and birds in Inlet. Highest elevation industrialized urban areas. along Ruston-Point Defiance Copper is a natural
The detailed measurement of foot clearance by young adults during stair descent.
Telonio, A; Blanchet, S; Maganaris, C N; Baltzopoulos, V; McFadyen, B J
2013-04-26
Foot clearance is an important variable for understanding safe stair negotiation, but few studies have provided detailed measures of it. This paper presents a new method to calculate minimal shoe clearance during stair descent and compares it to previous literature. Seventeen healthy young subjects descended a five-step staircase with step treads of 300 mm and step heights of 188 mm. Kinematic data were collected with an Optotrak system (model 3020) and three non-collinear infrared markers on the feet. Ninety points were digitized on the foot sole prior to data collection using a 6-marker probe and related to the triad of markers on the foot. The foot sole was reconstructed using the Matlab (version 7.0) "meshgrid" function and minimal distance to each step edge was calculated for the heel, toe and foot sole. Results showed significant differences in minimum clearance between sole, heel and toe, with the shoe sole being the closest and the toe the furthest. While the hind foot sole was closest for 69% of the time, the actual minimum clearance point on the sole did vary across subjects and staircase steps. This new method, and the findings on healthy young subjects, can be applied to future studies of other populations and staircase dimensions. Copyright © 2013 Elsevier Ltd. All rights reserved.
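As a rough illustration of the clearance computation described above, the Python sketch below interpolates the digitized sole points onto a regular grid (the analogue of the paper's Matlab meshgrid step), transforms them into the lab frame using the pose of the foot-marker triad, and takes the minimum distance to a step edge sampled as points. The function names, grid resolution and the use of scipy's griddata are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import griddata

def reconstruct_sole_grid(sole_points_local, nx=60, ny=30):
    """Interpolate digitized sole points (N x 3, foot-fixed frame) onto a regular x-y grid."""
    x, y, z = sole_points_local.T
    xg, yg = np.meshgrid(np.linspace(x.min(), x.max(), nx),
                         np.linspace(y.min(), y.max(), ny))
    zg = griddata((x, y), z, (xg, yg), method='linear')
    pts = np.column_stack([xg.ravel(), yg.ravel(), zg.ravel()])
    return pts[~np.isnan(pts[:, 2])]          # drop grid nodes outside the digitized outline

def min_clearance(sole_grid_local, rotation, translation, edge_points):
    """Minimum distance at one frame between the gridded sole and a step edge.
    rotation (3x3) and translation (3,) come from the foot-marker triad at that frame;
    edge_points is an M x 3 array of points sampled along the step edge."""
    sole_global = sole_grid_local @ rotation.T + translation
    d = np.linalg.norm(sole_global[:, None, :] - edge_points[None, :, :], axis=2)
    return d.min()
```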
Ultrasonographic measurement of the mechanical properties of the sole under the metatarsal heads.
Wang, C L; Hsu, T C; Shau, Y W; Shieh, J Y; Hsu, K H
1999-09-01
The sole under the metatarsal heads functions as a shock absorber during walking and running. The mechanical properties of the sole provide the primary defense against the development of metatarsalgia and foot ulceration. However, limited information about these properties has been documented. In this study, we used ultrasonography to evaluate the mechanical properties, including unloaded thickness, compressibility index, elastic modulus, and energy dissipation ratio, of the sole in 20 healthy subjects. The unloaded thickness decreased progressively from the first to the fifth metatarsal heads, with values of 1.50, 1.36, 1.25, 1.14, and 1.04 cm. The sole under the first metatarsal head had the greatest values for the compressibility index and elastic modulus (55.9% and 1.39 kg/cm2), and the sole under the third metatarsal head had the smallest values (50.8% and 1.23 kg/cm2). The sole under the fifth metatarsal head had the greatest energy dissipation ratio (33.7%), followed by that under the third, second, first, and fourth metatarsal heads. Multivariate adjusted linear regression showed that the unloaded thickness, compressibility index, and elastic modulus values increased significantly with age and body weight (p < 0.05) and that the energy dissipation ratio increased significantly with body weight (p < 0.05).
Quantifying the accuracy of snow water equivalent estimates using broadband radar signal phase
NASA Astrophysics Data System (ADS)
Deeb, E. J.; Marshall, H. P.; Lamie, N. J.; Arcone, S. A.
2014-12-01
Radar wave velocity in dry snow depends solely on density. Consequently, ground-based pulsed systems can be used to accurately measure snow depth and snow water equivalent (SWE) using signal travel-time, along with manual depth-probing for signal velocity calibration. Travel-time measurements require a large bandwidth pulse not possible in airborne/space-borne platforms. In addition, radar backscatter from snow cover is sensitive to grain size and to a lesser extent roughness of layers at current/proposed satellite-based frequencies (~ 8 - 18 GHz), complicating inversion for SWE. Therefore, accurate retrievals of SWE still require local calibration due to this sensitivity to microstructure and layering. Conversely, satellite radar interferometry, which senses the difference in signal phase between acquisitions, has shown a potential relationship with SWE at lower frequencies (~ 1 - 5 GHz) because the phase of the snow-refracted signal is sensitive to depth and dielectric properties of the snowpack, as opposed to its microstructure and stratigraphy. We have constructed a lab-based, experimental test bed to quantify the change in radar phase over a wide range of frequencies for varying depths of dry quartz sand, a material dielectrically similar to dry snow. We use a laboratory grade Vector Network Analyzer (0.01 - 25.6 GHz) and a pair of antennae mounted on a trolley over the test bed to measure amplitude and phase repeatedly/accurately at many frequencies. Using ground-based LiDAR instrumentation, we collect a coordinated high-resolution digital surface model (DSM) of the test bed and subsequent depth surfaces with which to compare the radar record of changes in phase. Our plans to transition this methodology to a field deployment during winter 2014-2015 using precision pan/tilt instrumentation will also be presented, as well as applications to airborne and space-borne platforms toward the estimation of SWE at high spatial resolution (on the order of meters) over large regions (> 100 square kilometers).
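The travel-time approach mentioned in the opening sentences reduces to a short calculation: in dry snow the wave speed follows from density through an empirical permittivity relation, depth follows from the two-way travel time, and SWE is depth times density. The sketch below (Python) uses a Kovacs-style permittivity relation as an assumed example; the specific relation and numbers are illustrative, not values from this experiment.

```python
C_M_PER_NS = 0.299792458   # free-space speed of light, m/ns

def dry_snow_permittivity(density_g_cm3):
    """Empirical real permittivity of dry snow (Kovacs-style relation, assumed here)."""
    return (1.0 + 0.845 * density_g_cm3) ** 2

def swe_from_travel_time(two_way_time_ns, density_g_cm3):
    """Snow depth (m) and SWE (mm w.e.) from a two-way radar travel time in dry snow."""
    v = C_M_PER_NS / dry_snow_permittivity(density_g_cm3) ** 0.5  # wave speed in snow, m/ns
    depth_m = v * two_way_time_ns / 2.0
    swe_mm = depth_m * density_g_cm3 * 1000.0                     # rho_water = 1 g/cm^3
    return depth_m, swe_mm

# example: a 10 ns two-way time through 0.30 g/cm^3 snow -> ~1.2 m depth, ~360 mm SWE
print(swe_from_travel_time(10.0, 0.30))
```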
NASA Astrophysics Data System (ADS)
Savina, M.; Lunghi, M.; Archambault, B.; Baulier, L.; Huret, M.; Le Pape, O.
2016-05-01
Simulating fish larval drift helps assess the sensitivity of recruitment variability to early life history. An individual-based model (IBM) coupled to a hydrodynamic model was used to simulate common sole larval supply from spawning areas to coastal and estuarine nursery grounds at the meta-population scale (4 assessed stocks), from the southern North Sea to the Bay of Biscay (Western Europe) on a 26-yr time series, from 1982 to 2007. The IBM allowed each particle released to be transported by currents, to grow depending on temperature, to migrate vertically depending on development stage, to die along pelagic stages or to settle on a nursery, representing the life history from spawning to metamorphosis. The model outputs were analysed to explore interannual patterns in the amounts of settled sole larvae at the population scale; they suggested: (i) a low connectivity between populations at the larval stage, (ii) a moderate influence of interannual variation in the spawning biomass, (iii) dramatic consequences of life history on the abundance of settling larvae and (iv) the effects of climate variability on the interannual variability of the larvae settlement success.
Evaluating the methodology and performance of jetting and flooding of granular backfill materials.
DOT National Transportation Integrated Search
2014-11-01
Compaction of backfill in confined spaces on highway projects is often performed with small vibratory plates, based solely on the experience of the contractor, leading to inadequate compaction. As a result, the backfill is prone to erosion and of...
TARGET ORGAN TOXICITY IN MARINE AND FRESHWATER TELEOSTS: VOLUME 1 - ORGANS
In any given aquatic ecosystem, fish serve a multitude of critical functions and so, are typically included in the risk assessment of various chemicals in waterways. However, uncertainties in toxicity evaluation can arise since these assessments are usually based solely on acute ...
TARGET ORGAN TOXICITY IN MARINE AND FRESHWATER TELEOSTS: VOLUME 2 - SYSTEMS
In any given aquatic ecosystem, fish serve a multitude of critical functions and so, are typically included in the risk assessment of various chemicals in waterways. However, uncertainties in toxicity evaluation can arise since these assessments are usually based solely on acute ...
Comprehensive Testing Guidelines to Increase Efficiency in INDOT Operations
DOT National Transportation Integrated Search
2012-08-01
When INDOT designs a pavement project, the decision for QC/QA or nonQC/QA is made solely based on the quantity of pavement materials. However, the actual risk will vary depending on the severity of road conditions. The question is how to different...
CyberKM: Harnessing Dynamic Knowledge for Competitive Advantage through Cyberspace
2010-11-01
action (e.g., consider attempting to ride a bicycle, negotiate a contract, or conduct qualitative research based solely upon reading a book about the... ...interprets the data from signals, develops information through incorporation of meaning and context, and finally develops actionable knowledge
Comprehensive Testing Guidelines to Increase Efficiency in INDOT Operations: [Technical Summary]
DOT National Transportation Integrated Search
2012-01-01
When the Indiana Department of Transportation designs a pavement project, a decision for QC/QA (Quality Control/Quality Assurance) or nonQC/QA is made solely based on the quantity of pavement materials to be used in the project. Once the pavement...
THE LEARNING BARGE: ENVIRONMENTAL + CULTURAL ECOLOGIES ON THE ELIZABETH RIVER
A University of Virginia interdisciplinary student team will design and fabricate the Learning Barge—a floating environmental education field station powered solely by site-based solar and wind energy systems. The 32’x120’ barge will support a contained be...
Finance issue brief: medical necessity: year end report-2003.
MacEachern, Lillian
2003-12-31
The information in this issue brief is based on a 50-state survey and a recent literature review. The Health Policy Tracking Service recognizes the complexity of this issue and discourages the use of this document as a sole resource on the issue.