Strategic Methodologies in Public Health Cost Analyses.
Whittington, Melanie; Atherly, Adam; VanRaemdonck, Lisa; Lampe, Sarah
The National Research Agenda for Public Health Services and Systems Research states the need for research to determine the cost of delivering public health services in order to assist the public health system in communicating financial needs to decision makers, partners, and health reform leaders. The objective of this analysis is to compare 2 cost estimation methodologies, public health manager estimates of employee time spent and activity logs completed by public health workers, to understand to what degree manager surveys could be used in lieu of more time-consuming and burdensome activity logs. Employees recorded their time spent on communicable disease surveillance for a 2-week period using an activity log. Managers then estimated time spent by each employee on a manager survey. Robust and ordinary least squares regression was used to measure the agreement between the time estimated by the manager and the time recorded by the employee. The 2 outcomes for this study included time recorded by the employee on the activity log and time estimated by the manager on the manager survey. This study was conducted in local health departments in Colorado. Forty-one Colorado local health departments (82%) agreed to participate. Seven of the 8 models showed that managers underestimate their employees' time, especially for activities on which an employee spent little time. Manager surveys can best estimate time for time-intensive activities, such as total time spent on a core service or broad public health activity, and yet are less precise when estimating discrete activities. When Public Health Services and Systems Research researchers and health departments are conducting studies to determine the cost of public health services, there are many situations in which managers can closely approximate the time required and produce a relatively precise approximation of cost without as much time investment by practitioners.
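The manager-versus-log agreement test described above can be sketched as a simple least-squares fit. The paired hours below are hypothetical illustrative values, not the study's Colorado data, and `numpy.polyfit` stands in for the robust regression the authors also used:

```python
import numpy as np

# Hypothetical paired observations: hours an employee logged over two weeks
# versus the manager's survey estimate for the same employee and activity.
employee_hours = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 24.0, 40.0])
manager_hours = np.array([0.2, 0.6, 1.3, 2.9, 6.5, 13.8, 21.5, 37.0])

# Ordinary least squares fit: manager = intercept + slope * employee.
# A slope below 1 indicates systematic underestimation by managers.
slope, intercept = np.polyfit(employee_hours, manager_hours, 1)

# Relative error per activity: largest for activities with little time.
relative_error = (manager_hours - employee_hours) / employee_hours
```

With these illustrative numbers the fitted slope is below 1 and the relative error is largest for the smallest time entries, mirroring the finding that manager surveys track time-intensive activities better than discrete ones.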
Population Estimates, Conservation Concerns, and Management of Tropicbirds in the Western Atlantic
DAVID S. LEE; MARTHA WALSH-MCGEHEE
2000-01-01
Two species of tropicbirds (Phaethontidae) live in the Western North Atlantic. The White-tailed Tropicbird of the region is an endemic race, Phaethon lepturus catesbyi, with approximately 5000 pairs, half the estimate made less than two decades ago. The Red-billed Tropicbird, P. aethereus, has approximately 2000 pairs regionally and fewer than 8000 pairs...
USDA-ARS's Scientific Manuscript database
Rangelands are the dominant land cover type in the United States (770 million acres), with approximately 53% of the nation's rangelands owned and managed by the private sector and approximately 43% managed by the federal government. Information on the type, extent, and spatial location of...
Epidemiology and costs of cervical cancer screening and cervical dysplasia in Italy
Rossi, Paolo Giorgi; Ricciardi, Alessandro; Cohet, Catherine; Palazzo, Fabio; Furnari, Giacomo; Valle, Sabrina; Largeron, Nathalie; Federici, Antonio
2009-01-01
Background We estimated the number of women undergoing cervical cancer screening annually in Italy, the rates of cervical abnormalities detected, and the costs of screening and management of abnormalities. Methods The annual number of screened women was estimated from National Health Interview data. Data from the Italian Group for Cervical Cancer Screening were used to estimate the number of positive, negative and unsatisfactory Pap smears. The incidence of CIN (cervical intra-epithelial neoplasia) was estimated from the Emilia Romagna Cancer Registry. Patterns of follow-up and treatment costs were estimated using a typical disease management approach based on national guidelines and data from the Italian Group for Cervical Cancer Screening. Treatment unit costs were obtained from Italian National Health Service and Hospital Information System of the Lazio Region. Results An estimated 6.4 million women aged 25–69 years undergo screening annually in Italy (1.2 million and 5.2 million through organized and opportunistic screening programs, respectively). Approximately 2.4% of tests have positive findings. There are approximately 21,000 cases of CIN1 and 7,000–17,000 cases of CIN2/3. Estimated costs to the healthcare service amount to €158.5 million for screening and €22.9 million for the management of cervical abnormalities. Conclusion Although some cervical abnormalities might have been underestimated, the total annual cost of cervical cancer prevention in Italy is approximately €181.5 million, of which 87% is attributable to screening. PMID:19243586
Extant process-based hydrologic and water quality models are indispensable to water resources planning and environmental management. However, models are only approximations of real systems and often calibrated with incomplete and uncertain data. Reliable estimates, or perhaps f...
Estimating ecosystem carbon stocks at Redwood National and State Parks
van Mantgem, Phillip J.; Madej, Mary Ann; Seney, Joseph; Deshais, Janelle
2013-01-01
Accounting for ecosystem carbon is increasingly important for park managers. In this case study we present our efforts to estimate carbon stocks and the effects of management on carbon stocks for Redwood National and State Parks in northern California. Using currently available information, we estimate that on average these parks' soils contain approximately 89 tons of carbon per acre (200 Mg C per ha), while vegetation contains about 130 tons C per acre (300 Mg C per ha). Restoration activities at the parks (logging-road removal, second-growth forest management) were shown to initially reduce ecosystem carbon, but may provide for enhanced ecosystem carbon storage over the long term. We highlight currently available tools that could be used to estimate ecosystem carbon at other units of the National Park System.
USDA-ARS's Scientific Manuscript database
Water management is a critical aspect of successful grape production in California’s Central Valley, which represents nearly 1 million acres of grape production valued at approximately 6 billion dollars. Despite competing water use interest and a reduction in water availability over much of Californ...
Food loss and waste management in Turkey.
Salihoglu, Guray; Salihoglu, Nezih Kamil; Ucaroglu, Selnur; Banar, Mufide
2018-01-01
Food waste can be an environmental and economic problem if not managed properly, but it can meet various demands of a country if it is considered as a resource. The purpose of this report is to review the existing state of the field in Turkey and identify the potential of food waste as a resource. Food loss and waste (FLW) was examined throughout the food supply chain (FSC) and quantified using the FAO model. Edible FLW was estimated to be approximately 26 million tons/year. The amount of biodegradable waste was estimated based on waste statistics and research conducted on household food waste in Turkey. The total amount of biodegradable waste was found to be approximately 20 million tons/year, of which more than 8.6 million tons/year is FLW from distribution and consumption in the FSC. Options for the end-of-life management of biodegradable wastes are also discussed in this review article. Copyright © 2017 Elsevier Ltd. All rights reserved.
Challenges in global ballast water management.
Endresen, Øyvind; Lee Behrens, Hanna; Brynestad, Sigrid; Bjørn Andersen, Aage; Skjong, Rolf
2004-04-01
Ballast water management is a complex issue, raising the challenge of merging international regulations and ship-specific configurations with ecological conservation. This complexity is illustrated in this paper by considering ballast water volume, discharge frequency, ship safety, and operational issues, aligned with regional characteristics, to address ecological risk for selected routes. A re-estimation of ballast water volumes gives a global annual level of 3500 Mton. The global ballast water volume discharged into the open sea from ballast water exchange operations is estimated at approximately 2800 Mton. Risk-based decision support systems, coupled to databases of port and invasive species characteristics and distributions, can allow for differentiated treatment levels while maintaining low risk levels. On certain routes the risk is estimated to be unacceptable, and some kind of ballast water treatment or management should be applied.
The Large Synoptic Survey Telescope project management control system
NASA Astrophysics Data System (ADS)
Kantor, Jeffrey P.
2012-09-01
The Large Synoptic Survey Telescope (LSST) program is jointly funded by the NSF, the DOE, and private institutions and donors. From an NSF funding standpoint, the LSST is a Major Research Equipment and Facilities Construction (MREFC) project. The NSF funding process requires proposals and D&D reviews to include activity-based budgets and schedules; documented bases of estimates; risk-based contingency analysis; and cost escalation and categorization. "Out of the box," the commercial tool Primavera P6 contains approximately 90% of the planning and estimating capability needed to satisfy R&D phase requirements, and it is customizable and configurable for the remainder with relatively little effort. We describe the customization, configuration, and use of Primavera for the LSST Project Management Control System (PMCS), assess our experience to date, and describe future directions. Examples in this paper are drawn from the LSST Data Management System (DMS), which is one of three main subsystems of the LSST and is funded by the NSF. By astronomy standards the LSST DMS is a large data management project, processing and archiving over 70 petabytes of image data, producing over 20 petabytes of catalogs annually, and generating 2 million transient alerts per night. Over the 6-year construction and commissioning phase, the DM project is estimated to require 600,000 hours of engineering effort. In total, the DMS cost is approximately 60% hardware/system software and 40% labor.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-11
... received by the Agency under the Pre-IDE program over the past 10 years. Based on FDA's experience with the... rate and reach a steady state of approximately 2,544 submissions per year. FDA estimates from past... annual estimate of 2,544 submissions is based on experienced trends over the past several years. FDA's...
Montuno, Michael A; Kohner, Andrew B; Foote, Kelly D; Okun, Michael S
2013-01-01
Deep brain stimulation (DBS) is an effective technique that has been utilized to treat advanced and medication-refractory movement and psychiatric disorders. In order to avoid implanted pulse generator (IPG) failure and consequent adverse symptoms, a better understanding of IPG battery longevity and management is necessary. Existing methods for battery estimation lack the specificity required for clinical incorporation. Technical challenges prevent higher accuracy longevity estimations, and a better approach to managing end of DBS battery life is needed. The literature was reviewed and DBS battery estimators were constructed by the authors and made available on the web at http://mdc.mbi.ufl.edu/surgery/dbs-battery-estimator. A clinical algorithm for management of DBS battery life was constructed. The algorithm takes into account battery estimations and clinical symptoms. Existing methods of DBS battery life estimation utilize an interpolation of averaged current drains to calculate how long a battery will last. Unfortunately, this technique can only provide general approximations. There are inherent errors in this technique, and these errors compound with each iteration of the battery estimation. Some of these errors cannot be accounted for in the estimation process, and some of the errors stem from device variation, battery voltage dependence, battery usage, battery chemistry, impedance fluctuations, interpolation error, usage patterns, and self-discharge. We present web-based battery estimators along with an algorithm for clinical management. We discuss the perils of using a battery estimator without taking into account the clinical picture. Future work will be needed to provide more reliable management of implanted device batteries; however, implementation of a clinical algorithm that accounts for both estimated battery life and for patient symptoms should improve the care of DBS patients. © 2012 International Neuromodulation Society.
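A minimal sketch of the current-drain interpolation approach the authors critique: estimated longevity is usable capacity divided by average drain. All numbers below are hypothetical, and real IPG batteries deviate from this because of impedance fluctuations, voltage dependence, usage patterns, and self-discharge, which is exactly the compounding error the review describes:

```python
# Hypothetical values; not specifications of any real pulse generator.
capacity_ah = 1.2        # assumed usable battery capacity, amp-hours
avg_drain_a = 100e-6     # assumed average current drain, amps

longevity_hours = capacity_ah / avg_drain_a     # 12,000 hours
longevity_years = longevity_hours / (24 * 365)  # about 1.4 years

# A 20% error in the assumed drain shifts the estimate substantially,
# which is why estimates must be paired with clinical follow-up.
pessimistic_years = capacity_ah / (avg_drain_a * 1.2) / (24 * 365)
```

The gap between the nominal and pessimistic figures illustrates why the authors argue a battery estimator should never be used without the clinical picture.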
Case Report: Perioperative management of a pregnant poly trauma patient for spine fixation surgery.
Vandse, Rashmi; Cook, Meghan; Bergese, Sergio
2015-01-01
Trauma is estimated to complicate approximately one in twelve pregnancies, and is currently a leading non-obstetric cause of maternal death. Pregnant trauma patients requiring non-obstetric surgery pose a number of challenges for anesthesiologists. Here we present the successful perioperative management of a pregnant trauma patient with multiple injuries including occult pneumothorax who underwent T9 to L1 fusion in prone position, and address the pertinent perioperative anesthetic considerations and management.
COST MODELS FOR WATER SUPPLY DISTRIBUTION SYSTEMS
A major challenge for society in the twenty-first century will be replacement, design and optimal management of urban infrastructure. It is estimated that the current world wide demand for infrastructure investment is approximately three trillion dollars annually. A Drinking Wate...
Global health benefits of mitigating ozone pollution with methane emission controls.
West, J Jason; Fiore, Arlene M; Horowitz, Larry W; Mauzerall, Denise L
2006-03-14
Methane (CH4) contributes to the growing global background concentration of tropospheric ozone (O3), an air pollutant associated with premature mortality. Methane and ozone are also important greenhouse gases. Reducing methane emissions therefore decreases surface ozone everywhere while slowing climate warming; yet although methane mitigation has been considered as a climate change measure, it has not been considered for air quality. Here we show that global decreases in surface ozone concentrations, due to methane mitigation, result in substantial and widespread decreases in premature human mortality. Reducing global anthropogenic methane emissions by 20% beginning in 2010 would decrease the average daily maximum 8-h surface ozone by approximately 1 part per billion by volume globally. By using epidemiologic ozone-mortality relationships, this ozone reduction is estimated to prevent approximately 30,000 premature all-cause mortalities globally in 2030, and approximately 370,000 between 2010 and 2030. If only cardiovascular and respiratory mortalities are considered, approximately 17,000 global mortalities can be avoided in 2030. The marginal cost-effectiveness of this 20% methane reduction is estimated to be approximately US $420,000 per avoided mortality. If avoided mortalities are valued at US $1 million each, the benefit is approximately US $240 per tonne of CH4 (approximately US $12 per tonne of CO2 equivalent), which exceeds the marginal cost of the methane reduction. These estimated air pollution ancillary benefits of climate-motivated methane emission reductions are comparable with those estimated previously for CO2. Methane mitigation offers a unique opportunity to improve air quality globally and can be a cost-effective component of international ozone management, bringing multiple benefits for air quality, public health, agriculture, climate, and energy.
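The abstract's cost-benefit claim can be checked with the quoted numbers alone: the assumed US $1 million value per avoided mortality against the US $420,000 marginal cost implies a benefit-cost ratio above 2.

```python
# All figures are taken directly from the abstract.
cost_per_avoided_death = 420_000      # USD, marginal cost-effectiveness
value_per_avoided_death = 1_000_000   # USD, assumed value per mortality
avoided_deaths_2030 = 30_000          # all-cause mortalities avoided in 2030

# Benefit exceeds marginal cost by a factor of roughly 2.4.
benefit_cost_ratio = value_per_avoided_death / cost_per_avoided_death

# Valuing the 2030 avoided mortalities gives a $30 billion benefit.
total_benefit_2030 = avoided_deaths_2030 * value_per_avoided_death
```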
National Economic Burden Associated with Management of Periodontitis in Malaysia.
Mohd Dom, Tuti Ningseh; Ayob, Rasidah; Abd Muttalib, Khairiyah; Aljunid, Syed Mohamed
2016-01-01
Objectives. The aim of this study is to estimate the economic burden associated with the management of periodontitis in Malaysia from the societal perspective. Methods. We estimated the economic burden of periodontitis by combining the disease prevalence with its treatment costs. We estimated treatment costs (with 2012 value of Malaysian Ringgit) using the cost-of-illness approach and included both direct and indirect costs. We used the National Oral Health Survey for Adults (2010) data to estimate the prevalence of periodontitis and 2010 national census data to estimate the adult population at risk for periodontitis. Results. The economic burden of managing all cases of periodontitis at the national level from the societal perspective was approximately MYR 32.5 billion, accounting for 3.83% of the 2012 Gross Domestic Product of the country. It would cost the nation MYR 18.3 billion to treat patients with moderate periodontitis and MYR 13.7 billion to treat patients with severe periodontitis. Conclusion. The economic burden of periodontitis in Malaysia is substantial and comparable with that of other chronic diseases in the country. This is attributable to its high prevalence and high cost of treatment. Judicious application of promotive, preventive, and curative approaches to periodontitis management is decidedly warranted.
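The cost-of-illness arithmetic implied by the abstract can be reproduced directly. The stratum figures come from the abstract; note their sum (MYR 32.0 billion) falls slightly below the rounded MYR 32.5 billion headline total:

```python
# Stratum-level treatment burdens quoted in the abstract, in MYR.
burden_moderate = 18.3e9   # all moderate periodontitis cases
burden_severe = 13.7e9     # all severe periodontitis cases
total_burden = burden_moderate + burden_severe   # ~MYR 32 billion

# "approximately MYR 32.5 billion ... 3.83% of the 2012 Gross Domestic
# Product" implies a GDP of roughly MYR 849 billion.
implied_gdp_2012 = 32.5e9 / 0.0383
```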
ABC estimation of unit costs for emergency department services.
Holmes, R L; Schroeder, R E
1996-04-01
Rapid evolution of the health care industry forces managers to make cost-effective decisions. Typical hospital cost accounting systems do not provide emergency department managers with the information needed, but emergency department settings are so complex and dynamic as to make the more accurate activity-based costing (ABC) system prohibitively expensive. Through judicious use of the available traditional cost accounting information and simple computer spreadsheets, managers may approximate the decision-guiding information that would result from the much more costly and time-consuming implementation of ABC.
Understanding the Socioeconomic Effects of Wildfires on Western U.S. Public Lands
NASA Astrophysics Data System (ADS)
Sanchez, J. J.; Srivastava, L.; Marcos-Martinez, R.
2017-12-01
Climate change has resulted in the increased severity and frequency of forest disturbances due to wildfires, droughts, pests and diseases that compromise the sustainable provision of forest ecosystem services (e.g., water quantity and quality, carbon sequestration, recreation). A better understanding of the environmental and socioeconomic consequences of forest disturbances (i.e., wildfires) could improve the management and protection of public lands. We used a single-site benefit transfer function and spatially explicit information for demographic, socioeconomic, and site-specific characteristics to estimate the monetized value of market and non-market ecosystem services provided by forests on Western US public lands. These estimates are then used to approximate the costs of forest disturbances caused by wildfires of varying frequency and intensity, and across sites with heterogeneous characteristics and protection and management strategies. Our analysis provides credible estimates of the benefits of the forest for land management by the United States Forest Service, thereby assisting forest managers in planning resourcing and budgeting priorities.
NASA Astrophysics Data System (ADS)
Sun, Xiaolong; Xiang, Yang; Shi, Zheming
2018-05-01
Groundwater flow models implemented to manage regional water resources require aquifer hydraulic parameters. Traditional methods for obtaining these parameters include laboratory experiments, field tests and model inversions, each potentially hindered by its own limitations. Here, we propose a methodology for estimating hydraulic conductivity and storage coefficients using the spectral characteristics of coseismic groundwater-level oscillations and seismic Rayleigh waves. The results from Well X10 are consistent with the variations and spectral characteristics of the water-level oscillations and seismic waves, and yield an estimated hydraulic conductivity of approximately 1 × 10⁻³ m s⁻¹ and a storativity of 15 × 10⁻⁶. The proposed methodology for estimating hydraulic parameters in confined aquifers is a practical and novel approach for groundwater management and seismic precursor anomaly analyses.
Information flow in the DAMA project beyond database managers: information flow managers
NASA Astrophysics Data System (ADS)
Russell, Lucian; Wolfson, Ouri; Yu, Clement
1996-12-01
To meet the demands of commercial data traffic on the information highway, a new look at managing data is necessary. One projected activity, sharing of point-of-sale information, is being considered in the Demand Activated Manufacturing Project (DAMA) of the American Textile Partnership (AMTEX) project. A scenario is examined in which 100,000 retail outlets communicate over a period of days. They provide the latest estimate of demand for sewn products across a chain of 26,000 suppliers through the use of bill-of-materials explosions at four levels of detail. Enabling this communication requires an approach that shares common features with both workflows and database management. A new paradigm, the information flow manager, is developed to handle this situation, including the case where members of the supply chain fail to communicate and go out of business. Techniques for approximation are introduced so as to keep estimates of demand as current as possible.
First data on direct costs of lung cancer management in Morocco.
Tachfouti, N; Belkacemi, Y; Raherison, C; Bekkali, R; Benider, A; Nejjari, C
2012-01-01
Lung cancer is the leading cause of cancer morbidity and mortality. Its management has a significant economic impact on society. Despite a high incidence of cancer, there is as yet no national register for this disease in Morocco. The main goal of this report was to estimate the medical costs of lung cancer in our country. We first estimated the number of annual new cases according to stage of the disease on the basis of the Grand-Casablanca-Region Cancer Registry data. For each subgroup, the protocol of treatment was described taking into account international guidelines, and an evaluation of individual costs during the first year following diagnosis was made. Extrapolation of the results to the whole country was used to calculate the total annual cost of treatments for lung cancer in Morocco. Overall, approximately 3,500 new cases of lung cancer occur each year in the country. Stages I and II account for only 4% of cases, while 96% are diagnosed at locally advanced or metastatic stages III and IV. The total medical cost of lung cancer in Morocco is estimated to be around USD 12 million. This cost represents approximately 1% of the global budget of the Health Department. According to AROME guidelines, about 86% of newly diagnosed lung cancer cases needed palliative treatment while 14% required curative-intent therapy. The total costs of early- and advanced-stage lung cancer management during the first year were estimated to be USD 4,600 and USD 3,420 per patient, respectively. This study provides health decision-makers with a first estimate of costs and the opportunity to achieve the optimal use of available data to estimate the needs of health facilities in Morocco. A substantial proportion of the burden of lung cancer could be prevented through the application of existing cancer control knowledge and by implementing tobacco control programs.
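The headline total can be reconstructed from the abstract's own figures, assuming the stage-specific first-year costs apply to the stated 4%/96% stage split:

```python
# All inputs are quoted in the abstract.
new_cases = 3_500
early_share, advanced_share = 0.04, 0.96   # stages I-II vs. III-IV
cost_early, cost_advanced = 4_600, 3_420   # USD per patient, first year

# Stage-weighted national cost: ~USD 12.1 million, consistent with the
# abstract's "around USD 12 million" estimate.
total_cost = (new_cases * early_share * cost_early
              + new_cases * advanced_share * cost_advanced)
```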
Risley, John C.; Gannett, Marshall W.
2006-01-01
The Lower Klamath and Tule Lake National Wildlife Refuges, located in the upper Klamath Basin of Oregon and California, encompass approximately 46,700 and 39,100 acres, respectively. Demand for water in the semiarid upper Klamath Basin has increased in recent years, resulting in the need to better quantify water availability and use in the refuges. This report presents an evaluation of water-use estimates for both refuges derived on the basis of two approaches. One approach used evaporation and evapotranspiration estimates and the other used measured inflow and outflow data. The quality of the inflow and outflow data also was assessed. Annual water use in the refuges, using evapotranspiration estimates, was computed with the use of different rates for each of four land-use categories. Annual water-use rates for grain fields, seasonal wetlands, permanently flooded wetlands with emergent vegetation, and open-water bodies were 2.5, 2.9, 2.63, and 4.07 feet per year, respectively. Total water use was estimated as the sum of the products of each rate and the number of acres in its associated land-use category. Mean annual (2003-2005) water use for the Lower Klamath and Tule Lake refuges was approximately 124,000 and 95,900 acre-feet, respectively. To estimate water deliveries needed for each refuge, first, annual precipitation for 2003-2005 was subtracted from the annual water use for those years. Then, an adjusted total was obtained by adding 20 percent to the difference to account for salinity flushing. Resulting estimated mean annual adjusted needed water deliveries in 2003-2005 for the Lower Klamath and Tule Lake refuges were 107,000 and 82,800 acre-feet, respectively. Mean annual net inflow to the refuges for 2003-2005 was computed by subtracting estimated and measured surface-water outflows from inflows. Mean annual net inflow during the 3-year period for the Lower Klamath refuge, calculated for a subsection of the refuge, was approximately 73,700 acre-feet. 
The adjusted needed water delivery for this section of the refuge, calculated from evapotranspiration estimates, was approximately 77,600 acre-feet. For the Tule Lake refuge, mean annual net inflow during the 3-year period was approximately 76,100 acre-feet, which is comparable to the estimated annual needed water delivery for the refuge of 82,800 acre-feet. For 1962-2005, mean annual net inflow to the Lower Klamath refuge was approximately 49,800 acre-feet, about 23,900 acre-feet less than for 2003-2005. Although mean April-September net inflows for 1962-2005 and 2003-2005 have remained fairly constant, annual net inflow has increased for October-March, which accounts for the difference. Consistently higher autumn and winter flow deliveries since the mid-1980s reflect a significant change in refuge management. More sections of the refuge are currently managed as seasonal wetlands than were in the 1960s and 1970s. Flow records for the Ady Canal at State Line Road, Klamath Straits Drain at State Line Road, and D Pumping Plant were evaluated for their data quality. On the basis of USGS flow-record criteria, all three flow records were rated as 'poor.' By definition, 95 percent of the daily flows in a record having this rating could be in error by more than 15 percent.
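The report's land-use method reduces to two lines of arithmetic: total use is the acre-weighted sum of category rates, and needed delivery is use minus precipitation, inflated 20% for salinity flushing. The per-acre rates below are the report's; the acreage split and precipitation figure are hypothetical illustrations, not the refuges' actual values:

```python
# Per-acre water-use rates from the report, in feet per year.
rates_ft_per_yr = {
    "grain": 2.5,
    "seasonal_wetland": 2.9,
    "permanent_wetland": 2.63,
    "open_water": 4.07,
}

# Hypothetical acreage split (not the refuges' actual land-use breakdown).
acres = {
    "grain": 5_000,
    "seasonal_wetland": 20_000,
    "permanent_wetland": 10_000,
    "open_water": 5_000,
}

# Total use = sum of rate * acres, in acre-feet per year.
water_use = sum(rates_ft_per_yr[k] * acres[k] for k in rates_ft_per_yr)

# Needed delivery = (use - precipitation) * 1.20 for salinity flushing.
precip_acre_ft = 15_000  # hypothetical annual precipitation over the refuge
needed_delivery = (water_use - precip_acre_ft) * 1.20
```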
CONCENTRATED ANIMAL FEEDING OPERATIONS AS A SOURCE OF EDCS AND THEIR MANAGEMENT
In the United States, there are an estimated 376,000 animal feeding operations, generating approximately 128 billion pounds of waste each year. A facility is an animal feeding operation (AFO) if animals are stabled/confined, or fed/maintained, for 45 days or more within any 12-month per...
Community forestry enterprises in Mexico: sustainability and competitiveness
Frederick W. Cubbage; Robert R. Davis; Diana Rodriguez Paredes; Ramon Mollenhauer; Yoanna Kraus Elsin; Gregory E. Frey; Ignacio A. Gonzalez Hernandez; Humberto Albarran Hurtado; Anita Mercedes Salazar Cruz; Diana Nacibe Chemor Salas
2015-01-01
Community-based forest management, such as Community Forest Enterprises (CFEs), has the potential to generate positive socio-environmental and economic outcomes. We performed a detailed survey of financial and production parameters for 30 of the approximately 992 CFEs in Mexico in order to estimate costs, income, profits, and sustainability, but only two of these had...
USDA-ARS's Scientific Manuscript database
Irrigation in the central valley of California is essential for successful wine grape production, which represents nearly 1 million acres valued at approximately 6 billion dollars. With reductions in water availability and competing water use interests in much of California, there is a critical need...
USDA-ARS's Scientific Manuscript database
It is estimated that food-borne pathogens cause approximately 76 million cases of gastrointestinal illnesses, 325,000 hospitalizations, and 5,000 deaths in the United States annually. Genomic, proteomic, and metabolomic studies, particularly, genome sequencing projects are providing valuable inform...
Taking error into account when fitting models using Approximate Bayesian Computation.
van der Vaart, Elske; Prangle, Dennis; Sibly, Richard M
2018-03-01
Stochastic computer simulations are often the only practical way of answering questions relating to ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate. Approximate Bayesian Computation (ABC) offers an increasingly popular approach to this problem, widely applied across a variety of fields. However, ensuring the accuracy of ABC's estimates has been difficult. Here, we obtain more accurate estimates by incorporating estimation of error into the ABC protocol. We show how this can be done where the data consist of repeated measures of the same quantity and errors may be assumed to be normally distributed and independent. We then derive the correct acceptance probabilities for a probabilistic ABC algorithm, and update the coverage test with which accuracy is assessed. We apply this method, which we call error-calibrated ABC, to a toy example and a realistic 14-parameter simulation model of earthworms that is used in environmental risk assessment. A comparison with exact methods and the diagnostic coverage test show that our approach improves estimation of parameter values and their credible intervals for both models. © 2017 by the Ecological Society of America.
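For readers unfamiliar with ABC, a plain rejection-ABC sampler (without the paper's error-calibrated acceptance probabilities) illustrates the basic idea: draw a parameter from the prior, simulate, and accept when a summary statistic falls close to the observed one. The Gaussian model and tolerance below are illustrative choices, not the earthworm model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data: repeated measures of one quantity with normal errors,
# matching the setting the error-calibrated method assumes.
observed = rng.normal(3.0, 1.0, size=50)
obs_mean = observed.mean()

def simulate(theta, rng):
    """Forward model: repeated noisy measures given parameter theta."""
    return rng.normal(theta, 1.0, size=50)

accepted = []
for _ in range(20_000):
    theta = rng.uniform(0.0, 6.0)           # draw from a flat prior
    sim_mean = simulate(theta, rng).mean()  # summary statistic
    if abs(sim_mean - obs_mean) < 0.1:      # rejection step, tolerance 0.1
        accepted.append(theta)

posterior_mean = float(np.mean(accepted))   # ABC estimate of theta
```

Shrinking the tolerance sharpens the posterior at the cost of more rejections; the paper's error-calibrated variant instead folds the measurement-error model into the acceptance probability itself.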
Impacts of Autonomous Adaptations on the Hydrological Drought Under Climate Change Condition
NASA Astrophysics Data System (ADS)
Oki, T.; Satoh, Y.; Pokhrel, Y. N.; KIM, H.; Yoshimura, K.
2014-12-01
Because of the expected effects of climate change on the quantity and spatial distribution of available water resources, assessing changes in the balance between water demand and supply is critical for some regions. Historically, water deficiencies were overcome by planned water management such as dam regulation and irrigation. However, only a few studies have investigated the effect of anthropogenic factors on the risk of imbalance between water demand and supply under climate change conditions. Estimating the potential deficiency of existing infrastructure under a changing water environment is therefore needed to help society adapt to future climate change. This study aims to estimate the impacts of climate change on the risk of water scarcity projected under the CMIP5 RCP scenarios, and the efficiency of autonomous adaptation through anthropogenic water management, such as reservoir operation and irrigation using groundwater. First, tendencies in water-scarcity changes under climate change are estimated with an improved land surface model that integrates natural water cycles and human activities. Second, the efficiency of human-developed infrastructure is analyzed by comparing naturalized and fully anthropogenic offline simulations. The number of hydrological drought days is projected to increase and decrease in approximately 70% and 24% of global land, respectively, when anthropogenic water management is considered; under naturalized conditions without such management, the corresponding figures are approximately 82% and 16%. The differences indicate how autonomous adaptation through anthropogenic water management can reduce the impacts of climate change.
In addition, adequate enhancement of infrastructure is necessary to meet expected water scarcity, because the positive and negative effects of artificial water regulation have an impact on water-scarcity risk comparable to that of climate change itself in regions where human activity is significant, even under the worst-case RCP8.5 scenario. More realistic assessments of the impacts of climate change on water resources, and estimates of how much economic investment is needed to maintain the current level of water-scarcity risk, are necessary.
Development of an adaptive harvest management program for Taiga bean geese
Johnson, Fred A.; Alhainen, Mikko; Fox, Anthony D.; Madsen, Jesper
2016-01-01
This report describes recent progress in specifying the elements of an adaptive harvest program for taiga bean goose. It describes harvest levels appropriate for first rebuilding the population of the Central Management Unit and then maintaining it near the goal specified in the AEWA International Single Species Action Plan (ISSAP). This report also provides estimates of the length of time it would take under ideal conditions (no density dependence and no harvest) to rebuild depleted populations in the Western and Eastern Management Units. We emphasize that our estimates are a first approximation because detailed demographic information is lacking for taiga bean geese. Using allometric relationships, we estimated parameters of a theta-logistic matrix population model. The mean intrinsic rate of growth was estimated as r = 0.150 (90% credible interval: 0.120 – 0.182). We estimated the mean form of density dependence as 2.361 (90% credible interval: 0.473 – 11.778), suggesting the strongest density dependence occurs when the population is near its carrying capacity. Based on expert opinion, carrying capacity (i.e., population size expected in the absence of hunting) for the Central Management Unit was estimated as K = 87,900 (90% credible interval: 82,000 – 94,100). The ISSAP specifies a population goal for the Central Management Unit of 60,000 – 80,000 individuals in winter; thus, we specified a preliminary objective function as one which would minimize the difference between this goal and population size. Using the concept of stochastic dominance to explicitly account for uncertainty in demography, we determined that optimal harvest rates for 5-, 10-, 15-, and 20-year time horizons were h = 0.00, 0.02, 0.05, and 0.06, respectively. These optima represent a tradeoff between the harvest rate and the time required to achieve and maintain a population size within desired bounds.
We recognize, however, that regulation of absolute harvest rather than harvest rate is more practical, but our matrix model does not permit one to calculate an exact harvest associated with a specific harvest rate. Approximate harvests for current population size in the Central Management Unit are 0, 1,200, 2,300, and 3,500 for the 5, 10, 15, and 20-year time horizons, respectively. Populations of taiga bean geese in the Western and Eastern Units would require at least 10 and 13 years, respectively, to reach their minimum goals under the most optimistic of scenarios. The presence of harvest, density dependence, or environmental variation could extend these time frames considerably. Finally, we stress that development and implementation of internationally coordinated monitoring programs will be essential to further development and implementation of an adaptive harvest management program.
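The report's population dynamics can be illustrated with a deterministic theta-logistic projection using the point estimates quoted above (r = 0.150, theta = 2.361, K = 87,900). This sketch omits the age-structured matrix, demographic uncertainty, and environmental variation of the actual model; it only shows the shape of growth under a proportional harvest.

```python
def theta_logistic_step(n, r=0.150, k=87_900, theta=2.361, harvest_rate=0.0):
    # One annual step: theta-logistic growth (strongest density dependence
    # near carrying capacity k) minus a proportional harvest.
    return n + r * n * (1.0 - (n / k) ** theta) - harvest_rate * n

# Project the Central Management Unit forward 10 years at h = 0.02.
n = 60_000.0  # lower bound of the ISSAP winter population goal
for _ in range(10):
    n = theta_logistic_step(n, harvest_rate=0.02)
print(round(n))  # grows toward, but stays below, carrying capacity
```

At n = K the growth term vanishes, so with no harvest the population sits at carrying capacity, consistent with the report's definition of K as the population size expected in the absence of hunting.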
Where did the US forest biomass/carbon go?
Christopher William Woodall
2012-01-01
In April 2012, with the submission of the 1990-2010 US Greenhouse Gas (GHG) Inventory to the United Nations Framework Convention on Climate Change (UNFCCC), the official estimates of aboveground live tree carbon stocks within managed forests of the United States will drop by approximately 14% compared with last year's inventory. It does not stop there; dead wood...
Sim, Natasha M; Wilson, David C; Velis, Costas A; Smith, Stephen R
2013-10-01
The UN-Habitat Integrated Sustainable Waste Management (ISWM) benchmarking methodology was applied to profile the physical and governance features of municipal solid waste (MSW) management in the former Soviet Union city of Bishkek, capital of the Kyrgyz Republic. Most of the ISWM indicators were in the expected range for a low-income city when compared with 20 reference cities. Approximately 240,000 tonnes per year of MSW is generated in Bishkek (equivalent to 200 kg per capita per year); collection coverage is over 80%, and 90% of disposed waste goes to semi-controlled sites operating with minimal environmental standards. The waste composition was a distinctive feature, with relatively high paper content (20-27% wt.) and intermediate organic content (30-40% wt.). The study provides the first quantitative estimates of informal-sector recycling, which is currently unrecognised by the city authorities. Approximately 18% wt. of generated MSW is recycled, representing an estimated annual saving to the city authorities of US$0.7-1.1 million in avoided collection/disposal costs. The waste management system is controlled by a centralised municipal waste enterprise (Tazalyk); therefore, institutional coherence is high relative to lower-middle and low-income cities. However, performance on other governance factors, such as inclusivity and financial sustainability, is variable. Future priorities in Bishkek include extending collection to unserved communities; improving landfill standards; increasing recycling rates through informal-sector cooperation; improving data availability; and engaging all stakeholders in waste management strategy decisions. Extending the scope and flexibility of the ISWM protocol is recommended to better represent the variation in conditions that occur in waste management systems in practice.
Vilaysouk, Xaysackda; Babel, Sandhya
2017-07-01
Climate change is a consequence of greenhouse gas emissions. Greenhouse gas (GHG) emissions from the waste sector contribute 3% of total anthropogenic emissions. In this study, applicable solutions for municipal solid waste (MSW) management in Luangprabang (LPB), Laos, were examined. Material flow analysis of MSW was performed to estimate the amount of MSW generated in 2015; approximately 29,419 tonnes were estimated for that year. Unmanaged landfilling was the main disposal method, while open burning of MSW was also practiced to some extent. The Intergovernmental Panel on Climate Change 2006 model and the Atmospheric Brown Clouds Emission Inventory Manual were used to estimate GHG emissions from existing MSW management; total emissions are 33,889 tonnes/year of carbon dioxide equivalents (CO2-eq). Three scenarios were developed to reduce GHG emissions and environmental problems. Improving MSW management by expanding collection services, introducing composting and recycling, and avoiding open burning can be considered solutions to the problems of LPB. The lowest GHG emissions are achieved in the scenario proposing composting and recycling, with a total GHG emissions reduction of 18,264 tonnes/year CO2-eq.
A pilot outreach program for small quantity generators of hazardous waste.
Brown, M S; Kelley, B G; Gutensohn, J
1988-01-01
The Massachusetts Department of Environmental Management initiated a pilot project to improve compliance with hazardous waste regulations and management of hazardous wastes with auto body shops around the state. The program consisted of mass mailings, a series of workshops throughout the state, a coordinated inspection program by the state regulatory agency, and technology transfer. At the start of the program in January 1986, approximately 650 of the estimated 2,350 auto body shops in the state had notified EPA of their waste generating activities; by January 1987, approximately 1,200 shops had done so. Suggestions for improving program efforts include tailoring the outreach effort to the industry, government-sponsored research and development directed at the needs of small firms, mandatory participation in hazardous waste transportation programs, and better coordination by EPA of its information collection and distribution program. PMID:3421393
Thornton, Philip K.; Herrero, Mario
2010-01-01
We estimate the potential reductions in methane and carbon dioxide emissions from several livestock and pasture management options in the mixed and rangeland-based production systems in the tropics. The impacts of adoption of improved pastures, intensifying ruminant diets, changes in land-use practices, and changing breeds of large ruminants on the production of methane and carbon dioxide are calculated for two levels of adoption: complete adoption, to estimate the upper limit to reductions in these greenhouse gases (GHGs), and optimistic but plausible adoption rates taken from the literature, where these exist. Results are expressed both in GHG per ton of livestock product and in Gt CO2-eq. We estimate that the maximum mitigation potential of these options in the land-based livestock systems in the tropics amounts to approximately 7% of the global agricultural mitigation potential to 2030. Using historical adoption rates from the literature, the plausible mitigation potential of these options could contribute approximately 4% of global agricultural GHG mitigation. This could be worth on the order of $1.3 billion per year at a price of $20 per t CO2-eq. The household-level and sociocultural impacts of some of these options warrant further study, however, because livestock have multiple roles in tropical systems that often go far beyond their productive utility. PMID:20823225
NASA Astrophysics Data System (ADS)
Francois, Baptiste; Hingray, Benoit; Creutin, Jean-Dominique; Hendrickx, Frederic
2015-04-01
The performance of water systems used worldwide for the management of water resources is expected to be influenced by future changes in regional climates and water uses. Anticipating possible performance changes of a given system requires a modeling chain simulating its management. Operational management is usually not trivial, especially when several conflicting objectives have to be accounted for. Management models are therefore often a crude representation of the real system, and they only approximate its performance. Estimated performance changes are expected to depend on the management model used, but this is often not assessed. This communication analyzes the influence of the management strategy representation on the performance of an Alpine reservoir (Serre-Ponçon, South-East France) for which irrigation supply, hydropower generation, and recreational activities are the main objectives. We consider three ways to construct the strategy, named clear-, short-, and far-sighted management, based on different degrees of forecastability of seasonal inflows into the reservoir. The strategies are optimized using a Dynamic Programming algorithm (deterministic for clear-sighted and implicit stochastic for short- and far-sighted). System performance is estimated for an ensemble of future hydro-meteorological projections obtained in the RIWER2030 research project (http://www.lthe.fr/RIWER2030/) from a suite of climate experiments from the EU ENSEMBLES research project. Our results show that changes in system performance are much more influenced by changes in hydro-meteorological variables than by the choice of strategy modeling. They also show that a simple strategy representation (i.e., clear-sighted management) leads to estimates of performance modifications similar to those obtained with a representation supposedly closer to the real world (i.e., far-sighted management).
The short-sighted management approach leads to significantly different results, especially when inter-annual inflow variability is high. Key words: climate change, water resources, impact, management strategy modelling
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-19
... tentatively estimates that each staff member will spend approximately six hours per work week for six months... determined as follows: Seven employees x (six hours/week/employee x 24 weeks) = 1,008 hours. Assuming the... employees x (six hours/week/employee x 24 weeks) = 432 hours. Assuming the employees are at the Level III...
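The burden estimate in this notice is a simple product of staff count, weekly hours, and number of weeks; a minimal sketch checking the first figure quoted above (the staffing behind the 432-hour figure is elided in the excerpt, so it is not reproduced here):

```python
def burden_hours(staff, hours_per_week, weeks):
    # Total reporting-burden hours = staff x hours/week x weeks.
    return staff * hours_per_week * weeks

print(burden_hours(7, 6, 24))  # 7 employees x 144 hours each = 1008 hours
```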
NASA Astrophysics Data System (ADS)
Affandy, Nur Azizah; Isnaini, Enik; Laksono, Arif Budi
2017-06-01
Waste management has become a serious issue in Indonesia. Waste production in Lamongan Regency is increasing linearly with population growth and current human activities, creating a gap between waste production and waste management, a critical problem that should be solved immediately. In response, the Government of Lamongan Regency has enacted a new waste management policy through a program named Lamongan Green and Clean (LGC). The collected data showed that "wet waste" or "organic waste" made up approximately 63% of total domestic waste, so the waste can be expected to decompose quite quickly. Observation showed that waste generation was approximately 0.25 kg/person/day. Meanwhile, the population of Tumenggungan Village, Lamongan (data obtained from the Lamongan district monograph, 2012) was 4651 people; thus, the total waste in Lamongan can be estimated at approximately 0.25 kg/person/day x 4651 people = 930 kg/day. The 3RWB model is carried out in several stages. In the planning stage, communities are encouraged to become self-aware in selecting and managing waste, motivated by its potential benefits; community awareness of waste management grew significantly. In the socialization stage, village staff, environmental experts, and policymakers bear a significant role in disseminating this awareness among the people. In the implementation phase, waste management with the 3RWB model is promoted by applying it within the community, from waste selection and management through the sale of recycled products via the waste bank. In the evaluation stage, village managers, environmental experts, and waste managers are expected to regularly supervise and evaluate the whole waste management activity.
Improving nitrogen management via a regional management plan for Chinese rice production
NASA Astrophysics Data System (ADS)
Wu, Liang; Chen, Xinping; Cui, Zhenling; Wang, Guiliang; Zhang, Weifeng
2015-09-01
A lack of basic information on optimal nitrogen (N) management often results in over- or under-application of N fertilizer in small-scale intensive rice farming. Here, we present a new database of N input from a survey of 6611 small-scale rice farmers and rice yield in response to added N in 1177 experimental on-farm tests across eight agroecological subregions of China. This database enables us to evaluate N management by farmers and develop an optimal approach to regional N management. We also investigated grain yield, N application rate, and estimated greenhouse gas (GHG) emissions in comparison to N application and farming practices. Across all farmers, the average N application rate, weighted by the area of rice production in each subregion, was 210 kg ha-1 and ranged from 30 to 744 kg ha-1 across fields and from 131 to 316 kg ha-1 across regions. The regionally optimal N rate (RONR) determined from the experiments averaged 167 kg ha-1 and varied from 114 to 224 kg N ha-1 for the different regions. If these RONR were widely adopted in China, approximately 56% of farms would reduce their use of N fertilizer, and approximately 33% would increase their use of N fertilizer. As a result, grain yield would increase by 7.4% from 7.14 to 7.67 Mg ha-1, and the estimated GHG emissions would be reduced by 11.1% from 1390 to 1236 kg carbon dioxide (CO2) eq Mg-1 grain. These results suggest that to achieve the goals of improvement in regional yield and sustainable environmental development, regional N use should be optimized among N-poor and N-rich farms and regions in China.
The peri-operative management of anti-platelet therapy in elective, non-cardiac surgery.
Alcock, Richard F; Naoum, Chris; Aliprandi-Costa, Bernadette; Hillis, Graham S; Brieger, David B
2013-07-31
Cardiovascular complications are important causes of morbidity and mortality in patients undergoing elective non-cardiac surgery, with adverse cardiac outcomes estimated to occur in approximately 4% of all patients. Anti-platelet therapy withdrawal may precede up to 10% of acute cardiovascular syndromes, with withdrawal in the peri-operative setting incompletely appraised. The aims of our study were to determine the proportion of patients undergoing elective non-cardiac surgery currently prescribed anti-platelet therapy, and identify current practice in peri-operative management. In addition, the relationship between management of anti-platelet therapy and peri-operative cardiac risk was assessed. We evaluated consecutive patients attending elective non-cardiac surgery at a major tertiary referral centre. Clinical and biochemical data were collected and analysed on patients currently prescribed anti-platelet therapy. Peri-operative management of anti-platelet therapy was compared with estimated peri-operative cardiac risk. Included were 2950 consecutive patients, with 516 (17%) prescribed anti-platelet therapy, primarily for ischaemic heart disease. Two hundred and eighty nine (56%) patients had all anti-platelet therapy ceased in the peri-operative period, including 49% of patients with ischaemic heart disease and 46% of patients with previous coronary stenting. Peri-operative cardiac risk score did not influence anti-platelet therapy management. Approximately 17% of patients undergoing elective non-cardiac surgery are prescribed anti-platelet therapy, the predominant indication being for ischaemic heart disease. Almost half of all patients with previous coronary stenting had no anti-platelet therapy during the peri-operative period. The decision to cease anti-platelet therapy, which occurred commonly, did not appear to be guided by peri-operative cardiac risk stratification. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Lee, Casey; Foster, Guy
2013-01-01
In-stream sensors are increasingly deployed as part of ambient water quality-monitoring networks. Temporally dense data from these networks can be used to better understand the transport of constituents through streams, lakes or reservoirs. Data from existing, continuously recording in-stream flow and water quality monitoring stations were coupled with the two-dimensional hydrodynamic CE-QUAL-W2 model to assess the potential of altered reservoir outflow management to reduce sediment trapping in John Redmond Reservoir, located in east-central Kansas. Monitoring stations upstream and downstream from the reservoir were used to estimate 5.6 million metric tons of sediment transported to John Redmond Reservoir from 2007 through 2010, 88% of which was trapped within the reservoir. The two-dimensional model was used to estimate the residence time of 55 equal-volume releases from the reservoir; sediment trapping for these releases varied from 48% to 97%. Smaller trapping efficiencies were observed when the reservoir was maintained near the normal operating capacity (relative to higher flood pool levels) and when average residence times were relatively short. An idealized, alternative outflow management scenario was constructed, which minimized reservoir elevations and the length of time water was in the reservoir, while continuing to meet downstream flood control end points identified in the reservoir water control manual. The alternative scenario is projected to reduce sediment trapping in the reservoir by approximately 3%, preventing approximately 45 000 metric tons of sediment from being deposited within the reservoir annually. This article presents an approach to quantify the potential of reservoir management using existing in-stream data; actual management decisions need to consider the effects on other reservoir benefits, such as downstream flood control and aquatic life.
Lee, Joonnyong; Sohn, JangJay; Park, Jonghyun; Yang, SeungMan; Lee, Saram; Kim, Hee Chan
2018-06-18
Non-invasive continuous blood pressure monitors are of great interest to the medical community due to their value in hypertension management. Recently, studies have shown the potential of pulse pressure as a therapeutic target for hypertension, but not enough attention has been given to non-invasive continuous monitoring of pulse pressure. Although accurate pulse pressure estimation can be of direct value to hypertension management and indirectly to the estimation of systolic blood pressure, as it is the sum of pulse pressure and diastolic blood pressure, only a few inadequate methods of pulse pressure estimation have been proposed. We present a novel, non-invasive blood pressure and pulse pressure estimation method based on pulse transit time and pre-ejection period. Pre-ejection period and pulse transit time were measured non-invasively using electrocardiogram, seismocardiogram, and photoplethysmogram measured from the torso. The proposed method used the 2-element Windkessel model to model pulse pressure with the ratio of stroke volume, approximated by pre-ejection period, and arterial compliance, estimated by pulse transit time. Diastolic blood pressure was estimated using pulse transit time, and systolic blood pressure was estimated as the sum of the two estimates. The estimation method was verified in 11 subjects in two separate conditions with induced cardiovascular response and the results were compared against a reference measurement and values obtained from a previously proposed method. 
The proposed method yielded high agreement with the reference (pulse pressure correlation with reference R ≥ 0.927, diastolic blood pressure correlation with reference R ≥ 0.854, systolic blood pressure correlation with reference R ≥ 0.914) and high estimation accuracy in pulse pressure (mean root-mean-squared error ≤ 3.46 mmHg) and blood pressure (mean root-mean-squared error ≤ 6.31 mmHg for diastolic blood pressure and ≤ 8.41 mmHg for systolic blood pressure) over a wide range of hemodynamic changes. The proposed pulse pressure estimation method provides accurate estimates in situations with and without significant changes in stroke volume. The proposed method improves upon the currently available systolic blood pressure estimation methods by providing accurate pulse pressure estimates.
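The estimation chain described above can be caricatured in code. The functional forms and constants below are hypothetical placeholders (the paper's calibrations are per-subject and not given in the abstract); only the structure follows the text: pulse pressure from a 2-element Windkessel-style stroke-volume/compliance ratio, diastolic pressure from pulse transit time, and systolic pressure as their sum.

```python
def estimate_bp(pep_ms, ptt_ms,
                a=220.0, b=0.8,    # hypothetical SV proxy: falls as PEP lengthens
                c=0.012,           # hypothetical compliance slope vs. PTT
                d=140.0, e=0.25):  # hypothetical DBP calibration vs. PTT
    sv_proxy = a - b * pep_ms      # stroke volume approximated via pre-ejection period
    compliance = c * ptt_ms        # arterial compliance estimated from pulse transit time
    pp = sv_proxy / compliance     # 2-element Windkessel: pulse pressure ~ SV / C
    dbp = d - e * ptt_ms           # diastolic BP from PTT (longer PTT -> lower BP)
    return pp, dbp, dbp + pp       # systolic BP is the sum of DBP and PP

pp, dbp, sbp = estimate_bp(pep_ms=100.0, ptt_ms=250.0)
print(round(pp), round(dbp), round(sbp))
```

In practice each constant would be fitted per subject against a reference cuff or arterial-line measurement; the sketch only shows how PEP and PTT enter the three estimates.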
Seo, Seongwon; Hwang, Yongwoo
1999-08-01
Construction and demolition (C&D) debris is generated at the sites of various construction activities. The amount of debris is usually so large that it must be estimated as accurately as possible for effective waste management and control in urban areas. In this paper, an effective estimation method using a statistical model is proposed. The estimation process comprises the following steps: estimation of the life span of buildings; estimation of the floor area of buildings to be constructed and demolished; calculation of individual intensity units of C&D debris; and estimation of future C&D debris production. The method was applied to the city of Seoul as an actual case, and the estimated amount of C&D debris in Seoul in 2021 was approximately 24 million tons. Of this total, 98% was generated by demolition, and the main components of the debris were concrete and brick.
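The core of the estimation chain is an area-times-intensity calculation: projected debris equals the floor area to be built or demolished multiplied by a per-area debris intensity. The intensity and floor-area figures below are illustrative placeholders, not the calibrated values behind the Seoul estimate.

```python
# Illustrative per-area debris intensities (tonnes per m^2); demolition
# dominates, consistent with the paper's finding that 98% of debris
# came from demolition. Values are hypothetical.
intensity = {
    "construction": 0.05,
    "demolition": 1.0,
}
# Hypothetical projected floor areas (m^2) for a target year.
floor_area = {
    "construction": 1_000_000,
    "demolition": 2_000_000,
}

total = sum(intensity[k] * floor_area[k] for k in intensity)
print(f"{total:,.0f} tonnes")  # 2,050,000 tonnes
```

The paper's remaining steps (building life spans, future floor-area projections) feed the `floor_area` inputs for the forecast year.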
Adams, C. G.; Schenker, J. H.; McGhee, P. S.; Gut, L. J.; Brunner, J. F.
2017-01-01
Novel methods of data analysis were used to interpret codling moth (Cydia pomonella) catch data from central-trap, multiple-release experiments using a standard codlemone-baited monitoring trap in commercial apple orchards not under mating disruption. The main objectives were to determine consistency and reliability for measures of: 1) the trapping radius, composed of the trap's behaviorally effective plume reach and the maximum dispersive distance of a responder population; and 2) the proportion of the population present in the trapping area that is caught. Two moth release designs were used: 1) moth releases at regular intervals in the four cardinal directions, and 2) evenly distributed moth releases across entire approximately 18-ha orchard blocks using both high and low codling moth populations. For both release designs, at high populations, the mean proportion catch was 0.01, and for the even release of low populations, that value was approximately 0.02. Mean maximum dispersive distance for released codling moth males was approximately 260 m. Behaviorally effective plume reach for the standard codling moth trap was < 5 m, and total trapping area for a single trap was approximately 21 ha. These estimates were consistent across three growing seasons and are supported by extraordinarily high replication for this type of field experiment. Knowing the trapping area and mean proportion caught, catch number per single monitoring trap can be translated into absolute pest density using the equation: males per trapping area = catch per trapping area/proportion caught. Thus, catches of 1, 3, 10, and 30 codling moth males per trap translate to approximately 5, 14, 48, and 143 males/ha, respectively, and reflect equal densities of females, because the codling moth sex ratio is 1:1.
Combined with life-table data on codling moth fecundity and mortality, along with data on crop yield per trapping area, this fundamental knowledge of how to interpret catch numbers will enable pest managers to make considerably more precise projections of damage and therefore more precise and reliable decisions on whether insecticide applications are justified. The principles and methods established here for estimating absolute codling moth density may be broadly applicable to pests generally and thereby could set a new standard for integrated pest management decisions based on trapping. PMID:28131989
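The abstract's catch-to-density conversion can be written directly from the quoted parameters (trapping area of approximately 21 ha and proportion caught of 0.01 at high populations):

```python
def males_per_ha(catch, trapping_area_ha=21.0, proportion_caught=0.01):
    # males per trapping area = catch / proportion caught;
    # divide by the trapping area to express density per hectare.
    return catch / proportion_caught / trapping_area_ha

for c in (1, 3, 10, 30):
    print(c, "caught ->", round(males_per_ha(c)), "males/ha")
# reproduces the abstract's ~5, 14, 48, and 143 males/ha
```

Because the sex ratio is 1:1, the same figures also give the female density per hectare.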
[Transsexualism: from diagnosis to management].
De Bonnecaze, G; Pessey, J J; Chaput, B; Al Hawat, A; Vairel, B
2013-01-01
Transsexualism, or gender dysphoria, is a condition in which an individual does not recognize himself or herself in his or her sexual identity and wishes to change it; in this it must be differentiated from sexual ambiguities (hermaphroditism, pseudohermaphroditism), in which the sexual phenotype is not clearly established. In France, the number of transsexual people is estimated at approximately 50,000. Since 2009, transsexualism has no longer been considered a mental illness, though it remains regarded as a long-term illness. The objective of this article is to present recent developments in the management of transsexual patients seeking feminization.
Abundance and distribution of feral pigs at Hakalau Forest National Wildlife Refuge, 2010-2013
Hess, Steven C.; Leopold, Christina R.; Kendall, Steven J.
2013-01-01
The Hakalau Forest Unit of the Big Island National Wildlife Refuge Complex has intensively managed feral pigs (Sus scrofa) and monitored feral pig presence with surveys of all managed areas since 1988. Results of all available data regarding pig management activities through 2004 were compiled and analyzed, but no further analyses had been conducted since then. The objective of this report was to analyze recent feral ungulate surveys at the Hakalau Forest Unit to determine current pig abundance and distribution. Activity indices for feral pigs, consisting of the presence of fresh or intermediate sign at 422 stations, each with approximately 20 sample plots, were compiled for years 2010–2013. A calibrated model based on the number of pigs removed from one management unit and concurrent activity surveys was applied to estimate pig abundance in other management units. Although point estimates appeared to decrease from 489.1 (±105.6) in 2010 to 407.6 (±88.0) in 2013, 95% confidence intervals overlapped, indicating no significant change in pig abundance within all management units. Nonetheless, there were significant declines in pig abundance over the four-year period within management units 1, 6, and 7. Areas where pig abundance remained high include the southern portion of Unit 2. Results of these surveys will be useful for directing management actions towards specific management units.
Factor XIII deficiency in Iran: a comprehensive review of the literature.
Dorgalaleh, Akbar; Naderi, Majid; Hosseini, Maryam Sadat; Alizadeh, Shaban; Hosseini, Soudabeh; Tabibian, Shadi; Eshghi, Peyman
2015-04-01
Factor XIII deficiency (FXIIID) is a rare bleeding disorder with an estimated prevalence of 1 in 2 million worldwide. In Iran, a Middle Eastern country with a high rate of consanguineous marriage, there are approximately 473 patients afflicted with FXIIID, a prevalence estimated to be approximately 12-fold higher than the overall worldwide frequency. In this study, we undertake a comprehensive review of different aspects of FXIIID in the Iranian population. The distribution of this disease across regions of Iran reveals that Sistan and Baluchestan Province has not only the highest number of patients with FXIIID in Iran but the highest global incidence of this condition. Among Iranian patients, umbilical cord bleeding, hematoma, and prolonged wound bleeding are the most frequent clinical manifestations. Several disease-causing mutations occur in Iranian patients with FXIIID, with Trp187Arg being the most common. Traditionally, the management of FXIIID in Iran was based only on administration of fresh frozen plasma or cryoprecipitate, until FXIII concentrate became available for patient management in 2009. Various studies have evaluated the efficacy and safety of prophylactic regimens in different situations, with valuable findings. Although the focus of this study is on Iran, it offers considerable insight into FXIIID that can be applied more extensively to improve management and quality of life in all affected patients.
Type 2 diabetes detection and management among insured adults.
Dall, Timothy M; Yang, Weyna; Halder, Pragna; Franz, Jerry; Byrne, Erin; Semilla, April P; Chakrabarti, Ritashree; Stuart, Bruce
2016-01-01
The Centers for Disease Control and Prevention estimates that 28.9 million adults had diabetes in 2012 in the US, though many patients are undiagnosed or not managing their condition. This study provides US national and state estimates of insured adults with type 2 diabetes who are diagnosed, receiving exams and medication, managing glycemic levels, and experiencing diabetes complications, along with their health expenditures. Such information can be used for benchmarking and to identify gaps in diabetes detection and management. The study combines analysis of survey data with medical claims analysis for the commercially insured, Medicare, and Medicaid populations to estimate the number of adults with diagnosed type 2 diabetes and undiagnosed diabetes by insurance type, age, and sex. Medical claims analysis used the 2012 de-identified Normative Health Information database covering a nationally representative commercially insured population, the 2011 Medicare 5% Sample, and the 2008 Medicaid Mini-Max. Among insured adults in 2012, approximately 16.9 million had diagnosed type 2 diabetes, 1.45 million had diagnosed type 1 diabetes, and 6.9 million had undiagnosed diabetes. Of those with diagnosed type 2 diabetes, approximately 13.0 million (77%) received diabetes medication, ranging from 70% in New Jersey to 82% in Utah. Suboptimal percentages had claims indicating that recommended exams were performed. Of those receiving diabetes medication, 43% (5.6 million) had medical claims indicating poorly controlled diabetes, ranging from 29% with poor control in Minnesota and Iowa to 53% in Texas. Poor control was correlated with higher prevalence of neurological complications (+14%), renal complications (+14%), and peripheral vascular disease (+11%). Patients with poor control averaged $4,860 higher annual health care expenditures, ranging from $6,680 for commercially insured patients to $4,360 for Medicaid and $3,430 for Medicare patients.
This study highlights the large number of insured adults with undiagnosed type 2 diabetes by insurance type and state. Furthermore, this study sheds light on other gaps in diabetes care quality among patients with diagnosed diabetes and corresponding poorly controlled diabetes. These findings underscore the need for improvements in data collection and diabetes screening and management, along with policies that support these improvements.
Sauer, John R.; Link, William A.; Nichols, James D.; Royle, J. Andrew
2005-01-01
Bart et al. (2004) develop methods for predicting the samples needed to estimate long-term trends from count survey data, and they apply these methods to the North American Breeding Bird Survey (BBS). They recommend adding approximately 40% more survey routes in the BBS to allow estimation of long-term (i.e., 20-year) trends for a collection of species. We critique several aspects of their analysis and suggest that their focus on long-term trends and expansion of the present survey design will provide limited benefits for conservation because it fails either to enhance the credibility of the survey or to better tie the survey to regional management activities. A primary innovation claimed by Bart et al. (2004) is the incorporation of bias estimation into study planning. We question the value of this approach, as it requires reliable estimates of the range of future bias. We show that the estimates of bias used by Bart et al. (2004) are speculative. Failure to obtain better estimates of this bias is likely to compromise the credibility of future analyses of the survey. We also note that the generic analysis of population trends that they provide is of questionable validity and is unlikely to be relevant for regions and species of management concern.
Man Portable Vector EMI Sensor for Full UXO Characterization
2012-05-01
with project management and coordination. Drs. Laurens Beran, Leonard Pasion, and Stephen Billings advised on technical aspects and Dr. Gregory Schultz...approximated as a point dipole (e.g., Bell et al., 2001; Pasion and Oldenburg, 2001; Gasperikova et al., 2009). The process of estimating the target...39, 1286–1293. Bell, T. 2005. Geo-location Requirements for UXO Discrimination. SERDP Geo-location Workshop. Billings, S., L. Pasion, N. Lhomme
Anthony C. Caprio; David M. Graber
2000-01-01
This paper examines the resultant conditions of Sequoia and Kings Canyon National Park's burn program relative to knowledge about past fire regimes in this ecosystem. Estimates of past fire-return intervals provide management direction and were used to develop approximations of area burned prior to Euroamerican settlement. This information was used to develop simple...
Narladkar, B. W.
2018-01-01
Broadly, the arthropod species infesting livestock are grouped into flies (biting and non-biting), fleas, lice (biting and sucking), ticks (soft and hard), and mites (burrowing, non-burrowing, and follicular). Among these, biting and non-biting flies and ticks are potent vectors for many bacterial, viral, rickettsial, and protozoan diseases. Livestock vectors have economic significance on three counts: (1) direct losses from their bites and the annoyance, worry, and psychological disturbance produced during biting and feeding, (2) the diseases they transmit, and (3) the expenditure incurred for their control. Flies such as Culicoides spp. and Musca spp. and various species of hard ticks play an important role in disease transmission in addition to their direct effects. For vector control, the recent concept of integrated pest management (IPM) provides the best solution and also addresses problems related to acaricide resistance and environmental protection from hazardous chemicals. However, to successfully implement IPM for each vector species, estimation of two monetary benchmarks, the economic injury level (EIL) and the economic threshold level (ETL), is an essential prerequisite. For many vector species and under several circumstances, estimating the EIL and ETL is difficult. In such a scenario, an approximate estimate, though not exact, can be derived by taking into account several criteria such as the percent prevalence of vectors in a geographical area, the percent losses produced, the total livestock population, and current prices of livestock products such as milk, meat, and wool. A method for this approximate estimation is described and elaborated for the first time in the present review article. PMID:29657396
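The rough loss-estimation logic the review describes can be sketched as a single product of the listed criteria. The formula shape and all input numbers below are illustrative assumptions, not figures from the article.

```python
# Hedged sketch: combine vector prevalence, the production loss it causes,
# herd size, and product price into an approximate annual economic loss.

def approx_annual_loss(population, prevalence, loss_fraction,
                       yield_per_animal, unit_price):
    """Animals affected x production lost per animal x unit price."""
    affected = population * prevalence
    return affected * loss_fraction * yield_per_animal * unit_price

# e.g. (all hypothetical): 1,000,000 cattle, 20% tick prevalence,
# 5% milk-yield loss, 2,000 L/yr per animal, $0.40/L
print(approx_annual_loss(1_000_000, 0.20, 0.05, 2000, 0.40))  # ~8 million
```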
Mroz, T A
1999-10-01
This paper contains a Monte Carlo evaluation of estimators used to control for endogeneity of dummy explanatory variables in continuous outcome regression models. When the true model has bivariate normal disturbances, estimators using discrete factor approximations compare favorably to efficient estimators in terms of precision and bias; these approximation estimators dominate all the other estimators examined when the disturbances are non-normal. The experiments also indicate that one should liberally add points of support to the discrete factor distribution. The paper concludes with an application of the discrete factor approximation to the estimation of the impact of marriage on wages.
Baldini, Christopher G; Culley, Eric J
2011-01-01
A large managed care organization (MCO) in western Pennsylvania initiated a Medical Injectable Drug (MID) program in 2002 that transferred a specific subset of specialty drugs from physician reimbursement under the traditional "buy-and-bill" model in the medical benefit to MCO purchase from a specialty pharmacy provider (SPP) that supplied physician offices with the MIDs. The MID program was initiated with 4 drugs in 2002 (palivizumab and 3 hyaluronate products/derivatives) growing to more than 50 drugs by 2007-2008. To (a) describe the MID program as a method to manage the cost and delivery of this subset of specialty drugs, and (b) estimate the MID program cost savings in 2007 and 2008 in an MCO with approximately 4.6 million members. Cost savings generated by the MID program were calculated by comparing the total actual expenditure (plan cost plus member cost) on medications included in the MID program for calendar years 2007 and 2008 with the total estimated expenditure that would have been paid to physicians during the same time period for the same medication if reimbursement had been made using HCPCS (J code) billing under the physician "buy-and-bill" reimbursement rates. For the approximately 50 drugs in the MID program in 2007 and 2008, the drug cost savings in 2007 were estimated to be $15.5 million (18.2%) or $290 per claim ($0.28 per member per month [PMPM]) and about $13 million (12.7%) or $201 per claim ($0.23 PMPM) in 2008. Although 28% of MID claims continued to be billed by physicians using J codes in 2007 and 22% in 2008, all claims for MIDs were limited to the SPP reimbursement rates. This MID program was associated with health plan cost savings of approximately $28.5 million over 2 years, achieved by the transfer of about 50 physician-administered injectable pharmaceuticals from reimbursement to physicians to reimbursement to a single SPP and payment of physician claims for MIDs at the SPP reimbursement rates.
Price, A.; Peterson, James T.
2010-01-01
Stream fish managers often use fish sample data to inform management decisions affecting fish populations. Fish sample data, however, can be biased by the same factors affecting fish populations. To minimize the effect of sample biases on decision making, biologists need information on the effectiveness of fish sampling methods. We evaluated single-pass backpack electrofishing and seining combined with electrofishing by following a dual-gear, mark–recapture approach in 61 blocknetted sample units within first- to third-order streams. We also estimated fish movement out of unblocked units during sampling. Capture efficiency and fish abundances were modeled for 50 fish species by use of conditional multinomial capture–recapture models. The best-approximating models indicated that capture efficiencies were generally low and differed among species groups based on family or genus. Efficiencies of single-pass electrofishing and seining combined with electrofishing were greatest for Catostomidae and lowest for Ictaluridae. Fish body length and stream habitat characteristics (mean cross-sectional area, wood density, mean current velocity, and turbidity) also were related to capture efficiency of both methods, but the effects differed among species groups. We estimated that, on average, 23% of fish left the unblocked sample units, but net movement varied among species. Our results suggest that (1) common warmwater stream fish sampling methods have low capture efficiency and (2) failure to adjust for incomplete capture may bias estimates of fish abundance. We suggest that managers minimize bias from incomplete capture by adjusting data for site- and species-specific capture efficiency and by choosing sampling gear that provide estimates with minimal bias and variance. Furthermore, if block nets are not used, we recommend that managers adjust the data based on unconditional capture efficiency.
Forecasting F10.7 with Solar Magnetic Flux Transport Modeling (Postprint)
2012-04-03
Charles N. Arge Joel B. Mozer Project Manager, RVBXS Chief, RVB This report is published in the interest of...within 6 hours of the F10.7 measurements during the years 1993 through 2010, the Spearman correlation coefficient, rs, for an empirical model of...estimation of the Earth-side solar magnetic field distribution used to forecast F10.7. Spearman correlation values of approximately 0.97, 0.95, and 0.93 are
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berkel, M. van; Fellow of the Japan Society for the Promotion of Science; FOM Institute DIFFER-Dutch Institute for Fundamental Energy Research, Association EURATOM- FOM, Trilateral Euregio Cluster, PO Box 1207, 3430 BE Nieuwegein
2014-11-15
In this paper, a number of new approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based on semi-infinite slab approximations of the heat equation. The main result is the approximation of χ under the influence of V and τ based on the phase of two harmonics, making the estimate less sensitive to calibration errors. To understand why the slab approximations can estimate χ well in cylindrical geometry, the relationships between heat transport models in slab and cylindrical geometry are studied. In addition, the relationship between amplitude and phase with respect to their derivatives, used to estimate χ, is discussed. The results are presented in terms of the relative error for the different derived approximations for different values of frequency, transport coefficients, and dimensionless radius. The approximations show a significant region in which χ, V, and τ can be estimated well, but also regions in which the error is large. Also, it is shown that some compensation is necessary to estimate V and τ in a cylindrical geometry. On the other hand, errors resulting from the simplified assumptions are also discussed showing that estimating realistic values for V and τ based on infinite domains will be difficult in practice. This paper is the first part (Part I) of a series of three papers. In Part II and Part III, cylindrical approximations based directly on semi-infinite cylindrical domain (outward propagating heat pulses) and inward propagating heat pulses in a cylindrical domain, respectively, will be treated.
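The slab-geometry starting point behind these approximations is the textbook relation for a diffusive heat wave: the phase grows linearly with distance, phi(r) = r*sqrt(omega/(2*chi)), so chi can be recovered from the spatial phase slope. The sketch below inverts that relation for a synthetic case; it deliberately ignores convectivity V and damping tau, which the paper's refined two-harmonic estimators are designed to handle, and the numbers are illustrative.

```python
# Hedged sketch of the diffusion-only, semi-infinite-slab relation:
# chi = omega / (2 * (dphi/dr)^2), where dphi/dr is the phase slope.
import math

def chi_from_phase_slope(omega, dphi_dr):
    """Invert phi(r) = r*sqrt(omega/(2*chi)) for chi."""
    return omega / (2.0 * dphi_dr**2)

# synthetic check: build the phase slope for a known chi, then recover it
chi_true = 2.5            # m^2/s, illustrative
omega = 2 * math.pi * 25  # rad/s for a 25 Hz modulation, illustrative
slope = math.sqrt(omega / (2 * chi_true))
print(round(chi_from_phase_slope(omega, slope), 6))  # -> 2.5
```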
Thompson, Kimberly M; Badizadegan, Nima D
2017-06-01
Policy makers responsible for managing measles and rubella immunization programs currently use a wide range of different vaccine formulations and immunization schedules. With endemic measles and rubella transmission interrupted in the region of the Americas, all five other regions of the World Health Organization (WHO) targeting the elimination of measles transmission by 2020, and increasing adoption of rubella vaccine globally, integrated dynamic disease, risk, decision, and economic models can help national, regional, and global health leaders manage measles and rubella population immunity. Despite hundreds of publications describing models for measles or rubella and decades of use of vaccines that contain both antigens (e.g., measles, mumps, and rubella vaccine or MMR), no transmission models for measles and rubella exist to support global policy analyses. We describe the development of a dynamic disease model for measles and rubella transmission, which we apply to 180 WHO member states and three other areas (Puerto Rico, Hong Kong, and Macao) representing >99.5% of the global population in 2013. The model accounts for seasonality, age-heterogeneous mixing, and the potential existence of preferentially mixing undervaccinated subpopulations, which create heterogeneity in immunization coverage that impacts transmission. Using our transmission model with the best available information about routine, supplemental, and outbreak response immunization, we characterize the complex transmission dynamics for measles and rubella historically to compare the results with available incidence and serological data. We show the results from several countries that represent diverse epidemiological situations to demonstrate the performance of the model.
The model suggests relatively high measles and rubella control costs of approximately $3 billion annually for vaccination based on 2013 estimates, but still leads to approximately 17 million disability-adjusted life years lost with associated costs for treatment, home care, and productivity loss costs of approximately $4, $3, and $47 billion annually, respectively. Combined with vaccination and other financial cost estimates, our estimates imply that the eradication of measles and rubella could save at least $10 billion per year, even without considering the benefits of preventing lost productivity and potential savings from reductions in vaccination. The model should provide a useful tool for exploring the health and economic outcomes of prospective opportunities to manage measles and rubella. Improving the quality of data available to support decision making and modeling should represent a priority as countries work toward measles and rubella goals. © 2017 Society for Risk Analysis.
Borderline personality disorder in the primary care setting.
Dubovsky, Amelia N; Kiefer, Meghan M
2014-09-01
Borderline personality disorder is estimated to be present in approximately 6% of patients in outpatient primary care settings. However, the time and energy spent on this population can greatly exceed what primary care doctors are able to spend. This article gives an overview of borderline personality disorder, including the clinical characteristics, epidemiology, and comorbidities, as well as pharmacologic and, most importantly, behavioral management. It is our hope that, with improved understanding of the disorder and skills for managing this population, caring for patients with the disorder can be more satisfying and less taxing for both primary care doctors and their patients. Copyright © 2014 Elsevier Inc. All rights reserved.
Edgil, Dianna; Stankard, Petra; Forsythe, Steven; Rech, Dino; Chrouser, Kristin; Adamu, Tigistu; Sakallah, Sameer; Thomas, Anne Goldzier; Albertini, Jennifer; Stanton, David; Dickson, Kim Eva; Njeuhmeli, Emmanuel
2011-11-01
The global HIV prevention community is implementing voluntary medical male circumcision (VMMC) programs across eastern and southern Africa, with a goal of reaching 80% coverage in adult males by 2015. Successful implementation will depend on the accessibility of commodities essential for VMMC programming and the appropriate allocation of resources to support the VMMC supply chain. For this, the United States President's Emergency Plan for AIDS Relief, in collaboration with the World Health Organization and the Joint United Nations Programme on HIV/AIDS, has developed a standard list of commodities for VMMC programs. This list of commodities was used to inform program planning for a 1-y program to circumcise 152,000 adult men in Swaziland. During this process, additional key commodities were identified, expanding the standard list to include commodities for waste management, HIV counseling and testing, and the treatment of sexually transmitted infections. The approximate costs for the procurement of commodities, management of a supply chain, and waste disposal, were determined for the VMMC program in Swaziland using current market prices of goods and services. Previous costing studies of VMMC programs did not capture supply chain costs, nor the full range of commodities needed for VMMC program implementation or waste management. Our calculations indicate that depending upon the volume of services provided, supply chain and waste management, including commodities and associated labor, contribute between US$58.92 and US$73.57 to the cost of performing one adult male circumcision in Swaziland. Experience with the VMMC program in Swaziland indicates that supply chain and waste management add approximately US$60 per circumcision, nearly doubling the total per procedure cost estimated previously; these additional costs are used to inform the estimate of per procedure costs modeled by Njeuhmeli et al. 
in "Voluntary Medical Male Circumcision: Modeling the Impact and Cost of Expanding Male Circumcision for HIV Prevention in Eastern and Southern Africa." Program planners and policy makers should consider the significant contribution of supply chain and waste management to VMMC program costs as they determine future resource needs for VMMC programs.
Adkins, Jessica Y.; Roby, Daniel D.; Lyons, Donald E.; Courtot, Karen N.; Collis, Ken; Carter, Harry R.; Shuford, W. David; Capitolo, Phillip J.
2014-01-01
The status of the double-crested cormorant (Phalacrocorax auritus) in western North America was last evaluated during 1987–2003. In the interim, concern has grown over the potential impact of predation by double-crested cormorants on juvenile salmonids (Oncorhynchus spp.), particularly in the Columbia Basin and along the Pacific coast where some salmonids are listed for protection under the United States Endangered Species Act. Recent re-evaluations of double-crested cormorant management at the local, flyway, and federal level warrant further examination of the current population size and trends in western North America. We collected colony size data for the western population (British Columbia, Washington, Oregon, Idaho, California, Nevada, Utah, Arizona, and the portions of Montana, Wyoming, Colorado and New Mexico west of the Continental Divide) by conducting aircraft-, boat-, or ground-based surveys and by cooperating with government agencies, universities, and non-profit organizations. In 2009, we estimated approximately 31,200 breeding pairs in the western population. We estimated that cormorant numbers in the Pacific Region (British Columbia, Washington, Oregon, and California) increased 72% from 1987–1992 to circa 2009. Based on the best available data for this period, the average annual growth rate (λ) of the number of breeding birds in the Pacific Region was 1.03, versus 1.07 for the population east of the Continental Divide during recent decades. Most of the increase in the Pacific Region can be attributed to an increase in the size of the nesting colony on East Sand Island in the Columbia River estuary, which accounts for about 39% of all breeding pairs in the western population and is the largest known breeding colony for the species (12,087 breeding pairs estimated in 2009). In contrast, numbers of breeding pairs estimated in coastal British Columbia and Washington have declined by approximately 66% during this same period.
Disturbance at breeding colonies by bald eagles (Haliaeetus leucocephalus) and humans are likely limiting factors on the growth of the western population at present. Because of differences in biology and management, the western population of double-crested cormorants warrants consideration as a separate management unit from the population east of the Continental Divide.
An hp-adaptivity and error estimation for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1995-01-01
This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.
Barber-Meyer, Shannon; Ryan, Daniel; Grosshuesch, David; Catton, Timothy; Malick-Wahls, Sarah
2018-01-01
core areas and averaged 52.3 (SD=8.3, range=43-59) during 2015-2017 in the larger core areas. We found no evidence for a decrease or increase in abundance during either period. Lynx density estimates were approximately 7-10 times lower than densities of lynx in northern populations at the low of the snowshoe hare (Lepus americanus) population cycle. To our knowledge, our results are the first attempt to estimate abundance, trend and density of lynx in Minnesota using non-invasive genetic capture-mark-recapture. Estimates such as ours provide useful benchmarks for future comparisons by providing a context with which to assess 1) potential changes in forest management that may affect lynx recovery and conservation, and 2) possible effects of climate change on the depth, density, and duration of annual snow cover and correspondingly, potential effects on snowshoe hares as well.
Testing assumptions for conservation of migratory shorebirds and coastal managed wetlands
Collazo, Jaime; James Lyons,; Herring, Garth
2015-01-01
Managed wetlands provide critical foraging and roosting habitats for shorebirds during migration; therefore, ensuring their availability is a priority action in shorebird conservation plans. Contemporary shorebird conservation plans rely on a number of assumptions about shorebird prey resources and migratory behavior to determine stopover habitat requirements. For example, the US Shorebird Conservation Plan for the Southeast-Caribbean region assumes that average benthic invertebrate biomass in foraging habitats is 2.4 g dry mass m−2 and that the dominant prey item of shorebirds in the region is Chironomid larvae. For effective conservation and management, it is important to test working assumptions and update predictive models that are used to estimate habitat requirements. We surveyed migratory shorebirds and sampled the benthic invertebrate community in coastal managed wetlands of South Carolina. We sampled invertebrates at three points in time representing early, middle, and late stages of spring migration, and concurrently surveyed shorebird stopover populations at approximately 7-day intervals throughout migration. We used analysis of variance by ranks to test for temporal variation in invertebrate biomass and density, and we used a model based approach (linear mixed model and Monte Carlo simulation) to estimate mean biomass and density. There was little evidence of a temporal variation in biomass or density during the course of spring shorebird migration, suggesting that shorebirds did not deplete invertebrate prey resources at our site. Estimated biomass was 1.47 g dry mass m−2 (95% credible interval 0.13–3.55), approximately 39% lower than values used in the regional shorebird conservation plan. An additional 4728 ha (a 63% increase) would be required if habitat objectives were derived from biomass levels observed in our study. Polychaetes, especially Laeonereis culveri (2,569 individuals m−2), were the most abundant prey in foraging habitats at our site.
Polychaetes have lower caloric content than levels assumed in the regional plan; when lower caloric content and lower biomass levels are used to determine habitat objectives, an additional 6395 ha would be required (an 86% increase). Shorebird conservation and management plans would benefit from considering the uncertainty in parameters used to derive habitat objectives, especially biomass and caloric content of prey resources. Iterative testing of models that are specific to the planning region will provide rapid advances for management and conservation of migratory shorebirds and coastal managed wetlands.
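The 63% figure above follows from a simple scaling argument: if habitat objectives are set to deliver a fixed total prey biomass, the required area scales inversely with biomass density. The sketch below checks only that percentage from the two densities given in the abstract; the baseline area itself is not restated here.

```python
# Sketch of the area-scaling logic implied in the abstract: required area is
# inversely proportional to prey biomass density under a fixed-biomass target.

def area_increase_fraction(assumed_density, observed_density):
    """Fractional extra area needed when observed density falls short."""
    return assumed_density / observed_density - 1.0

planned = 2.4    # g dry mass per m^2, assumed in the regional plan
observed = 1.47  # g dry mass per m^2, estimated in this study
print(round(100 * area_increase_fraction(planned, observed)))  # -> 63 (%)
```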
Work, Thierry M.; Klavitter, John L.; Reynolds, Michelle H.; Blehert, David S.
2010-01-01
Laysan Ducks are endemic to the Hawaiian archipelago and are one of the world’s most endangered waterfowl. For 150 yr, Laysan Ducks were restricted to an estimated 4 km2 of land on Laysan Island in the northwestern Hawaiian Islands. In 2004 and 2005, 42 Laysan Ducks were translocated to Midway Atoll, and the population increased to approximately 200 by 2007. In August 2008, mortality due to botulism type C was identified, and 181 adult, fledgling, and duckling carcasses were collected from August to October. Diseased birds were found on two islands within Midway Atoll at multiple wetlands; however, one wetland contributed most carcasses. The epidemic was discovered approximately 14–21 days after the mortality started and lasted for 50 additional days. The details of this epidemic highlight the disease risk to birds restricted to small island populations and the challenges associated with managing newly translocated endangered species. Frequent population monitoring for early disease detection and comprehensive wetland monitoring and management will be needed to manage avian botulism in endangered Laysan Ducks. Vaccination may also be beneficial to reduce mortality in this small, geographically closed population.
Conceptual design of an orbital propellant transfer experiment. Volume 2: Study results
NASA Technical Reports Server (NTRS)
Drake, G. L.; Bassett, C. E.; Merino, F.; Siden, L. E.; Bradley, R. E.; Carr, E. J.; Parker, R. E.
1980-01-01
The OTV configurations, operations, and requirements planned for the period from the 1980s to the 1990s were reviewed, and a propellant transfer experiment was designed to support the needs of these advanced OTV operational concepts. An overall integrated propellant management technology plan for all NASA centers was developed. The preliminary cost estimate (for planning purposes only) is $56.7 M, of which approximately $31.8 M is for shuttle user costs.
Groundwater Change in Storage Estimation by Using Monitoring Wells Data
NASA Astrophysics Data System (ADS)
Flores, C. I.
2016-12-01
Considerable attention is currently being given to models and data in hydrology, regarding their role in meeting water management requirements and enabling well-informed decisions. Water management under the Sustainable Groundwater Management Act (SGMA) is challenging because it requires that groundwater sustainability agencies (GSAs) formulate groundwater sustainability plans (GSPs) to comply with new regulations and manage California's groundwater resources responsibly, particularly under drought and climate change conditions. In this scenario, water budgets and change-in-storage estimates are key components for decision makers, but their computation is often difficult, lengthy, and uncertain. Therefore, this work presents an innovative approach that integrates hydrologic modeling and available groundwater data into a single simplified tool, a proxy function, that estimates change in storage in real time from monitoring well data. A hydrologic model, the Yolo County IWFM, was developed and calibrated for water years 1970 to 2015 and applied to generate the proxy as a case study, by regressing simulated change in storage against change in head for the Davis and Woodland area to obtain a linear function of head variations over time. The proxy was then applied to actual groundwater data in this region to predict the change in storage. Results from this work provide proxy functions that approximate change in storage from monitoring data at daily, monthly, and yearly resolution and that are easily transferable to any spreadsheet or database to perform simple yet crucial computations in real time for sustainable groundwater management.
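The proxy-function idea described above reduces to fitting a line to calibration pairs and then reusing it as a real-time estimator. The sketch below does exactly that with ordinary least squares; the calibration data are synthetic placeholders, not Yolo County IWFM output.

```python
# Minimal sketch of a change-in-storage proxy: regress simulated change in
# storage on change in head, then evaluate the fitted line on new head data.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - b * xbar, b

# synthetic calibration pairs: (change in head [m], change in storage [acre-ft])
dh = [-2.0, -1.0, 0.0, 1.0, 2.0]
ds = [-410.0, -195.0, 5.0, 210.0, 395.0]
a, b = fit_line(dh, ds)

def proxy_change_in_storage(delta_head):
    """Real-time estimate from monitoring-well head change alone."""
    return a + b * delta_head

print(round(proxy_change_in_storage(1.5)))  # -> 303 (acre-ft, synthetic units)
```

Once fitted, the pair (a, b) is exactly the kind of object that transfers to a spreadsheet: one intercept and one slope per monitoring framework.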
Groundwater pumping effects on contaminant loading management in agricultural regions.
Park, Dong Kyu; Bae, Gwang-Ok; Kim, Seong-Kyun; Lee, Kang-Kun
2014-06-15
Groundwater pumping changes the behavior of subsurface water, including the location of the water table and characteristics of the flow system, and eventually affects the fate of contaminants, such as nitrate from agricultural fertilizers. The objectives of this study were to demonstrate the importance of considering the existing pumping conditions for contaminant loading management and to develop a management model to obtain a contaminant loading design more appropriate and practical for agricultural regions where groundwater pumping is common. Results from this study found that optimal designs for contaminant loading could be determined differently when the existing pumping conditions were considered. This study also showed that prediction of contamination and contaminant loading management without considering pumping activities might be unrealistic. Motivated by these results, a management model optimizing the permissible on-ground contaminant loading mass together with pumping rates was developed and applied to field investigation and monitoring data from Icheon, Korea. The analytical solution for 1-D unsaturated solute transport was integrated with the 3-D saturated solute transport model in order to approximate the fate of contaminants loaded periodically from on-ground sources. This model was further expanded to manage agricultural contaminant loading in regions where groundwater extraction tends to be concentrated in a specific period of time, such as during the rice-growing season, using a method that approximates contaminant leaching to a fluctuating water table. The results illustrated that the simultaneous management of groundwater quantity and quality was effective and appropriate for agricultural contaminant loading management, and that the model developed in this study, which can consider time-variant pumping, could be used to accurately estimate and reasonably manage contaminant loading in agricultural areas. Copyright © 2014 Elsevier Ltd. All rights reserved.
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus-based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
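The "hierarchical surplus-based strategies" that the abstract uses as a baseline can be illustrated in one dimension. The sketch below is our own minimal illustration under simplifying assumptions (piecewise-linear interpolation of a hypothetical test function on [0, 1]), not the paper's algorithm, which operates on sparse grids for PDE quantities of interest:

```python
def f(x):
    # hypothetical Runge-type test function standing in for a quantity of interest
    return 1.0 / (1.0 + 25.0 * x * x)

def interpolate(nodes, x):
    # piecewise-linear interpolant through the accepted (x, f(x)) pairs
    pts = sorted(nodes.items())
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return (1 - t) * ys[i] + t * ys[i + 1]
    return ys[0] if x < xs[0] else ys[-1]

def adapt(tol=1e-3, max_level=12):
    # refine only where the hierarchical surplus (the gap between the true
    # value and the current interpolant at a candidate midpoint) exceeds tol
    nodes = {0.0: f(0.0), 1.0: f(1.0)}
    active = [(0.0, 1.0)]
    while active:
        a, b = active.pop()
        m = 0.5 * (a + b)
        surplus = f(m) - interpolate(nodes, m)
        nodes[m] = f(m)
        if abs(surplus) > tol and (b - a) > 2.0 ** (-max_level):
            active += [(a, m), (m, b)]
    return nodes
```

Refining only where the surplus is large concentrates nodes where the function varies most; adjoint-based a posteriori error estimates, as in the paper, replace this surplus with a goal-oriented error indicator.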
The impact of non-IPA HMOs on the number of hospitals and hospital capacity.
Chernew, M
1995-01-01
Concentration in the hospital market could limit the success of health care reform strategies that rely on managed care to constrain costs. Hospital market capacity also is important because capacity affects both costs and the degree of price competition. Because managed care plans, particularly non-individual practice association (non-IPA) model HMOs, practice a less hospital-intensive style of care, consolidation and downsizing in the hospital market potentially will accompany managed care growth, influencing the long-run effectiveness of managed care cost-containment strategies. Using Standard Metropolitan Statistical Area (SMSA) data from 1982 and 1987, a 10-percentage point increase in non-IPA HMO market share is estimated to reduce the number of hospitals by about 4%, causing an approximate 5% reduction in the number of hospital beds. No statistically significant relationship is found between non-IPA HMO penetration rates and hospital occupancy rates.
Everatt, Kristoffer T.; Andresen, Leah; Somers, Michael J.
2014-01-01
The African lion (Panthera leo) has suffered drastic population and range declines over the last few decades and is listed by the IUCN as vulnerable to extinction. Conservation management requires reliable population estimates; however, these data are lacking for many of the continent's remaining populations. It is possible to estimate lion abundance using a trophic scaling approach. However, such inferences assume that a predator population is subject only to bottom-up regulation, and are thus likely to produce biased estimates in systems experiencing top-down anthropogenic pressures. Here we provide baseline data on the status of lions in a developing National Park in Mozambique that is impacted by humans and livestock. We compare a direct density estimate with an estimate derived from trophic scaling. We then use replicated detection/non-detection surveys to estimate the proportion of area occupied by lions, and hierarchical ranking of covariates to provide inferences on the relative contribution of prey resources and anthropogenic factors influencing lion occurrence. The direct density estimate was less than 1/3 of the estimate derived from prey resources (0.99 lions/100 km² vs. 3.05 lions/100 km²). The proportion of area occupied by lions was Ψ = 0.439 (SE = 0.121), or approximately 44% of a 2,400 km² sample of potential habitat. Although lions were strongly predicted by a greater probability of encountering prey resources, the greatest contributing factor to lion occurrence was a strong negative association with settlements. Finally, our empirical abundance estimate is approximately 1/3 of a published abundance estimate derived from opinion surveys. Altogether, our results describe a lion population held below resource-based carrying capacity by anthropogenic factors and highlight the limitations of trophic scaling and opinion surveys for estimating predator populations exposed to anthropogenic pressures.
Our study provides the first empirical quantification of a population that future change can be measured against. PMID:24914934
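The replicated detection/non-detection design above lends itself to a standard two-parameter occupancy model (constant occupancy Ψ and detection probability p). The sketch below is a generic illustration with simulated data and a coarse grid search, not the authors' analysis (which ranked covariates hierarchically); the binomial coefficient is omitted since it does not affect the maximizer:

```python
import math
from collections import Counter

def neg_log_lik(psi, p, det_counts, K):
    # det_counts maps (detections at a site out of K surveys) -> number of sites
    nll = 0.0
    for d, n_sites in det_counts.items():
        if d > 0:
            site_ll = math.log(psi) + d * math.log(p) + (K - d) * math.log(1 - p)
        else:
            # never detected: occupied but always missed, or truly unoccupied
            site_ll = math.log(psi * (1 - p) ** K + (1 - psi))
        nll -= n_sites * site_ll
    return nll

def fit_occupancy(detections, K, grid=100):
    # maximum-likelihood (psi, p) by coarse grid search over (0, 1) x (0, 1)
    det_counts = Counter(detections)
    best = None
    for i in range(1, grid):
        for j in range(1, grid):
            psi, p = i / grid, j / grid
            nll = neg_log_lik(psi, p, det_counts, K)
            if best is None or nll < best[0]:
                best = (nll, psi, p)
    return best[1], best[2]
```

The all-zero detection history is the crux: it mixes truly unoccupied sites with occupied sites that were simply never detected, which is why naive occupancy proportions underestimate Ψ.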
1992-09-24
for the fifteen remaining study sites. The total number of structures to be surveyed was estimated at 800 to 1,000. When the contract was complete...Corps of Engineers real estate appraisers or contract appraisers, should be listed as a separate task. The approximate number of structures to be...
Air traffic surveillance and control using hybrid estimation and protocol-based conflict resolution
NASA Astrophysics Data System (ADS)
Hwang, Inseok
The continued growth of air travel and recent advances in new technologies for navigation, surveillance, and communication have led to proposals by the Federal Aviation Administration (FAA) to provide reliable and efficient tools to aid Air Traffic Control (ATC) in performing their tasks. In this dissertation, we address four problems frequently encountered in air traffic surveillance and control: multiple target tracking and identity management, conflict detection, conflict resolution, and safety verification. We develop a set of algorithms and tools to aid ATC; these algorithms have the provable properties of safety, computational efficiency, and convergence. Firstly, we develop a multiple-maneuvering-target tracking and identity management algorithm which can keep track of maneuvering aircraft in noisy environments and of their identities. Secondly, we propose a hybrid probabilistic conflict detection algorithm between multiple aircraft which uses flight mode estimates as well as aircraft current state estimates. Our algorithm is based on hybrid models of aircraft, which incorporate both continuous dynamics and discrete mode switching. Thirdly, we develop an algorithm for multiple (greater than two) aircraft conflict avoidance that is based on a closed-form analytic solution and thus provides guarantees of safety. Finally, we consider the problem of safety verification of control laws for safety-critical systems, with application to air traffic control systems. We approach safety verification through reachability analysis, which is a computationally expensive problem. We develop an over-approximate method for reachable set computation using polytopic approximation methods and dynamic optimization. These algorithms may be used either in a fully autonomous way, or as supporting tools to increase controllers' situational awareness and to reduce their workload.
Rose, G; Mulder, H A; van der Werf, J H J; Thompson, A N; van Arendonk, J A M
2014-08-01
Merino sheep in Australia experience periods of variable feed supply. Merino sheep can be bred to be more resilient to this variation by losing less BW when grazing poor quality pasture and gaining more BW when grazing good quality pasture. Therefore, selection on BW change might be economically attractive, but correlations with other traits in the breeding objective need to be known. The genetic correlations (rg) between BW, BW change, and reproduction were estimated using records from approximately 7,350 fully pedigreed Merino ewes managed at Katanning in Western Australia. Number of lambs and total weight of lambs born and weaned were measured on approximately 5,300 2-yr-old ewes, approximately 4,900 3-yr-old ewes, and approximately 3,600 4-yr-old ewes. On a proportion of these ewes BW change was measured: approximately 1,950 2-yr-old ewes, approximately 1,500 3-yr-old ewes, and approximately 1,100 4-yr-old ewes. The BW measurements covered 3 periods. The first period was the mating period, over 42 d on poor pasture. The second period was during pregnancy, over 90 d on poor and medium quality pasture, for ewes that became pregnant. The third period was during lactation, over 130 d on good quality pasture, for ewes that weaned a lamb. Genetic correlations between weight change and reproduction were estimated within age classes. Genetic correlations were tested for magnitude significantly greater than 0 using likelihood ratio tests. Nearly all BW measurements had significant positive genetic correlations with all reproduction traits. In 2-yr-old ewes, BW change during the mating period had a positive genetic correlation with number of lambs weaned (rg = 0.58); BW change during pregnancy had a positive genetic correlation with total weight of lambs born (rg = 0.33) and a negative genetic correlation with number of lambs weaned (rg = -0.49). All other genetic correlations were not significantly different from 0, but estimates of genetic correlations for 3-yr-old ewes were generally consistent with these findings. The direction of the genetic correlations mostly coincided with the energy requirements of the ewes and their stage of maturity. In conclusion, optimized selection strategies on BW change to increase resilience will depend on the genetic correlations with reproduction, which vary with age.
Quantitative Risk Assessment for African Horse Sickness in Live Horses Exported from South Africa
Sergeant, Evan S.; Grewar, John D.; Weyer, Camilla T.; Guthrie, Alan J.
2016-01-01
African horse sickness (AHS) is a severe, often fatal, arbovirus infection of horses, transmitted by Culicoides spp. midges. AHS occurs in most of sub-Saharan Africa and is a significant impediment to export of live horses from infected countries, such as South Africa. A stochastic risk model was developed to estimate the probability of exporting an undetected AHS-infected horse through a vector protected pre-export quarantine facility, in accordance with OIE recommendations for trade from an infected country. The model also allows for additional risk management measures, including multiple PCR tests prior to and during pre-export quarantine and optionally during post-arrival quarantine, as well as for comparison of risk associated with exports from a demonstrated low-risk area for AHS and an area where AHS is endemic. If 1 million horses were exported from the low-risk area with no post-arrival quarantine we estimate the median number of infected horses to be 5.4 (95% prediction interval 0.5 to 41). This equates to an annual probability of 0.0016 (95% PI: 0.00015 to 0.012) assuming 300 horses exported per year. An additional PCR test while in vector-protected post-arrival quarantine reduced these probabilities by approximately 12-fold. Probabilities for horses exported from an area where AHS is endemic were approximately 15 to 17 times higher than for horses exported from the low-risk area under comparable scenarios. The probability of undetected AHS infection in horses exported from an infected country can be minimised by appropriate risk management measures. The final choice of risk management measures depends on the level of risk acceptable to the importing country. PMID:26986002
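The reported per-horse and annual figures are mutually consistent, as a quick back-of-envelope check shows (assuming, as a simplification, independent risk across the 300 horses exported per year):

```python
# Consistency check on the reported figures from the low-risk-area scenario
p_horse = 5.4 / 1_000_000            # median: 5.4 infected horses per million exported
p_annual = 1 - (1 - p_horse) ** 300  # probability of at least one in 300 exports/year
print(round(p_annual, 4))            # 0.0016, the reported annual probability
```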
Health economic burden that wounds impose on the National Health Service in the UK
Guest, Julian F; Ayoub, Nadia; McIlwraith, Tracey; Uchegbu, Ijeoma; Gerrish, Alyson; Weidlich, Diana; Vowden, Kathryn; Vowden, Peter
2015-01-01
Objective To estimate the prevalence of wounds managed by the UK's National Health Service (NHS) in 2012/2013 and the annual levels of healthcare resource use attributable to their management and corresponding costs. Methods This was a retrospective cohort analysis of the records of patients in The Health Improvement Network (THIN) Database. Records of 1000 adult patients who had a wound in 2012/2013 (cases) were randomly selected and matched with 1000 patients with no history of a wound (controls). Patients’ characteristics, wound-related health outcomes and all healthcare resource use were quantified and the total NHS cost of patient management was estimated at 2013/2014 prices. Results Patients’ mean age was 69.0 years and 45% were male. 76% of patients presented with a new wound in the study year and 61% of wounds healed during the study year. Nutritional deficiency (OR 0.53; p<0.001) and diabetes (OR 0.65; p<0.001) were independent risk factors for non-healing. There were an estimated 2.2 million wounds managed by the NHS in 2012/2013. Annual levels of resource use attributable to managing these wounds and associated comorbidities included 18.6 million practice nurse visits, 10.9 million community nurse visits, 7.7 million GP visits and 3.4 million hospital outpatient visits. The annual NHS cost of managing these wounds and associated comorbidities was £5.3 billion. This was reduced to between £4.5 and £5.1 billion after adjusting for comorbidities. Conclusions Real-world evidence highlights that wound management is predominantly a nurse-led discipline. Approximately 30% of wounds lacked a differential diagnosis, indicative of practical difficulties experienced by non-specialist clinicians. Wounds impose a substantial health economic burden on the UK's NHS, comparable to that of managing obesity (£5.0 billion).
Clinical and economic benefits could accrue from improved systems of care and an increased awareness of the impact that wounds impose on patients and the NHS. PMID:26644123
Alpert, Abby; Morganti, Kristy G; Margolis, Gregg S; Wasserman, Jeffrey; Kellermann, Arthur L
2013-12-01
Some Medicare beneficiaries who place 911 calls to request an ambulance might safely be cared for in settings other than the emergency department (ED) at lower cost. Using 2005-09 Medicare claims data and a validated algorithm, we estimated that 12.9-16.2 percent of Medicare-covered 911 emergency medical services (EMS) transports involved conditions that were probably nonemergent or primary care treatable. Among beneficiaries not admitted to the hospital, about 34.5 percent had a low-acuity diagnosis that might have been managed outside the ED. Annual Medicare EMS and ED payments for these patients were approximately $1 billion per year. If Medicare had the flexibility to reimburse EMS for managing selected 911 calls in ways other than transport to an ED, we estimate that the federal government could save $283-$560 million or more per year, while improving the continuity of patient care. If private insurance companies followed suit, overall societal savings could be twice as large.
Economic Impacts of Non-Native Forest Insects in the Continental United States
Aukema, Juliann E.; Leung, Brian; Kovacs, Kent; Chivers, Corey; Britton, Kerry O.; Englin, Jeffrey; Frankel, Susan J.; Haight, Robert G.; Holmes, Thomas P.; Liebhold, Andrew M.; McCullough, Deborah G.; Von Holle, Betsy
2011-01-01
Reliable estimates of the impacts and costs of biological invasions are critical to developing credible management, trade and regulatory policies. Worldwide, forests and urban trees provide important ecosystem services as well as economic and social benefits, but are threatened by non-native insects. More than 450 non-native forest insects are established in the United States but estimates of broad-scale economic impacts associated with these species are largely unavailable. We developed a novel modeling approach that maximizes the use of available data, accounts for multiple sources of uncertainty, and provides cost estimates for three major feeding guilds of non-native forest insects. For each guild, we calculated the economic damages for five cost categories and we estimated the probability of future introductions of damaging pests. We found that costs are largely borne by homeowners and municipal governments. Wood- and phloem-boring insects are anticipated to cause the largest economic impacts by annually inducing nearly $1.7 billion in local government expenditures and approximately $830 million in lost residential property values. Given observations of new species, there is a 32% chance that another highly destructive borer species will invade the U.S. in the next 10 years. Our damage estimates provide a crucial but previously missing component of cost-benefit analyses to evaluate policies and management options intended to reduce species introductions. The modeling approach we developed is highly flexible and could be similarly employed to estimate damages in other countries or natural resource sectors. PMID:21931766
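The reported 32% chance of another highly destructive borer invading within 10 years is consistent with a constant-rate Poisson arrival process. The annual rate below is back-solved by us for illustration under that assumption; it is not a figure reported by the authors:

```python
import math

# Implied annual arrival rate of damaging wood-boring species (our assumption:
# a homogeneous Poisson process, so P(at least one in T years) = 1 - exp(-lam*T))
p_10yr_reported = 0.32
lam = -math.log(1 - p_10yr_reported) / 10  # ~0.039 damaging invasions per year
p_10yr = 1 - math.exp(-lam * 10)           # recovers the reported probability
```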
Annualized earthquake loss estimates for California and their sensitivity to site amplification
Chen, Rui; Jaiswal, Kishor; Bausch, D; Seligson, H; Wills, C.J.
2016-01-01
Input datasets for annualized earthquake loss (AEL) estimation for California were updated recently by the scientific community, and include the National Seismic Hazard Model (NSHM), site‐response model, and estimates of shear‐wave velocity. Additionally, the Federal Emergency Management Agency’s loss estimation tool, Hazus, was updated to include the most recent census and economic exposure data. These enhancements necessitated a revisit to our previous AEL estimates and a study of the sensitivity of AEL estimates subject to alternate inputs for site amplification. The NSHM ground motions for a uniform site condition are modified to account for the effect of local near‐surface geology. The site conditions are approximated in three ways: (1) by VS30 (time‐averaged shear‐wave velocity in the upper 30 m) value obtained from a geology‐ and topography‐based map consisting of 15 VS30 groups, (2) by site classes categorized according to National Earthquake Hazards Reduction Program (NEHRP) site classification, and (3) by a uniform NEHRP site class D. In case 1, ground motions are amplified using the Seyhan and Stewart (2014) semiempirical nonlinear amplification model. In cases 2 and 3, ground motions are amplified using the 2014 version of the NEHRP site amplification factors, which are also based on the Seyhan and Stewart model but are approximated to facilitate their use for building code applications. Estimated AELs are presented at multiple resolutions, starting with the state-level assessment and followed by detailed assessments for counties, metropolitan statistical areas (MSAs), and cities. The AEL estimate at the state level is ∼$3.7 billion, 70% of which is contributed by the Los Angeles–Long Beach–Santa Ana, San Francisco–Oakland–Fremont, and Riverside–San Bernardino–Ontario MSAs. The statewide AEL estimate is insensitive to alternate assumptions of site amplification.
However, we note significant differences in AEL estimates among the three sensitivity cases for smaller geographic units.
Hubble, Michael W; Richards, Michael E; Wilfong, Denise A
2008-01-01
To estimate the cost-effectiveness of continuous positive airway pressure (CPAP) in managing prehospital acute pulmonary edema in an urban EMS system. Using estimates from published reports on prehospital and emergency department CPAP, a cost-effectiveness model of implementing CPAP in a typical urban EMS system was derived from the societal perspective as well as the perspective of the implementing EMS system. To assess the robustness of the model, a series of univariate and multivariate sensitivity analyses was performed on the input variables. The cost of consumables, equipment, and training yielded a total cost of $89 per CPAP application. The theoretical system would be expected to use CPAP 4 times per 1000 EMS patients and is expected to save 0.75 additional lives per 1000 EMS patients at a cost of $490 per life saved. CPAP is also expected to result in approximately one less intubation per 6 CPAP applications and reduce hospitalization costs by $4075 per year for each CPAP application. Through sensitivity analyses the model was verified to be robust across a wide range of input variable assumptions. Previous studies have demonstrated the clinical effectiveness of CPAP in the management of acute pulmonary edema. Through a theoretical analysis which modeled the costs and clinical benefits of implementing CPAP in an urban EMS system, prehospital CPAP appears to be a cost-effective treatment.
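The headline figures can be approximately reproduced from the inputs given in the abstract; the small gap from the reported $490 per life saved presumably reflects rounding of the published inputs:

```python
# Reconstructing the cost-effectiveness arithmetic from the abstract's inputs
cost_per_application = 89.0    # consumables, equipment, and training per CPAP use
applications_per_1000 = 4.0    # expected CPAP applications per 1000 EMS patients
lives_saved_per_1000 = 0.75    # additional lives saved per 1000 EMS patients

cost_per_1000 = cost_per_application * applications_per_1000   # $356 per 1000 patients
cost_per_life_saved = cost_per_1000 / lives_saved_per_1000     # ~$475, near the reported $490
```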
Energy-balanced algorithm for RFID estimation
NASA Astrophysics Data System (ADS)
Zhao, Jumin; Wang, Fangyuan; Li, Dengao; Yan, Lijuan
2016-10-01
RFID has been widely used in various commercial applications, ranging from inventory control and supply chain management to object tracking. It is necessary to estimate the number of RFID tags deployed in a large area periodically and automatically. Most prior works use passive tags and focus on designing time-efficient algorithms that can estimate tens of thousands of tags in seconds. But for an RFID reader to access tags in a large area, active tags are likely to be used due to their longer operational ranges. These tags, however, use their own batteries as an energy supply, so conserving energy for active tags becomes critical. Some prior works have studied how to reduce the energy expenditure of an RFID reader when it reads tag IDs. In this paper, we study how to reduce the amount of energy consumed by active tags during the process of estimating the number of tags in a system, and how to approximately balance the energy consumed by each tag. We design an energy-balanced estimation algorithm that achieves these goals.
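Tag-estimation schemes of this kind are commonly built on framed slotted ALOHA, where the reader infers cardinality from the fraction of empty slots rather than reading individual IDs, which is what allows tags to stay mostly silent and conserve energy. The sketch below is a generic zero-estimator under that framework, not the authors' energy-balanced algorithm:

```python
import math
import random

def estimate_tag_count(n_true, frame_size, seed=42):
    # Framed slotted ALOHA sketch: each active tag transmits in one randomly
    # chosen slot of the frame; the reader observes only which slots stayed empty.
    rng = random.Random(seed)
    occupied = {rng.randrange(frame_size) for _ in range(n_true)}
    zero_fraction = (frame_size - len(occupied)) / frame_size
    # E[zero_fraction] ~ exp(-n / frame_size) for the balls-in-bins process,
    # so inverting the empty-slot fraction yields a cardinality estimate
    return -frame_size * math.log(zero_fraction)
```

Because each tag sends only a single short reply per frame, energy cost per tag is low and roughly uniform, which is the property the paper's algorithm is designed to balance explicitly.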
Conventional development versus managed growth: the costs of sprawl.
Burchell, Robert W; Mukherji, Sahan
2003-09-01
We examined the effects of sprawl, or conventional development, versus managed (or "smart") growth on land and infrastructure consumption as well as on real estate development and public service costs in the United States. Mathematical impact models were used to produce US estimates of differences in resources consumed according to each growth scenario over the period 2000-2025. Sprawl produces a 21% increase in amount of undeveloped land converted to developed land (2.4 million acres) and approximately a 10% increase in local road lane-miles (188 300). Furthermore, sprawl causes about 10% more annual public service (fiscal) deficits ($4.2 billion US dollars) and 8% higher housing occupancy costs ($13 000 US dollars per dwelling unit). Managed growth can save significant amounts of human and natural resources with limited effects on traditional development procedures.
Type 2 diabetes in children: oxymoron or medical metamorphosis?
Copeland, Kenneth C; Chalmers, Laura J; Brown, Ryan D
2005-09-01
The full public health effects of the new epidemic of obesity and diabetes in children and adolescents may not be known for many years but are certain to be substantial. Diagnosed diabetes, which is present in only 4.2% of the US population, along with its consequences, already represents approximately 19% of the total personal healthcare expenditures in this country. Between 1997 and 2002, the estimated direct medical cost of diabetes increased from 44 billion dollars to 92 billion dollars, a staggering increase of 8 billion dollars a year. In 2002, diabetes annual costs per capita rose by more than 30% to 13,243 dollars per person, compared with average annual health care costs of 2,560.92 dollars for persons without diabetes. An estimate from the CDC indicates that approximately one-third of children born in 2000 will develop diabetes at some time in their life, and nearly one-half of all Hispanic children born in 2000 will develop diabetes. As type 2 diabetes is being diagnosed at an earlier age, more young people can expect to live many more years with diabetes and its complications, adding even further to this already enormous health burden. An appropriate starting place is recognition of the magnitude of the problem by physicians, politicians, public health policy makers, and other healthcare workers. An aggressive approach to management of diabetes must begin well before cardiovascular, eye, renal, and other complications of diabetes appear, and even before obesity leads to diabetes. Currently, physicians and other healthcare workers are poorly reimbursed for management of obesity, for diabetes education, and for ongoing telephone contact with diabetic patients and families, all essential for optimal diabetes management. National policies and priorities must be readjusted to emphasize prevention, rather than crisis management, if we are to avoid a catastrophic public health crisis within the next several decades.
Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation
Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.
2010-01-01
This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain, in contrast to most current models, which work in the frequency domain. Exact signal estimation is a computationally intractable problem, so we derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimal Kullback–Leibler (KL) divergence criterion. The frequency domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253
Using Bayesian Population Viability Analysis to Define Relevant Conservation Objectives.
Green, Adam W; Bailey, Larissa L
2015-01-01
Adaptive management provides a useful framework for managing natural resources in the face of uncertainty. An important component of adaptive management is identifying clear, measurable conservation objectives that reflect the desired outcomes of stakeholders. A common objective is to have a sustainable population, or metapopulation, but it can be difficult to quantify a threshold above which such a population is likely to persist. We performed a Bayesian metapopulation viability analysis (BMPVA) using a dynamic occupancy model to quantify the characteristics of two wood frog (Lithobates sylvaticus) metapopulations resulting in sustainable populations, and we demonstrate how the results could be used to define meaningful objectives that serve as the basis of adaptive management. We explored scenarios involving metapopulations with different numbers of patches (pools) using estimates of breeding occurrence and successful metamorphosis from two study areas to estimate the probability of quasi-extinction and calculate the proportion of vernal pools producing metamorphs. Our results suggest that ≥50 pools are required to ensure long-term persistence with approximately 16% of pools producing metamorphs in stable metapopulations. We demonstrate one way to incorporate the BMPVA results into a utility function that balances the trade-offs between ecological and financial objectives, which can be used in an adaptive management framework to make optimal, transparent decisions. Our approach provides a framework for using a standard method (i.e., PVA) and available information to inform a formal decision process to determine optimal and timely management policies.
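A quasi-extinction probability of the kind a metapopulation viability analysis produces can be illustrated with a toy stochastic patch-occupancy simulation. The parameters below are our own illustrative assumptions, not the paper's estimates; the point is the qualitative result that more pools means lower extinction risk:

```python
import random

def quasi_extinction_prob(n_pools, p_ext=0.2, p_col=0.3, years=50, reps=500, seed=0):
    # Toy stochastic patch-occupancy model (illustrative parameters): each
    # occupied pool goes locally extinct with probability p_ext per year; each
    # empty pool is colonized with probability p_col * (fraction occupied).
    rng = random.Random(seed)
    extinct = 0
    for _ in range(reps):
        occupied = [True] * n_pools
        for _ in range(years):
            frac = sum(occupied) / n_pools
            occupied = [(rng.random() > p_ext) if occ else (rng.random() < p_col * frac)
                        for occ in occupied]
            if not any(occupied):  # quasi-extinction: all pools empty
                extinct += 1
                break
    return extinct / reps
```

Small metapopulations go extinct through chance alone even when the average dynamics are favorable, which is why a threshold such as the paper's ≥50 pools emerges from simulation rather than from mean-field reasoning.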
Modeling pattern in collections of parameters
Link, W.A.
1999-01-01
Wildlife management is increasingly guided by analyses of large and complex datasets. The description of such datasets often requires a large number of parameters, among which certain patterns might be discernible. For example, one may consider a long-term study producing estimates of annual survival rates; of interest is the question whether these rates have declined through time. Several statistical methods exist for examining pattern in collections of parameters. Here, I argue for the superiority of 'random effects models' in which parameters are regarded as random variables, with distributions governed by 'hyperparameters' describing the patterns of interest. Unfortunately, implementation of random effects models is sometimes difficult. Ultrastructural models, in which the postulated pattern is built into the parameter structure of the original data analysis, are approximations to random effects models. However, this approximation is not completely satisfactory: failure to account for natural variation among parameters can lead to overstatement of the evidence for pattern among parameters. I describe quasi-likelihood methods that can be used to improve the approximation of random effects models by ultrastructural models.
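The random-effects idea can be illustrated with a toy example: annual survival estimates scatter around a trend because of both real among-year (process) variation and sampling noise; a method-of-moments estimate of the process variance yields the usual shrinkage weight. A sketch assuming the sampling variance is known, not Link's actual estimator:

```python
import random, statistics

random.seed(2)
n_years = 40
true_trend = [0.70 - 0.003 * t for t in range(n_years)]   # slowly declining survival
process_sd, sampling_sd = 0.03, 0.05                      # sampling_sd assumed known
true_rates = [m + random.gauss(0, process_sd) for m in true_trend]
estimates = [s + random.gauss(0, sampling_sd) for s in true_rates]

# method of moments: variance about the trend minus the known sampling variance
resid = [e - m for e, m in zip(estimates, true_trend)]
sigma2_hat = max(statistics.pvariance(resid) - sampling_sd**2, 0.0)

# shrink each annual estimate toward the trend by the random-effects weight
w = sigma2_hat / (sigma2_hat + sampling_sd**2)
shrunk = [m + w * (e - m) for e, m in zip(estimates, true_trend)]

rmse = lambda xs: (sum((a - b)**2 for a, b in zip(xs, true_rates)) / n_years) ** 0.5
rmse_raw, rmse_shrunk = rmse(estimates), rmse(shrunk)
```

The shrinkage weight captures the point of the abstract: ignoring natural variation among parameters (treating `sigma2_hat` as zero pattern-free noise, or as all pattern) misstates the evidence, while partial pooling balances the two.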
NASA Astrophysics Data System (ADS)
Yang, B.; Lee, D. K.
2016-12-01
Understanding the spatial distribution of irrigation requirements is critically important for agricultural water management. However, many studies of future agricultural water management in Korea have assessed irrigation requirements at the watershed or administrative-district scale and have not accounted for the spatial distribution. Lumped hydrologic models have typically been used in Korea to simulate watershed-scale irrigation requirements, whereas distributed hydrologic models can simulate the spatial distribution grid by grid. To overcome this shortcoming, we applied a grid-based global hydrologic model (H08) at the local scale to estimate the spatial distribution of future irrigation requirements on the Korean Peninsula. Korea is one of the world's most densely populated countries, with high production of and demand for rice, which requires more soil moisture than other crops. However, most of the precipitation is concentrated in a particular season that does not coincide with the crop growing season. This precipitation pattern makes management of agricultural water, which accounts for approximately 60% of total water usage, a critical issue in Korea. Furthermore, under future climate change, precipitation is predicted to become even more concentrated, necessitating changes to future water management plans. In order to apply a global hydrological model at the local scale, we selected the major crops appropriate to social and local climate conditions in Korea to estimate cropping area and yield, and revised the cropping-area map for greater accuracy. As a result, the estimated future irrigation requirement varies across projections but decreases slightly in most cases. The simulations reveal that evapotranspiration increases slightly while effective precipitation also increases, balancing the irrigation requirement. These findings offer practical guidance to decision makers for further agricultural water management planning, including future development of water supply plans to resolve water scarcity.
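The grid-by-grid idea can be sketched as a per-cell water balance, where the net irrigation requirement is the unmet part of crop water demand. The figures and the formulation below are schematic placeholders, not the actual H08 model:

```python
import random

random.seed(0)
ROWS, COLS = 4, 4
# hypothetical per-cell water balance terms (mm per season), not H08 inputs
crop_et = [[random.uniform(300, 600) for _ in range(COLS)] for _ in range(ROWS)]
eff_precip = [[random.uniform(100, 500) for _ in range(COLS)] for _ in range(ROWS)]

# net irrigation requirement per cell: crop demand not met by effective precipitation
irrigation_req = [[max(0.0, et - p) for et, p in zip(er, pr)]
                  for er, pr in zip(crop_et, eff_precip)]

# a lumped, watershed-scale total hides the spatial pattern the grid preserves
lumped_total = sum(sum(row) for row in irrigation_req)
```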
Shumba, Edwin; Nzombe, Phoebe; Mbinda, Absolom; Simbi, Raiva; Mangwanya, Douglas; Kilmarx, Peter H; Luman, Elizabeth T; Zimuto, Sibongile N
2014-01-01
In 2010, the Zimbabwe Ministry of Health and Child Welfare (MoHCW) adopted the Strengthening Laboratory Management Toward Accreditation (SLMTA) programme as a tool for laboratory quality systems strengthening. To evaluate the financial costs of SLMTA implementation using two models (external facilitators; and internal local or MoHCW facilitators) from the perspective of the implementing partner and to estimate resources needed to scale up the programme nationally in all 10 provinces. The average expenditure per laboratory was calculated based on accounting records; calculations included implementing partner expenses but excluded in-kind contributions and salaries of local facilitators and trainees. We also estimated theoretical financial costs, keeping all contextual variables constant across the two models. Resource needs for future national expansion were estimated based on a two-phase implementation plan, in which 12 laboratories in each of five provinces would implement SLMTA per phase; for the internal facilitator model, 20 facilitators would be trained at the beginning of each phase. The average expenditure to implement SLMTA in 11 laboratories using external facilitators was approximately US$5800 per laboratory; expenditure in 19 laboratories using internal facilitators was approximately $6000 per laboratory. The theoretical financial cost of implementing a 12-laboratory SLMTA cohort keeping all contextual variables constant would be approximately $58 000 using external facilitators; or $15 000 using internal facilitators, plus $86 000 to train 20 facilitators. The financial cost for subsequent SLMTA cohorts using the previously-trained internal facilitators would be approximately $15 000, yielding a break-even point of 2 cohorts, at $116 000 for either model. 
Estimated resources required for national implementation in 120 laboratories would therefore be $580 000 using external facilitators ($58 000 per province) and $322 000 using internal facilitators ($86 000 for facilitator training in each of two phases plus $15 000 for SLMTA implementation in each province). Investing in training of internal facilitators will result in substantial savings over the scale-up of the programme. Our study provides information to assist policy makers to develop strategic plans for investing in laboratory strengthening.
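The break-even arithmetic reported above can be reproduced directly from the per-cohort figures in the abstract (≈$58 000 per external-facilitator cohort; $86 000 per round of facilitator training plus ≈$15 000 per internal-facilitator cohort):

```python
EXTERNAL_PER_COHORT = 58_000   # ~cost per 12-laboratory cohort, external facilitators
INTERNAL_TRAINING = 86_000     # one-time cost of training 20 internal facilitators
INTERNAL_PER_COHORT = 15_000   # ~cost per cohort once facilitators are trained

def external_cost(cohorts):
    return EXTERNAL_PER_COHORT * cohorts

def internal_cost(cohorts, training_rounds=1):
    return INTERNAL_TRAINING * training_rounds + INTERNAL_PER_COHORT * cohorts

# break-even: two cohorts cost $116 000 under either model
assert external_cost(2) == internal_cost(2) == 116_000

# national scale-up: 10 provinces (one cohort each), two phases of facilitator training
national_external = external_cost(10)                     # $580 000
national_internal = internal_cost(10, training_rounds=2)  # $322 000
```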
Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE
NASA Astrophysics Data System (ADS)
Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.
2015-12-01
Groundwater change is difficult to monitor over large scales. One of the most successful approaches is the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation, in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state does, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time-variable vs. time-constant errors, across-scale vs. across-model errors, and error spectral content (across scales and across models). 
More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing for fairer use in management applications and for better integration of GRACE-based measurements with observations from other sources.
Estimating groundwater recharge
Healy, Richard W.; Scanlon, Bridget R.
2010-01-01
Understanding groundwater recharge is essential for successful management of water resources and modeling fluid and contaminant transport within the subsurface. This book provides a critical evaluation of the theory and assumptions that underlie methods for estimating rates of groundwater recharge. Detailed explanations of the methods are provided - allowing readers to apply many of the techniques themselves without needing to consult additional references. Numerous practical examples highlight benefits and limitations of each method. Approximately 900 references allow advanced practitioners to pursue additional information on any method. For the first time, theoretical and practical considerations for selecting and applying methods for estimating groundwater recharge are covered in a single volume with uniform presentation. Hydrogeologists, water-resource specialists, civil and agricultural engineers, earth and environmental scientists and agronomists will benefit from this informative and practical book. It can serve as the primary text for a graduate-level course on groundwater recharge or as an adjunct text for courses on groundwater hydrology or hydrogeology.
Edgil, Dianna; Stankard, Petra; Forsythe, Steven; Rech, Dino; Chrouser, Kristin; Adamu, Tigistu; Sakallah, Sameer; Thomas, Anne Goldzier; Albertini, Jennifer; Stanton, David; Dickson, Kim Eva; Njeuhmeli, Emmanuel
2011-01-01
Background The global HIV prevention community is implementing voluntary medical male circumcision (VMMC) programs across eastern and southern Africa, with a goal of reaching 80% coverage in adult males by 2015. Successful implementation will depend on the accessibility of commodities essential for VMMC programming and the appropriate allocation of resources to support the VMMC supply chain. For this, the United States President’s Emergency Plan for AIDS Relief, in collaboration with the World Health Organization and the Joint United Nations Programme on HIV/AIDS, has developed a standard list of commodities for VMMC programs. Methods and Findings This list of commodities was used to inform program planning for a 1-y program to circumcise 152,000 adult men in Swaziland. During this process, additional key commodities were identified, expanding the standard list to include commodities for waste management, HIV counseling and testing, and the treatment of sexually transmitted infections. The approximate costs for the procurement of commodities, management of a supply chain, and waste disposal, were determined for the VMMC program in Swaziland using current market prices of goods and services. Previous costing studies of VMMC programs did not capture supply chain costs, nor the full range of commodities needed for VMMC program implementation or waste management. Our calculations indicate that depending upon the volume of services provided, supply chain and waste management, including commodities and associated labor, contribute between US$58.92 and US$73.57 to the cost of performing one adult male circumcision in Swaziland. Conclusions Experience with the VMMC program in Swaziland indicates that supply chain and waste management add approximately US$60 per circumcision, nearly doubling the total per procedure cost estimated previously; these additional costs are used to inform the estimate of per procedure costs modeled by Njeuhmeli et al. 
in “Voluntary Medical Male Circumcision: Modeling the Impact and Cost of Expanding Male Circumcision for HIV Prevention in Eastern and Southern Africa.” Program planners and policy makers should consider the significant contribution of supply chain and waste management to VMMC program costs as they determine future resource needs for VMMC programs. Please see later in the article for the Editors' Summary PMID:22140363
Polar bears in the Beaufort Sea: A 30-year mark-recapture case history
Amstrup, Steven C.; McDonald, T.L.; Stirling, I.
2001-01-01
Knowledge of population size and trend is necessary to manage anthropogenic risks to polar bears (Ursus maritimus). Despite capturing over 1,025 females between 1967 and 1998, previously calculated estimates of the size of the southern Beaufort Sea (SBS) population have been unreliable. We improved estimates of numbers of polar bears by modeling heterogeneity in capture probability with covariates. Important covariates referred to the year of the study, age of the bear, capture effort, and geographic location. Our choice of best approximating model was based on the inverse relationship between variance in parameter estimates and likelihood of the fit and suggested a growth from ≈ 500 to over 1,000 females during this study. The mean coefficient of variation on estimates for the last decade of the study was 0.16—the smallest yet derived. A similar model selection approach is recommended for other projects where a best model is not identified by likelihood criteria alone.
Ding, Xu-Tong; Wang, Ji-Hua
2018-03-01
Lhasa, the capital of Tibet, is located on the Tibetan Plateau. Accelerated economic development and flourishing tourism resulting from the opening of the Qinghai-Tibet Railway (QTR) have increased solid waste generation and contamination in recent years. Using data from the Lhasa Statistical Yearbooks and previous studies, this study estimates the future population of permanent residents and tourists for 2015-2025 using least-squares extrapolation, evaluates the effects of the QTR on municipal solid waste (MSW) generation in Lhasa, and estimates future MSW generation. There were approximately 1.35 million tourists in 2008, when the QTR had been operating for 2 years, and MSW generation was approximately 470 tons per day. The amount of MSW generated increased dramatically over time after the opening of the QTR. This study estimates that MSW generation will reach 962 tons per day in 2025. Because of the QTR, increasing numbers of people are traveling to Lhasa, and tourism has driven the development of the local economy. During the study period, the proportion of MSW produced by tourists increased from 2.99% to 20.06%, and it is estimated that it will reach 33.49% in 2025. If the current trend continues, Lhasa will face significant challenges in garbage disposal. This study analyzes the current situation of urban garbage treatment in Lhasa and suggests several improvements related to MSW generation, transportation equipment, disposal, and resource recycling.
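The least-squares extrapolation used for such projections can be sketched as an ordinary linear trend fit; the series below is illustrative, not the actual yearbook data:

```python
# simple linear least-squares trend fit and extrapolation
# (illustrative figures, not the actual Lhasa Statistical Yearbook series)
years = [2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015]
msw_tpd = [470, 505, 540, 580, 610, 650, 685, 720]   # hypothetical tons/day

n = len(years)
mean_x = sum(years) / n
mean_y = sum(msw_tpd) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, msw_tpd)) / \
        sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

def predict(year):
    """Extrapolate the fitted linear trend to a future year."""
    return intercept + slope * year

forecast_2025 = predict(2025)
```

A production analysis would also report prediction intervals, since extrapolating a linear trend ten years out carries substantial uncertainty.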
NASA Technical Reports Server (NTRS)
Murphy, K. A.
1988-01-01
A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.
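The fit-to-data idea, stripped of the spline machinery, can be sketched for a constant delay: simulate a scalar delay equation with linear interpolation into the stored history, then recover the delay by minimizing a least-squares criterion over a grid of candidates. A toy sketch, far simpler than the scheme in the report:

```python
def simulate(tau, t_end=5.0, dt=0.001):
    """Forward-Euler integration of x'(t) = -x(t - tau), with constant
    history x(t) = 1 for t <= 0 and linear interpolation into the past."""
    n = int(round(t_end / dt))
    xs = [1.0]
    for i in range(n):
        t_delayed = i * dt - tau
        if t_delayed <= 0.0:
            x_del = 1.0                       # still inside the prescribed history
        else:
            j = t_delayed / dt
            k = int(j)
            frac = j - k
            k2 = min(k + 1, len(xs) - 1)
            x_del = xs[k] * (1.0 - frac) + xs[k2] * frac
        xs.append(xs[-1] - dt * x_del)
    return xs

true_tau = 0.8
data = simulate(true_tau)                     # synthetic "observations"

def sse(tau):
    """Least-squares fit-to-data criterion for a candidate delay."""
    return sum((m - d) ** 2 for m, d in zip(simulate(tau), data))

candidates = [round(0.1 * k, 1) for k in range(1, 16)]   # delays 0.1 ... 1.5
tau_hat = min(candidates, key=sse)
```

The convergence theorems in the abstract make the analogous claim rigorously: fitting the data through approximating systems recovers, in the limit, a fit through the original delay system.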
Ranzi, Andrea; Ancona, Carla; Angelini, Paola; Badaloni, Chiara; Cernigliaro, Achille; Chiusolo, Monica; Parmagnani, Federica; Pizzuti, Renato; Scondotto, Salvatore; Cadum, Ennio; Forastiere, Francesco; Lauriola, Paolo
2014-01-01
The SESPIR Project (Epidemiological Surveillance of the Health Status of the Resident Population Around Waste Treatment Plants) assessed the health impact on residents living near incinerators, landfills and mechanical biological treatment plants in five Italian regions (Emilia-Romagna, Piedmont, Lazio, Campania, and Sicily). The assessment procedure took into account the available knowledge on the health effects of waste disposal facilities. Analyses addressed three different scenarios: a Baseline scenario, referring to plants active in 2008-2009; a regional future scenario, including the plants expected under the regional waste plans; and a virtuous scenario (Green 2020), based on management of municipal solid waste (MSW) through reduced production and an intensive recovery policy. Of a total population of around 24 million in the 5 regions, more than 380,000 people lived near the plants at Baseline. This population is reduced to approximately 330,000 inhabitants and 170,000 inhabitants in the regional and Green 2020 scenarios, respectively. The health impact was assessed for the period 2008-2040. At Baseline, 1-2 cases per year of cancer attributable to MSW plants were estimated, as well as 26 cases per year of adverse pregnancy outcomes (including low birth weight and birth defects), 102 persons with respiratory symptoms, and about a thousand affected by annoyance caused by odours. These annual estimates translate into 2,725 disability-adjusted life years (DALYs) estimated for the entire period. The DALYs are reduced by approximately 20% and 80% in the two future scenarios, respectively. In these scenarios as well, the health impact is dominated by effects on pregnancy and by the annoyance associated with plant odours. 
In spite of the limitations due to the inevitable assumptions required by the present exercise, the proposed methodology is suitable as a first approach for assessing the different policies that can be adopted in regional planning in the field of waste management. The greatest reduction in health impact is achieved with a virtuous policy of reducing waste production and significantly increasing the collection and recycling of waste.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1983-03-01
This report covers the H-Coal Pilot Plant facility located in Catlettsburg, Kentucky. The authorization for this project was under DOE contract No. DE-AC05-78ET11052, formerly ET-78-C-01-3224. Badger Plants, Inc. carried out the construction management of this facility. The estimated total cost is $147,265,013. A brief process/technical description of the Pilot Plant covers subjects such as objectives, capacity, expected life, etc. A brief technical description of each processing unit, including its purpose in the overall operations of the plant, is given. A general description of the organizational history of the project is given. The current overall organization and a description of the responsibilities of each participant are included. Badger Plants' organization at the manager level is shown.
Churcher, Frances P; Mills, Jeremy F; Forth, Adelle E
2016-08-01
Over the past few decades, many structured risk appraisal measures have been created to respond to the need for accurate assessment of violence risk. The Two-Tiered Violence Risk Estimates Scale (TTV) is a measure designed to integrate an actuarial estimate of violence risk with critical risk management indicators. The current study examined the interrater reliability and predictive validity of the TTV in a sample of violent offenders (n = 120) over an average follow-up period of 17.75 years. The TTV was retrospectively scored and compared with the Violence Risk Appraisal Guide (VRAG), the Statistical Information on Recidivism Scale-Revised (SIR-R1), and the Psychopathy Checklist-Revised (PCL-R). Approximately 53% of the sample reoffended violently, with an overall recidivism rate of 74%. Although the VRAG was the strongest predictor of violent recidivism in the sample, the Actuarial Risk Estimates (ARE) scale of the TTV produced a small, significant effect. The Risk Management Indicators (RMI) produced nonsignificant area under the curve (AUC) values for all recidivism outcomes. Comparisons between measures using AUC values and Cox regression showed no statistical differences in predictive validity. The results of this research will inform the validation and reliability literature on the TTV and will contribute to the overall risk assessment literature. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
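The AUC statistic used to compare the measures equals the probability that a randomly chosen recidivist scores above a randomly chosen non-recidivist (the Mann-Whitney formulation). A sketch with made-up scores, not data from the study:

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney AUC: P(positive score > negative score), ties count 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# hypothetical risk-scale scores, not TTV data
recidivists = [22, 18, 25, 30, 17, 21]
non_recidivists = [12, 15, 20, 9, 14, 16]
example_auc = auc(recidivists, non_recidivists)
```

An AUC of 0.5 corresponds to chance-level discrimination, which is what a "nonsignificant AUC" finding, like that reported for the RMI scale, amounts to.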
Fuhrmeister, Erica R; Schwab, Kellogg J; Julian, Timothy R
2015-10-06
Understanding the excretion and treatment of human waste (feces and urine) in low and middle income countries (LMICs) is necessary to design appropriate waste management strategies. However, excretion and treatment are often difficult to quantify due to decentralization of excreta management. We address this gap by developing a mechanistic, stochastic model to characterize phosphorus, nitrogen, biochemical oxygen demand (BOD), and fecal coliform pollution from human excreta for 108 LMICs. The model estimates excretion and treatment given three scenarios: (1) use of existing sanitation systems, (2) use of World Health Organization-defined "improved sanitation", and (3) use of best available technologies. Our model estimates that more than 10⁹ kg/yr each of phosphorus, nitrogen and BOD are produced. Of this, 22(19-27)%, 11(7-15)%, 17(10-23)%, and 35(23-47)% (mean and 95% range) of BOD, nitrogen, phosphorus, and fecal coliforms, respectively, are removed by existing sanitation systems. Our model estimates that upgrading to "improved sanitation" increases mean removal slightly, to between 17 and 53%. Under the best available technology scenario, only approximately 60-80% of pollutants are treated. To reduce the impact of nutrient and microbial pollution on human and environmental health, improvements in both access to adequate sanitation and sanitation treatment efficiency are needed.
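The mean-and-95%-range style of estimate can be produced by a simple Monte Carlo over uncertain removal efficiencies. The system shares and efficiency ranges below are hypothetical placeholders, not the model's actual inputs:

```python
import random

random.seed(3)

def simulate_removal(n_draws=10_000):
    """Draw aggregate removal fractions for a hypothetical mix of sanitation
    systems, each with an uncertain (uniform) removal efficiency."""
    results = []
    for _ in range(n_draws):
        sewer, septic, none = 0.3, 0.4, 0.3          # hypothetical excreta shares
        removed = (sewer * random.uniform(0.5, 0.9)  # sewered with treatment
                   + septic * random.uniform(0.1, 0.5)  # on-site systems
                   + none * 0.0)                     # no treatment
        results.append(removed)
    results.sort()
    mean = sum(results) / n_draws
    lo = results[int(0.025 * n_draws)]               # 2.5th percentile
    hi = results[int(0.975 * n_draws)]               # 97.5th percentile
    return mean, lo, hi

mean, lo, hi = simulate_removal()
```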
Approximate Upper Limit of Irregular Wave Runup on Riprap.
1988-05-01
...Manager, CERC, and Mr. C. E. Chatham, Jr., Chief, Wave Dynamics Division, CERC. COL Dwayne G. Lee, CE, was Commander and Director of WES during the ... prevent exceedance by runup or to estimate the potential severity of wave overtopping. PART II: SOURCES OF DATA, TEST SETUPS, AND TEST CONDITIONS
Genetics and the investigation of developmental delay/intellectual disability.
Srour, Myriam; Shevell, Michael
2014-04-01
Global developmental delay and intellectual disability are common reasons for diagnostic assessment by paediatricians. There is a multiplicity of possible causes, many of which have genetic, management, and treatment implications for the child and family. Genetic causes are estimated to be responsible for approximately one quarter to one half of identified cases. The multiplicity of individually rare genetic causes challenges the practitioner with respect to the selection of diagnostic tests and accurate diagnosis. To assist the practitioner, practice guidelines have been formulated; these are reviewed and summarised in this article.
Managing Space Situational Awareness Using the Space Surveillance Network
2013-11-14
This report examines the use of utility metrics derived from two forms of expected information gain for each object-sensor pair, as well as the approximated stability of the estimation errors, in order to work towards a tasking strategy. The information-theoretic approaches use the calculation of Fisher information gain.
HIV Infection and Older Americans: The Public Health Perspective
Buchacz, Kate; Gebo, Kelly A.; Mermin, Jonathan
2012-01-01
HIV disease is often perceived as a condition affecting young adults. However, approximately 11% of new infections occur in adults aged 50 years or older. Among persons living with HIV disease, it is estimated that more than half will be aged 50 years or older in the near future. In this review, we highlight issues related to HIV prevention and treatment for HIV-uninfected and HIV-infected older Americans, and outline unique considerations and emerging challenges for public health and patient management in these 2 populations. PMID:22698038
Lekalakala, Ruth; Asmall, Shaidah; Cassim, Naseem
2016-01-01
Background Diagnostic health laboratory services are regarded as an integral part of the national health infrastructure across all countries. Clinical laboratory tests contribute substantially to health system goals of increasing quality of care and improving patient outcomes. Objectives This study aimed to analyse current laboratory expenditures at the primary healthcare (PHC) level in South Africa as processed by the National Health Laboratory Service and to determine the potential cost savings of introducing laboratory demand management. Methods A retrospective cross-sectional analysis of laboratory expenditures for the 2013/2014 financial year across 11 pilot National Health Insurance health districts was conducted. Laboratory expenditure tariff codes were cross-tabulated to the PHC essential laboratory tests list (ELL) to determine inappropriate testing. Data were analysed using a Microsoft Access database and Excel software. Results Approximately R35 million South African Rand (10%) of the estimated R339 million in expenditures was for tests that were not listed within the ELL. Approximately 47% of expenditure was for laboratory tests that were indicated in the algorithmic management of patients on antiretroviral treatment. The other main cost drivers for non-ELL testing included full blood count and urea, as well as electrolyte profiles usually requested to support management of patients on antiretroviral treatment. Conclusions Considerable annual savings of up to 10% in laboratory expenditure are possible at the PHC level by implementing laboratory demand management. In addition, to achieve these savings, a standardised PHC laboratory request form and some form of electronic gatekeeping system that must be supported by an educational component should be implemented. PMID:28879107
GREMEX- GODDARD RESEARCH AND ENGINEERING MANAGEMENT EXERCISE SIMULATION SYSTEM
NASA Technical Reports Server (NTRS)
Vaccaro, M. J.
1994-01-01
GREMEX is a man-machine management simulation game of a research and development project. It can be used to depict a project from just after the development of the project plan through the final construction phase. The GREMEX computer programs are basically a program evaluation and review technique (PERT) reporting system. In the usual PERT program, the operator inputs each month the amount of work performed on each activity and the computer does the bookkeeping to determine the expected completion date of the project. GREMEX automatically assumes that all activities due to be worked in the current month will be worked. GREMEX predicts new durations (and costs) each month based on management actions taken by the players and the contractor's abilities. Each activity is assigned the usual cost and duration estimates but must also be assigned three parameters that relate to the probability that the time estimate is correct, the probability that the cost estimate is correct, and the probability of technical success. Management actions usually can be expected to change these probabilities. For example, use of overtime or double shifts in research and development work will decrease duration and increase cost by known proportions, and will also decrease the probability of technical success due to an increase in the likelihood of accidents or mistakes. This re-estimation of future events and assignment of probability factors gives life to the model. GREMEX is not a production tool for project management; it is a game that can be used to train management personnel in the administration of research and development type projects. GREMEX poses no 'best way' to manage a project. The emphasis of GREMEX is to expose participants to many of the factors involved in decision making when managing a project in a government research and development environment. 
A management team can win the game by surpassing cost, schedule, and technical performance goals established when the simulation began. The serious management experimenter can use GREMEX to explore the results of management methods they could not risk in real life. GREMEX can operate with any research and development type project with up to 15 subcontractors and produces reports simulating monthly or quarterly updates of the project PERT network. Included with the program is a data deck for simulation of a fictitious spacecraft project. Instructions for substituting other projects are also included. GREMEX is written in FORTRAN IV for execution in the batch mode and has been implemented on an IBM 360 with a central memory requirement of approximately 350K (decimal) of 8 bit bytes. The GREMEX system was developed in 1973.
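The PERT bookkeeping that GREMEX builds on reduces to an earliest-finish (critical-path) computation over the activity network. A toy network with hypothetical activities and durations, not GREMEX's data:

```python
# critical-path (PERT-style) earliest-finish computation on a toy activity
# network; activity names and durations are hypothetical
durations = {"design": 4, "fabricate": 6, "software": 5, "integrate": 3, "test": 2}
predecessors = {
    "design": [],
    "fabricate": ["design"],
    "software": ["design"],
    "integrate": ["fabricate", "software"],
    "test": ["integrate"],
}

earliest_finish = {}

def finish(activity):
    """Earliest finish = duration + latest earliest-finish of all predecessors."""
    if activity not in earliest_finish:
        start = max((finish(p) for p in predecessors[activity]), default=0)
        earliest_finish[activity] = start + durations[activity]
    return earliest_finish[activity]

project_duration = max(finish(a) for a in durations)
```

GREMEX's monthly re-estimation amounts to perturbing the `durations` (and costs) according to player actions and probability parameters, then recomputing this schedule.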
NASA Astrophysics Data System (ADS)
Lee, J.; Kim, M.; Son, Y.; Lee, W. K.
2017-12-01
Korean forests have recovered through the national-scale reforestation program and can contribute to the national greenhouse gas (GHG) mitigation goal. Forest carbon (C) sequestration is expected to change with climate change and the forest management regime. In this context, estimating the changes in GHG mitigation potential of the Korean forestry sector under different climate and management assumptions is a timely issue. Thus, we estimated the forest C sequestration of Korea under four scenarios (2010-2050): constant temperature with no management (CT_No), representative concentration pathway (RCP) 8.5 with no management (RCP_No), constant temperature with thinning management (CT_Man), and RCP 8.5 with thinning management (RCP_Man). A dynamic stand growth model (KO-G-Dynamic; for biomass) and a forest C model (FBDC model; for non-biomass) were used at approximately 64,000 simulation units (1 km2). As model input data, forest data (e.g., forest type and stand age) and climate data were spatially prepared from the national forest inventories and the RCP 8.5 climate data. The model simulations showed that the mean annual C sequestration during the period (Tg C yr-1) was 11.0, 9.9, 11.5, and 10.5 under the CT_No, RCP_No, CT_Man, and RCP_Man scenarios, respectively, at the national scale. C sequestration decreased with the passage of time due to the maturing of the forests. Climate change appeared disadvantageous to C sequestration by the forest ecosystems (≈ -1.0 Tg C yr-1) due to the increase in organic matter decomposition. In particular, the decrease in C sequestration under climate change was greater for the needle-leaved species than for the broad-leaved species. Meanwhile, forest management enhanced forest C sequestration (≈ 0.5 Tg C yr-1). Accordingly, implementing appropriate forest management strategies for adaptation would contribute to maintaining C sequestration by the Korean forestry sector under climate change. 
Acknowledgement: This study was supported by the Korean Ministry of Environment (2014001310008).
A Reduced Dimension Static, Linearized Kalman Filter and Smoother
NASA Technical Reports Server (NTRS)
Fukumori, I.
1995-01-01
An approximate Kalman filter and smoother, based on approximations of the state estimation error covariance matrix, is described. Approximations include a reduction of the effective state dimension, use of a static asymptotic error limit, and a time-invariant linearization of the dynamic model for error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. Examples of use come from TOPEX/POSEIDON.
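The combination of approximations described above, a static asymptotic error covariance yielding a time-invariant gain, can be sketched in a few lines. The toy system below is not Fukumori's ocean-state model; the matrices, noise covariances, and dimensions are all illustrative assumptions:

```python
import numpy as np

# Toy linear system x_{k+1} = A x_k + w_k, y_k = H x_k + v_k, filtered
# with a *static* Kalman gain from the asymptotic error covariance,
# in the spirit of the "static asymptotic error limit" approximation.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)   # process-noise covariance (assumed)
R = np.array([[0.1]])  # measurement-noise covariance (assumed)

# Iterate the discrete Riccati recursion to its fixed point.
P = np.eye(2)
for _ in range(500):
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    P = (np.eye(2) - K @ H) @ P_pred

# K is now time-invariant: the filter needs no per-step covariance
# propagation, which is the source of the computational savings.
rng = np.random.default_rng(0)
x_true = np.array([1.0, -0.5])
x_hat = np.zeros(2)
for _ in range(50):
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    x_pred = A @ x_hat
    x_hat = x_pred + K @ (y - H @ x_pred)
print(np.round(x_hat, 3))
```

The reduced-dimension aspect of the paper (projecting the error covariance onto a smaller subspace) is omitted here for brevity; only the static-gain idea is shown.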
Error Estimates for Approximate Solutions of the Riccati Equation with Real or Complex Potentials
NASA Astrophysics Data System (ADS)
Finster, Felix; Smoller, Joel
2010-09-01
A method is presented for obtaining rigorous error estimates for approximate solutions of the Riccati equation, with real or complex potentials. Our main tool is to derive invariant region estimates for complex solutions of the Riccati equation. We explain the general strategy for applying these estimates and illustrate the method in typical examples, where the approximate solutions are obtained by gluing together WKB and Airy solutions of corresponding one-dimensional Schrödinger equations. Our method is motivated by, and has applications to, the analysis of linear wave equations in the geometry of a rotating black hole.
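For readers unfamiliar with the setup, the standard correspondence between a one-dimensional Schrödinger equation and a Riccati equation (a generic form; the paper's precise conventions and potentials may differ) is:

```latex
% Logarithmic-derivative substitution: if \phi solves the 1D Schr\"odinger
% equation, then y = \phi'/\phi solves a Riccati equation.
\[
  \phi''(x) = V(x)\,\phi(x), \qquad y(x) := \frac{\phi'(x)}{\phi(x)}
  \;\Longrightarrow\; y'(x) + y(x)^2 = V(x).
\]
% A WKB-type approximate solution is y \approx \pm\sqrt{V}; the
% invariant-region estimates bound the error of such approximations.
```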
Using global sensitivity analysis of demographic models for ecological impact assessment.
Aiello-Lammens, Matthew E; Akçakaya, H Resit
2017-02-01
Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
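The core GSA step the abstract describes, varying several inputs simultaneously and then using regression to measure the importance of input-parameter uncertainties, can be sketched as follows. The demographic model, parameter ranges, and variable names here are toy assumptions, not the Snowy Plover PVA:

```python
import numpy as np

# Global sensitivity sketch: sample all inputs at once, run the model,
# then rank inputs by standardized regression coefficients.
rng = np.random.default_rng(1)
n = 500
growth = rng.uniform(0.9, 1.1, n)     # hypothetical demographic inputs
survival = rng.uniform(0.6, 0.9, n)
impact = rng.uniform(0.0, 0.3, n)     # simulated impact-scenario strength

# Toy outcome: population multiplier after 20 steps from N0 = 100.
outcome = 100 * (growth * survival * (1 - impact)) ** 20

# Standardize inputs and (log) outcome, fit by least squares; the
# absolute coefficients measure input importance.
X = np.column_stack([growth, survival, impact])
Xs = (X - X.mean(0)) / X.std(0)
logy = np.log(outcome)
ys = (logy - logy.mean()) / logy.std()
coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(np.round(coef, 2))
```

In this toy setup survival spans a wider relative range than growth, so its standardized coefficient comes out larger, which is exactly the kind of ranking GSA is meant to produce.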
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berkel, M. van; Fellow of the Japan Society for the Promotion of Science; FOM Institute DIFFER-Dutch Institute for Fundamental Energy Research, Association EURATOM-FOM, Trilateral Euregio Cluster, P.O. Box 1207, 3430 BE Nieuwegein
In this paper, a number of new explicit approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based on the heat equation in cylindrical geometry using the symmetry (Neumann) boundary condition at the plasma center. This means that the approximations derived here should be used only to estimate transport coefficients between the plasma center and the off-axis perturbative source. If the effect of cylindrical geometry is small, it is also possible to use the semi-infinite domain approximations presented in Part I and Part II of this series. A number of new approximations are derived in this part, Part III, based upon continued fractions of the modified Bessel function of the first kind and the confluent hypergeometric function of the first kind. These approximations, together with the approximations based on semi-infinite domains, are compared for heat waves traveling towards the center. The relative error for the different derived approximations is presented for different values of the frequency, transport coefficients, and dimensionless radius. Moreover, it is shown how combinations of different explicit formulas can be used to estimate the transport coefficients over a large parameter range for cases without convection and damping, cases with damping only, and cases with convection and damping. The relative error between the approximation and its underlying model is below 2% for the case where only diffusivity and damping are considered. If convectivity is also considered, the diffusivity can be estimated well in a large region, but there is also a large region in which no suitable approximation is found. This paper is the third part (Part III) of a series of three papers. In Part I, the semi-infinite slab approximations were treated. In Part II, cylindrical approximations are treated for heat waves traveling towards the plasma edge assuming a semi-infinite domain.
Effects of Management on Soil Carbon Pools in California Rangeland Ecosystems
NASA Astrophysics Data System (ADS)
Silver, W. L.; Ryals, R.; Lewis, D. J.; Creque, J.; Wacker, M.; Larson, S.
2008-12-01
Rangeland ecosystems managed for livestock production represent the largest land-use footprint globally, covering more than one-quarter of the world's land surface (Asner et al. 2004). In California, rangelands cover an estimated 17 million hectares, or approximately 40% of the land area (FRAP 2003). These ecosystems have considerable potential to sequester carbon (C) in soil and offset greenhouse gas emissions through changes in land management practices. Climate policies and C markets may provide incentives for rangeland managers to pursue strategies that optimize soil C storage, yet we lack a thorough understanding of the effects of management on soil C pools in rangelands over time and space. We sampled soil C pools on rangelands in a 260 km2 region of Marin and Sonoma counties to determine if patterns in soil C storage exist with management. Replicate soil samples were collected from 35 fields that spanned the dominant soil orders, plant communities, and management practices in the region while controlling for slope and bioclimatic zone (n = 1050). Management practices included organic amendments, intensive (dairy) and extensive (other) grazing practices, and subsoiling. Soil C pools ranged from approximately 50 to 140 Mg C ha-1 to 1 m depth, with a mean of 99 ± 22 (sd) Mg C ha-1. Differences among sites were due primarily to C concentrations, which exhibited a much larger coefficient of variation than bulk density at all depths. There were no statistically significant differences among the dominant soil orders. Subsoiling appeared to significantly increase soil C content in the top 50 cm, even though subsoiling had been performed for the first time only the previous November. Organic amendments also appeared to greatly increase soil C pools and were the dominant factor that distinguished soil C pools in intensive and extensive land uses. Our results indicate that management has the potential to significantly increase soil C pools.
Future research will determine the location of sequestered C within the soil matrix and its turnover time.
Computational methods for estimation of parameters in hyperbolic systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.; Murphy, K. A.
1983-01-01
Approximation techniques for estimating spatially varying coefficients and unknown boundary parameters in second order hyperbolic systems are discussed. Methods for state approximation (cubic splines, tau-Legendre) and approximation of function space parameters (interpolatory splines) are outlined, and numerical findings for use of the resulting schemes in model "one-dimensional seismic inversion" problems are summarized.
Sleed, Michelle; Eccleston, Christopher; Beecham, Jennifer; Knapp, Martin; Jordan, Abbie
2005-12-15
Chronic pain in adulthood is one of the most costly conditions in modern western society. However, very little is known about the costs of chronic pain in adolescence. This preliminary study explored methods for collecting economic-related data for this population and estimated the cost-of-illness of adolescent chronic pain in the United Kingdom. The client service receipt inventory was specifically adapted for use with parents of adolescent chronic pain patients to collect economic-related data (CSRI-Pain). This method was compared and discussed in relation to other widely used methods. The CSRI-Pain was sent to 52 families of adolescents with chronic pain to complete as a self-report retrospective questionnaire. These data were linked with unit costs to estimate the total care cost package for each family. The economic impact of adolescent chronic pain was found to be high. The mean cost per adolescent experiencing chronic pain was approximately 8,000 pounds per year, including direct and indirect costs. The adolescents attending a specialised pain management unit, who had predominantly non-inflammatory pain, accrued significantly higher costs than those attending rheumatology outpatient clinics, who had mostly inflammatory diagnoses. Extrapolating the mean total cost to estimated UK prevalence data of adolescent chronic pain demonstrates a cost-of-illness to UK society of approximately 3,840 million pounds in one year. The implications of the study are discussed.
Estimated Water Flows in 2005: United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, C A; Belles, R D; Simon, A J
2011-03-16
Flow charts depicting water use in the United States have been constructed from publicly available data and estimates of water use patterns. Approximately 410,500 million gallons per day of water are managed throughout the United States for use in farming, power production, residential, commercial, and industrial applications. Water is obtained from four major resource classes: fresh surface-water, saline (ocean) surface-water, fresh groundwater, and saline (brackish) groundwater. Water that is not consumed or evaporated during its use is returned to surface bodies of water. The flow patterns are represented in a compact 'visual atlas' of 52 state-level (all 50 states in addition to Puerto Rico and the Virgin Islands) and one national water flow chart representing a comprehensive systems view of national water resources, use, and disposition.
NASA Technical Reports Server (NTRS)
Unal, Resit; Morris, W. Douglas; White, Nancy H.; Lepsch, Roger A.; Brown, Richard W.
2000-01-01
This paper describes the development of parametric models for estimating operational reliability and maintainability (R&M) characteristics for reusable vehicle concepts, based on vehicle size and technology support level. An R&M analysis tool (RMAT) and response surface methods are utilized to build parametric approximation models for rapidly estimating operational R&M characteristics such as mission completion reliability. These models, which approximate RMAT, can then be utilized for fast analysis of operational requirements, for life-cycle cost estimating, and for multidisciplinary design optimization.
Cruz-Marcelo, Alejandro; Ensor, Katherine B; Rosner, Gary L
2011-06-01
The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material.
NASA Technical Reports Server (NTRS)
Flemming, Ken
1991-01-01
Lunar vehicles that will be space based and reusable will require resupply of propellants in orbit. Approximately 75 pct. of the total mass delivered to low earth orbit will be propellants. Consequently, the propellant management techniques selected for Space Exploration Initiative (SEI) orbital operations will have a major influence on the overall SEI architecture. Five proposed propellant management facility (PMF) concepts were analyzed and compared in order to determine the best method of resupplying reusable, space based Lunar Transfer Vehicles (LTVs). The processing time needed at the Space Station to prepare the LTV for its next lunar mission was estimated for each of the PMF concepts. The estimated times required to assemble and maintain the different PMF concepts were also compared. The results of the maintenance analysis were similar, with co-orbiting depots needing 100 to 350 pct. more annual maintenance. The first few external tank mating operations at KSC encountered many problems that could cause serious lunar mission schedule delays. The use of drop tanks on lunar vehicles increases the number of critical propellant interface disturbances by a factor of four.
An empirical method for estimating travel times for wet volcanic mass flows
Pierson, Thomas C.
1998-01-01
Travel times for wet volcanic mass flows (debris avalanches and lahars) can be forecast as a function of distance from source when the approximate flow rate (peak discharge near the source) can be estimated beforehand. The near-source flow rate is primarily a function of initial flow volume, which should be possible to estimate to an order of magnitude on the basis of geologic, geomorphic, and hydrologic factors at a particular volcano. Least-squares best fits to plots of flow-front travel time as a function of distance from source provide predictive second-degree polynomial equations with high coefficients of determination for four broad size classes of flow based on near-source flow rate: extremely large flows (>1 000 000 m3/s), very large flows (10 000–1 000 000 m3/s), large flows (1000–10 000 m3/s), and moderate flows (100–1000 m3/s). A strong nonlinear correlation that exists between initial total flow volume and flow rate for "instantaneously" generated debris flows can be used to estimate near-source flow rates in advance. Differences in geomorphic controlling factors among different flows in the data sets have relatively little effect on the strong nonlinear correlations between travel time and distance from source. Differences in flow type may be important, especially for extremely large flows, but this could not be evaluated here. At a given distance away from a volcano, travel times can vary by approximately an order of magnitude depending on flow rate. The method can provide emergency-management officials a means for estimating time windows for evacuation of communities located in hazard zones downstream from potentially hazardous volcanoes.
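The fitting step described above, a least-squares second-degree polynomial of flow-front travel time against distance from source, can be sketched as follows with synthetic data (Pierson's lahar records are not reproduced here):

```python
import numpy as np

# Least-squares second-degree polynomial of travel time vs distance,
# as in the predictive equations described in the abstract. The
# distance/time pairs below are hypothetical, not observed flow data.
distance_km = np.array([5, 10, 20, 40, 60, 80])
travel_min = np.array([6, 14, 32, 75, 125, 180])

coeffs = np.polyfit(distance_km, travel_min, deg=2)
predict = np.poly1d(coeffs)

# Estimated time window for a community 50 km downstream, the kind of
# figure an emergency manager would use for evacuation planning.
print(round(float(predict(50.0)), 1))
```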
Demidenko, Eugene
2017-09-01
The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Gauss-Markov theorem is formulated based on the near-exact EE density approximation.
A-posteriori error estimation for the finite point method with applications to compressible flow
NASA Astrophysics Data System (ADS)
Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio
2017-08-01
An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.
On the use of star-shaped genealogies in inference of coalescence times.
Rosenberg, Noah A; Hirsh, Aaron E
2003-01-01
Genealogies from rapidly growing populations have approximate "star" shapes. We study the degree to which this approximation holds in the context of estimating the time to the most recent common ancestor (T(MRCA)) of a set of lineages. In an exponential growth scenario, we find that unless the product of population size (N) and growth rate (r) is at least approximately 10^5, the "pairwise comparison estimator" of T(MRCA) that derives from the star genealogy assumption has bias of 10-50%. Thus, the estimator is appropriate only for large populations that have grown very rapidly. The "tree-length estimator" of T(MRCA) is more biased than the pairwise comparison estimator, having low bias only for extremely large values of Nr. PMID:12930771
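A minimal sketch of the pairwise comparison estimator under the star-genealogy assumption: every pair of lineages diverged T(MRCA) generations ago, so the expected per-site pairwise difference is 2*mu*T(MRCA) and T(MRCA) can be estimated from the mean pairwise difference. The sequences and mutation rate below are toy assumptions:

```python
import itertools
import numpy as np

# Pairwise comparison estimator under a star genealogy: with per-site
# mutation rate mu, E[pairwise differences] = 2 * mu * T * n_sites,
# so T_hat = pi / (2 * mu * n_sites). Rows are toy 0/1 "sequences".
mu = 1e-3          # mutation rate per site per generation (assumed)
seqs = np.array([  # hypothetical aligned lineages
    [0, 1, 0, 0, 1, 0, 0, 0, 1, 0],
    [0, 1, 1, 0, 0, 0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0, 1, 0, 1, 0],
])

pairs = list(itertools.combinations(range(len(seqs)), 2))
pi = np.mean([np.sum(seqs[i] != seqs[j]) for i, j in pairs])
n_sites = seqs.shape[1]
t_mrca = pi / (2 * mu * n_sites)   # estimate in generations
print(round(t_mrca, 1))            # → 175.0
```

The bias the abstract quantifies arises precisely because real genealogies are not exactly star-shaped unless Nr is very large.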
Spent shot availability and ingestion on areas managed for mourning doves
Schulz, J.H.; Millspaugh, J.J.; Washburn, B.E.; Wester, G.R.; Lanigan, J. T.; Franson, J.C.
2002-01-01
Mourning dove (Zenaida macroura) hunting is becoming increasingly popular, especially in managed shooting fields. Given the possible increase in the availability of lead (Pb) shot on these areas, our objective was to estimate availability and ingestion of spent shot at the Eagle Bluffs Conservation Area (EBCA, hunted with nontoxic shot) and the James A. Reed Memorial Wildlife Area (JARWA, hunted with Pb shot) in Missouri. During 1998, we collected soil samples one or 2 weeks prior to the hunting season (prehunt) and after 4 days of dove hunting (posthunt). We also collected information on number of doves harvested, number of shots fired, shotgun gauge, and shotshell size used. Dove carcasses were collected on both areas during 1998-99. At EBCA, 60 hunters deposited an estimated 64,775 pellets/ha of nontoxic shot on or around the managed field. At JARWA, approximately 1,086,275 pellets/ha of Pb shot were deposited by 728 hunters. Our posthunt estimates of spent-shot availability from soil sampling were 0 pellets/ha for EBCA and 6,342 pellets/ha for JARWA. Our findings suggest that existing soil sampling protocols may not provide accurate estimates of spent-shot availability in managed dove shooting fields. During 1998-99, 15 of 310 (4.8%) mourning doves collected from EBCA had ingested nontoxic shot. Of those doves, 6 (40.0%) contained ≥7 shot pellets. In comparison, only 2 of 574 (0.3%) doves collected from JARWA had ingested Pb shot. Because a greater proportion of doves ingested multiple steel pellets compared to Pb pellets, we suggest that doves feeding in fields hunted with Pb shot may succumb to acute Pb toxicosis and thus become unavailable to harvest, resulting in an underestimate of ingestion rates. Although further research is needed to test this hypothesis, our findings may partially explain why previous studies have shown few doves with ingested Pb shot despite their feeding on areas with high Pb shot availability.
Queuing Time Prediction Using WiFi Positioning Data in an Indoor Scenario.
Shu, Hua; Song, Ci; Pei, Tao; Xu, Lianming; Ou, Yang; Zhang, Libin; Li, Tao
2016-11-22
Queuing is common in urban public places. Automatically monitoring and predicting queuing time can not only help individuals to reduce their wait time and alleviate anxiety but also help managers to allocate resources more efficiently and enhance their ability to address emergencies. This paper proposes a novel method to estimate and predict queuing time in indoor environments based on WiFi positioning data. First, we use a series of parameters to identify the trajectories that can be used as representatives of queuing time. Next, we divide the day into equal time slices and estimate individuals' average queuing time during specific time slices. Finally, we build a nonstandard autoregressive (NAR) model trained using the previous day's WiFi estimation results and actual queuing time to predict the queuing time in the upcoming time slice. A case study comparing two other time series analysis models shows that the NAR model has better precision. Random topological errors caused by the drift phenomenon of WiFi positioning technology (locations determined by a WiFi positioning system may drift accidentally) and systematic topological errors caused by the positioning system are the main factors that affect the estimation precision. Therefore, we optimize the deployment strategy during the positioning system deployment phase and propose a drift ratio parameter pertaining to the trajectory screening phase to alleviate the impact of topological errors and improve estimates. The WiFi positioning data from an eight-day case study conducted at the T3-C entrance of Beijing Capital International Airport show that the mean absolute estimation error is 147 s, which is approximately 26.92% of the actual queuing time. For predictions using the NAR model, the proportion is approximately 27.49%. The theoretical predictions and the empirical case study indicate that the NAR model is an effective method to estimate and predict queuing time in indoor public areas.
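The prediction step can be illustrated with a plain autoregressive fit on lagged time slices. This is a simplification of the paper's nonstandard autoregressive (NAR) model, which trains on the previous day's WiFi estimates; the queuing-time series below is synthetic:

```python
import numpy as np

# Plain AR sketch of queuing-time prediction: 96 fifteen-minute slices
# with a daily sinusoidal pattern plus noise (synthetic data).
rng = np.random.default_rng(2)
t = np.arange(96)
queue_s = 300 + 200 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 20, 96)

p = 3  # number of lagged slices used as predictors
# Row k of X holds [q[k], q[k+1], q[k+2]] (oldest to newest),
# predicting y[k] = q[k+3].
X = np.column_stack([queue_s[i:len(queue_s) - p + i] for i in range(p)])
y = queue_s[p:]
design = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

# Predict the upcoming slice from the last p observed slices.
next_slice = coef[0] + queue_s[-p:] @ coef[1:]
print(round(float(next_slice), 1))
```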
Multistate modeling of habitat dynamics: Factors affecting Florida scrub transition probabilities
Breininger, D.R.; Nichols, J.D.; Duncan, B.W.; Stolen, Eric D.; Carter, G.M.; Hunt, D.K.; Drese, J.H.
2010-01-01
Many ecosystems are influenced by disturbances that create specific successional states and habitat structures that species need to persist. Estimating transition probabilities between habitat states and modeling the factors that influence such transitions have many applications for investigating and managing disturbance-prone ecosystems. We identify the correspondence between multistate capture-recapture models and Markov models of habitat dynamics. We exploit this correspondence by fitting and comparing competing models of different ecological covariates affecting habitat transition probabilities in Florida scrub and flatwoods, a habitat important to many unique plants and animals. We subdivided a large scrub and flatwoods ecosystem along central Florida's Atlantic coast into 10-ha grid cells, which approximated average territory size of the threatened Florida Scrub-Jay (Aphelocoma coerulescens), a management indicator species. We used 1.0-m resolution aerial imagery for 1994, 1999, and 2004 to classify grid cells into four habitat quality states that were directly related to Florida Scrub-Jay source-sink dynamics and management decision making. Results showed that static site features related to fire propagation (vegetation type, edges) and temporally varying disturbances (fires, mechanical cutting) best explained transition probabilities. Results indicated that much of the scrub and flatwoods ecosystem was resistant to moving from a degraded state to a desired state without mechanical cutting, an expensive restoration tool. We used habitat models parameterized with the estimated transition probabilities to investigate the consequences of alternative management scenarios on future habitat dynamics. We recommend this multistate modeling approach as being broadly applicable for studying ecosystem, land cover, or habitat dynamics. 
The approach provides maximum-likelihood estimates of transition parameters, including precision measures, and can be used to assess evidence among competing ecological models that describe system dynamics. © 2010 by the Ecological Society of America.
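The bookkeeping behind transition-probability estimation can be sketched as a naive count-and-normalize step. The multistate capture-recapture machinery in the paper additionally handles detection error and state uncertainty, which this sketch ignores; the grid-cell states below are invented:

```python
import numpy as np

# Naive transition-probability estimate: count grid-cell moves between
# two survey years and row-normalize. States 1-4 stand for the habitat
# quality classes; here they are assumed to be observed without error.
states_1999 = np.array([1, 1, 2, 2, 2, 3, 3, 4, 4, 1, 2, 3])
states_2004 = np.array([1, 2, 2, 3, 2, 3, 4, 4, 3, 1, 2, 3])

n_states = 4
counts = np.zeros((n_states, n_states))
for a, b in zip(states_1999, states_2004):
    counts[a - 1, b - 1] += 1

# Row i gives the estimated probability of moving from state i+1
# to each state over the 5-year interval.
transition = counts / counts.sum(axis=1, keepdims=True)
print(np.round(transition, 2))
```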
Kroll, Andrew J; Jones, Jay E; Stringer, Angela B; Meekins, Douglas J
2016-01-01
Quantifying spatial and temporal variability in population trends is a critical aspect of successful management of imperiled species. We evaluated territory occupancy dynamics of northern spotted owls (Strix occidentalis caurina) in California, USA, 1990-2014. The study area possessed two unique aspects. First, timber management has occurred for over 100 years, resulting in dramatically different forest successional and structural conditions compared to other areas. Second, the barred owl (Strix varia), an exotic congener known to exert significant negative effects on spotted owls, has not colonized the study area. We used a Bayesian dynamic multistate model to evaluate if territory occupancy of reproductive spotted owls has declined as in other study areas. The state-space approach for dynamic multistate modeling imputes the number of territories for each nesting state and allows for the estimation of longer-term trends in occupied or reproductive territories from longitudinal studies. The multistate approach accounts for different detection probabilities by nesting state (to account for either inherent differences in detection or for the use of different survey methods for different occupancy states) and reduces bias in state assignment. Estimated linear trends in the number of reproductive territories suggested an average loss of approximately one half territory per year (-0.55, 90% CRI: -0.76, -0.33) in one management block and a loss of 0.15 territories per year (-0.15, 90% CRI: -0.24, -0.07) in another management block during the 25-year observation period. Estimated trends in the third management block were also negative, but substantial uncertainty existed in the estimate (-0.09, 90% CRI: -0.35, 0.17). Our results indicate that the number of territories occupied by northern spotted owl pairs remained relatively constant over the 25-year period (-0.07, 90% CRI: -0.20, 0.05; -0.01, 90% CRI: -0.19, 0.16; -0.16, 90% CRI: -0.40, 0.06). 
However, we cannot exclude small-to-moderate declines or increases in paired territory numbers due to uncertainty in our estimates. Collectively, we conclude spotted owl pair populations on this landscape managed for commercial timber production appear to be more stable and do not show sharp year-over-year declines seen in both managed and unmanaged landscapes with substantial barred owl colonization and persistence. Continued monitoring of reproductive territories can determine whether recent declines continue or whether trends reverse as they have on four previous occasions. Experimental investigations to evaluate changes to spotted owl occupancy dynamics when barred owl populations are reduced or removed entirely can confirm the generality of this conclusion.
Lorz, C; Fürst, C; Galic, Z; Matijasic, D; Podrazky, V; Potocic, N; Simoncic, P; Strauch, M; Vacik, H; Makeschin, F
2010-12-01
We assessed the probability of three major natural hazards (windthrow, drought, and forest fire) for Central and South-Eastern European forests; these hazards are major threats to the provision of forest goods and ecosystem services. In addition, we analyzed their spatial distribution and the implications for a future-oriented management of forested landscapes. For estimating the probability of windthrow, we used rooting depth and average wind speed. Probabilities of drought and fire were calculated from the climatic and total water balance during the growing season. As an approximation to climate change scenarios, we used a simplified approach with a general increase in potential evapotranspiration (pET) of 20%. Monitoring data from the pan-European forest crown condition program and observed burnt areas and hot spots from the European Forest Fire Information System were used to test the plausibility of the probability maps. Regions with high probabilities of natural hazards are identified, and management strategies to minimize those probabilities are discussed. We suggest future research should focus on (i) estimating probabilities using process-based models (including sensitivity analysis), (ii) defining probability in terms of economic loss, (iii) including biotic hazards, (iv) using more detailed data sets on natural hazards, forest inventories, and climate change scenarios, and (v) developing a framework for adaptive risk management.
West, David R; Radcliff, Tiffany A; Brown, Tiffany; Cote, Murray J; Smith, Peter C; Dickinson, W Perry
2012-01-01
Information about the costs and experience of collecting and reporting quality-measure data is vital for practices deciding whether to adopt new quality improvement initiatives or monitor existing ones. Six primary care practices from Colorado's Improving Performance in Practice program participated. We conducted structured key-informant interviews with Improving Performance in Practice coaches and with practice managers, clinicians, and staff, and directly observed the practices. Practices had 3 to 7 clinicians and 75 to 300 patients with diabetes; half had electronic health records, and half were members of an independent practice association. The estimated per-practice cost of implementing data collection and reporting for the diabetes quality improvement program was approximately $15,552 (about $6.23 per diabetic patient per month). The first-year maintenance cost for this effort was approximately $9,553 per practice ($3.83 per diabetic patient per month). The cost of implementing and maintaining a diabetes quality improvement effort that incorporates formal data collection, data management, and reporting is significant and quantifiable. Policymakers must become aware of the financial and cultural impact on primary care practices when considering value-based purchasing initiatives.
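The per-patient figures above can be back-checked arithmetically. The 208-patient panel below is a back-solved assumption consistent with the reported range of 75 to 300 diabetic patients, not a number reported by the study:

```python
# Back-of-envelope check of the per-patient costs quoted in the abstract.
implementation_cost = 15552.0   # first-year implementation cost, per practice ($)
maintenance_cost = 9553.0       # first-year maintenance cost, per practice ($)
patients = 208                  # hypothetical panel size (practices had 75-300)

per_patient_month_impl = implementation_cost / patients / 12
per_patient_month_maint = maintenance_cost / patients / 12

print(round(per_patient_month_impl, 2))   # 6.23
print(round(per_patient_month_maint, 2))  # 3.83
```

Both reported per-patient-per-month figures are reproduced with the same assumed panel size, suggesting the study normalized by a mid-range panel of roughly 200 patients.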
NASA Astrophysics Data System (ADS)
Meshgi, Ali; Schmitter, Petra; Babovic, Vladan; Chui, Ting Fong May
2014-11-01
Developing reliable methods to estimate stream baseflow has been a subject of interest due to its importance in catchment response and sustainable watershed management. However, to date, in the absence of complex numerical models, baseflow is most commonly estimated using statistically derived empirical approaches that do not directly incorporate physically meaningful information. On the other hand, Artificial Intelligence (AI) tools such as Genetic Programming (GP) offer unique capabilities to reduce the complexities of hydrological systems without losing relevant physical information. This study presents a simple-to-use empirical equation, derived with GP, to estimate baseflow time series such that minimal data are required and physical information is preserved. A groundwater numerical model was first adopted to simulate baseflow for a small semi-urban catchment (0.043 km2) located in Singapore. GP was then used to derive an empirical equation relating baseflow time series to time series of groundwater table fluctuations, which are relatively easily measured and are physically related to baseflow generation. The equation was then generalized for approximating baseflow in other catchments and validated for a larger vegetation-dominated basin located in the US (24 km2). Overall, this study used GP to propose a simple-to-use equation to predict baseflow time series from only three parameters: the minimum daily baseflow over the entire period, the catchment area, and groundwater table fluctuations. It serves as an alternative approach for baseflow estimation in ungauged systems when only groundwater table and soil information is available, and is thus complementary to other methods that require discharge measurements.
Shapiro, Allen M.; Ladderud, Jeffery; Yager, Richard M.
2015-01-01
A comparison of the hydraulic conductivity over increasingly larger volumes of crystalline rock was conducted in the Piedmont physiographic region near Bethesda, Maryland, USA. Fluid-injection tests were conducted on intervals of boreholes isolating closely spaced fractures. Single-hole tests were conducted by pumping in open boreholes for approximately 30 min, and an interference test was conducted by pumping a single borehole over 3 days while monitoring nearby boreholes. An estimate of the hydraulic conductivity of the rock over hundreds of meters was inferred from simulating groundwater inflow into a kilometer-long section of a Washington Metropolitan Area Transit Authority tunnel in the study area, and a groundwater modeling investigation over the Rock Creek watershed provided an estimate of the hydraulic conductivity over kilometers. The majority of groundwater flow is confined to relatively few fractures at a given location. Boreholes installed to depths of approximately 50 m have one or two highly transmissive fractures; the transmissivity of the remaining fractures ranges over five orders of magnitude. Estimates of hydraulic conductivity over increasingly larger rock volumes varied by less than half an order of magnitude. While many investigations point to increasing hydraulic conductivity as a function of the measurement scale, a comparison with selected investigations shows that the effective hydraulic conductivity estimated over larger volumes of rock can either increase, decrease, or remain stable as a function of the measurement scale. Caution needs to be exercised in characterizing effective hydraulic properties in fractured rock for the purposes of groundwater management.
Estimating Drug Costs: How do Manufacturer Net Prices Compare with Other Common US Price References?
Mattingly, T Joseph; Levy, Joseph F; Slejko, Julia F; Onwudiwe, Nneka C; Perfetto, Eleanor M
2018-05-12
Drug costs are frequently estimated in economic analyses using wholesale acquisition cost (WAC), but what is the best approach to develop these estimates? Pharmaceutical manufacturers recently released transparency reports disclosing net price increases after accounting for rebates and other discounts. Our objective was to determine whether manufacturer net prices (MNPs) could approximate the discounted prices observed by the U.S. Department of Veterans Affairs (VA). We compared the annual average price discounts voluntarily reported by three pharmaceutical manufacturers with the VA price for specific products from each company. The top 10 drugs by total sales reported in company tax filings for 2016 were included. The discount observed by the VA was determined from each drug's list price, reported as WAC, in 2016. Descriptive statistics were calculated for the observed VA discount, and a weighted price index was calculated using the lowest price to the VA (Weighted VA Index), which was compared with the manufacturer index. The discounted price as a percentage of the WAC ranged from 9 to 74%. All three indexes estimated from the average discount to the VA were at or below the corresponding manufacturer indexes (42 vs. 50% for Eli Lilly, 56 vs. 65% for Johnson & Johnson, and 59 vs. 59% for Merck). Manufacturer-reported average net prices may provide a close approximation of the average discounted price granted to the VA, suggesting they may be a useful proxy for the true pharmacy benefits manager (PBM) or payer cost. However, individual product discounts vary widely, making a standard discount adjustment across multiple products less acceptable.
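A sketch of the two quantities compared above: the per-drug discount the VA observes off the list price (WAC), and a sales-weighted price index. All prices, ratios, and sales weights below are invented for illustration; they are not the study's data:

```python
def va_discount(wac, va_price):
    """Discount observed by the VA, as a fraction of list price (WAC)."""
    return (wac - va_price) / wac

def weighted_index(ratios, sales):
    """Sales-weighted average of net-price-to-WAC ratios across products."""
    return sum(r * s for r, s in zip(ratios, sales)) / sum(sales)

# Hypothetical drug: WAC $100, VA pays $41 -> 59% discount off list.
print(round(va_discount(100.0, 41.0), 2))  # 0.59

# Hypothetical portfolio of three products, weighted by sales volume.
ratios = [0.40, 0.55, 0.70]        # net price as a fraction of WAC
sales = [500.0, 300.0, 200.0]      # sales weights ($M)
print(round(weighted_index(ratios, sales), 3))  # 0.505
```

Comparing such a VA-based index with a manufacturer-reported net-price index is the core of the analysis described in the abstract.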
Historical and future changes of frozen ground in the upper Yellow River Basin
NASA Astrophysics Data System (ADS)
Wang, Taihua; Yang, Dawen; Qin, Yue; Wang, Yuhan; Chen, Yun; Gao, Bing; Yang, Hanbo
2018-03-01
Frozen ground degradation resulting from climate warming on the Tibetan Plateau has aroused wide concern in recent years. In this study, the maximum thickness of seasonally frozen ground (MTSFG) is estimated by the Stefan equation, which is validated using long-term frozen-depth observations. The permafrost distribution is estimated by the temperature at the top of permafrost (TTOP) model, which is validated using borehole observations. The two models are applied to the upper Yellow River Basin (UYRB) to analyze the spatio-temporal changes in frozen ground. The simulated results show that the areal mean MTSFG in the UYRB decreased by 3.47 cm per decade during 1965-2014, and that approximately 23% of the permafrost in the UYRB degraded to seasonally frozen ground during the past 50 years. Using climate data simulated by five General Circulation Models (GCMs) under the Representative Concentration Pathway (RCP) 4.5 scenario, the areal mean MTSFG is projected to decrease by 1.69 to 3.07 cm per decade during 2015-2050, and approximately 40% of the 1991-2010 permafrost extent is projected to degrade into seasonally frozen ground by 2031-2050. This study provides a framework to estimate long-term changes in frozen ground from a combination of multi-source observations at the basin scale, and this framework can be applied to other areas of the Tibetan Plateau. The estimates of frozen ground changes could provide a scientific basis for water resource management and ecological protection under projected future climate changes in headwater regions of the Tibetan Plateau.
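The Stefan equation mentioned above relates frost depth to accumulated freezing degree-days. A minimal sketch follows, with illustrative soil parameters rather than the values calibrated for the UYRB:

```python
import math

def stefan_frost_depth(freezing_index_degC_days, thermal_cond=1.8,
                       latent_heat=334000.0, dry_density=1400.0,
                       water_content=0.20):
    """Stefan solution for frost penetration depth (m).

    freezing_index_degC_days : cumulative freezing degree-days (degC*day)
    thermal_cond             : frozen-soil thermal conductivity (W/m/K)
    latent_heat              : latent heat of fusion of water (J/kg)
    dry_density              : soil dry bulk density (kg/m^3)
    water_content            : gravimetric water content (fraction)
    """
    degree_seconds = freezing_index_degC_days * 86400.0
    return math.sqrt(2.0 * thermal_cond * degree_seconds
                     / (latent_heat * dry_density * water_content))

# For 1000 freezing degree-days with the illustrative parameters above:
print(round(stefan_frost_depth(1000.0), 2))  # ~1.82 m
```

Because depth scales with the square root of the freezing index, a warming-driven reduction in degree-days translates into the gradual MTSFG decrease reported in the abstract.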
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-09-01
The Uranium Mill Tailings Radiation Control Act of 1978 (42 USC §7901 et seq.), hereafter referred to as the UMTRCA, authorized the US Department of Energy (DOE) to clean up two uranium mill tailings processing sites near Slick Rock, Colorado, in San Miguel County. Contaminated materials cover an estimated 63 acres (ac) of the Union Carbide (UC) processing site and 15 ac of the North Continent (NC) processing site. The sites are within 1 mile of each other and are adjacent to the Dolores River. The sites contain concrete foundations of mill buildings, tailings piles, and areas contaminated by windblown and waterborne radioactive tailings materials. The total estimated volume of contaminated materials is approximately 621,300 cubic yards (yd³). In addition to the contamination in the two processing site areas, four vicinity properties (VPs) were found to contain contamination. As a result of the tailings being exposed to the environment, contamination associated with the UC and NC sites has leached into shallow ground water. Surface water has not been affected. The closest residence is approximately 0.3 air miles from either site. The proposed action is to remediate the UC and NC sites by removing all contaminated materials within the designated site boundaries or otherwise associated with the sites, and relocating them to, and stabilizing them at, a location approximately 5 road miles northeast of the sites on land administered by the Bureau of Land Management (BLM).
Chinnasamy, Senthil; Bhatnagar, Ashish; Claxton, Ronald; Das, K C
2010-09-01
Improved wastewater management with beneficial utilization can result in enhanced sustainability and enormous cost savings for industries. Algae cultivation systems, viz. raceway ponds, vertical tank reactors (VTRs), and polybags, were evaluated for mass production of an algal consortium using untreated carpet industry (CI) wastewater. Overall areal biomass productivity was best in polybags (21.1 g m⁻² d⁻¹), followed by VTRs (8.1 g m⁻² d⁻¹) and raceways (5.9 g m⁻² d⁻¹). An estimated biomass productivity of 51 and 77 tons ha⁻¹ year⁻¹ can be achieved using 20 and 30 L capacity polybags, respectively, with a triple-row arrangement. Biomass obtained from the algal consortium was rich in proteins (approximately 53.8%) and low in carbohydrates (approximately 15.7%) and lipids (approximately 5.3%). The consortium cultivated in polybags has the potential to produce 12,128 m³ of biomethane ha⁻¹ year⁻¹. To be economically viable for bioenergy/biofuel production, the capital expenditure for polybag reactors needs to be reduced to $10 m⁻².
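The annual productivities quoted above follow from the daily areal rates by unit conversion (multiply by 365 days, then by 0.01 to convert g/m² to t/ha); a quick check:

```python
def areal_to_annual(g_m2_d):
    """Convert areal productivity (g m^-2 d^-1) to t ha^-1 yr^-1."""
    return g_m2_d * 365 * 0.01  # 1 g/m^2 = 0.01 t/ha

print(round(areal_to_annual(21.1), 1))  # 77.0  (polybags, upper estimate)
print(round(areal_to_annual(8.1), 1))   # 29.6  (VTRs)
print(round(areal_to_annual(5.9), 1))   # 21.5  (raceways)
```

The 21.1 g m⁻² d⁻¹ polybag rate reproduces the reported 77 t ha⁻¹ yr⁻¹ upper estimate; the 51 t ha⁻¹ yr⁻¹ figure for the smaller polybags corresponds to a daily rate of about 14 g m⁻² d⁻¹.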
Uncovering state-dependent relationships in shallow lakes using Bayesian latent variable regression.
Vitense, Kelsey; Hanson, Mark A; Herwig, Brian R; Zimmer, Kyle D; Fieberg, John
2018-03-01
Ecosystems sometimes undergo dramatic shifts between contrasting regimes. Shallow lakes, for instance, can transition between two alternative stable states: a clear state dominated by submerged aquatic vegetation and a turbid state dominated by phytoplankton. Theoretical models suggest that critical nutrient thresholds differentiate three lake types: highly resilient clear lakes, lakes that may switch between clear and turbid states following perturbations, and highly resilient turbid lakes. For effective and efficient management of shallow lakes and other systems, managers need tools to identify critical thresholds and state-dependent relationships between driving variables and key system features. Using shallow lakes as a model system for which alternative stable states have been demonstrated, we developed an integrated framework using Bayesian latent variable regression (BLR) to classify lake states, identify critical total phosphorus (TP) thresholds, and estimate steady state relationships between TP and chlorophyll a (chl a) using cross-sectional data. We evaluated the method using data simulated from a stochastic differential equation model and compared its performance to k-means clustering with regression (KMR). We also applied the framework to data comprising 130 shallow lakes. For simulated data sets, BLR had high state classification rates (median/mean accuracy >97%) and accurately estimated TP thresholds and state-dependent TP-chl a relationships. Classification and estimation improved with increasing sample size and decreasing noise levels. Compared to KMR, BLR had higher classification rates and better approximated the TP-chl a steady state relationships and TP thresholds. We fit the BLR model to three different years of empirical shallow lake data, and managers can use the estimated bifurcation diagrams to prioritize lakes for management according to their proximity to thresholds and chance of successful rehabilitation. 
Our model improves upon previous methods for shallow lakes because it allows classification and regression to occur simultaneously and inform one another, directly estimates TP thresholds and the uncertainty associated with thresholds and state classifications, and enables meaningful constraints to be built into models. The BLR framework is broadly applicable to other ecosystems known to exhibit alternative stable states in which regression can be used to establish relationships between driving variables and state variables.
Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization
NASA Technical Reports Server (NTRS)
Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.
2012-01-01
The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera pose for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process of promoting selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
NASA Astrophysics Data System (ADS)
Akyurek, Z.; Bozoglu, B.; Girayhan, T.
2015-12-01
Flooding has the potential to cause significant impacts to economic activities as well as to disrupt or displace populations. Changing climate regimes, such as extreme precipitation events, increase flood vulnerability and put additional stresses on infrastructure. In this study, flood modelling was carried out for an urbanized area, Samsun-Terme, in the Black Sea region of Turkey. MIKE21 with a flexible grid was used for 2-dimensional shallow water flow modelling. Maps at 1/1000 scale, including buildings, for the urbanized area and 1/5000-scale maps for the rural parts were used to obtain the DTM needed for the flood modelling. The bathymetry of the river was obtained from additional surveys. The main river passing through the urbanized area has a capacity of Q5, according to a design discharge obtained by a simple ungauged-catchment estimation that depends on catchment area only. The effects of existing structures, such as bridges across the river, on flooding are presented, and upstream structural measures are studied on a scenario basis. Four sub-catchments of the Terme River are considered to contribute to the downstream flooding. Under existing conditions, the meanders of the river have a major effect on the flood situation and lead to approximately 35% reduction in the peak discharge between the upstream and downstream of the river. It is observed that if the flow from the upstream catchments is retarded by detention ponds constructed in at least two of the upstream catchments, the estimated Q100 flood can be conveyed by the river without overtopping the river channel. The operation of the upstream detention ponds and the scenarios for conveying Q500 without causing flooding are also presented. Structural management measures to address changes in flood characteristics in water management planning are discussed. Flood risk is obtained by using the flood hazard maps and water depth-damage functions plotted for a variety of building types and occupancies.
The estimated mean annual hazard for the area is calculated as $340,000, and it is estimated that the upstream structural management measures can decrease the direct economic risk by 11% for the 500-year return period flood.
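A hedged sketch of how a mean annual damage figure like the roughly $340,000 above is typically computed: integrate damage against annual exceedance probability across the modelled return periods (trapezoidal rule). The return periods and damages below are illustrative, not the study's values:

```python
def expected_annual_damage(return_periods, damages):
    """Expected annual damage from (return period, damage) pairs."""
    pairs = sorted(zip(return_periods, damages))   # ascending return period
    probs = [1.0 / t for t, _ in pairs]            # annual exceedance probabilities
    dmg = [d for _, d in pairs]
    ead = 0.0
    for i in range(len(pairs) - 1):
        # probability interval times the average damage over that interval
        ead += (probs[i] - probs[i + 1]) * (dmg[i] + dmg[i + 1]) / 2.0
    return ead

T = [5, 25, 100, 500]                 # modelled return periods (years)
D = [0.0, 1.0e6, 4.0e6, 9.0e6]        # estimated damage at each event ($)
print(round(expected_annual_damage(T, D)))  # 207000
```

Summing depth-damage losses over hazard-map cells for each return period produces the damage values; the integration then yields the mean annual risk figure used for comparing management scenarios.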
Bain, Lorna; Mierdel, Sandra; Thorne, Carter
2012-01-01
Researchers, hospital administrators and governments are striving to define competencies in interprofessional care and education, as well as to identify effective models in chronic disease management. For more than 25 years The Arthritis Program (TAP) at Southlake Regional Health Centre in Newmarket, Ontario, has actively practiced within these two interrelated priorities, which are now at the top of the healthcare agenda in Ontario and Canada. The approximately 135 different rheumatic conditions are the primary cause of long-term disability in Canada, affecting those from youth to the senior years, with an economic burden estimated at $4.4 billion (CAD$) annually, and growing. For the benefit of healthcare managers and their clients with chronic conditions, this article discusses TAP's history and demonstrable success, predicated on an educational model of patient self-management and self-efficacy. Also outlined are TAP's contributions in supporting evidence-based best practices in interprofessional collaboration and chronic disease management; approaches that are arguably understudied and under-practiced. Next steps for TAP include a larger role in empirical research in chronic-disease management and integration of a formal training program to benefit health professionals launching or expanding their interprofessional programs using TAP as the dynamic clinical example.
Bolliger, Janine; Edwards, Thomas C.; Eggenberg, Stefan; Ismail, Sascha; Seidl, Irmi; Kienast, Felix
2011-01-01
Abandonment of agricultural land has resulted in forest regeneration in species-rich dry grasslands across European mountain regions and threatens conservation efforts in this vegetation type. To support national conservation strategies, we used a site-selection algorithm (MARXAN) to find optimum sets of floristic regions (reporting units) that contain grasslands of high conservation priority. We sought optimum sets that would accommodate 136 important dry-grassland species and that would minimize forest regeneration and the costs of management needed to forestall predicted forest regeneration. We did not consider other conservation elements of dry grasslands, such as animal species richness, cultural heritage, and changes due to climate change. Optimal sets that included 95–100% of the dry-grassland species encompassed an average of 56–59 floristic regions (SD 5). This is about 15% of the approximately 400 floristic regions that contain dry-grassland sites and translates to 4,800–5,300 ha of dry grassland out of a total of approximately 23,000 ha for the entire study area. Projected costs to manage the grasslands in these optimum sets ranged from CHF (Swiss francs) 5.2 to 6.0 million/year. This is only 15–20% of the current total estimated cost of approximately CHF 30–45 million/year required if all dry grasslands were to be protected. The grasslands of the optimal sets may be viewed as core sites in a national conservation strategy.
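MARXAN itself solves the site-selection problem with simulated annealing; as a simplified, clearly hypothetical stand-in, a greedy heuristic illustrates the underlying trade-off of covering all target species at minimum management cost. The regions, costs, and species below are invented:

```python
def greedy_site_selection(regions, targets):
    """regions: {name: (cost, species_set)}. Returns chosen region names.

    Greedily picks the region covering the most still-uncovered target
    species per unit cost, until all targets are represented.
    """
    uncovered = set(targets)
    chosen = []
    while uncovered:
        candidates = [(n, c, s) for n, (c, s) in regions.items() if n not in chosen]
        name, cost, species = max(candidates,
                                  key=lambda t: len(t[2] & uncovered) / t[1])
        if not species & uncovered:
            break  # remaining targets occur in no unchosen region
        chosen.append(name)
        uncovered -= species
    return chosen

regions = {
    "A": (2.0, {"s1", "s2", "s3"}),   # cheap, covers three species
    "B": (1.0, {"s3"}),               # cheapest, but redundant with A
    "C": (3.0, {"s4", "s5"}),         # only source of s4 and s5
}
print(greedy_site_selection(regions, {"s1", "s2", "s3", "s4", "s5"}))  # ['A', 'C']
```

Simulated annealing explores far more of the solution space than this greedy pass and can escape locally optimal but globally poor sets, which is why MARXAN is preferred for real reserve design.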
The Effect of Nurse Practitioner Co-Management on the Care of Geriatric Conditions
Reuben, David B.; Ganz, David A.; Roth, Carol P.; McCreath, Heather E.; Ramirez, Karina D.; Wenger, Neil S.
2013-01-01
Background/Objectives: The quality of care for geriatric conditions remains poor. The Assessing Care of Vulnerable Elders (ACOVE)-2 model (case finding, delegation of data collection, structured visit notes, physician and patient education, and linkage to community resources) improves the quality of care for geriatric conditions when implemented by primary care physicians (PCPs) or by nurse practitioners (NPs) co-managing care with an academic geriatrician. However, it is unclear whether community-based PCP-NP co-management can achieve similar results. Design: Case study. Setting: Two community-based primary care practices. Participants: Patients aged >75 years who screened positive for at least one of four conditions: falls, urinary incontinence (UI), dementia, and depression. Intervention: The ACOVE-2 model augmented by NP co-management of conditions. Measurements: Quality of care, assessed by medical record review using ACOVE-3 quality indicators (QIs); patients receiving co-management were compared with those who received PCP care alone in the same practices. Results: Of 1084 screened patients, 658 (61%) screened positive for at least one condition; 485 of these patients were randomly selected for chart review and triggered a mean of 7 QIs. An NP saw approximately half (49%) for co-management. Overall, patients received 57% of recommended care. Quality scores for all conditions (falls: 80% versus 34%; UI: 66% versus 19%; dementia: 59% versus 38%) except depression (63% versus 60%) were higher for patients seen by an NP. In analyses adjusted for patient gender and age, number of conditions, site, and an NP estimate of medical management style, NP co-management remained significantly associated with receiving recommended care (p<0.001), as did the NP estimate of medical management style (p=0.02). Conclusion: Compared with usual care, NP co-management using the ACOVE-2 model is associated with better quality of care for geriatric conditions in community-based primary care. PMID:23772723
Epic Flooding in Georgia, 2009
Gotvald, Anthony J.; McCallum, Brian E.
2010-01-01
Metropolitan Atlanta, September 2009 Floods: The epic floods experienced in the Atlanta area in September 2009 were extremely rare. Eighteen streamgages in the Metropolitan Atlanta area had flood magnitudes much greater than the estimated 0.2-percent (500-year) annual exceedance probability. The Federal Emergency Management Agency (FEMA) reported that 23 counties in Georgia were declared disaster areas because of this flood and that 16,981 homes and 3,482 businesses were affected by floodwaters. Ten lives were lost in the flood. Total estimated damages exceed $193 million (H.E. Longenecker, Federal Emergency Management Agency, written commun., November 2009). On Sweetwater Creek near Austell, Ga., just north of Interstate 20, the peak stage was more than 6 feet higher than the estimated peak stage of the 0.2-percent (500-year) flood. Flood magnitudes in Cobb County on Sweetwater, Butler, and Powder Springs Creeks greatly exceeded the estimated 0.2-percent (500-year) floods for these streams. In Douglas County, the Dog River at Ga. Highway 5 near Fairplay had a peak stage nearly 20 feet higher than the estimated peak stage of the 0.2-percent (500-year) flood. On the Chattahoochee River, the U.S. Geological Survey (USGS) gage at Vinings reached the highest level recorded in the past 81 years. Gwinnett, DeKalb, Fulton, and Rockdale Counties also had record flooding.
South Georgia, March and April 2009 Floods: The March and April 2009 floods in South Georgia were smaller in magnitude than the September floods but still caused significant damage. No lives were lost in this flood. Approximately $60 million in public infrastructure damage occurred to roads, culverts, bridges, and a water treatment facility (Joseph T. McKinney, Federal Emergency Management Agency, written commun., July 2009). Flow at the Satilla River near Waycross exceeded the 0.5-percent (200-year) flood. Flows at seven other stations in South Georgia exceeded the 1-percent (100-year) flood.
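The flood frequencies above are stated both as annual exceedance probabilities (AEP) and return periods; the two are reciprocals of each other:

```python
def return_period(aep_percent):
    """Return period (years) from annual exceedance probability (%)."""
    return 100.0 / aep_percent

def aep(return_period_years):
    """Annual exceedance probability (as a fraction) from return period."""
    return 1.0 / return_period_years

print(round(return_period(0.2)))  # 500 -> the "0.2-percent (500-year)" flood
print(round(return_period(0.5)))  # 200
print(round(return_period(1.0)))  # 100
```

USGS reporting favors the AEP phrasing because "500-year flood" is easily misread as a guarantee of one event per 500 years rather than a 0.2% chance of exceedance in any given year.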
Growth status and estimated growth rate of youth football players: a community-based study.
Malina, Robert M; Morano, Peter J; Barron, Mary; Miller, Susan J; Cumming, Sean P
2005-05-01
To characterize the growth status of participants in community-sponsored youth football programs and to estimate rates of growth in height and weight. Mixed-longitudinal over 2 seasons. Two communities in central Michigan. Members of 33 youth football teams in 2 central Michigan communities in the 2000 and 2001 seasons (Mid-Michigan PONY Football League). Height and weight of all participants were measured prior to each season, 327 in 2000 and 326 in 2001 (n = 653). The body mass index (kg/m2) was calculated. Heights and weights did not differ from season to season or between the communities; the data were pooled and treated cross-sectionally. Increments of growth in height and weight were estimated for 166 boys with 2 measurements approximately 1 year apart to provide an estimate of growth rate. Growth status (size attained) of youth football players was evaluated relative to reference data (CDC) for American boys, and estimated growth rates relative to reference values from 2 longitudinal studies of American boys. Median heights of youth football players approximate the 75th percentiles, while median weights approximate the 75th percentiles through 11 years and then drift toward the 90th percentiles of the reference. Median body mass indexes of youth football players fluctuate about the 85th percentiles of the reference. Estimated growth rates in height approximate the reference and may suggest earlier maturation, while estimated growth rates in weight exceed the reference. Youth football players are taller and especially heavier than reference values for American boys. Estimated rates of growth in height approximate medians for American boys and suggest earlier maturation. Estimated rates of growth in weight exceed those of the reference and may place many youth football players at risk for overweight/obesity, which in turn may be a risk factor for injury.
Mueller, Silke M; Schiebener, Johannes; Delazer, Margarete; Brand, Matthias
2018-01-22
Many decision situations in everyday life involve mathematical considerations. In decisions under objective risk, i.e., when explicit numeric information is available, executive functions and abilities to handle exact numbers and ratios are predictors of objectively advantageous choices. Although still debated, exact numeric abilities, e.g., normative calculation skills, are assumed to be related to approximate number processing skills. The current study investigates the effects of approximative numeric abilities on decision making under objective risk. Participants (N = 153) performed a paradigm measuring number-comparison, quantity-estimation, risk-estimation, and decision-making skills on the basis of rapid dot comparisons. In addition, a risky decision-making task with exact numeric information was administered, and tasks measuring executive functions and exact numeric abilities (e.g., mental calculation and ratio-processing skills) were conducted. Approximative numeric abilities significantly predicted advantageous decision making, even beyond the effects of executive functions and exact numeric skills. In particular, being able to make accurate risk estimations seemed to contribute to superior choices. We recommend that approximation skills and approximate number processing be the subject of future investigations on decision making under risk.
A carbon balance model for the Great Dismal Swamp ecosystem
Sleeter, Rachel; Sleeter, Benjamin M.; Williams, Brianna; Hogan, Dianna; Hawbaker, Todd J.; Zhu, Zhiliang
2017-01-01
Background: Carbon storage potential has become an important consideration for land management and planning in the United States. The ability to assess ecosystem carbon balance can help land managers understand the benefits and tradeoffs between different management strategies. This paper demonstrates an application of the Land Use and Carbon Scenario Simulator (LUCAS) model developed for local-scale land management at the Great Dismal Swamp National Wildlife Refuge. We estimate the net ecosystem carbon balance by considering past ecosystem disturbances resulting from storm damage, fire, and land management actions including hydrologic inundation, vegetation clearing, and replanting. Results: We modeled the annual ecosystem carbon stock and flow rates for the 30-year historic time period of 1985–2015, using age-structured forest growth curves and known data for disturbance events and management activities. The 30-year total net ecosystem production was estimated to be a net sink of 0.97 Tg C. When a hurricane and six historic fire events were considered in the simulation, the Great Dismal Swamp became a net source of 0.89 Tg C. The cumulative above- and below-ground carbon loss estimated from the South One and Lateral West fire events totaled 1.70 Tg C, while management activities removed an additional 0.01 Tg C. The carbon loss in below-ground biomass alone totaled 1.38 Tg C, with the balance (0.31 Tg C) coming from above-ground biomass and detritus. Conclusions: Natural disturbances substantially impact net ecosystem carbon balance in the Great Dismal Swamp. Through alternative management actions such as re-wetting, below-ground biomass loss may have been avoided, resulting in added carbon storage capacity of 1.38 Tg.
Based on two model assumptions used to simulate the peat system (a burn scar totaling 70 cm in depth, and a soil carbon accumulation rate of 0.36 t C ha−1 year−1 for Atlantic white cedar), the total soil carbon loss from the South One and Lateral West fires would take approximately 1740 years to re-amass. Due to the impractical time horizon this presents for land managers, this particular loss is considered permanent. Going forward, the baseline carbon stock and flow parameters presented here will be used as reference conditions to model future scenarios of land management and disturbance.
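The ~1740-year recovery horizon is a simple division of per-hectare soil carbon loss by the accumulation rate. A minimal sketch; note the ~626 t C/ha loss figure below is back-derived from the abstract's reported numbers (1740 years × 0.36 t C/ha/yr), not stated directly:

```python
def recovery_years(soil_c_loss_t_per_ha: float, accumulation_t_per_ha_yr: float) -> float:
    """Years for peat soil carbon to re-accumulate after a loss, at a constant rate."""
    return soil_c_loss_t_per_ha / accumulation_t_per_ha_yr

# Implied per-hectare loss consistent with the abstract's 1740-year figure
years = recovery_years(626.4, 0.36)
print(round(years))  # → 1740
```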
Thomas, Rebecca L.; Fellowes, Mark D. E.; Baker, Philip J.
2012-01-01
Urban domestic cat (Felis catus) populations can attain exceedingly high densities and are not limited by natural prey availability. This has generated concerns that they may negatively affect prey populations, leading to calls for management. We enlisted cat-owners to record prey returned home to estimate patterns of predation by free-roaming pets in different localities within the town of Reading, UK, and questionnaire surveys were used to quantify attitudes to different possible management strategies. Prey return rates were highly variable: only 20% of cats returned ≥4 dead prey annually. Consequently, approximately 65% of owners received no prey in a given season, but this declined to 22% after eight seasons. The estimated mean predation rate was 18.3 prey cat−1 year−1 but this varied markedly both spatially and temporally: per capita predation rates declined with increasing cat density. Comparisons with estimates of the density of six common bird prey species indicated that cats killed numbers equivalent to adult density on c. 39% of occasions. Population modeling studies suggest that such predation rates could significantly reduce the size of local bird populations for common urban species. Conversely, most urban residents did not consider cat predation to be a significant problem. Collar-mounted anti-predation devices were the only management action acceptable to the majority of urban residents (65%), but were less acceptable to cat-owners because of perceived risks to their pets; only 24% of cats were fitted with such devices. Overall, cat predation did appear to be of sufficient magnitude to affect some prey populations, although further investigation of some key aspects of cat predation is warranted. Management of the predation behavior of urban cat populations in the UK is likely to be challenging and achieving this would require considerable engagement with cat owners. PMID:23173057
Quantitative assessment of medical waste generation in the capital city of Bangladesh
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patwary, Masum A.; O'Hare, William Thomas; Street, Graham
2009-08-15
There is a concern that mismanagement of medical waste in developing countries may be a significant risk factor for disease transmission. Quantitative estimation of medical waste generation is needed to estimate the potential risk and as a basis for any waste management plan. Dhaka City, the capital of Bangladesh, is an example of a major city in a developing country where there has been no rigorous estimation of medical waste generation based upon a thorough scientific study. This study used a statistically designed sampling of waste generation in a broad range of Health Care Establishments (HCEs) to indicate that the amount of waste produced in Dhaka can be estimated to be 37 ± 5 tons per day. These estimates were obtained by stringent weighing of waste in a carefully chosen, representative sample of HCEs, including non-residential diagnostic centres. The proportion of this waste that would be classified as hazardous waste by World Health Organisation (WHO) guidelines was found to be approximately 21%. The amount of waste, and the proportion of hazardous waste, was found to vary significantly with the size and type of HCE.
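The headline figures combine by simple arithmetic; a quick sketch using only the values reported in the abstract:

```python
total_per_day = 37.0        # estimated medical waste, tons/day (reported as 37 ± 5)
hazardous_fraction = 0.21   # share classified hazardous under WHO guidelines

hazardous_per_day = total_per_day * hazardous_fraction  # ~7.8 tons/day hazardous
annual_total = total_per_day * 365                      # ~13,500 tons/year overall
```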
STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.
Fan, Jianqing; Xue, Lingzhou; Zou, Hui
2014-06-01
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this theoretical gap, which has remained open for over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
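A minimal numpy sketch of the one-step local linear approximation for SCAD-penalized least squares: a lasso initializer, then one reweighted lasso whose per-coefficient weights are the SCAD derivative at the initial estimate. The constant a = 3.7 is the conventional SCAD choice; the small coordinate-descent solver is a simplified stand-in for illustration, not the authors' implementation:

```python
import numpy as np

def scad_deriv(t, lam, a=3.7):
    """Derivative of the SCAD penalty at |t| (weights for the reweighted lasso step)."""
    t = np.abs(t)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))

def weighted_lasso_cd(X, y, w, n_sweeps=200):
    """Coordinate descent for 0.5*||y - Xb||^2 + sum_j w_j*|b_j|."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    r = y.copy()
    for _ in range(n_sweeps):
        for j in range(p):
            r = r + X[:, j] * b[j]          # add coordinate j back into residual
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - w[j], 0.0) / col_sq[j]
            r = r - X[:, j] * b[j]
    return b

def one_step_lla(X, y, lam):
    """Lasso initializer, then ONE reweighted lasso with SCAD-derivative weights."""
    b0 = weighted_lasso_cd(X, y, np.full(X.shape[1], lam))
    return weighted_lasso_cd(X, y, scad_deriv(b0, lam))

# Sparse toy problem: only coordinates 0 and 9 are active
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta = np.zeros(10)
beta[0], beta[9] = 3.0, -2.0
y = X @ beta + 0.1 * rng.normal(size=200)
b_hat = one_step_lla(X, y, lam=2.0)
```

With the SCAD weights, coefficients that come out large in the first step are penalized only lightly (or not at all) in the second, which is what yields the near-unbiased, oracle-like behavior the paper analyzes.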
Melnychuk, Michael C; Banobi, Jeannette A; Hilborn, Ray
2013-01-01
There is considerable variability in the status of fish populations around the world and a poor understanding of how specific management characteristics affect populations. Overfishing is a major problem in many fisheries, but in some regions the recent tendency has been to exploit stocks at levels below their maximum sustainable yield. In Western North American groundfish fisheries, the status of individual stocks and management systems among regions are highly variable. In this paper, we show the current status of groundfish stocks from Alaska, British Columbia, and the U.S. West Coast, and quantify the influence on stock status of six management tactics often hypothesized to affect groundfish. These tactics are: the use of harvest control rules with estimated biological reference points; seasonal closures; marine reserves; bycatch constraints; individual quotas (i.e., 'catch shares'); and gear type. Despite the high commercial value of many groundfish and consequent incentives for maintaining stocks at their most productive levels, most stocks were managed extremely conservatively, with current exploitation rates at only 40% of management targets and biomass 33% above target biomass on average. Catches rarely exceeded TACs but on occasion were far below TACs (mean catch:TAC ratio of 57%); approximately $150 million of potential landed value was foregone annually by underutilizing TACs. The use of individual quotas, marine reserves, and harvest control rules with estimated limit reference points had little overall effect on stock status. More valuable fisheries were maintained closer to management targets and were less variable over time than stocks with lower catches or ex-vessel prices. 
Together these results suggest there is no single effective management measure for meeting conservation objectives; if scientifically established quotas are set and enforced, a variety of means can be used to ensure that exploitation rates and biomass levels are near to or more conservative than management targets.
Cronin, Matthew A; Amstrup, Steven C; Talbot, Sandra L; Sage, George K; Amstrup, Kristin S
2009-01-01
Polar bears (Ursus maritimus) are unique among bears in that they are adapted to the Arctic sea ice environment. Genetic data are useful for understanding their evolution and can contribute to management. We assessed parentage and relatedness of polar bears in the southern Beaufort Sea, Alaska, with genetic data and field observations of age, sex, and mother-offspring and sibling relationships. Genotypes at 14 microsatellite DNA loci for 226 bears indicate that genetic variation is comparable to other populations of polar bears with mean number of alleles per locus of 7.9 and observed and expected heterozygosity of 0.71. The genetic data verified 60 field-identified mother-offspring pairs and identified 10 additional mother-cub pairs and 48 father-offspring pairs. The entire sample of related and unrelated bears had a mean pairwise relatedness index (rxy) of approximately zero, parent-offspring and siblings had rxy of approximately 0.5, and 5.2% of the samples had rxy values within the range expected for parent-offspring. Effective population size (Ne = 277) and the ratio of Ne to total population size (Ne/N = 0.182) were estimated from the numbers of reproducing males and females. Ne estimates with genetic methods gave variable results. Our results verify and expand field data on reproduction by females and provide new data on reproduction by males and estimates of relatedness and Ne in a polar bear population.
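The reported Ne/N ratio lets one back out the implied total population size; a one-line check from the abstract's values:

```python
Ne = 277                 # effective population size
ratio = 0.182            # reported Ne/N
N_total = Ne / ratio     # implied total population size, about 1522 bears
```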
Municipal solid waste recycling and the significance of informal sector in urban China.
Linzner, Roland; Salhofer, Stefan
2014-09-01
The informal sector is active in the collection, processing and trading of recyclable materials in urban China. Formal waste management organisations have established pilot schemes for source separation of recyclables, but this strategy is still in its infancy. The amounts of recyclables informally picked out of the municipal solid waste stream are unknown, as informal waste workers do not record their activities. This article estimates the size and significance of the current informal recycling system with a focus on the collection of recyclables. Most of the reviewed literature finds that official data mainly reflect 'municipal solid waste collected and transported', whereas less information is available on 'real' waste generation rates at the source. Based on a literature review, three variables (the number of informal waste workers involved in collection activities, the amount collected daily per informal collector, and the number of working days) are used to estimate the yearly recyclable amounts that are informally diverted from municipal solid waste. The results show an interval of approximately 0.56%-0.93% of the urban population, or 3.3-5.6 million people, involved in informal waste collection and recycling activities in urban China. This is equivalent to estimated informal recycling rates of approximately 17-38 w/w% of the municipal solid waste generated. Despite some uncertainties in these assessments, it can be concluded that a significant share of recyclables is collected and processed by informal waste workers. © The Author(s) 2014.
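The estimation is a product of three variables. A sketch of the calculation; the per-collector amount and working days below are hypothetical placeholders, since the abstract does not report the paper's exact per-collector figures:

```python
def annual_informal_tonnage(workers: float, kg_per_day: float, working_days: int) -> float:
    """Yearly recyclables informally diverted, in tonnes (workers x kg/day x days / 1000)."""
    return workers * kg_per_day * working_days / 1000.0

# Worker counts from the abstract; kg/day and working days are illustrative assumptions
low  = annual_informal_tonnage(3.3e6, 50, 300)   # lower bound on workers
high = annual_informal_tonnage(5.6e6, 50, 300)   # upper bound on workers
```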
orbit-estimation: Fast orbital parameters estimator
NASA Astrophysics Data System (ADS)
Mackereth, J. Ted; Bovy, Jo
2018-04-01
orbit-estimation tests and evaluates the Stäckel approximation method for estimating orbit parameters in galactic potentials. It relies on the approximation of the Galactic potential as a Stäckel potential, in a prolate confocal coordinate system, under which the vertical and horizontal motions decouple. By solving the Hamilton-Jacobi equations at the turning points of the horizontal and vertical motions, it is possible to determine the spatial boundary of the orbit, and hence calculate the desired orbit parameters.
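The turning-point idea can be illustrated without the Stäckel machinery: in a spherical potential, pericenter and apocenter are the radii where the radial kinetic energy vanishes, i.e., the roots of E = Φ(r) + L²/(2r²). A toy sketch in a logarithmic (flat rotation curve) potential, using pure-Python bisection; this is not the package's actual algorithm, just the underlying principle:

```python
import math

def phi(r, v0=1.0):
    """Logarithmic halo potential (flat rotation curve), a stand-in for the Galaxy."""
    return v0 ** 2 * math.log(r)

def radial_energy_excess(r, E, L):
    """E - Phi(r) - L^2/(2 r^2); zero at the orbit's radial turning points."""
    return E - phi(r) - L ** 2 / (2.0 * r ** 2)

def turning_point(E, L, lo, hi, tol=1e-10):
    """Bisection for a root of the radial energy equation on [lo, hi]."""
    f_lo = radial_energy_excess(lo, E, L)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = radial_energy_excess(mid, E, L)
        if f_mid * f_lo > 0:
            lo, f_lo = mid, f_mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# A mildly eccentric orbit: r = 1, tangential velocity 0.8, radial velocity 0.3
r0, vt, vr = 1.0, 0.8, 0.3
L = r0 * vt
E = 0.5 * (vt ** 2 + vr ** 2) + phi(r0)
peri = turning_point(E, L, 1e-3, r0)   # inner root: pericenter
apo  = turning_point(E, L, r0, 50.0)   # outer root: apocenter
ecc = (apo - peri) / (apo + peri)      # eccentricity from the turning points
```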
USDA-ARS's Scientific Manuscript database
Grazing lands are the most dominant land cover type in the United States, with approximately 311.7 Mha being defined as rangelands. Approximately 53% of the Nation's rangelands are owned and managed by the private sector while the Federal government manages approximately 43% of the Nation's rangelan...
Production, use, and fate of all plastics ever made
Geyer, Roland; Jambeck, Jenna R.; Law, Kara Lavender
2017-01-01
Plastics have outgrown most man-made materials and have long been under environmental scrutiny. However, robust global information, particularly about their end-of-life fate, is lacking. By identifying and synthesizing dispersed data on production, use, and end-of-life management of polymer resins, synthetic fibers, and additives, we present the first global analysis of all mass-produced plastics ever manufactured. We estimate that 8300 million metric tons (Mt) as of virgin plastics have been produced to date. As of 2015, approximately 6300 Mt of plastic waste had been generated, around 9% of which had been recycled, 12% was incinerated, and 79% was accumulated in landfills or the natural environment. If current production and waste management trends continue, roughly 12,000 Mt of plastic waste will be in landfills or in the natural environment by 2050. PMID:28776036
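The fate percentages partition the cumulative waste total exactly; a quick check with the abstract's numbers:

```python
waste_mt = 6300                       # cumulative plastic waste, million metric tons (Mt)
recycled = 0.09 * waste_mt            # ~567 Mt
incinerated = 0.12 * waste_mt         # ~756 Mt
discarded = 0.79 * waste_mt           # ~4977 Mt, landfills or the natural environment
```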
Estimating aboveground live understory vegetation carbon in the United States
NASA Astrophysics Data System (ADS)
Johnson, Kristofer D.; Domke, Grant M.; Russell, Matthew B.; Walters, Brian; Hom, John; Peduzzi, Alicia; Birdsey, Richard; Dolan, Katelyn; Huang, Wenli
2017-12-01
Despite the key role that understory vegetation plays in ecosystems and the terrestrial carbon cycle, it is often overlooked and has few quantitative measurements, especially at national scales. To understand the contribution of understory carbon to the United States (US) carbon budget, we developed an approach that relies on field measurements of understory vegetation cover and height on US Department of Agriculture Forest Service, Forest Inventory and Analysis (FIA) subplots. Allometric models were developed to estimate aboveground understory carbon. A spatial model based on stand characteristics and remotely sensed data was also applied to estimate understory carbon on all FIA plots. We found that most understory carbon was comprised of woody shrub species (64%), followed by nonwoody forbs and graminoid species (35%) and seedlings (1%). The largest estimates were found in temperate or warm humid locations such as the Pacific Northwest and southeastern US, thus following the same broad trend as aboveground tree biomass. The average understory aboveground carbon density was estimated to be 0.977 Mg ha−1, for a total estimate of 272 Tg carbon across all managed forest land in the US (approximately 2% of the total aboveground live tree carbon pool). This estimate is less than half of previous FIA modeled estimates that did not rely on understory measurements, suggesting that this pool may currently be overestimated in US National Greenhouse Gas reporting.
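The density and the national total together imply the managed forest area involved; a back-of-envelope check (1 Tg = 10⁶ Mg):

```python
total_c_tg = 272            # total understory carbon, Tg
density_mg_ha = 0.977       # mean density, Mg C per hectare

# Implied managed forest area: (272e6 Mg) / (0.977 Mg/ha) ~ 278 million ha
area_mha = total_c_tg * 1e6 / density_mg_ha / 1e6
```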
Wang, Yan-Cang; Yang, Gui-Jun; Zhu, Jin-Shan; Gu, Xiao-He; Xu, Peng; Liao, Qin-Hong
2014-07-01
For improving the estimation accuracy of soil organic matter content of the north fluvo-aquic soil, wavelet transform technology is introduced. The soil samples were collected from Tongzhou district and Shunyi district in Beijing city, and the data source is soil hyperspectral data obtained under laboratory conditions. First, the discrete wavelet transform decomposes the hyperspectral data into approximate coefficients and detail coefficients. Then, the correlation between the approximate coefficients, the detail coefficients, and organic matter content was analyzed, and the sensitive bands for organic matter were screened. Finally, models were established to estimate soil organic matter content using partial least squares regression (PLSR). Results show that the NIR bands contributed more than the visible bands to the organic matter estimation models; the approximate coefficients estimated organic matter content better than the detail coefficients; the estimation precision of the detail coefficients for soil organic matter content decreased as spectral resolution decreased; and, compared with three commonly used transforms of soil spectral reflectance, the wavelet transform improved the ability of soil spectra to estimate organic matter content. The best models established from the approximate and detail coefficients achieved high accuracy: the coefficient of determination (R2) and root mean square error (RMSE) of the best model for the approximate coefficients were 0.722 and 0.221, respectively, and the R2 and RMSE of the best model for the detail coefficients were 0.670 and 0.255, respectively.
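A one-level discrete wavelet transform splits a spectrum into approximation (low-pass) and detail (high-pass) coefficients, the two coefficient sets correlated against organic matter above. A minimal Haar sketch in numpy; the paper's actual wavelet basis and decomposition depth are not specified in the abstract:

```python
import numpy as np

def haar_dwt(signal):
    """One-level Haar DWT: approximation (low-pass) and detail (high-pass) halves."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                      # pad odd-length signals to even length
        s = np.append(s, s[-1])
    even, odd = s[0::2], s[1::2]
    approx = (even + odd) / np.sqrt(2)  # smoothed, half-resolution spectrum
    detail = (even - odd) / np.sqrt(2)  # local differences (fine structure)
    return approx, detail

# Toy "spectrum": a smooth trend plus fine structure
x = np.linspace(0, 1, 64)
spectrum = np.exp(-x) + 0.05 * np.sin(40 * np.pi * x)
a, d = haar_dwt(spectrum)
```

Because the Haar transform is orthonormal, the two coefficient sets together preserve the signal's energy, so nothing is lost in the split.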
Estimating annualized earthquake losses for the conterminous United States
Jaiswal, Kishor S.; Bausch, Douglas; Chen, Rui; Bouabid, Jawhar; Seligson, Hope
2015-01-01
We make use of the most recent National Seismic Hazard Maps (the years 2008 and 2014 cycles), updated census data on population, and economic exposure estimates of general building stock to quantify annualized earthquake loss (AEL) for the conterminous United States. The AEL analyses were performed using the Federal Emergency Management Agency's (FEMA) Hazus software, which facilitated a systematic comparison of the influence of the 2014 National Seismic Hazard Maps in terms of annualized loss estimates in different parts of the country. The losses from an individual earthquake could easily exceed many tens of billions of dollars, and the long-term averaged value of losses from all earthquakes within the conterminous U.S. has been estimated to be a few billion dollars per year. This study estimated nationwide losses to be approximately $4.5 billion per year (in 2012$), roughly 80% of which can be attributed to the States of California, Oregon and Washington. We document the change in estimated AELs arising solely from the change in the assumed hazard map. The change from the 2008 map to the 2014 map results in a 10 to 20% reduction in AELs for the highly seismic States of the Western United States, whereas the reduction is even more significant for Central and Eastern United States.
Threshold-dependent sample sizes for selenium assessment with stream fish tissue
Hitt, Nathaniel P.; Smith, David R.
2015-01-01
Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. 
Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of composites for estimating mean conditions. However, low sample sizes (<5 fish) did not achieve 80% power to detect near-threshold values (i.e., <1 mg Se/kg) under any scenario we evaluated. This analysis can assist the sampling design and interpretation of Se assessments from fish tissue by accounting for natural variation in stream fish populations.
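The power calculation above can be sketched with a parametric bootstrap: draw gamma samples at an assumed true mean and count how often a one-sided test flags the threshold exceedance. The z-based test and the coefficient of variation below are illustrative stand-ins, not the authors' exact mean-to-variance model:

```python
import numpy as np

def power_gamma_mean(true_mean, threshold, n_fish, cv=0.5, n_boot=5000, seed=1):
    """Bootstrap power: P(one-sided test detects mean Se > threshold).
    Se concentrations modeled as gamma with cv = sd/mean (an assumed value)."""
    rng = np.random.default_rng(seed)
    shape = 1.0 / cv ** 2                  # gamma: mean = shape*scale, var = shape*scale^2
    scale = true_mean / shape
    z = 1.6449                             # one-sided normal critical value, alpha = 0.05
    rejections = 0
    for _ in range(n_boot):
        x = rng.gamma(shape, scale, size=n_fish)
        se = x.std(ddof=1) / np.sqrt(n_fish)
        if x.mean() > threshold + z * se:  # reject H0: mean <= threshold
            rejections += 1
    return rejections / n_boot

# More fish -> higher power to detect a 1 mg/kg exceedance of a 4 mg/kg threshold
p8  = power_gamma_mean(5.0, 4.0, n_fish=8)
p20 = power_gamma_mean(5.0, 4.0, n_fish=20)
```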
Estimating occupancy and predicting numbers of gray wolf packs in Montana using hunter surveys
Rich, Lindsey N.; Russell, Robin E.; Glenn, Elizabeth M.; Mitchell, Michael S.; Gude, Justin A.; Podruzny, Kevin M.; Sime, Carolyn A.; Laudon, Kent; Ausband, David E.; Nichols, James D.
2013-01-01
Reliable knowledge of the status and trend of carnivore populations is critical to their conservation and management. Methods for monitoring carnivores, however, are challenging to conduct across large spatial scales. In the Northern Rocky Mountains, wildlife managers need a time- and cost-efficient method for monitoring gray wolf (Canis lupus) populations. Montana Fish, Wildlife and Parks (MFWP) conducts annual telephone surveys of >50,000 deer and elk hunters. We explored how survey data on hunters' sightings of wolves could be used to estimate the occupancy and distribution of wolf packs and predict their abundance in Montana for 2007–2009. We assessed model utility by comparing our predictions to MFWP minimum known number of wolf packs. We minimized false positive detections by identifying a patch as occupied if 2–25 wolves were detected by ≥3 hunters. Overall, estimates of the occupancy and distribution of wolf packs were generally consistent with known distributions. Our predictions of the total area occupied increased from 2007 to 2009 and predicted numbers of wolf packs were approximately 1.34–1.46 times the MFWP minimum counts for each year of the survey. Our results indicate that multi-season occupancy models based on public sightings can be used to monitor populations and changes in the spatial distribution of territorial carnivores across large areas where alternative methods may be limited by personnel, time, accessibility, and budget constraints.
Cavity turnover and equilibrium cavity densities in a cottonwood bottomland
Sedgwick, James A.; Knopf, Fritz L.
1992-01-01
A fundamental factor regulating the numbers of secondary cavity nesting (SCN) birds is the number of extant cavities available for nesting. The number of available cavities may be thought of as being in an approximate equilibrium maintained by a very rough balance between recruitment and loss of cavities. Based on estimates of cavity recruitment and loss, we ascertained equilibrium cavity densities in a mature plains cottonwood (Populus sargentii) bottomland along the South Platte River in northeastern Colorado. Annual cavity recruitment, derived from density estimates of primary cavity nesting (PCN) birds and cavity excavation rates, was estimated to be 71-86 new cavities excavated/100 ha. Of 180 active cavities of 11 species of cavity-nesting birds found in 1985 and 1986, 83 were no longer usable by 1990, giving an average instantaneous rate of cavity loss of r = -0.230. From these values of cavity recruitment and cavity loss, equilibrium cavity density along the South Platte is 238-289 cavities/100 ha. This range of equilibrium cavity density is only slightly above the minimum of 205 cavities/100 ha required by SCNs and suggests that cavity availability may be limiting SCN densities along the South Platte River. We submit that snag management alone does not adequately address SCN habitat needs, and that cavity management, expressed in terms of cavity turnover and cavity densities, may be more useful.
Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang
2017-05-01
The Nakagami distribution is an approximation useful to the statistics of ultrasound backscattered signals for tissue characterization. Various estimators may affect the Nakagami parameter in the detection of changes in backscattered statistics. In particular, the moment-based estimator (MBE) and maximum likelihood estimator (MLE) are two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimations. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters by using the MBE, the first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparisons. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimations with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
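The moment-based estimator referenced above is a one-liner: m = (E[R²])² / Var(R²) for envelope R. A sketch on simulated Rayleigh speckle (many random scatterers), where the Nakagami m should come out close to 1:

```python
import numpy as np

def nakagami_mbe(envelope):
    """Moment-based Nakagami m estimate: m = (E[R^2])^2 / Var(R^2)."""
    r2 = np.asarray(envelope, dtype=float) ** 2
    return r2.mean() ** 2 / r2.var()

# Fully developed speckle: complex Gaussian scattering -> Rayleigh envelope (m = 1)
rng = np.random.default_rng(7)
iq = rng.normal(size=100_000) + 1j * rng.normal(size=100_000)
m_hat = nakagami_mbe(np.abs(iq))
```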
Rosen, Michael R.; Kropf, Christian; Thomas, Karen A.
2006-01-01
Analysis of total dissolved nitrogen concentrations from soil water samples collected within the soil zone under septic tank leach fields in Spanish Springs Valley, Nevada, shows a median concentration of approximately 44 milligrams per liter (mg/L) from more than 300 measurements taken from four septic tank systems. Using two simple mass balance calculations, the concentration of total dissolved nitrogen potentially reaching the ground-water table ranges from 25 to 29 mg/L. This indicates that approximately 29 to 32 metric tons of nitrogen enters the aquifer every year from natural recharge and from the 2,070 houses that use septic tanks in the densely populated portion of Spanish Springs Valley. Natural recharge contributes only 0.25 metric tons because the total dissolved nitrogen concentration of natural recharge was estimated to be low (0.8 mg/L). Although there are many uncertainties in this estimate, the sensitivity of these uncertainties to the calculated load is relatively small, indicating that these values likely are accurate to within an order of magnitude. The nitrogen load calculation will be used as an input function for a ground-water flow and transport model that will be used to test management options for controlling nitrogen contamination in the basin.
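The load figures imply a per-household contribution; simple arithmetic from the abstract's lower-bound values:

```python
total_load_t = 29.0    # annual nitrogen load to the aquifer, metric tons (lower bound)
natural_t = 0.25       # portion attributed to natural recharge
houses = 2070          # septic systems in the study area

# Implied septic contribution per household, kg N per year (~13.9)
per_house_kg = (total_load_t - natural_t) * 1000 / houses
```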
Monostatic Radar Cross Section Estimation of Missile Shaped Object Using Physical Optics Method
NASA Astrophysics Data System (ADS)
Sasi Bhushana Rao, G.; Nambari, Swathi; Kota, Srikanth; Ranga Rao, K. S.
2017-08-01
Stealth technology manages many signatures of a target; most radar systems use radar cross section (RCS) to discriminate targets and classify them with regard to stealth. In wartime, a target's RCS must be very small to make the target invisible to enemy radar. In this study, the radar cross section of perfectly conducting objects such as a cylinder, a truncated cone (frustum), and a circular flat plate is estimated with respect to parameters such as size, frequency, and aspect angle. Because of the difficulty of predicting RCS exactly, approximate methods become the alternative. The majority of approximate methods are valid in the optical region, which has its own strengths and weaknesses. Therefore, the analysis given in this study is based purely on far-field monostatic RCS measurements in the optical region. Computation is done using the physical optics (PO) method to determine the RCS of simple models. Not only the RCS of simple models but also that of missile-shaped and rocket-shaped models obtained from cascaded objects with backscatter is computed using Matlab simulation. Rectangular plots are obtained for RCS in dBsm versus aspect angle for simple and missile-shaped objects. The treatment of RCS in this study is based on narrowband analysis.
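For a flavor of the physical optics results, the peak (normal-incidence) RCS of a flat plate has the well-known PO closed form sigma = 4*pi*A^2/lambda^2; the sketch below (illustrative, not the paper's Matlab code) evaluates it and converts to dBsm:

```python
import math

C = 3.0e8  # speed of light, m/s

def flat_plate_rcs_dbsm(area_m2, freq_hz):
    """Peak (normal-incidence) monostatic RCS of a flat plate in the
    physical optics approximation: sigma = 4*pi*A^2 / lambda^2."""
    lam = C / freq_hz
    sigma = 4.0 * math.pi * area_m2 ** 2 / lam ** 2
    return 10.0 * math.log10(sigma)  # dBsm: dB relative to 1 m^2

# Example: a 0.1 m^2 plate at 10 GHz (lambda = 3 cm) -> ~21.4 dBsm
rcs = flat_plate_rcs_dbsm(0.1, 10e9)
```

Off-normal aspect angles require the full PO integral over the illuminated surface, which is what the study computes for the cascaded missile-shaped models.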
Update on Multi-Variable Parametric Cost Models for Ground and Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda
2012-01-01
Parametric cost models can be used by designers and project managers to perform relative cost comparisons between major architectural cost drivers and allow high-level design trades; enable cost-benefit analysis for technology development investment; and, provide a basis for estimating total project cost between related concepts. This paper reports on recent revisions and improvements to our ground telescope cost model and refinements of our understanding of space telescope cost models. One interesting observation is that while space telescopes are 50X to 100X more expensive than ground telescopes, their respective scaling relationships are similar. Another interesting speculation is that the role of technology development may be different between ground and space telescopes. For ground telescopes, the data indicates that technology development tends to reduce cost by approximately 50% every 20 years. But for space telescopes, there appears to be no such cost reduction because we do not tend to re-fly similar systems. Thus, instead of reducing cost, 20 years of technology development may be required to enable a doubling of space telescope capability. Other findings include: mass should not be used to estimate cost; spacecraft and science instrument costs account for approximately 50% of total mission cost; and, integration and testing accounts for only about 10% of total mission cost.
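The ground-telescope trend noted above (technology development reducing cost by roughly 50% every 20 years) can be written as a simple scaling law; this sketch is an illustration of that stated trend, not the paper's parametric cost model:

```python
def tech_adjusted_cost(base_cost, years_elapsed, halving_period=20.0):
    """Cost after technology development, assuming cost falls by ~50%
    every `halving_period` years (the ground-telescope trend)."""
    return base_cost * 0.5 ** (years_elapsed / halving_period)

# 100 cost units today -> about 50 after 20 years, 25 after 40 years
cost_20 = tech_adjusted_cost(100.0, 20.0)
cost_40 = tech_adjusted_cost(100.0, 40.0)
```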
Verma, Vasundhara; Paul, Sujat; Ghose, Aniruddha; Eddleston, Michael; Konradsen, Flemming
2017-12-01
Approximately 10 000 people die from suicide annually in Bangladesh, many from pesticide poisoning. We aimed to estimate financial costs to patients and health services of treating patients with self-poisoning. Data on direct costs to families, sources of funds for treatment and family wealth were collected prospectively over a one-month period in 2016 at the tertiary Chittagong Medical College Hospital, Bangladesh. Aggregate operational costs to the government were calculated using annual budget, bed occupancy and length-of-stay data. Agrochemicals were the most common substances ingested (58.8%). Median duration of stay and of illness was 2 and 5 days, respectively. Median total cost to patients was conservatively estimated at US$ 98.40, highest in agrochemical poisoning (US$ 179.50), with the greatest cost due to medicines and equipment. Misdiagnosis as organophosphorus poisoning in 17.0% of agrochemical cases resulted in increased cost to patients. Only 51.9% of patients had indicators of wealth; 78.1% borrowed money to cover costs. Conservatively estimated median healthcare costs (US$ 21.30 per patient) were markedly lower than costs to patients. Cost to patients of treating a case of agrochemical poisoning was approximately three times the cost of one month's essential items basket. Incorrect diagnosis at admission costs families substantial sums of money and increased length of stay; it costs the national government an estimated US$ 80 428.80 annually. Widespread access to a list of pesticides used in self-poisoning plus greater focus on training doctors to better manage different forms of agrochemical poisoning should reduce the financial burden to patients and healthcare systems. © 2017 John Wiley & Sons Ltd.
White, M P; Elliott, L R; Taylor, T; Wheeler, B W; Spencer, A; Bone, A; Depledge, M H; Fleming, L E
2016-10-01
Building on evidence that natural environments (e.g. parks, woodlands, beaches) are key locations for physical activity, we estimated the total annual amount of adult recreational physical activity in England's natural environments, and assessed implications for population health. A cross-sectional analysis of six waves (2009/10-2014/15) of the nationally representative Monitor of Engagement with the Natural Environment survey (n=280,790). The survey uses a weekly quota sample, and population weights, to estimate nature visit frequency across England, and provides details on a single, randomly selected visit (n=112,422), including: a) duration; b) activity; and c) environment type. Approximately 8.23 million (95% CIs: 7.93, 8.54) adults (19.5% of the population) made at least one 'active visit' (i.e. ≥30min, ≥3 METs) to natural environments in the previous week, resulting in 1.23 billion (1.14, 1.32) 'active visits' annually. An estimated 3.20 million (3.05, 3.35) of these also reported meeting recommended physical activity guidelines (i.e. ≥5×30min a week) fully, or in part, through such visits. Active visits by this group were associated with an estimated 109,164 (101,736, 116,592) Quality Adjusted Life Years (QALYs) annually. Assuming the social value of a QALY to be £20,000, the annual value of these visits was approximately £2.18 billion (£2.03, £2.33). Results for walking were replicated using WHO's Health Economic Assessment Tool. Natural environments provide the context for a large proportion of England's recreational physical activity, highlighting the need to protect and manage such environments for health purposes. Copyright © 2016 Elsevier Inc. All rights reserved.
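The headline valuation follows from multiplying the estimated annual QALYs by the assumed social value of a QALY; as a quick arithmetic check:

```python
# Arithmetic behind the reported annual value: QALYs x assumed social
# value per QALY (GBP 20,000).
qalys = 109_164
value_per_qaly_gbp = 20_000
annual_value = qalys * value_per_qaly_gbp  # GBP 2,183,280,000, i.e. ~GBP 2.18 billion
```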
Hiebeler, David E; Millett, Nicholas E
2011-06-21
We investigate a spatial lattice model of a population employing dispersal to nearest and second-nearest neighbors, as well as long-distance dispersal across the landscape. The model is studied via stochastic spatial simulations, ordinary pair approximation, and triplet approximation. The latter method, which uses the probabilities of state configurations of contiguous blocks of three sites as its state variables, is demonstrated to be greatly superior to pair approximations for estimating spatial correlation information at various scales. Correlations between pairs of sites separated by arbitrary distances are estimated by constructing spatial Markov processes using the information from both approximations. These correlations demonstrate why pair approximation misses basic qualitative features of the model, such as decreasing population density as a large proportion of offspring are dropped on second-nearest neighbors, and why triplet approximation is able to include them. Analytical and numerical results show that, excluding long-distance dispersal, the initial growth rate of an invading population is maximized and the equilibrium population density is also roughly maximized when the population spreads its offspring evenly over nearest and second-nearest neighboring sites. Copyright © 2011 Elsevier Ltd. All rights reserved.
On approximation and energy estimates for delta 6-convex functions.
Saleem, Muhammad Shoaib; Pečarić, Josip; Rehman, Nasir; Khan, Muhammad Wahab; Zahoor, Muhammad Sajid
2018-01-01
The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted [Formula: see text]-norm.
NASA Astrophysics Data System (ADS)
Jieh Haur, Chen; Kuo, Lin Sheng; Fu, Chen Ping; Li Hsu, Yeh; Da Heng, Chen
2018-01-01
Construction surplus soil tracking management has been a key management issue in Taiwan since 1991. This is mainly because construction surplus soils were often regarded as disposable waste and disposed of openly without any supervision, leading to environmental pollution. Even though surplus soils have gradually come to be viewed as reusable resources, some unscrupulous enterprises still dump them freely for their own convenience. To dispose of these surplus soils, site offices are required to confirm with the soil treatment plant the approximate soil volume for hauling-vehicle dispatch. However, excavated soil transforms from bank volume to loose volume upon excavation; the two may differ by a coefficient of roughly 1.3, depending on the excavation site and geological conditions. To manage and track construction surplus soils, local government authorities frequently perform on-site spot checks, but the lack of rapid assessment tools for soil volume estimation increases the evaluation difficulty for on-site inspectors. This study adopted an unmanned aerial vehicle (UAV) for construction surplus soil tracking; site photography and point-cloud data were acquired rapidly, and the excavated soil volume could be determined promptly after post-processing and interpretation, providing a reference for future surplus soil tracking management.
Energy-water nexus for mass cultivation of algae.
Murphy, Cynthia Folsom; Allen, David T
2011-07-01
Microalgae are currently considered a potential feedstock for the production of biofuels. This work addresses the energy needed to manage the water used in the mass cultivation of saline, eukaryotic algae grown in open pond systems. Estimates of both direct and upstream energy requirements for obtaining, containing, and circulating water within algae cultivation systems are developed. Potential productivities are calculated for each of the 48 states within the continental U.S. based on theoretical photosynthetic efficiencies, growing season, and total available land area. Energy output in the form of algal biodiesel and the total energy content of algal biomass are compared to energy inputs required for water management. The analysis indicates that, for current technologies, energy required for water management alone is approximately seven times greater than energy output in the form of biodiesel and more than double that contained within the entire algal biomass. While this analysis addresses only currently identified species grown in an open-pond system, the water management requirements of any algae system will be substantial; therefore, it is critical that an energy assessment of water management requirements be performed for any cultivation technology and algal type in order to fully understand the energy balance of algae-derived biofuels.
Composite Intelligent Learning Control of Strict-Feedback Systems With Disturbance.
Xu, Bin; Sun, Fuchun
2018-02-01
This paper addresses the dynamic surface control of uncertain nonlinear systems on the basis of composite intelligent learning and a disturbance observer in the presence of unknown system nonlinearity and time-varying disturbance. A serial-parallel estimation model with intelligent approximation and disturbance estimation is built to obtain the prediction error, and in this way the composite law for weight updating is constructed. The nonlinear disturbance observer is developed using the intelligent approximation information, and the disturbance estimate is guaranteed to converge to a bounded compact set. The highlight is that, unlike previous work aimed directly at asymptotic stability, the transparency of the intelligent approximation and disturbance estimation is included in the control scheme. Uniform ultimate boundedness stability is analyzed via the Lyapunov method. Simulation verification shows that the composite intelligent learning with disturbance observer can efficiently estimate the effects caused by system nonlinearity and disturbance, while the proposed approach obtains better performance with higher accuracy.
Estimating maquiladora hazardous waste generation on the U.S./Mexico border
NASA Astrophysics Data System (ADS)
Bowen, Mace M.; Kontuly, Thomas; Hepner, George F.
1995-03-01
Maquiladoras, manufacturing plants that primarily assemble foreign components for reexport, are concentrated along the northern frontier of the US/Mexico border. These plants process a wide variety of materials using modern industrial technologies within the context of developing-world institutions and infrastructure. Hazardous waste generation by maquiladoras represents a critical environmental management issue because of the spatial concentration of these plants in border municipalities where the infrastructure for waste management is poor or nonexistent. These border municipalities contain rapidly increasing populations, which further stress their waste-handling capacities while exposing their populations to greater contaminant risks. Limited empirical knowledge exists concerning hazardous waste types and generation rates from maquiladoras. At present there is no standard reporting method for waste generation, nor a methodology for estimating generation rates. This paper presents a method that can be used for the rapid assessment of hazardous waste generation. A first approximation of hazardous waste generation is produced for maquiladoras in the three municipalities of Nogales, Sonora; Mexicali, Baja California; and Cd. Juarez, Chihuahua, using the INVENT model developed by the World Bank. In addition, our intent is to evaluate the potential of the INVENT model for adaptation to the US/Mexico border industrial situation. The press of border industrial development, especially with the recent adoption of NAFTA, makes such assessments necessary as a basis for the environmental policy formulation and management needed in the immediate future.
Littman, Alyson J; Damschroder, Laura J; Verchinina, Lilia; Lai, Zongshan; Kim, Hyungjin Myra; Hoerster, Katherine D; Klingaman, Elizabeth A; Goldberg, Richard W; Owen, Richard R; Goodrich, David E
2015-01-01
The objective was to determine whether obesity screening and weight management program participation and outcomes are equitable for individuals with serious mental illness (SMI) and depressive disorder (DD) compared to those without SMI/DD in Veterans Health Administration (VHA), the largest integrated US health system, which requires obesity screening and offers weight management to all in need. We used chart-reviewed, clinical and administrative VHA data from fiscal years 2010-2012 to estimate obesity screening and participation in the VHA's weight management program (MOVE!) across groups. Six- and 12-month weight changes in MOVE! participants were estimated using linear mixed models adjusted for confounders. Compared to individuals without SMI/DD, individuals with SMI or DD were less frequently screened for obesity (94%-94.7% vs. 95.7%) but had greater participation in MOVE! (10.1%-10.4% vs. 7.4%). MOVE! participants with SMI or DD lost approximately 1 lb less at 6 months. At 12 months, average weight loss for individuals with SMI or neither SMI/DD was comparable (-3.5 and -3.3 lb, respectively), but individuals with DD lost less weight (mean=-2.7 lb). Disparities in obesity screening and treatment outcomes across mental health diagnosis groups were modest. However, participation in MOVE! was low for every group, which limits population impact. Published by Elsevier Inc.
Goeree, Ron; Goeree, Jeff
2016-01-01
Approximately 20-30% of Canadians suffer from chronic pain. Guidelines for the management of chronic pain support the use of controlled-release (CR) opioids to treat chronic pain. Although effective in managing chronic pain, oxycodone is associated with high rates of opioid-induced constipation (OIC). The cost-effectiveness of a combination of oxycodone for the management of pain and naloxone for the relief of OIC has not previously been evaluated for Canada. A decision analytic model was developed to estimate the cost-utility of combination oxycodone/naloxone compared to oxycodone alone in four populations. Drug costs for managing pain and healthcare costs related to managing OIC were included in the analysis and the primary measure of effectiveness was quality adjusted life years (QALYs) derived from OIC rates observed in clinical trials. The analysis was conducted from a healthcare system perspective, used a 1-year time horizon, and results were expressed in 2015 Canadian dollars. In all four patient populations, there was a trade-off between slightly higher total expected costs for Targin treated patients compared to oxycodone treated patients, but also improved clinical benefits in terms of reduced OIC, which resulted in higher QALYs for patients. Although analgesic costs were found to be slightly higher for Targin treated patients, Targin also resulted in cost offsets to the healthcare system in terms of less rescue laxative drug use and other resources required for the management of OIC. The resulting 1-year cost-utility of Targin compared to oxycodone ranged from $2178-$7732 per QALY gained in the base case analysis, and it was found that these cost-utility results remained robust and at low values throughout a series of one-way deterministic analyses of uncertainty. The clinical effectiveness of oxycodone/naloxone in managing pain and OIC compared to CR oxycodone alone resulted in low cost-utility estimates.
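The cost-utility figures above are incremental cost-per-QALY ratios; a minimal sketch with hypothetical numbers (not the study's inputs) illustrates the calculation:

```python
def cost_utility_ratio(cost_new, cost_ref, qaly_new, qaly_ref):
    """Incremental cost per QALY gained of a new treatment vs. a reference."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical inputs (not the study's): $150 extra annual cost for an
# extra 0.02 QALYs gives $7,500 per QALY gained, within the reported range.
icer = cost_utility_ratio(1150.0, 1000.0, 0.82, 0.80)
```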
Kroll, Andrew J.; Jones, Jay E.; Stringer, Angela B.; Meekins, Douglas J.
2016-01-01
Quantifying spatial and temporal variability in population trends is a critical aspect of successful management of imperiled species. We evaluated territory occupancy dynamics of northern spotted owls (Strix occidentalis caurina), California, USA, 1990–2014. The study area possessed two unique aspects. First, timber management has occurred for over 100 years, resulting in dramatically different forest successional and structural conditions compared to other areas. Second, the barred owl (Strix varia), an exotic congener known to exert significant negative effects on spotted owls, has not colonized the study area. We used a Bayesian dynamic multistate model to evaluate if territory occupancy of reproductive spotted owls has declined as in other study areas. The state-space approach for dynamic multistate modeling imputes the number of territories for each nesting state and allows for the estimation of longer-term trends in occupied or reproductive territories from longitudinal studies. The multistate approach accounts for different detection probabilities by nesting state (to account for either inherent differences in detection or for the use of different survey methods for different occupancy states) and reduces bias in state assignment. Estimated linear trends in the number of reproductive territories suggested an average loss of approximately one half territory per year (-0.55, 90% CRI: -0.76, -0.33), in one management block and a loss of 0.15 per year (-0.15, 90% CRI: -0.24, -0.07), in another management block during the 25 year observation period. Estimated trends in the third management block were also negative, but substantial uncertainty existed in the estimate (-0.09, 90% CRI: -0.35, 0.17). Our results indicate that the number of territories occupied by northern spotted owl pairs remained relatively constant over a 25 year period (-0.07, 90% CRI: -0.20, 0.05; -0.01, 90% CRI: -0.19, 0.16; -0.16, 90% CRI: -0.40, 0.06). 
However, we cannot exclude small-to-moderate declines or increases in paired territory numbers due to uncertainty in our estimates. Collectively, we conclude spotted owl pair populations on this landscape managed for commercial timber production appear to be more stable and do not show sharp year-over-year declines seen in both managed and unmanaged landscapes with substantial barred owl colonization and persistence. Continued monitoring of reproductive territories can determine whether recent declines continue or whether trends reverse as they have on four previous occasions. Experimental investigations to evaluate changes to spotted owl occupancy dynamics when barred owl populations are reduced or removed entirely can confirm the generality of this conclusion. PMID:27065016
NASA Technical Reports Server (NTRS)
Wang, Qinglin; Gogineni, S. P.
1991-01-01
A numerical procedure is presented for estimating the true scattering coefficient, σ⁰, from measurements made using wide-beam antennas. The use of wide-beam antennas results in an inaccurate estimate of σ⁰ if the narrow-beam approximation is used in the retrieval process. To reduce this error, a correction procedure was proposed that estimates the error resulting from the narrow-beam approximation and uses it to obtain a more accurate estimate of σ⁰. An exponential model was assumed to account for the variation of σ⁰ with incidence angle, and the model parameters are estimated from measured data. Based on the model and knowledge of the antenna pattern, the procedure calculates the error due to the narrow-beam approximation. The procedure is shown to provide a significant improvement in the estimation of σ⁰ obtained with wide-beam antennas, and to be insensitive to the assumed σ⁰ model.
McDevitt, Joseph L; Acosta-Torres, Stefany; Zhang, Ning; Hu, Tianshen; Odu, Ayobami; Wang, Jijia; Xi, Yin; Lamus, Daniel; Miller, David S; Pillai, Anil K
2017-07-01
To estimate the least costly routine exchange frequency for percutaneous nephrostomies (PCNs) placed for malignant urinary obstruction, as measured by annual hospital charges, and to estimate the financial impact of patient compliance. Patients with PCNs placed for malignant urinary obstruction were studied from 2011 to 2013. Exchanges were classified as routine or due to 1 of 3 complication types: mechanical (tube dislodgment), obstruction, or infection. Representative cases were identified, and median representative charges were used as inputs for the model. Accelerated failure time and Markov chain Monte Carlo models were used to estimate distribution of exchange types and annual hospital charges under different routine exchange frequency and compliance scenarios. Long-term PCN management was required in 57 patients, with 87 total exchange encounters. Median representative hospital charges for pyelonephritis and obstruction were 11.8 and 9.3 times greater, respectively, than a routine exchange. The projected proportion of routine exchanges increased and the projected proportion of infection-related exchanges decreased when moving from a 90-day exchange with 50% compliance to a 60-day exchange with 75% compliance, and this was associated with a projected reduction in annual charges. Projected cost reductions resulting from increased compliance were generally greater than reductions resulting from changes in exchange frequency. This simulation model suggests that the optimal routine exchange interval for PCN exchange in patients with malignant urinary obstruction is approximately 60 days and that the degree of reduction in charges likely depends more on patient compliance than exact exchange interval. Copyright © 2017 SIR. Published by Elsevier Inc. All rights reserved.
Spatial and Temporal Influences on Carbon Storage in Hydric Soils of the Conterminous United States
NASA Astrophysics Data System (ADS)
Sundquist, E. T.; Ackerman, K.; Bliss, N.; Griffin, R.; Waltman, S.; Windham-Myers, L.
2016-12-01
Defined features of hydric soils persist over extensive areas of the conterminous United States (CUS) long after their hydric formation conditions have been altered by historical changes in land and water management. These legacy hydric features may represent previous wetland environments in which soil carbon storage was significantly higher before the influence of human activities. We hypothesize that historical alterations of hydric soil carbon storage can be approximated using carefully selected estimates of carbon storage in currently identified hydric soils. Using the Soil Survey Geographic (SSURGO) database, we evaluate carbon storage in identified hydric soil components that are subject to discrete ranges of current or recent conditions of flooding, ponding, and other indicators of hydric and non-hydric soil associations. We check our evaluations and, where necessary, adjust them using independently published soil data. We compare estimates of soil carbon storage under various hydric and non-hydric conditions within proximal landscapes and similar biophysical settings and ecosystems. By combining these setting- and ecosystem-constrained comparisons with the spatial distribution and attributes of wetlands in the National Wetlands Inventory, we impute carbon storage estimates for soils that occur in current wetlands and for hydric soils that are not associated with current wetlands. Using historical data on land use and water control structures, we map the spatial and temporal distribution of past changes in land and water management that have affected hydric soils. We combine these maps with our imputed carbon storage estimates to calculate ranges of values for historical and present-day carbon storage in hydric soils throughout the CUS. These estimates may provide useful constraints for projections of potential carbon storage in hydric soils under future conditions.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rosen, I. G.
1984-01-01
Approximation ideas are discussed that can be used in parameter estimation and feedback control for Euler-Bernoulli models of elastic systems. Focusing on parameter estimation problems, ways by which one can obtain convergence results for cubic spline based schemes for hybrid models involving an elastic cantilevered beam with tip mass and base acceleration are outlined. Sample numerical findings are also presented.
NASA Technical Reports Server (NTRS)
Lang, Christapher G.; Bey, Kim S. (Technical Monitor)
2002-01-01
This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.
Approximation theory for LQG (Linear-Quadratic-Gaussian) optimal control of flexible structures
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Adamian, A.
1988-01-01
An approximation theory is presented for the LQG (Linear-Quadratic-Gaussian) optimal control problem for flexible structures whose distributed models have bounded input and output operators. The main purpose of the theory is to guide the design of finite dimensional compensators that approximate closely the optimal compensator. The optimal LQG problem separates into an optimal linear-quadratic regulator problem and an optimal state estimation problem. The solution of the former problem lies in the solution to an infinite dimensional Riccati operator equation. The approximation scheme approximates the infinite dimensional LQG problem with a sequence of finite dimensional LQG problems defined for a sequence of finite dimensional, usually finite element or modal, approximations of the distributed model of the structure. Two Riccati matrix equations determine the solution to each approximating problem. The finite dimensional equations for numerical approximation are developed, including formulas for converting matrix control and estimator gains to their functional representation to allow comparison of gains based on different orders of approximation. Convergence of the approximating control and estimator gains and of the corresponding finite dimensional compensators is studied. Also, convergence and stability of the closed-loop systems produced with the finite dimensional compensators are discussed. The convergence theory is based on the convergence of the solutions of the finite dimensional Riccati equations to the solutions of the infinite dimensional Riccati equations. A numerical example with a flexible beam, a rotating rigid body, and a lumped mass is given.
Mixed effects versus fixed effects modelling of binary data with inter-subject variability.
Murphy, Valda; Dunne, Adrian
2005-04-01
The question of whether or not a mixed effects model is required when modelling binary data with inter-subject variability and within subject correlation was reported in this journal by Yano et al. (J. Pharmacokin. Pharmacodyn. 28:389-412 [2001]). That report used simulation experiments to demonstrate that, under certain circumstances, the use of a fixed effects model produced more accurate estimates of the fixed effect parameters than those produced by a mixed effects model. The Laplace approximation to the likelihood was used when fitting the mixed effects model. This paper repeats one of those simulation experiments, with two binary observations recorded for every subject, and uses both the Laplace and the adaptive Gaussian quadrature approximations to the likelihood when fitting the mixed effects model. The results show that the estimates produced using the Laplace approximation include a small number of extreme outliers. This was not the case when using the adaptive Gaussian quadrature approximation. Further examination of these outliers shows that they arise in situations in which the Laplace approximation seriously overestimates the likelihood in an extreme region of the parameter space. It is also demonstrated that when the number of observations per subject is increased from two to three, the estimates based on the Laplace approximation no longer include any extreme outliers. The root mean squared error is a combination of the bias and the variability of the estimates. Increasing the sample size is known to reduce the variability of an estimator with a consequent reduction in its root mean squared error. The estimates based on the fixed effects model are inherently biased and this bias acts as a lower bound for the root mean squared error of these estimates. 
Consequently, it might be expected that for data sets with a greater number of subjects the estimates based on the mixed effects model would be more accurate than those based on the fixed effects model. This is borne out by the results of a further simulation experiment with an increased number of subjects in each set of data. The difference in the interpretation of the parameters of the fixed and mixed effects models is discussed. It is demonstrated that the mixed effects model and parameter estimates can be used to estimate the parameters of the fixed effects model but not vice versa.
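The likelihood approximation at issue can be illustrated directly. The sketch below evaluates the marginal log-likelihood of a logistic random-intercept model with two binary observations per subject by (non-adaptive) Gauss-Hermite quadrature, a simplified stand-in for the adaptive quadrature discussed above; the data and parameter values are invented:

```python
# Sketch: marginal likelihood of a logistic random-intercept model evaluated
# with Gauss-Hermite quadrature. A simplified stand-in for adaptive Gaussian
# quadrature; data and parameters are illustrative, not from the paper.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # probabilists' Hermite

def marginal_loglik(beta, sigma, y, n_nodes=21):
    """y: (n_subjects, n_obs) binary matrix; random intercept b ~ N(0, sigma^2).
    Integrates each subject's conditional likelihood over b by quadrature."""
    nodes, weights = hermegauss(n_nodes)
    weights = weights / np.sqrt(2 * np.pi)     # now integrates against N(0,1)
    ll = 0.0
    for yi in y:
        eta = beta + sigma * nodes[:, None]    # linear predictor at each node
        p = 1.0 / (1.0 + np.exp(-eta))
        lik_b = np.prod(np.where(yi, p, 1 - p), axis=1)  # P(y_i | b) at nodes
        ll += np.log(np.sum(weights * lik_b))
    return ll

rng = np.random.default_rng(0)
b = rng.normal(0, 1.0, size=200)               # simulated random intercepts
y = (rng.random((200, 2)) < 1 / (1 + np.exp(-(0.5 + b[:, None])))).astype(int)
print(marginal_loglik(0.5, 1.0, y))            # finite (negative) log-likelihood
```

Maximizing this quantity over (beta, sigma) gives the mixed effects estimates; the Laplace approximation replaces the quadrature sum with a single second-order expansion around the conditional mode.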
Gradients estimation from random points with volumetric tensor in turbulence
NASA Astrophysics Data System (ADS)
Watanabe, Tomoaki; Nagata, Koji
2017-12-01
We present an estimation method of fully-resolved/coarse-grained gradients from randomly distributed points in turbulence. The method is based on a linear approximation of spatial gradients expressed with the volumetric tensor, which is a 3 × 3 matrix determined by a geometric distribution of the points. The coarse grained gradient can be considered as a low pass filtered gradient, whose cutoff is estimated with the eigenvalues of the volumetric tensor. The present method, the volumetric tensor approximation, is tested for velocity and passive scalar gradients in incompressible planar jet and mixing layer. Comparison with a finite difference approximation on a Cartesian grid shows that the volumetric tensor approximation computes the coarse grained gradients fairly well at a moderate computational cost under various conditions of spatial distributions of points. We also show that imposing the solenoidal condition improves the accuracy of the present method for solenoidal vectors, such as a velocity vector in incompressible flows, especially when the number of the points is not large. The volumetric tensor approximation with 4 points poorly estimates the gradient because of anisotropic distribution of the points. Increasing the number of points from 4 significantly improves the accuracy. Although the coarse grained gradient changes with the cutoff length, the volumetric tensor approximation yields the coarse grained gradient whose magnitude is close to the one obtained by the finite difference. We also show that the velocity gradient estimated with the present method well captures the turbulence characteristics such as local flow topology, amplification of enstrophy and strain, and energy transfer across scales.
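The linear gradient approximation with the volumetric tensor reduces to a small least-squares problem: solve (Σ dx dxᵀ) g = Σ dx df, where the 3x3 matrix on the left is built from the geometric distribution of the points. A minimal sketch with an invented linear field and randomly placed points:

```python
# Sketch of the linear gradient estimate from scattered points via the
# volumetric tensor. The field and point locations are illustrative.
import numpy as np

def gradient_from_points(x0, f0, pts, fvals):
    """Estimate grad f at x0 from values fvals at randomly placed pts."""
    dx = pts - x0                       # (N, 3) offsets from the target point
    df = fvals - f0                     # (N,) value differences
    T = dx.T @ dx                       # 3x3 volumetric tensor
    rhs = dx.T @ df
    return np.linalg.solve(T, rhs)      # least-squares gradient

rng = np.random.default_rng(1)
x0 = np.array([0.2, -0.1, 0.4])
true_grad = np.array([1.5, -2.0, 0.7])
pts = x0 + 0.01 * rng.normal(size=(8, 3))        # 8 nearby random points
f = lambda x: x @ true_grad                       # linear field: exact recovery
g = gradient_from_points(x0, f(x0), pts, np.array([f(p) for p in pts]))
print(np.allclose(g, true_grad))   # True for a linear field
```

For a nonlinear field the same estimate acts as a coarse-grained (low-pass filtered) gradient, with a cutoff set by the spread of the points, i.e. by the eigenvalues of the volumetric tensor.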
Estimating detection and density of the Andean cat in the high Andes
Reppucci, J.; Gardner, B.; Lucherini, M.
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.
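Spatial capture-recapture models of this kind typically relate detection probability to the distance between a trap and an individual's activity center through a half-normal function. A minimal sketch using the baseline detection probability reported above and an invented spatial scale parameter:

```python
# Sketch of the half-normal detection function at the heart of spatial
# capture-recapture: detection probability decays with the distance between a
# trap and an individual's activity center. sigma below is illustrative.
import numpy as np

def detection_prob(p0, sigma, trap_xy, center_xy):
    """Baseline detection p0 at zero distance; spatial scale sigma (km)."""
    d2 = np.sum((np.asarray(trap_xy) - np.asarray(center_xy)) ** 2)
    return p0 * np.exp(-d2 / (2 * sigma ** 2))

p0, sigma = 0.07, 1.5                        # p0 as estimated above; sigma invented
print(detection_prob(p0, sigma, (0, 0), (0, 0)))       # 0.07 at the trap itself
print(detection_prob(p0, sigma, (0, 0), (3, 0)) < p0)  # True: decays with distance
```

In the Bayesian framework, the unobserved activity centers and population size are sampled jointly with (p0, sigma), which is what turns photo-capture histories into a density estimate.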
Divergence between human populations estimated from linkage disequilibrium.
Sved, John A; McRae, Allan F; Visscher, Peter M
2008-12-01
Observed linkage disequilibrium (LD) between genetic markers in different populations descended independently from a common ancestral population can be used to estimate their absolute time of divergence, because the correlation of LD between populations will be reduced each generation by an amount that, approximately, depends only on the recombination rate between markers. Although drift leads to divergence in allele frequencies, it has less effect on divergence in LD values. We derived the relationship between LD and time of divergence and verified it with coalescent simulations. We then used HapMap Phase II data to estimate the time of divergence between human populations. Summed over large numbers of pairs of loci, we find a positive correlation of LD between African and non-African populations at marker separations of up to approximately 0.3 cM. We estimate that the observed correlation of LD is consistent with an effective separation time of approximately 1,000 generations, or approximately 25,000 years before present. The most likely explanation for such relatively low separation times is the existence of substantial levels of migration between populations after the initial separation. Theory and results from coalescent simulations confirm that low levels of migration can lead to a downward bias in the estimate of separation time.
Optimal causal inference: estimating stored information and approximating causal architecture.
Still, Susanne; Crutchfield, James P; Ellison, Christopher J
2010-09-01
We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.
Johnson, Julie L; Amzat, Rianot; Martin, Nicolle
2015-09-01
Herpes zoster is a commonly encountered disorder. It is estimated that there are approximately 1 million new cases of herpes zoster in the United States annually, with an incidence of 3.2 per 1000 person-years. Patients with HIV have the greatest risk of developing herpes zoster ophthalmicus compared with the general population. Other risk factors include advancing age, use of immunosuppressive medications, and primary infection in infancy or in utero. Vaccination against the virus is a primary prevention modality. Primary treatments include antivirals, analgesics, and anticonvulsants. Management may require surgical intervention and comanagement with pain specialists, psychiatrists, and infectious disease specialists. Copyright © 2015 Elsevier Inc. All rights reserved.
A stock-and-flow simulation model of the US blood supply.
Simonetti, Arianna; Forshee, Richard A; Anderson, Steven A; Walderhaug, Mark
2014-03-01
Lack of reporting requirements for the amount of blood stored in blood banks and hospitals poses challenges to effectively monitoring the US blood supply. Effective strategies to minimize collection and donation disruptions in the supply require an understanding of the daily amount of blood available in the system. A stock-and-flow simulation model of the US blood supply was developed to obtain estimates of the daily on-hand availability of blood, with uncertainty and by ABO/Rh type. The model simulated the potential impact on supply of using different blood management practices for transfusion: first in-first out (FIFO), using the oldest stored red blood cell units first; non-FIFO likely oldest, preferentially selecting older blood; and non-FIFO likely newest, preferentially selecting younger blood. Simulation results showed higher estimates of the steady-state blood supply level for FIFO (1,630,000 units, 95% prediction interval [PI] 1,610,000-1,650,000) than for the non-FIFO scenarios (likely oldest, 1,530,000 units, 95% PI 1,500,000-1,550,000; and likely newest, 1,190,000 units, 95% PI 1,160,000-1,220,000), both overall and by blood type. To our knowledge, this model represents a first attempt to evaluate the impact of different blood management practices on the daily availability and distribution of blood in the US blood supply. The average storage time before blood was issued was influenced by blood management practices, both by preferences for younger blood and by the use of specific blood types. The model also suggests which practice best approximates the current blood management system and may serve as a useful tool for blood management. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
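The qualitative effect of the issuing policies compared above can be reproduced with a minimal stock-and-flow sketch. The daily collections, demand, and 42-day shelf life below are invented round numbers, not the model's calibrated inputs:

```python
# Minimal stock-and-flow sketch of the issuing policies compared above:
# FIFO issues the oldest stored units first; "likely newest" the youngest.
# Collections, demand, and shelf life are illustrative, not calibrated.
def simulate(policy, days=365, collect=100, demand=95, shelf_life=42):
    stock = []                                   # ages (days) of stored units
    for _ in range(days):
        stock = [a + 1 for a in stock if a + 1 <= shelf_life]  # age & expire
        stock += [0] * collect                   # today's collections
        stock.sort(reverse=(policy != "FIFO"))   # put the units to issue last
        for _ in range(min(demand, len(stock))):
            stock.pop()                          # issue per policy
    return len(stock)                            # on-hand units at the end

fifo = simulate("FIFO")
newest = simulate("likely_newest")
print(fifo >= newest)   # True: FIFO retains a larger usable inventory
```

Under the "likely newest" policy the small daily surplus is never issued and eventually expires, which is why it settles at a much lower steady-state inventory than FIFO.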
Alagiakrishnan, Kannayiram; Wilson, Patricia; Sadowski, Cheryl A; Rolfson, Darryl; Ballermann, Mark; Ausford, Allen; Vermeer, Karla; Mohindra, Kunal; Romney, Jacques; Hayward, Robert S
2016-01-01
Background Elderly people (aged 65 years or more) are at increased risk of polypharmacy (five or more medications), inappropriate medication use, and associated increased health care costs. The use of clinical decision support (CDS) within an electronic medical record (EMR) could improve medication safety. Methods Participatory action research methods were applied to preproduction design and development and postproduction optimization of an EMR-embedded CDS implementation of the Beers’ Criteria for medication management and the Cockcroft–Gault formula for estimating glomerular filtration rates (GFR). The “Seniors Medication Alert and Review Technologies” (SMART) intervention was used in primary care and geriatrics specialty clinics. Passive (chart messages) and active (order-entry alerts) prompts exposed potentially inappropriate medications, decreased GFR, and the possible need for medication adjustments. Physician reactions were assessed using surveys, EMR simulations, focus groups, and semi-structured interviews. EMR audit data were used to identify eligible patient encounters, the frequency of CDS events, how alerts were managed, and when evidence links were followed. Results Analysis of subjective data revealed that most clinicians agreed that CDS appeared at appropriate times during patient care. Although managing alerts incurred a modest time burden, most also agreed that workflow was not disrupted. Prevalent concerns related to clinician accountability and potential liability. Approximately 36% of eligible encounters triggered at least one SMART alert, most often a GFR alert; the most frequent medication warnings involved hypnotics and anticholinergics. Approximately 25% of alerts were overridden and ~15% elicited an evidence check. Conclusion While most SMART alerts validated clinician choices, they were received as valuable reminders for evidence-informed care and education.
Data from this study may aid other attempts to implement Beers’ Criteria in ambulatory care EMRs. PMID:26869776
Estimating economic losses from earthquakes using an empirical approach
Jaiswal, Kishor; Wald, David J.
2013-01-01
We extended the U.S. Geological Survey's Prompt Assessment of Global Earthquakes for Response (PAGER) empirical fatality estimation methodology proposed by Jaiswal et al. (2009) to rapidly estimate economic losses after significant earthquakes worldwide. The requisite model inputs are shaking intensity estimates made by the ShakeMap system, the spatial distribution of population available from the LandScan database, modern and historic country or sub-country population and Gross Domestic Product (GDP) data, and economic loss data from Munich Re's historical earthquakes catalog. We developed a strategy to approximately scale GDP-based economic exposure for historical and recent earthquakes in order to estimate economic losses. The process consists of using a country-specific multiplicative factor to accommodate the disparity between economic exposure and the annual per capita GDP, and it has proven successful in hindcasting past losses. Although the loss, population, shaking estimates, and economic data used in the calibration process are uncertain, approximate ranges of losses can be estimated for the primary purpose of gauging the overall scope of the disaster and coordinating response. The proposed methodology is both indirect and approximate and is thus best suited as a rapid loss estimation model for applications like the PAGER system.
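The exposure-scaling idea can be sketched as follows: economic exposure in each grid cell is population times per-capita GDP times a country-specific multiplier, and the loss sums exposure times a loss ratio that grows with shaking intensity. The loss-ratio curve and the multiplier alpha below are invented placeholders, not PAGER's calibrated values:

```python
# Sketch of GDP-scaled economic loss estimation: per-cell exposure times an
# intensity-dependent loss ratio. Curve parameters and alpha are invented.
import math

def loss_ratio(mmi, theta=9.0, beta=0.2):
    """Toy lognormal-CDF-style loss ratio as a function of shaking intensity."""
    return 0.5 * (1 + math.erf(math.log(mmi / theta) / (beta * math.sqrt(2))))

def economic_loss(cells, gdp_per_capita, alpha):
    """cells: list of (population, shaking intensity MMI) per grid cell."""
    return sum(pop * gdp_per_capita * alpha * loss_ratio(mmi)
               for pop, mmi in cells)

# Invented exposure grid: (population, MMI) pairs.
cells = [(50_000, 9.0), (200_000, 7.5), (1_000_000, 6.0)]
print(economic_loss(cells, gdp_per_capita=5_000, alpha=2.5) > 0)   # True
```

In the actual methodology, theta and beta analogues are calibrated per country against the historical loss catalog, and alpha absorbs the gap between true economic exposure and annual per-capita GDP.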
Rapid estimation of characteristics of gas dynamic lasers
NASA Technical Reports Server (NTRS)
Murty, S. S. R.
1974-01-01
Sudden-freeze approximation is applied to the flow of a CO2-N2-He mixture in wedge-type nozzles. This approximation permits rapid estimation of the freezing temperature of the upper laser level as a function of the stagnation pressure and the nozzle geometry. The stagnation temperature and the composition of the mixture appear as parameters. Gain and power output may then be estimated and calculations are presented for two cases.
Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin
2016-01-01
What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. 
Copyright © 2015 Elsevier B.V. All rights reserved.
Time management for preclinical safety professionals.
Wells, Monique Y
2010-08-01
A survey about time management in the workplace was distributed to obtain a sense of the level of job satisfaction among preclinical safety professionals in the current economic climate, and to encourage reflection upon how we manage time in our work environment. Roughly equal numbers of respondents (approximately 32%) identified themselves as management or staff, and approximately 27% indicated that they are consultants. Though 45.2% of respondents indicated that time management is very challenging for the profession in general, only 36.7% find it very challenging for themselves. Ten percent of respondents consider time management to be exceedingly challenging for themselves. Approximately 34% of respondents indicated that prioritization of tasks was the most challenging aspect of time management for them. Focusing on an individual task was the second most challenging aspect (26%), followed equally by procrastination and delegation of tasks (12.4%). Almost equal numbers of respondents said that they would (35.2%) or might (33.3%) undertake training to improve their time management skills. Almost equal numbers of participants responded "perhaps" (44.6%) or "yes" (44.2%) to the question of whether management personnel should be trained in time management.
Celiac Disease Diagnosis and Management
Leffler, Daniel
2012-01-01
Celiac disease is one of the most prevalent autoimmune gastrointestinal disorders but as the case of Ms. J illustrates, diagnosis is often delayed or missed. Based on serology studies, the prevalence of celiac disease in many populations is estimated to be approximately 1% and has been increasing steadily over the last 50 years. Evaluation for celiac disease is generally straightforward, and uses commonly available serologic tests, however the signs and symptoms of celiac disease are nonspecific and highly heterogeneous making diagnosis difficult. While celiac disease is often considered a mild disorder treatable with simple dietary changes, in reality celiac disease imparts considerable risks including reduced bone mineral density, impaired quality of life, and increased overall mortality. In addition, the gluten free diet is highly burdensome and can profoundly affect patients and their families. For these reasons, care of individuals with celiac disease requires prompt diagnosis and ongoing multidisciplinary management. PMID:21990301
Update on the Management of Thyroid Disease during Pregnancy.
Yim, Chang Hoon
2016-09-01
Thyroid dysfunction during pregnancy can result in serious complications for both the mother and infant; however, these complications can be prevented by optimal treatment of maternal overt thyroid dysfunction. Although several studies have demonstrated that maternal subclinical hypothyroidism is associated with obstetric complications and neurocognitive impairments in offspring, there is limited evidence that levothyroxine treatment can improve these complications. Therefore, most professional societies do not recommend universal screening for thyroid dysfunction during pregnancy, and instead recommend a case-finding approach in which only high-risk women are tested. However, recent studies have estimated that targeted thyroid function testing misses approximately 30% to 55% of hypothyroidism cases in pregnant women, and some associations and researchers have recommended universal screening of pregnant women to facilitate the early detection and treatment of overt hypothyroidism. This review summarizes recent data on thyroid function test changes, thyroid functional disorder management, and thyroid screening during pregnancy.
Komasawa, Nobuyasu; Fujiwara, Shunsuke; Majima, Nozomi; Minami, Toshiaki
2015-08-01
Pregnancy-related mortality, estimated to occur in approximately 1 in 50,000 deliveries, is rare in developed countries. The 2010 American Heart Association (AHA) Guidelines for Resuscitation emphasize the importance of high-quality chest compression as a key determinant of successful cardiopulmonary resuscitation. During pregnancy, the uterus can compress the inferior vena cava, impeding venous return and thereby reducing stroke volume and cardiac output. To maximize the effectiveness of chest compressions in pregnancy, the AHA guidelines recommend the 27-30 degrees left-lateral tilt (LLT) position. When CPR is performed on parturients in the LLT position, chest compressions will probably be more effective if performed with the operator standing on the left side of the patient. The videolaryngoscope Pentax-AWS Airwayscope (AWS) was found to be an effective tool for airway management during chest compressions in 27-degree LLT simulations, suggesting that the AWS may be a useful device for airway management during maternal resuscitation.
Scenario planning for water resource management in semi arid zone
NASA Astrophysics Data System (ADS)
Gupta, Rajiv; Kumar, Gaurav
2018-06-01
Scenario planning for water resource management in a semi-arid zone is performed using a systems input-output approach from time-domain analysis. This approach derives the future weights of the input variables of the hydrological system from their precedent weights. The input variables considered here are precipitation, evaporation, population, and crop irrigation. Ingles & De Souza's method and the Thornthwaite model are used to estimate runoff and evaporation, respectively. The difference between precipitation inflow and the sum of runoff and evaporation is approximated as groundwater recharge. Population and crop irrigation determine the total water demand. The extent to which groundwater recharge compensates for total water demand is analyzed, and further compensation is evaluated by proposing efficient methods of water conservation. The best water conservation measure to adopt is suggested based on cost-benefit analysis. A case study of nine villages in the Chirawa region of district Jhunjhunu, Rajasthan (India) validates the model.
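The water-balance bookkeeping described above amounts to a few lines of arithmetic: recharge is approximated as precipitation minus runoff and evaporation, and compared against domestic plus irrigation demand. A sketch with invented volumes (fixed runoff and evaporation figures stand in for Ingles & De Souza's method and the Thornthwaite model):

```python
# Sketch of the water-balance calculation: recharge = precipitation -
# (runoff + evaporation), compared against total demand. Numbers invented.
def water_balance(precip_mcm, runoff_mcm, evap_mcm, pop, lpcd, irrigation_mcm):
    """Volumes in million cubic metres (MCM); lpcd = litres/person/day."""
    recharge = precip_mcm - (runoff_mcm + evap_mcm)   # approximation above
    domestic = pop * lpcd * 365 / 1e9                 # litres/year -> MCM
    demand = domestic + irrigation_mcm
    return recharge, demand, recharge - demand        # surplus (or deficit)

recharge, demand, balance = water_balance(
    precip_mcm=120.0, runoff_mcm=30.0, evap_mcm=70.0,
    pop=50_000, lpcd=100, irrigation_mcm=18.0)
print(recharge, demand, round(balance, 3))
```

A deficit (negative balance) in a projected scenario is what triggers the evaluation of conservation measures and the cost-benefit comparison among them.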
NASA Astrophysics Data System (ADS)
Koshimizu, K.; Uchida, T.
2015-12-01
Initial large-scale sediment yields caused by heavy rainfall or major storms are striking events. Previous studies focusing on landslide management investigated the initial sediment movement and its mechanism. However, integrated management of catchment-scale sediment movement requires estimating the sediment yield produced by landslides that subsequently expand due to rainfall, in addition to the initial landslide movement. This study presents a quantitative analysis of expanded landslides based on a survey of the Shukushubetsu River basin, at the foot of the Hidaka mountain range in central Hokkaido, Japan. This area recorded heavy rainfall in 2003, with a maximum daily precipitation of 388 mm. We extracted the expanded landslides from 2003 to 2008 using aerial photographs taken over the river area. In particular, we calculated the probability of expansion for each landslide, the ratio of the landslide area in 2008 to that in 2003, and the expanded landslide area corresponding to each initial landslide area. The probability of expansion for each landslide is estimated at approximately 24%. In addition, each expanded landslide area is smaller than the initial landslide area, and the expanded area in 2008 is approximately 7% of the corresponding landslide area in 2003. Therefore, the sediment yield from subsequently expanded landslides is equal to or only slightly greater than the sediment yield under typical base flow. Thus, we conclude that, for the management of catchment-scale sediment movement, the amount of sediment yield from subsequently expanded landslides is lower than the initial large-scale sediment yield caused by heavy rainfall.
A scenario and forecast model for Gulf of Mexico hypoxic area and volume
Scavia, Donald; Evans, Mary Anne; Obenour, Daniel R.
2013-01-01
For almost three decades, the relative size of the hypoxic region on the Louisiana-Texas continental shelf has drawn scientific and policy attention. During that time, both simple and complex models have been used to explore hypoxia dynamics and to provide management guidance relating the size of the hypoxic zone to key drivers. Throughout much of that development, analyses had to accommodate an apparent change in hypoxic sensitivity to loads and often cull observations due to anomalous meteorological conditions. Here, we describe an adaptation of our earlier, simple biophysical model, calibrated to revised hypoxic area estimates and new hypoxic volume estimates through Bayesian estimation. This application eliminates the need to cull observations and provides revised hypoxic extent estimates with uncertainties, corresponding to different nutrient loading reduction scenarios. We compare guidance from this model application, suggesting an approximately 62% nutrient loading reduction is required to reduce Gulf hypoxia to the Action Plan goal of 5,000 km2, to that of previous applications. In addition, we describe for the first time, the corresponding response of hypoxic volume. We also analyze model results to test for increasing system sensitivity to hypoxia formation, but find no strong evidence of such change.
Estimation of correlation functions by stochastic approximation.
NASA Technical Reports Server (NTRS)
Habibi, A.; Wintz, P. A.
1972-01-01
Techniques are considered for estimating the autocorrelation function of a zero-mean stationary random process. The techniques are applicable to processes with nonzero mean provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both of which are based on the method of stochastic approximation and assume a functional form for the correlation function that depends on a number of parameters that are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
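The first of the two recursive techniques can be sketched as follows: compute a standard point estimate of the correlation function from each successive record, then take a stochastic-approximation step that reduces the squared error between the point estimates and an assumed parametric form R(tau) = v exp(-a tau). The process, record length, and gain sequence below are invented for illustration:

```python
# Sketch of recursive correlation-function estimation: point estimates from
# each record drive stochastic-approximation updates of the parameters (v, a)
# of an assumed form R(tau) = v*exp(-a*tau). All settings are illustrative.
import numpy as np

def point_estimate(record, lags):
    """Standard (biased) sample autocorrelation at the given lags."""
    n = len(record)
    return np.array([np.dot(record[:n - k], record[k:]) / n for k in lags])

rng = np.random.default_rng(2)
lags = np.arange(6)
v, a = 0.5, 0.5                        # initial guesses for R(tau) = v*exp(-a*tau)
true_a = 1.0
phi = np.exp(-true_a)                  # AR(1) coefficient giving R(tau) = exp(-tau)
for k in range(1, 1501):               # successive records
    rec = np.empty(500)
    rec[0] = rng.normal()
    for t in range(1, 500):            # unit-variance AR(1) record
        rec[t] = phi * rec[t - 1] + np.sqrt(1 - phi**2) * rng.normal()
    Rhat = point_estimate(rec, lags)   # point estimates from this record
    err = v * np.exp(-a * lags) - Rhat
    gain = 0.5 / np.sqrt(k)            # slowly decreasing gain sequence
    v -= gain * np.sum(err * np.exp(-a * lags))                # d(err^2)/dv step
    a -= gain * np.sum(err * (-v * lags * np.exp(-a * lags)))  # d(err^2)/da step

print(abs(v - 1.0) < 0.3 and abs(a - true_a) < 0.3)  # True: near the true values
```

The second technique replaces the squared-error criterion with a likelihood in the parameters, but the recursive update structure is the same.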
Eiler, John H.; Masuda, Michele; Spencer, Ted R.; Driscoll, Richard J.; Schreck, Carl B.
2014-01-01
Chinook Salmon Oncorhynchus tshawytscha returns to the Yukon River basin have declined dramatically since the late 1990s, and detailed information on the spawning distribution, stock structure, and stock timing is needed to better manage the run and facilitate conservation efforts. A total of 2,860 fish were radio-tagged in the lower basin during 2002–2004 and tracked upriver. Fish traveled to spawning areas throughout the basin, ranging from several hundred to over 3,000 km from the tagging site. Similar distribution patterns were observed across years, suggesting that the major components of the run were identified. Daily and seasonal composition estimates were calculated for the component stocks. The run was dominated by two regional components comprising over 70% of the return. Substantially fewer fish returned to other areas, ranging from 2% to 9% of the return, but their collective contribution was appreciable. Most regional components consisted of several principal stocks and a number of small, spatially isolated populations. Regional and stock composition estimates were similar across years even though differences in run abundance were reported, suggesting that the differences in abundance were not related to regional or stock-specific variability. Run timing was relatively compressed compared with that in rivers in the southern portion of the species’ range. Most stocks passed through the lower river over a 6-week period, ranging in duration from 16 to 38 d. Run timing was similar for middle- and upper-basin stocks, limiting the use of timing information for management. The lower-basin stocks were primarily later-run fish. Although differences were observed, there was general agreement between our composition and timing estimates and those from other assessment projects within the basin, suggesting that the telemetry-based estimates provided a plausible approximation of the return. 
However, the short duration of the run, complex stock structure, and similar stock timing complicate management of Yukon River returns.
Estimation of under-reporting in epidemics using approximations.
Gamado, Kokouvi; Streftaris, George; Zachary, Stan
2017-06-01
Under-reporting in epidemics, when it is ignored, leads to under-estimation of the infection rate and therefore of the reproduction number. In the case of stochastic models with temporal data, a usual approach for dealing with such issues is to apply data augmentation techniques through Bayesian methodology. Departing from earlier literature approaches implemented using reversible jump Markov chain Monte Carlo (RJMCMC) techniques, we make use of approximations to obtain faster estimation with simple MCMC. Comparisons among the methods developed here, and with the RJMCMC approach, are carried out and highlight that approximation-based methodology offers useful alternative inference tools for large epidemics, with a good trade-off between time cost and accuracy.
NASA Astrophysics Data System (ADS)
Panigada, Simone; Lauriano, Giancarlo; Donovan, Greg; Pierantonio, Nino; Cañadas, Ana; Vázquez, José Antonio; Burt, Louise
2017-07-01
Systematic, effective monitoring of animal population parameters underpins successful conservation strategy and wildlife management, but it is often neglected in many regions, including much of the Mediterranean Sea. Nonetheless, a series of systematic multispecies aerial surveys was carried out in the seas around Italy to gather important baseline information on cetacean occurrence, distribution and abundance. The monitored areas included the Pelagos Sanctuary, the Tyrrhenian Sea, portions of the Seas of Corsica and Sardinia, the Ionian Sea, as well as the Gulf of Taranto. Overall, approximately 48,000 km were flown in spring, summer and winter between 2009 and 2014, covering an area of 444,621 km2. The most commonly observed species were the striped dolphin and the fin whale, with 975 and 83 recorded sightings, respectively. Other sighted cetacean species were the common bottlenose dolphin, the Risso's dolphin, the sperm whale, the pilot whale and the Cuvier's beaked whale. Uncorrected model- and design-based estimates of density and abundance for striped dolphins and fin whales were produced, resulting in a best estimate (model-based) of around 95,000 striped dolphins (CV=11.6%; 95% CI=92,900-120,300) occurring in the Pelagos Sanctuary, Central Tyrrhenian and Western Seas of Corsica and Sardinia combined area in summer 2010. Estimates were also obtained for each individual study region and year. An initial attempt to estimate perception bias for striped dolphins is also provided. The preferred summer 2010 uncorrected best estimate (design-based) for the same areas for fin whales was around 665 (CV=33.1%; 95% CI=350-1260). Estimates are also provided for the individual study regions and years.
The results represent baseline data to develop efficient, long-term, systematic monitoring programmes, essential to evaluate trends, as required by a number of national and international frameworks, and stress the need to ensure that surveys are undertaken regularly and at a sufficient spatial scale. The management implications of the results are also discussed in light of a possible decline in fin whale abundance from the mid-1990s to the present. Further work to understand changes in distribution and to allow for improved spatial models is emphasized.
Adams, Michael J.; Mellison, Chad; Galvan, Stephanie K.
2013-01-01
The Toiyabe population of Columbia spotted frogs (Rana luteiventris, hereafter "Toiyabe frogs") is a geographically isolated population located in central Nevada (fig. 1). The Toiyabe population is part of the Great Basin Distinct Population Segment of Columbia spotted frogs, and is a candidate for listing under the Endangered Species Act (U.S. Fish and Wildlife Service, 2011). The cluster of breeding sites in central Nevada represents the southernmost extremity of the Columbia spotted frogs' known range (Funk and others, 2008). Toiyabe frogs are known to occur in seven drainages in Nye County, Nevada: Reese River, Cow Canyon Creek, Ledbetter Canyon Creek, Cloverdale Creek, Stewart Creek, Illinois Creek, and Indian Valley Creek. Most of the Toiyabe frog population resides in the Reese River, Indian Valley Creek, and Cloverdale Creek drainages (fig. 1; Nevada Department of Wildlife, 2003). Approximately 90 percent of the Toiyabe frogs' habitat is on public land. Most of the public land habitat (95 percent) is managed by the U.S. Forest Service (USFS), while the Bureau of Land Management (BLM) manages the remainder. Additional Toiyabe frog habitat is under Yomba Shoshone Tribal management and in private ownership (Nevada Department of Wildlife, 2003). The BLM, USFS, Nevada Department of Wildlife (NDOW), Nevada Natural Heritage Program (NNHP), Nye County, and U.S. Fish and Wildlife Service (USFWS) have monitored the Toiyabe population since 2004 using mark and recapture surveys (Nevada Department of Wildlife, 2004). The USFWS contracted with the U.S. Geological Survey (USGS) to produce population estimates using these data.
Parameter estimation in nonlinear distributed systems - Approximation theory and convergence results
NASA Technical Reports Server (NTRS)
Banks, H. T.; Reich, Simeon; Rosen, I. G.
1988-01-01
An abstract approximation framework and convergence theory is described for Galerkin approximations applied to inverse problems involving nonlinear distributed parameter systems. Parameter estimation problems are considered and formulated as the minimization of a least-squares-like performance index over a compact admissible parameter set subject to state constraints given by an inhomogeneous nonlinear distributed system. The theory applies to systems whose dynamics can be described by either time-independent or nonstationary strongly maximal monotonic operators defined on a reflexive Banach space which is densely and continuously embedded in a Hilbert space. It is demonstrated that if readily verifiable conditions on the system's dependence on the unknown parameters are satisfied, and the usual Galerkin approximation assumption holds, then solutions to the approximating problems exist and approximate a solution to the original infinite-dimensional identification problem.
NASA Astrophysics Data System (ADS)
Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai
2017-10-01
With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. This paper therefore focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. First, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Second, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of model parameters to ensure filter stability and reduce the convergence time. Third, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Because the EKF is essentially a first-order Taylor approximation of the battery model and therefore carries inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment can provide a robust and accurate battery model and on-line parameter estimation.
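As a minimal sketch of the parameter-identification step, the recursive least squares update can be written generically; the first-order ARX surrogate and its coefficients below are hypothetical stand-ins for the paper's equivalent-circuit model, not its actual parameters.

```python
import numpy as np

def rls_identify(phi_seq, y_seq, n_params, lam=0.99):
    """Generic recursive least squares with forgetting factor lam."""
    theta = np.zeros(n_params)          # parameter estimate
    P = 1e3 * np.eye(n_params)          # large initial covariance
    for phi, y in zip(phi_seq, y_seq):
        K = P @ phi / (lam + phi @ P @ phi)      # gain
        theta = theta + K * (y - phi @ theta)    # innovation update
        P = (P - np.outer(K, phi @ P)) / lam
    return theta

# Hypothetical first-order ARX surrogate: y_k = a*y_{k-1} + b*u_{k-1} + noise
rng = np.random.default_rng(1)
a, b = 0.95, 0.40
u = rng.standard_normal(600)
y = np.zeros(601)
for k in range(600):
    y[k + 1] = a * y[k] + b * u[k] + 1e-3 * rng.standard_normal()

phis = [np.array([y[k], u[k]]) for k in range(600)]
theta_hat = rls_identify(phis, y[1:], 2)   # converges toward [a, b]
```

In the paper's scheme, estimates like these would seed the EKF so that the filter starts near the true model and converges faster.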
An Innovative Method for Estimating Soil Retention at a ...
Planning for a sustainable future should include an accounting of services currently provided by ecosystems, such as erosion control. Retention of soil improves fertility, increases water retention, and decreases sedimentation in streams and rivers. Landscape patterns that facilitate these services could help reduce costs for flood control and dredging of reservoirs and waterways, while maintaining habitat for fish and other species important to the recreational and tourism industries. Landscape-scale geospatial data available for the continental United States were leveraged to estimate sediment erosion (RUSLE-based, Renard, et al. 1997) employing recent geospatial techniques of sediment delivery ratio (SDR) estimation (Cavalli, et al. 2013). The approach was designed to derive a quantitative approximation of the ecological services provided by vegetative cover, management practices, and other surface features with respect to protecting soils from the erosion processes of detachment, transport, and deposition. Quantities of soil retained on the landscape and potential erosion for multiple land cover scenarios relative to current (NLCD 2011) conditions were calculated for each calendar month, and summed to yield annual estimations at a 30-meter grid cell. Continental-scale data used included MODIS NDVI data (2000-2014) to estimate monthly USLE C-factors, gridded soil survey geographic (gSSURGO) soils data (annual USLE K factor), PRISM rainfall data (monthly USLE
Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.
Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J
2016-07-01
Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analyses on a monthly or yearly basis, but their approaches suffer from a time lag between an event, its detection, interpretation and potential actions. The availability of on-line measurement data on many WWTPs should theoretically enable a decrease in management response time through daily benchmarking. Unfortunately, this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performances. In EOS, the energy KPIs take the pollutant load into consideration in order to enable comparison between different plants. For example, EOS does not analyse absolute energy consumption but energy consumption per unit of pollutant load. This approach enables the comparison of performances for plants with different loads or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation.
By coupling the parameter estimation with an interval based benchmark approach, the authors propose an effective, fast and reproducible way to manage infrequent inlet measurements. Its use enables benchmarking on a daily basis and prepares the ground for further investigation. Copyright © 2016 Elsevier Inc. All rights reserved.
The Diabetes Management Education Program in South Texas: An Economic and Clinical Impact Analysis.
Kash, Bita A; Lin, Szu-Hsuan; Baek, Juha; Ohsfeldt, Robert L
2017-01-01
Diabetes is a major chronic disease that can lead to serious health problems and high healthcare costs without appropriate disease management and treatment. In the United States, the number of people diagnosed with diabetes and the cost of diabetes treatment have increased dramatically over time. To improve patients' self-management skills and clinical outcomes, diabetes management education (DME) programs have been developed and operated in various regions. This community case study explores and calculates the economic and clinical impacts of expanding a model DME program into 26 counties located in South Texas. The study sample includes 355 patients with type 2 diabetes and a follow-up hemoglobin A1c level measurement among 1,275 individuals who participated in the DME program between September 2012 and August 2013. We used Gilmer's cost-differentials model and the United Kingdom Prospective Diabetes Study (UKPDS) Risk Engine methodology to predict 3-year healthcare cost savings and 10-year clinical benefits of implementing a DME program in the selected 26 Texas counties. Changes in estimated 3-year cost and the estimated treatment effect were based on baseline hemoglobin A1c level. The average 3-year reduction in medical treatment costs per program participant was $2,033 (in 2016 dollars). The total healthcare cost savings for the 26 targeted counties increase as the program participation rate increases. The total projected cost saving ranges from $12 million at a 5% participation rate to $185 million at a 75% participation rate. A 10-year outlook on additional clinical benefits associated with the implementation and expansion of the DME program at 60% participation is estimated to result in approximately 4,838 avoided coronary heart disease cases and 392 avoided strokes. The implementation of this model DME program in the selected 26 counties would contribute to substantial healthcare cost savings and clinical benefits.
Organizations that provide DME services may benefit from reduction in medical treatment costs and improvement in clinical outcomes for populations with diabetes.
The impact of municipal solid waste management on greenhouse gas emissions in the United States.
Weitz, Keith A; Thorneloe, Susan A; Nishtala, Subba R; Yarkosky, Sherry; Zannes, Maria
2002-09-01
Technological advancements, environmental regulations, and emphasis on resource conservation and recovery have greatly reduced the environmental impacts of municipal solid waste (MSW) management, including emissions of greenhouse gases (GHGs). This study was conducted using a life-cycle methodology to track changes in GHG emissions during the past 25 years from the management of MSW in the United States. For the baseline year of 1974, MSW management consisted of limited recycling, combustion without energy recovery, and landfilling without gas collection or control. This was compared with data for 1980, 1990, and 1997, accounting for changes in MSW quantity, composition, management practices, and technology. Over time, the United States has moved toward increased recycling, composting, combustion (with energy recovery) and landfilling with gas recovery, control, and utilization. These changes were accounted for with historical data on MSW composition, quantities, management practices, and technological changes. Included in the analysis were the benefits of materials recycling and energy recovery to the extent that these displace virgin raw materials and fossil fuel electricity production, respectively. Carbon sinks associated with MSW management also were addressed. The results indicate that the MSW management actions taken by U.S. communities have significantly reduced potential GHG emissions despite an almost 2-fold increase in waste generation. GHG emissions from MSW management were estimated to be 36 million metric tons carbon equivalents (MMTCE) in 1974 and 8 MMTCE in 1997. If MSW were being managed today as it was in 1974, GHG emissions would be approximately 60 MMTCE.
Sankaranarayanan, K; Chakraborty, R
2000-10-16
This paper recapitulates the advances in the field of genetic risk estimation that have occurred during the past decade and using them as a basis, presents revised estimates of genetic risks of exposure to radiation. The advances include: (i) an upward revision of the estimates of incidence for Mendelian diseases (2.4% now versus 1.25% in 1993); (ii) the introduction of a conceptual change for calculating doubling doses; (iii) the elaboration of methods to estimate the mutation component (i.e. the relative increase in disease frequency per unit relative increase in mutation rate) and the use of the estimates obtained through these methods for assessing the impact of induced mutations on the incidence of Mendelian and chronic multifactorial diseases; (iv) the introduction of an additional factor called the "potential recoverability correction factor" in the risk equation to bridge the gap between radiation-induced mutations that have been recovered in mice and the risk of radiation-inducible genetic disease in human live births and (v) the introduction of the concept that the adverse effects of radiation-induced genetic damage are likely to be manifest predominantly as multi-system developmental abnormalities in the progeny. For all classes of genetic disease (except congenital abnormalities), the estimates of risk have been obtained using a doubling dose of 1 Gy. For a population exposed to low-LET, chronic/low-dose irradiation, the current estimates for the first generation progeny are the following (all estimates per million live born progeny per Gy of parental irradiation): autosomal dominant and X-linked diseases, approximately 750-1500 cases; autosomal recessive, nearly zero and chronic multifactorial diseases, approximately 250-1200 cases. For congenital abnormalities, the estimate is approximately 2000 cases and is based on mouse data on developmental abnormalities.
The total risk per Gy is of the order of approximately 3000-4700 cases, which represent approximately 0.4-0.6% of the baseline frequency of these diseases (738,000 per million) in the population.
Improving absolute gravity estimates by the Lp-norm approximation of the ballistic trajectory
NASA Astrophysics Data System (ADS)
Nagornyi, V. D.; Svitlov, S.; Araya, A.
2016-04-01
Iteratively re-weighted least squares (IRLS) was used to simulate the Lp-norm approximation of the ballistic trajectory in absolute gravimeters. Two iterations of the IRLS delivered sufficient accuracy of the approximation without a significant bias. The simulations were performed on different samplings and perturbations of the trajectory. For platykurtic distributions of the perturbations, the Lp-approximation with 3 < p < 4 was found to yield several times more precise gravity estimates compared to standard least squares. The simulation results were confirmed by processing real gravity observations performed under excessive-noise conditions.
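A minimal sketch of the technique, under assumed values (the g, sampling grid, and noise level below are hypothetical): fit the trajectory z(t) = z0 + v0*t + g*t^2/2 by ordinary least squares, then re-weight residuals with w_i = |r_i|^(p-2) for a couple of IRLS passes to approximate the Lp-norm fit.

```python
import numpy as np

def lp_fit_trajectory(t, z, p=3.5, n_iter=2):
    """Approximate Lp-norm fit of z = z0 + v0*t + (g/2)*t**2 via IRLS.
    Weights |r|**(p-2) re-weight an ordinary least-squares solution;
    two iterations suffice per the abstract."""
    A = np.column_stack([np.ones_like(t), t, 0.5 * t**2])
    theta = np.linalg.lstsq(A, z, rcond=None)[0]       # OLS starting point
    for _ in range(n_iter):
        r = z - A @ theta
        w = np.abs(r) ** (p - 2) + 1e-15               # avoid zero weights
        sw = np.sqrt(w)
        theta = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)[0]
    return theta  # [z0, v0, g]

# Synthetic free-fall record with platykurtic (uniform) perturbations
rng = np.random.default_rng(7)
t = np.linspace(0.0, 0.2, 500)
z_true = 0.1 + 0.5 * t + 0.5 * 9.81 * t**2
z = z_true + rng.uniform(-1e-4, 1e-4, t.size)

g_est = lp_fit_trajectory(t, z)[2]   # recovered gravity value
```

For uniform (platykurtic) noise, weights that grow with the residual (p > 2) lean on the informative extreme residuals, which is the abstract's rationale for 3 < p < 4.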
NASA Technical Reports Server (NTRS)
Frehlich, Rod
1993-01-01
Calculations of the exact Cramer-Rao Bound (CRB) for unbiased estimates of the mean frequency, signal power, and spectral width of Doppler radar/lidar signals (a Gaussian random process) are presented. Approximate CRB's are derived using the Discrete Fourier Transform (DFT). These approximate results are equal to the exact CRB when the DFT coefficients are mutually uncorrelated. Previous high SNR limits for CRB's are shown to be inaccurate because the discrete summations cannot be approximated with integration. The performance of an approximate maximum likelihood estimator for mean frequency approaches the exact CRB for moderate signal to noise ratio and moderate spectral width.
NASA Technical Reports Server (NTRS)
Fukumori, Ichiro; Malanotte-Rizzoli, Paola
1995-01-01
A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximation of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements are examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.
Estimating Independent Locally Shifted Random Utility Models for Ranking Data
ERIC Educational Resources Information Center
Lam, Kar Yin; Koning, Alex J.; Franses, Philip Hans
2011-01-01
We consider the estimation of probabilistic ranking models in the context of conjoint experiments. By using approximate rather than exact ranking probabilities, we avoided the computation of high-dimensional integrals. We extended the approximation technique proposed by Henery (1981) in the context of the Thurstone-Mosteller-Daniels model to any…
NASA Astrophysics Data System (ADS)
Fewtrell, Timothy J.; Duncan, Alastair; Sampson, Christopher C.; Neal, Jeffrey C.; Bates, Paul D.
2011-01-01
This paper describes benchmark testing of a diffusive and an inertial formulation of the de St. Venant equations implemented within the LISFLOOD-FP hydraulic model using high resolution terrestrial LiDAR data. The models are applied to a hypothetical flooding scenario in a section of Alcester, UK, which experienced significant surface water flooding in the June and July 2007 floods. The sensitivity of water elevation and velocity simulations to model formulation and grid resolution is analyzed. The differences in depth and velocity estimates between the diffusive and inertial approximations are within 10% of the simulated value, but inertial effects persist at the wetting front in steep catchments. Both models portray a similar scale dependency between 50 cm and 5 m resolution, which reiterates previous findings that errors in coarse-scale topographic data sets are significantly larger than differences between numerical approximations. In particular, these results confirm the need to distinctly represent the camber and curbs of roads in the numerical grid when simulating surface water flooding events. Furthermore, although water depth estimates at grid scales coarser than 1 m appear robust, velocity estimates at these scales seem inconsistent compared to the 50 cm benchmark. The inertial formulation is shown to reduce computational cost by up to three orders of magnitude at high resolutions, thus making simulations at this scale viable in practice compared to diffusive models. For the first time, this paper highlights the utility of high resolution terrestrial LiDAR data to inform small-scale flood risk management studies.
Improved first-order uncertainty method for water-quality modeling
Melching, C.S.; Anmangandla, S.
1992-01-01
Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have frequently been used to estimate probability distributions for water-quality model output because of their simplicity. Each method has its drawbacks: Monte Carlo simulation's is mainly computational time; first-order analysis raises questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, in which the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of the critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation, using two orders of magnitude less computer time, regardless of the probability distributions assumed for the uncertain model parameters.
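For context, the benchmark Monte Carlo approach against which the advanced first-order method is compared can be sketched for the Streeter-Phelps critical deficit; the parameter distributions and threshold below are hypothetical, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical lognormal parameter uncertainty (rates in 1/day, BOD in mg/L)
kd = rng.lognormal(np.log(0.35), 0.1, n)   # deoxygenation rate
ka = rng.lognormal(np.log(0.70), 0.1, n)   # reaeration rate
L0 = rng.lognormal(np.log(15.0), 0.1, n)   # initial BOD
D0 = 1.0                                   # initial DO deficit, mg/L

# Streeter-Phelps critical time and critical dissolved-oxygen deficit
tc = np.log((ka / kd) * (1.0 - D0 * (ka - kd) / (kd * L0))) / (ka - kd)
Dc = (kd * L0 / ka) * np.exp(-kd * tc)

# Exceedance probability of a (hypothetical) 4.5 mg/L deficit threshold
p_exceed = float(np.mean(Dc > 4.5))
```

The advanced first-order method approximates this exceedance curve by re-linearizing at each output level of interest rather than sampling, which is where the two-orders-of-magnitude time saving comes from.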
Flood area and damage estimation in Zhejiang, China.
Liu, Renyi; Liu, Nan
2002-09-01
A GIS-based method to estimate flood area and damage is presented in this paper, oriented to developing countries like China, where labor is readily available for GIS data collection but tools such as HEC-GeoRAS may not be readily available. At present, local authorities in developing countries are often not predisposed to pay for commercial GIS platforms. To calculate flood area, two cases, non-source flooding and source flooding, are distinguished, and a seed-spread algorithm suitable for source flooding is described. The flood damage estimate is calculated in raster format by overlaying the flood area with thematic maps and relating this to other socioeconomic data. Several measures used to improve geometric accuracy and computing efficiency are presented. The management issues related to the application of this method, including the cost-effectiveness of the approximate method in practice and the use of two complementary technical approaches (self-programming and adopting commercial GIS software), are also discussed. The applications show that this approach has practical significance for flood fighting and control in developing countries like China.
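The paper does not give the algorithm's details, but a seed-spread calculation for source flooding is essentially a breadth-first flood fill over the elevation raster: starting from the source cell, water spreads to adjacent cells whose ground elevation lies below the water surface. The toy DEM below is hypothetical.

```python
from collections import deque
import numpy as np

def source_flood(dem, seed, water_level):
    """Seed-spread (BFS) flood fill: mark cells below water_level that are
    4-connected to the seed cell through other flooded cells."""
    rows, cols = dem.shape
    flooded = np.zeros_like(dem, dtype=bool)
    if dem[seed] >= water_level:
        return flooded                      # source itself is dry
    q = deque([seed])
    flooded[seed] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and not flooded[nr, nc] and dem[nr, nc] < water_level):
                flooded[nr, nc] = True
                q.append((nr, nc))
    return flooded

# Hypothetical 5x5 DEM: low basin ringed by high ground, with one high cell inside
dem = np.array([[5, 5, 5, 5, 5],
                [5, 1, 1, 2, 5],
                [5, 1, 6, 2, 5],
                [5, 3, 3, 9, 5],
                [5, 5, 5, 5, 5]])
flooded = source_flood(dem, seed=(1, 1), water_level=4.0)
```

Unlike a simple threshold (non-source flooding), the seed-spread keeps low-lying but hydraulically disconnected cells dry, which is why the two cases are distinguished.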
Estimation of building-related construction and demolition waste in Shanghai.
Ding, Tao; Xiao, Jianzhuang
2014-11-01
A methodology is proposed to estimate the quantity and composition of building-related construction and demolition (C&D) waste in a fast-developing region like Shanghai, PR China. The variety of structure types and building waste intensities resulting from progressively updated building design and structural codes in different decades is considered in this regional C&D waste estimation study. It is concluded that approximately 13.71 million tons of C&D waste was generated in 2012 in Shanghai, of which more than 80% was concrete, bricks and blocks. Analysis from this study can help C&D waste regulators and researchers formulate precise policies and specifications. In fact, at least half of this enormous amount of C&D waste could be recycled if proper recycling technologies and measures were implemented. Appropriate management would be economically and environmentally beneficial to Shanghai, where the per capita output of C&D waste was as high as 842 kg per year in 2010. Copyright © 2014 Elsevier Ltd. All rights reserved.
Birch, Gavin F; Taylor, Stuart E
2002-06-01
Sediments in the Port Jackson estuary are polluted by a wide range of toxicants and concentrations are among the highest reported for any major harbor in the world. Sediment quality guidelines (SQGs), developed by the National Oceanographic and Atmospheric Administration (NOAA) in the United States are used to estimate possible adverse biological effects of sedimentary contaminants in Port Jackson to benthic animals. The NOAA guidelines indicate that Pb, Zn, DDD, and DDE are the most likely contaminants to cause adverse biological effects in Port Jackson. On an individual chemical basis, the detrimental effects due to these toxicants may occur over extensive areas of the harbor, i.e., about 40%, 30%, 15% and 50%, respectively. The NOAA SQGs can also be used to estimate the probability of sediment toxicity for contaminant mixtures by determining the number of contaminants exceeding an upper guideline value (effects range medium, or ERM), which predicts probable adverse biological effects. The exceedence approach is used in the current study to estimate the probability of sediment toxicity and to prioritize the harbour in terms of possible adverse effects on sediment-dwelling animals. Approximately 1% of the harbor is mantled with sediment containing more than ten contaminants exceeding their respective ERM concentrations and, based on NOAA data, these sediments have an 80% probability of being toxic. Sediment with six to ten contaminants exceeding their respective ERM guidelines extend over approximately 4% of the harbor and have a 57% probability of toxicity. These areas are located in the landward reaches of embayments in the upper and central harbor in proximity to the most industrialised and urbanized part of the catchment. Sediment in a further 17% of the harbor has between one and five exceedences and has a 32% probability of being toxic. 
The application of SQGs developed by NOAA has not been tested outside North America, and the validity of using them in Port Jackson has yet to be demonstrated. The screening approach adopted here is to use SQGs to identify contaminants of concern and to determine areas of environmental risk. The practical application and management implications of the results of this investigation are discussed.
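The exceedance screening described above reduces to counting, per sediment sample, how many contaminants exceed their ERM guideline and mapping that count to the empirical toxicity probabilities quoted in the abstract (32%, 57%, 80%). The ERM values and concentrations below are hypothetical illustrations, not NOAA's published numbers.

```python
def toxicity_screen(concentrations, erm):
    """Count ERM exceedances for one sample and map the count to the
    probability-of-toxicity bands quoted in the abstract."""
    n_exceed = sum(1 for chem, c in concentrations.items()
                   if chem in erm and c > erm[chem])
    if n_exceed > 10:
        p = 0.80          # >10 exceedances: 80% probability of toxicity
    elif n_exceed >= 6:
        p = 0.57          # 6-10 exceedances: 57%
    elif n_exceed >= 1:
        p = 0.32          # 1-5 exceedances: 32%
    else:
        p = 0.0           # below all ERMs; the abstract quotes no band here
    return n_exceed, p

# Hypothetical sample (mg/kg) and hypothetical ERM guideline values
sample = {"Pb": 300.0, "Zn": 500.0, "Cu": 50.0}
erm = {"Pb": 218.0, "Zn": 410.0, "Cu": 270.0}
n_exceed, p_tox = toxicity_screen(sample, erm)
```

Applying such a screen cell-by-cell over a mapped harbor is what yields the area percentages (1%, 4%, 17%) reported above.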
Knox, Stephanie A; Chondros, Patty
2004-01-01
Background Cluster sample study designs are cost effective, however cluster samples violate the simple random sample assumption of independence of observations. Failure to account for the intra-cluster correlation of observations when sampling through clusters may lead to an under-powered study. Researchers therefore need estimates of intra-cluster correlation for a range of outcomes to calculate sample size. We report intra-cluster correlation coefficients observed within a large-scale cross-sectional study of general practice in Australia, where the general practitioner (GP) was the primary sampling unit and the patient encounter was the unit of inference. Methods Each year the Bettering the Evaluation and Care of Health (BEACH) study recruits a random sample of approximately 1,000 GPs across Australia. Each GP completes details of 100 consecutive patient encounters. Intra-cluster correlation coefficients were estimated for patient demographics, morbidity managed and treatments received. Intra-cluster correlation coefficients were estimated for descriptive outcomes and for associations between outcomes and predictors and were compared across two independent samples of GPs drawn three years apart. Results Between April 1999 and March 2000, a random sample of 1,047 Australian general practitioners recorded details of 104,700 patient encounters. Intra-cluster correlation coefficients for patient demographics ranged from 0.055 for patient sex to 0.451 for language spoken at home. Intra-cluster correlations for morbidity variables ranged from 0.005 for the management of eye problems to 0.059 for management of psychological problems. Intra-cluster correlation for the association between two variables was smaller than the descriptive intra-cluster correlation of each variable. When compared with the April 2002 to March 2003 sample (1,008 GPs) the estimated intra-cluster correlation coefficients were found to be consistent across samples. 
Conclusions The demonstrated precision and reliability of the estimated intra-cluster correlations indicate that these coefficients will be useful for calculating sample sizes in future general practice surveys that use the GP as the primary sampling unit. PMID:15613248
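A common way to obtain such coefficients for a balanced cluster sample is the one-way ANOVA estimator of the intra-cluster correlation; the simulated GP-cluster data below are hypothetical, not BEACH data.

```python
import numpy as np

def icc_anova(data):
    """One-way ANOVA estimator of the intra-cluster correlation for a
    balanced design: data has shape (clusters, members_per_cluster)."""
    k, m = data.shape
    grand = data.mean()
    cluster_means = data.mean(axis=1)
    msb = m * ((cluster_means - grand) ** 2).sum() / (k - 1)            # between
    msw = ((data - cluster_means[:, None]) ** 2).sum() / (k * (m - 1))  # within
    return (msb - msw) / (msb + (m - 1) * msw)

# Simulate 200 "GPs" each recording 100 "encounters"; true ICC = 0.2
rng = np.random.default_rng(3)
cluster_effect = rng.normal(0.0, np.sqrt(0.2), size=(200, 1))
data = cluster_effect + rng.normal(0.0, np.sqrt(0.8), size=(200, 100))
rho = float(icc_anova(data))
```

A coefficient like rho then feeds the usual design-effect sample-size inflation, 1 + (m - 1) * rho, for surveys clustered by GP.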
Meta-Regression Approximations to Reduce Publication Selection Bias
ERIC Educational Resources Information Center
Stanley, T. D.; Doucouliagos, Hristos
2014-01-01
Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with…
Stability of recursive out-of-sequence measurement filters: an open problem
NASA Astrophysics Data System (ADS)
Chen, Lingji; Moshtagh, Nima; Mehra, Raman K.
2011-06-01
In many applications where communication delays are present, measurements with earlier time stamps can arrive out of sequence, i.e., after state estimates have been obtained for the current time instant. To incorporate such an Out-Of-Sequence Measurement (OOSM), many algorithms have been proposed in the literature to obtain or approximate the optimal estimate that would have been obtained if the OOSM had arrived in sequence. When OOSMs occur repeatedly, the approximate estimate produced by incorporating one OOSM has to serve as the basis for incorporating yet another OOSM. The question of whether this "approximation of approximation" is well behaved, i.e., whether approximation errors accumulate in a recursive setting, has not been adequately addressed in the literature. This paper draws attention to the stability question of recursive OOSM processing filters, formulates the problem in a specific setting, and presents simulation results suggesting that such filters are indeed well behaved. Our hope is that more research will be conducted in the future to rigorously establish the stability properties of these filters.
Meta-regression approximations to reduce publication selection bias.
Stanley, T D; Doucouliagos, Hristos
2014-03-01
Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with standard error (PEESE), is shown to have the smallest bias and mean squared error in most cases and to outperform conventional meta-analysis estimators, often by a great deal. Monte Carlo simulations also demonstrate how a new hybrid estimator that conditionally combines PEESE and the Egger regression intercept can provide a practical solution to publication selection bias. PEESE is easily expanded to accommodate systematic heterogeneity along with complex and differential publication selection bias that is related to moderator variables. By providing an intuitive reason for these approximations, we can also explain why the Egger regression works so well and when it does not. These meta-regression methods are applied to several policy-relevant areas of research including antidepressant effectiveness, the value of a statistical life, the minimum wage, and nicotine replacement therapy. Copyright © 2013 John Wiley & Sons, Ltd.
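The PEESE estimator described above can be sketched as a weighted least-squares regression of each study's effect estimate on its squared standard error (no linear term), with inverse-variance weights; the intercept is the selection-corrected effect. The data below are simulated for illustration and are not from the paper.

```python
import numpy as np

# PEESE sketch: effect_i = b0 + b1 * SE_i^2 + noise, weighted by 1/SE_i^2.
# The intercept b0 approximates the effect corrected for selection bias.
rng = np.random.default_rng(0)
n = 40
se = rng.uniform(0.05, 0.5, n)                        # per-study standard errors
true_effect = 0.2
effects = true_effect + 0.8 * se**2 + rng.normal(0, se)  # biased observed effects

X = np.column_stack([np.ones(n), se**2])              # intercept + SE^2 term
W = np.diag(1.0 / se**2)                              # inverse-variance weights
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effects)
print("PEESE corrected effect:", beta[0])             # close to the true effect
print("naive mean of effects: ", effects.mean())      # inflated by the SE^2 term
```

The same regression with SE (rather than SE²) as the covariate gives the Egger-style precision-effect test the abstract contrasts PEESE against.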
Basu, Sanjay; Phillips, Russell S; Bitton, Asaf; Song, Zirui; Landon, Bruce E
2015-10-20
Physicians have traditionally been reimbursed for face-to-face visits. A new non-visit-based payment for chronic care management (CCM) of Medicare patients took effect in January 2015. To estimate financial implications of CCM payment for primary care practices. Microsimulation model incorporating national data on primary care use, staffing, expenditures, and reimbursements. National Ambulatory Medical Care Survey and other published sources. Medicare patients. 10 years. Practice-level. Comparison of CCM delivery approaches by staff and physicians. Net revenue per full-time equivalent (FTE) physician; time spent delivering CCM services. If nonphysician staff were to deliver CCM services, net revenue to practices would increase despite opportunity and staffing costs. Practices could expect approximately $332 per enrolled patient per year (95% CI, $234 to $429) if CCM services were delivered by registered nurses (RNs), approximately $372 (CI, $276 to $468) if services were delivered by licensed practical nurses, and approximately $385 (CI, $286 to $485) if services were delivered by medical assistants. For a typical practice, this equates to more than $75,000 of net annual revenue per FTE physician and 12 hours of nursing service time per week if 50% of eligible patients enroll. At a minimum, 131 Medicare patients (CI, 115 to 140 patients) must enroll for practices to recoup the salary and overhead costs of hiring a full-time RN to provide CCM services. If physicians were to deliver all CCM services, approximately 25% of practices nationwide could expect net revenue losses due to opportunity costs of face-to-face visit time. The CCM program may alter long-term primary care use, which is difficult to predict. Practices that rely on nonphysician team members to deliver CCM services will probably experience substantial net revenue gains but must enroll a sufficient number of eligible patients to recoup costs. None.
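The break-even logic in this abstract reduces to simple arithmetic: net revenue per enrolled patient times enrollment must cover the fixed staffing cost. The dollar figures below are illustrative assumptions, not the microsimulation model's actual inputs.

```python
import math

# Back-of-envelope version of the break-even calculation.
# All figures are hypothetical placeholders for demonstration.
ccm_payment_per_patient = 480.0    # assumed annual CCM reimbursement per patient
variable_cost_per_patient = 150.0  # assumed per-patient billing/supply cost
rn_annual_cost = 43_000.0          # assumed salary + overhead share for CCM duties

net_per_patient = ccm_payment_per_patient - variable_cost_per_patient
break_even = math.ceil(rn_annual_cost / net_per_patient)
print("net revenue per enrolled patient:", net_per_patient)
print("patients needed to recoup RN cost:", break_even)
```

With these assumed numbers the practice needs 131 enrollees before the RN pays for herself; the paper's actual estimate comes from a far richer simulation of visit patterns and opportunity costs.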
Dzubak, Allison L.; Krogel, Jaron T.; Reboredo, Fernando A.
2017-07-10
The necessarily approximate evaluation of non-local pseudopotentials in diffusion Monte Carlo (DMC) introduces localization errors. In this paper, we estimate these errors for two families of non-local pseudopotentials for the first-row transition metal atoms Sc–Zn using an extrapolation scheme and multideterminant wavefunctions. Sensitivities of the error in the DMC energies to the Jastrow factor are used to estimate the quality of two sets of pseudopotentials with respect to locality error reduction. The locality approximation and T-moves scheme are also compared for accuracy of total energies. After estimating the removal of the locality and T-moves errors, we present the range of fixed-node energies between a single determinant description and a full valence multideterminant complete active space expansion. The results for these pseudopotentials agree with previous findings that the locality approximation is less sensitive to changes in the Jastrow than T-moves yielding more accurate total energies, however not necessarily more accurate energy differences. For both the locality approximation and T-moves, we find decreasing Jastrow sensitivity moving left to right across the series Sc–Zn. The recently generated pseudopotentials of Krogel et al. reduce the magnitude of the locality error compared with the pseudopotentials of Burkatzki et al. by an average estimated 40% using the locality approximation. The estimated locality error is equivalent for both sets of pseudopotentials when T-moves is used. Finally, for the Sc–Zn atomic series with these pseudopotentials, and using up to three-body Jastrow factors, our results suggest that the fixed-node error is dominant over the locality error when a single determinant is used.
Estimate of the cosmological bispectrum from the MAXIMA-1 cosmic microwave background map.
Santos, M G; Balbi, A; Borrill, J; Ferreira, P G; Hanany, S; Jaffe, A H; Lee, A T; Magueijo, J; Rabii, B; Richards, P L; Smoot, G F; Stompor, R; Winant, C D; Wu, J H P
2002-06-17
We use the measurement of the cosmic microwave background taken during the MAXIMA-1 flight to estimate the bispectrum of cosmological perturbations. We propose an estimator for the bispectrum that is appropriate in the flat sky approximation, apply it to the MAXIMA-1 data, and evaluate errors using bootstrap methods. We compare the estimated value with what would be expected if the sky signal were Gaussian and find that it is indeed consistent, with a χ² per degree of freedom of approximately unity. This measurement places constraints on models of inflation.
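The bootstrap error evaluation mentioned above follows a generic recipe: resample the data with replacement, recompute the statistic each time, and take the spread of the resampled values as the error bar. A minimal sketch with a toy third-moment statistic (the data and statistic are stand-ins, not the MAXIMA pipeline):

```python
import numpy as np

# Generic bootstrap error bar for an estimator.
rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, size=500)     # stand-in for map-derived quantities

def estimator(x):
    return np.mean(x**3)                  # toy skewness-like (bispectrum-flavored) statistic

# Resample, re-estimate, and summarize the spread of the estimates.
boot = np.array([estimator(rng.choice(data, size=data.size, replace=True))
                 for _ in range(1000)])
print("estimate:", estimator(data))
print("bootstrap standard error:", boot.std(ddof=1))
```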
The effect of managed care on the incomes of primary care and specialty physicians.
Simon, C J; Dranove, D; White, W D
1998-08-01
To determine the effects of managed care growth on the incomes of primary care and specialist physicians. Data on physician income and managed care penetration from the American Medical Association, Socioeconomic Monitoring System (SMS) Surveys for 1985 and 1993. We use secondary data from the Area Resource File and U.S. Census publications to construct geographical socioeconomic control variables, and we examine data from the National Residency Matching Program. Two-stage least squares regressions are estimated to determine the effect of local managed care penetration on specialty-specific physician incomes, while controlling for factors associated with local variation in supply and demand and accounting for the potential endogeneity of managed care penetration. The SMS survey is an annual telephone survey conducted by the American Medical Association of approximately one percent of nonfederal, post-residency U.S. physicians. Response rates average 60-70 percent, and analysis is weighted to account for nonresponse bias. The incomes of primary care physicians rose most rapidly in states with higher managed care growth, while the income growth of hospital-based specialists was negatively associated with managed care growth. Incomes of medical subspecialists were not significantly affected by managed care growth over this period. These findings are consistent with trends in postgraduate training choices of new physicians. Evidence is consistent with a relative increase in the demand for primary care physicians and a decline in the demand for some specialists under managed care. Market adjustments have important implications for health policy and physician workforce planning.
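The two-stage least squares procedure used above can be sketched directly: stage one regresses the endogenous regressor (managed care penetration) on an instrument, stage two regresses the outcome (income) on the stage-one fitted values. The data and coefficients below are simulated for illustration only.

```python
import numpy as np

# Schematic 2SLS with a single instrument, on simulated data.
rng = np.random.default_rng(2)
n = 1000
z = rng.normal(size=n)                     # instrument
u = rng.normal(size=n)                     # unobserved confounder
penetration = 0.8 * z + 0.5 * u + rng.normal(size=n)
income = 2.0 * penetration + 1.5 * u + rng.normal(size=n)   # true effect = 2.0

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X1 = np.column_stack([np.ones(n), z])
penetration_hat = X1 @ ols(X1, penetration)          # stage 1: fitted values
X2 = np.column_stack([np.ones(n), penetration_hat])
beta_iv = ols(X2, income)[1]                         # stage 2: slope
beta_ols = ols(np.column_stack([np.ones(n), penetration]), income)[1]
print("2SLS estimate (consistent): ", beta_iv)
print("naive OLS estimate (biased):", beta_ols)
```

The naive OLS slope absorbs the confounder's contribution, while the instrumented estimate recovers the structural coefficient; this is the same logic the paper applies to endogenous managed care penetration.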
Shakeel, Muhammad; Farooq, Muhammad; Nasim, Wajid; Akram, Waseem; Khan, Fawad Zafar Ahmad; Jaleel, Waqar; Zhu, Xun; Yin, Haichen; Li, Shuzhong; Fahad, Shah; Hussain, Saddam; Chauhan, Bhagirath Singh; Jin, Fengliang
2017-06-01
The diamondback moth, Plutella xylostella, is recognized as a widely distributed destructive insect pest of Brassica worldwide. The management of this pest is a serious issue, and an estimated annual cost of its management has reached approximately US$4 billion. Despite the fact that chemicals are a serious threat to the environment, lots of chemicals are applied for controlling various insect pests especially P. xylostella. An overreliance on chemical control has not only led to the evolution of resistance to insecticides and to a reduction of natural enemies but also has polluted various components of water, air, and soil ecosystem. In the present scenario, there is a need to implement an environmentally friendly integrated pest management (IPM) approach with new management tactics (microbial control, biological control, cultural control, mating disruption, insecticide rotation strategies, and plant resistance) for an alternative to chemical control. The IPM approach is not only economically beneficial but also reduces the environmental and health risks. The present review synthesizes published information on the insecticide resistance against P. xylostella and emphasizes on adopting an alternative environmentally friendly IPM approach for controlling P. xylostella in China.
NASA Astrophysics Data System (ADS)
Wu, C. Z.; Huang, G. H.; Yan, X. P.; Cai, Y. P.; Li, Y. P.
2010-05-01
Large crowds are increasingly common at political, social, economic, cultural and sports events in urban areas. This has led to attention on the management of evacuations under such situations. In this study, we optimise an approximation method for vehicle allocation and route planning in case of an evacuation. This method, based on an interval-parameter multi-objective optimisation model, has potential for use in a flexible decision support system for evacuation management. The modeling solutions are obtained by sequentially solving two sub-models corresponding to lower- and upper-bounds for the desired objective function value. The interval solutions are feasible and stable in the given decision space, and this may reduce the negative effects of uncertainty, thereby improving decision makers' estimates under different conditions. The resulting model can be used for a systematic analysis of the complex relationships among evacuation time, cost and environmental considerations. The results of a case study used to validate the proposed model show that the model does generate useful solutions for planning evacuation management and practices. Furthermore, these results are useful for evacuation planners, not only in making vehicle allocation decisions but also for providing insight into the tradeoffs among evacuation time, environmental considerations and economic objectives.
NASA Astrophysics Data System (ADS)
Magnani, Federico; Dewar, Roderick C.; Borghetti, Marco
2009-04-01
Leakage (spillover) refers to the unintended negative (positive) consequences of forest carbon (C) management in one area on C storage elsewhere. For example, the local C storage benefit of less intensive harvesting in one area may be offset, partly or completely, by intensified harvesting elsewhere in order to meet global timber demand. We present the results of a theoretical study aimed at identifying the key factors determining leakage and spillover, as a prerequisite for more realistic numerical studies. We use a simple model of C storage in managed forest ecosystems and their wood products to derive approximate analytical expressions for the leakage induced by decreasing the harvesting frequency of existing forest, and the spillover induced by establishing new plantations, assuming a fixed total wood production from local and remote (non-local) forests combined. We find that leakage and spillover depend crucially on the growth rates, wood product lifetimes and woody litter decomposition rates of local and remote forests. In particular, our results reveal critical thresholds for leakage and spillover, beyond which effects of forest management on remote C storage exceed local effects. Order of magnitude estimates of leakage indicate its potential importance at global scales.
Pandiselvi, S; Raja, R; Cao, Jinde; Rajchakit, G; Ahmad, Bashir
2018-01-01
This work addresses the problem of approximating the state variables of discrete-time stochastic genetic regulatory networks with leakage, distributed, and probabilistic measurement delays. We design a linear estimator such that the absorption of mRNA and protein can be approximated via known measurement outputs. By utilizing a Lyapunov-Krasovskii functional and some stochastic analysis, we obtain a stability condition for the estimation error system in the form of linear matrix inequalities (LMIs), under which the estimation error dynamics is robustly exponentially stable. The obtained LMI conditions can be readily solved by available software packages. The explicit expression of the desired estimator is also given in the main section. Finally, two illustrative mathematical examples demonstrate the advantages of the proposed results.
Thunderstorm vertical velocities and mass flux estimated from satellite data
NASA Technical Reports Server (NTRS)
Adler, R. F.; Fenn, D. D.
1979-01-01
Infrared geosynchronous satellite data with an interval of five minutes between images are used to estimate thunderstorm top ascent rates on two case study days. A mean vertical velocity of 3.5 m/s for 19 clouds is calculated at a height of 8.7 km. This upward motion is representative of an area of approximately 10 km on a side. Thunderstorm mass flux of approximately 2 × 10^11 g/s is calculated, which compares favorably with previous estimates. There is a significant difference in the mean calculated vertical velocity between elements associated with severe weather reports (w̄ = 4.6 m/s) and those with no such reports (2.5 m/s). Calculations were made using a velocity profile for an axially symmetric jet to estimate the peak updraft velocity. For the largest observed w value of 7.8 m/s the calculation indicates a peak updraft of approximately 50 m/s.
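The core calculation in this abstract is a finite difference: cloud-top height change between successive images divided by the 5-minute image interval. The heights below are illustrative, not the case-study values.

```python
# Cloud-top ascent rate from successive satellite images (hypothetical heights).
heights_km = [8.0, 9.1]      # cloud-top height in two successive images
interval_s = 5 * 60          # 5-minute image interval, in seconds

# Mean vertical velocity in m/s over the interval.
w = (heights_km[1] - heights_km[0]) * 1000.0 / interval_s
print("mean vertical velocity: %.1f m/s" % w)
```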
NASA Technical Reports Server (NTRS)
Bey, Kim S.; Oden, J. Tinsley
1993-01-01
A priori error estimates are derived for hp-versions of the finite element method for discontinuous Galerkin approximations of a model class of linear, scalar, first-order hyperbolic conservation laws. These estimates are derived in a mesh dependent norm in which the coefficients depend upon both the local mesh size h(sub K) and a number p(sub k) which can be identified with the spectral order of the local approximations over each element.
Hypertension and blood pressure variability management practices among physicians in Singapore
Setia, Sajita; Subramaniam, Kannan; Tay, Jam Chin; Teo, Boon Wee
2017-01-01
Purpose There are limited data on blood pressure variability (BPV) in Singapore. The absence of updated local guidelines might contribute to variations in diagnosis, treatment and control of hypertension and BPV between physicians. This study evaluated BPV awareness, hypertension management and associated training needs in physicians from Singapore. Materials and methods Physicians from Singapore were surveyed between September 8, 2016, and October 5, 2016. Those included were in public or private practice for ≥3 years, cared directly for patients ≥70% of the time and treated ≥30 patients for hypertension each month. The questionnaire covered 6 main categories: general blood pressure (BP) management, BPV awareness/diagnosis, home BP monitoring (HBPM), ambulatory BP monitoring (ABPM), BPV management and associated training needs. Results Responses from 60 physicians (30 general practitioners [GPs], 20 cardiologists, 10 nephrologists) were analyzed (77% male, 85% aged 31–60 years, mean 22 years of practice). Approximately 63% of physicians considered white-coat hypertension as part of BPV. The most common diagnostic tool was HBPM (overall 77%, GPs 63%, cardiologists 65%, nephrologists 70%), but ABPM was rated as the tool most valued by physicians (80% overall), especially specialists (97%). Withdrawn Singapore guidelines were still being used by 73% of GPs. Approximately 48% of physicians surveyed did not adhere to the BP cutoff recommended by most guidelines for diagnosing hypertension using HBPM (>135/85 mmHg). Hypertension treatment practices also varied from available guideline recommendations, although physicians did tend to use a lower BP target for patients with diabetes or kidney disease. There were a number of challenges to estimating BPV, the most common of which was patient refusal of ABPM/HBPM. The majority of physicians (82%) had no training on BPV, but stated that this would be useful. 
Conclusion There appear to be gaps in knowledge and guideline adherence relating to the assessment and management of BPV among physicians in Singapore. PMID:28761353
Grima, Ramon
2011-11-01
The mesoscopic description of chemical kinetics, the chemical master equation, can be exactly solved in only a few simple cases. The analytical intractability stems from the discrete character of the equation, and hence considerable effort has been invested in the development of Fokker-Planck equations, second-order partial differential equation approximations to the master equation. We here consider two different types of higher-order partial differential approximations, one derived from the system-size expansion and the other from the Kramers-Moyal expansion, and derive the accuracy of their predictions for chemical reaction networks composed of arbitrary numbers of unimolecular and bimolecular reactions. In particular, we show that the partial differential equation approximation of order Q from the Kramers-Moyal expansion leads to estimates of the mean number of molecules accurate to order Ω^(−(2Q−3)/2), of the variance of the fluctuations in the number of molecules accurate to order Ω^(−(2Q−5)/2), and of skewness accurate to order Ω^(−(Q−2)). We also show that for large Q, the accuracy in the estimates can be matched only by a partial differential equation approximation from the system-size expansion of approximate order 2Q. Hence, we conclude that partial differential approximations based on the Kramers-Moyal expansion generally lead to considerably more accurate estimates in the mean, variance, and skewness than approximations of the same order derived from the system-size expansion.
Physical space and its impact on waste management in the neonatal care setting
Manzi, Sean
2014-01-01
This paper reports an investigation intended to obtain some understanding of how the working environment might influence the practice and knowledge of those involved in the management of healthcare waste. The National Health Service (NHS) has a continuing waste problem, and the way it manages waste harms the environment and consumes resources. It has been estimated that the carbon footprint of the NHS in England is approximately 20 million tons of CO2e. It has been suggested that better waste segregation could lead to more effective recycling, saving up to 42,000 tonnes of CO2. This qualitative study employed non-participant observation and semi-structured interviews. The interviews were carried out with the key informants within the participating neonatal intensive care unit. Findings from this study indicate that space and the physical arrangement of the environment are significant and influential factors in clinical practice. Where the clinical environment is not supportive, poor infection control and waste management practice is likely to occur. However, proximity of staff caused by a lack of physical space might facilitate situated learning and a collective development of knowledge in practice. The implementation of sustainable waste management practices would be more likely to succeed in an environment that facilitates correct waste segregation. PMID:28989373
Validation of Nimbus-7 temperature-humidity infrared radiometer estimates of cloud type and amount
NASA Technical Reports Server (NTRS)
Stowe, L. L.
1982-01-01
Estimates of clear and low, middle, and high cloud amount in fixed geographical regions of approximately 160 km × 160 km are being made routinely from 11.5 micron radiance measurements of the Nimbus-7 Temperature-Humidity Infrared Radiometer (THIR). The purpose of validation is to determine the accuracy of the THIR cloud estimates. Validation requires that a comparison be made between the THIR estimates of cloudiness and the 'true' cloudiness. The validation results reported in this paper use human analysis of concurrent but independent satellite images with surface meteorological and radiosonde observations to approximate the 'true' cloudiness. Regression and error analyses are used to estimate the systematic and random errors of THIR-derived clear amount.
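The regression and error analysis described above amounts to comparing retrieved values against reference "truth" and splitting the discrepancy into a systematic part (mean bias) and a random part (residual scatter). The values below are synthetic stand-ins for the THIR comparisons.

```python
import numpy as np

# Systematic vs. random error of a retrieval against reference truth.
rng = np.random.default_rng(3)
truth = rng.uniform(0, 100, 200)                       # "true" cloud amount (%)
estimate = truth + 3.0 + rng.normal(0, 8, 200)         # biased, noisy retrieval

residual = estimate - truth
bias = residual.mean()                                 # systematic error
rms = residual.std(ddof=1)                             # random error
slope, intercept = np.polyfit(truth, estimate, 1)      # regression analysis
print("systematic error (bias): %.1f" % bias)
print("random error (RMS):      %.1f" % rms)
print("regression: estimate ~ %.2f * truth + %.2f" % (slope, intercept))
```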
Floods of August 21-24, 2007, in Northwestern and North-Central Ohio
Straub, David E.; Ebner, Andrew D.; Astifan, Brian M.
2009-01-01
Heavy rains in northwestern and north-central Ohio on August 19-22, 2007, caused severe flooding and widespread damages to residential, public, and commercial structures in the communities of Bluffton, Bucyrus, Carey, Columbus Grove, Crestline, Findlay, Mansfield, Ottawa, and Shelby. On August 27, 2007, the Federal Emergency Management Agency (FEMA) issued a notice of a Presidential declaration of a major disaster affecting Allen, Crawford, Hancock, Hardin, Putnam, Richland, Seneca, and Wyandot Counties as a result of the severe flooding. Rainfall totals for most of the flooded area were 3 to 5 in., with some locations reporting as much as 8 to 10 in. Three National Weather Service (NWS) gages in the area indicated a rainfall recurrence interval of greater than 1,000 years, and two indicated a recurrence interval between 500 and 1,000 years. Total damages are estimated at approximately $290 million, with 8,205 residences registering for financial assistance. The U.S. Geological Survey (USGS) computed flood recurrence intervals for peak streamflows at 22 streamgages and 8 ungaged sites in and around the area of major flooding. The peak streamflows at Sandusky River near Bucyrus streamgage and at seven of the eight ungaged sites had estimated recurrence intervals of greater than 500 years. The USGS located and surveyed 421 high-water marks and plotted high-water profiles for approximately 44.5 miles of streams throughout the nine communities.
Genetic causes of intellectual disability in a birth cohort: a population-based study.
Karam, Simone M; Riegel, Mariluce; Segal, Sandra L; Félix, Têmis M; Barros, Aluísio J D; Santos, Iná S; Matijasevich, Alicia; Giugliani, Roberto; Black, Maureen
2015-06-01
Intellectual disability affects approximately 1-3% of the population and can be caused by genetic and environmental factors. Although many studies have investigated the etiology of intellectual disability in different populations, few studies have been performed in middle-income countries. The present study estimated the prevalence of genetic causes related to intellectual disability in a cohort of children from a city in south Brazil who were followed from birth. Children who showed poor performance in development and intelligence tests at the ages of 2 and 4 were included. Out of 4,231 liveborns enrolled in the cohort, 214 children fulfilled the inclusion criteria. A diagnosis was established in approximately 90% of the children evaluated. Genetic causes were determined in 31 of the children and 19 cases remained unexplained even after extensive investigation. The overall prevalence of intellectual disability in this cohort due to genetic causes was 0.82%. Because this study was nested in a cohort, there were a large number of variables related to early childhood and the likelihood of information bias was minimized by collecting information with a short recall time. This study was not influenced by selection bias, allowing identification of intellectual disability and estimation of the prevalence of genetic causes in this population, thereby increasing the possibility of providing appropriate management and/or genetic counseling. © 2015 Wiley Periodicals, Inc.
Population size of snowy plovers breeding in North America
Thomas, Susan M.; Lyons, James E.; Andres, Brad A.; T-Smith, Elise Elliot; Palacios, Eduardo; Cavitt, John F.; Royle, J. Andrew; Fellows, Suzanne D.; Maty, Kendra; Howe, William H.; Mellink, Eric; Melvin, Stefani; Zimmerman, Tara
2012-01-01
Snowy Plovers (Charadrius nivosus) may be one of the rarest shorebirds in North America yet a comprehensive assessment of their abundance and distribution has not been completed. During 2007 and 2008, 557 discrete wetlands were surveyed and nine additional large wetland complexes sampled in México and the USA. From these surveys, a population of 23,555 (95% CI = 17,299 – 29,859) breeding Snowy Plovers was estimated. Combining the estimate with information from areas not surveyed, the total North American population was assessed at 25,869 (95% CI = 18,917 – 32,173). Approximately 42% of all breeding Snowy Plovers in North America resided at two sites (Great Salt Lake, Utah, and Salt Plains National Wildlife Refuge, Oklahoma), and 33% of all these were on wetlands in the Great Basin (including Great Salt Lake). Also, coastal habitats in central and southern Texas supported large numbers of breeding plovers. New breeding sites were discovered in interior deserts and highlands and along the Pacific coast of México; approximately 9% of the North American breeding population occurred in México. Because of uncertainties about effects of climate change and current stresses to breeding habitats, the species should be a management and conservation priority. Periodic monitoring should be undertaken at important sites to ensure high quality habitat is available to support the Snowy Plover population.
Habitat capacity for cougar recolonization in the Upper Great Lakes region.
O'Neil, Shawn T; Rahn, Kasey C; Bump, Joseph K
2014-01-01
Recent findings indicate that cougars (Puma concolor) are expanding their range into the midwestern United States. Confirmed reports of cougar in Michigan, Minnesota, and Wisconsin have increased dramatically in frequency during the last five years, leading to speculation that cougars may re-establish in the Upper Great Lakes (UGL) region, USA. Recent work showed favorable cougar habitat in northeastern Minnesota, suggesting that the northern forested regions of Michigan and Wisconsin may have similar potential. Recolonization of cougars in the UGL states would have important ecological, social, and political impacts that will require effective management. Using Geographic Information Systems (GIS), we extended a cougar habitat model to Michigan and Wisconsin and incorporated primary prey densities to estimate the capacity of the region to support cougars. Results suggest that approximately 39% (>58,000 km²) of the study area could support cougars, and that there is potential for a population of approximately 500 or more animals. An exploratory validation of this habitat model revealed strong association with 58 verified cougar locations occurring in the study area between 2008 and 2013. Spatially explicit information derived from this study could potentially lead to estimation of a viable population, delineation of possible cougar-human conflict areas, and the targeting of site locations for current monitoring. Understanding predator-prey interactions, interspecific competition, and human-wildlife relationships is becoming increasingly critical as top carnivores continue to recolonize the UGL region.
Diebel, M.W.; Maxted, J.T.; Robertson, Dale M.; Han, S.; Vander Zanden, M. J.
2009-01-01
Riparian buffers have the potential to improve stream water quality in agricultural landscapes. This potential may vary in response to landscape characteristics such as soils, topography, land use, and human activities, including legacies of historical land management. We built a predictive model to estimate the sediment and phosphorus load reduction that should be achievable following the implementation of riparian buffers; then we estimated load reduction potential for a set of 1598 watersheds (average 54 km²) in Wisconsin. Our results indicate that land cover is generally the most important driver of constituent loads in Wisconsin streams, but its influence varies among pollutants and according to the scale at which it is measured. Physiographic (drainage density) variation also influenced sediment and phosphorus loads. The effect of historical land use on present-day channel erosion and variation in soil texture are the most important sources of phosphorus and sediment that riparian buffers cannot attenuate. However, in most watersheds, a large proportion (approximately 70%) of these pollutants can be eliminated from streams with buffers. Cumulative frequency distributions of load reduction potential indicate that targeting pollution reduction in the highest 10% of Wisconsin watersheds would reduce total phosphorus and sediment loads in the entire state by approximately 20%. These results support our approach of geographically targeting nonpoint source pollution reduction at multiple scales, including the watershed scale. © 2008 Springer Science+Business Media, LLC.
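The targeting calculation in this abstract can be sketched as: rank watersheds by achievable load reduction, select the top 10%, and express their summed reduction as a share of the statewide load. The per-watershed loads below are synthetic (a skewed lognormal), not the Wisconsin data.

```python
import numpy as np

# Toy version of watershed targeting with synthetic, skewed loads.
rng = np.random.default_rng(4)
n = 1598                                            # watershed count from the abstract
load = rng.lognormal(mean=3.0, sigma=1.0, size=n)   # per-watershed pollutant load
reduction_potential = 0.7 * load                    # ~70% attenuable by buffers

order = np.argsort(reduction_potential)[::-1]       # rank by achievable reduction
top10 = order[: n // 10]                            # highest 10% of watersheds
share = reduction_potential[top10].sum() / load.sum()
print("share of statewide load removed by targeting top 10%%: %.0f%%" % (100 * share))
```

Because loads are heavily skewed, a small fraction of watersheds carries a disproportionate share of the total, which is why targeting the top decile is so effective.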
On High-Order Radiation Boundary Conditions
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1995-01-01
In this paper we develop the theory of high-order radiation boundary conditions for wave propagation problems. In particular, we study the convergence of sequences of time-local approximate conditions to the exact boundary condition, and subsequently estimate the error in the solutions obtained using these approximations. We show that for finite times the Pade approximants proposed by Engquist and Majda lead to exponential convergence if the solution is smooth, but that good long-time error estimates cannot hold for spatially local conditions. Applications in fluid dynamics are also discussed.
Approximated maximum likelihood estimation in multifractal random walks
NASA Astrophysics Data System (ADS)
Løvsletten, O.; Rypdal, M.
2012-04-01
We present an approximated maximum likelihood method for the multifractal random walk processes of Bacry et al. [Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.
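The core device in such approximated likelihoods is Laplace's method: replace an intractable integral over a latent variable with a second-order expansion of its log-integrand around the mode. A minimal one-dimensional numerical sketch (a toy Gaussian integrand, not the authors' R package or the MRW latent-volatility structure):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def laplace_log_integral(logf):
    """Approximate log ∫ exp(logf(x)) dx by expanding logf to second
    order around its mode (Laplace's method)."""
    xhat = minimize_scalar(lambda x: -logf(x)).x          # mode of the integrand
    h = 1e-4                                              # finite-difference step
    curv = -(logf(xhat + h) - 2 * logf(xhat) + logf(xhat - h)) / h**2
    return logf(xhat) + 0.5 * np.log(2.0 * np.pi / curv)

# Toy check: a Gaussian integrand, for which Laplace's method is exact.
logf = lambda x: -0.5 * (x - 1.2) ** 2 / 0.3
approx = laplace_log_integral(logf)
exact = np.log(quad(lambda x: np.exp(logf(x)), -10, 10)[0])
print(approx, exact)
```

For a Gaussian integrand the two values agree closely; for non-Gaussian latent posteriors the approximation error is what separates this method from exact maximum likelihood.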
On the dipole approximation with error estimates
NASA Astrophysics Data System (ADS)
Boßmann, Lea; Grummt, Robert; Kolb, Martin
2018-01-01
The dipole approximation is employed to describe interactions between atoms and radiation. It essentially consists of neglecting the spatial variation of the external field over the atom. Heuristically, this is justified by arguing that the wavelength is considerably larger than the atomic length scale, which holds under usual experimental conditions. We prove the dipole approximation in the limit of infinite wavelengths compared to the atomic length scale and estimate the rate of convergence. Our results include N-body Coulomb potentials and experimentally relevant electromagnetic fields such as plane waves and laser pulses.
NASA Astrophysics Data System (ADS)
Huang, Jinxin; Clarkson, Eric; Kupinski, Matthew; Rolland, Jannick P.
2014-03-01
The prevalence of Dry Eye Disease (DED) in the USA is approximately 40 million aging adults, with an economic burden of about $3.8 billion. However, a comprehensive understanding of tear film dynamics, which is the prerequisite to advancing the management of DED, is yet to be realized. To extend our understanding of tear film dynamics, we investigate the simultaneous estimation of the lipid and aqueous layer thicknesses with the combination of optical coherence tomography (OCT) and statistical decision theory. Specifically, we develop a mathematical model for Fourier-domain OCT in which we take into account the different statistical processes associated with the imaging chain. We formulate the first-order and second-order statistical quantities of the output of the OCT system, from which simulated OCT spectra can be generated. A tear film model, which includes a lipid and an aqueous layer on top of a rough corneal surface, is the object being imaged. We then implement a maximum-likelihood (ML) estimator to interpret the simulated OCT data and estimate the thicknesses of both layers of the tear film. Results show that an axial resolution of 1 μm allows estimates down to the nanometer scale. We use the root mean square error of the estimates as a metric to evaluate the system parameters, such as the tradeoff between imaging speed and estimation precision. This framework further provides the theoretical basis to optimize the imaging setup for a specific thickness estimation task.
mBEEF-vdW: Robust fitting of error estimation density functionals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes
2016-06-15
Here, we propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.
Sustainable water deliveries from the Colorado River in a changing climate.
Barnett, Tim P; Pierce, David W
2009-05-05
The Colorado River supplies water to 27 million users in 7 states and 2 countries and irrigates over 3 million acres of farmland. Global climate models almost unanimously project that human-induced climate change will reduce runoff in this region by 10-30%. This work explores whether currently scheduled future water deliveries from the Colorado River system are sustainable under different climate-change scenarios. If climate change reduces runoff by 10%, scheduled deliveries will be missed approximately 58% of the time by 2050. If runoff is reduced by 20%, they will be missed approximately 88% of the time. The mean shortfall when full deliveries cannot be met increases from approximately 0.5-0.7 billion cubic meters per year (bcm/yr) in 2025 to approximately 1.2-1.9 bcm/yr by 2050 out of a request of approximately 17.3 bcm/yr. Such values are small enough to be manageable. The chance of a year with deliveries <14.5 bcm/yr increases to 21% by midcentury if runoff is reduced by 20%, but such low deliveries could be largely avoided by reducing scheduled deliveries. These results are computed by using estimates of Colorado River flow from the 20th century, which was unusually wet; if the river reverts to its long-term mean, shortfalls increase by another 1-1.5 bcm/yr. With either climate-change or long-term mean flows, currently scheduled future water deliveries from the Colorado River are not sustainable. However, the ability of the system to mitigate droughts can be maintained if the various users of the river find a way to reduce average deliveries.
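The delivery-shortfall probabilities above come from running the river system forward under reduced-runoff scenarios. A heavily simplified Monte Carlo sketch of that logic follows; the flow statistics are invented stand-ins and reservoir carry-over storage is ignored, so the numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins, not the paper's calibrated hydrology:
MEAN_FLOW_BCM = 18.0     # assumed mean annual runoff (bcm/yr)
FLOW_SD_BCM = 4.0        # assumed interannual standard deviation
SCHEDULED_BCM = 17.3     # scheduled deliveries, as in the abstract

def shortfall_fraction(runoff_reduction, n_years=100_000):
    """Fraction of simulated years in which natural flow falls short of
    scheduled deliveries; ignoring storage is a deliberate
    oversimplification of the actual system model."""
    flows = rng.normal(MEAN_FLOW_BCM * (1.0 - runoff_reduction),
                       FLOW_SD_BCM, n_years)
    return float(np.mean(flows < SCHEDULED_BCM))

for r in (0.0, 0.1, 0.2):
    print(f"{r:.0%} runoff reduction -> shortfall in "
          f"{shortfall_fraction(r):.0%} of years")
```

Even this crude version reproduces the qualitative result: modest mean-flow reductions sharply increase the frequency of missed deliveries when the schedule sits close to the mean flow.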
Optimal Bandwidth for Multitaper Spectrum Estimation
Haley, Charlotte L.; Anitescu, Mihai
2017-07-04
A systematic method for bandwidth parameter selection is desired for Thomson multitaper spectrum estimation. We give a method for determining the optimal bandwidth based on a mean squared error (MSE) criterion. When the true spectrum has a second-order Taylor series expansion, one can express quadratic local bias as a function of the curvature of the spectrum, which can be estimated by using a simple spline approximation. This is combined with a variance estimate, obtained by jackknifing over individual spectrum estimates, to produce an estimated MSE for the log spectrum estimate for each choice of time-bandwidth product. The bandwidth that minimizes the estimated MSE then gives the desired spectrum estimate. Additionally, the bandwidth obtained using our method is also optimal for cepstrum estimates. We give an example of a damped oscillatory (Lorentzian) process in which the approximate optimal bandwidth can be written as a function of the damping parameter. Furthermore, the true optimal bandwidth agrees well with that given by minimizing the estimated MSE in these examples.
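The variance half of the proposed MSE criterion can be sketched with standard tools: compute eigenspectra on Slepian (DPSS) tapers and jackknife over them. This is a hedged illustration, not the authors' implementation; the quadratic bias term (a spline fit to the spectrum's curvature) is omitted, and the usual K = 2NW − 1 taper count is assumed:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_logspec(x, NW):
    """Multitaper log-spectrum with a delete-one jackknife variance over
    the eigenspectra (the variance piece of the estimated MSE)."""
    N = len(x)
    K = int(2 * NW) - 1                        # conventional taper count
    tapers = dpss(N, NW, Kmax=K)               # (K, N) Slepian sequences
    eig = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2   # eigenspectra
    log_s = np.log(eig.mean(axis=0))
    loo = np.array([np.log(np.delete(eig, k, axis=0).mean(axis=0))
                    for k in range(K)])        # leave-one-taper-out estimates
    var = (K - 1) / K * ((loo - loo.mean(axis=0)) ** 2).sum(axis=0)
    return log_s, var

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)                  # white-noise test signal
for NW in (2.0, 4.0, 8.0):
    print(NW, multitaper_logspec(x, NW)[1].mean())  # variance falls as NW grows
```

Widening the bandwidth admits more tapers and shrinks the jackknife variance, at the cost of the bias term the paper estimates separately; the MSE-minimizing NW balances the two.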
NASA Astrophysics Data System (ADS)
Abesamis, Rene A.; Saenz-Agudelo, Pablo; Berumen, Michael L.; Bode, Michael; Jadloc, Claro Renato L.; Solera, Leilani A.; Villanoy, Cesar L.; Bernardo, Lawrence Patrick C.; Alcala, Angel C.; Russ, Garry R.
2017-09-01
Networks of no-take marine reserves (NTMRs) are a widely advocated strategy for managing coral reefs. However, uncertainty about the strength of population connectivity between individual reefs and NTMRs through larval dispersal remains a major obstacle to effective network design. In this study, larval dispersal among NTMRs and fishing grounds in the Philippines was inferred by conducting genetic parentage analysis on a coral-reef fish (Chaetodon vagabundus). Adult and juvenile fish were sampled intensively in an area encompassing approximately 90 km of coastline. Thirty-seven true parent-offspring pairs were accepted after screening 1978 juveniles against 1387 adults. The data showed all types of dispersal connections that may occur in NTMR networks, with assignments suggesting connectivity among NTMRs and fishing grounds (n = 35) far outnumbering those indicating self-recruitment (n = 2). Critically, half (51%) of the inferred occurrences of larval dispersal linked reefs managed by separate, independent municipalities and constituent villages, emphasising the need for nested collaborative management arrangements across management units to sustain NTMR networks. Larval dispersal appeared to be influenced by wind-driven seasonal reversals in the direction of surface currents. The best-fit larval dispersal kernel estimated from the parentage data predicted that 50% of larvae originating from a population would attempt to settle within 33 km, and 95% within 83 km. Mean larval dispersal distance was estimated to be 36.5 km. These results suggest that creating a network of closely spaced (less than a few tens of km apart) NTMRs can enhance recruitment for protected and fished populations throughout the NTMR network.
The findings underscore major challenges for regional coral-reef management initiatives that must be addressed with priority: (1) strengthening management of NTMR networks across political or customary boundaries; and (2) achieving adequate population connectivity via larval dispersal to sustain reef-fish populations within these networks.
Wagner, Tyler; Vandergoot, Christopher S.; Tyson, Jeff
2011-01-01
Fishery-independent (FI) surveys provide critical information used for the sustainable management and conservation of fish populations. Because fisheries management often requires the effects of management actions to be evaluated and detected within a relatively short time frame, it is important that research be directed toward FI survey evaluation, especially with respect to the ability to detect temporal trends. Using annual FI gill-net survey data for Lake Erie walleyes Sander vitreus collected from 1978 to 2006 as a case study, our goals were to (1) highlight the usefulness of hierarchical models for estimating spatial and temporal sources of variation in catch per effort (CPE); (2) demonstrate how the resulting variance estimates can be used to examine the statistical power to detect temporal trends in CPE in relation to sample size, duration of sampling, and decisions regarding what data are most appropriate for analysis; and (3) discuss recommendations for evaluating FI surveys and analyzing the resulting data to support fisheries management. This case study illustrated that the statistical power to detect temporal trends was low over relatively short sampling periods (e.g., 5–10 years) unless the annual decline in CPE reached 10–20%. For example, if 50 sites were sampled each year, a 10% annual decline in CPE would not be detected with more than 0.80 power until 15 years of sampling, and a 5% annual decline would not be detected with more than 0.8 power for approximately 22 years. Because the evaluation of FI surveys is essential for ensuring that trends in fish populations can be detected over management-relevant time periods, we suggest using a meta-analysis–type approach across systems to quantify sources of spatial and temporal variation. This approach can be used to evaluate and identify sampling designs that increase the ability of managers to make inferences about trends in fish stocks.
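The power calculations described above can be reproduced in miniature by simulation: generate annual mean log-CPE series with a known decline plus year-level and site-sampling noise, then count how often a trend test rejects. The variance components below are invented for illustration, not the Lake Erie hierarchical-model estimates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def trend_power(annual_decline, n_years, n_sites=50, site_sd=0.5,
                year_sd=0.2, n_sims=2000, alpha=0.05):
    """Power to detect a log-linear decline in CPE by regressing annual
    mean log-CPE on year. Variance components are illustrative stand-ins."""
    slope = np.log(1 - annual_decline)          # e.g. 10%/yr -> -0.105
    years = np.arange(n_years)
    hits = 0
    for _ in range(n_sims):
        noise = rng.normal(0, year_sd, n_years)                      # year effects
        noise += rng.normal(0, site_sd / np.sqrt(n_sites), n_years)  # site sampling error
        fit = stats.linregress(years, slope * years + noise)
        hits += (fit.pvalue < alpha) and (fit.slope < 0)
    return hits / n_sims

print(trend_power(0.10, 5))    # short series: low power
print(trend_power(0.10, 15))   # longer series: much higher power
```

Under these made-up variance components the qualitative pattern of the case study emerges: a 10% annual decline is nearly undetectable over 5 years but reliably detected over 15.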
Approximate Bayesian evaluations of measurement uncertainty
NASA Astrophysics Data System (ADS)
Possolo, Antonio; Bodnar, Olha
2018-04-01
The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
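For a Gaussian prior and a Gaussian measurement, the "simple algebra" the abstract refers to reduces to precision-weighted averaging. A minimal sketch with hypothetical numbers (loosely echoing the manometer example, not the paper's actual data):

```python
import numpy as np

def gaussian_posterior(prior_mean, prior_u, y, u_y):
    """Conjugate-Gaussian update: combine prior knowledge of the measurand
    with a new measurement by inverse-variance (precision) weighting.
    No integration or MCMC is needed, only algebra."""
    w0, w1 = 1.0 / prior_u**2, 1.0 / u_y**2
    post_mean = (w0 * prior_mean + w1 * y) / (w0 + w1)
    post_u = np.sqrt(1.0 / (w0 + w1))
    return post_mean, post_u

# Prior 100.0 ± 0.5 combined with a measurement 100.8 ± 0.3:
m, u = gaussian_posterior(100.0, 0.5, 100.8, 0.3)
print(round(m, 3), round(u, 3))
```

The posterior mean sits between prior and measurement, weighted toward the more precise of the two, and the posterior standard uncertainty is smaller than either input uncertainty.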
Taylor, A C
2010-01-01
This paper describes a customised, six-month, leadership development program (LDP) that was designed for emerging leaders in the Australian water industry who were promoting sustainable urban water management (SUWM). It also presents results from an evaluation of the program's benefits, costs and overall 'return on investment' (ROI). The program was designed to help build emergent leadership capacity in the water industry, given strong evidence that this form of leadership plays an important role in advancing SUWM. It involved '360-degree feedback' processes, training, individual leadership development plans, and coaching sessions. Its design was informed by a review of the literature, and its content was informed by local empirical research involving effective SUWM leaders. The evaluation used a seven-tier assessment framework that examined different dimensions of the program's performance using source and methodological triangulation. The results indicate that such LDPs can produce a range of positive outcomes, such as promoting desired leadership behaviours and generating a positive ROI estimate. Specifically, the program's estimated ROI was approximately 190% after only one year. The primary conclusion is that evidence-based LDPs which are highly customised for specific types of leaders in the water industry represent a promising type of intervention to build forms of leadership capacity which are needed to successfully promote SUWM.
Bair, Lucas S.; Rogowski, David L.; Neher, Christopher
2016-01-01
Glen Canyon Dam (GCD) on the Colorado River in northern Arizona provides water storage, flood control, and power system benefits to approximately 40 million people who rely on water and energy resources in the Colorado River basin. Downstream resources (e.g., angling, whitewater floating) in Glen Canyon National Recreation Area (GCNRA) and Grand Canyon National Park are impacted by the operation of GCD. The GCD Adaptive Management Program was established in 1997 to monitor and research the effects of dam operations on the downstream environment. We utilized secondary survey data and an individual observation travel cost model to estimate the net economic benefit of angling in GCNRA for each season and each type of angler. As expected, the demand for angling decreased with increasing travel cost; the annual value of angling at Lees Ferry totaled US$2.7 million at 2014 visitation levels. Demand for angling was also affected by season, with per-trip values of $210 in the summer, $237 in the spring, $261 in the fall, and $399 in the winter. This information provides insight into the ways in which anglers are potentially impacted by seasonal GCD operations and adaptive management experiments aimed at improving downstream resource conditions.
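An individual-observation travel cost model typically regresses trip counts on travel cost with a Poisson (or negative binomial) demand curve; per-trip consumer surplus is then −1/β for the cost coefficient β. A synthetic sketch of that pipeline, not the GCNRA survey data or the authors' exact specification:

```python
import numpy as np

# Synthetic single-site travel-cost data; all numbers are invented.
rng = np.random.default_rng(7)
n = 2000
tc = rng.uniform(20, 400, n)                    # travel cost per trip ($)
trips = rng.poisson(np.exp(1.5 - 0.004 * tc))   # assumed true demand curve

# Fit a Poisson regression of trips on travel cost by IRLS
# (iteratively reweighted least squares, the standard GLM algorithm).
X = np.column_stack([np.ones(n), tc])
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)                       # expected trips
    z = X @ beta + (trips - mu) / mu            # working response
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))

b_tc = beta[1]
# In the count-data travel-cost model, consumer surplus per trip = -1/b_tc.
print(f"estimated per-trip consumer surplus: ${-1 / b_tc:.0f}")
```

With the assumed coefficient of −0.004 per dollar, the recovered per-trip surplus lands near $250, the same order as the seasonal per-trip values reported in the abstract.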
Assessing Vulnerability of Lake Erie Landscapes to Soil Erosion: Modelled and Measured Approaches
NASA Astrophysics Data System (ADS)
Joosse, P.; Laamrani, A.; Feisthauer, N.; Li, S.
2017-12-01
Loss of soil from agricultural landscapes to Lake Erie via water erosion is a key transport mechanism for phosphorus bound to soil particles. Agriculture is the dominant land use on the Canadian side of the Lake Erie basin, with approximately 75% of the 2.3 million hectares under crop or livestock production. The variable geography and diversity of agricultural production systems and management practices make estimating the risk of soil erosion from agricultural landscapes in the Canadian Lake Erie basin challenging. Risk of soil erosion depends on a combination of factors, including the extent to which soil remains bare, which differs with crop type and management. Two different approaches to estimating the vulnerability of landscapes to soil erosion will be compared among Soil Landscapes of Canada in the Lake Erie basin: a modelling approach incorporating farm census and soil survey data, represented by the 2011 Agriculture and Agri-Food Canada Agri-Environmental Indicator for Soil Erosion Risk; and a measured approach using remotely sensed data that quantifies the magnitude of bare and covered soil across the basin. Results from both approaches will be compared by scaling the national level (1:1 million) Soil Erosion Risk Indicator and the remotely sensed data (30 × 30 m resolution) to the quaternary watershed level.
Nathan, Lucas M; Simmons, Megan; Wegleitner, Benjamin J; Jerde, Christopher L; Mahon, Andrew R
2014-11-04
The use of molecular surveillance techniques has become popular among aquatic researchers and managers due to the improved sensitivity and efficiency compared to traditional sampling methods. Rapid expansion in the use of environmental DNA (eDNA), paired with the advancement of molecular technologies, has resulted in new detection platforms and techniques. In this study we present a comparison of three eDNA surveillance platforms: traditional polymerase chain reaction (PCR), quantitative PCR (qPCR), and digital droplet PCR (ddPCR), in which water samples were collected over a 24 h period from mesocosm experiments containing a population gradient of invasive species densities. All platforms reliably detected the presence of DNA, even at low target organism densities, within the first hour. The two quantitative platforms (qPCR and ddPCR) produced similar estimates of DNA concentrations. Analysis with ddPCR was faster from sample collection through analysis and cost approximately half as much as qPCR. Although a newer platform for eDNA surveillance of aquatic species, ddPCR was consistent with the more commonly used qPCR and provided a cost-effective means of estimating DNA concentrations. Use of ddPCR by researchers and managers should be considered in future eDNA surveillance applications.
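The "similar estimates of DNA concentrations" from qPCR and ddPCR come from very different math: ddPCR counts positive droplets and inverts a Poisson model rather than reading off a standard curve. A short sketch of that inversion (the 0.85 nL droplet volume is a typical instrument value, not a figure from this study):

```python
import numpy as np

def ddpcr_concentration(n_positive, n_total, droplet_nl=0.85):
    """Copies per microlitre from droplet counts. Digital PCR assumes
    target molecules are Poisson-distributed across droplets, so the
    mean copies per droplet is -ln(fraction of negative droplets)."""
    lam = -np.log(1.0 - n_positive / n_total)   # mean copies per droplet
    return lam / (droplet_nl * 1e-3)            # nL -> µL

# e.g. 4,000 positive droplets out of 20,000 accepted droplets:
print(round(ddpcr_concentration(4000, 20000), 1))  # ≈ 262.5 copies/µL
```

The logarithmic correction matters at high positive fractions, where several copies can share one droplet; simply dividing positives by total would understate the concentration.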
Williamson, Nicholas; Kobayashi, Tsuyoshi; Outhet, David; Bowling, Lee C
2018-05-01
Cyanobacterial survival following their release in water from major headwaters reservoirs was compared in five New South Wales rivers. Under low flow conditions, cyanobacterial presence disappeared rapidly with distance downstream in the Cudgegong and Hunter Rivers, whereas the other three rivers were contaminated for at least 300 km. Cyanobacterial survival is likely to be impacted by the geomorphology of each river, especially the extent of gravel riffle reaches (cells striking rocks can destroy them), and by the different turbulent flow conditions each produces. Flow conditions at gauging stations were used to estimate the turbulent strain rate experienced by suspended cyanobacteria. These indicate that average turbulent strain rates in the Cudgegong and Hunter Rivers can be above 33 and 83 s⁻¹, respectively, while for the Murray, Edward and Macquarie Rivers the average strain rate was estimated to be less than 30 s⁻¹. These turbulent strain rate estimates are substantially above published thresholds of approximately 2 s⁻¹ for impacts indicated from laboratory tests. Estimates of strain rate were correlated with changes in cyanobacterial biovolume at stations along the rivers. These measurements indicate a weak but significant negative linear relationship between average strain rate and change in cyanobacterial biomass. River management often involves releasing cold deep water with low cyanobacterial presence from these reservoirs, leading to ecological impacts from cold water pollution downstream. The pollution may be avoided if cyanobacteria die off rapidly downstream of the reservoir, allowing surface water to be released instead. However, high concentrations of soluble cyanotoxins may remain even after the cyanobacterial cells have been destroyed. The geomorphology of the river (length of riffle reaches) is an important consideration for river management during cyanobacterial blooms in headwater reservoirs. Copyright © 2018 Elsevier B.V. All rights reserved.
Jackson, Emma L; Rees, Siân E; Wilding, Catherine; Attrill, Martin J
2015-06-01
Where they dominate coastlines, seagrass beds are thought to have a fundamental role in maintaining populations of exploited species. Thus, Mediterranean seagrass beds are afforded protection, yet no attempt to determine the contribution of these areas to both commercial fisheries landings and recreational fisheries expenditure has been made. There is evidence that seagrass extent continues to decline, but there is little understanding of the potential impacts of this decline. We used a seagrass residency index, that was trait and evidence based, to estimate the proportion of Mediterranean commercial fishery landings values and recreation fisheries total expenditure that can be attributed to seagrass during different life stages. The index was calculated as a weighted sum of the averages of the estimated residence time in seagrass (compared with other habitats) at each life stage of the fishery species found in seagrass. Seagrass-associated species were estimated to contribute 30%-40% to the value of commercial fisheries landings and approximately 29% to recreational fisheries expenditure. These species predominantly rely on seagrass to survive juvenile stages. Seagrass beds had an estimated direct annual contribution during residency of €58-91 million (4% of commercial landing values) and €112 million (6% of recreation expenditure) to commercial and recreational fisheries, respectively, despite covering <2% of the area. These results suggest there is a clear cost of seagrass degradation associated with ineffective management of seagrass beds and that policy to manage both fisheries and seagrass beds should take into account the socioeconomic implications of seagrass loss to recreational and commercial fisheries. © 2015 Society for Conservation Biology.
Leem, Jong Han; Kim, Soon Tae; Kim, Hwan Cheol
2015-01-01
Air pollution contributes to mortality and morbidity. We estimated the impact of outdoor air pollution on public health in the Seoul metropolitan area, Korea. Attributable cases of morbidity and mortality were estimated. Epidemiology-based exposure-response functions for a 10 μg/m3 increase in particulate matter (PM2.5 and PM10) were used to quantify the effects of air pollution. Cases attributable to air pollution were estimated for mortality (adults ≥ 30 years), respiratory and cardiovascular hospital admissions (all ages), chronic bronchitis (all ages), and acute bronchitis episodes (≤18 years). Environmental exposure (PM2.5 and PM10) was modeled for each 3 km × 3 km grid cell. In 2010, air pollution caused 15.9% of total mortality, or approximately 15,346 attributable cases per year. Particulate air pollution also accounted for: 12,511 hospitalized cases of respiratory disease; 20,490 new cases of chronic bronchitis (adults); and 278,346 episodes of acute bronchitis (children). After implementation of the 2nd Seoul metropolitan air pollution management plan, an estimated 14,915 deaths associated with air pollution could be avoided per year in 2024, a 57.9% reduction in deaths associated with air pollution. This assessment estimates the public-health impacts of current patterns of air pollution. Although individual health risks of air pollution are relatively small, the public-health consequences are remarkable. Particulate air pollution remains a key target for public-health action in the Seoul metropolitan area. Our results, which have also been used for economic valuation, should guide decisions on the assessment of environmental health-policy options.
Human Papilloma Virus and Squamous Cell Carcinoma of the Anus
Gami, Bhavna; Kubba, Faris; Ziprin, Paul
2014-01-01
The incidence of anal cancer is increasing. In the UK, the incidence is estimated at approximately 1.5 per 100,000. Most of this increase is attributed to certain at-risk populations. Persons who are human immunodeficiency virus (HIV)-positive, men who have sex with men (MSM), organ transplant recipients, and women with a history of cervical cancer, human papilloma virus (HPV) infection, or cervical intraepithelial neoplasia (CIN) are known to have a greater risk for anal cancer. This paper focuses on HPV as a risk factor for anal intraepithelial neoplasia (AIN) and discusses the etiology, anatomy, pathogenesis, and management of squamous cell carcinoma (SCC) of the anus. PMID:25288893
Rafal Podlaski; Francis A. Roesch
2013-01-01
This study assessed the usefulness of various methods for choosing the initial values for the numerical procedures used to estimate the parameters of mixture distributions, and analysed a variety of mixture models for approximating empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...
NASA Astrophysics Data System (ADS)
Bania, Piotr; Baranowski, Jerzy
2018-02-01
Quantisation of signals is a ubiquitous property of digital processing. In many cases, it introduces significant difficulties in state estimation and, in consequence, control. Popular approaches either do not properly address the problem of system disturbances or lead to biased estimates. Our intention was to find a method for state estimation for stochastic systems with quantised and discrete observation that is free of the mentioned drawbacks. We have formulated a general form of the optimal filter derived by a solution of the Fokker-Planck equation. We then propose an approximation method based on Galerkin projections. We illustrate the approach for the Ornstein-Uhlenbeck process, and derive analytic formulae for the approximated optimal filter, also extending the results for the variant with control. Operation is illustrated with numerical experiments and compared with the classical discrete-continuous Kalman filter. Results of the comparison are substantially in favour of our approach, with over 20 times lower mean squared error. The proposed filter is especially effective for signal amplitudes comparable to the quantisation thresholds. Additionally, it was observed that for high orders of approximation, the state estimate is very close to the true process value. The results open the possibilities of further analysis, especially for more complex processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kopp, H.J.; Mortensen, G.A.
1978-04-01
Approximately 60% of the full CDC 6600/7600 Datatran 2.0 capability was made operational on IBM 360/370 equipment. Sufficient capability was made operational to demonstrate adequate performance for modular program linking applications. Also demonstrated were the basic capabilities and performance required to support moderate-sized data base applications and moderately active scratch input/output applications. Approximately one to two calendar years are required to develop DATATRAN 2.0 capabilities fully for the entire spectrum of applications proposed. Included in the next stage of conversion should be syntax checking and syntax conversion features that would foster greater FORTRAN compatibility between IBM- and CDC-developed modules. The batch portion of the JOSHUA Modular System, which was developed by Savannah River Laboratory to run on an IBM computer, was examined for the feasibility of conversion to run on a Control Data Corporation (CDC) computer. Portions of the JOSHUA Precompiler were changed so as to be operable on the CDC computer. The Data Manager and Batch Monitor were also examined for conversion feasibility, but no changes were made in them. It appears to be feasible to convert the batch portion of the JOSHUA Modular System to run on a CDC computer with an estimated additional two to three man-years of effort. 9 tables.
Active Urbanization and Channel Adjustment in Apple Creek, Appleton, WI
NASA Astrophysics Data System (ADS)
Clark, J. J.
2002-12-01
Headwaters of the Apple Creek watershed have been and continue to be rapidly developed as part of the City of Appleton's long-term growth plan. Concurrent with early development, and prior to development over the past 4 years, two regional stormwater management facilities were constructed. Cross-sectional surveys and core transects were used to determine channel response to urbanization mitigated by stormwater management. The reach immediately downstream of the first pond complex has a narrow, but well established, wooded riparian zone and has not changed in size or shape over the past two years. An engineered reach approximately one mile downstream, however, has exhibited widespread bed aggradation. Cross-sectional area decreased an average of 51% over the past four years. Despite the use of sediment and erosion control BMPs, sediment concentrations exceeding 1000 mg/L during base flow are not uncommon downstream of construction sites adjacent to the stream. The artificially widened channel, a reduction in stream gradient, and the backwater effect from downstream ponds caused much of this sediment to remain within the engineered reach. It is estimated that approximately 21,000 Mg of sediment is stored in this mile-long reach. As this sediment migrates downstream, the forebay of the second set of stormwater ponds will begin to fill, reducing storage capacity and thereby limiting its effectiveness in mitigating peak discharges and sequestering nutrients.
Baffin Bay Ice Drift and Export: 2002-2007
NASA Technical Reports Server (NTRS)
Kwok, Ron
2007-01-01
Multiyear estimates of sea ice drift in Baffin Bay and Davis Strait are derived for the first time from the 89 GHz channel of the AMSR-E instrument. Uncertainties in the drift estimates, assessed with Envisat ice motion, are approximately 2-3 km/day. A persistent atmospheric trough between the coast of Greenland and Baffin Island drives the prevailing southward drift pattern, with average daily displacements in excess of 18-20 km during winter. Over the 5-year record, the ice export ranges between 360 and 675 × 10³ km², with an average of 530 × 10³ km². Sea ice area inflow from the Nares Strait, Lancaster Sound and Jones Sound potentially contributes up to a third of the net area outflow, while ice production at the North Water Polynya contributes the balance. Rough estimates of annual volume export give approximately 500-800 km³. Comparatively, these are approximately 70% and approximately 30% of the annual area and volume export at Fram Strait.
A Glossary of Terms Used by the Educational Management Project.
ERIC Educational Resources Information Center
Tucson Public Schools, AZ.
This publication defines and illustrates approximately 80 terms and concepts that are crucial to understanding the Educational Management Project, a comprehensive inservice program for educational administrators that was developed by the Tucson Public Schools. Annotations for the individual terms vary in length from approximately 30 to 350 words…
Optimal estimation of parameters and states in stochastic time-varying systems with time delay
NASA Astrophysics Data System (ADS)
Torkamani, Shahab; Butcher, Eric A.
2013-08-01
In this study, estimation of parameters and states in stochastic linear and nonlinear delay differential systems with time-varying coefficients and constant delay is explored. The approach consists of first employing a continuous time approximation to approximate the stochastic delay differential equation with a set of stochastic ordinary differential equations. Then the problem of parameter estimation in the resulting stochastic differential system is represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the resulting system, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states.
On the estimation variance for the specific Euler-Poincaré characteristic of random networks.
Tscheschel, A; Stoyan, D
2003-07-01
The specific Euler number is an important topological characteristic in many applications. It is considered here for the case of random networks, which may appear in microscopy either as primary objects of investigation or as secondary objects describing in an approximate way other structures such as, for example, porous media. For random networks there is a simple and natural estimator of the specific Euler number. For its estimation variance, a simple Poisson approximation is given. It is based on the general exact formula for the estimation variance. In two examples of quite different nature and topology, application of the formulas is demonstrated.
Cortés, Camilo; de Los Reyes-Guzmán, Ana; Scorza, Davide; Bertelsen, Álvaro; Carrasco, Eduardo; Gil-Agudo, Ángel; Ruiz-Salguero, Oscar; Flórez, Julián
2016-01-01
Robot-Assisted Rehabilitation (RAR) is relevant for treating patients affected by nervous system injuries (e.g., stroke and spinal cord injury). The accurate estimation of the joint angles of the patient limbs in RAR is critical to assess patient improvement. The prevalent economical method to estimate patient posture in Exoskeleton-based RAR is to approximate the limb joint angles with those of the Exoskeleton. This approximation is rough, since their kinematic structures differ. Motion capture systems (MOCAPs) can improve the estimations, at the expense of a considerable overload of the therapy setup. Alternatively, the Extended Inverse Kinematics Posture Estimation (EIKPE) computational method models the limb and Exoskeleton as differing parallel kinematic chains. EIKPE has been tested with single-DOF movements of the wrist and elbow joints. This paper presents the assessment of EIKPE with elbow-shoulder compound movements (i.e., object prehension). Ground-truth for estimation assessment is obtained from an optical MOCAP (not intended for the treatment stage). The assessment shows EIKPE rendering a good numerical approximation of the actual posture during compound movement execution, especially for the shoulder joint angles. This work opens the horizon for clinical studies with patient groups, Exoskeleton models, and movement types.
Large Covariance Estimation by Thresholding Principal Orthogonal Complements
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2012-01-01
This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088
Large Covariance Estimation by Thresholding Principal Orthogonal Complements.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2013-09-01
This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented.
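The POET construction itself is short enough to sketch. The following Python fragment is a simplified illustration on invented data with an illustrative threshold, not the authors' code: a low-rank part built from the K leading principal components of the sample covariance, plus a soft-thresholded principal orthogonal complement.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, K = 50, 200, 3

# Simulate an approximate factor model: X = B f + u.
B = rng.standard_normal((p, K))
f = rng.standard_normal((K, n))
u = 0.5 * rng.standard_normal((p, n))
X = B @ f + u

S = np.cov(X)                              # p x p sample covariance
vals, vecs = np.linalg.eigh(S)
vals, vecs = vals[::-1], vecs[:, ::-1]     # sort eigenpairs descending

# Low-rank part from the K leading principal components ...
low_rank = (vecs[:, :K] * vals[:K]) @ vecs[:, :K].T

# ... plus a soft-thresholded principal orthogonal complement.
resid = S - low_rank
tau = 0.1                                  # illustrative threshold level
off = np.sign(resid) * np.maximum(np.abs(resid) - tau, 0.0)
sparse = off - np.diag(np.diag(off)) + np.diag(np.diag(resid))  # keep diagonal intact

poet = low_rank + sparse                   # POET-style covariance estimate
```

In the paper the threshold is chosen adaptively; the fixed `tau` here only illustrates the mechanics.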
NASA Astrophysics Data System (ADS)
Rachmawati, Vimala; Khusnul Arif, Didik; Adzkiya, Dieky
2018-03-01
Systems found in the real world often have a large order. Thus, the mathematical model has many state variables, which affects the computation time. In addition, generally not all variables are known, so estimation is needed for quantities of the system that cannot be measured directly. In this paper, we discuss model reduction and estimation of state variables in a river system to measure the water level. Model reduction approximates a system by one of lower order that, without significant error, has dynamic behaviour similar to the original system. The Singular Perturbation Approximation method is one of the model reduction methods, in which all state variables of the equilibrium system are partitioned into fast and slow modes. The Kalman filter algorithm is then used to estimate state variables of stochastic dynamic systems, where estimates are computed by predicting state variables based on system dynamics and correcting with measurement data. Kalman filters are used to estimate state variables in both the original and the reduced system. Finally, we compare the state estimation results and computation times of the original and reduced systems.
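The Kalman filter recursion referred to above is compact. A minimal Python sketch on an invented two-state system (standing in for a reduced river model; the matrices and noise levels are illustrative assumptions, not from the paper) is:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2-state discrete-time system standing in for a (reduced) river model.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])               # only the water level is measured
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.1]])                    # measurement noise covariance

x = np.array([1.0, -1.0])                # true state
xhat = np.zeros(2)                       # filter state
P = np.eye(2)                            # filter error covariance
errs = []
for _ in range(300):
    # true system and noisy measurement
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x + rng.multivariate_normal(np.zeros(1), R)
    # Kalman filter: predict, then correct with the measurement
    xhat = A @ xhat
    P = A @ P @ A.T + Q
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    xhat = xhat + K @ (y - C @ xhat)
    P = (np.eye(2) - K @ C) @ P
    errs.append(np.sum((xhat - x)**2))

final_mse = np.mean(errs[100:])          # squared error after transients
```

The computational saving of model reduction comes from running exactly this recursion with much smaller A, Q, and P matrices.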
Huang, C.; Townshend, J.R.G.
2003-01-01
A stepwise regression tree (SRT) algorithm was developed for approximating complex nonlinear relationships. Based on the regression tree of Breiman et al. (BRT) and a stepwise linear regression (SLR) method, this algorithm represents an improvement over SLR in that it can approximate nonlinear relationships, and over BRT in that it gives more realistic predictions. The applicability of this method to estimating subpixel forest cover was demonstrated using three test data sets, on all of which it gave more accurate predictions than SLR and BRT. SRT also generated more compact trees and performed better than or at least as well as BRT at all 10 equal forest proportion intervals ranging from 0 to 100%. This method is appealing for estimating subpixel land cover over large areas.
Multilevel Sequential Monte Carlo Samplers for Normalizing Constants
Moral, Pierre Del; Jasra, Ajay; Law, Kody J. H.; ...
2017-08-24
This article considers the sequential Monte Carlo (SMC) approximation of ratios of normalizing constants associated to posterior distributions which in principle rely on continuum models. Therefore, the Monte Carlo estimation error and the discrete approximation error must be balanced. A multilevel strategy is utilized to substantially reduce the cost to obtain a given error level in the approximation as compared to standard estimators. Two estimators are considered and relative variance bounds are given. The theoretical results are numerically illustrated for two Bayesian inverse problems arising from elliptic partial differential equations (PDEs). The examples involve the inversion of observations of the solution of (i) a 1-dimensional Poisson equation to infer the diffusion coefficient, and (ii) a 2-dimensional Poisson equation to infer the external forcing.
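The multilevel idea, telescoping the expectation across discretisation levels and coupling each pair of levels so the correction terms have small variance, can be illustrated on a toy problem. The Python sketch below (an invented example, estimating E[X₁²] for an Ornstein-Uhlenbeck process via coupled Euler discretisations) shows only the telescoping estimator, not the paper's SMC sampler for normalizing constants:

```python
import numpy as np

rng = np.random.default_rng(3)

def ou_euler(n_paths, n_steps, dW):
    """Euler paths of dX = -X dt + dW on [0, 1], starting from X_0 = 0."""
    dt = 1.0 / n_steps
    x = np.zeros(n_paths)
    for k in range(n_steps):
        x += -x * dt + dW[:, k]
    return x

def level_term(level, n_paths):
    """Monte Carlo estimate of E[P_l - P_{l-1}] (or E[P_0] at level 0),
    with P_l = X_1**2 computed on a 2**l step grid. Coarse and fine paths
    share Brownian increments, so the difference has small variance."""
    nf = 2 ** level
    dW = np.sqrt(1.0 / nf) * rng.standard_normal((n_paths, nf))
    fine = ou_euler(n_paths, nf, dW) ** 2
    if level == 0:
        return fine.mean()
    dWc = dW[:, 0::2] + dW[:, 1::2]        # pairwise-summed increments
    coarse = ou_euler(n_paths, nf // 2, dWc) ** 2
    return (fine - coarse).mean()

# Telescoping multilevel estimator of E[X_1**2]; the exact value for this
# OU process is (1 - exp(-2)) / 2, approximately 0.432.
mlmc = sum(level_term(l, 40000) for l in range(6))
```

The point of the coupling is that most samples can be spent on the cheap coarse levels while the expensive fine-level corrections need only a few.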
ADHD and math - The differential effect on calculation and estimation.
Ganor-Stern, Dana; Steinhorn, Ofir
2018-05-31
Adults with ADHD were compared to controls when solving multiplication problems exactly and when estimating the results of multidigit multiplication problems relative to reference numbers. The ADHD participants were slower than controls in the exact calculation and in the estimation tasks, but not less accurate. The ADHD participants were similar to controls in showing enhanced accuracy and speed for smaller problem sizes, for trials in which the reference numbers were smaller (vs. larger) than the exact answers and for reference numbers that were far (vs. close) from the exact answer. The two groups similarly used the approximated calculation and the sense of magnitude strategies. They differed however in strategy execution, mainly of the approximated calculation strategy, which requires working memory resources. The increase in reaction time associated with using the approximated calculation strategy was larger for the ADHD compared to the control participants. Thus, ADHD seems to selectively impair calculation processes in estimation tasks that rely on working memory, but it does not hamper estimation skills that are based on sense of magnitude. The educational implications of these findings are discussed. Copyright © 2018. Published by Elsevier B.V.
Plis, Sergey M; George, J S; Jun, S C; Paré-Blagoev, J; Ranken, D M; Wood, C C; Schmidt, D M
2007-01-01
We propose a new model to approximate spatiotemporal noise covariance for use in neural electromagnetic source analysis, which better captures temporal variability in background activity. As with other existing formalisms, our model employs a Kronecker product of matrices representing temporal and spatial covariance. In our model, spatial components are allowed to have differing temporal covariances. Variability is represented as a series of Kronecker products of spatial component covariances and corresponding temporal covariances. Unlike previous attempts to model covariance through a sum of Kronecker products, our model is designed to have a computationally manageable inverse. Despite increased descriptive power, inversion of the model is fast, making it useful in source analysis. We have explored two versions of the model. One is estimated based on the assumption that spatial components of background noise have uncorrelated time courses. Another version, which gives closer approximation, is based on the assumption that time courses are statistically independent. The accuracy of the structural approximation is compared to an existing model, based on a single Kronecker product, using both Frobenius norm of the difference between spatiotemporal sample covariance and a model, and scatter plots. Performance of ours and previous models is compared in source analysis of a large number of single dipole problems with simulated time courses and with background from authentic magnetoencephalography data.
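Why such a sum of Kronecker products can keep a computationally manageable inverse can be seen in a small numerical sketch (Python; the dimensions and matrices are invented, and orthonormal spatial components are an assumption corresponding to the model's uncorrelated-components variant): with orthonormal spatial maps, the covariance inverts component by component.

```python
import numpy as np

rng = np.random.default_rng(4)
n_space, n_time = 4, 3

# A complete set of orthonormal spatial component maps (columns of U).
U = np.linalg.qr(rng.standard_normal((n_space, n_space)))[0]

def random_spd(m):
    """A random symmetric positive-definite matrix."""
    a = rng.standard_normal((m, m))
    return a @ a.T + m * np.eye(m)

# Each spatial component gets its own temporal covariance, so the model
# is a sum of Kronecker products rather than a single product.
Ts = [random_spd(n_time) for _ in range(n_space)]
C = sum(np.kron(np.outer(U[:, i], U[:, i]), Ts[i]) for i in range(n_space))

# Because the spatial components are orthonormal, the inverse is available
# component by component -- only the small temporal blocks are inverted.
C_inv = sum(np.kron(np.outer(U[:, i], U[:, i]), np.linalg.inv(Ts[i]))
            for i in range(n_space))
```

Inverting the full spatiotemporal matrix directly would cost O((n_space·n_time)³); here only n_space temporal blocks of size n_time are inverted.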
Should we be looking for and treating isolated calf vein thrombosis?
Horner, Daniel; Hogg, Kerstin; Body, Richard
2016-06-01
Management of isolated calf deep vein thrombosis is an area of significant international debate and variable clinical practice. Both therapeutic anticoagulation and conservative management carry risk. As clinical care of suspected and confirmed venous thromboembolic disease increasingly becomes the remit of emergency medicine, complex decisions are left to practising clinicians at the front door. We aim to provide a contemporary overview of recent evidence on this topic and associated challenges facing clinicians. Given the lack of high-level evidence, we present this work as a narrative review, based on structured literature review and expert opinion. A decision to manage calf thrombosis is principally dependent on the risk of complications without treatment balanced against the risks of therapeutic anticoagulation. Estimates of the former risks taken from systematic review, meta-analysis, observational cohort and recent pilot trial evidence include proximal propagation 7%-10%, pulmonary embolism 2%-3% and death <1%. Fatal bleeding with therapeutic anticoagulation stands at <0.5%, and major bleeding at approximately 2%. Estimates of haemorrhagic risk are based on robust data from large prospective management studies of venous thromboembolic disease; the risks of untreated calf deep vein thrombosis are based on small cohorts and therefore less exact. Pending further trial evidence, these risks should be discussed with patients openly, in the context of personal preference and shared decision-making. Anticoagulation may maximally benefit those patients with extensive and/or symptomatic disease or those with higher risk for complication (unprovoked, cancer-associated or pregnancy). Published by the BMJ Publishing Group Limited.
Martinson, Melissa; Bharmi, Rupinder; Dalal, Nirav; Abraham, William T; Adamson, Philip B
2017-05-01
Haemodynamic-guided heart failure (HF) management effectively reduces decompensation events and need for hospitalizations. The economic benefit of clinical improvement requires further study. An estimate of the cost-effectiveness of haemodynamic-guided HF management was made based on observations published in the randomized, prospective single-blinded CHAMPION trial. A comprehensive analysis was performed including healthcare utilization event rates, survival, and quality of life demonstrated in the randomized portion of the trial (18 months). Markov modelling with Monte Carlo simulation was used to approximate comprehensive costs and quality-adjusted life years (QALYs) from a payer perspective. Unit costs were estimated using the Truven Health MarketScan database from April 2008 to March 2013. Over a 5-year horizon, patients in the Treatment group had average QALYs of 2.56 with a total cost of US$56 974; patients in the Control group had QALYs of 2.16 with a total cost of US$52 149. The incremental cost-effectiveness ratio (ICER) was US$12 262 per QALY. Using comprehensive cost modelling, including all anticipated costs of HF and non-HF hospitalizations, physician visits, prescription drugs, long-term care, and outpatient hospital visits over 5 years, the Treatment group had a total cost of US$212 004 and the Control group had a total cost of US$200 360. The ICER was US$29 593 per QALY. Standard economic modelling suggests that pulmonary artery pressure-guided management of HF using the CardioMEMS™ HF System is cost-effective from the US-payer perspective. This analysis provides the background for further modelling in specific country healthcare systems and cost structures. © 2016 The Authors. European Journal of Heart Failure published by John Wiley & Sons Ltd on behalf of European Society of Cardiology.
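The headline ICERs can be approximately reproduced from the reported figures; the small discrepancies come from rounding of the published costs and QALYs, since the paper's Markov/Monte Carlo model works with unrounded values.

```python
# Recomputing the trial-based ICER from the rounded headline figures.
cost_treat, cost_control = 56974, 52149      # US$, 5-year horizon
qaly_treat, qaly_control = 2.56, 2.16        # quality-adjusted life years

icer_trial = (cost_treat - cost_control) / (qaly_treat - qaly_control)
# ≈ US$12,062/QALY from rounded inputs (the paper reports US$12,262/QALY)

# Same arithmetic for the comprehensive cost model.
icer_full = (212004 - 200360) / (qaly_treat - qaly_control)
# ≈ US$29,110/QALY from rounded inputs (the paper reports US$29,593/QALY)
```

Both recomputed values sit well below common US willingness-to-pay thresholds, consistent with the paper's cost-effectiveness conclusion.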
Colwell, Janice C; McNichol, Laurie; Boarini, Joy
The purpose of this study was to describe the practice of 796 ostomy nurses in North America in 2014 related to peristomal skin issues. Descriptive study. Participants were 796 wound, ostomy, and continence (WOC) and enterostomal therapy (ET) nurses currently practicing in the United States or Canada and caring for patients with ostomies. The collection of data occurred in conjunction with an educational program on peristomal skin complications and practice issues and solicited participants' perceptions of the incidence and frequency of peristomal skin issues as well as of practice patterns. Participants attended an educational program. They were also asked to anonymously respond to multiple-choice questions on ostomy care management via an audience response system, followed by discussion of each item and their responses. This descriptive study reports the answers to the questions as well as the pertinent discussion points. Participants estimated that approximately 77.7% of their patients developed peristomal skin issues. The most commonly encountered problem was irritant contact dermatitis (peristomal moisture-associated skin damage). Contributing factors were inappropriate use of a pouching system owing to lack of follow-up after hospital discharge. Reported interventions for the prevention and management of peristomal skin issues included preoperative stoma site marking, use of a convex pouching system, and barrier rings. However, subsequent discussion revealed that the frequency of use of these products varied considerably. Participants identified shortened hospital stays, absence of preoperative stoma marking, and limited outpatient follow-up as contributing to the development of peristomal skin problems. WOC and ET nurses estimate that more than three-quarters of persons living with an ostomy develop peristomal skin problems. Multiple interventions for managing these problems were identified, but some variability in management approaches emerged.
U.S. dietary exposures to heterocyclic amines.
Bogen, K T; Keating, G A
2001-01-01
Heterocyclic amines (HAs) formed in fried, broiled or grilled meats are potent mutagens that increase rates of colon, mammary, prostate and other cancers in bioassay rodents. Studies of how human dietary HA exposures may affect cancer risks have so far relied on fairly crudely defined HA-exposure categories. Recently, an integrated, quantitative approach to HA-exposure assessment (HAEA) was developed to estimate compound-specific intakes for particular individuals based on corresponding HA-concentration estimates that reflect their meat-type, intake-rate, cooking-method and meat-doneness preferences. This method was applied in the present study to U.S. national Continuing Survey of Food Intakes by Individuals (CSFII) data on meats consumed and cooking methods used by >25,000 people, after adjusting for underreported energy intake and conditional on meat-doneness preferences estimated from additional survey data. The U.S. population average lifetime time-weighted average of total HAs consumed was estimated to be approximately 9 ng/kg/day, with 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) estimated to comprise about two thirds of this intake. Pan-fried meats were the largest source of HA in the diet and chicken the largest source of HAs among different meat types. Estimated total HA intakes by male vs. female children were generally similar, with those by (0- to 15-year-old) children approximately 25% greater than those by (16+-year-old) adults. Race-, age- and sex-specific mean HA intakes were estimated to be greatest for African American males, who were estimated to consume approximately 2- and approximately 3-fold more PhIP than white males at ages <16 and 30+ years, respectively, after considering a relatively greater preference for more well-done items among African Americans based on national survey data. 
This difference in PhIP intakes may at least partly explain why prostate cancer (PC) kills approximately 2-fold more African American than white men, in view of experimental data indicating that PhIP mutates prostate DNA and causes prostate tumors in rats.
Trask, Amanda E; Bignal, Eric M; McCracken, Davy I; Piertney, Stuart B; Reid, Jane M
2017-09-01
A population's effective size (Ne) is a key parameter that shapes rates of inbreeding and loss of genetic diversity, thereby influencing evolutionary processes and population viability. However, estimating Ne, and identifying key demographic mechanisms that underlie the Ne to census population size (N) ratio, remains challenging, especially for small populations with overlapping generations and substantial environmental and demographic stochasticity and hence dynamic age-structure. A sophisticated demographic method of estimating Ne/N, which uses Fisher's reproductive value to account for dynamic age-structure, has been formulated. However, this method requires detailed individual- and population-level data on sex- and age-specific reproduction and survival, and has rarely been implemented. Here, we use the reproductive value method and detailed demographic data to estimate Ne/N for a small and apparently isolated red-billed chough (Pyrrhocorax pyrrhocorax) population of high conservation concern. We additionally calculated two single-sample molecular genetic estimates of Ne to corroborate the demographic estimate and examine evidence for unobserved immigration and gene flow. The demographic estimate of Ne/N was 0.21, reflecting a high total demographic variance (σ²dg) of 0.71. Females and males made similar overall contributions to σ²dg. However, contributions varied among sex-age classes, with greater contributions from 3-year-old females than males, but greater contributions from ≥5-year-old males than females. The demographic estimate of Ne was ~30, suggesting that rates of increase of inbreeding and loss of genetic variation per generation will be relatively high. Molecular genetic estimates of Ne computed from linkage disequilibrium and approximate Bayesian computation were approximately 50 and 30, respectively, providing no evidence of substantial unobserved immigration which could bias demographic estimates of Ne. 
Our analyses identify key sex-age classes contributing to demographic variance and thus decreasing Ne/N in a small age-structured population inhabiting a variable environment. They thereby demonstrate how assessments of Ne can incorporate stochastic sex- and age-specific demography and elucidate key demographic processes affecting a population's evolutionary trajectory and viability. Furthermore, our analyses show that Ne for the focal chough population is critically small, implying that management to re-establish genetic connectivity may be required to ensure population viability. © 2017 The Authors. Journal of Animal Ecology © 2017 British Ecological Society.
Approximation of Optimal Infinite Dimensional Compensators for Flexible Structures
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Mingori, D. L.; Adamian, A.; Jabbari, F.
1985-01-01
The infinite dimensional compensator for a large class of flexible structures, modeled as distributed systems, is discussed, as well as an approximation scheme for designing finite dimensional compensators to approximate the infinite dimensional compensator. The approximation scheme is applied to develop a compensator for a space antenna model based on wrap-rib antennas currently being built. While the present model has been simplified, it retains the salient features of rigid body modes and several distributed components with different characteristics. The control and estimator gains are represented by functional gains, which provide graphical representations of the control and estimator laws. These functional gains also indicate the convergence of the finite dimensional compensators and show which modes the optimal compensator ignores.
Steenland, Kyle; Pillarisetti, Ajay; Kirby, Miles; Peel, Jennifer; Clark, Maggie; Checkley, Will; Chang, Howard H; Clasen, Thomas
2018-02-01
Improved biomass and advanced fuel cookstoves can lower household air pollution (HAP), but levels of fine particulate matter (PM2.5) often remain above the World Health Organization (WHO) recommended interim target of 35 μg/m³. Based on existing literature, we first estimate a range of likely levels of personal PM2.5 before and after a liquefied petroleum gas (LPG) intervention. Using simulations reflecting uncertainty in both the exposure estimates and exposure-response coefficients, we estimate corresponding expected health benefits for systolic blood pressure (SBP) in adults, birthweight, and pneumonia incidence among children <2 years old. We also estimate potential avoided premature mortality among those exposed. Our best estimate is that an LPG stove intervention would decrease personal PM2.5 exposure from approximately 270 μg/m³ to approximately 70 μg/m³, due to likely continued use of traditional open-fire stoves. We estimate that this decrease would lead to a 5.5 mmHg lower SBP among women over age 50, a 338 g higher birthweight, and a 37% lower incidence of severe childhood pneumonia. We estimate that decreased SBP, if sustained, would result in a 5%-10% decrease in mortality for women over age 50. We estimate that higher birthweight would reduce infant mortality by 4 to 11 deaths per 1000 births; for comparison, the current global infant mortality rate is 32/1000 live births. Reduced exposure is estimated to prevent approximately 29 cases of severe pneumonia per year per 1000 children under 2, avoiding approximately 2-3 deaths/1000 per year. However, there are large uncertainties around all these estimates due to uncertainty in both exposure estimates and in exposure-response coefficients; all health effect estimates include the null value of no benefit. An LPG stove intervention, while not likely to lower exposure to the WHO interim target level, is still likely to offer important health benefits. Copyright © 2017 Elsevier Ltd. All rights reserved.
Approximate likelihood calculation on a phylogeny for Bayesian estimation of divergence times.
dos Reis, Mario; Yang, Ziheng
2011-07-01
The molecular clock provides a powerful way to estimate species divergence times. If information on some species divergence times is available from the fossil or geological record, it can be used to calibrate a phylogeny and estimate divergence times for all nodes in the tree. The Bayesian method provides a natural framework to incorporate different sources of information concerning divergence times, such as information in the fossil and molecular data. Current models of sequence evolution are intractable in a Bayesian setting, and Markov chain Monte Carlo (MCMC) is used to generate the posterior distribution of divergence times and evolutionary rates. This method is computationally expensive, as it involves the repeated calculation of the likelihood function. Here, we explore the use of Taylor expansion to approximate the likelihood during MCMC iteration. The approximation is much faster than conventional likelihood calculation. However, the approximation is expected to be poor when the proposed parameters are far from the likelihood peak. We explore the use of parameter transforms (square root, logarithm, and arcsine) to improve the approximation to the likelihood curve. We found that the new methods, particularly the arcsine-based transform, provided very good approximations under relaxed clock models and also under the global clock model when the global clock is not seriously violated. The approximation is poorer for analysis under the global clock when the global clock is seriously wrong and should thus not be used. The results suggest that the approximate method may be useful for Bayesian dating analysis using large data sets.
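The idea of replacing an expensive likelihood with a Taylor expansion about its peak can be sketched generically. The Python fragment below uses an exponential-model likelihood as an invented stand-in for the phylogenetic one: accuracy is good near the maximum and degrades far from it, which is what motivates the parameter transforms studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.exponential(scale=2.0, size=200)

def loglik(rate):
    """Exponential log-likelihood, standing in for the expensive one."""
    return data.size * np.log(rate) - rate * data.sum()

mle = data.size / data.sum()               # analytic maximum

# Second-order Taylor expansion of the log-likelihood about the MLE,
# with the curvature obtained by a central finite difference.
h = 1e-5
curv = (loglik(mle + h) - 2 * loglik(mle) + loglik(mle - h)) / h**2

def loglik_taylor(rate):
    # The first-order term vanishes at the maximum.
    return loglik(mle) + 0.5 * curv * (rate - mle)**2

# Near the peak the approximation is good; far away it degrades.
near = abs(loglik(mle * 1.05) - loglik_taylor(mle * 1.05))
far = abs(loglik(mle * 2.0) - loglik_taylor(mle * 2.0))
```

During MCMC, evaluating `loglik_taylor` costs a few arithmetic operations regardless of how expensive `loglik` itself is, which is the source of the speed-up.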
On Bernstein type inequalities and a weighted Chebyshev approximation problem on ellipses
NASA Technical Reports Server (NTRS)
Freund, Roland
1989-01-01
A classical inequality due to Bernstein which estimates the norm of polynomials on any given ellipse in terms of their norm on any smaller ellipse with the same foci is examined. For the uniform and a certain weighted uniform norm, and for the case that the two ellipses are not too close, sharp estimates of this type were derived and the corresponding extremal polynomials were determined. These Bernstein type inequalities are closely connected with certain constrained Chebyshev approximation problems on ellipses. Some new results were also presented for a weighted approximation problem of this type.
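For orientation, a hedged sketch of the classical setting as I understand it (not taken from the paper): let E_ρ denote the ellipse with foci ±1 on which the sum of semi-axes is ρ, and let p be a polynomial of degree at most n. The elementary Bernstein-type bound, for 1 ≤ r < R, is

\[
\max_{z \in E_R} |p(z)| \;\le\; \left(\frac{R}{r}\right)^{n} \max_{z \in E_r} |p(z)|,
\]

while the sharp comparison studied in work of this kind, attained by the Chebyshev polynomial \(T_n\) when the two ellipses are not too close, replaces the constant by

\[
\frac{R^{n} + R^{-n}}{r^{n} + r^{-n}},
\qquad\text{using}\qquad
\max_{z \in E_\rho} |T_n(z)| = \tfrac{1}{2}\left(\rho^{n} + \rho^{-n}\right).
\]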
Hepatic Resection for Colorectal Liver Metastases: A Cost-Effectiveness Analysis
Beard, Stephen M.; Holmes, Michael; Price, Charles; Majeed, Ali W.
2000-01-01
Objective To analyze the cost-effectiveness of resection for liver metastases compared with standard nonsurgical cytotoxic treatment. Summary Background Data The efficacy of hepatic resection for metastases from colorectal cancer has been debated, despite reported 5-year survival rates of 20% to 40%. Resection is confined to specialized centers and is not widely available, perhaps because of lack of appropriate expertise, resources, or awareness of its efficacy. The cost-effectiveness of resection is important from the perspective of managed care in the United States and for the commissioning of health services in the United Kingdom. Methods A simple decision-based model was developed to evaluate the marginal costs and health benefits of hepatic resection. Estimates of resectability for liver metastases were taken from UK-reported case series data. The results of 100 hepatic resections conducted in Sheffield from 1997 to 1999 were used for the cost calculation of liver resection. Survival data from published series of resections were compiled to estimate the incremental cost per life-year gained (LYG) because of the short period of follow-up in the Sheffield series. Results Hepatic resection for colorectal liver metastases provides an estimated marginal benefit of 1.6 life-years (undiscounted) at a marginal cost of £6,742. If 17% of patients have only palliative resections, the overall cost per LYG is approximately £5,236 (£5,985 with discounted benefits). If potential benefits are extended to include 20-year survival rates, these figures fall to approximately £1,821 (£2,793 with discounted benefits). Further univariate sensitivity analysis of key model parameters showed the cost per LYG to be consistently less than £15,000. Conclusion In this model, hepatic resection appears highly cost-effective compared with nonsurgical treatments for colorectal-related liver metastases. PMID:11088071
Factors affecting winter survival of female mallards in the lower Mississippi alluvial valley
Davis, B.E.; Afton, A.D.; Cox, R.R.
2011-01-01
The lower Mississippi Alluvial Valley (hereafter LMAV) provides winter habitat for approximately 40% of the Mississippi Flyway's Mallard (Anas platyrhynchos) population; information on winter survival rates of female Mallards in the LMAV is restricted to data collected prior to implementation of the North American Waterfowl Management Plan. To estimate recent survival and cause-specific mortality rates in the LMAV, 174 radio-marked female Mallards were tracked for a total of 11,912 exposure days. Survival varied by time periods defined by hunting seasons, and females with lower body condition (size-adjusted body mass) at time of capture had reduced probability of survival. Female survival was lower and the duration of our tracking period was greater than those in previous studies of similarly marked females in the LMAV; the product-limit survival estimate (± SE) through the entire tracking period (136 days) was 0.54 ± 0.10. Cause-specific mortality rates were 0.18 ± 0.04 and 0.34 ± 0.12 for hunting and other sources of mortality, respectively; the estimated mortality rate from other sources (including those from avian, mammalian, or unknown sources) was higher than mortality from non-hunting sources reported in previous studies of Mallards in the LMAV. Models that incorporate winter survival estimates as a factor in Mallard population growth rates should be adjusted for these reduced winter survival estimates.
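The product-limit estimator named in the abstract above is the Kaplan-Meier estimator, which handles censored observations (e.g., a bird whose transmitter is lost). A minimal sketch with made-up event times, not the mallard telemetry data:

```python
# Sketch of the product-limit (Kaplan-Meier) survival estimator.
# Each record is (time, observed): observed=0 means censored, e.g. the
# radio signal was lost before a death could be confirmed. Data invented.
data = [(5, 1), (8, 0), (12, 1), (12, 1), (20, 0), (30, 0)]

event_times = sorted({t for t, obs in data if obs == 1})
s = 1.0
for t in event_times:
    at_risk = sum(1 for ti, _ in data if ti >= t)            # still tracked
    deaths = sum(1 for ti, obs in data if ti == t and obs == 1)
    s *= 1 - deaths / at_risk   # conditional probability of surviving past t
    print(t, round(s, 3))
```

Here the estimate steps down only at observed deaths, while censored animals still contribute to the at-risk count up to the time they drop out.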
Progress in navigation filter estimate fusion and its application to spacecraft rendezvous
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
1994-01-01
A new derivation of an algorithm which fuses the outputs of two Kalman filters is presented within the context of previous research in this field. Unlike other works, this derivation clearly shows the combination of estimates to be optimal, minimizing the trace of the fused covariance matrix. The algorithm assumes that the filters use identical models, and are stable and operating optimally with respect to their own local measurements. Evidence is presented which indicates that the error ellipsoid derived from the covariance of the optimally fused estimate is contained within the intersections of the error ellipsoids of the two filters being fused. Modifications which reduce the algorithm's data transmission requirements are also presented, including a scalar gain approximation, a cross-covariance update formula which employs only the two contributing filters' autocovariances, and a form of the algorithm which can be used to reinitialize the two Kalman filters. A sufficient condition for using the optimally fused estimates to periodically reinitialize the Kalman filters in this fashion is presented and proved as a theorem. When these results are applied to an optimal spacecraft rendezvous problem, simulated performance results indicate that the use of optimally fused data leads to significantly improved robustness to initial target vehicle state errors. The following applications of estimate fusion methods to spacecraft rendezvous are also described: state vector differencing, and redundancy management.
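The core fusion step the abstract describes can be sketched compactly. The version below assumes uncorrelated estimation errors between the two filters (the paper also treats the cross-covariance term, omitted here); the numbers are illustrative, not from the rendezvous application.

```python
import numpy as np

# Minimal sketch: fuse two unbiased estimates x1, x2 with covariances P1, P2,
# assuming uncorrelated errors. The fused estimate weights each input by the
# inverse of its covariance and minimizes the trace of the fused covariance.
x1 = np.array([1.0, 2.0]);  P1 = np.diag([0.5, 1.0])
x2 = np.array([1.2, 1.8]);  P2 = np.diag([1.0, 0.5])

P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
Pf = np.linalg.inv(P1i + P2i)          # fused covariance
xf = Pf @ (P1i @ x1 + P2i @ x2)        # fused estimate

# The fused covariance is "smaller": its trace never exceeds either input's
print(xf, np.trace(Pf), np.trace(P1), np.trace(P2))
```

This is consistent with the containment property noted in the abstract: the fused error ellipsoid lies within the intersection of the two contributing ellipsoids.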
Lee, K V; Moon, R D; Burkness, E C; Hutchison, W D; Spivak, M
2010-08-01
The parasitic mite Varroa destructor Anderson & Trueman (Acari: Varroidae) is arguably the most detrimental pest of the European-derived honey bee, Apis mellifera L. Unfortunately, beekeepers lack a standardized sampling plan to make informed treatment decisions. Based on data from 31 commercial apiaries, we developed sampling plans for use by beekeepers and researchers to estimate the density of mites in individual colonies or whole apiaries. Beekeepers can estimate a colony's mite density with a chosen level of precision by dislodging mites from approximately 300 adult bees taken from one brood box frame in the colony, and they can extrapolate to mite density on a colony's adults and pupae combined by doubling the number of mites on adults. For sampling whole apiaries, beekeepers can repeat the process in each of n = 8 colonies, regardless of apiary size. Researchers desiring greater precision can estimate mite density in an individual colony by examining three 300-bee sample units. Extrapolation to density on adults and pupae may require independent estimates of numbers of adults, of pupae, and of their respective mite densities. Researchers can estimate apiary-level mite density by taking one 300-bee sample unit per colony, but should do so from a variable number of colonies, depending on apiary size. These practical sampling plans will allow beekeepers and researchers to quantify mite infestation levels and enhance understanding and management of V. destructor.
Comparison of three NDVI time-series fitting methods in crop phenology detection in Northeast China
NASA Astrophysics Data System (ADS)
Wang, Meng; Tao, Fulu
2014-03-01
Phenological changes of cropland are a pivotal basis for farm management, agricultural production, and climate change research. Over the past decades, a range of methods has been used to extract phenological events from satellite-derived continuous vegetation index time series; however, large uncertainties still exist. In this study, three smoothing methods were compared to reduce the potential uncertainty and to quantify crop green-up dates over Northeast China. The results indicated that the crop spring onset dates estimated by the three methods vary, but show a similar spatial pattern. In 60% of the study area, the standard deviation (SD) of the starting dates estimated by the different methods is less than 10 days, while 39.5% of the pixels have SDs between 10 and 30 days. Through comparative analysis against observed phenological data, we concluded that the Asymmetric Gaussian method produced the closest results, followed by the Double Logistic algorithm, while the Savitzky-Golay algorithm performed worst. The starting dates of crops occur mostly between May and June in this region. The Savitzky-Golay method gives the earliest estimates, while the Asymmetric Gaussian and Double Logistic fitting methods give similar and later estimates, which are more consistent with the observed data.
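One of the three fitting methods compared above, the Savitzky-Golay filter, is available off the shelf in SciPy. The sketch below smooths a synthetic NDVI-like seasonal curve and derives a simple green-up proxy; the window length, polynomial order, and 20%-of-amplitude threshold are arbitrary illustrative choices, not those used in the study.

```python
import numpy as np
from scipy.signal import savgol_filter

# Illustrative Savitzky-Golay smoothing of a synthetic NDVI time series.
t = np.arange(0, 365, 8)                           # 8-day composites
ndvi = 0.2 + 0.5 * np.exp(-((t - 190) / 60) ** 2)  # idealized green-up curve
rng = np.random.default_rng(0)
noisy = ndvi + rng.normal(0, 0.05, t.size)         # cloud/atmospheric noise

smooth = savgol_filter(noisy, window_length=11, polyorder=3)

# A simple green-up proxy: first date where the smoothed curve rises 20% of
# the seasonal amplitude above its minimum (a common thresholding heuristic)
amp = smooth.max() - smooth.min()
onset = t[np.argmax(smooth > smooth.min() + 0.2 * amp)]
print(onset)
```

The Asymmetric Gaussian and Double Logistic methods instead fit parametric curve shapes per season, which is one reason their onset estimates can differ systematically from a local polynomial filter like Savitzky-Golay.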
Validation of daily increments periodicity in otoliths of spotted gar
Snow, Richard A.; Long, James M.; Frenette, Bryan D.
2017-01-01
Accurate age and growth information is essential for successful management of fish populations and for understanding early life history. We validated daily increment deposition, including the timing of first ring formation, for spotted gar (Lepisosteus oculatus) through 127 days post hatch. Fry were produced from hatchery-spawned specimens, and up to 10 individuals per week were sacrificed and their otoliths (sagitta, lapillus, and asteriscus) removed for daily age estimation. Daily age estimates for all three otolith pairs were significantly related to known age. The strongest relationships existed for measurements from the sagitta (r2 = 0.98) and the lapillus (r2 = 0.99), with the asteriscus (r2 = 0.95) the lowest. All age prediction models resulted in a slope near unity, indicating that ring deposition occurred approximately daily. Initiation of ring formation varied among otolith types, with deposition beginning 3, 7, and 9 days post hatch for the sagitta, lapillus, and asteriscus, respectively. Results of this study suggest that otoliths are useful for estimating the daily age of spotted gar juveniles; these data may be used to back-calculate hatch dates, estimate early growth rates, and correlate with environmental factors that influence spawning in wild populations. This early life history information will be valuable for better understanding the ecology of this species.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-07
... of 12,100 hours is estimated for SF 3112C. SF 3112A is used each year by approximately 1,350 persons... completed by the immediate supervisor and the employing agency of the applicant. Approximately 12,100... hours is estimated for SF 3112A. The total burden for SF 3112 is 12,775 hours. All 12,100 respondents...
Portable Language-Independent Adaptive Translation from OCR. Phase 1
2009-04-01
including brute-force k-Nearest Neighbors (kNN), fast approximate kNN using hashed k-d trees, classification and regression trees, and locality...achieved by refinements in ground-truthing protocols. Recent algorithmic improvements to our approximate kNN classifier using hashed k-d trees allows...recent years discriminative training has been shown to outperform phonetic HMMs estimated using ML for speech recognition. Standard ML estimation
Reply to Steele & Ferrer: Modeling Oscillation, Approximately or Exactly?
ERIC Educational Resources Information Center
Oud, Johan H. L.; Folmer, Henk
2011-01-01
This article addresses modeling oscillation in continuous time. It criticizes Steele and Ferrer's article "Latent Differential Equation Modeling of Self-Regulatory and Coregulatory Affective Processes" (2011), particularly the approximate estimation procedure applied. This procedure is the latent version of the local linear approximation procedure…
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
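The parametric likelihood approximation described in the two abstracts above can be sketched in a few lines for a stand-in stochastic model (FORMIND itself is far more complex): simulate repeatedly at a candidate parameter value, fit a normal distribution to a summary statistic of the simulations, and score the observed summary under that fitted density.

```python
import numpy as np

# Minimal sketch of a simulation-based (parametric/synthetic) likelihood
# approximation. The "simulator" is a trivial stand-in stochastic model.
rng = np.random.default_rng(3)

def simulator(theta, n=200):
    # stand-in stochastic model: observations scatter around theta
    return rng.normal(theta, 1.0, size=n)

def approx_loglik(theta, observed_summary, n_sims=200):
    # Run the simulator many times, summarize each run by its mean, and
    # fit a Gaussian to the spread of that summary statistic
    sims = np.array([simulator(theta).mean() for _ in range(n_sims)])
    mu, sd = sims.mean(), sims.std()
    # Gaussian log-density of the observed summary (up to a constant)
    return -0.5 * ((observed_summary - mu) / sd) ** 2 - np.log(sd)

obs = 5.0
lls = {th: approx_loglik(th, obs) for th in (4.0, 5.0, 6.0)}
print(lls)
```

Placed inside an MCMC sampler, this approximate log-likelihood plays the role the abstracts describe, letting the stochasticity of the simulator itself define the data-model mismatch instead of an externally assumed error model.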
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
mBEEF-vdW: Robust fitting of error estimation density functionals
NASA Astrophysics Data System (ADS)
Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; Jacobsen, Karsten W.; Bligaard, Thomas
2016-06-01
We propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012), 10.1103/PhysRevB.85.235149; J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014), 10.1063/1.4870397]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10 % improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.
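The hierarchical estimator introduced above generalizes the classic bootstrap 0.632 rule, which blends the optimistic training error with the pessimistic out-of-bag error as Err ≈ 0.368·err_train + 0.632·err_oob. A minimal sketch on a toy one-parameter regression (illustrative only; the paper applies a hierarchical, geometric-mean variant across many datasets):

```python
import numpy as np

# Classic 0.632 bootstrap error estimate on toy data with a
# least-squares-through-origin model.
rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(scale=0.5, size=50)

def fit_predict(xtr, ytr, xte):
    slope = np.sum(xtr * ytr) / np.sum(xtr * xtr)  # one-parameter fit
    return slope * xte

err_train = np.mean((y - fit_predict(x, y, x)) ** 2)

oob_errs = []
for _ in range(200):
    idx = rng.integers(0, x.size, x.size)        # bootstrap resample
    oob = np.setdiff1d(np.arange(x.size), idx)   # out-of-bag points
    if oob.size:
        pred = fit_predict(x[idx], y[idx], x[oob])
        oob_errs.append(np.mean((y[oob] - pred) ** 2))

err_632 = 0.368 * err_train + 0.632 * np.mean(oob_errs)
print(err_train, err_632)
```

The weight 0.632 is the expected fraction of distinct points in a bootstrap resample, 1 − e⁻¹; the hierarchical sampling in the abstract extends this idea to resampling whole datasets as well as points within them.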
Small UAS-Based Wind Feature Identification System Part 1: Integration and Validation
Rodriguez Salazar, Leopoldo; Cobano, Jose A.; Ollero, Anibal
2016-01-01
This paper presents a system for identification of wind features, such as gusts and wind shear. These are of particular interest in the context of energy-efficient navigation of Small Unmanned Aerial Systems (UAS). The proposed system generates real-time wind vector estimates and a novel algorithm to generate wind field predictions. Estimations are based on the integration of an off-the-shelf navigation system and airspeed readings in a so-called direct approach. Wind predictions use atmospheric models to characterize the wind field with different statistical analyses. During the prediction stage, the system is able to incorporate, in a big-data approach, wind measurements from previous flights in order to enhance the approximations. Wind estimates are classified and fitted to a Weibull probability density function. A Genetic Algorithm (GA) is utilized to determine the shape and scale parameters of the distribution, which are employed to determine the most probable wind speed at a certain position. The system uses this information to characterize a wind shear or a discrete gust and also utilizes Gaussian Process regression to characterize continuous gusts. Knowledge of the wind features is crucial for computing energy-efficient trajectories with low cost and payload. Therefore, the system provides a solution that does not require any additional sensors. The system architecture follows a modular decentralized approach, in which the main parts of the system are separated into modules and the exchange of information is managed by a communication handler to enhance upgradeability and maintainability. Validation is performed with preliminary results from both simulations and Software-In-The-Loop testing. Telemetry data collected from real flights, performed in the Seville Metropolitan Area in Andalusia (Spain), was used for testing. Results show that wind estimation and predictions can be calculated at 1 Hz and a wind map can be updated at 0.4 Hz.
Predictions show a convergence time with a 95% confidence interval of approximately 30 s. PMID:28025531
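The Weibull step described in the abstract above, fitting shape and scale parameters and reading off the most probable wind speed, can be sketched with synthetic data. The paper tunes the parameters with a genetic algorithm; for illustration the sketch simply uses SciPy's maximum-likelihood fit instead.

```python
import numpy as np
from scipy import stats

# Fit a Weibull distribution to synthetic wind-speed samples and take its
# mode as the most probable speed. All parameter values are invented.
rng = np.random.default_rng(2)
speeds = stats.weibull_min.rvs(c=2.0, scale=6.0, size=500, random_state=rng)

c, loc, scale = stats.weibull_min.fit(speeds, floc=0)  # fix location at 0

# Mode of a Weibull(c, scale) for shape c > 1
mode = scale * ((c - 1) / c) ** (1 / c)
print(c, scale, mode)
```

For shape c ≤ 1 the density has no interior mode (it is maximal at zero), which is why the mode formula above is restricted to c > 1.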
Cowled, Brendan D; Garner, M Graeme; Negus, Katherine; Ward, Michael P
2012-01-16
Disease modelling is one approach for providing new insights into wildlife disease epidemiology. This paper describes a spatio-temporal, stochastic, susceptible- exposed-infected-recovered process model that simulates the potential spread of classical swine fever through a documented, large and free living wild pig population following a simulated incursion. The study area (300 000 km2) was in northern Australia. Published data on wild pig ecology from Australia, and international Classical Swine Fever data was used to parameterise the model. Sensitivity analyses revealed that herd density (best estimate 1-3 pigs km-2), daily herd movement distances (best estimate approximately 1 km), probability of infection transmission between herds (best estimate 0.75) and disease related herd mortality (best estimate 42%) were highly influential on epidemic size but that extraordinary movements of pigs and the yearly home range size of a pig herd were not. CSF generally established (98% of simulations) following a single point introduction. CSF spread at approximately 9 km2 per day with low incidence rates (< 2 herds per day) in an epidemic wave along contiguous habitat for several years, before dying out (when the epidemic arrived at the end of a contiguous sub-population or at a low density wild pig area). The low incidence rate indicates that surveillance for wildlife disease epidemics caused by short lived infections will be most efficient when surveillance is based on detection and investigation of clinical events, although this may not always be practical. Epidemics could be contained and eradicated with culling (aerial shooting) or vaccination when these were adequately implemented. It was apparent that the spatial structure, ecology and behaviour of wild populations must be accounted for during disease management in wildlife. 
An important finding was that it may only be necessary to cull or vaccinate relatively small proportions of a population to successfully contain and eradicate some wildlife disease epidemics.
PMID:22243996
A new detailed map of total phosphorus stocks in Australian soil.
Viscarra Rossel, Raphael A; Bui, Elisabeth N
2016-01-15
Accurate data are needed to effectively monitor environmental condition, and develop sound policies to plan for the future. Globally, current estimates of soil total phosphorus (P) stocks are very uncertain because they are derived from sparse data, with large gaps over many areas of the Earth. Here, we derive spatially explicit estimates, and their uncertainty, of the distribution and stock of total P in Australian soil. Data from several sources were harmonized to produce the most comprehensive inventory of total P in soil of the continent. They were used to produce fine spatial resolution continental maps of total P in six depth layers by combining the bootstrap, a decision tree with piecewise regression on environmental variables and geostatistical modelling of residuals. Values of percent total P were predicted at the nodes of a 3-arcsecond (approximately 90 m) grid and mapped together with their uncertainties. We combined these predictions with those for bulk density and mapped the total soil P stock in the 0-30 cm layer over the whole of Australia. The average amount of P in Australian topsoil is estimated to be 0.98 t ha(-1) with 90% confidence limits of 0.2 and 4.2 t ha(-1). The total stock of P in the 0-30 cm layer of soil for the continent is 0.91 Gt with 90% confidence limits of 0.19 and 3.9 Gt. The estimates are the most reliable approximation of the stock of total P in Australian soil to date. They could help improve ecological models, guide the formulation of policy around food and water security, biodiversity and conservation, inform future sampling for inventory, guide the design of monitoring networks, and provide a benchmark against which to assess the impact of changes in land cover, land use and management and climate on soil P stocks and water quality in Australia. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
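A back-of-envelope check of the stock arithmetic behind the map: a layer's P stock per hectare is concentration × bulk density × depth, with unit conversion. The input values below are illustrative choices (not the study's actual inputs) picked to land near the reported 0.98 t ha(-1) average.

```python
# Soil P stock arithmetic: stock = mass fraction x bulk density x depth.
# Illustrative inputs, not the study's data.
p_fraction = 0.025 / 100      # total P as a mass fraction (0.025 %)
bulk_density = 1.3            # g cm^-3
depth_cm = 30                 # 0-30 cm layer

# g per cm^2 of column -> g per ha (1 ha = 1e8 cm^2) -> tonnes per ha
soil_mass_t_per_ha = bulk_density * depth_cm * 1e8 / 1e6
p_stock_t_per_ha = p_fraction * soil_mass_t_per_ha
print(p_stock_t_per_ha)
```

Multiplying such per-hectare stocks over the mapped area (and propagating the mapped uncertainties) is what yields the continental total of about 0.91 Gt quoted above.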
A Data Analysis Toolbox for Modeling the Global Food-Energy-Water Nexus
NASA Astrophysics Data System (ADS)
AghaKouchak, A.; Sadegh, M.; Mallakpour, I.
2017-12-01
Water, food, and energy systems are highly interconnected. More than seventy percent of global water resources are used for food production. Water withdrawal, purification, and transfer systems are energy intensive. Furthermore, energy generation strongly depends on water availability. Therefore, considering the interactions in the nexus of water, food, and energy is crucial for sustainable management of available resources. In this presentation, we introduce a user-friendly data analysis toolbox that mines the available global data on food, energy, and water, and analyzes their interactions. This toolbox provides estimates of the water footprint for a wide range of food types in different countries and also approximates the required energy and water resources. The toolbox also provides estimates of the corresponding emissions and biofuel production of different crops. In summary, this toolbox allows evaluating dependencies of the food, energy, and water systems at the country scale. We present a global analysis of the interactions between water, food, and energy from different perspectives, including efficiency and diversity of resource use.
Palmer, Amanda; Wymore, Katie; Clogher, Paula; Oosmanally, Nadine; Robinson, Trisha; Lathrop, Sarah; Karr, Jillian; Hatch, Julie; Dunn, John; Ryan, Patricia; Blythe, David
2014-01-01
Objectives. The objective of this study was to determine the role international travel plays in US Campylobacter epidemiology and antimicrobial resistance. Methods. In this study, epidemiological and antimicrobial resistance data, encompassing the years 2005 to 2011, from 10 sites participating in the Foodborne Diseases Active Surveillance Network were linked. The 10 sites are represented by 7 states that conducted surveillance on a statewide level, and 3 states which conducted county-level surveillance. Cases of Campylobacter among persons with history of international travel in the week prior to illness were compared with cases among individuals with no international travel. Results. Approximately 18% of Campylobacter infections were estimated to be associated with international travel, and 60% of international travel-associated infections had a quinolone-resistant Campylobacter isolate. Conclusions. We confirm that international travel plays a significant role in campylobacteriosis diagnosed in the United States. Recognizing this is important to both medical management decisions and understanding burden and attribution estimates of US campylobacteriosis and antibiotic-resistant campylobacteriosis. PMID:24832415
Rhinitis and its impact on work.
Vandenplas, Olivier; D'Alpaos, Vinciane; Van Brussel, Philippe
2008-04-01
Health-related work disability has been increasingly recognized as an important component of the economic and societal burden of a disease. The purpose of this review is to summarize recently published data pertaining to the impact of rhinitis on work disability. Recent studies have investigated the impact of rhinitis on both the amount of time missed from work (absenteeism) and the level of work effectiveness while on the job (presenteeism). These studies have shown that rhinitis has a rather modest effect on absenteeism, with estimated productivity losses of approximately 1-4% resulting from missed work time. By contrast, rhinitis is associated with substantial impairment in at-work performance. Estimates of lost productivity attributable to reduced on-the-job effectiveness ranged from 11 to 40%. The impact of rhinitis on work productivity is affected by symptom severity and allergen exposure, and it can be reduced by second-generation antihistamines. The impact of rhinitis on work productivity should be further characterized and taken into account for establishing cost-effective management strategies.
Estimation of old field ecosystem biomass using low altitude imagery
NASA Technical Reports Server (NTRS)
Nor, S. M.; Safir, G.; Burton, T. M.; Hook, J. E.; Schultink, G.
1977-01-01
Color-infrared photography was used to evaluate the biomass of experimental plots in an old-field ecosystem that was treated with different levels of waste water from a sewage treatment facility. Cibachrome prints at a scale of approximately 1:1,600 produced from 35 mm color infrared slides were used to analyze density patterns using prepared tonal density scales and multicell grids registered to ground panels shown on the photograph. Correlations between mean tonal density and harvest biomass data gave consistently high coefficients ranging from 0.530 to 0.896 at the 0.001 significance level. Corresponding multiple regression analysis resulted in higher correlation coefficients. The results indicate that aerial infrared photography can be used to estimate standing crop biomass on waste water irrigated old field ecosystems. Combined with minimal ground truth data, this technique could enable managers of waste water irrigation projects to precisely time harvest of such systems for maximal removal of nutrients in harvested biomass.
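The regression step described above (biomass regressed on photographic tonal density) can be sketched with a minimal ordinary least-squares fit. The tonal-density and biomass values below are invented for illustration; the study's plot data are not reported in the abstract.

```python
# Hypothetical tonal-density readings (dimensionless) paired with
# harvested biomass (g/m^2). Invented values, not the study's data.
density = [0.21, 0.35, 0.42, 0.55, 0.63, 0.78]
biomass = [110.0, 180.0, 205.0, 290.0, 330.0, 410.0]

n = len(density)
mx = sum(density) / n
my = sum(biomass) / n
sxx = sum((x - mx) ** 2 for x in density)
sxy = sum((x - mx) * (y - my) for x, y in zip(density, biomass))
syy = sum((y - my) ** 2 for y in biomass)

slope = sxy / sxx                     # g/m^2 per unit tonal density
intercept = my - slope * mx
r = sxy / (sxx * syy) ** 0.5          # correlation, cf. 0.530-0.896 reported

predicted = intercept + slope * 0.50  # biomass estimate for a new plot
```

With near-linear data like this, the correlation is high; the study's real coefficients (0.530-0.896) reflect noisier field conditions.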
Johnson, Matthew; Kern, Jeffrey; Haig, Susan M.
2010-01-01
This report provides an analysis of California Condor (Gymnogyps californianus) space use of six management units in southern California (Hopper Mountain and Bitter Creek National Wildlife Refuges, Wildlands Conservancy-Wind Wolves Preserve, Tejon Mountain Village Specific Plan, California Condor Study Area, and the Tejon Ranch excluding Tejon Mountain Village Specific Plan and California Condor Study Area). Space use was analyzed to address urgent management needs using location data from Global Positioning System transmitters. The U.S. Fish and Wildlife Service provided the U.S. Geological Survey with location data (2004-09) for California Condors from Global Positioning System transmitters and Geographic Information System data for the six management units in southern California. We calculated relative concentration of use estimates for each management unit for each California Condor (n = 21) on an annual basis (n = 39 annual home ranges) and evaluated resource selection for the population each year using the individual as our sampling unit. The most striking result from our analysis was the recolonization of the Tejon Mountain Village Specific Plan, California Condor Study Area, and Tejon Ranch management units during 2008. During 2004-07, the home range estimate for two (25 percent) California Condors overlapped the Tejon Mountain Village Specific Plan, California Condor Study Area, and Tejon Ranch management units (n = 8), and use within the annual home range generally was bimodal and was concentrated on the Bitter Creek and Hopper Mountain National Wildlife Refuges. However, 10 (77 percent) California Condor home ranges overlapped the Tejon Mountain Village Specific Plan, California Condor Study Area, and Tejon Ranch management units during 2008 (n = 13), and by 2009, the home range of every condor carrying a Global Positioning System transmitter (n = 14) overlapped these management units. 
Space use was multimodal within the home range during 2008-09 and was concentrated on Hopper Mountain Refuge in the south, Bitter Creek Refuge and the Wind Wolves Preserve in the northwest, and the Tejon Mountain Village Specific Plan, California Condor Study Area, and Tejon Ranch management units in the northeast. Recolonization of the Tejon Mountain Village Specific Plan, California Condor Study Area, and Tejon Ranch management units reestablished traditional condor movement and foraging patterns in southern California and provides a travel corridor (approximately 20 kilometers wide) for recolonization of the northeastern part of the species' historical range.
Mangrola, Devna; Cox, Christine; Furman, Arianne S; Krishnan, Sridevi; Karakas, Sidika E
2018-01-01
When glucose records from self blood glucose monitoring (SBGM) do not reflect estimated average glucose from glycosylated hemoglobin (HgBA1c) or when patients' clinical symptoms are not explained by their SBGM records, clinical management of diabetes becomes a challenge. Our objective was to determine the magnitude of differences in glucose values reported by SBGM versus those documented by continuous glucose monitoring (CGM). The CGM was conducted by a certified diabetes educator (CDE)/registered nurse according to the clinic protocol, using the Medtronic iPRO2™ system. Patients continued SBGM and managed their diabetes without any change. Data from 4 full days were obtained, and relevant clinical information was recorded. De-identified data sets were provided to the investigators. Data from 61 patients, 27 with type 1 diabetes (T1DM) and 34 with T2DM, were analyzed. The lowest, highest, and average glucose recorded by SBGM were compared to the corresponding values from CGM. The lowest glucose values reported by SBGM were approximately 25 mg/dL higher in both T1DM (P = .0232) and T2DM (P = .0003). The highest glucose values by SBGM were approximately 30 mg/dL lower in T1DM (P = .0005) and 55 mg/dL lower in T2DM (P < .0001). HgBA1c correlated with the highest and average glucose by SBGM and CGM. The lowest glucose values were seen most frequently during sleep and before breakfast; the highest were seen during the evening and postprandially. SBGM accurately estimates the average glucose but underestimates glucose excursions. CGM uncovers glucose patterns that common SBGM patterns cannot. CDE = certified diabetes educator; CGM = continuous glucose monitoring; HgBA1c = glycosylated hemoglobin; MAD = mean absolute difference; SBGM = self blood glucose monitoring; T1DM = type 1 diabetes; T2DM = type 2 diabetes.
Bozzani, Fiammetta Maria; Arnold, Matthias; Colbourn, Timothy; Lufesi, Norman; Nambiar, Bejoy; Masache, Gibson; Skordis-Worrall, Jolene
2016-07-28
Human resources are a major cost driver in childhood pneumonia case management. Introduction of 13-valent pneumococcal conjugate vaccine (PCV-13) in Malawi can lead to savings on staff time and salaries due to reductions in pneumonia cases requiring admission. Reliable estimates of human resource costs are vital for use in economic evaluations of PCV-13 introduction. Twenty-eight severe and twenty-four very severe pneumonia inpatients under the age of five were tracked from admission to discharge by paediatric ward staff using self-administered timesheets at Mchinji District Hospital between June and August 2012. All activities performed and the time spent on each activity were recorded. A monetary value was assigned to the time by allocating a corresponding percentage of the health workers' salary. All costs are reported in 2012 US$. A total of 1,017 entries, grouped according to 22 different activity labels, were recorded during the observation period. On average, 99 min (standard deviation, SD = 46) were spent on each admission: 93 (SD = 38) for severe and 106 (SD = 55) for very severe cases. Approximately 40 % of activities involved monitoring and stabilization, including administering non-drug therapies such as oxygen. A further 35 % of the time was spent on injecting antibiotics. Nurses provided 60 % of the total time spent on pneumonia admissions, clinicians 25 % and support staff 15 %. Human resource costs were approximately US$ 2 per bed-day and, on average, US$ 29.5 per severe pneumonia admission and US$ 37.7 per very severe admission. Self-reporting was successfully used in this context to generate reliable estimates of human resource time and costs of childhood pneumonia treatment. Assuming vaccine efficacy of 41 % and 90 % coverage, PCV-13 introduction in Malawi can save over US$ 2 million per year in staff costs alone.
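The costing logic above, staff minutes valued as the corresponding share of salary, can be sketched as follows. The working-minutes-per-month figure and the salary are illustrative assumptions, not values from the study.

```python
# Value staff time as a salary fraction, following the abstract's
# approach. The salary and working-time figures are hypothetical.
MINUTES_PER_MONTH = 160 * 60  # assumed ~160 working hours per month

def staff_cost(minutes, monthly_salary_usd):
    """Cost of `minutes` of one worker's time as a share of salary."""
    return monthly_salary_usd * minutes / MINUTES_PER_MONTH

# e.g. 93 minutes (mean severe-case admission) at a hypothetical
# US$250/month salary
cost = staff_cost(93, 250.0)
```

Summing such per-activity costs across nurses, clinicians, and support staff yields per-admission totals of the kind reported (US$ 29.5 and 37.7).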
Chaffee, Benjamin W; Cheng, Jing; Featherstone, John D B
2015-09-24
Consensus guidelines support non-operative preventives for dental caries management; yet, their use in practice is far from universal. The purpose of this study was to evaluate the effectiveness of non-operative anti-caries agents in caries prevention among high caries risk adults at a university clinic where risk-based caries management is emphasized. This retrospective observational study drew data from the electronic patient records of non-edentulous adult patients deemed to be at high risk for dental caries during baseline oral evaluations that were completed between July 1, 2007 and December 31, 2012 at a dental university in the United States. We calculated and compared adjusted mean estimates for the number of new decayed or restored teeth (DFT increment) from baseline to the next completed oral evaluation (N = 2,724 patients with follow-up) across three categories of delivery of non-operative anti-caries agents (e.g., high-concentration fluoride toothpaste, chlorhexidine rinse, xylitol products): never, at a single appointment, or at ≥2 appointments ≥4 weeks apart. Estimates were adjusted for patient and provider characteristics, baseline dental status, losses-to-follow-up, and follow-up time. Approximately half the patients did not receive any form of non-operative anti-caries agent. Most patients who received anti-caries agents were given more than one type of product in combination. One-time delivery of anti-caries agents was associated with a similar DFT increment as receiving no such therapy (difference in increment: -0.04; 95% CI: -0.28, 0.21). However, repeated, spaced delivery of anti-caries agents was associated with approximately one decayed or restored tooth prevented over 18 months for every three patients treated (difference in increment: -0.35; 95% CI: -0.65, -0.08). These results provide evidence that repeatedly receiving anti-caries agents can reduce tooth decay among high-risk patients engaged in regular dental care.
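The "one tooth prevented per three patients" figure follows from a number-needed-to-treat calculation on the reported difference in DFT increment; a minimal sketch:

```python
def number_needed_to_treat(difference_in_increment):
    """Patients treated per one decayed/restored tooth prevented."""
    return 1.0 / abs(difference_in_increment)

# Repeated, spaced delivery: 0.35 fewer decayed/restored teeth per
# patient over ~18 months (the abstract's point estimate)
nnt = number_needed_to_treat(-0.35)  # ~2.9, i.e. roughly 3 patients
```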
APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.
Han, Qiyang; Wellner, Jon A
2016-01-01
In this paper, we study the approximation and estimation of s -concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s -concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [ Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q : if Q n → Q in the Wasserstein metric, then the projected densities converge in weighted L 1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s -concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s -concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s -concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s -concave.
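For readers unfamiliar with the class, the standard definition of s-concavity (background knowledge, not text from the paper) can be stated as:

```latex
% Standard definition: for s < 0, a density f on R^d is s-concave
% when for all x, y and all \theta \in (0,1),
f\bigl((1-\theta)x+\theta y\bigr)
  \;\ge\; \Bigl((1-\theta)\,f(x)^{s}+\theta\,f(y)^{s}\Bigr)^{1/s},
% and the log-concave class arises in the limit s \to 0:
f\bigl((1-\theta)x+\theta y\bigr) \;\ge\; f(x)^{1-\theta}\,f(y)^{\theta}.
```

This makes precise the sense in which the paper's convergence of Rényi divergence estimators to the log-concave MLE as s ↗ 0 is natural: the defining inequalities themselves converge.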
Zollanvari, Amin; Dougherty, Edward R
2014-06-01
The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
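The link between the listed moments and the RMS is the usual expansion of the mean-square difference; writing the Bayesian MMSE error estimator as \(\hat\varepsilon\) and the true LDA error as \(\varepsilon_n\):

```latex
\mathrm{RMS}(\hat\varepsilon)
  = \sqrt{E\bigl[(\hat\varepsilon-\varepsilon_n)^2\bigr]}
  = \sqrt{E[\hat\varepsilon^{\,2}] \;-\; 2\,E[\hat\varepsilon\,\varepsilon_n] \;+\; E[\varepsilon_n^{2}]},
```

which is why approximations of the first, second, and cross moments suffice to approximate the RMS.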
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogen, K.T.; Conrado, C.L.; Robison, W.L.
A detailed analysis of uncertainty and interindividual variability in estimated doses was conducted for a rehabilitation scenario for Bikini Island at Bikini Atoll, in which the top 40 cm of soil would be removed in the housing and village area, and the rest of the island is treated with potassium fertilizer, prior to an assumed resettlement date of 1999. Predicted doses were considered for the following fallout-related exposure pathways: ingested Cesium-137 and Strontium-90, external gamma exposure, and inhalation and ingestion of Americium-241 + Plutonium-239+240. Two dietary scenarios were considered: (1) imported foods are available (IA), and (2) imported foods are unavailable (only local foods are consumed) (IUA). Corresponding calculations of uncertainty in estimated population-average dose showed that after approximately 5 y of residence on Bikini, the upper and lower 95% confidence limits with respect to uncertainty in this dose are estimated to be approximately 2-fold higher and lower than its population-average value, respectively (under both IA and IUA assumptions). Corresponding calculations of interindividual variability in the expected value of dose with respect to uncertainty showed that after approximately 5 y of residence on Bikini, the upper and lower 95% confidence limits with respect to interindividual variability in this dose are estimated to be approximately 2-fold higher and lower than its expected value, respectively (under both IA and IUA assumptions). For reference, the expected values of population-average dose at age 70 were estimated to be 1.6 and 5.2 cSv under the IA and IUA dietary assumptions, respectively. Assuming that 200 Bikini resettlers would be exposed to local foods (under both IA and IUA assumptions), the maximum 1-y dose received by any Bikini resident is most likely to be approximately 2 and 8 mSv under the IA and IUA assumptions, respectively.
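If the dose uncertainty is treated as approximately lognormal (an assumption made here for illustration, not stated in the abstract), the "2-fold higher and lower" 95% limits imply a geometric standard deviation of about 1.42:

```python
import math

def geometric_sd(fold_factor, z=1.96):
    """Geometric SD of a lognormal whose 95% limits sit `fold_factor`
    above and below the median. Illustrative lognormal assumption."""
    return math.exp(math.log(fold_factor) / z)

gsd = geometric_sd(2.0)  # about 1.42
```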
Solving Math Problems Approximately: A Developmental Perspective
Ganor-Stern, Dana
2016-01-01
Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders’, 6th graders’ and adults’ ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense-of-magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy, which involves rounding and multiplication procedures and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow using children’s estimation skills in an effective manner. PMID:27171224
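The adults' "approximated calculation" strategy, round each operand, then multiply, can be sketched as a toy illustration (this is not the study's task code):

```python
def round_to_leading_digit(n):
    """One-significant-digit rounding for positive integers,
    e.g. 47 -> 50, 312 -> 300."""
    magnitude = 10 ** (len(str(abs(n))) - 1)
    return round(n / magnitude) * magnitude

def approximate_product(a, b):
    """Rounding-based estimate of a * b, as in the adults' strategy."""
    return round_to_leading_digit(a) * round_to_leading_digit(b)

estimate = approximate_product(47, 23)  # 50 * 20 = 1000; exact is 1081
```

The children's sense-of-magnitude strategy, by contrast, involves no such intermediate calculation at all.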
NASA Astrophysics Data System (ADS)
Satchithanantham, Sanjayan; Wilson, Henry F.; Glenn, Aaron J.
2017-06-01
Consumptive use of shallow groundwater by phreatophytic vegetation is a significant part of the water budget in many regions, particularly in riparian areas. The influence of vegetation type on groundwater level fluctuations and evapotranspiration has rarely been quantified for contrasting plant communities concurrently although it has implications for downstream water yield and quality. Hourly groundwater evapotranspiration (ETG) rates were estimated for grass and tree riparian vegetation in southwestern Manitoba, Canada using two modified White methods. Groundwater table depth was monitored in four 21 m transects of five 3 m deep monitoring wells in the riparian zone of a stream reach including tree (Acer negundo; boxelder) and grass (Bromus inermis; smooth brome) dominated segments. The average depths to the groundwater table from the surface were 1.4 m and 1 m for the tree and grass segments, respectively, over the two-year study. During rain free periods of the growing season ETG was estimated for a total of 70 days in 2014 and 79 days in 2015 when diurnal fluctuations were present in groundwater level. Diurnal groundwater level fluctuations were observed during dry periods under both segments, however, ETG was significantly higher (p < 0.001) under trees compared to grass cover in 2014 (a wet year with 72% higher than normal growing season precipitation) and 2015 (a drier year with 15% higher than normal growing season precipitation). The two methods used to estimate ETG produced similar daily and seasonal values for the two segments. In 2014, total ETG was approximately 50% (148 mm) and 100% (282-285 mm) of reference evapotranspiration (ETref, 281 mm) for the grass and tree segments, respectively. In 2015, total ETG was approximately 40% (106-127 mm) and 120% (369-374 mm) of ETref (307 mm) for the grass and tree segments, respectively. 
Results from the study show the tree dominated portions of the stream reach consumed approximately 2.4 ML ha-1 yr-1 more groundwater than a common forage grass. These findings have land management implications for regional water budgets during wet periods when flood mitigation is desirable and dry years when water scarcity is a concern.
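The study's "modified White methods" belong to the classic White (1932) family of water-table-fluctuation estimates. A hedged sketch of the basic form follows; the parameter values are entirely illustrative and the study's modifications are not reproduced here.

```python
def white_method_etg(specific_yield, recovery_rate_m_per_h, net_change_m_per_day):
    """Classic White-method groundwater ET (m/day):
    ETg = Sy * (24*r + s), with Sy the specific yield, r the nighttime
    water-table recovery rate, and s the net daily change in
    water-table elevation."""
    return specific_yield * (24.0 * recovery_rate_m_per_h + net_change_m_per_day)

# Entirely illustrative inputs: Sy = 0.05, r = 4 mm/h, s = 20 mm/day
etg_m_per_day = white_method_etg(0.05, 0.004, 0.02)  # 0.0058 m/day
```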
2005-07-26
Audit Report: Cost-to-Complete Estimates and Financial Reporting for the Management of the Iraq Relief and Reconstruction Fund.
Parkes-Ratanshi, R; Achan, B; Kwizera, R; Kambugu, A; Meya, D; Denning, D W
2015-10-01
The HIV epidemic in Uganda has highlighted Cryptococcus and Candida infections as important opportunistic fungal infections. However, the burden of other fungal diseases is not well described. We aimed to estimate the burden of fungal infections in Uganda. All epidemiological papers on fungal diseases in Uganda were reviewed. Where there are no Ugandan data, global or East African data were used. Recurrent vaginal candidiasis is estimated to occur in 375,540 Ugandan women per year; Candida in pregnant women affects up to 651,600 women per year. There are around 45,000 HIV-related oral and oesophageal candidiasis cases per year. There are up to 3,000 cases per year of post-TB chronic pulmonary aspergillosis. There are an estimated 40,392 people with asthma-related fungal conditions. An estimated 1,300,000 cases of tinea capitis occur in school children yearly in Uganda. There are approximately 800 HIV-positive adults with Pneumocystis jirovecii pneumonia (PJP) annually and up to 42,000 children with PJP per year. There are an estimated 4,000 cryptococcal cases annually. There are an estimated 2.5 million fungal infections per year in Uganda. Cryptococcus and PJP cause around 28,000 deaths in adults and children per year. We propose replicating the model of research around cryptococcal disease to investigate and develop management strategies for other fungal diseases in Uganda. © 2015 Blackwell Verlag GmbH.
NASA Astrophysics Data System (ADS)
Surawski, N. C.; Sullivan, A. L.; Roxburgh, S. H.; Meyer, M.; Polglase, P. J.
2016-12-01
Vegetation fires are a complex phenomenon and have a range of global impacts, including influences on climate. Even though fire is a necessary disturbance for the maintenance of some ecosystems, a range of anthropogenically deleterious consequences are associated with it, such as damage to assets and infrastructure, loss of life, as well as degradation of air quality leading to negative impacts on human health. Estimating carbon emissions from fire relies on a carbon mass balance technique which has evolved with two different interpretations in the fire emissions community. Databases reporting global fire emissions estimates use an approach based on 'consumed biomass', which is an approximation to the biogeochemically correct 'burnt carbon' approach. Disagreement between the two methods occurs because the 'consumed biomass' accounting technique assumes that all burnt carbon is volatilized and emitted. By undertaking a global review of the fraction of burnt carbon emitted to the atmosphere, we show that the 'consumed biomass' accounting approach overestimates global carbon emissions by 4.0%, or 100 teragrams, annually. The required correction is significant and represents 9% of the net global forest carbon sink estimated annually. To correctly partition burnt carbon between that emitted to the atmosphere and that remaining as a post-fire residue requires the post-burn carbon content to be estimated, which is quite often not undertaken in atmospheric emissions studies. To broaden our understanding of ecosystem carbon fluxes, it is recommended that the change in carbon content associated with burnt residues be accounted for. Apart from correctly partitioning burnt carbon between the emitted and residue pools, it enables an accounting approach which can assess the efficacy of fire management operations targeted at sequestering carbon from fire.
These findings are particularly relevant for the second commitment period for the Kyoto protocol, since improved landscape fire management can now be accounted for in the land use and forestry sector.
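The two accounting interpretations can be contrasted with a toy calculation. The input numbers are invented, chosen only so the gap matches the reported order of magnitude (about 100 Tg):

```python
# Invented totals for illustration; only the structure follows the abstract.
carbon_burnt_tg = 2500.0    # carbon passing through landscape fires
fraction_emitted = 0.96     # assumed share actually volatilized

# 'Consumed biomass' view: all burnt carbon assumed emitted
emissions_consumed_view = carbon_burnt_tg

# 'Burnt carbon' view: partition into emitted and post-fire residue
emissions_burnt_view = carbon_burnt_tg * fraction_emitted
residue_tg = carbon_burnt_tg - emissions_burnt_view

overestimate_tg = emissions_consumed_view - emissions_burnt_view  # ~100
```

The overestimate equals exactly the carbon left behind as residue, which is why accounting for post-burn carbon content corrects the bias.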
Effects of management thinning on carbon dioxide uptake by a plantation oak woodland in SE England
NASA Astrophysics Data System (ADS)
Wilkinson, Matthew; Eaton, Edward; Casella, Eric; Crow, Peter; Morison, James
2013-04-01
Eddy covariance (EC) methods are widely used to estimate net ecosystem CO2 exchanges from sub-hourly to inter-annual time scales. The majority of forest sites contributing to the global EC networks are located in large, unmanaged forest areas. However, managed and plantation forests have an important role in greenhouse gas emissions abatement, nationally and globally, as exemplified by LULUCF inventory reporting. In the lowland areas of the UK, forestry is mainly carried out in small woodlands, heterogeneous in species and structure and with regular management interventions. The aim of this study was to improve our understanding of the influence of management on forest CO2 uptake during a stand-scale thinning. CO2 fluxes have been measured using EC at the 70-80 year old, 90 ha oak-with-understorey plantation of the Straits Inclosure in the Alice Holt Research Forest since 1998. The mean annual net ecosystem productivity (NEP) from EC over 12 years was 486 g C m-2 y-1, although there has been substantial inter-annual variation (95% CI of ± 73 g C m-2 y-1). This has been partitioned into a gross primary productivity (GPP) of 2034 ± 145 g C m-2 y-1 and an ecosystem respiration rate (Reco) of 1548 ± 122 g C m-2 y-1. In 2007 approximately 50% of the woodland area within the EC flux tower footprint was selectively thinned according to normal management prescription with mechanical harvesters. High resolution aerial LiDAR surveys of the whole woodland collected pre- (2006) and post- (2010) thin were used to characterise the canopy gap fraction and tree height changes. We then used EC footprint analysis combined with LiDAR data to quantify the effects of the management thinning and subsequent recovery on the CO2 flux and partitioning. Following the management thinning there was an average reduction in peak midday summer uptake of approximately 5 μmol CO2 m-2 s-1 (20%) compared to fluxes from the un-thinned area, and a larger depression in night-time efflux.
A depression in net daily CO2 uptake was still evident in the summer of 2010, three years after the thin. The implications of such management intervention for woodland C balances are discussed.
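The flux partition quoted above is the standard identity NEP = GPP − Reco, which the reported annual means satisfy:

```python
# Annual means from the abstract, in g C per m^2 per year
gpp = 2034.0     # gross primary productivity
reco = 1548.0    # ecosystem respiration
nep = gpp - reco  # net ecosystem productivity, 486 as reported
```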
Comparing capacity value estimation techniques for photovoltaic solar power
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-09-28
In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare more robust but data- and computationally-intense reliability-based estimation techniques with simpler approximation methods. We show that if implemented properly, these techniques provide accurate approximations of reliability-based methods. Overall, methods that are based on the weighted capacity factor of the plant provide the most accurate estimate. As a result, we also examine the sensitivity of PV capacity value to the inclusion of sun-tracking systems.
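A weighted-capacity-factor approximation of the kind the paper favors can be sketched as follows; the hourly output and risk weights below are invented for illustration:

```python
# Invented 6-hour example: PV output as a fraction of nameplate,
# weighted by a relative system-risk profile (e.g. normalized
# loss-of-load probability per hour).
pv_output = [0.0, 0.3, 0.8, 0.9, 0.6, 0.1]
risk_weight = [0.0, 0.1, 0.3, 0.4, 0.2, 0.0]

capacity_value = (
    sum(o * w for o, w in zip(pv_output, risk_weight)) / sum(risk_weight)
)  # weighted capacity factor; here it falls inside the 52-93% range
```

The intuition: hours when the system is at highest risk of a shortfall count most, so a plant producing well in those hours earns a high capacity value even if its plain capacity factor is modest.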
The impact of HMO competition on private health insurance premiums, 1985-1992.
Wickizer, T M; Feldstein, P J
1995-01-01
A critical unresolved health policy question is whether competition stimulated by managed care organizations can slow the rate of growth in health care expenditures. We analyzed the competitive effects of health maintenance organizations (HMOs) on the growth in fee-for-service indemnity insurance premiums over the period 1985-1992 using premium data on 95 groups that had policies with a single, large, private insurance carrier. We used multiple regressions to estimate the effect of HMO market penetration on insurance premium growth rates. HMO penetration had a statistically significant (p < .015) negative effect on the rate of growth in indemnity insurance premiums. For an average group located in a market whose HMO penetration rate increased by 25% (e.g., from 10% to 12.5%), the real rate of growth in premiums would be approximately 5.9% instead of 7.0%. Our findings indicate that competitive strategies, relying on managed care, have significant potential to reduce health insurance premium growth rates, thereby resulting in substantial cost savings over time.
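The quoted example (growth falling from 7.0% to 5.9% for a 25% relative rise in penetration) implies a semi-elasticity that can be back-computed; the figures below only restate the abstract's numbers, and the linear extrapolation is an assumption:

```python
# Restating the abstract's example: a 25% relative rise in HMO
# penetration (e.g. 10% -> 12.5%) lowers real premium growth
# from 7.0% to 5.9%.
baseline_growth = 0.070
growth_with_more_hmo = 0.059
relative_penetration_change = 0.25

implied_effect = (growth_with_more_hmo - baseline_growth) / relative_penetration_change
# about -0.044: under a linear reading, a 100% relative rise in
# penetration would map to roughly a 4.4-point drop in growth
```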
Development of characterization protocol for mixed liquid radioactive waste classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakaria, Norasalwa, E-mail: norasalwa@nuclearmalaysia.gov.my; Wafa, Syed Asraf; Wo, Yii Mei
2015-04-29
Mixed liquid organic waste generated from health-care and research activities containing tritium, carbon-14, and other radionuclides posed specific challenges in its management. Often, these wastes become legacy waste in many nuclear facilities and are considered ‘problematic’ waste. One of the most important recommendations made by the IAEA is to perform multistage processes aiming at declassification of the waste. At this moment, approximately 3000 bottles of mixed liquid waste, with an estimated volume of 6000 litres, are currently stored at the National Radioactive Waste Management Centre, Malaysia, and some have been stored for more than 25 years. The aim of this study is to develop a characterization protocol towards reclassification of these wastes. The characterization protocol entails waste identification, waste screening and segregation, and analytical radionuclide profiling using various analytical procedures including gross alpha/gross beta, gamma spectrometry, and the LSC method. The results obtained from the characterization protocol are used to establish criteria for speedy classification of the waste.
Drops of energy: conserving urban water to reduce greenhouse gas emissions.
Zhou, Yuanchun; Zhang, Bing; Wang, Haikun; Bi, Jun
2013-10-01
Water and energy are two essential resources of modern civilization and are inherently linked. Indeed, the optimization of the water supply system would reduce energy demands and greenhouse gas emissions in the municipal water sector. This research measured the climatic cobenefit of water conservation based on a water flow analysis. The results showed that the estimated energy consumption of the total water system in Changzhou, China, reached approximately 10% of the city's total energy consumption, whereas the industrial sector was found to be more energy intensive than other sectors within the entire water system, accounting for nearly 70% of the total energy use of the water system. In addition, four sustainable water management scenarios would bring the cobenefit of reducing the total energy use of the water system by 13.9%, and 77% of the energy savings through water conservation was indirect. To promote sustainable water management and reduce greenhouse gas emissions, China would require its water price system, both for freshwater and recycled water, to be reformed.
Wastewater reclamation and recharge: A water management strategy for Albuquerque
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorder, P.J.; Brunswick, R.J.; Bockemeier, S.W.
1995-12-31
Approximately 61,000 acre-feet of the pumped water is discharged annually to the Rio Grande as treated wastewater. Albuquerque's Southside Water Reclamation Plant (SWRP) is the primary wastewater treatment facility for most of the Albuquerque area. Its current design capacity is 76 million gallons per day (mgd), which is expected to be adequate until about 2004. A master plan is currently being prepared (discussed here in the Wastewater Master Planning and the Zero Discharge Concept section) to provide guidelines for future expansions of the plant and wastewater infrastructure. Construction documents are presently being prepared to add ammonia and nitrogen removal capability to the plant, as required by its new discharge permit. The paper discusses water management strategies, indirect potable reuse for Albuquerque, water quality considerations for indirect potable reuse, treatment for potable reuse, geohydrological aspects of a recharge program, layout and estimated costs for a conceptual reclamation and recharge system, and work to be accomplished under phase 2 of the reclamation and recharge program.
Reliable Function Approximation and Estimation
2016-08-16
Final Report, AFRL-AFOSR-VA-TR-2016-0293. Rachel Ward, University of Texas at Austin, 101 East 27th Street, STE 4308, Austin, TX 78712. Air Force Research Laboratory, AF Office of Scientific Research. DISTRIBUTION A: Distribution approved for public release.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-08
... the drill pad would measure 4 by 20 feet and be approximately 5 feet deep. An estimated 1.45 acres of... the drill pad would measure 8 by 10 feet and be approximately 6 feet deep. An estimated 42.64 acres of... the proposal will be posted on the project Web site at http://www.fs.fed.us/nepa/nepa_project_exp.php...
Estimating the attack rate of pregnancy-associated listeriosis during a large outbreak.
Imanishi, Maho; Routh, Janell A; Klaber, Marigny; Gu, Weidong; Vanselow, Michelle S; Jackson, Kelly A; Sullivan-Chang, Loretta; Heinrichs, Gretchen; Jain, Neena; Albanese, Bernadette; Callaghan, William M; Mahon, Barbara E; Silk, Benjamin J
2015-01-01
In 2011, a multistate outbreak of listeriosis linked to contaminated cantaloupes raised concerns that many pregnant women might have been exposed to Listeria monocytogenes. Listeriosis during pregnancy can cause fetal death, premature delivery, and neonatal sepsis and meningitis. Little information is available to guide healthcare providers who care for asymptomatic pregnant women with suspected L. monocytogenes exposure. We tracked pregnancy-associated listeriosis cases using reportable diseases surveillance and enhanced surveillance for fetal death using vital records and inpatient fetal deaths data in Colorado. We surveyed 1,060 pregnant women about symptoms and exposures. We developed three methods to estimate how many pregnant women in Colorado ate the implicated cantaloupes, and we calculated attack rates. One laboratory-confirmed case of listeriosis was associated with pregnancy. The fetal death rate did not increase significantly compared to preoutbreak periods. Approximately 6,500-12,000 pregnant women in Colorado might have eaten the contaminated cantaloupes, an attack rate of ~1 per 10,000 exposed pregnant women. Despite many exposures, the risk of pregnancy-associated listeriosis was low. Our methods for estimating attack rates may help during future outbreaks and product recalls. Our findings offer relevant considerations for management of asymptomatic pregnant women with possible L. monocytogenes exposure.
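The attack-rate arithmetic in the abstract is simple enough to verify directly; the case count and exposure bounds below are taken from the abstract itself:

```python
# One confirmed case among an estimated 6,500-12,000 exposed pregnant
# women gives an attack rate of roughly 1 per 10,000 exposed.
cases = 1
exposed_low, exposed_high = 6_500, 12_000

rate_high = cases / exposed_low * 10_000   # per 10,000 exposed
rate_low = cases / exposed_high * 10_000

print(f"attack rate: {rate_low:.2f}-{rate_high:.2f} per 10,000 exposed")
```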
A mapping and monitoring assessment of the Philippines' mangrove forests from 1990 to 2010
Long, Jordan; Napton, Darrell; Giri, Chandra; Graesser, Jordan
2014-01-01
Information on the present condition and spatiotemporal dynamics of mangrove forests is needed for land-change studies and integrated natural resources planning and management. Although several national mangrove estimates for the Philippines exist, information is unavailable at sufficient spatial and thematic detail for change analysis. Historical and contemporary mangrove distribution maps of the Philippines for 1990 and 2010 were prepared at nominal 30-m spatial resolution using Landsat satellite data. Image classification was performed using a supervised decision tree classification approach. Additionally, decadal land-cover change maps from 1990 to 2010 were prepared to depict changes in mangrove area. Total mangrove area decreased 10.5% from 1990 to 2010. Comparison of estimates produced from this study with selected historical mangrove area estimates revealed that total mangrove area decreased by approximately half (51.8%) from 1918 to 2010. This study provides the most current and reliable data regarding the Philippines mangrove area and spatial distribution and delineates where and when mangrove change has occurred in recent decades. The results from this study are useful for developing conservation strategies, biodiversity loss mitigation efforts, and future monitoring and analysis.
A Wide Area Risk Assessment Framework for Underwater Military Munitions Response
NASA Astrophysics Data System (ADS)
Holland, K. T.; Calantoni, J.
2017-12-01
Our objective was to develop a prototype statistical framework supporting Wide Area Assessment and Remedial Investigation decisions relating to the risk of unexploded ordnance and other military munitions concentrated in underwater environments. Decision making involving underwater munitions is inherently complex due to the high degree of uncertainty in the environmental conditions that force munitions responses (burial, decay, migration, etc.) and in the associated risks to the public. The prototype framework provides a consistent approach to accurately delineating contaminated areas at underwater munitions sites through the estimation of most probable concentrations. We adapted existing deterministic models and environmental data services for use within statistical modules that allow the estimation of munition concentration given historic site information and environmental attributes. Ultimately, this risk surface can be used to evaluate the costs associated with various remediation approaches (e.g., removal, monitoring). Unfortunately, evaluation of the assessment framework was limited by the lack of end-user data services from munition site managers. Of the 450 U.S. sites identified as having potential contamination with underwater munitions, assessment of available munitions information (including historic firing or disposal records and recent ground-truth munitions samples) revealed very limited information in the databases. Relevant data types include the most probable munition types, approximate firing/disposal dates and locations, and any supportive munition survey or sampling results. However, the overall technical goal, to integrate trained statistical belief networks with detailed geophysical knowledge of sites, sensors, and the underwater environment, was demonstrated and should allow probabilistic estimates of the most likely outcomes and tradeoffs while managing the uncertainty associated with military munitions response.
NASA Astrophysics Data System (ADS)
Ray, R. K.; Syed, T. H.; Saha, Dipankar; Sarkar, B. C.; Patre, A. K.
2017-12-01
Extracted groundwater, 90% of which is used for irrigated agriculture, is central to the socio-economic development of India. A lack of regulation, or of implementation of regulations, alongside unrecorded extraction, often leads to overexploitation of large-scale common-pool resources like groundwater. Inevitably, management of groundwater extraction (draft) for irrigation is critical for the sustainability of aquifers and of society at large. However, existing assessments of groundwater draft, which are mostly available at large spatial scales, are inadequate for managing groundwater resources that are primarily exploited by stakeholders at much finer scales. This study presents an estimate, projection, and analysis of fine-scale groundwater draft in the Seonath-Kharun interfluve of central India. Using field surveys of instantaneous discharge from irrigation wells and boreholes, annual groundwater draft for irrigation in this area is estimated to be 212 × 10(6) m(3), most of which (89%) is withdrawn during the non-monsoon season. The density of wells/boreholes, and the consequent extraction of groundwater, is controlled by the existing hydrogeological conditions. Based on trends in the number of abstraction structures (1982-2011), groundwater draft for the year 2020 is projected to be approximately 307 × 10(6) m(3); hence, groundwater draft for irrigation in the study area is predicted to increase by ~44% within a span of 8 years. Central to the work presented here is the approach for estimation and prediction of groundwater draft at finer scales, which can be extended to critical groundwater zones of the country.
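The projected increase quoted in the abstract can be checked with back-of-envelope arithmetic (figures taken directly from the abstract):

```python
# Current and projected annual groundwater draft, in 10^6 m^3.
draft_now = 212.0
draft_2020 = 307.0

pct_increase = (draft_2020 - draft_now) / draft_now * 100
print(f"{pct_increase:.1f}%")  # consistent with the stated ~44%
```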
NASA Astrophysics Data System (ADS)
Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.
2017-10-01
Over recent decades, a number of fast approximate solutions of the Lippmann-Schwinger equation, more accurate than the classic Born and Rytov approximations, were proposed in the field of electromagnetic modeling. These developments can be naturally extended to acoustic and elastic fields; however, until recently, they were almost unknown in seismology. This paper presents several solutions of this kind applied to acoustic modeling for both lossy and lossless media. We evaluate the numerical merits of these methods and provide an estimate of their numerical complexity. In our numerical realization we use a matrix-free implementation of the corresponding integral operator. We study the accuracy of these approximate solutions and demonstrate that the quasi-analytical approximation is more accurate than the Born approximation. Further, we apply the quasi-analytical approximation to the solution of the inverse problem. It is demonstrated that this approach improves the estimation of the data gradient compared to the Born approximation. The developed inversion algorithm is based on conjugate-gradient-type optimization. A numerical model study demonstrates that the quasi-analytical solution significantly reduces the computation time of seismic full-waveform inversion. We also show how the quasi-analytical approximation can be extended to the case of elastic wavefields.
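For context, the standard setting these methods share can be written as follows (textbook notation, assumed here rather than taken from the paper: p is the total field, p_0 the background field, G the background Green's operator, and V the scattering potential):

```latex
% Lippmann-Schwinger integral equation for the total field:
p \;=\; p_0 + G V p
% Classic (first) Born approximation: replace p by p_0 on the
% right-hand side, i.e., keep single scattering only:
p \;\approx\; p_0 + G V p_0
```

The faster approximations discussed in the abstract refine the right-hand side beyond the single-scattering term while avoiding a full solve of the integral equation.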
A watershed-scale goals approach to assessing and funding wastewater infrastructure.
Rahm, Brian G; Vedachalam, Sridhar; Shen, Jerry; Woodbury, Peter B; Riha, Susan J
2013-11-15
Capital needs during the next twenty years for public wastewater treatment, piping, combined sewer overflow correction, and storm-water management are estimated to be approximately $300 billion for the USA. Financing these needs is a significant challenge, as Federal funding for the Clean Water Act has been reduced by 70% during the last twenty years. There is an urgent need for new approaches to assist states and other decision makers to prioritize wastewater maintenance and improvements. We present a methodology for performing an integrated quantitative watershed-scale goals assessment for sustaining wastewater infrastructure. We applied this methodology to ten watersheds of the Hudson-Mohawk basin in New York State, USA that together are home to more than 2.7 million people, cover 3.5 million hectares, and contain more than 36,000 km of streams. We assembled data on 183 POTWs treating approximately 1.5 million m(3) of wastewater per day. For each watershed, we analyzed eight metrics: Growth Capacity, Capacity Density, Soil Suitability, Violations, Tributary Length Impacted, Tributary Capital Cost, Volume Capital Cost, and Population Capital Cost. These metrics were integrated into three goals for watershed-scale management: Tributary Protection, Urban Development, and Urban-Rural Integration. Our results demonstrate that the methodology can be implemented using widely available data, although some verification of data is required. Furthermore, we demonstrate substantial differences in character, need, and the appropriateness of different management strategies among the ten watersheds. These results suggest that it is feasible to perform watershed-scale goals assessment to augment existing approaches to wastewater infrastructure analysis and planning. Copyright © 2013 Elsevier Ltd. All rights reserved.
Reproductive Declines in an Endangered Seabird: Cause for Concern or Signs of Conservation Success?
Schuetz, Justin
2011-01-01
Collection and analysis of demographic data play a critical role in monitoring and management of endangered taxa. I analyzed long-term clutch size and fledgling productivity data for California least tern (Sternula antillarum browni), a federally endangered subspecies that has recently become a candidate for down-listing. While the breeding population grew from approximately 1,253 to 7,241 pairs (578%) during the study period (1988–2009) both clutch size and fledgling productivity declined. Clutch size decreased by approximately 0.27 eggs (14%) from 1990–2004 then showed a moderate increase of 0.11 eggs from 2004–2009. Estimates of fledgling productivity showed a similar pattern of decline and moderate increase even after controlling for clutch size. Sea surface temperature anomalies, an index of El Niño-Southern Oscillation activity, did not influence clutch size but were associated with fledgling productivity through a non-linear relationship. Both clutch size and fledgling productivity increased with latitude, potentially indicating a gradient of life-history trade-offs. Random site effects explained little of the overall variation in clutch size (3%) or fledgling productivity (<1%) suggesting that site characteristics beyond those associated with latitude had little bearing on either measure of reproduction. Despite intensive monitoring and management, causes of variation in key demographic parameters remain poorly understood. Long-term declines in clutch size and fledgling productivity may reflect: 1) reduced food availability, 2) increased density-dependent competition, and/or 3) age-dependent reproduction coupled with a shifting population age-structure. Until the mechanisms shaping demographic parameters and population change are better understood, the success of past management and the probability of ongoing recovery will remain difficult to characterize. PMID:21559287
Integration Of 3D Geographic Information System (GIS) For Effective Waste Management Practice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rood, G.J.; Hecox, G.R.
2006-07-01
Soil remediation in response to the presence of residual radioactivity resulting from past MED/AEC activities is currently in progress under the Formerly Utilized Sites Remedial Action Program near the St. Louis, MO airport. During GY05, approximately 92,000 cubic meters (120,000 cubic yards) of radioactive soil were excavated, packaged, and transported via rail for disposal at U.S. Ecology or Envirocare of Utah, LLC. To facilitate the management of excavation/transportation/disposal activities, a 3D GIS was developed for the site and used to estimate the in-situ radionuclide activities, activities in excavation block areas, and shipping activities, using a sum-of-ratios (SOR) method for combining the various radionuclides into applicable transportation and disposal SOR values. The 3D GIS was developed starting with the SOR values for the approximately 900 samples from 90 borings. These values were processed into a three-dimensional (3D) point grid using kriging, with nominal grid spacing of 1.5 by 1.5 meters horizontal by 0.3 meters vertical. The final grid, clipped to the area and soil interval above the planned base of excavation, consisted of 210,000 individual points. Standard GIS volumetric and spatial-join procedures were used to calculate, for each point in the final grid, the volume of soil represented by the point, the base of excavation, depth below ground surface, elevation, surface elevation, and SOR values. To create the maps needed for management, the point-grid results were spatially joined to each excavation area in 0.9 meter (3 foot) depth intervals, and the average SOR and total volumes were calculated. The final maps were color-coded for easy identification of areas above the specific transportation or disposal criteria.
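The sum-of-ratios screening value is a simple computation; a minimal sketch follows. The nuclide limits and sample concentrations below are placeholders for illustration, not the site's regulatory values:

```python
# Sum-of-ratios (SOR): divide each nuclide's activity concentration
# by its applicable limit and sum the ratios. A value >= 1 flags soil
# exceeding the combined criterion. All numbers are illustrative.
limits = {"U-238": 50.0, "Th-230": 14.0, "Ra-226": 5.0}   # pCi/g (placeholder)
sample = {"U-238": 20.0, "Th-230": 7.0, "Ra-226": 1.0}    # pCi/g (placeholder)

sor = sum(sample[n] / limits[n] for n in limits)
print(round(sor, 2))  # 0.4 + 0.5 + 0.2 = 1.1, so this soil is flagged
```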
Estimates of the absolute error and a scheme for an approximate solution to scheduling problems
NASA Astrophysics Data System (ADS)
Lazarev, A. A.
2009-02-01
An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness, for one or many machines, and of minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at the minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered; an instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
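A much-simplified sketch of the core idea, under an assumption not taken from the paper: on a single machine with no release dates, if two instances differ only in due dates, then for any fixed job order the maximum lateness changes by at most the largest due-date difference. So a schedule optimal for a nearby "easy" instance has bounded error on the original. All job data below are illustrative:

```python
def lmax(order, p, d):
    """Maximum lateness of jobs processed in `order` on one machine."""
    t, worst = 0, float("-inf")
    for j in order:
        t += p[j]                  # completion time accumulates
        worst = max(worst, t - d[j])
    return worst

p = [3, 2, 4]            # processing times (illustrative)
d_orig = [5, 6, 10]      # original due dates
d_near = [5, 7, 9]       # a nearby, easier-to-solve instance

# Distance between the two instances: max due-date perturbation.
dist = max(abs(a - b) for a, b in zip(d_orig, d_near))

order = [0, 1, 2]        # any fixed schedule
gap = abs(lmax(order, p, d_orig) - lmax(order, p, d_near))
print(gap <= dist)       # the metric bounds the objective error
```

The bound holds because each job's lateness shifts by at most its own due-date change, so the maximum shifts by at most the largest change.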
Honda, Michitaka
2014-04-01
Several improvements were implemented in the edge method of presampled modulation transfer function (MTF) measurement. First, an estimation technique for the edge angle was developed by applying a principal components analysis algorithm; the error in the estimation was statistically confirmed to be less than 0.01 even in the presence of quantum noise. Second, the geometrical edge slope was approximated by a rational number, making it possible to obtain an oversampled edge spread function (ESF) with equal sampling intervals. Third, the final MTF was estimated as the average of multiple MTFs calculated for local areas; this averaging operation eliminates the errors caused by the rational-number approximation. Computer-simulated images were used to evaluate the accuracy of the method. The relative error between the estimated MTF and the theoretical MTF at the Nyquist frequency was less than 0.5% when the MTF was expressed as a sinc function. For MTFs representing an indirect detector and a phase-contrast detector, good agreement was also observed for the estimated MTFs. The high accuracy of the MTF estimation was confirmed even for edge angles of around 10 degrees, which suggests the potential for simplification of the measurement conditions. The proposed method could be incorporated into an automated measurement technique using a software application.
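The edge-angle estimation step can be sketched with principal components analysis: the leading eigenvector of the covariance of the detected edge-point coordinates gives the edge direction. This is a generic PCA line-fit on synthetic points, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic edge points along a ~10-degree line, with detection noise.
t = np.linspace(0, 50, 200)
angle_true = np.deg2rad(10.0)
pts = np.column_stack([t * np.cos(angle_true),
                       t * np.sin(angle_true)])
pts += rng.normal(0, 0.05, pts.shape)

# PCA: principal eigenvector of the point covariance = edge direction.
centered = pts - pts.mean(axis=0)
cov = centered.T @ centered / (len(pts) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
direction = eigvecs[:, np.argmax(eigvals)]
if direction[0] < 0:
    direction = -direction          # resolve the sign ambiguity
angle_est = np.arctan2(direction[1], direction[0])

print(abs(angle_est - angle_true))  # small estimation error (radians)
```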
Cortés, Camilo; de los Reyes-Guzmán, Ana; Scorza, Davide; Bertelsen, Álvaro; Carrasco, Eduardo; Gil-Agudo, Ángel; Ruiz-Salguero, Oscar; Flórez, Julián
2016-01-01
Robot-Assisted Rehabilitation (RAR) is relevant for treating patients affected by nervous system injuries (e.g., stroke and spinal cord injury). The accurate estimation of the joint angles of the patient limbs in RAR is critical to assess the patient improvement. The economical prevalent method to estimate the patient posture in Exoskeleton-based RAR is to approximate the limb joint angles with the ones of the Exoskeleton. This approximation is rough since their kinematic structures differ. Motion capture systems (MOCAPs) can improve the estimations, at the expenses of a considerable overload of the therapy setup. Alternatively, the Extended Inverse Kinematics Posture Estimation (EIKPE) computational method models the limb and Exoskeleton as differing parallel kinematic chains. EIKPE has been tested with single DOF movements of the wrist and elbow joints. This paper presents the assessment of EIKPE with elbow-shoulder compound movements (i.e., object prehension). Ground-truth for estimation assessment is obtained from an optical MOCAP (not intended for the treatment stage). The assessment shows EIKPE rendering a good numerical approximation of the actual posture during the compound movement execution, especially for the shoulder joint angles. This work opens the horizon for clinical studies with patient groups, Exoskeleton models, and movements types. PMID:27403420
Kubota, Yoshihisa; Takahashi, Hiroyuki; Watanabe, Yoshito; Fuma, Shoichi; Kawaguchi, Isao; Aoki, Masanari; Kubota, Masahide; Furuhata, Yoshiaki; Shigemura, Yusaku; Yamada, Fumio; Ishikawa, Takahiro; Obara, Satoshi; Yoshida, Satoshi
2015-04-01
The dose rates of radiation absorbed by wild rodents inhabiting a site severely contaminated by the Fukushima Dai-ichi Nuclear Power Plant accident were estimated. The large Japanese field mouse (Apodemus speciosus), also called the wood mouse, was the major rodent species captured in the sampling area, although other species of rodents, such as small field mice (Apodemus argenteus) and Japanese grass voles (Microtus montebelli), were also collected. The external exposure of rodents calculated from the activity concentrations of radiocesium ((134)Cs and (137)Cs) in litter and soil samples using the ERICA (Environmental Risk from Ionizing Contaminants: Assessment and Management) tool under the assumption that radionuclides existed as the infinite plane isotropic source was almost the same as those measured directly with glass dosimeters embedded in rodent abdomens. Our findings suggest that the ERICA tool is useful for estimating external dose rates to small animals inhabiting forest floors; however, the estimated dose rates showed large standard deviations. This could be an indication of the inhomogeneous distribution of radionuclides in the sampled litter and soil. There was a 50-fold difference between minimum and maximum whole-body activity concentrations measured in rodents at the time of capture. The radionuclides retained in rodents after capture decreased exponentially over time. Regression equations indicated that the biological half-life of radiocesium after capture was 3.31 d. At the time of capture, the lowest activity concentration was measured in the lung and was approximately half of the highest concentration measured in the mixture of muscle and bone. The average internal absorbed dose rate was markedly smaller than the average external dose rate (<10% of the total absorbed dose rate). 
The average total absorbed dose rate to wild rodents inhabiting the sampling area was estimated to be approximately 52 μGy h(-1) (1.2 mGy d(-1)), even 3 years after the accident. This dose rate exceeds the 0.1-1 mGy d(-1) derived consideration reference level for the Reference Rat proposed by the International Commission on Radiological Protection (ICRP). Copyright © 2015 Elsevier Ltd. All rights reserved.
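The reported biological half-life of 3.31 d implies a first-order decay constant via the standard relation k = ln(2)/t½; a quick sketch (the one-week retention figure below is derived from that relation, not stated in the abstract):

```python
import math

half_life_d = 3.31                 # biological half-life from the abstract
k = math.log(2) / half_life_d      # first-order decay constant, d^-1

# Illustrative use: fraction of radiocesium retained after one week.
remaining_7d = math.exp(-k * 7)
print(round(remaining_7d, 3))
```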
Huber, Thomas P; Shortell, Stephen M; Rodriguez, Hector P
2017-08-01
Examine the extent to which physician organization participation in an accountable care organization (ACO) and electronic health record (EHR) functionality are associated with greater adoption of care transition management (CTM) processes. A total of 1,398 physician organizations from the third National Study of Physician Organization survey (NSPO3), a nationally representative sample of medical practices in the United States (January 2012-May 2013). We used data from the third National Study of Physician Organization survey (NSPO3) to assess medical practice characteristics, including CTM processes, ACO participation, EHR functionality, practice type, organization size, ownership, public reporting, and pay-for-performance participation. Multivariate linear regression models estimated the extent to which ACO participation and EHR functionality were associated with greater CTM capabilities, controlling for practice size, ownership, public reporting, and pay-for-performance participation. Approximately half (52.4 percent) of medical practices had a formal program for managing care transitions in place. In adjusted analyses, ACO participation (p < .001) and EHR functionality (p < .001) were independently associated with greater use of CTM processes among medical practices. The growth of ACOs and similar provider risk-bearing arrangements across the country may improve the management of care transitions by physician organizations. © Health Research and Educational Trust.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tinmaz, Esra; Demir, Ibrahim
Over the past decades, uncontrolled population growth and rapid urbanization and industrialization have resulted in environmental problems in Corlu Town, Turkey. One of the most important problems is solid waste, due to inadequate management practices. Nowadays, increasing public awareness of the environment compels local authorities to define and adopt new solutions for waste management. This paper presents a general overview of current solid waste management practices in Corlu Town and principles of the recommended municipal solid waste (MSW) management system. In Corlu, 170 tonnes of municipal solid waste are generated each day, or 1.150 kg per capita per day. Approximately one-half of the municipal solid waste generated is organic material, and 30% of the MSW consists of recyclable materials. The recommended system aims to maximize recycling and minimize landfilling of municipal solid waste, and consists of separation at source, collection, sorting, recycling, composting, and sanitary landfilling. This study also analyzed the recommended system with respect to feasibility and economics. To evaluate whether the suggested system is cost effective, the operating cost of the recommended system and the market prices of recyclable materials were compared; the results show that the recommended system will reduce the required landfill volume by up to 27% compared to the present situation. The profit of the recommended system is estimated to be about 80 million US dollars.
Handwritten document age classification based on handwriting styles
NASA Astrophysics Data System (ADS)
Ramaiah, Chetan; Kumar, Gaurav; Govindaraju, Venu
2012-01-01
Handwriting styles change constantly over time. We approach the novel problem of estimating the approximate age of historical handwritten documents from their handwriting styles. Such a system would have many applications in handwritten-document processing engines, where specialized processing techniques can be applied based on the estimated age of the document. We propose to learn a distribution over styles across centuries using topic models and to apply a classifier over the learned weights in order to estimate the approximate age of the documents. We present a comparison of different distance metrics, such as Euclidean distance and Hellinger distance, within this application.
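The two distance metrics compared in the abstract can be sketched directly on topic-weight vectors; the two documents' weights below are made-up illustrations:

```python
import math

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def hellinger(p, q):
    # Hellinger distance between discrete probability distributions,
    # bounded in [0, 1].
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(p, q))) / math.sqrt(2)

doc_a = [0.70, 0.20, 0.10]   # topic weights, document A (illustrative)
doc_b = [0.10, 0.30, 0.60]   # topic weights, document B (illustrative)

print(euclidean(doc_a, doc_b), hellinger(doc_a, doc_b))
```

Hellinger distance is often preferred for topic-model weights because it operates on probability distributions directly, whereas Euclidean distance treats the weights as unconstrained coordinates.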
Soil erosion and significance for carbon fluxes in a mountainous Mediterranean-climate watershed.
Smith, S V; Bullock, S H; Hinojosa-Corona, A; Franco-Vizcaíno, E; Escoto-Rodríguez, M; Kretzschmar, T G; Farfán, L M; Salazar-Ceseña, J M
2007-07-01
In topographically complex terrains, downslope movement of soil organic carbon (OC) can influence local carbon balance. The primary purpose of the present analysis is to compare the magnitude of OC displacement by erosion with ecosystem metabolism in such a complex terrain. Does erosion matter in this ecosystem carbon balance? We have used the Revised Universal Soil Loss Equation (RUSLE) erosion model to estimate lateral fluxes of OC in a watershed in northwestern Mexico. The watershed (4900 km2) has an average slope of 10 degrees +/- 9 degrees (mean +/- SD); 45% is >10 degrees, and 3% is >30 degrees. Land cover is primarily shrublands (69%) and agricultural lands (22%). Estimated bulk soil erosion averages 1350 Mg x km(-2) x yr(-1). We estimate that there is insignificant erosion on slopes < 2 degrees and that 20% of the area can be considered depositional. Estimated OC erosion rates are 10 Mg x km(-2) x yr(-1) for areas steeper than 2 degrees. Over the entire area, erosion is approximately 50% higher on shrublands than on agricultural lands, but within slope classes, erosion rates are more rapid on agricultural areas. For the whole system, estimated OC erosion is approximately 2% of net primary production (NPP), increasing in high-slope areas to approximately 3% of NPP. Deposition of eroded OC in low-slope areas is approximately 10% of low-slope NPP. Soil OC movement from erosional slopes to alluvial fans alters the mosaic of OC metabolism and storage across the landscape.
Schweiger, Regev; Fisher, Eyal; Rahmani, Elior; Shenhav, Liat; Rosset, Saharon; Halperin, Eran
2018-06-22
Estimation of heritability is an important task in genetics. The use of linear mixed models (LMMs) to determine narrow-sense single-nucleotide polymorphism (SNP) heritability and related quantities has received much recent attention, due to its ability to account for variants with small effect sizes. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. The common way to report the uncertainty in REML estimation uses standard errors (SEs), which rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals (CIs). In addition, for larger data sets (e.g., tens of thousands of individuals), the construction of SEs itself may require considerable time, as it requires expensive matrix inversions and multiplications. Here, we present FIESTA (Fast confidence IntErvals using STochastic Approximation), a method for constructing accurate CIs. FIESTA is based on parametric bootstrap sampling and therefore avoids unjustified assumptions on the distribution of the heritability estimator. FIESTA uses stochastic approximation techniques, which accelerate the construction of CIs by several orders of magnitude compared with previous approaches as well as with the analytical approximation used by SEs. FIESTA builds accurate CIs rapidly, for example requiring only several seconds for data sets of tens of thousands of individuals, making FIESTA a very fast solution to the problem of building accurate CIs for heritability for all data set sizes.
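The parametric-bootstrap idea behind FIESTA can be sketched in a toy setting (estimating a normal mean, not FIESTA's LMM machinery): resample data from the fitted model, re-estimate the parameter on each resample, and take percentiles of the re-estimates as the CI:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data and fitted model (normal with unknown mean; illustrative).
data = rng.normal(loc=0.5, scale=1.0, size=200)
est, sigma = data.mean(), data.std(ddof=1)

# Parametric bootstrap: simulate from the fitted model, re-estimate.
boots = [rng.normal(est, sigma, size=data.size).mean()
         for _ in range(2000)]
lo, hi = np.percentile(boots, [2.5, 97.5])

print(lo < est < hi)   # 95% CI brackets the point estimate
```

FIESTA's contribution is making this kind of resampling feasible for heritability under LMMs, where each re-estimate would otherwise be expensive, by using stochastic approximation instead of naive refitting.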
Schuck, Sabrina; Emmerson, Natasha; Ziv, Hadar; Collins, Penelope; Arastoo, Sara; Warschauer, Mark; Crinella, Francis; Lakes, Kimberley
2016-01-01
Children with Attention Deficit/Hyperactivity Disorder (ADHD) receive approximately 80% of instruction in the general education classroom, where individualized behavioral management strategies may be difficult for teachers to consistently deliver. Mobile device apps provide promising platforms to manage behavior. This pilot study evaluated the utility of a web-based application (iSelfControl) designed to support classroom behavior management. iSelfControl prompted students every 'Center' (30 minutes) to self-evaluate using a universal token-economy classroom management system focused on compliance, productivity, and positive relationships. Simultaneously, the teacher evaluated each student on a separate iPad. Using multilevel modeling, we examined 13 days of data gathered from implementation with 5th grade students (N = 12) at a school for children with ADHD and related executive function difficulties. First, an unconditional growth model evaluated the overall amount of change in aggregated scores over time as well as the degree of systematic variation in scores within and across teacher-student dyads. Second, separate intercepts and slopes were estimated for teacher and student to estimate the degree of congruency between trajectories. Finally, differences between teacher and student scores were tested at each time-point in separate models to examine unique 'Center' effects. Fifty-one percent of the total variance in scores was attributed to differences between dyads. Trajectories of student and teacher scores remained relatively stable across seven time-points each day and did not statistically differ from each other. On any given day, students tended to evaluate their behaviors more positively (entered higher scores for themselves) compared to corresponding teacher scores. In summary, iSelfControl provides a platform for self and teacher evaluation that is an important adjunct to conventional classroom management strategies.
The application captured teacher/student discrepancies and significant variations across the day. Future research with a larger, clinically diagnosed sample in multiple classrooms is needed to assess generalizability to a wider variety of classroom settings.
An economic analysis of poliovirus risk management policy options for 2013-2052.
Duintjer Tebbens, Radboud J; Pallansch, Mark A; Cochi, Stephen L; Wassilak, Steven G F; Thompson, Kimberly M
2015-09-24
The Global Polio Eradication Initiative plans for coordinated cessation of oral poliovirus vaccine (OPV) after interrupting all wild poliovirus (WPV) transmission, but many questions remain related to long-term poliovirus risk management policies. We used an integrated dynamic poliovirus transmission and stochastic risk model to simulate possible futures and estimate the health and economic outcomes of maintaining the 2013 status quo of continued OPV use in most developing countries compared with OPV cessation policies with various assumptions about global inactivated poliovirus vaccine (IPV) adoption. Continued OPV use after global WPV eradication leads to continued high costs and/or high numbers of cases. Global OPV cessation comes with a high probability of at least one outbreak, which aggressive outbreak response can successfully control in most instances. A low but non-zero probability exists of uncontrolled outbreaks following a poliovirus reintroduction long after OPV cessation in a population in which IPV alone cannot prevent poliovirus transmission. We estimate global incremental net benefits during 2013-2052 of approximately $16 billion (US$2013) for OPV cessation with at least one IPV routine immunization dose in all countries until 2024 compared to continued OPV use, although significant uncertainty remains associated with the frequency of exportations between populations and the implementation of long-term risk management policies. Global OPV cessation offers the possibility of large future health and economic benefits compared to continued OPV use. Long-term poliovirus risk management interventions matter (e.g., IPV use duration, outbreak response, containment, continued surveillance, stockpile size and contents, vaccine production site requirements, potential antiviral drugs, and potential safer vaccines) and require careful consideration.
Risk management activities can help to ensure a low risk of uncontrolled outbreaks and preserve or further increase the positive net benefits of OPV cessation. Important uncertainties will require more research, including characterizing immunodeficient long-term poliovirus excretor risks, containment risks, and the kinetics of outbreaks and response in an unprecedented world without widespread live poliovirus exposure.
Control of Complex Dynamic Systems by Neural Networks
NASA Technical Reports Server (NTRS)
Spall, James C.; Cristion, John A.
1993-01-01
This paper considers the use of neural networks (NNs) in controlling a nonlinear, stochastic system with unknown process equations. The NN is used to model the resulting unknown control law. The approach here is based on using the output error of the system to train the NN controller without the need to construct a separate model (NN or other type) for the unknown process dynamics. To implement such a direct adaptive control approach, it is required that connection weights in the NN be estimated while the system is being controlled. As a result of the feedback of the unknown process dynamics, however, it is not possible to determine the gradient of the loss function for use in standard (back-propagation-type) weight estimation algorithms. Therefore, this paper considers the use of a new stochastic approximation algorithm for this weight estimation, which is based on a 'simultaneous perturbation' gradient approximation that only requires the system output error. It is shown that this algorithm can greatly enhance efficiency relative to more standard stochastic approximation algorithms based on finite-difference gradient approximations.
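A minimal sketch of the simultaneous perturbation gradient approximation (SPSA) follows; the quadratic toy loss stands in for the system output error, and the gain constants are illustrative defaults, not values from the paper:

```python
import random

# Minimal SPSA sketch: two loss evaluations per iteration estimate the
# full gradient, regardless of parameter dimension.

def spsa_minimize(loss, theta, iters=500, a=0.1, c=0.1, seed=0):
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602          # commonly used gain decay exponents
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]  # Rademacher perturbation
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = loss(plus) - loss(minus)
        theta = [t - ak * diff / (2.0 * ck * d) for t, d in zip(theta, delta)]
    return theta

# Toy quadratic loss with minimum at (1, -2); in the paper's setting the
# measured output error of the controlled system plays this role.
loss = lambda th: (th[0] - 1.0) ** 2 + (th[1] + 2.0) ** 2
theta = spsa_minimize(loss, [0.0, 0.0])
```

Note that a finite-difference scheme would need two loss evaluations per parameter; SPSA needs two in total, which is the efficiency gain the abstract refers to.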
Estimating Mudpuppy (Necturus maculosus) abundance in the Lamoille River, Vermont, USA
Chellman, Isaac C.; Parrish, Donna; Donovan, Therese M.
2017-01-01
The Mudpuppy (Necturus maculosus) is classified as a Species of Greatest Conservation Need by the state of Vermont. There is concern regarding the status of populations in the Lake Champlain basin because of habitat alteration and potential effects of 3-trifluoromethyl-4-nitrophenol (TFM), a chemical used to control Sea Lamprey (Petromyzon marinus). The purpose of our research was to assess Mudpuppy capture methods and abundance in the Lamoille River, Vermont, USA. We sampled Mudpuppies under a mark-recapture framework, using modified, baited minnow traps set during two winter-spring periods. We marked each Mudpuppy with a passive integrated transponder (PIT) tag and released individuals after collecting morphological measurements. We collected 80 individuals during 2,581 trap days in 2008–2009 (year 1), and 81 individuals during 3,072 trap days in 2009–2010 (year 2). We estimated abundance from spring trapping periods in 2009 and 2010, during which capture rates were sufficient for analysis. Capture probability was low (< 0.04), but highest following precipitation events in spring, during periods of higher river flow, when water temperatures were approximately 3 to 6 °C. During October 2009, management agencies treated the Lamoille River with TFM. Surveyors recovered more than 500 dead Mudpuppies during the post-treatment assessment. Overall, Mudpuppy captures did not change between sampling periods; however, we captured fewer females during year 2 compared to year 1, and the sex ratio changed from 0.79:1 (M:F) during year 1 to 3:1 (M:F) during year 2. Our data may help wildlife managers assess population status of Mudpuppies in conjunction with fisheries management techniques.
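The mark-recapture logic behind such abundance estimates can be illustrated with Chapman's bias-corrected Lincoln-Petersen estimator; the counts below are hypothetical, and the study itself fits a richer model with time-varying capture probability:

```python
# Chapman's bias-corrected Lincoln-Petersen estimator, a minimal sketch of
# mark-recapture abundance estimation. The counts here are illustrative,
# not data from the Lamoille River study.

def chapman_estimate(marked_first, caught_second, recaptured):
    """Abundance estimate N-hat = (M + 1)(C + 1) / (R + 1) - 1."""
    M, C, R = marked_first, caught_second, recaptured
    return (M + 1) * (C + 1) / (R + 1) - 1

# Hypothetical sampling occasions: 80 marked, 81 caught later, 10 recaptures.
N_hat = chapman_estimate(marked_first=80, caught_second=81, recaptured=10)
```

When capture probability is low, as reported here (< 0.04), recapture counts are small and such estimates carry wide uncertainty, which is why the authors restricted estimation to periods with sufficient capture rates.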
Recycling of glass: accounting of greenhouse gases and global warming contributions.
Larsen, Anna W; Merrild, Hanna; Christensen, Thomas H
2009-11-01
Greenhouse gas (GHG) emissions related to recycling of glass waste were assessed from a waste management perspective. Focus was on the material recovery facility (MRF) where the initial sorting of glass waste takes place. The MRF delivers products like cullet and whole bottles to other industries. Two possible uses of reprocessed glass waste were considered: (i) remelting of cullet added to glass production; and (ii) re-use of whole bottles. The GHG emission accounting included indirect upstream emissions (provision of energy, fuels and auxiliaries), direct activities at the MRF and bottle-wash facility (combustion of fuels) as well as indirect downstream activities in terms of using the recovered glass waste in other industries and, thereby, avoiding emissions from conventional production. The GHG accounting was presented as aggregated global warming factors (GWFs) for the direct and indirect upstream and downstream processes, respectively. The range of GWFs was estimated at 0–70 kg CO₂-eq. tonne⁻¹ of glass waste for the upstream activities and the direct emissions from the waste management system. The GWF for the downstream effect showed significant variation between the two cases: it was estimated at approximately −500 kg CO₂-eq. tonne⁻¹ of glass waste for the remelting technology and at −1500 to −600 kg CO₂-eq. tonne⁻¹ of glass waste for bottle re-use. Including the downstream process, large savings of GHG emissions can be attributed to the waste management system. The results showed that, in GHG emission accounting, attention should be drawn to thorough analysis of energy sources, especially electricity, and the downstream savings caused by material substitution.
Regional variability of nitrate fluxes in the unsaturated zone and groundwater, Wisconsin, USA
Green, Christopher T.; Liao, Lixia; Nolan, Bernard T.; Juckem, Paul F.; Shope, Christopher L.; Tesoriero, Anthony J.; Jurgens, Bryant
2018-01-01
Process-based modeling of regional NO3− fluxes to groundwater is critical for understanding and managing water quality, but the complexity of NO3− reactive transport processes makes implementation a challenge. This study introduces a regional vertical flux method (VFM) for efficient estimation of reactive transport of NO3− in the vadose zone and groundwater. The regional VFM was applied to 443 well samples in central-eastern Wisconsin. Chemical measurements included O2, NO3−, N2 from denitrification, and atmospheric tracers of groundwater age including carbon-14, chlorofluorocarbons, tritium, and tritiogenic helium. VFM results were consistent with observed chemistry, and calibrated parameters were in line with estimates from previous studies. Results indicated that (1) unsaturated zone travel times were a substantial portion of the transit time to wells and streams, (2) since 1945 fractions of applied N leached to groundwater have increased for manure-N, possibly due to increased injection of liquid manure, and decreased for fertilizer-N, and (3) under current practices and conditions, approximately 60% of the shallow aquifer will eventually be affected by downward migration of NO3−, with denitrification protecting the remaining 40%. Recharge variability strongly affected the unsaturated zone lag times and the eventual depth of the NO3− front. Principal components regression demonstrated that VFM parameters and predictions were significantly correlated with hydrogeochemical landscape features. The diverse and sometimes conflicting aspects of N management (e.g., limiting N volatilization versus limiting N losses to groundwater) warrant continued development of large-scale holistic strategies to manage water quality and quantity.
Estimation for bilinear stochastic systems
NASA Technical Reports Server (NTRS)
Willsky, A. S.; Marcus, S. I.
1974-01-01
Three techniques for the solution of bilinear estimation problems are presented. First, finite-dimensional optimal nonlinear estimators are presented for certain bilinear systems evolving on solvable and nilpotent Lie groups. Then the use of harmonic analysis for estimation problems evolving on spheres and other compact manifolds is investigated. Finally, an approximate estimation technique utilizing cumulants is discussed.
Research on solid waste management system: to improve existing situation in Corlu Town of Turkey.
Tinmaz, Esra; Demir, Ibrahim
2006-01-01
Over the past decades, uncontrolled population growth and rapid urbanization and industrialization have resulted in environmental problems in Corlu Town, Turkey. One of the most important problems is solid waste due to inadequate management practices. Nowadays, increasing public awareness of the environment compels local authorities to define and adopt new solutions for waste management. This paper presents a general overview of current solid waste management practices in Corlu Town and principles of the recommended municipal solid waste (MSW) management system. In Corlu, 170 tonnes of municipal solid waste are generated each day, or 1.15 kg per capita per day. Approximately one-half of the municipal solid waste generated is organic material, and 30% of the MSW consists of recyclable materials. The recommended system deals with maximizing recycling and minimizing landfilling of municipal solid waste, and consists of separation at source, collection, sorting, recycling, composting and sanitary landfilling. This study also analyzed the recommended system with respect to feasibility and economics. To evaluate whether the suggested system is cost effective or not, the operating cost of the recommended system and market prices of recyclable materials were compared, and the results show that the recommended system will reduce required landfill volume by up to 27% compared to the present situation. The profit of the recommended system is estimated to be about 80 million US dollars.
Moreira, Patricia V L; Baraldi, Larissa Galastri; Moubarac, Jean-Claude; Monteiro, Carlos Augusto; Newton, Alex; Capewell, Simon; O'Flaherty, Martin
2015-01-01
The global burden of non-communicable diseases partly reflects growing exposure to ultra-processed food products (UPPs). These heavily marketed UPPs are cheap and convenient for consumers and profitable for manufacturers, but contain high levels of salt, fat and sugars. This study aimed to explore the potential mortality reduction associated with future policies for substantially reducing ultra-processed food intake in the UK. We obtained data from the UK Living Cost and Food Survey and from the National Diet and Nutrition Survey. Using the NOVA food typology, all food items were categorized into three groups according to the extent of food processing: Group 1 describes unprocessed/minimally processed foods. Group 2 comprises processed culinary ingredients. Group 3 includes all processed or ultra-processed products. Using UK nutrient conversion tables, we estimated the energy and nutrient profile of each food group. We then used the IMPACT Food Policy model to estimate reductions in cardiovascular mortality from improved nutrient intakes reflecting shifts from processed or ultra-processed to unprocessed/minimally processed foods. We then conducted probabilistic sensitivity analyses using Monte Carlo simulation. Approximately 175,000 cardiovascular disease (CVD) deaths might be expected in 2030 if current mortality patterns persist. However, halving the intake of Group 3 (processed) foods could result in approximately 22,055 fewer CVD-related deaths in 2030 (minimum estimate 10,705, maximum estimate 34,625). An ideal scenario in which salt and fat intakes are reduced to the low levels observed in Groups 1 and 2 could lead to approximately 14,235 (minimum estimate 6,680, maximum estimate 22,525) fewer coronary deaths and approximately 7,820 (minimum estimate 4,025, maximum estimate 12,100) fewer stroke deaths, together comprising an almost 13% reduction in mortality. This study shows a substantial potential for reducing the cardiovascular disease burden through a healthier food system.
It highlights the crucial importance of implementing healthier UK food policies.
NASA Technical Reports Server (NTRS)
Fukumori, Ichiro
1995-01-01
Sea surface height variability measured by TOPEX is analyzed in the tropical Pacific Ocean by way of assimilation into a wind-driven, reduced-gravity, shallow water model using an approximate Kalman filter and smoother. The analysis results in an optimal fit of the dynamic model to the observations, providing a dynamically consistent interpolation of sea level and estimation of the circulation. Nearly 80% of the expected signal variance is accounted for by the model within 20 deg of the equator, and estimation uncertainty is substantially reduced by the voluminous observations. Notable features resolved by the analysis include seasonal changes associated with the North Equatorial Countercurrent and equatorial Kelvin and Rossby waves. Significant discrepancies are also found between the estimate and TOPEX measurements, especially near the eastern boundary. Improvements in the estimate made by the assimilation are validated by comparisons with independent tide gauge and current meter observations. The employed filter and smoother are based on approximately computed estimation error covariance matrices, utilizing a spatial transformation and an asymptotic approximation. The analysis demonstrates the practical utility of a quasi-optimal filter and smoother.
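The predict/update cycle of a Kalman filter, the mechanism behind the assimilation, can be sketched in one dimension; the state model and noise variances below are toy values, not the reduced-gravity shallow water model or approximated covariances of the study:

```python
# One-dimensional Kalman filter sketch illustrating the predict/update
# cycle used in data assimilation. All model and noise parameters here
# are illustrative toy values.

def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    # Predict: propagate state estimate x and its error variance P.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend the prediction with observation z via the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                          # vague prior
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:    # synthetic sea-level anomalies
    x, P = kalman_step(x, P, z)
# The error variance P shrinks as observations are assimilated, which is
# the "estimation uncertainty is substantially reduced" effect above.
```

In the study, the state is a full model field and P a large covariance matrix, which is why the authors approximate it with a spatial transformation and an asymptotic limit.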
NASA Astrophysics Data System (ADS)
Álvarez-Gómez, J. A.; Aniel-Quiroga, Í.; Gutiérrez-Gutiérrez, O. Q.; Larreynaga, J.; González, M.; Castro, M.; Gavidia, F.; Aguirre-Ayerbe, I.; González-Riancho, P.; Carreño, E.
2013-05-01
El Salvador is the smallest and most densely populated country in Central America; its coast has a length of approximately 320 km, 29 municipalities and more than 700 000 inhabitants. In El Salvador there were 15 recorded tsunamis between 1859 and 2012, 3 of them causing damage and hundreds of victims. Hazard assessment is commonly based on numerical propagation models for earthquake-generated tsunamis and can be approached through both probabilistic and deterministic methods. A deterministic approximation has been applied in this study as it provides essential information for coastal planning and management. The objective of the research was twofold: on the one hand the characterization of the threat over the entire coast of El Salvador, and on the other the computation of flooding maps for the three main localities of the Salvadorian coast. For the latter we developed high-resolution flooding models. For the former, due to the extension of the coastal area, we computed maximum elevation maps, and from the elevation in the near shore we computed an estimation of the run-up and the flooded area using empirical relations. We have considered local sources located in the Middle America Trench, characterized seismotectonically, and distant sources in the rest of the Pacific Basin, using historical and recent earthquakes and tsunamis. We used a hybrid finite-differences-finite-volumes numerical model in this work, based on the linear and non-linear shallow water equations, to simulate a total of 24 earthquake-generated tsunami scenarios. On the western Salvadorian coast, run-up values higher than 5 m are common, while in the eastern area, approximately from La Libertad to the Gulf of Fonseca, the run-up values are lower. The areas most exposed to flooding are the lowlands in the Lempa River delta and the Barra de Santiago Western Plains.
The results of the empirical approximation used for the whole country are similar to the results obtained with the high-resolution numerical modelling, making it a good and fast approximation for obtaining preliminary tsunami hazard estimations. In Acajutla and La Libertad, both important tourism centres being actively developed, flooding depths between 2 and 4 m are frequent, accompanied by high and very high person-instability hazard. Inside the Gulf of Fonseca the impact of the waves is almost negligible.
NASA Astrophysics Data System (ADS)
Álvarez-Gómez, J. A.; Aniel-Quiroga, Í.; Gutiérrez-Gutiérrez, O. Q.; Larreynaga, J.; González, M.; Castro, M.; Gavidia, F.; Aguirre-Ayerbe, I.; González-Riancho, P.; Carreño, E.
2013-11-01
El Salvador is the smallest and most densely populated country in Central America; its coast has an approximate length of 320 km, 29 municipalities and more than 700 000 inhabitants. In El Salvador there were 15 recorded tsunamis between 1859 and 2012, 3 of them causing damages and resulting in hundreds of victims. Hazard assessment is commonly based on propagation numerical models for earthquake-generated tsunamis and can be approached through both probabilistic and deterministic methods. A deterministic approximation has been applied in this study as it provides essential information for coastal planning and management. The objective of the research was twofold: on the one hand the characterization of the threat over the entire coast of El Salvador, and on the other the computation of flooding maps for the three main localities of the Salvadorian coast. For the latter we developed high-resolution flooding models. For the former, due to the extension of the coastal area, we computed maximum elevation maps, and from the elevation in the near shore we computed an estimation of the run-up and the flooded area using empirical relations. We have considered local sources located in the Middle America Trench, characterized seismotectonically, and distant sources in the rest of the Pacific Basin, using historical and recent earthquakes and tsunamis. We used a hybrid finite differences-finite volumes numerical model in this work, based on the linear and non-linear shallow water equations, to simulate a total of 24 earthquake-generated tsunami scenarios. Our results show that at the western Salvadorian coast, run-up values higher than 5 m are common, while in the eastern area, approximately from La Libertad to the Gulf of Fonseca, the run-up values are lower. The areas most exposed to flooding are the lowlands in the Lempa River delta and the Barra de Santiago Western Plains.
The results of the empirical approximation used for the whole country are similar to the results obtained with the high-resolution numerical modelling, making it a good and fast approximation for obtaining preliminary tsunami hazard estimations. In Acajutla and La Libertad, both important tourism centres being actively developed, flooding depths between 2 and 4 m are frequent, accompanied by high and very high person-instability hazard. Inside the Gulf of Fonseca the impact of the waves is almost negligible.
Estimation of Tile Drainage Contribution to Streamflow and Nutrient Export Loads
NASA Astrophysics Data System (ADS)
Schilling, K. E.; Arenas Amado, A.; Jones, C. S.; Weber, L. J.
2015-12-01
Subsurface drainage is a very common practice in the agricultural U.S. Midwest. It is typically installed in poorly drained soils in order to enhance crop yields. The presence of tile drains creates a route for agrichemicals to travel and therefore negatively impacts stream water quality. This study estimated, through end-member analyses, the contributions of tile drainage, groundwater, and surface runoff to streamflow at the watershed scale based on continuously monitored data. Special attention was devoted to quantifying the impact of tile drainage on watershed streamflow and nutrient export loads. Data analyzed include streamflow, rainfall, soil moisture, shallow groundwater levels, in-stream nitrate+nitrite concentrations and specific conductance. Data were collected at a HUC12 watershed located in Northeast Iowa, USA. Approximately 60% of the total watershed area is devoted to agricultural activities; forest and grassland are the other two predominant land uses. Results show that approximately 20% of total annual streamflow comes from tile drainage, and during rainfall events the tile drainage contribution can reach 30%. Furthermore, for most of the analyzed rainfall events, groundwater responded faster and in a more dramatic fashion than tile drainage. The State of Iowa is currently carrying out a plan to reduce nutrients in Iowa waters and the Gulf of Mexico (Iowa Nutrient Reduction Strategy). The outcome of this investigation has the potential to assist in Best Management Practice (BMP) scenario selection and therefore help the state achieve water quality goals.
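The mass-balance algebra behind end-member analysis can be sketched for the two-component case with a single conservative tracer; the study performs a three-component separation (tile drainage, groundwater, surface runoff), and the conductance values below are hypothetical:

```python
# Two-component end-member mixing sketch using specific conductance as a
# conservative tracer. All numbers are hypothetical illustrations of the
# mass-balance algebra, not data from the Iowa watershed.

def two_component_separation(q_stream, c_stream, c_end1, c_end2):
    """Split streamflow between two end members from tracer mass balance:
    c_stream = f * c_end1 + (1 - f) * c_end2, solved for fraction f."""
    f = (c_stream - c_end2) / (c_end1 - c_end2)
    return f * q_stream, (1.0 - f) * q_stream

# Hypothetical conductances (uS/cm): tile drainage 650, surface runoff 150,
# observed stream 550 at a discharge of 10 m3/s.
q_tile, q_runoff = two_component_separation(10.0, 550.0, 650.0, 150.0)
```

Separating three end members, as in the study, requires a second tracer (here nitrate+nitrite alongside conductance) so that the system of mixing equations remains solvable.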
A proactive transfer policy for critical patient flow management.
González, Jaime; Ferrer, Juan-Carlos; Cataldo, Alejandro; Rojas, Luis
2018-02-17
Hospital emergency departments are often overcrowded, resulting in long wait times and a public perception of poor attention. Delays in transferring patients needing further treatment increase emergency department congestion, have negative impacts on their health and may increase their mortality rates. A model built around a Markov decision process is proposed to improve the efficiency of patient flows between the emergency department and other hospital units. With each day divided into time periods, the formulation estimates bed demand for the next period as the basis for determining a proactive rather than reactive transfer decision policy. Due to the high dimensionality of the optimization problem involved, an approximate dynamic programming approach is used to derive an approximation of the optimal decision policy, which indicates that a certain number of beds should be kept free in the different units as a function of the next period's demand estimate. Testing the model on two instances of different sizes demonstrates that the optimal number of patient transfers between units changes when the emergency patient arrival rate for transfer to other units changes at a single unit, but remains stable if the change is proportionally the same for all units. In a simulation using real data for a hospital in Chile, significant improvements are achieved by the model in key emergency department performance indicators such as patient wait times (a reduction of more than 50%), patient capacity (a 21% increase) and queue abandonment (from 7% down to less than 1%).
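The Markov-decision-process formulation can be illustrated with exact value iteration on a toy bed-management model; all states, costs, and the deterministic demand below are hypothetical, and the paper resorts to approximate dynamic programming precisely because exact iteration like this does not scale:

```python
# Exact value iteration on a toy MDP: choose how many beds to proactively
# free given next-period demand. Everything here is a hypothetical
# illustration of the decision structure, not the paper's model.

def value_iteration(n_states, actions, cost, transition, gamma=0.95, tol=1e-6):
    V = [0.0] * n_states
    while True:
        V_new = []
        for s in range(n_states):
            V_new.append(min(
                cost(s, a) + gamma * sum(p * V[s2]
                                         for s2, p in transition(s, a))
                for a in actions(s)))
        if max(abs(x - y) for x, y in zip(V_new, V)) < tol:
            return V_new
        V = V_new

# Toy model: state = free beds (0-3); action = beds to proactively free;
# a deterministic demand of 2 beds arrives each period.
def actions(s): return range(0, 4 - s)
def cost(s, a): return 1.0 * a + 2.0 * max(0, 2 - (s + a))  # transfers + unmet demand
def transition(s, a):
    s2 = max(0, min(3, s + a - 2))
    return [(s2, 1.0)]

V = value_iteration(4, actions, cost, transition)
# States with more free beds carry lower long-run cost, mirroring the
# paper's finding that beds should be kept free ahead of forecast demand.
```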
Genetic causes of intellectual disability in a birth cohort: A population‐based study
Riegel, Mariluce; Segal, Sandra L.; Félix, Têmis M.; Barros, Aluísio J. D.; Santos, Iná S.; Matijasevich, Alicia; Giugliani, Roberto; Black, Maureen
2015-01-01
Intellectual disability affects approximately 1–3% of the population and can be caused by genetic and environmental factors. Although many studies have investigated the etiology of intellectual disability in different populations, few studies have been performed in middle‐income countries. The present study estimated the prevalence of genetic causes related to intellectual disability in a cohort of children from a city in south Brazil who were followed from birth. Children who showed poor performance in development and intelligence tests at the ages of 2 and 4 were included. Out of 4,231 liveborns enrolled in the cohort, 214 children fulfilled the inclusion criteria. A diagnosis was established in approximately 90% of the children evaluated. Genetic causes were determined in 31 of the children and 19 cases remained unexplained even after extensive investigation. The overall prevalence of intellectual disability in this cohort due to genetic causes was 0.82%. Because this study was nested in a cohort, there were a large number of variables related to early childhood and the likelihood of information bias was minimized by collecting information with a short recall time. This study was not influenced by selection bias, allowing identification of intellectual disability and estimation of the prevalence of genetic causes in this population, thereby increasing the possibility of providing appropriate management and/or genetic counseling. © 2015 The Authors. American Journal of Medical Genetics Part A Published by Wiley Periodicals, Inc. PMID:25728503
Habitat Capacity for Cougar Recolonization in the Upper Great Lakes Region
O'Neil, Shawn T.; Rahn, Kasey C.; Bump, Joseph K.
2014-01-01
Background: Recent findings indicate that cougars (Puma concolor) are expanding their range into the midwestern United States. Confirmed reports of cougar in Michigan, Minnesota, and Wisconsin have increased dramatically in frequency during the last five years, leading to speculation that cougars may re-establish in the Upper Great Lakes (UGL) region, USA. Recent work showed favorable cougar habitat in northeastern Minnesota, suggesting that the northern forested regions of Michigan and Wisconsin may have similar potential. Recolonization of cougars in the UGL states would have important ecological, social, and political impacts that will require effective management. Methodology/Principal Findings: Using Geographic Information Systems (GIS), we extended a cougar habitat model to Michigan and Wisconsin and incorporated primary prey densities to estimate the capacity of the region to support cougars. Results suggest that approximately 39% (>58,000 km²) of the study area could support cougars, and that there is potential for a population of approximately 500 or more animals. An exploratory validation of this habitat model revealed strong association with 58 verified cougar locations occurring in the study area between 2008 and 2013. Conclusions/Significance: Spatially explicit information derived from this study could potentially lead to estimation of a viable population, delineation of possible cougar-human conflict areas, and the targeting of site locations for current monitoring. Understanding predator-prey interactions, interspecific competition, and human-wildlife relationships is becoming increasingly critical as top carnivores continue to recolonize the UGL region. PMID:25389761
Progress toward measles control - African region, 2001-2008.
2009-09-25
In 2001, the countries of the World Health Organization (WHO) African Region (AFR) became part of a global initiative with a goal of reducing the number of measles deaths by 50% by 2005, compared with 1999. Recommended strategies for measles mortality reduction included 1) increasing routine coverage for the first dose of measles-containing vaccine (MCV1) for all children, 2) providing a second opportunity for measles vaccination through supplemental immunization activities (SIAs), 3) improving measles case management, and 4) establishing case-based surveillance with laboratory confirmation of all suspected measles cases. Before introduction of MCV throughout AFR, approximately 1 million measles cases had been reported each year in the early 1980s. After strengthening measles-control activities, annual reported cases declined to an estimated 300,000–580,000 during the 1990s. This report summarizes the progress made during 2001–2008 toward improving measles control in AFR. During 2001–2008, estimated MCV1 coverage increased from 57% to 73%, SIAs vaccinated approximately 398 million children, and reported measles cases decreased by 93%, from 492,116 in 2001 to 32,278 in 2008. By 2005, global measles deaths had decreased by 60%, and the AFR goal had been achieved; AFR adopted a new goal to reduce deaths by 90%, compared with 2000, and that goal was achieved in 2006. However, inaccuracies in reported vaccination coverage exist, surveillance is suboptimal, and measles outbreaks continue to occur in AFR countries. Further progress in measles control will require full implementation of recommended strategies, including validation of vaccination coverage.
NASA Astrophysics Data System (ADS)
Crawford, David L.; McKenna, D.
2006-12-01
A good estimate of sky brightness and its variations throughout the night, the months, and even the years is an essential bit of knowledge both for good observing and especially as a tool in efforts to minimize sky brightness through local action. Hence a stable and accurate monitor can be a valuable and necessary tool. We have developed such a monitor, with the financial help of Vatican Observatory and Walker Management. The device is now undergoing its Beta test in preparation for production. It is simple, accurate, well calibrated, and automatic, sending its data directly to IDA over the internet via e-mail. Approximately 50 such monitors will be ready soon for deployment worldwide, including most major observatories. Those interested in having one should enquire of IDA about details.
Tune-Up Your Fan Systems for Improved Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fans are used extensively in commercial buildings and represent approximately 6% of total energy consumed by commercial buildings. The U.S. Department of Energy (DOE) estimates that fans in commercial buildings consume 158 billion kWh of electricity annually. Maintaining fan systems in proper condition provides energy savings and ensures a comfortable and healthy environment. While many fan systems have significant energy savings opportunities available through improvements in fan selection, system design, and operational practices, it is not always apparent when a fan system needs maintenance or what opportunities are available for improvements. This resource is designed for facility managers and maintenance staff to provide easy-to-implement actionable guidance on fan efficiency measures for existing ducted air systems.
Estes, Jason G.; Othman, Nurzhafarina; Ismail, Sulaiman; Ancrenaz, Marc; Goossens, Benoit; Ambu, Laurentius N.; Estes, Anna B.; Palmiotto, Peter A.
2012-01-01
The approximately 300 (298, 95% CI: 152–581) elephants in the Lower Kinabatangan Managed Elephant Range in Sabah, Malaysian Borneo are a priority sub-population for Borneo's total elephant population (2,040, 95% CI: 1,184–3,652). Habitat loss and human-elephant conflict are recognized as the major threats to Bornean elephant survival. In the Kinabatangan region, human settlements and agricultural development for oil palm drive an intense fragmentation process. Electric fences guard against elephant crop raiding but also remove access to suitable habitat patches. We conducted expert opinion-based least-cost analyses to model the quantity and configuration of available suitable elephant habitat in the Lower Kinabatangan, and called this the Elephant Habitat Linkage. At 184 km2, our estimate of available habitat is 54% smaller than the estimate used in the State's Elephant Action Plan for the Lower Kinabatangan Managed Elephant Range (400 km2). During high flood levels, available habitat is reduced to only 61 km2. As a consequence, short-term elephant densities are likely to surge during floods to 4.83 km−2 (95% CI: 2.46–9.41), among the highest estimated for forest-dwelling elephants in Asia or Africa. During severe floods, the configuration of remaining elephant habitat and the surge in elephant density may put two villages at elevated risk of human-elephant conflict. Lower Kinabatangan elephants are vulnerable to the natural disturbance regime of the river due to their limited dispersal options. Twenty bottlenecks less than one km wide throughout the Elephant Habitat Linkage have the potential to further reduce access to suitable habitat.
Rebuilding landscape connectivity to isolated habitat patches and to the North Kinabatangan Managed Elephant Range (less than 35 km inland) are conservation priorities that would increase the quantity of available habitat, and may work as a mechanism to allow population release, lower elephant density, reduce human-elephant conflict, and enable genetic mixing. PMID:23071499
NASA Astrophysics Data System (ADS)
Kapangaziwiri, E.; Mwenge Kahinda, J.; Dzikiti, S.; Ramoelo, A.; Cho, M.; Mathieu, R.; Naidoo, M.; Seetal, A.; Pienaar, H.
2018-06-01
South Africa is a water-stressed country which has, over the years, strived to adopt a rational, just and equitable way to manage this limited resource. The National Water Act (Act No. 36 of 1998) (NWA) provides the legal framework to achieve this objective. Since 2003, the government has embarked on a national process to validate (confirm the quantum of) and verify (establish the lawfulness of) water uses that exceed domestic requirements. The objective of the process is to determine how much water is allocated for (1) existing lawful use in accordance with specific requirements of the NWA, and (2) current water uses. The process identified users with or without registered use entitlements; determined whether claims for registered uses were correct, under-estimated, over-estimated or false; and confirmed the lawfulness of each water use in accordance with water legislation that pre-dated the NWA. The process included identifying land and non-land based water uses (industrial, mining and bulk potable water supplies, irrigation, crop types and impoundments) using remote sensing (RS) techniques for both a qualifying period (defined as the two years before the enactment of the NWA) and the current period. Using this as a basis, volumetric crop irrigation requirements were then estimated using the South African Procedure for estimating irrigation WATer requirements (SAPWAT), while the Gush curves were used to quantify Stream Flow Reduction Activities (SFRAs) for commercially afforested areas. The boundaries of farm reservoirs were delineated from RS and the volumes calculated using a regression approach. Estimates of the irrigation water requirements, SFRAs and reservoir volumes formed the basis for interaction between the Department of Water and Sanitation (DWS) and water users to confirm their uses, and subsequently to update the DWS Water Authorisation and Registration Management System (WARMS), a database of water users.
While WARMS initially indicated a total of approximately 16,000 registered users in the KwaZulu-Natal Province, following the RS analysis up to 6,000 potential additional water users have been identified, mostly currently unregistered, who are expected to be registered in the updated database. Despite certain methodological challenges and limitations, the process forms a critical basis for all other aspects of water management, and informs macro- and micro-water resource planning, water allocation reform, as well as water use compliance, monitoring and enforcement.
Concerns about a variance approach to X-ray diffractometric estimation of microfibril angle in wood
Steve P. Verrill; David E. Kretschmann; Victoria L. Herian; Michael C. Wiemann; Harry A. Alden
2011-01-01
In this article, we raise three technical concerns about Evans' 1999 Appita Journal "variance approach" to estimating microfibril angle (MFA). The first concern is associated with the approximation of the variance of an X-ray intensity half-profile by a function of the MFA and the natural variability of the MFA. The second concern is associated with the approximation...
NASA Astrophysics Data System (ADS)
Tikhonov, D. A.; Sobolev, E. V.
2011-04-01
A method of integral equations of the theory of liquids in the reference interaction site model (RISM) approximation is used to estimate the Gibbs energy averaged over equilibrium trajectories computed by molecular mechanics. Peptide oxytocin is selected as the object of interest. The Gibbs energy is calculated using all chemical potential formulas introduced in the RISM approach for the excess chemical potential of solvation and is compared with estimates by the generalized Born model. Some formulas are shown to give the wrong sign of Gibbs energy changes when peptide passes from the gas phase into water environment; the other formulas give overestimated Gibbs energy changes with the right sign. Note that allowance for the repulsive correction in the approximate analytical expressions for the Gibbs energy derived by thermodynamic perturbation theory is not a remedy.
NASA Flexible Screen Propellant Management Device (PMD) Demonstration With Cryogenic Liquid
NASA Technical Reports Server (NTRS)
Wollen, Mark; Bakke, Victor; Baker, James
2012-01-01
While evaluating various options for liquid methane and liquid oxygen propellant management for lunar missions, Innovative Engineering Solutions (IES) conceived the flexible screen device as a potential simple alternative to conventional propellant management devices (PMD). An apparatus was designed and fabricated to test flexible screen devices in liquid nitrogen. After resolution of a number of issues (discussed in detail in the paper), a fine mesh screen (325 by 2300 wires per inch) spring return assembly was successfully tested. No significant degradation in the screen bubble point was observed either due to the screen stretching process or due to cyclic fatigue during testing. An estimated 30 to 50 deflection cycles, and approximately 3 to 5 thermal cycles, were performed on the final screen specimen, prior to and between formally recorded testing. These cycles included some "abusive" pressure cycling, where gas or liquid was driven through the screen at rates that produced differential pressures across the screen of several times the bubble point pressure. No obvious performance degradation or other changes were observed over the duration of testing. In summary, it is felt by the author that these simple tests validated the feasibility of the flexible screen PMD concept for use with cryogenic propellants.
Flow analysis of metals in a municipal solid waste management system.
Jung, C H; Matsuto, T; Tanaka, N
2006-01-01
This study aimed to identify the metal flow in a municipal solid waste (MSW) management system. Outputs of a resource recovery facility, refuse derived fuel (RDF) production facility, carbonization facility, plastics liquefaction facility, composting facility, and bio-gasification facility were analyzed for metal content and leaching concentration. In terms of metal content, bulky and incombustible waste had the highest values. Char from a carbonization facility, which treats household waste, had a higher metal content than MSW incinerator bottom ash. A leaching test revealed that Cd and Pb in char and Pb in RDF production residue exceeded the Japanese regulatory criteria for landfilling, so special attention should be paid to final disposal of these substances. By multiplying metal content and the generation rate of outputs, the metal content of input waste to each facility was estimated. For most metals except Cr, the total contribution ratio of paper/textile/plastics, bulky waste, and incombustible waste was over 80%. Approximately 30% of Cr originated from plastic packaging. Finally, several MSW management scenarios showed that most metals are transferred to landfills and the leaching potential of metals to the environment is quite small.
Wetlands in a changing climate: Science, policy and management
Moomaw, William R.; Chmura, G.L.; Davies, Gillian T.; Finlayson, Max; Middleton, Beth A.; Natali, Sue M.; Perry, James; Roulet, Nigel; Sutton-Grier, Ariana
2018-01-01
Part 1 of this review synthesizes recent research on status and climate vulnerability of freshwater and saltwater wetlands, and their contribution to addressing climate change (carbon cycle, adaptation, resilience). Peatlands and vegetated coastal wetlands are among the most carbon rich sinks on the planet sequestering approximately as much carbon as do global forest ecosystems. Estimates of the consequences of rising temperature on current wetland carbon storage and future carbon sequestration potential are summarized. We also demonstrate the need to prevent drying of wetlands and thawing of permafrost by disturbances and rising temperatures to protect wetland carbon stores and climate adaptation/resiliency ecosystem services. Preventing further wetland loss is found to be important in limiting future emissions to meet climate goals, but is seldom considered. In Part 2, the paper explores the policy and management realm from international to national, subnational and local levels to identify strategies and policies reflecting an integrated understanding of both wetland and climate change science. Specific recommendations are made to capture synergies between wetlands and carbon cycle management, adaptation and resiliency to further enable researchers, policy makers and practitioners to protect wetland carbon and climate adaptation/resiliency ecosystem services.
Flow analysis of metals in a municipal solid waste management system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, C.H.; Matsuto, T.; Tanaka, N.
2006-07-01
This study aimed to identify the metal flow in a municipal solid waste (MSW) management system. Outputs of a resource recovery facility, refuse derived fuel (RDF) production facility, carbonization facility, plastics liquefaction facility, composting facility, and bio-gasification facility were analyzed for metal content and leaching concentration. In terms of metal content, bulky and incombustible waste had the highest values. Char from a carbonization facility, which treats household waste, had a higher metal content than MSW incinerator bottom ash. A leaching test revealed that Cd and Pb in char and Pb in RDF production residue exceeded the Japanese regulatory criteria for landfilling, so special attention should be paid to final disposal of these substances. By multiplying metal content and the generation rate of outputs, the metal content of input waste to each facility was estimated. For most metals except Cr, the total contribution ratio of paper/textile/plastics, bulky waste, and incombustible waste was over 80%. Approximately 30% of Cr originated from plastic packaging. Finally, several MSW management scenarios showed that most metals are transferred to landfills and the leaching potential of metals to the environment is quite small.
Arnold, Pamela; Scheurer, Danielle; Dake, Andrew W; Hedgpeth, Angela; Hutto, Amy; Colquitt, Caroline; Hermayer, Kathie L
2016-04-01
The Joint Commission Advanced Inpatient Diabetes Certification Program is founded on the American Diabetes Association's Clinical Practice Recommendations and is linked to the Joint Commission Standards. Diabetes currently affects 29.1 million people in the USA and another 86 million Americans are estimated to have pre-diabetes. On a daily basis at the Medical University of South Carolina (MUSC) Medical Center, there are approximately 130-150 inpatients with a diagnosis of diabetes. The program encompasses all service lines at MUSC. Some important features of the program include: a program champion or champion team, written blood glucose monitoring protocols, staff education in diabetes management, medical record identification of diabetes, a plan coordinating insulin and meal delivery, plans for treatment of hypoglycemia and hyperglycemia, data collection for incidence of hypoglycemia, and patient education on self-management of diabetes. The major clinical components to develop, implement, and evaluate an inpatient diabetes care program are: I. Program management, II. Delivering or facilitating clinical care, III. Supporting self-management, IV. Clinical information management, and V. Performance measurement. The standards receive guidance from a Disease-Specific Care Certification Advisory Committee, and the Standards and Survey Procedures Committee of the Joint Commission Board of Commissioners. The Joint Commission-ADA Advanced Inpatient Diabetes Certification represents a clinical program of excellence, improved processes of care, means to enhance contract negotiations with providers, ability to create an environment of teamwork, and heightened communication within the organization. Published by Elsevier Inc.
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. 
We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
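As a point of reference for the sparse precision-matrix special case discussed above, the graphical-lasso baseline the thesis benchmarks against can be run in a few lines. This is a hedged sketch using scikit-learn's `GraphicalLasso` on synthetic chain-graph data; the chain structure, sample size, and `alpha` value are illustrative assumptions, not the thesis's own estimator or experimental setup.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)

# Illustrative chain graph: tridiagonal true precision matrix.
p = 5
Theta = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
Sigma = np.linalg.inv(Theta)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=2000)

# Graphical lasso: L1-penalized Gaussian maximum likelihood for a
# sparse precision (inverse covariance) matrix.
model = GraphicalLasso(alpha=0.05).fit(X)
est = model.precision_
print(np.round(est, 2))
```

Off-chain entries of `est` are shrunk toward zero, recovering the graph's edge set; the thesis's score-matching estimator targets the same structure without computing a normalizing constant.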
Framework to trade optimality for local processing in large-scale wavefront reconstruction problems.
Haber, Aleksandar; Verhaegen, Michel
2016-11-15
We show that the minimum variance wavefront estimation problems permit localized approximate solutions, in the sense that the wavefront value at a point (excluding unobservable modes, such as the piston mode) can be approximated by a linear combination of the wavefront slope measurements in the point's neighborhood. This enables us to efficiently compute a wavefront estimate by performing a single sparse matrix-vector multiplication. Moreover, our results open the possibility for the development of wavefront estimators that can be easily implemented in a decentralized/distributed manner, and in which the estimate optimality can be easily traded for computational efficiency. We numerically validate our approach on Hudgin wavefront sensor geometries, and the results can be easily generalized to Fried geometries.
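The core computational claim above, that a localized reconstructor reduces wavefront estimation to a single sparse matrix-vector product, can be sketched as follows. The reconstructor here is a random sparse placeholder with hypothetical dimensions; in practice each row would hold the locally fitted coefficients for the slope measurements in one wavefront point's neighborhood.

```python
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(0)
n_phase, n_slopes = 64, 128          # hypothetical grid sizes

# Placeholder for a precomputed localized reconstructor: each row keeps
# only coefficients of slope measurements near one wavefront point.
R = sparse_random(n_phase, n_slopes, density=0.05,
                  random_state=0, format="csr")

s = rng.standard_normal(n_slopes)    # stacked x/y slope measurements
phi_hat = R @ s                      # the single sparse matvec

print(phi_hat.shape, R.nnz)
```

Because the product costs O(nnz(R)) rather than O(n_phase × n_slopes), sparser rows trade estimate optimality for computational efficiency, which is exactly the trade the abstract describes.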
An Evaluation of Three Approximate Item Response Theory Models for Equating Test Scores.
ERIC Educational Resources Information Center
Marco, Gary L.; And Others
Three item response models were evaluated for estimating item parameters and equating test scores. The models, which approximated the traditional three-parameter model, included: (1) the Rasch one-parameter model, operationalized in the BICAL computer program; (2) an approximate three-parameter logistic model based on coarse group data divided…
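For readers unfamiliar with the models being compared, the Rasch one-parameter model is the three-parameter logistic model with discrimination fixed at 1 and guessing at 0. A minimal sketch (the item parameters are hypothetical):

```python
import math

def logistic_3pl(theta, a, b, c):
    """Three-parameter logistic: discrimination a, difficulty b, guessing c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def rasch_prob(theta, b):
    """Rasch (one-parameter) model: the 3PL with a = 1, c = 0."""
    return logistic_3pl(theta, 1.0, b, 0.0)

# Under the Rasch model, an examinee whose ability equals the item
# difficulty answers correctly with probability 0.5.
print(rasch_prob(0.0, 0.0))           # 0.5
print(logistic_3pl(0.0, 1.0, 0.0, 0.2))  # 0.6: guessing floor lifts it
```

Equating studies like this one ask how much the simpler model's fitted probabilities and parameter estimates diverge from the full 3PL's across the ability range.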
ERIC Educational Resources Information Center
Chen, Ru San; Dunlap, William P.
1994-01-01
The present simulation study confirms that the corrected epsilon approximate test of B. Lecoutre yields a less biased estimation of population epsilon and reduces Type I error rates when compared to the epsilon approximate test of H. Huynh and L. S. Feldt. (SLD)
Lin, Chen-Yen; Halabi, Susan
2017-01-01
We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox’s proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer. PMID:29326496
Lin, Chen-Yen; Halabi, Susan
2017-01-01
We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox's proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer.
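The perturbation idea itself is generic: refit the penalized minimand under i.i.d. mean-one random case weights and read confidence regions off the empirical distribution of the perturbed estimates. The sketch below illustrates it on a plain lasso linear regression with scikit-learn; the paper's setting is the adaptive lasso for the Cox model, which this deliberately simplifies, and the data, penalty, and replication count are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p = 300, 5
X = rng.standard_normal((n, p))
beta = np.array([1.5, 0.0, -1.0, 0.0, 0.5])
y = X @ beta + rng.standard_normal(n)

coefs = []
for _ in range(200):
    w = rng.exponential(1.0, size=n)      # mean-one perturbation weights
    fit = Lasso(alpha=0.05).fit(X, y, sample_weight=w)
    coefs.append(fit.coef_)
coefs = np.array(coefs)

# Percentile confidence regions from the perturbed refits.
ci = np.percentile(coefs, [2.5, 97.5], axis=0)
print(np.round(ci, 2))
```

Unlike the sandwich formula, this approach yields interval estimates for the zero coefficients as well, since each perturbed refit may or may not select them.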
Challenges to assessing connectivity between massive populations of the Australian plague locust
Chapuis, Marie-Pierre; Popple, Julie-Anne M.; Berthier, Karine; Simpson, Stephen J.; Deveson, Edward; Spurgin, Peter; Steinbauer, Martin J.; Sword, Gregory A.
2011-01-01
Linking demographic and genetic dispersal measures is of fundamental importance for movement ecology and evolution. However, such integration can be difficult, particularly for highly fecund species that are often the target of management decisions guided by an understanding of population movement. Here, we present an example of how the influence of large population sizes can preclude genetic approaches from assessing demographic population structuring, even at a continental scale. The Australian plague locust, Chortoicetes terminifera, is a significant pest, with populations on the eastern and western sides of Australia having been monitored and managed independently to date. We used microsatellites to assess genetic variation in 12 C. terminifera population samples separated by up to 3000 km. Traditional summary statistics indicated high levels of genetic diversity and a surprising lack of population structure across the entire range. An approximate Bayesian computation treatment indicated that levels of genetic diversity in C. terminifera corresponded to effective population sizes conservatively composed of tens of thousands to several million individuals. We used these estimates and computer simulations to estimate the minimum rate of dispersal, m, that could account for the observed range-wide genetic homogeneity. The rate of dispersal between both sides of the Australian continent could be several orders of magnitude lower than that typically considered as required for the demographic connectivity of populations. PMID:21389030
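The inference chain above (genetic diversity to effective population size via approximate Bayesian computation) can be illustrated with a toy rejection-ABC sampler. Everything here is a stand-in: a single heterozygosity summary statistic under the infinite-alleles model, an assumed mutation rate, a flat prior, and an invented observed value, far simpler than the paper's microsatellite analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

mu = 1e-4                 # assumed per-locus mutation rate (hypothetical)
observed_H = 0.66         # hypothetical observed heterozygosity

# Flat prior on effective population size Ne.
ne_prior = rng.uniform(1e3, 1e5, size=100_000)

# Simulate the summary statistic: H = theta/(1+theta), theta = 4*Ne*mu,
# under the infinite-alleles model, plus sampling noise.
theta = 4 * ne_prior * mu
sims = theta / (1 + theta) + rng.normal(0, 0.01, size=ne_prior.size)

# Rejection step: keep prior draws whose simulated statistic is close
# to the observed one; the accepted draws approximate the posterior.
accepted = ne_prior[np.abs(sims - observed_H) < 0.01]
print(len(accepted), np.percentile(accepted, [2.5, 50, 97.5]).round(0))
```

The accepted sample plays the role of the paper's posterior on effective population size, from which dispersal rates consistent with the observed genetic homogeneity can then be simulated.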
Frazey, S.L.; Wilzbach, M.A.
2007-01-01
Productivities of resident salmonids and upland and riparian forests in 22 small watersheds of coastal northern California were estimated and compared to determine whether: 1) upland site productivity predicted riparian site productivity; 2) either upland or riparian site productivity predicted salmonid productivity; and 3) other parameters explained more of the variance in salmonid productivity. Upland and riparian site productivities were estimated using Site Index values for redwood (Sequoia sempervirens) and red alder (Alnus rubra), respectively. Salmonid productivity was indexed by back-calculated length at age 1 of the largest individuals sampled and by total biomass. Upland and riparian site indices were correlated, but neither factor contributed to the best approximating models of salmonid productivity. Total salmonid biomass was best described by a positive relationship with drainage area. Length of dominant fish was best described by a positive relationship with percentage of hardwoods within riparian areas, which may result from nutrient and/or litter subsidies provided by red alder. The inability of forest productivity to predict salmonid productivity may reflect insufficient variation in independent variables, limitations of the indices, and the operation of other factors affecting salmonid production. The lack of an apparent relationship between upland conifer and salmonid productivity suggests that management of land for timber productivity and component streams for salmonid production in these sites will require separate, albeit integrated, management strategies.
Clinical economics review: medical management of inflammatory bowel disease.
Ward, F M; Bodger, K; Daly, M J; Heatley, R V
1999-01-01
Inflammatory bowel diseases, although they are uncommon and rarely fatal, typically present during the period of economically productive adult life. Patients may require extensive therapeutic intervention as a result of the chronic, relapsing nature of the diseases. Their medical management includes oral and topical 5-amino salicylic acid derivatives and corticosteroids, as well as antibiotics and immunosuppressive therapies. Assessing the cost-effectiveness of rival treatments requires valid, reliable global assessments of outcome which consider quality of life, as well as the usual clinical end-points. Macro-economic studies of the overall impact of inflammatory bowel disease on health care systems have so far been largely confined to North America, where the total annual US costs, both direct and indirect, incurred by the estimated 380,000–480,000 sufferers have been put at around US$2 billion. Drugs were estimated to account for only 10% of total costs, whereas surgery and hospitalization account for approximately half. Studies from Europe suggest that the proportion of patients with Crohn's disease and ulcerative colitis who are capable of full time work is 75% and 90%, respectively. However, whilst only a minority of inflammatory bowel disease patients suffer chronic ill health and their life expectancy is normal, obtaining life assurance may be problematic, suggesting a misconception that inflammatory bowel disease frequently results in a major impact on an individual's economic productivity.
77 FR 22615 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-16
.... The Commission estimates that approximately 209 broker-dealers will spend an average of 87 hours annually to comply with this rule. Thus, the total compliance burden is approximately 18,200 burden-hours...
77 FR 29394 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-17
.... The Commission estimates that approximately 209 broker-dealers will spend an average of 87 hours annually to comply with the rule. Thus, the total compliance burden is approximately 18,183 burden-hours...
78 FR 30346 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-22
... conversations with fund representatives, we estimate that the reporting burden is approximately 620 hours per... approximately 103,580 hours. In addition to the burden hours, based on conversations with fund representatives...
Approximation by the iterates of Bernstein operator
NASA Astrophysics Data System (ADS)
Zapryanova, Teodora; Tachev, Gancho
2012-11-01
We study the degree of pointwise approximation of the iterates of the Bernstein operator to their limiting operator. We obtain quantitative estimates related to the conjecture of Gonska and Raşa from 2006.
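The limiting operator of the Bernstein iterates is known to be the linear interpolant between f(0) and f(1), which is easy to check numerically: sampled at the nodes k/n, the operator B_n becomes a matrix, and iterating it is a matrix power. A small sketch, with n, the test function, and the iteration count chosen purely for illustration:

```python
import numpy as np
from math import comb

n = 10
nodes = np.array([k / n for k in range(n + 1)])

# A[j, k] = C(n,k) * x_j^k * (1-x_j)^(n-k): the Bernstein operator B_n
# restricted to function values at the nodes x_j = j/n.
A = np.array([[comb(n, k) * x**k * (1 - x) ** (n - k) for k in range(n + 1)]
              for x in nodes])

f = nodes**2                  # f(x) = x^2 sampled at the nodes
v = f.copy()
for _ in range(200):          # iterate: v = B_n^m f at the nodes
    v = A @ v

# Known limit of the iterates: the line interpolating f(0) and f(1).
limit = f[0] + (f[-1] - f[0]) * nodes
print(np.max(np.abs(v - limit)))
```

Since B_n reproduces linear functions (eigenvalue 1) and its remaining eigenvalues are below 1 − 1/n, the iterates contract onto the interpolant geometrically; quantitative estimates of this pointwise rate are what the paper studies.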
Thin-wall approximation in vacuum decay: A lemma
NASA Astrophysics Data System (ADS)
Brown, Adam R.
2018-05-01
The "thin-wall approximation" gives a simple estimate of the decay rate of an unstable quantum field. Unfortunately, the approximation is uncontrolled. In this paper I show that there are actually two different thin-wall approximations and that they bracket the true decay rate: I prove that one is an upper bound and the other a lower bound. In the thin-wall limit, the two approximations converge. In the presence of gravity, a generalization of this lemma provides a simple sufficient condition for nonperturbative vacuum instability.
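For context, the standard thin-wall estimate being bounded here is Coleman's: the Euclidean action of a critical bubble with wall tension σ separating vacua split by energy density ε is B = 27π²σ⁴/(2ε³) in 4D without gravity, with decay rate per unit volume proportional to e^(−B). A minimal numeric sketch with hypothetical σ and ε in natural units:

```python
import math

def thin_wall_action(sigma, epsilon):
    """Coleman's thin-wall bounce action in 4D without gravity:
    B = 27 * pi^2 * sigma^4 / (2 * epsilon^3)."""
    return 27 * math.pi**2 * sigma**4 / (2 * epsilon**3)

def critical_radius(sigma, epsilon):
    """Radius of the critical bubble, R = 3*sigma/epsilon."""
    return 3 * sigma / epsilon

# Hypothetical wall tension and vacuum energy splitting.
sigma, epsilon = 1.0, 2.0
print(thin_wall_action(sigma, epsilon))   # decay rate/volume ~ exp(-B)
print(critical_radius(sigma, epsilon))
```

The paper's lemma concerns two distinct ways of evaluating this approximation away from the strict thin-wall limit and shows they bracket the true action from above and below.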
Sambo, Maganga; Johnson, Paul C. D.; Hotopp, Karen; Changalucha, Joel; Cleaveland, Sarah; Kazwala, Rudovick; Lembo, Tiziana; Lugelo, Ahmed; Lushasi, Kennedy; Maziku, Mathew; Mbunda, Eberhard; Mtema, Zacharia; Sikana, Lwitiko; Townsend, Sunny E.; Hampson, Katie
2017-01-01
Rabies can be eliminated by achieving comprehensive coverage of 70% of domestic dogs during annual mass vaccination campaigns. Estimates of vaccination coverage are, therefore, required to evaluate and manage mass dog vaccination programs; however, there is no specific guidance for the most accurate and efficient methods for estimating coverage in different settings. Here, we compare post-vaccination transects, school-based surveys, and household surveys across 28 districts in southeast Tanzania and Pemba island covering rural, urban, coastal and inland settings, and a range of different livelihoods and religious backgrounds. These approaches were explored in detail in a single district in northwest Tanzania (Serengeti), where their performance was compared with a complete dog population census that also recorded dog vaccination status. Post-vaccination transects involved counting marked (vaccinated) and unmarked (unvaccinated) dogs immediately after campaigns in 2,155 villages (24,721 dogs counted). School-based surveys were administered to 8,587 primary school pupils each representing a unique household, in 119 randomly selected schools approximately 2 months after campaigns. Household surveys were conducted in 160 randomly selected villages (4,488 households) in July/August 2011. Costs to implement these coverage assessments were $12.01, $66.12, and $155.70 per village for post-vaccination transects, school-based, and household surveys, respectively. Simulations were performed to assess the effect of sampling on the precision of coverage estimation. The sampling effort required to obtain reasonably precise estimates of coverage from household surveys is generally very high and probably prohibitively expensive for routine monitoring across large areas, particularly in communities with high human to dog ratios. School-based surveys partially overcame sampling constraints, however, were also costly to obtain reasonably precise estimates of coverage. 
Post-vaccination transects provided precise and timely estimates of community-level coverage that could be used to troubleshoot the performance of campaigns across large areas. However, transects typically overestimated coverage by around 10%, which therefore needs consideration when evaluating the impacts of campaigns. We discuss the advantages and disadvantages of these different methods and make recommendations for how vaccination campaigns can be better monitored and managed at different stages of rabies control and elimination programs. PMID:28352630
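The transect logic described above reduces to a binomial proportion: coverage is the fraction of counted dogs that are marked. A minimal sketch, using invented counts for a single village (not the study's data) and a Wilson score interval; the 10% correction at the end is a crude stand-in for the overestimation bias the abstract reports:

```python
from math import sqrt

def coverage_estimate(marked, unmarked, z=1.96):
    """Estimate vaccination coverage from transect counts of marked
    (vaccinated) and unmarked dogs, with a Wilson score interval."""
    n = marked + unmarked
    p = marked / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, (centre - half, centre + half)

# Illustrative counts for one village
p, (lo, hi) = coverage_estimate(marked=80, unmarked=20)
adjusted = p - 0.10  # crude correction for the ~10% transect overestimate
print(p, round(lo, 2), round(hi, 2))  # 0.8 0.71 0.87
```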
NASA Technical Reports Server (NTRS)
Banks, H. T.; Kunisch, K.
1982-01-01
Approximation results from linear semigroup theory are used to develop a general framework for convergence of approximation schemes in parameter estimation and optimal control problems for nonlinear partial differential equations. These ideas are used to establish theoretical convergence results for parameter identification using modal (eigenfunction) approximation techniques. Results from numerical investigations of these schemes for both hyperbolic and parabolic systems are given.
Gucciardi, Daniel F; Zhang, Chun-Qing; Ponnusamy, Vellapandian; Si, Gangyan; Stenling, Andreas
2016-04-01
The aims of this study were to assess the cross-cultural invariance of athletes' self-reports of mental toughness and to introduce and illustrate the application of approximate measurement invariance using Bayesian estimation for sport and exercise psychology scholars. Athletes from Australia (n = 353, Mage = 19.13, SD = 3.27, men = 161), China (n = 254, Mage = 17.82, SD = 2.28, men = 138), and Malaysia (n = 341, Mage = 19.13, SD = 3.27, men = 200) provided a cross-sectional snapshot of their mental toughness. The cross-cultural invariance of the mental toughness inventory in terms of (a) the factor structure (configural invariance), (b) factor loadings (metric invariance), and (c) item intercepts (scalar invariance) was tested using an approximate measurement framework with Bayesian estimation. Results indicated that approximate metric and scalar invariance was established. From a methodological standpoint, this study demonstrated the usefulness and flexibility of Bayesian estimation for single-sample and multigroup analyses of measurement instruments. Substantively, the current findings suggest that the measurement of mental toughness requires cultural adjustments to better capture the contextually salient (emic) aspects of this concept.
Choi, J.; Harvey, J.W.
2000-01-01
Developing a more thorough understanding of water and chemical budgets in wetlands depends in part on our ability to quantify time-varying interactions between ground water and surface water. We used a combined water and solute mass balance approach to estimate time-varying ground-water discharge and recharge in the Everglades Nutrient Removal project (ENR), a relatively large constructed wetland (1544 hectares) built for removing nutrients from agricultural drainage in the northern Everglades in South Florida, USA. Over a 4-year period (1994 through 1998), ground-water recharge averaged 13.4 hectare-meter per day (ha-m/day) or 0.9 cm/day, which is approximately 31% of surface water pumped into the ENR for treatment. In contrast, ground-water discharge was much smaller (1.4 ha-m/day, or 0.09 cm/day, or 2.8% of water input to ENR for treatment). Using a water-balance approach alone only allowed net ground-water exchange (discharge - recharge) to be estimated (-12 ± 2.4 ha-m/day). Discharge and recharge were individually determined by combining a chloride mass balance with the water balance. For a variety of reasons, the ground-water discharge estimated by the combined mass balance approach was not reliable (1.4 ± 37 ha-m/day). As a result, ground-water interactions could only be reliably estimated by comparing the mass-balance results with other independent approaches, including direct seepage-meter measurements and previous estimates using ground-water modeling. All three independent approaches provided similar estimates of average ground-water recharge, ranging from 13 to 14 ha-m/day. There was also relatively good agreement between ground-water discharge estimates for the mass balance and seepage meter methods, 1.4 and 0.9 ha-m/day, respectively. However, ground-water-flow modeling provided an average discharge estimate that was approximately a factor of four higher (5.4 ha-m/day) than the other two methods.
Our study developed an initial understanding of how the design and operation of the ENR increases interactions between ground water and surface water. A considerable portion of recharged ground water (73%) was collected and returned to the ENR by a seepage canal. Additional recharge that was not captured by the seepage canal only occurred when pumped inflow rates to ENR (and ENR water levels) were relatively high. Management of surface water in the northern Everglades therefore clearly has the potential to increase interactions with ground water.
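The way the chloride balance separates net exchange into its two components can be illustrated as a two-equation linear system in the unknowns discharge D and recharge R. The concentrations and the chloride-flux term below are invented (chosen so the solution reproduces the magnitudes reported above), not taken from the study:

```python
import numpy as np

c_gw, c_sw = 120.0, 80.0  # assumed chloride concentrations (mg/L)
net_exchange = -12.0      # D - R, from the water balance (ha-m/day)
net_cl_flux = -904.0      # c_gw*D - c_sw*R, from the chloride balance (illustrative)

# Solve the 2x2 system for discharge D and recharge R simultaneously
A = np.array([[1.0, -1.0],
              [c_gw, -c_sw]])
D, R = np.linalg.solve(A, np.array([net_exchange, net_cl_flux]))
print(D, R)  # discharge ~1.4, recharge ~13.4 ha-m/day
```

The point of the combined approach is visible here: the water balance alone fixes only D - R, and the second (chloride) equation is what makes D and R individually identifiable.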
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, Gary
This final report provides a complete summary of the activities, results, analytical discussion, and overall evaluation of the project titled “Economical and Environmentally Benign Extraction of Rare Earth Elements (REES) from Coal & Coal Byproducts” under DOE Award Number DE-FE-0027155 that started in March 2016 and ended December 2017. Fly ash was selected as the coal-byproduct source material due to the fact that it is readily available with no need for extensive methods to obtain the material, it is produced in large quantities (>50 million tons per year), and it has REE concentrations similar to other coal-byproducts. The selected fly ash used throughout this project was from the Mill Creek power generating facility operated by Louisville Gas and Electric located in Louisville, KY and was subjected to a variety of physical and chemical characterization tests. Results from fusion extractions showed that the selected fly-ash had a TREE+Y concentration of 480 ppm with critical REEs concentration of 200 ppm. The fly ash had an outlook ratio of 1.25 and an estimated value of $16-$18 worth of salable REEs per 1-tonne of fly ash. Additional characterizations by optical evaluation, QEMSCAN, XRD, size fractionation, and SEM analysis showed the fly ash consisted of small glassy spherules with a size range between 1 to 110 µm (ave. diam. of 13 µm), was heterogeneous in chemical composition (main crystalline phases: aluminum oxides and iron oxides) and was primarily an amorphous material (75 to 80%). A simple stepped approach was completed to estimate the total REE resource quantity. The approach included REE characterization of the representative samples, evaluation of fly-ash availability, and a final determination of estimated resource availability with regard to REE grade on a regional and national scale.
This data represents the best available information and is based upon the assumptions that the power generating facility where the fly-ash was obtained will use the same coal sources (actual mines were identified), the coal materials will have relatively consistent REE concentrations, and the REE extraction process developed during this project can achieve 42% REE recovery (validated and confirmed). Calculations indicated that the estimated REE resource is approximately 175,000 tonnes with a current estimated value of $3,330MM. The proposed REE extraction and production process developed during this project used four fundamental steps: 1) fly-ash pretreatment to enhance REE extraction, 2) REE extraction by acid digestion, 3) REE separation/concentration by carbon adsorption and column chromatography, and 4) REE oxide production. Secondary processing steps to manage process residuals and additional processing techniques to produce value-added products were incorporated into the process during the project. These secondary steps were not only necessary to manage residuals, but also provided additional revenue streams that offset operational and capital expenditures. The process produces one value-added product stream (zeolite Na-P1), a solids waste stream, and one liquid stream that met RCRA discharge requirements. Based upon final design criteria and operational parameters, the proposed system could produce approximately 200 grams of REOs from 1-tonne of fly-ash, thereby representing a TREE+Y recovery of 42% (project target of > 25%). A detailed economic model was developed to evaluate both CAPEX and OPEX estimates for systems with varying capacities between 100 kg to 200 tonnes of fly ash processed per day. Using a standard system capacity of 10 tonnes/day, capital costs were estimated at $88/kg fly ash while operating costs were estimated at approximately $450/kg fly ash.
This operating cost estimate includes a revenue of $495/tonne of fly ash processed from the value-added product produced from the system (zeolite Na-P1). Although operating cost savings due to zeolite production were significant, the capital + operating cost for a 10 tonne system was more expensive than the total dollar value of REEs present in the fly ash material. Specifically, the estimated cost per 1-tonne of fly ash treated is approximately $540 while the estimated value of REEs in the fly ash is $18-$20/tonne. This is an excessive difference, showing that the proposed process is not economically feasible strictly on the basis of REE revenue compared to extraction costs. Although the current proposed system does not produce sufficient quantities of REEs or additional revenue sources to offset operational and capital costs, supplementary factors including US strategic concerns, commercial demands, and defense department requirements must be factored in. At this time, the process developed during this project provides foundational information for future development of simple processes that require low capital investment and that will extract a valuable quality and quantity of REE oxides from industrial waste.
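A quick arithmetic check of the feasibility conclusion, using only figures quoted in the summary above (the midpoint of the $18-$20/tonne value range is an assumption):

```python
cost_per_tonne = 540.0      # estimated treatment cost, USD per tonne of fly ash
ree_value_per_tonne = 19.0  # midpoint of the quoted $18-$20/tonne REE value

shortfall = cost_per_tonne - ree_value_per_tonne
print(shortfall)                                       # 521.0 USD/tonne gap
print(f"{ree_value_per_tonne / cost_per_tonne:.1%}")   # REE value covers ~3.5% of cost
```

The roughly $520/tonne gap is why REE revenue alone cannot carry the process, and why the zeolite co-product revenue matters so much to the economics.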
Atkinson, Quentin D; Gray, Russell D; Drummond, Alexei J
2008-02-01
The relative timing and size of regional human population growth following our expansion from Africa remain unknown. Human mitochondrial DNA (mtDNA) diversity carries a legacy of our population history. Given a set of sequences, we can use coalescent theory to estimate past population size through time and draw inferences about human population history. However, recent work has challenged the validity of using mtDNA diversity to infer species population sizes. Here we use Bayesian coalescent inference methods, together with a global data set of 357 human mtDNA coding-region sequences, to infer human population sizes through time across 8 major geographic regions. Our estimates of relative population sizes show remarkable concordance with the contemporary regional distribution of humans across Africa, Eurasia, and the Americas, indicating that mtDNA diversity is a good predictor of population size in humans. Plots of population size through time show slow growth in sub-Saharan Africa beginning 143-193 kya, followed by a rapid expansion into Eurasia after the emergence of the first non-African mtDNA lineages 50-70 kya. Outside Africa, the earliest and fastest growth is inferred in Southern Asia approximately 52 kya, followed by a succession of growth phases in Northern and Central Asia (approximately 49 kya), Australia (approximately 48 kya), Europe (approximately 42 kya), the Middle East and North Africa (approximately 40 kya), New Guinea (approximately 39 kya), the Americas (approximately 18 kya), and a second expansion in Europe (approximately 10-15 kya). Comparisons of relative regional population sizes through time suggest that between approximately 45 and 20 kya most of humanity lived in Southern Asia. These findings not only support the use of mtDNA data for estimating human population size but also provide a unique picture of human prehistory and demonstrate the importance of Southern Asia to our recent evolutionary past.
Computational tools for exact conditional logistic regression.
Corcoran, C; Mehta, C; Patel, N; Senchaudhuri, P
Logistic regression analyses are often challenged by the inability of unconditional likelihood-based approximations to yield consistent, valid estimates and p-values for model parameters. This can be due to sparseness or separability in the data. Conditional logistic regression, though useful in such situations, can also be computationally unfeasible when the sample size or number of explanatory covariates is large. We review recent developments that allow efficient approximate conditional inference, including Monte Carlo sampling and saddlepoint approximations. We demonstrate through real examples that these methods enable the analysis of significantly larger and more complex data sets. We find in this investigation that for these moderately large data sets Monte Carlo seems a better alternative, as it provides unbiased estimates of the exact results and can be executed in less CPU time than can the single saddlepoint approximation. Moreover, the double saddlepoint approximation, while computationally the easiest to obtain, offers little practical advantage. It produces unreliable results and cannot be computed when a maximum likelihood solution does not exist. Copyright 2001 John Wiley & Sons, Ltd.
Expenditure and resource utilisation for cervical screening in Australia
2012-01-01
Background The National Cervical Screening Program in Australia currently recommends that women aged 18–69 years are screened with conventional cytology every 2 years. Publicly funded HPV vaccination was introduced in 2007, and partly as a consequence, a renewal of the screening program that includes a review of screening recommendations has recently been announced. This study aimed to provide a baseline for such a review by quantifying screening program resource utilisation and costs in 2010. Methods A detailed model of current cervical screening practice in Australia was constructed and we used data from the Victorian Cervical Cytology Registry to model age-specific compliance with screening and follow-up. We applied model-derived rate estimates to the 2010 Australian female population to calculate costs and numbers of colposcopies, biopsies, treatments for precancer and cervical cancers in that year, assuming that the numbers of these procedures were not yet substantially impacted by vaccination. Results The total cost of the screening program in 2010 (excluding administrative program overheads) was estimated to be A$194.8M. We estimated that a total of 1.7 million primary screening smears costing $96.7M were conducted, a further 188,900 smears costing $10.9M were conducted to follow-up low grade abnormalities, 70,900 colposcopy and 34,100 histological evaluations together costing $21.2M were conducted, and about 18,900 treatments for precancerous lesions were performed (including retreatments), associated with a cost of $45.5M for treatment and post-treatment follow-up. We also estimated that $20.5M was spent on work-up and treatment for approximately 761 women diagnosed with invasive cervical cancer. Overall, an estimated $23 was spent in 2010 for each adult woman in Australia on cervical screening program-related activities. 
Conclusions Approximately half of the total cost of the screening program is spent on delivery of primary screening tests; but the introduction of HPV vaccination, new technologies, increasing the interval and changing the age range of screening is expected to have a substantial impact on this expenditure, as well as having some impact on follow-up and management costs. These estimates provide a benchmark for future assessment of the impact of changes to screening program recommendations to the costs of cervical screening in Australia. PMID:23216968
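The component costs quoted above can be checked against the stated program total (all figures in A$ millions, taken directly from the abstract):

```python
components = {
    "primary screening smears": 96.7,
    "follow-up of low-grade abnormalities": 10.9,
    "colposcopy and histology": 21.2,
    "precancer treatment and follow-up": 45.5,
    "invasive cancer work-up and treatment": 20.5,
}
total = sum(components.values())
print(round(total, 1))  # 194.8, matching the quoted A$194.8M program total
```

The components sum exactly to the reported total, and primary screening ($96.7M) is indeed approximately half of it, as the conclusions state.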
An approximate dynamic programming approach to resource management in multi-cloud scenarios
NASA Astrophysics Data System (ADS)
Pietrabissa, Antonio; Priscoli, Francesco Delli; Di Giorgio, Alessandro; Giuseppi, Alessandro; Panfili, Martina; Suraci, Vincenzo
2017-03-01
The programmability and the virtualisation of network resources are crucial to deploy scalable Information and Communications Technology (ICT) services. The increasing demand of cloud services, mainly devoted to the storage and computing, requires a new functional element, the Cloud Management Broker (CMB), aimed at managing multiple cloud resources to meet the customers' requirements and, simultaneously, to optimise their usage. This paper proposes a multi-cloud resource allocation algorithm that manages the resource requests with the aim of maximising the CMB revenue over time. The algorithm is based on Markov decision process modelling and relies on reinforcement learning techniques to find online an approximate solution.
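As a rough illustration of the reinforcement-learning idea described above (not the authors' algorithm), here is a tabular Q-learning loop for a toy broker that accepts or rejects resource requests to maximise revenue over time. The MDP dynamics, rewards, and parameters are all invented:

```python
import random

random.seed(1)
CAPACITY = 3                  # toy pool of identical resource units
ACTIONS = (0, 1)              # 0 = reject request, 1 = accept request
Q = {(s, a): 0.0 for s in range(CAPACITY + 1) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(free, action):
    """Invented transition/reward model for a broker with `free` units left."""
    if action == 1 and free > 0:
        return free - 1, 1.0  # allocate one unit, earn unit revenue
    if action == 1:
        return free, -1.0     # accepting with no spare capacity is penalised
    return min(free + 1, CAPACITY), 0.0  # reject; one allocated unit is released

free = CAPACITY
for _ in range(5000):         # epsilon-greedy online Q-learning updates
    a = random.choice(ACTIONS) if random.random() < eps \
        else max(ACTIONS, key=lambda x: Q[(free, x)])
    nxt, r = step(free, a)
    Q[(free, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS)
                             - Q[(free, a)])
    free = nxt

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(CAPACITY + 1)}
print(policy)  # greedy accept/reject decision per remaining-capacity state
```

The learned table approximates the value of each (state, action) pair online, which is the essence of finding an approximate MDP solution without enumerating the full model in advance.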
Willem W.S. van Hees
2002-01-01
Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...
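The two approaches being compared can be sketched with synthetic plot data (not the Alaska inventory): a ratio-of-means estimator whose standard error is approximated by resampling plots with replacement:

```python
import random

random.seed(42)
# Synthetic per-plot data: y = resource of interest, x = auxiliary measure
y = [random.gauss(50, 10) for _ in range(200)]
x = [random.gauss(10, 2) for _ in range(200)]

def rom(y, x):
    """Ratio-of-means estimator: sum(y) / sum(x)."""
    return sum(y) / sum(x)

# Bootstrap: resample whole plots with replacement, recompute the ratio
boots = []
for _ in range(2000):
    idx = [random.randrange(len(y)) for _ in range(len(y))]
    boots.append(rom([y[i] for i in idx], [x[i] for i in idx]))
mean_b = sum(boots) / len(boots)
se_boot = (sum((b - mean_b) ** 2 for b in boots) / (len(boots) - 1)) ** 0.5
print(rom(y, x), se_boot)
```

The bootstrap standard error is the spread of the resampled ratios; a traditional approach would instead use a linearised (Taylor series) variance formula for the ratio.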
A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems
NASA Technical Reports Server (NTRS)
Larson, Mats G.; Barth, Timothy J.
1999-01-01
This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.
NASA Astrophysics Data System (ADS)
Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.
2018-03-01
Methods to estimate the strain-life curve, divided into three categories (simple approximations, artificial neural network-based approaches, and continuum damage mechanics models), were examined, and their accuracy was assessed in strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. Simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed an inconsistency in estimation of the fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting early stages of crack initiation. This model requires more experimental data for calibration than approaches using simple approximations. As a result of the different theories underlying the analyzed methods, the different approaches have different strengths and weaknesses. However, it was found that the group of parametric equations categorized as simple approximations are the easiest for practical use, with their applicability having already been verified for a broad range of materials.
Troxler, Tiffany G.; Gaiser, Evelyn; Barr, Jordan; Fuentes, Jose D.; Jaffe, Rudolf; Childers, Daniel L.; Collado-Vides, Ligia; Rivera-Monroy, Victor H.; Castañeda-Moya, Edward; Anderson, William; Chambers, Randy; Chen, Meilian; Coronado-Molina, Carlos; Davis, Stephen E.; Engel, Victor C.; Fitz, Carl; Fourqurean, James; Frankovich, Tom; Kominoski, John; Madden, Chris; Malone, Sparkle L.; Oberbauer, Steve F.; Olivas, Paulo; Richards, Jennifer; Saunders, Colin; Schedlbauer, Jessica; Scinto, Leonard J.; Sklar, Fred; Smith, Thomas J.; Smoak, Joseph M.; Starr, Gregory; Twilley, Robert; Whelan, Kevin
2013-01-01
Recent studies suggest that coastal ecosystems can bury significantly more C than tropical forests, indicating that continued coastal development and exposure to sea level rise and storms will have global biogeochemical consequences. The Florida Coastal Everglades Long Term Ecological Research (FCE LTER) site provides an excellent subtropical system for examining carbon (C) balance because of its exposure to historical changes in freshwater distribution and sea level rise and its history of significant long-term carbon-cycling studies. FCE LTER scientists used net ecosystem C balance and net ecosystem exchange data to estimate C budgets for riverine mangrove, freshwater marsh, and seagrass meadows, providing insights into the magnitude of C accumulation and lateral aquatic C transport. Rates of net C production in the riverine mangrove forest exceeded those reported for many tropical systems, including terrestrial forests, but there are considerable uncertainties around those estimates due to the high potential for gain and loss of C through aquatic fluxes. C production was approximately balanced between gain and loss in Everglades marshes; however, the contribution of periphyton increases uncertainty in these estimates. Moreover, while the approaches used for these initial estimates were informative, a resolved approach for addressing areas of uncertainty is critically needed for coastal wetland ecosystems. Once resolved, these C balance estimates, in conjunction with an understanding of drivers and key ecosystem feedbacks, can inform cross-system studies of ecosystem response to long-term changes in climate, hydrologic management, and other land use along coastlines.
Griffin, Paul C.; Schoenecker, Kate A.; Gogan, Peter J.; Lubow, Bruce C.
2009-01-01
Reliable estimates of elk (Cervus elaphus) and deer (Odocoileus hemionus) abundance on Santa Rosa Island, Channel Islands National Park, California, are required to assess the success of management actions directed at these species. We conducted a double-observer aerial survey of elk on a large portion of Santa Rosa Island on March 19, 2009. All four persons on the helicopter were treated as observers. We used two analytical approaches: (1) with three capture occasions corresponding to three possible observers, pooling the observations from the two rear-seat observers, and (2) with four capture occasions treating each observer separately. Approach 1 resulted in an estimate of 483 elk in the survey zone with a 95-percent confidence interval of 479 to 524 elk. Approach 2 resulted in an estimate of 489 elk in the survey zone with a 95-percent confidence interval of 471 to 535 elk. Approximately 5 percent of the elk groups that were estimated to have been present in the survey area were not seen by any observer. Fog prevented us from collecting double-observer observations for deer as intended on March 20. However, we did count 434 deer during the double-observer counts of elk on March 19. Both the calculated number of elk and the observed number of deer are minimal estimates of numbers of each ungulate species on Santa Rosa Island as weather conditions precluded us from surveying the entire island.
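The double-observer idea can be sketched with a Lincoln-Petersen style calculation: groups seen by each observer, and by both, determine how many were likely missed by everyone. The counts below are invented for illustration; the study's actual mark-recapture analysis is more elaborate:

```python
# Invented counts of elk groups for illustration
n1 = 60      # groups detected by the front-seat observer
n2 = 55      # groups detected by the (pooled) rear-seat observers
m = 50       # groups detected by both

N_hat = n1 * n2 / m                           # estimated groups present
p_miss = (1 - n1 / N_hat) * (1 - n2 / N_hat)  # P(a group missed by everyone)
print(N_hat, round(p_miss, 3))  # 66.0 0.015
```

The small but nonzero `p_miss` term is what lets the method estimate groups seen by no observer, the ~5 percent figure reported above.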
NASA Astrophysics Data System (ADS)
Takagi, Hideo D.; Swaddle, Thomas W.
1996-01-01
The outer-sphere contribution to the volume of activation of homogeneous electron exchange reactions is estimated for selected solvents on the basis of the mean spherical approximation (MSA), and the calculated values are compared with those estimated by the Stranks-Hush-Marcus (SHM) theory and with activation volumes obtained experimentally for the electron exchange reaction between tris(hexafluoroacetylacetonato)ruthenium(III) and -(II) in acetone, acetonitrile, methanol and chloroform. The MSA treatment, which recognizes the molecular nature of the solvent, does not improve significantly upon the continuous-dielectric SHM theory, which represents the experimental data adequately for the more polar solvents.
Computing the scatter component of mammographic images.
Highnam, R P; Brady, J M; Shepstone, B J
1994-01-01
The authors build upon a technical report (Tech. Report OUEL 2009/93, Engng. Sci., Oxford Uni., Oxford, UK, 1993) in which they proposed a model of the mammographic imaging process for which scattered radiation is a key degrading factor. Here, the authors propose a way of estimating the scatter component of the signal at any pixel within a mammographic image, and they use this estimate for model-based image enhancement. The first step is to extend the authors' previous model to divide breast tissue into "interesting" (fibrous/glandular/cancerous) tissue and fat. The scatter model is then based on the idea that the amount of scattered radiation reaching a point is related to the energy imparted to the surrounding neighbourhood. This complex relationship is approximated using published empirical data, and it varies with the size of the breast being imaged. The approximation is further complicated by needing to take account of extra-focal radiation and breast edge effects. The approximation takes the form of a weighting mask which is convolved with the total signal (primary and scatter) to give a value which is input to a "scatter function", approximated using three reference cases, and which returns a scatter estimate. Given a scatter estimate, the more important primary component can be calculated and used to create an image recognizable by a radiologist. The images resulting from this process are clearly enhanced, and model verification tests based on an estimate of the thickness of interesting tissue present proved to be very successful. A good scatter model opens the way for further processing to remove the effects of other degrading factors, such as beam hardening.
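The convolve-then-map structure of the scatter estimate can be sketched in a few lines. Everything here is a stand-in: the image is random, the Gaussian mask replaces the empirically derived weighting mask, and a simple linear function replaces the reference-case "scatter function":

```python
import numpy as np
from scipy.ndimage import convolve

# Invented stand-in for the total (primary + scatter) mammographic signal
image = np.random.default_rng(3).uniform(0.5, 1.5, (64, 64))

# Neighbourhood weighting mask: a normalised Gaussian, standing in for the
# empirically derived mask described in the abstract
k = 7
yy, xx = np.mgrid[-(k // 2):k // 2 + 1, -(k // 2):k // 2 + 1]
mask = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
mask /= mask.sum()

weighted = convolve(image, mask, mode="nearest")  # local "imparted energy"
scatter = 0.3 * weighted   # assumed linear stand-in for the scatter function
primary = image - scatter  # the component used to build the enhanced image
```

Subtracting the estimated scatter from the total signal recovers the primary component, which is the quantity the enhancement step actually works from.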
Libertus, Melissa E.; Odic, Darko; Feigenson, Lisa; Halberda, Justin
2016-01-01
Children can represent number in at least two ways: by using their non-verbal, intuitive Approximate Number System (ANS), and by using words and symbols to count and represent numbers exactly. Further, by the time they are five years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children’s math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation – mapping accuracy and variability – might each relate to math performance. Here, we address these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. PMID:27348475
Libertus, Melissa E; Odic, Darko; Feigenson, Lisa; Halberda, Justin
2016-10-01
Children can represent number in at least two ways: by using their non-verbal, intuitive approximate number system (ANS) and by using words and symbols to count and represent numbers exactly. Furthermore, by the time they are 5 years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children's math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation (mapping accuracy and variability) might each relate to math performance. Here, we addressed these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities, even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. Copyright © 2016 Elsevier Inc. All rights reserved.
Estimating the potential for methane clathrate instability in the 1%-CO2 IPCC AR-4 simulations
NASA Astrophysics Data System (ADS)
Lamarque, Jean-François
2008-10-01
The recent work of Reagan and Moridis (2007) has shown that even a limited warming of 1 K over 100 years can lead to clathrate destabilization, leading to a significant flux of methane into the ocean water, at least for shallow deposits. Here we study the potential for methane clathrate destabilization by identifying the 100-year temperature increase in the available IPCC (Intergovernmental Panel on Climate Change) AR-4 1%-CO2 increase per year (up to doubling over pre-industrial conditions, which occurs after 70 years) simulations. Depending on assumptions made on the possible locations (in this case, only depth) of methane clathrates and on temperature dependence, our calculation leads to an estimated model-mean release of methane at the bottom of the ocean of approximately 560-2140 Tg(CH4)/year; as no actual geographical distribution of methane clathrates is considered here, these flux estimates must be viewed as upper bound estimates. Using an observed 1% ratio to estimate the amount of methane reaching the atmosphere, our analysis leads to a relatively small methane flux of approximately 5-21 Tg(CH4)/year, with an estimated inter-model standard deviation of approximately 30%. The role of sea-level rise by 2100 will be to further stabilize methane clathrates, albeit to a small amount as the sea-level rise is expected to be less than a few meters.
Goal-oriented explicit residual-type error estimates in XFEM
NASA Astrophysics Data System (ADS)
Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin
2013-08-01
A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.
Management of geriatric incontinence in nursing homes.
Schnelle, J F; Traughber, B; Morgan, D B; Embry, J E; Binion, A F; Coleman, A
1983-01-01
A behavioral management system designed to reduce urinary incontinence was evaluated in two nursing homes with a pretest-posttest control group design with repeated measures. The primary components of the system were prompting and contingent social approval/disapproval which required approximately 2.5 minutes per patient per hour to administer. The frequency of correct toileting for experimental subjects increased by approximately 45%. The experimental groups were significantly different from the control groups on both incontinence and correct toileting measures. The results are discussed in view of the management issues inherent in nursing home settings. PMID:6885672
Wartmann, Flurina M; Purves, Ross S; van Schaik, Carel P
2010-04-01
Quantification of the spatial needs of individuals and populations is vitally important for management and conservation. Geographic information systems (GIS) have recently become important analytical tools in wildlife biology, improving our ability to understand animal movement patterns, especially when very large data sets are collected. This study aims at combining the field of GIS with primatology to model and analyse space-use patterns of wild orang-utans. Home ranges of female orang-utans in the Tuanan Mawas forest reserve in Central Kalimantan, Indonesia were modelled with kernel density estimation methods. Kernel results were compared with minimum convex polygon estimates, and were found to perform better, because they were less sensitive to sample size and produced more reliable estimates. Furthermore, daily travel paths were calculated from 970 complete follow days. Annual ranges for the resident females were approximately 200 ha and remained stable over several years; total home range size was estimated to be 275 ha. On average, each female shared a third of her home range with each neighbouring female. Orang-utan females in Tuanan built their night nest on average 414 m away from the morning nest, whereas average daily travel path length was 777 m. A significant effect of fruit availability on day path length was found. Sexually active females covered longer distances per day and may also temporarily expand their ranges.
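As a hedged illustration of the kernel method the authors favour, the sketch below estimates a 95% fixed-kernel home range from synthetic location fixes; all values are invented, not Tuanan follow data:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic (x, y) fixes in metres stand in for radiotracking data.
rng = np.random.default_rng(0)
fixes = rng.normal(0.0, 300.0, size=(2, 200))

kde = gaussian_kde(fixes)
# Evaluate the utilisation distribution on a grid covering the fixes.
xs = np.linspace(-1500, 1500, 100)
ys = np.linspace(-1500, 1500, 100)
X, Y = np.meshgrid(xs, ys)
dens = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)

cell_area = (xs[1] - xs[0]) * (ys[1] - ys[0])    # m^2 per grid cell
# Density threshold whose super-level set holds 95% of the probability mass.
d = np.sort(dens.ravel())[::-1]
cum = np.cumsum(d) * cell_area
threshold = d[np.searchsorted(cum, 0.95)]
home_range_ha = (dens >= threshold).sum() * cell_area / 10_000.0
print(round(home_range_ha, 1))   # 95% kernel home range, in hectares
```

Unlike a minimum convex polygon, the isopleth area is driven by where the animal concentrates its time, which is why kernel estimates are less sensitive to outlying fixes and sample size.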
Spline approximations for nonlinear hereditary control systems
NASA Technical Reports Server (NTRS)
Daniel, P. L.
1982-01-01
A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.
An approximation formula for a class of fault-tolerant computers
NASA Technical Reports Server (NTRS)
White, A. L.
1986-01-01
An approximation formula is derived for the probability of failure for fault-tolerant process-control computers. These computers use redundancy and reconfiguration to achieve high reliability. Finite-state Markov models capture the dynamic behavior of component failure and system recovery, and the approximation formula permits an estimation of system reliability by an easy examination of the model.
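The idea of checking an approximation formula against a full finite-state Markov model can be sketched as follows; the three-state model and all rates are hypothetical, not the paper's:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state CTMC: 0 = healthy, 1 = fault awaiting reconfiguration,
# 2 = system failure (absorbing). A second fault before recovery is fatal.
lam = 1e-3    # component failure rate, per hour (illustrative)
mu = 60.0     # reconfiguration (recovery) rate, per hour (illustrative)
Q = np.array([
    [-lam,         lam,  0.0],   # healthy -> fault
    [  mu, -(mu + lam),  lam],   # fault -> recovered, or fault -> failure
    [ 0.0,         0.0,  0.0],   # failure is absorbing
])

T = 10.0                         # mission time, hours
exact = expm(Q * T)[0, 2]        # exact probability of system failure by T
approx = lam**2 * T / mu         # fast-recovery approximation of the same event
print(exact, approx)
```

With recovery much faster than failure (mu >> lam), the closed-form approximation agrees with the full model to within a few percent, which is the kind of easy reliability estimate the abstract describes.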
Exponential Approximations Using Fourier Series Partial Sums
NASA Technical Reports Server (NTRS)
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piece-wise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the k-th derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
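The first step — exploiting the asymptotic form of the Fourier coefficients to locate a jump — can be sketched for the simplest case of a single discontinuity, where c_k ≈ J·exp(-ikξ)/(2πik) for a jump of size J at x = ξ. The shifted sawtooth and the choice k = 40 below are illustrative assumptions, not the paper's examples:

```python
import numpy as np

# 2π-periodic sawtooth with one jump of J = -2π at x = ξ.
xi_true = 1.0
N = 200_000
x = 2.0 * np.pi * np.arange(N) / N
f = (x - xi_true) % (2.0 * np.pi)

def coeff(k):
    # Rectangle-rule approximation of the k-th Fourier coefficient.
    return np.mean(f * np.exp(-1j * k * x))

k = 40
r = coeff(k + 1) / coeff(k)                  # ≈ exp(-i·ξ) · k/(k + 1)
xi_est = -np.angle(r * (k + 1) / k)          # jump location from the phase
J_est = (coeff(k) * 2.0 * np.pi * 1j * k * np.exp(1j * k * xi_est)).real
print(xi_est, J_est)                         # ≈ 1.0 and ≈ -2π
```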
On the estimation of spread rate for a biological population
Jim Clark; Lajos Horváth; Mark Lewis
2001-01-01
We propose a nonparametric estimator for the rate of spread of an introduced population. We prove that the limit distribution of the estimator is normal or stable, depending on the behavior of the moment generating function. We show that resampling methods can also be used to approximate the distribution of the estimators.
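A minimal sketch of the resampling idea, with a synthetic displacement sample standing in for real spread-rate data (the statistic and numbers are illustrative only):

```python
import numpy as np

# Synthetic yearly displacements (km/yr) of an introduced population.
rng = np.random.default_rng(1)
displacements = rng.gamma(shape=2.0, scale=5.0, size=60)

est = displacements.mean()                   # point estimate of spread rate
boot = np.array([
    rng.choice(displacements, size=displacements.size, replace=True).mean()
    for _ in range(2000)                     # bootstrap resamples
])
lo, hi = np.percentile(boot, [2.5, 97.5])    # bootstrap 95% interval
print(est, (lo, hi))
```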
ERIC Educational Resources Information Center
Malcolm, Peter
2013-01-01
The ability to make good estimates is essential, as is the ability to assess the reasonableness of estimates. These abilities are becoming increasingly important as digital technologies transform the ways in which people work. To estimate is to provide an approximation to a problem that is mathematical in nature, and the ability to estimate is…
Retrospective cost-effectiveness analyses for polio vaccination in the United States.
Thompson, Kimberly M; Tebbens, Radboud J Duintjer
2006-12-01
The history of polio vaccination in the United States spans 50 years and includes different phases of the disease, multiple vaccines, and a sustained significant commitment of resources. We estimated cost-effectiveness ratios and assessed the net benefits of polio vaccination applicable at various points in time from the societal perspective and we discounted these back to appropriate points in time. We reconstructed vaccine price data from available sources and used these to retrospectively estimate the total costs of the U.S. historical polio vaccination strategies (all costs reported in year 2002 dollars). We estimate that the United States invested approximately US$35 billion (1955 net present value, discount rate of 3%) in polio vaccines between 1955 and 2005 and will invest approximately US$1.4 billion (1955 net present value, or US$6.3 billion in 2006 net present value) between 2006 and 2015 assuming a policy of continued use of inactivated poliovirus vaccine (IPV) for routine vaccination. The historical and future investments translate into over 1.7 billion vaccinations that prevent approximately 1.1 million cases of paralytic polio and over 160,000 deaths (1955 net present values of approximately 480,000 cases and 73,000 deaths). Due to treatment cost savings, the investment implies net benefits of approximately US$180 billion (1955 net present value), even without incorporating the intangible costs of suffering and death and of averted fear. Retrospectively, the U.S. investment in polio vaccination represents a highly valuable, cost-saving public health program. Observed changes in the cost-effectiveness ratio estimates over time suggest the need for living economic models for interventions that appropriately change with time.
This article also demonstrates that estimates of cost-effectiveness ratios at any single time point may fail to adequately consider the context of the investment made to date and the importance of population and other dynamics, and shows the importance of dynamic modeling.
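The discounting behind the 1955 net-present-value figures can be sketched in a few lines; the cost stream below is hypothetical, and only the 3% discount rate comes from the abstract:

```python
# Discount a stream of yearly costs back to a 1955 net present value.
def npv_1955(cost_by_year, rate=0.03):
    return sum(c / (1.0 + rate) ** (year - 1955)
               for year, c in cost_by_year.items())

# Hypothetical stream: $100M per year, 1956-1960 (illustrative numbers only).
stream = {y: 100e6 for y in range(1956, 1961)}
print(round(npv_1955(stream) / 1e6, 1))  # NPV in $M, < 500 due to discounting
```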
Home range and survival of breeding painted buntings on Sapelo Island, Georgia
Springborn, E.G.; Meyers, J.M.
2005-01-01
The southeastern United States population of the painted bunting (Passerina ciris) has decreased approximately 75% from 1966-1996 based on Breeding Bird Survey trends. Partners in Flight guidelines recommend painted bunting conservation as a high priority with a need for management by state and federal agencies. Basic information on home range and survival of breeding painted buntings will provide managers with required habitat types and estimates of land areas necessary to maintain minimum population sizes for this species. We radiotracked after-second-year male and after-hatching-year female buntings on Sapelo Island, Georgia, during the breeding seasons (late April-early August) of 1997 and 1998. We used the animal movement extension in ArcView to determine fixed-kernel home range in an unmanaged maritime shrub and managed 60-80-year-old pine (Pinus spp.)-oak (Quercus spp.) forest. Using the Kaplan-Meier method, we estimated an adult breeding season survival of 1.00 for males (n = 36) and 0.94 (SE = 0.18) for females (n = 27). Painted bunting home ranges were smaller in unmanaged maritime shrub (female: kernel mean = 3.5 ha [95% CI: 2.5-4.5]; male: kernel mean = 3.1 ha [95% CI: 2.3-3.9]) compared to those in managed pine-oak forests (female: kernel mean = 4.7 ha [95% CI: 2.8-6.6]; male: kernel mean = 7.0 ha [95% CI: 4.9-9.1]). Buntings nesting in the managed pine-oak forest flew long distances (>= 300 m) to forage in salt marshes, freshwater wetlands, and moist forest clearings. In maritime shrub, buntings occupied a compact area and rarely moved long distances. The painted bunting population of Sapelo Island requires conservation of maritime shrub as potential optimum nesting habitat and management of nesting habitat in open-canopy pine-oak sawtimber forests by periodic prescribed fire (every 4-6 years) and timber thinning within a landscape that contains salt marsh or freshwater wetland openings within 700 m of those forests.
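A minimal Kaplan-Meier sketch, with invented tracking times and censoring flags rather than the Sapelo Island data, showing how such survival estimates are formed:

```python
# Kaplan-Meier product-limit estimator from (time, event) pairs,
# where event = 0 marks a censored (lost or study-end) individual.
def kaplan_meier(times, events):
    """Return [(t, S(t))] at each observed death time."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    at_risk, surv, curve = n, 1.0, []
    for i in order:
        if events[i]:                       # death observed at this time
            surv *= (at_risk - 1) / at_risk
            curve.append((times[i], surv))
        at_risk -= 1                        # dead or censored: leaves risk set
    return curve

times  = [10, 20, 25, 30, 45, 60]           # days tracked (hypothetical)
events = [ 0,  1,  0,  1,  0,  0]           # 1 = mortality, 0 = censored
curve = kaplan_meier(times, events)
print(curve)
```

Censored birds still count in the risk set up to their last observation, which is what distinguishes this estimator from a naive fraction-surviving calculation.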
Groten, Joel T.; Ellison, Christopher A.; Mahoney, Mollie H.
2016-06-30
Excess sediment in rivers and estuaries poses serious environmental and economic challenges. The U.S. Army Corps of Engineers (USACE) routinely dredges sediment in Federal navigation channels to maintain commercial shipping operations. The USACE initiated a 3-year pilot project in 2013 to use navigation channel dredged material to aid in restoration of shoreline habitat in the 21st Avenue West Channel Embayment of the Duluth-Superior Harbor. Placing dredged material in the 21st Avenue West Channel Embayment supports the restoration of shallow bay aquatic habitat aiding in the delisting of the St. Louis River Estuary Area of Concern. The U.S. Geological Survey, in cooperation with the USACE, collected turbidity and suspended-sediment concentrations (SSCs) in 2014 and 2015 to measure the horizontal and vertical distribution of SSCs during placement operations of dredged materials. These data were collected to help the USACE evaluate the use of several best management practices, including various dredge material placement techniques and a silt curtain, to mitigate the dispersion of suspended sediment. Three-dimensional visualization maps are a valuable tool for assessing the spatial displacement of SSCs. Data collection was designed to coincide with four dredged placement configurations that included periods with and without a silt curtain as well as before and after placement of dredged materials. Approximately 230 SSC samples and corresponding turbidity values collected in 2014 and 2015 were used to develop a simple linear regression model between SSC and turbidity. Using the simple linear regression model, SSCs were estimated for approximately 3,000 turbidity values at approximately 100 sampling sites in the 21st Avenue West Channel Embayment of the Duluth-Superior Harbor. The estimated SSCs served as input for development of 12 three-dimensional visualization maps.
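The calibration step — fitting a simple linear regression of SSC on turbidity, then predicting SSC at unsampled turbidity values — can be sketched with synthetic pairs; the true slope and intercept below are invented, not the USGS model:

```python
import numpy as np

# Synthetic paired samples standing in for the ~230 SSC-turbidity pairs.
rng = np.random.default_rng(2)
turbidity = rng.uniform(5, 200, size=230)              # NTU
ssc = 1.8 * turbidity + 4.0 + rng.normal(0, 10, 230)   # mg/L, with noise

# Ordinary least-squares line, then prediction from turbidity alone.
slope, intercept = np.polyfit(turbidity, ssc, 1)
predicted = slope * np.array([50.0, 120.0]) + intercept
print(slope, intercept, predicted)
```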
Hanley, O; Gutiérrez-Villanueva, J L; Currivan, L; Pollard, D
2008-10-01
The RPII radon (Rn) laboratory holds accreditation for the International Standard ISO/IEC 17025. A requirement of this standard is an estimate of the uncertainty of measurement. This work shows two approaches to estimating the uncertainty. The bottom-up approach involved identifying the components found to contribute to the uncertainty. Estimates were made for each of these components, which were combined to give a combined uncertainty of 13.5% at a Rn concentration of approximately 2500 Bq m⁻³ at the 68% confidence level. By applying a coverage factor of k = 2, the expanded uncertainty is ±27% at the 95% confidence level. The top-down approach used information previously gathered from intercomparison exercises to estimate the uncertainty. This investigation found an expanded uncertainty of ±22% at approximately the 95% confidence level. The two independent estimates are thus in good agreement.
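The bottom-up combination can be sketched directly: independent relative uncertainty components combine in quadrature, and the coverage factor k = 2 expands the result to roughly the 95% level. The four component values below are illustrative, chosen only so the combined figure lands near the laboratory's 13.5%:

```python
import math

# Hypothetical relative standard uncertainties of individual components.
components = [0.08, 0.07, 0.06, 0.05]

combined = math.sqrt(sum(u * u for u in components))  # root-sum-of-squares
expanded = 2.0 * combined                             # coverage factor k = 2
print(round(combined, 3), round(expanded, 3))         # ~0.132 and ~0.264
```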
Error Estimation for the Linearized Auto-Localization Algorithm
Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando
2012-01-01
The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965
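The first-order Taylor idea — propagating a measurement covariance through the Jacobian of a nonlinear function — can be shown in a minimal sketch; the distance function and numbers are hypothetical, not the LAL trilateration equations:

```python
import numpy as np

# Variance of g(x) approximated by J Σ Jᵀ, with J the Jacobian at x.
def g(x):
    return np.array([np.hypot(x[0], x[1])])   # e.g. a distance from coordinates

x = np.array([3.0, 4.0])                      # measured coordinates
sigma = np.diag([0.1**2, 0.2**2])             # measurement covariance

# Numerical Jacobian by central differences.
eps = 1e-6
J = np.array([[(g(x + eps * np.eye(2)[i])[0] - g(x - eps * np.eye(2)[i])[0])
               / (2 * eps) for i in range(2)]])
var = J @ sigma @ J.T
print(np.sqrt(var[0, 0]))                     # propagated standard deviation
```

Because the approximation is only first order, a confidence measure like the paper's τ is needed wherever g is strongly curved over the measurement scatter.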
Sampling schemes and parameter estimation for nonlinear Bernoulli-Gaussian sparse models
NASA Astrophysics Data System (ADS)
Boudineau, Mégane; Carfantan, Hervé; Bourguignon, Sébastien; Bazot, Michael
2016-06-01
We address the sparse approximation problem in the case where the data are approximated by the linear combination of a small number of elementary signals, each of these signals depending non-linearly on additional parameters. Sparsity is explicitly expressed through a Bernoulli-Gaussian hierarchical model in a Bayesian framework. Posterior mean estimates are computed using Markov Chain Monte-Carlo algorithms. We generalize the partially marginalized Gibbs sampler proposed in the linear case in [1], and build a hybrid Hastings-within-Gibbs algorithm in order to account for the nonlinear parameters. All model parameters are then estimated in an unsupervised procedure. The resulting method is evaluated on a sparse spectral analysis problem. It is shown to converge more efficiently than the classical joint estimation procedure, with only a slight increase of the computational cost per iteration, consequently reducing the global cost of the estimation procedure.
NASA Astrophysics Data System (ADS)
Imani Masouleh, Mehdi; Limebeer, David J. N.
2018-07-01
In this study we will estimate the region of attraction (RoA) of the lateral dynamics of a nonlinear single-track vehicle model. The tyre forces are approximated using rational functions that are shown to capture the nonlinearities of tyre curves significantly better than polynomial functions. An existing sum-of-squares (SOS) programming algorithm for estimating regions of attraction is extended to accommodate the use of rational vector fields. This algorithm is then used to find an estimate of the RoA of the vehicle lateral dynamics. The influence of vehicle parameters and driving conditions on the stability region are studied. It is shown that SOS programming techniques can be used to approximate the stability region without resorting to numerical integration. The RoA estimate from the SOS algorithm is compared to the existing results in the literature. The proposed method is shown to obtain significantly better RoA estimates.
NASA Technical Reports Server (NTRS)
Quek, Kok How Francis
1990-01-01
A method of computing reliable Gaussian and mean curvature sign-map descriptors from the polynomial approximation of surfaces was demonstrated. Such descriptors which are invariant under perspective variation are suitable for hypothesis generation. A means for determining the pose of constructed geometric forms whose algebraic surface descriptors are nonlinear in terms of their orienting parameters was developed. This was done by means of linear functions which are capable of approximating nonlinear forms and determining their parameters. It was shown that biquadratic surfaces are suitable companion linear forms for cylindrical approximation and parameter estimation. The estimates provided the initial parametric approximations necessary for a nonlinear regression stage to fine tune the estimates by fitting the actual nonlinear form to the data. A hypothesis-based split-merge algorithm for extraction and pose determination of cylinders and planes which merge smoothly into other surfaces was developed. It was shown that all split-merge algorithms are hypothesis-based. A finite-state algorithm for the extraction of the boundaries of run-length regions was developed. The computation takes advantage of the run list topology and boundary direction constraints implicit in the run-length encoding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao
Surrogate models are commonly used in Bayesian approaches such as Markov Chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimations of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or implementing MCMC in a two-stage manner. Since the two-stage MCMC requires extra original model evaluations, the computational cost is still high. If the information of measurement is incorporated, a locally accurate approximation of the original model can be adaptively constructed with low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimation of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing the estimation accuracy, the new approach achieves about 200 times of speed-up compared to our previous work using two-stage MCMC.
Global Burden of Leptospirosis: Estimated in Terms of Disability Adjusted Life Years
Torgerson, Paul R.; Hagan, José E.; Costa, Federico; Calcagno, Juan; Kane, Michael; Martinez-Silveira, Martha S.; Goris, Marga G. A.; Stein, Claudia; Ko, Albert I.; Abela-Ridder, Bernadette
2015-01-01
Background: Leptospirosis, a spirochaetal zoonosis, occurs in diverse epidemiological settings and affects vulnerable populations, such as rural subsistence farmers and urban slum dwellers. Although leptospirosis can cause life-threatening disease, no global burden of disease estimate in terms of Disability Adjusted Life Years (DALYs) is available. Methodology/Principal Findings: We utilised the results of a parallel publication that reported global estimates of morbidity and mortality due to leptospirosis. We estimated Years of Life Lost (YLLs) from age and gender stratified mortality rates. Years of Life with Disability (YLDs) were developed from a simple disease model indicating likely sequelae. DALYs were estimated from the sum of YLLs and YLDs. The study suggested that globally approximately 2.90 million DALYs are lost per annum (UIs 1.25-4.54 million) from the approximately 1.03 million annual cases reported previously. Males are predominantly affected, with an estimated 2.33 million DALYs (UIs 0.98-3.69) or approximately 80% of the total burden. For comparison, this is over 70% of the global burden of cholera estimated by GBD 2010. Tropical regions of South and South-east Asia, Western Pacific, Central and South America, and Africa had the highest estimated leptospirosis disease burden. Conclusions/Significance: Leptospirosis imparts a significant health burden worldwide, approaching or exceeding that found for a number of other zoonotic and neglected tropical diseases. The study findings indicate that the highest burden estimates occur in resource-poor tropical countries, including regions of Africa where the burden of leptospirosis has been under-appreciated and possibly misallocated to other febrile illnesses such as malaria. PMID:26431366
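The DALY identity used in the abstract is simple enough to verify in a few lines; the YLL/YLD split shown is a hypothetical decomposition consistent with the abstract's 2.90 million total, not the study's actual breakdown:

```python
# DALYs = YLLs (mortality) + YLDs (morbidity).
ylls = 2.45e6    # hypothetical split: years of life lost
ylds = 0.45e6    # hypothetical split: years lived with disability
dalys = ylls + ylds

male_share = 2.33e6 / dalys   # abstract's male burden over the total
print(dalys, round(male_share, 2))   # 2.90 million DALYs, ~0.80 male share
```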
Acceleration estimation using a single GPS receiver for airborne scalar gravimetry
NASA Astrophysics Data System (ADS)
Zhang, Xiaohong; Zheng, Kai; Lu, Cuixian; Wan, Jiakuan; Liu, Zhanke; Ren, Xiaodong
2017-11-01
Kinematic acceleration estimated using the global positioning system (GPS) is significant for airborne scalar gravimetry. As the conventional approach based on the differential global positioning system (DGPS) presents several drawbacks, including additional cost and the impracticality of setting up nearby base stations in challenging environments, we introduce an alternative approach, Modified Kin-VADASE (MKin-VADASE), which requires no ground base stations. In this approach, the aircraft velocities are first estimated with the modified Kin-VADASE. Then the accelerations are obtained from the velocity estimates using a Taylor-approximation differentiator. The impact of carrier-phase measurement noise and satellite ephemeris errors on acceleration estimates is investigated carefully in the frequency domain with the Fast Fourier Transform Algorithm (FFT). The results show that the satellite clock products have a significant impact on the acceleration estimates. The performance of MKin-VADASE, PPP, and DGPS is then validated using flight tests carried out in Shanxi Province, China. The accelerations are estimated using the three approaches, then used to calculate the gravity disturbances. Finally, analysis of crossover differences and terrestrial gravity data is used to evaluate the accuracy of the gravity disturbance estimates. The results show that the performances of MKin-VADASE, PPP and DGPS are comparable, but the computational complexity of MKin-VADASE is greatly reduced with regard to PPP and DGPS. For the results of the three approaches, the RMS of crossover differences of gravity disturbance estimates is approximately 1-1.5 mGal at a spatial resolution of 3.5 km (half wavelength) after crossover adjustment, and the accuracy is approximately 3-4 mGal with respect to terrestrial gravity data.
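The differentiation step can be sketched with a central-difference (first-order Taylor) differentiator applied to a synthetic velocity series whose true acceleration is known; the signal is illustrative, not flight data:

```python
import numpy as np

# Synthetic velocity series: v(t) = 5 sin(0.5 t), so a(t) = 2.5 cos(0.5 t).
dt = 0.1                                  # s, sample interval
t = np.arange(0.0, 10.0, dt)
v = 5.0 * np.sin(0.5 * t)                 # m/s

a = np.gradient(v, dt)                    # central differences in the interior
print(a[50], 2.5 * np.cos(0.5 * t[50]))   # estimate vs. truth at t = 5 s
```

In practice the differentiator amplifies high-frequency velocity noise, which is why the abstract analyses the error sources in the frequency domain before forming gravity disturbances.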
Calibrating the mental number line.
Izard, Véronique; Dehaene, Stanislas
2008-03-01
Human adults are thought to possess two dissociable systems to represent numbers: an approximate quantity system akin to a mental number line, and a verbal system capable of representing numbers exactly. Here, we study the interface between these two systems using an estimation task. Observers were asked to estimate the approximate numerosity of dot arrays. We show that, in the absence of calibration, estimates are largely inaccurate: responses increase monotonically with numerosity, but underestimate the actual numerosity. However, insertion of a few inducer trials, in which participants are explicitly (and sometimes misleadingly) told that a given display contains 30 dots, is sufficient to calibrate their estimates on the whole range of stimuli. Based on these empirical results, we develop a model of the mapping between the numerical symbols and the representations of numerosity on the number line.
Viscarra Rossel, Raphael A; Webster, Richard; Bui, Elisabeth N; Baldock, Jeff A
2014-01-01
We can effectively monitor soil condition—and develop sound policies to offset the emissions of greenhouse gases—only with accurate data from which to define baselines. Currently, estimates of soil organic C for countries or continents are either unavailable or largely uncertain because they are derived from sparse data, with large gaps over many areas of the Earth. Here, we derive spatially explicit estimates, and their uncertainty, of the distribution and stock of organic C in the soil of Australia. We assembled and harmonized data from several sources to produce the most comprehensive set of data on the current stock of organic C in soil of the continent. Using them, we have produced a fine spatial resolution baseline map of organic C at the continental scale. We describe how we made it by combining the bootstrap, a decision tree with piecewise regression on environmental variables and geostatistical modelling of residuals. Values of stock were predicted at the nodes of a 3-arc-sec (approximately 90 m) grid and mapped together with their uncertainties. We then calculated baselines of soil organic C storage over the whole of Australia, its states and territories, and regions that define bioclimatic zones, vegetation classes and land use. The average amount of organic C in Australian topsoil is estimated to be 29.7 t ha−1 with 95% confidence limits of 22.6 and 37.9 t ha−1. The total stock of organic C in the 0–30 cm layer of soil for the continent is 24.97 Gt with 95% confidence limits of 19.04 and 31.83 Gt. This represents approximately 3.5% of the total stock in the upper 30 cm of soil worldwide. Australia occupies 5.2% of the global land area, so the total organic C stock of Australian soil makes an important contribution to the global carbon cycle, and it provides a significant potential for sequestration. As the most reliable approximation of the stock of organic C in Australian soil in 2010, our estimates have important applications. 
They could support Australia's National Carbon Accounting System, help guide the formulation of policy around carbon offset schemes, improve Australia's carbon balances, serve to direct future sampling for inventory, guide the design of monitoring networks and provide a benchmark against which to assess the impact of changes in land cover, land management and climate on the stock of C in Australia. In this way, these estimates would help us to develop strategies to adapt and mitigate the effects of climate change. PMID:24599716
Tropical forest plantation biomass estimation using RADARSAT-SAR and TM data of South China
NASA Astrophysics Data System (ADS)
Wang, Chenli; Niu, Zheng; Gu, Xiaoping; Guo, Zhixing; Cong, Pifu
2005-10-01
Forest biomass is one of the most important parameters for global carbon stock models yet can only be estimated with great uncertainty. Remote sensing, especially SAR data, offers the possibility of providing relatively accurate forest biomass estimates at a lower cost than inventory in tropical forests. The goal of this research was to compare the sensitivity of forest biomass to Landsat TM and RADARSAT-SAR data and to assess the efficiency of NDVI, EVI and other vegetation indices in estimating forest biomass, based on field survey data and GIS in South China. Based on vegetation indices and factor analysis, multiple regression and neural networks were developed for biomass estimation for each species of the plantation. For each species, the better relationship between the predicted biomass and that measured from the field survey was obtained with a neural network developed for the species. The relationship between predicted and measured biomass derived from vegetation indices differed between species. This study concludes that single bands and many vegetation indices are weakly correlated with selected forest biomass. The RADARSAT-SAR backscatter coefficient has a relatively good logarithmic correlation with forest biomass, but neither TM spectral bands nor vegetation indices alone are sufficient to establish an efficient model for biomass estimation due to the saturation of bands and vegetation indices; multiple regression models that combine spectral and environmental variables improve biomass estimation performance. Compared with TM, relatively good estimation results can be achieved with RADARSAT-SAR, but both had limitations in tropical forest biomass estimation. The estimation results obtained are not accurate enough for forest management purposes at the forest stand level. However, the approximate volume estimates derived by the method can be useful in areas where no other forest information is available.
Therefore, this paper provides a better understanding of relationships of remote sensing data and forest stand parameters used in forest parameter estimation models.
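The multiple-regression step described in this abstract can be sketched with ordinary least squares on vegetation indices; the NDVI/EVI values and biomass figures below are invented for illustration and are not from the study:

```python
import numpy as np

# Hypothetical plot data: NDVI, EVI, and field-measured biomass (t/ha).
ndvi = np.array([0.42, 0.55, 0.61, 0.70, 0.78, 0.83])
evi = np.array([0.30, 0.38, 0.45, 0.52, 0.60, 0.66])
biomass = np.array([35.0, 58.0, 74.0, 95.0, 118.0, 131.0])

# Design matrix with an intercept column; ordinary least squares fit.
X = np.column_stack([np.ones_like(ndvi), ndvi, evi])
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)

# Coefficient of determination of the fit.
predicted = X @ coef
r2 = 1 - np.sum((biomass - predicted) ** 2) / np.sum((biomass - biomass.mean()) ** 2)
print(coef, r2)
```

The saturation problem the abstract mentions would appear here as biomass continuing to rise while the indices plateau, which is why the authors add backscatter and environmental variables.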
Analysis and Management of Animal Populations: Modeling, Estimation and Decision Making
Williams, B.K.; Nichols, J.D.; Conroy, M.J.
2002-01-01
This book deals with the processes involved in making informed decisions about the management of animal populations. It covers the modeling of population responses to management actions, the estimation of quantities needed in the modeling effort, and the application of these estimates and models to the development of sound management decisions. The book synthesizes and integrates in a single volume the methods associated with these themes, as they apply to ecological assessment and conservation of animal populations. KEY FEATURES * Integrates population modeling, parameter estimation and * decision-theoretic approaches to management in a single, cohesive framework * Provides authoritative, state-of-the-art descriptions of quantitative * approaches to modeling, estimation and decision-making * Emphasizes the role of mathematical modeling in the conduct of science * and management * Utilizes a unifying biological context, consistent mathematical notation, * and numerous biological examples
Low-dimensional Representation of Error Covariance
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
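The forecast/analysis cycle that the paper carries to steady state can be illustrated by iterating the covariance updates of a linear Kalman filter on a toy two-state system; the dynamics, observation operator, and noise levels below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Toy stable linear system: damped rotation dynamics M, observing only the
# first state through H, with model error Q and observation error R.
M = 0.95 * np.array([[np.cos(0.3), -np.sin(0.3)],
                     [np.sin(0.3),  np.cos(0.3)]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])

# Iterate the forecast/analysis covariance cycle to steady state.
Pa = np.eye(2)
for _ in range(500):
    Pf = M @ Pa @ M.T + Q                            # forecast step
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    Pa = (np.eye(2) - K @ H) @ Pf                    # analysis step

# The eigenvalue spread of Pa shows how much analysis error variance a
# low-dimensional (here, one-mode) representation would capture.
eigvals = np.linalg.eigvalsh(Pa)
print(eigvals)
```

In the paper's terms, the bound matrix provides analytical estimates of exactly these steady-state eigenvalues without running the cycle to convergence.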
Time Trends in the Family Physician Management of Insomnia: The Australian Experience (2000-2015).
Miller, Christopher B; Valenti, Lisa; Harrison, Christopher M; Bartlett, Delwyn J; Glozier, Nick; Cross, Nathan E; Grunstein, Ronald R; Britt, Helena C; Marshall, Nathaniel S
2017-06-15
To evaluate changes in rates of family physician (FP) management of insomnia in Australia from 2000-2015. The Bettering the Evaluation And Care of Health (BEACH) program is a nationally representative cross-sectional survey of the activity of 1,000 newly randomly sampled family physicians in Australia per year, who each record details of 100 consecutive patient encounters. This provided records of approximately 100,000 encounters each year. We identified all encounters with patients older than 15 years where insomnia or difficulty sleeping was managed and assessed trends in these encounters from 2000-2015. There was no change in the management rate of insomnia from 2000-2007 (1.54 per 100 encounters [95% confidence interval (CI): 1.49-1.58]). This rate was lower from 2008-2015 (1.31 per 100 encounters [95% CI: 1.27-1.35]). There was no change in FP management: pharmacotherapy was used in approximately 90% of encounters; nonpharmacological advice was given in approximately 20%; and onward referral occurred in approximately 1% of encounters. Prescription of temazepam changed from 54.6 [95% CI: 51.4-57.9] per 100 insomnia problems in 2000-2001 to 43.6 [95% CI: 40.1-47.0] in 2014-2015, whereas zolpidem increased steadily from its introduction in 2000 to 14.6 [95% CI: 12.2-17.1] per 100 insomnia problems in 2006-2007, and then decreased to 7.3 [95% CI: 5.4-9.2] by 2014-2015. Insomnia management frequency decreased after 2007 in conjunction with ecologically associated Australian media reporting of adverse effects linked to zolpidem use. Australian FPs remain reliant on pharmacotherapy for the management of insomnia. © 2017 American Academy of Sleep Medicine
Analytical Phase Equilibrium Function for Mixtures Obeying Raoult's and Henry's Laws
NASA Astrophysics Data System (ADS)
Hayes, Robert
When a mixture of two substances exists in both the liquid and gas phases at equilibrium, Raoult's and Henry's laws (the ideal solution and ideal dilute solution approximations) can be used to estimate the gas and liquid mole fractions at the extremes of very little solute or very little solvent. By assuming that a cubic polynomial can reasonably approximate the values intermediate to these extremes as a function of mole fraction, the cubic polynomial is solved and presented. A closed-form equation approximating the dependence of pressure on the mole fractions of the constituents is thereby obtained. As a first approximation, this is a very simple and potentially useful means of estimating the gas and liquid mole fractions of equilibrium mixtures. Mixtures with an azeotrope require additional attention if this type of approach is to be utilized. This work was supported in part by federal Grant NRC-HQ-84-14-G-0059.
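A minimal sketch of the kind of cubic construction the abstract describes: a polynomial p(x) constrained to match the Henry's-law slope k_H at x = 0 (p(0) = 0, p'(0) = k_H) and Raoult's-law behavior p = p*·x at x = 1 (p(1) = p*, p'(1) = p*). The numerical constants are invented for illustration, and the paper's actual polynomial may differ:

```python
import numpy as np

def cubic_partial_pressure(x, p_star, k_h):
    """Cubic p(x) = a*x^3 + b*x^2 + c*x with the four constraints
    p(0) = 0, p'(0) = k_h (Henry), p(1) = p_star, p'(1) = p_star (Raoult).
    Solving the constraints gives a = k_h - p_star, b = 2*(p_star - k_h)."""
    a = k_h - p_star
    b = 2.0 * (p_star - k_h)
    return a * x**3 + b * x**2 + k_h * x

# Hypothetical constants: pure-component vapor pressure and Henry constant.
p_star, k_h = 50.0, 200.0  # e.g. torr

x = np.linspace(0.0, 1.0, 5)
print(cubic_partial_pressure(x, p_star, k_h))
```

With four constraints and four cubic coefficients the interpolant is unique, which is what makes the closed form possible.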
Application of expert systems in project management decision aiding
NASA Technical Reports Server (NTRS)
Harris, Regina; Shaffer, Steven; Stokes, James; Goldstein, David
1987-01-01
The feasibility of developing an expert systems-based project management decision aid to enhance the performance of NASA project managers was assessed. The research effort included extensive literature reviews in the areas of project management, project management decision aiding, expert systems technology, and human-computer interface engineering. Literature reviews were augmented by focused interviews with NASA managers. Time estimation for project scheduling was identified as the target activity for decision augmentation, and a design was developed for an Integrated NASA System for Intelligent Time Estimation (INSITE). The proposed INSITE design was judged feasible with a low level of risk. A partial proof-of-concept experiment was performed and was successful. Specific conclusions drawn from the research and analyses are included. The INSITE concept is potentially applicable in any management sphere, commercial or government, where time estimation is required for project scheduling. As project scheduling is a nearly universal management activity, the range of possibilities is considerable. The INSITE concept also holds potential for enhancing other management tasks, especially in areas such as cost estimation, where estimation-by-analogy is already a proven method.
Rectal temperature-based death time estimation in infants.
Igari, Yui; Hosokai, Yoshiyuki; Funayama, Masato
2016-03-01
In determining the time of death in infants based on rectal temperature, the same methods as those used in adults are generally applied. However, whether the methods for adults are suitable for infants is unclear. In this study, we examined the following 3 methods in 20 infant death cases: computer simulation of rectal temperature based on the infinite cylinder model (Ohno's method), computer-based double exponential approximation based on Marshall and Hoare's double exponential model with Henssge's parameter determination (Henssge's method), and computer-based collinear approximation based on extrapolation of the rectal temperature curve (collinear approximation). The interval between the last time the infant was seen alive and the time that he/she was found dead was defined as the death time interval and compared with the estimated time of death. With Ohno's method, 7 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. The results of both Henssge's method and collinear approximation were apparently inferior to those of Ohno's method. The corrective factor was set within the range of 0.7-1.3 in Henssge's method, and a modified program was newly developed to make it possible to change the corrective factors. Modification A, in which the upper limit of the corrective factor range was set as the maximum value for each body weight, produced the best results: 8 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. There was a possibility that the influence of thermal isolation on the actual infants was stronger than that previously shown by Henssge. We conclude that Ohno's method and Modification A are useful for death time estimation in infants. However, it is important to accept the estimated time of death with a certain latitude, considering other circumstances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
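Henssge's method referenced above rests on the Marshall and Hoare double-exponential cooling model. A sketch, using Henssge's published constant B for ambient temperatures below about 23 °C and a bisection solve; the measurement values are hypothetical, and real casework applies the corrective factors (and infant-specific modifications) this sketch omits:

```python
from math import exp

def henssge_Q(t_hours, body_mass_kg):
    """Standardized temperature Q(t) from the Marshall-Hoare double
    exponential with Henssge's constant B (ambient <= 23 C)."""
    B = -1.2815 * body_mass_kg ** -0.625 + 0.0284
    return 1.25 * exp(B * t_hours) - 0.25 * exp(5.0 * B * t_hours)

def estimate_pmi(rectal_c, ambient_c, body_mass_kg, t_max=50.0):
    """Bisection solve of Q(t) = measured Q for the post-mortem interval.
    Q decreases monotonically from 1 at t = 0 toward 0."""
    q_target = (rectal_c - ambient_c) / (37.2 - ambient_c)
    lo, hi = 0.0, t_max
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if henssge_Q(mid, body_mass_kg) > q_target:
            lo = mid   # model still warmer than measured: true interval is longer
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical infant case: rectal 30 C, ambient 18 C, body mass 5 kg.
print(estimate_pmi(30.0, 18.0, 5.0))
```

The study's point is precisely that such adult-derived constants transfer poorly to infants, motivating Modification A's widened corrective factors.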
Twenty-year follow-up study of radiocesium migration in soil.
Clouvas, A; Xanthos, S; Takoudis, G; Antonopoulos-Domis, M; Zinoviadis, G; Vidmar, T; Likar, A
2007-01-01
The profile of ¹³⁷Cs present in undisturbed soil due to the Chernobyl accident was measured repeatedly for approximately 20 y. The vertical migration of ¹³⁷Cs in soil is a very slow process. The mean vertical migration velocity is estimated at approximately 0.1-0.2 cm y⁻¹. A method based on in situ gamma spectrometry measurements and Monte Carlo computations, aimed at estimating the profile of ¹³⁷Cs without performing any soil sampling, is investigated.
NASA Astrophysics Data System (ADS)
Bajjouk, Touria; Rochette, Sébastien; Laurans, Martial; Ehrhold, Axel; Hamdi, Anouar; Le Niliot, Philippe
2015-06-01
The Molène Archipelago in Brittany (France) hosts one of the largest kelp forests in Europe. Beyond their recognized ecological importance as an essential habitat and food for a variety of marine species, kelp also contributes towards regional economies by means of the alginate industry. Thousands of tons of kelp are collected each year for the needs of the chemical and food industries. Kelp harvesting in Brittany mainly concerns two species, Laminaria digitata (59,000 t) and Laminaria hyperborea (24,000 t), that, together, represent approximately 95% of the national landings. Estimating the available standing stock and its distribution is a clear need for providing appropriate and sustainable management measures. Prior to estimating the spatial distribution of biomasses, we produced a detailed seabed topography map with accurate hard substrate delineation thanks to surveys and appropriate processing of airborne optical and acoustic imaging. Habitat suitability models of presence-absence and biomass were then developed for each species by relating in situ observations from underwater video and sampling to the many biotic and abiotic factors that may govern kelp species distribution. Our statistical approach combining generalized additive models (GAM) in a delta approach also provided spatial uncertainty associated with each prediction to help management decisions. This study confirmed that the adopted strategy, based on an integrated approach, enhanced knowledge on kelp biomass distributions in the Molène Archipelago and provided a promising direct link between research and management. Indeed, the high resolution topography and hard substrate maps produced for the study greatly improved knowledge on the sea bottom of the area. This was also of major importance for an accurate mapping of kelp distribution. The quality of the habitat suitability models was verified with fishing effort data (RECOPESCA program) and confirmed by local managers and kelp harvesters. 
Based on the biomass maps produced and their associated confidence intervals, we proposed more precise management rules than those already in use for both L. digitata and L. hyperborea. Our mapping approach is a first step towards sustainable kelp species management in the area. Introducing higher resolution environmental variables and population dynamics would help interannual management.
Skrbinšek, Tomaž; Jelenčič, Maja; Waits, Lisette; Kos, Ivan; Jerina, Klemen; Trontelj, Peter
2012-02-01
The effective population size (Ne) could be the ideal parameter for monitoring populations of conservation concern, as it conveniently summarizes both the evolutionary potential of the population and its sensitivity to genetic stochasticity. However, tracing its change through time is difficult in natural populations. We applied four new methods for estimating Ne from a single sample of genotypes to trace temporal change in Ne for bears in the Northern Dinaric Mountains. We genotyped 510 bears using 20 microsatellite loci and determined their age. The samples were organized into cohorts with regard to the year when the animals were born and into yearly samples with age categories for every year when they were alive. We used the Estimator by Parentage Assignment (EPA) to directly estimate both Ne and the generation interval for each yearly sample. For cohorts, we estimated the effective number of breeders (Nb) using linkage disequilibrium, sibship assignment and approximate Bayesian computation methods and extrapolated these estimates to Ne using the generation interval. The Ne estimate by EPA is 276 (183-350 95% CI), meeting the inbreeding-avoidance criterion of Ne > 50 but short of the long-term minimum viable population goal of Ne > 500. The results obtained by the other methods are highly consistent with this result, and all indicate a rapid increase in Ne, probably in the late 1990s and early 2000s. The new single-sample approaches to the estimation of Ne provide efficient means for including Ne in monitoring frameworks and will be of great importance for future management and conservation. © 2012 Blackwell Publishing Ltd.
Gunter, Stacey A; Bradford, James A; Moffet, Corey A
2017-01-01
Methane (CH4) and carbon dioxide (CO2) represent 11 and 81%, respectively, of all anthropogenic greenhouse gas emissions. Agricultural CH4 emissions account for approximately 43% of all anthropogenic CH4 emissions. Most agricultural CH4 emissions are attributed to enteric fermentation within ruminant livestock; hence the heightened interest in quantifying and mitigating this source. The automated, open-circuit gas quantification system (GQS; GreenFeed, C-Lock, Inc., Rapid City, SD) evaluated here can be placed in a pasture with grazing cattle and can measure their CH4 and CO2 emissions with spot sampling. However, improper management of the GQS can have an erroneous effect on emission estimates. One factor affecting the quality of emission estimates is the airflow rate through the GQS, which must ensure a complete capture of the breath cloud emitted by the animal; it is hypothesized that at lower airflow rates this cloud will be incompletely captured. To evaluate the effect of airflow rate through the GQS on emission estimates, a data set of 758 CO2 and CH4 emission estimates with airflows ranging from 10.7 to 36.6 L/s was evaluated. When airflow through the GQS was between 26.0 and 36.6 L/s, CO2 and CH4 emission estimates were not affected (P = 0.14 and 0.05, respectively). When airflow rates were less than 26.0 L/s, CO2 and CH4 emission estimates were lower and decreased as airflow rate decreased (P < 0.0001). We hypothesize that when airflow through the GQS decreases below 26 L/s, breath capture is incomplete and CO2 and CH4 emissions are underestimated. Maintaining mass airflow through a GQS at rates greater than 26 L/s is important for producing high-quality CO2 and CH4 emission estimates.
Density estimation in wildlife surveys
Bart, Jonathan; Droege, Sam; Geissler, Paul E.; Peterjohn, Bruce G.; Ralph, C. John
2004-01-01
Several authors have recently discussed the problems with using index methods to estimate trends in population size. Some have expressed the view that index methods should virtually never be used. Others have responded by defending index methods and questioning whether better alternatives exist. We suggest that index methods are often a cost-effective component of valid wildlife monitoring but that double-sampling or another procedure that corrects for bias or establishes bounds on bias is essential. The common assertion that index methods require constant detection rates for trend estimation is mathematically incorrect; the requirement is no long-term trend in detection "ratios" (index result/parameter of interest), a requirement that is probably approximately met by many well-designed index surveys. We urge that more attention be given to defining bird density rigorously and in ways useful to managers. Once this is done, 4 sources of bias in density estimates may be distinguished: coverage, closure, surplus birds, and detection rates. Distance, double-observer, and removal methods do not reduce bias due to coverage, closure, or surplus birds. These methods may yield unbiased estimates of the number of birds present at the time of the survey, but only if their required assumptions are met, which we doubt occurs very often in practice. Double-sampling, in contrast, produces unbiased density estimates if the plots are randomly selected and estimates on the intensive surveys are unbiased. More work is needed, however, to determine the feasibility of double-sampling in different populations and habitats. We believe the tension that has developed over appropriate survey methods can best be resolved through increased appreciation of the mathematical aspects of indices, especially the effects of bias, and through studies in which candidate methods are evaluated against known numbers determined through intensive surveys.
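The double-sampling correction argued for in this abstract can be sketched as a ratio estimator: cheap index counts on all plots are scaled by a detection ratio estimated on an intensively surveyed subsample. All counts below are invented for illustration:

```python
import numpy as np

# Hypothetical survey: index counts on all plots, plus intensive
# (near-complete) counts on a random subsample of those plots.
index_all = np.array([12, 8, 15, 10, 9, 14, 11, 7, 13, 10])

# Subsample of 4 plots surveyed both ways:
index_sub = np.array([12, 15, 9, 13])
true_sub = np.array([20, 24, 16, 22])   # intensive-survey counts

# Ratio estimator: scale the cheap index by the estimated detection ratio
# (index result / parameter of interest, in the abstract's terminology).
detection_ratio = index_sub.sum() / true_sub.sum()
estimated_total = index_all.sum() / detection_ratio
print(round(estimated_total, 1))
```

This is unbiased in the sense the abstract describes only if the intensive plots are randomly selected and the intensive counts themselves are unbiased.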
75 FR 69619 - East Reservoir Project; Kootenai National Forest, Lincoln County, MT
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-15
... harvest. Vegetation treatments total approximately 13,000 acres of treated area. (2) Road management includes new road construction, road storage and adding existing, undetermined roads to the National Forest Service road system. Approximately 2.04 miles of new road construction is proposed. Approximately 40 miles...
Approximate Joint Diagonalization and Geometric Mean of Symmetric Positive Definite Matrices
Congedo, Marco; Afsari, Bijan; Barachant, Alexandre; Moakher, Maher
2015-01-01
We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is a considerable interest in estimating the geometric mean of a SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms feature at the same time fast convergence, low computational complexity per iteration and guarantee of convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean as compared to its competitors and verifies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations. PMID:25919667
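For the two-matrix case the abstract mentions, the geometric mean has the algebraic closed form A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2). A NumPy sketch; the matrices are arbitrary illustrative SPD examples, not data from the paper:

```python
import numpy as np

def spd_sqrt(A):
    """Matrix square root of a symmetric positive definite matrix via eigh."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(w)) @ V.T

def geometric_mean_2(A, B):
    """Closed-form geometric mean A # B = A^(1/2)(A^(-1/2) B A^(-1/2))^(1/2) A^(1/2)."""
    As = spd_sqrt(A)
    Ais = np.linalg.inv(As)
    return As @ spd_sqrt(Ais @ B @ Ais) @ As

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
G = geometric_mean_2(A, B)

# One invariance property: det(A # B)^2 = det(A) * det(B).
print(np.linalg.det(G) ** 2, np.linalg.det(A) * np.linalg.det(B))
```

For sets of more than two matrices no such closed form exists, which is where the paper's AJD-based approximation of the Fisher information mean comes in.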
Stochastic parameter estimation in nonlinear time-delayed vibratory systems with distributed delay
NASA Astrophysics Data System (ADS)
Torkamani, Shahab; Butcher, Eric A.
2013-07-01
The stochastic estimation of parameters and states in linear and nonlinear time-delayed vibratory systems with distributed delay is explored. The approach consists of first employing a continuous time approximation to approximate the delayed integro-differential system with a large set of ordinary differential equations having stochastic excitations. Then the problem of state and parameter estimation in the resulting stochastic ordinary differential system is represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the augmented filtering problem, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states. Similarly, the upper bound of the distributed delay can also be estimated by the proposed technique. As an illustrative example to a practical problem in vibrations, the parameter, delay upper bound, and state estimation from noise-corrupted measurements in a distributed force model widely used for modeling machine tool vibrations in the turning operation is investigated.
NASA Astrophysics Data System (ADS)
Blount, W. K.; Hogue, T. S.; Franz, K.; Knipper, K. R.
2017-12-01
Accurate estimation of evapotranspiration (ET) is critical for the management of water resources, especially in water-stressed regions. ET accounts for approximately 60% of terrestrial precipitation globally and approaches 100% of annual rainfall in arid ecosystems, where transpiration becomes the dominant term. ET is difficult to measure due to its spatiotemporal variation, which requires adequate data coverage. While new remote sensing-based ET products are available at a 1 km spatial resolution, including the Operational Simplified Surface Energy Balance model (SSEBop) and the MODIS Global Evapotranspiration Project (MOD16), these products are available at monthly and 8-day temporal resolutions, respectively. To better understand the changing dynamics of hydrologic fluxes and the partitioning of water after land cover disturbances and to identify statistically significant trends, more frequent observations are necessary. Utilizing the recently developed MODIS Soil Moisture-Evapotranspiration (MOD-SMET) model, daily temporal resolution is achieved. This presentation outlines the methodology of the MOD-SMET model and compares SSEBop, MOD16, and MOD-SMET ET estimates over the High Park Fire burn scar in Colorado, USA. MOD-SMET estimates are used to identify changes in fluxes and partitioning of the water cycle after a wildfire and during recovery in the High Park Fire near Fort Collins, Colorado. Initial results indicate greenness and ET from all three models decrease post-fire, with higher statistical confidence in high burn areas and spatial patterns that closely align with burn severity. MOD-SMET improves the ability to resolve statistically significant changes in ET following wildfires and better understand changes in the post-fire water budget. Utilizing this knowledge, water resource managers can better plan for, and mitigate, the short- and long-term impacts of wildfire on regional water supplies.
Cost-effectiveness analysis of a hospital electronic medication management system
Gospodarevskaya, Elena; Li, Ling; Richardson, Katrina L; Roffe, David; Heywood, Maureen; Day, Richard O; Graves, Nicholas
2015-01-01
Objective To conduct a cost–effectiveness analysis of a hospital electronic medication management system (eMMS). Methods We compared costs and benefits of paper-based prescribing with a commercial eMMS (CSC MedChart) on one cardiology ward in a major 326-bed teaching hospital, assuming a 15-year time horizon and a health system perspective. The eMMS implementation and operating costs were obtained from the study site. We used data on eMMS effectiveness in reducing potential adverse drug events (ADEs), and potential ADEs intercepted, based on review of 1 202 patient charts before (n = 801) and after (n = 401) eMMS. These were combined with published estimates of actual ADEs and their costs. Results The rate of potential ADEs following eMMS fell from 0.17 per admission to 0.05; a reduction of 71%. The annualized eMMS implementation, maintenance, and operating costs for the cardiology ward were A$61 741 (US$55 296). The estimated reduction in ADEs post eMMS was approximately 80 actual ADEs per year. The reduced costs associated with these ADEs were more than sufficient to offset the costs of the eMMS. Estimated savings resulting from eMMS implementation were A$63–66 (US$56–59) per admission (A$97 740–$102 000 per annum for this ward). Sensitivity analyses demonstrated results were robust when both eMMS effectiveness and costs of actual ADEs were varied substantially. Conclusion The eMMS within this setting was more effective and less expensive than paper-based prescribing. Comparison with the few previous full economic evaluations available suggests a marked improvement in the cost–effectiveness of eMMS, largely driven by increased effectiveness of contemporary eMMs in reducing medication errors. PMID:25670756
Smith, Brad L.; Lu, Ching-Ping; García-Cortés, Blanca; Viñas, Jordi; Yeh, Shean-Ya; Alvarado Bremer, Jaime R.
2015-01-01
Previous genetic studies of Atlantic swordfish (Xiphias gladius L.) revealed significant differentiation among Mediterranean, North Atlantic and South Atlantic populations using both mitochondrial and nuclear DNA data. However, limitations in geographic sampling coverage, and the use of single loci, precluded an accurate placement of boundaries and of estimates of admixture. In this study, we present multilocus analyses of 26 single nucleotide polymorphisms (SNPs) within 10 nuclear genes to estimate population differentiation and admixture based on the characterization of 774 individuals representing North Atlantic, South Atlantic, and Mediterranean swordfish populations. Pairwise F_ST values, AMOVA, PCoA, and Bayesian individual assignments support the differentiation of swordfish inhabiting these three basins, but not the current placement of the boundaries that separate them. Specifically, the range of the South Atlantic population extends beyond the 5°N management boundary to 20°N-25°N from 45°W. Likewise, the Mediterranean population extends beyond the current management boundary at the Strait of Gibraltar to approximately 10°W. Further, admixture zones, characterized by asymmetric contributions of adjacent populations within samples, are confined to the Northeast Atlantic. While South Atlantic and Mediterranean migrants were identified within these Northeast Atlantic admixture zones, no North Atlantic migrants were identified in either of these two neighboring basins. Owing both to the characterization of a larger number of loci and to a more ample spatial sampling coverage, it was possible to provide a finer resolution of the boundaries separating Atlantic swordfish populations than previous studies. Finally, the patterns of population structure and admixture are discussed in the light of the reproductive biology, the known patterns of dispersal, and oceanographic features that may act as barriers to gene flow for Atlantic swordfish. PMID:26057382
Wang, Hua; He, Jie; Kim, Yoonhee; Kamata, Takuya
2014-08-01
Municipal solid waste management (SWM) is a major challenge for local governments in rural China. One key issue is the low priority assigned to it by local governments faced with limited financing capacity. We conducted an economic analysis in Eryuan, a poor county in Yunnan, China, where the willingness-to-pay (WTP) for an improved solid waste collection and disposal service was estimated and compared with the project cost. Similar to most previous studies in developing countries, this study found that the mean WTP is approximately 1% of household income. The economic internal rate of return of the project is about 5%, which signifies that the estimated social benefit is already higher than the project cost. Moreover, we believe our estimation of the social benefit to be a conservative one, since our study focuses only on the local people who will be directly served by the project; wider positive externalities of the project, such as CO2 emission reduction and groundwater pollution alleviation, whose impact most probably extends beyond the borders of Eryuan county, are not considered explicitly in our survey. The analysis also reveals that the poorest households are not only willing to pay more than rich households as a percentage of income but are also willing to pay no less than the rich in absolute terms in locations where solid waste services are unavailable. This result reveals that the poorest households have a stronger demand for public SWM services, whereas the rich may have the ability to employ private solutions. © The Author(s) 2014.
First-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: The Angular Power Spectrum
NASA Technical Reports Server (NTRS)
Hinshaw, G.; Spergel, D. N.; Verde, L.; Hill, R. S.; Meyer, S. S.; Barnes, C.; Bennett, C. L.; Halpern, M.; Jarosik, N.; Kogut, A.
2003-01-01
We present the angular power spectrum derived from the first-year Wilkinson Microwave Anisotropy Probe (WMAP) sky maps. We study a variety of power spectrum estimation methods and data combinations and demonstrate that the results are robust. The data are modestly contaminated by diffuse Galactic foreground emission, but we show that a simple Galactic template model is sufficient to remove the signal. Point sources produce a modest contamination in the low frequency data. After masking approximately 700 known bright sources from the maps, we estimate residual sources contribute approximately 3500 μK² at 41 GHz, and approximately 130 μK² at 94 GHz, to the power spectrum [ℓ(ℓ + 1)C_ℓ/2π] at ℓ = 1000. Systematic errors are negligible compared to the (modest) level of foreground emission. Our best estimate of the power spectrum is derived from 28 cross-power spectra of statistically independent channels. The final spectrum is essentially independent of the noise properties of an individual radiometer. The resulting spectrum provides a definitive measurement of the CMB power spectrum, with uncertainties limited by cosmic variance, up to ℓ ≈ 350. The spectrum clearly exhibits a first acoustic peak at ℓ = 220 and a second acoustic peak at ℓ ≈ 540, and it provides strong support for adiabatic initial conditions. Researchers have analyzed the TE power spectrum, and present evidence for a relatively high optical depth, and an early period of cosmic reionization. Among other things, this implies that the temperature power spectrum has been suppressed by approximately 30% on degree angular scales, due to secondary scattering.
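The noise-rejection property of cross-power spectra mentioned above can be illustrated with a toy simulation (no WMAP specifics; all numbers below are invented for illustration): noise that is uncorrelated between channels averages away in the cross-spectrum, while the common sky signal survives.

```python
import numpy as np

# Two "channels" observe the same signal with independent unit-variance
# noise. The auto-spectrum of one channel carries signal + noise power;
# the cross-spectrum between channels carries the signal power alone.
rng = np.random.default_rng(0)
n = 4096
signal = rng.normal(size=n)              # common "sky" mode, variance 1
chan_a = signal + rng.normal(size=n)     # independent noise per channel
chan_b = signal + rng.normal(size=n)

fa, fb = np.fft.rfft(chan_a), np.fft.rfft(chan_b)
auto_a = np.mean(np.abs(fa) ** 2) / n          # roughly 2: signal + noise
cross = np.mean((fa * np.conj(fb)).real) / n   # roughly 1: signal alone
```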
The risk of groundling fatalities from unintentional airplane crashes.
Thompson, K M; Rabouw, R F; Cooke, R M
2001-12-01
The crashes of four hijacked commercial planes on September 11, 2001, and the repeated televised images of the consequent collapse of the World Trade Center and one side of the Pentagon will inevitably change people's perceptions of the mortality risks to people on the ground from crashing airplanes. Goldstein and colleagues were the first to quantify the risk for Americans of being killed on the ground by a crashing airplane in unintentional events, providing average point estimates of 6 in a hundred million for annual risk and 4.2 in a million for lifetime risk. They noted that the lifetime risk result exceeded the commonly used risk management threshold of 1 in a million, and suggested that the risk to "groundlings" could be a useful risk communication tool because (a) it is a man-made risk, (b) arising from economic activities, (c) from which the victims derive no benefit, and (d) exposure to which the victims cannot control. Their results have been used in risk communication. This analysis provides updated estimates of groundling fatality risks from unintentional crashes using more recent data and a geographical information system approach to modeling the population around airports. The results suggest that the average annual risk is now 1.2 in a hundred million and the lifetime risk is now 9 in ten million (below the risk management threshold). Analysis of the variability and uncertainty of this estimate, however, suggests that exposure to groundling fatality risk varies by a factor of approximately 100 in the spatial dimension of distance to an airport, with the risk declining rapidly outside the first 2 miles around an airport. We believe that the risk to groundlings from crashing airplanes is more useful in the context of risk communication when information about variability and uncertainty in the risk estimates is characterized, but we suspect that recent events will alter its utility in risk communication.
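As a rough check of the numbers in this abstract, the conversion from annual risk to lifetime risk can be sketched as follows (assuming, for illustration, independent annual exposures over a 75-year lifetime; the paper's actuarial method may differ):

```python
# Convert an annual fatality risk to a lifetime risk, assuming
# independent annual exposures. The 75-year lifetime is an illustrative
# assumption, not a figure taken from the paper.

def lifetime_risk(annual_risk: float, years: int = 75) -> float:
    """P(at least one fatal event over `years` independent years)."""
    return 1.0 - (1.0 - annual_risk) ** years

# Updated annual estimate from the abstract: 1.2 in a hundred million.
annual = 1.2e-8
print(lifetime_risk(annual))  # ≈ 9e-7, i.e. 9 in ten million
```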
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J.A.; Brasseur, G.P.; Zimmerman, P.R.
Using the hydroxyl radical field calibrated to the methyl chloroform observations, the globally averaged release of methane and its spatial and temporal distribution were investigated. Two source function models of the spatial and temporal distribution of the flux of methane to the atmosphere were developed. The first model was based on the assumption that methane is emitted as a proportion of net primary productivity (NPP). With the average hydroxyl radical concentration fixed, the methane source term was computed as approximately 623 Tg CH₄, giving an atmospheric lifetime for methane of approximately 8.3 years. The second model identified source regions for methane from rice paddies, wetlands, enteric fermentation, termites, and biomass burning based on high-resolution land use data. This methane source distribution resulted in an estimate of the global total methane source of approximately 611 Tg CH₄, giving an atmospheric lifetime for methane of approximately 8.5 years. The most significant difference between the two models was in the predicted methane fluxes over China and South East Asia, the location of most of the world's rice paddies. Using a recent measurement of the reaction rate of hydroxyl radical and methane leads to estimates of the global total methane source for SF1 of approximately 524 Tg CH₄, giving an atmospheric lifetime of approximately 10.0 years, and for SF2 of approximately 514 Tg CH₄, yielding a lifetime of approximately 10.2 years.
Benavides, Fernando G; Torá, Isabel; Miguel Martínez, José; Jardí, Josefina; Manzanera, Rafael; Alberti, Constança; Delclós, Jordi
2010-01-01
To compare the length of nonwork-related sick leave among cases managed by an insurance company versus those managed by the National Institute of Social Security (NISS). We performed a retrospective cohort study of 289,686 cases of sick leave lasting for more than 15 days that began in 2005 after certification by a primary care physician in Catalonia, were reported to the Catalonian Institute of Medical Evaluations, and were followed to term. Of the total, 156,676 cases were managed by the NISS. To account for repeat episodes (approximately 25% of the total), the Wang-Chang estimator was used to calculate the median duration and percentiles; comparisons were made using log-logistic regression with shared gamma frailty models, with calculation of time ratios (TR) and their corresponding 95% confidence intervals (95% CI). The median duration of sick leave was 43 days for cases managed by the NISS and 39 days for those managed by the insurance company. This difference was statistically significant both for men employed under contract (TR=0.87; 95% CI: 0.85-0.88) and for those who were self-employed (TR=0.78; 95% CI: 0.75-0.80) as well as for women under contract (TR=0.85; 95% CI: 0.84-0.87) and self-employed women (TR=0.84; 95% CI: 0.81-0.88). These differences persisted after adjustment was performed for age and health region. For sick leave lasting more than 15 days, these results confirm that cases managed by an insurance company ended earlier than for those managed by the NISS, both for contract and self-employed workers. Further research is needed to explore the reasons for these differences. Copyright 2009 SESPAS. Published by Elsevier Espana. All rights reserved.
Incorporating approximation error in surrogate based Bayesian inversion
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.; Li, W.; Wu, L.
2015-12-01
There is increasing interest in applying surrogates in inverse Bayesian modeling to reduce repetitive evaluations of the original model and thereby save computational cost. However, the approximation error of the surrogate model is usually overlooked, partly because it is difficult to evaluate for many surrogates. Previous studies have shown that the direct combination of surrogates and Bayesian methods (e.g., Markov chain Monte Carlo, MCMC) may lead to biased estimations when the surrogate cannot emulate a highly nonlinear original system. This problem can be alleviated by implementing MCMC in a two-stage manner, but the computational cost remains high because a relatively large number of original model simulations are still required. In this study, we illustrate the importance of incorporating approximation error in inverse Bayesian modeling. A Gaussian process (GP) is chosen to construct the surrogate because its approximation error is convenient to evaluate. Numerical cases of Bayesian experimental design and parameter estimation for contaminant source identification are used to illustrate this idea. It is shown that, once the surrogate approximation error is properly incorporated into the Bayesian framework, promising results can be obtained even when the surrogate is used directly, and no further original model simulations are required.
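The core idea, folding the GP surrogate's own predictive variance into the likelihood so that parameter regions where the surrogate is uncertain are not over-penalized, can be sketched minimally. The function below is an illustrative Gaussian likelihood, not the paper's implementation; all names and numbers are assumptions:

```python
import numpy as np

def log_likelihood(y_obs, gp_mean, gp_var, obs_var):
    """Gaussian log-likelihood with surrogate error added to the noise.

    total variance = observation noise + GP predictive variance, so a
    poorly emulated region contributes a wider, more forgiving likelihood.
    """
    total_var = obs_var + gp_var
    resid = y_obs - gp_mean
    return -0.5 * np.sum(resid**2 / total_var + np.log(2 * np.pi * total_var))

# Toy usage: the surrogate predicts 1.0 with predictive variance 0.04
# at a single observation point with measurement noise variance 0.01.
ll = log_likelihood(np.array([1.1]), np.array([1.0]), np.array([0.04]),
                    obs_var=0.01)
```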
Classics in chemical neuroscience: levodopa.
Whitfield, A Connor; Moore, Ben T; Daniels, R Nathan
2014-12-17
Levodopa was the first and most successful breakthrough in the treatment of Parkinson's disease (PD). It is estimated that PD affects approximately 1 million people in the United States alone. Although PD was discovered in 1817, prior to levodopa's discovery there was not an effective treatment for managing its symptoms. In 1961, Hornykiewicz pioneered the use of levodopa to enhance dopamine levels in the striatum, significantly improving symptoms in many patients. With the addition of carbidopa in 1974, the frequency of gastrointestinal adverse drug reactions (ADRs) was significantly reduced, leading to the modern treatment of PD. Although levodopa treatment is more than 50 years old, it remains the "gold standard" for PD treatment. This Review describes in detail the synthesis, metabolism, pharmacology, ADRs, and importance of levodopa therapy to neuroscience in the past and present.
NASA Astrophysics Data System (ADS)
Zhou, H.; Chen, B.; Han, Z. X.; Zhang, F. Q.
2009-05-01
The study of the probability density function and distribution function of electricity prices helps power suppliers and purchasers manage their operations accurately, and helps the regulator monitor periods that deviate from the normal distribution. Based on the assumption of normally distributed load and the nonlinear characteristic of the aggregate supply curve, this paper derives the distribution of electricity prices as a function of the random load variable. The conclusion has been validated with electricity price data from the Zhejiang market. The results show that electricity prices follow a normal distribution approximately only when the supply-demand relationship is loose, whereas otherwise the prices deviate from the normal distribution and exhibit strong right-skewness. Finally, real electricity markets also display a narrow-peak characteristic when undersupply occurs.
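The mechanism described in this abstract, a normally distributed load passed through a convex supply curve producing right-skewed prices, can be reproduced in a toy simulation (the exponential curve and all numbers are illustrative assumptions, not the paper's fitted model):

```python
import numpy as np

rng = np.random.default_rng(1)
load = rng.normal(1000, 100, size=100_000)   # toy load in MW, symmetric
price = 20 * np.exp(load / 400)              # convex (exponential) supply curve

def skewness(x):
    """Sample skewness: third central moment over cubed std deviation."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return np.mean((x - m) ** 3) / s ** 3

load_skew = skewness(load)    # near 0: the load is symmetric
price_skew = skewness(price)  # clearly positive: prices are right-skewed
```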
Management of Acute Lower Gastrointestinal Bleeding.
Speir, Ethan J; Ermentrout, R Mitchell; Martin, Jonathan G
2017-12-01
Acute lower gastrointestinal bleeding (LGIB), defined as hemorrhage into the gastrointestinal tract distal to the ligament of Treitz, is a major cause of morbidity and mortality among adults. Overall, mortality rates are estimated between 2.4% and 3.9%. The most common etiology for LGIB is diverticulosis, implicated in approximately 30% of cases, with other causes including hemorrhoids, ischemic colitis, and postpolypectomy bleeding. Transcatheter visceral angiography has begun to play an increasingly important role in both the diagnosis and treatment of LGIB. Historically, transcatheter visceral angiography has been used to direct vasopressin infusion with embolization reserved for treatment of upper gastrointestinal bleeding. However, advances in microcatheter technology and embolotherapy have enabled super-selective embolization to emerge as the treatment of choice for many cases of LGIB. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Jackson, Dan; Bowden, Jack; Baker, Rose
2015-01-01
Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment…
Evidence that electronic health records can promote physician counseling for healthy behaviors.
Bae, Jaeyong; Hockenberry, Jason M; Rask, Kimberly J; Becker, Edmund R
Health behavior counseling services may help patients manage chronic conditions effectively and slow disease progression. Studies show, however, that many providers fail to provide these services because of time constraints and inability to tailor counseling to individual patient needs. Electronic health records (EHRs) have the potential to increase appropriate counseling by providing pertinent patient information at the point of care and clinical decision support. This study estimates the impact of select EHR functionalities on the rate of health behavior counseling provided during primary care visits. Multivariable regression analyses of the 2007-2010 National Ambulatory Medical Care Survey were conducted to examine whether eight EHR components representing four core functionalities of EHR systems were correlated with the rate of health behavior counseling services. Propensity score matching was used to control for confounding factors given the use of observational data. To address concerns that EHR may only lead to improved documentation of counseling services and not necessarily improved care, the association of EHR functionalities with prescriptions for smoking cessation medications was also estimated. The use of an EHR system with health information and data, order entry and management, result management, decision support, and a notification system for abnormal test results was associated with an approximately 25% increase in the probability of health behavior counseling delivered. Clinical reminders were associated with more health behavior counseling services when available in combination with patient problem lists. The laboratory results viewer was also associated with more counseling services when implemented with a notification system for abnormal results. An EHR system with key supportive functionalities can enhance delivery of preventive health behavior counseling services in primary care settings. 
Meaningful use criteria should be evaluated to ensure that they encourage the adoption of EHR systems with those functionalities shown to improve clinical care.
Making Supply Chains Resilient to Floods Using a Bayesian Network
NASA Astrophysics Data System (ADS)
Haraguchi, M.
2015-12-01
Natural hazards distress the global economy by disrupting interconnected supply chain networks. Manufacturing companies have created cost-efficient supply chains by reducing inventories, streamlining logistics and limiting the number of suppliers. As a result, today's supply chains are profoundly susceptible to systemic risks. In Thailand, for example, the GDP growth rate declined by 76% in 2011 due to prolonged flooding. Thailand incurred economic damage including losses of USD 46.5 billion, approximately 70% of which was caused by major supply chain disruptions in the manufacturing sector. Similar problems occurred after the Great East Japan Earthquake and Tsunami in 2011, the Mississippi River floods and droughts during 2011-2013, and Hurricane Sandy in 2012. This study proposes a methodology for modeling supply chain disruptions using a Bayesian network analysis (BNA) to estimate the expected values of flood countermeasures, such as inventory management, supplier management and hard infrastructure management. We first performed a spatio-temporal correlation analysis between floods and extreme precipitation data for the last 100 years at a global scale. Then we used a BNA to create synthetic networks that include variables associated with the magnitude and duration of floods, major components of supply chains, and market demands. We also included decision variables for countermeasures that would mitigate potential losses caused by supply chain disruptions. Finally, we conducted a cost-benefit analysis by estimating the expected values of these potential countermeasures while conducting a sensitivity analysis. The methodology was applied to supply chain disruptions caused by the 2011 Thailand floods. Our study demonstrates the typical data requirements for such an analysis, including anonymized supplier network data (i.e., critical dependencies and vulnerability information of suppliers), sourcing data (i.e., locations of suppliers, and production rates and volumes), and data from previous experiences (i.e., companies' risk mitigation strategy decisions).
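The final cost-benefit step described in this abstract can be sketched in miniature: compare the expected disruption loss with and without a countermeasure. All probabilities and costs below are invented placeholders, not values from the study:

```python
# Expected-value comparison for a single flood countermeasure
# (e.g. holding extra inventory). Placeholder numbers throughout.
p_flood = 0.1
loss_if_flood = {"no_action": 10_000_000, "extra_inventory": 3_000_000}
countermeasure_cost = 400_000

ev_no_action = p_flood * loss_if_flood["no_action"]
ev_with = p_flood * loss_if_flood["extra_inventory"] + countermeasure_cost

# Positive net benefit means the countermeasure is worth its cost
# in expectation.
net_benefit = ev_no_action - ev_with
```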
NASA Astrophysics Data System (ADS)
Elias, Dimitriou; Angeliki, Mentzafou; Vasiliki, Markogianni; Maria, Tzortziou; Christina, Zeri
2014-06-01
Managing water resources, in terms of both quality and quantity, in transboundary rivers is a difficult and challenging task that requires efficient cross-border cooperation and transparency. Groundwater pollution risk assessment and mapping techniques over the full catchment area are important tools that can be used as part of these water resource management efforts to estimate pollution pressures and optimize land planning processes. The Evros river is the second largest in Eastern Europe, and its catchment sustains a population of 3.6 million people in three different countries (Bulgaria, Turkey and Greece). This study provides detailed information on the main pollution sources and pressures in the Evros catchment and, for the first time, applies, assesses and evaluates a groundwater pollution risk mapping technique using satellite observations (Landsat NDVI) and an extensive dataset of field measurements covering different seasons and multiple years. We found that approximately 40% of the Greek part of the Evros catchment is characterized as of high or very high pollution risk, while 14% of the study area is classified as of moderate risk. Both the modeled and measured water quality status of the river showed large spatiotemporal variations consistent with the strong anthropogenic pressures in this system, especially on the northern and central segments of the catchment. The pollutants identified indicate inputs of agrochemicals and urban wastes into the river. High correlation coefficients (R between 0.79 and 0.85) were found between estimated pollution risks and measured concentrations of those chemical parameters that are mainly attributed to anthropogenic activities rather than in situ biogeochemical processes. The pollution risk method described here could be used elsewhere as a decision support tool for mitigating the impact of hazardous human activities and improving management of groundwater resources.
Predictors of perceived asthma control among patients managed in primary care clinics.
Eilayyan, Owis; Gogovor, Amede; Mayo, Nancy; Ernst, Pierre; Ahmed, Sara
2015-01-01
To estimate the extent to which symptom status, physical activity, beliefs about medications, self-efficacy, emotional status, and healthcare utilization predict perceived asthma control over a period of 16 months among a primary care population. The current study is a secondary analysis of data from a longitudinal study that examined health outcomes of asthma among participants recruited from primary care clinics. Path analysis, based on the Wilson and Cleary and International Classification of Functioning, Disability and Health frameworks, was used to estimate the predictors of perceived asthma control. The path analysis identified initial perceived asthma control (β = 0.43, p < 0.0001), symptoms (β = 0.35, p < 0.0001), physical activity (β = 0.27, p < 0.0001), and self-efficacy (β = 0.29, p < 0.0001) as significant predictors of perceived asthma control (total effects, i.e., direct and indirect), while emotional status (β = 0.08, p = 0.03) was a significant indirect predictor through physical activity. The model explained 24% of the variance in perceived asthma control. Overall, the model fit the data well (χ² = 6.65, df = 6, p value = 0.35, root-mean-square error of approximation = 0.02, Comparative Fit Index = 0.999, and weighted root-mean-square residual = 0.27). Initial perceived asthma control, current symptom status, physical activity, and self-efficacy can be used to identify individuals likely to have good perceived asthma control in the future. Emotional status also has an impact on perceived asthma control mediated through physical activity and should be considered when planning patient management. Identifying these predictors is important to help the care team tailor interventions that will allow individuals to optimally manage their asthma, to prevent exacerbations, to prevent other respiratory-related chronic disease, and to maximize quality of life.
Tank waste remediation system multi-year work plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-09-01
The Tank Waste Remediation System (TWRS) Multi-Year Work Plan (MYWP) documents the detailed total Program baseline and was constructed to guide Program execution. The TWRS MYWP is one of two elements that comprise the TWRS Program Management Plan. The TWRS MYWP fulfills the Hanford Site Management System requirement for a Multi-Year Program Plan and a Fiscal-Year Work Plan. The MYWP addresses program vision, mission, objectives, strategy, functions and requirements, risks, decisions, assumptions, constraints, structure, logic, schedule, resource requirements, and waste generation and disposition. Sections 1 through 6, Section 8, and the appendixes provide program-wide information. Section 7 includes a subsection for each of the nine program elements that comprise the TWRS Program. The foundation of any program baseline is base planning data (e.g., defendable product definition, logic, schedules, cost estimates, and bases of estimates). The TWRS Program continues to improve base data. As data improve, so will program element planning, integration between program elements, integration outside of the TWRS Program, and the overall quality of the TWRS MYWP. The MYWP establishes the TWRS baseline objectives to store, treat, and immobilize highly radioactive Hanford waste in an environmentally sound, safe, and cost-effective manner. The TWRS Program will complete the baseline mission in 2040 and will incur costs totalling approximately 40 billion dollars. The summary strategy is to meet the above objectives by using a robust systems engineering effort, placing the highest possible priority on safety and environmental protection; encouraging "outsourcing" of the work to the extent practical; and managing significant but limited resources to move toward final disposition of tank wastes, while openly communicating with all interested stakeholders.
Chen, Chuan-Yu; Yeh, Hsueh-Han; Huang, Nicole; Lin, Yun-Chen
2014-05-01
Repeat suicidal behaviors in young people are a critical public health concern. The study investigates individual socioeconomic and episode-dependent clinical factors predicting repeat suicide attempts among youth by gender. Using a retrospective cohort study, we identified a total of 4,094 male and 3,219 female youths who had the index suicide episode at the ages of 15-24 years from the 1996-2007 National Health Insurance Research Database in Taiwan. The recurrence of suicide attempt was assessed within 1 year after the index suicide. Information pertaining to suicide management and postsuicide treatment was obtained from healthcare records. Repeated event survival analyses were used to estimate episode-dependent risk of suicide attempt. The occurrence of repeat suicide attempts was more common in males, yet the phenomenon of risk aggravation appears more prominent in females. The estimate for peak hazard of the second repeat attempt was 2-fold higher than that of the first repeat event in males, and approximately 6-fold in females. Socioeconomic (e.g., labor market participation: adjusted Hazard Ratio [aHR] = 1.14, 95% CI = 1.01-1.28) and index suicide management characteristics (e.g., receiving treatment at clinic, aHR = 1.54, 95% CI = 1.19-1.99) were found to play important roles for repeat suicide attempts in males. For females, postsuicide treatment of mental disorders appears more influential. The relationships between socioeconomic and clinical factors with repeat suicide attempts in young people vary by gender. School/workplace-based post suicide attempt consultation and clinical management for youth may be planned and delivered on a gender-appropriate basis. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
[AIDS and pain management - a survey of German AIDS and pain management units].
Zech, D; Radbruch, L; Grond, S; Heise, W
1994-06-01
The number of AIDS patients is steadily increasing. According to the literature these patients are often in severe pain. We evaluated pain diagnoses and treatments with two almost identical questionnaires for AIDS treatment units (ATU) and pain management units (PMU). Questions dealt with unit type and size, number of patients treated per year and the proportion of intravenous drug users. The units were also asked to give an estimate of pain aetiologies, pain types and localizations and treatment modalities offered. Completed questionnaires were returned by 38 of 235 ATU and 85 of 127 PMU. In the ATU, 16% of the patients (estimated at 580 patients per year) had pain requiring treatment. In 26 of the PMU approximately 120 AIDS patients per year were treated, while 59 PMU had not yet seen any AIDS patients. Pain was caused mainly by opportunistic infections and by neurological syndromes connected with AIDS. Pain aetiologies could not be differentiated in the ATU in 22% of patients (PMU 9%), and pain types in 33% (PMU 9%). Neuropathic pain (ATU 38%, PMU 89%) was more frequent than nociceptive pain (ATU 29%, PMU 36%). The treatment modalities were systemic pharmacotherapy in 76% of ATU and 73% of PMU and nerve blocks in 37% of ATU and 42% of PMU. In 82% of ATU the staff thought their analgesic therapy was adequate, and in 92% staff were interested in closer cooperation with PMU such as was currently practised in only 6 of the 38 units (16%) that responded. The high incidence of complicated neuropathic pain syndromes in AIDS patients requires a sophisticated therapeutic approach. Closer cooperation between AIDS specialists and pain specialists, comparable to that already existing for other patient groups, is therefore desirable.
NASA Astrophysics Data System (ADS)
Wong, Pak-kin; Vong, Chi-man; Wong, Hang-cheong; Li, Ke
2010-05-01
Modern automotive spark-ignition (SI) power performance usually refers to output power and torque, which are significantly affected by the setup of control parameters in the engine management system (EMS). EMS calibration is done empirically through tests on the dynamometer (dyno) because no exact mathematical engine model is yet available. With the emerging nonlinear function estimation technique of least squares support vector machines (LS-SVM), an approximate power performance model of an SI engine can be determined by training on sample data acquired from the dyno. A novel incremental algorithm based on the typical LS-SVM is also proposed in this paper, so that the power performance models built with the incremental LS-SVM can be updated whenever new training data arrive. By updating the models, their accuracy can be continuously increased. The predicted results using the models estimated with the incremental LS-SVM are in good agreement with the actual test results, with almost the same average accuracy as retraining the models from scratch, but the incremental algorithm can significantly shorten the model construction time when new training data arrive.
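For readers unfamiliar with LS-SVM, the base model that the incremental algorithm above updates reduces to solving one linear system rather than a quadratic program. The sketch below is generic, with assumed hyperparameters (gamma, sigma) and a toy target function; it is not the authors' implementation:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """Solve the LS-SVM KKT system [[0, 1^T], [1, K + I/gamma]]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy usage: fit y = x^2 on a few points, then predict at x = 0.5.
X = np.linspace(-1, 1, 20).reshape(-1, 1)
y = X[:, 0] ** 2
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, b, alpha, np.array([[0.5]]))
```

An incremental variant, as in the abstract, would update this linear system as new dyno samples arrive instead of rebuilding and re-solving it from scratch.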
Estimating the value of medical education: a net present value approach.
Kahn, Marc J; Nelling, Edward F
2010-07-01
Estimating the value of a medical education is a difficult undertaking. As student debt levels rise and the role of managed care in price-setting increases, the financial benefit of an MD degree comes into question. We developed a model using net present value (NPV) analysis for a range of annual costs of medical school attendance. Using this model, we determined the point at which pursuing a medical education is a "break-even" proposition from a financial perspective. The NPV of a medical education was positive for all annual costs of attendance from $10,000 to $100,000 and ranged from approximately $39,000 to $674,000 depending on the discount rate. Assuming a discount rate of 8%, only at an annual cost of attendance of $139,805 was the NPV = $0, which represents the break-even cost of medical education for a prospective student. Medical education is a financially advantageous undertaking for costs of attendance that far exceed even the most expensive schools in the United States. Our analysis suggests that based on economics, the supply of future physicians ought to be secure.
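The NPV mechanics used in this abstract can be sketched as follows; the tuition and earnings-differential figures below are invented placeholders, not the paper's inputs:

```python
# Discount a year-indexed cash-flow stream: tuition outflows during
# medical school, then the *differential* earnings of an MD over the
# alternative career. All cash-flow values are illustrative assumptions.

def npv(cash_flows, rate):
    """Net present value of cash flows, year 0 = first tuition year."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

tuition = 50_000        # hypothetical annual cost of attendance
differential = 60_000   # hypothetical annual MD earnings premium
flows = [-tuition] * 4 + [differential] * 30

value = npv(flows, 0.08)  # positive: the degree pays off at these inputs
```

Raising the tuition input until `value` crosses zero reproduces the break-even logic of the abstract.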
Aquifers of the Denver Basin, Colorado
Topper, R.
2004-01-01
Development of the Denver Basin for water supply has been ongoing since the late 1800s. The Denver Basin aquifer system consists of the water-yielding strata of Tertiary and Cretaceous sedimentary rocks within four overlying formations. The four statutory aquifers contained in these formations are named the Dawson, Denver, Arapahoe, and Laramie-Fox Hills. For water rights administrative purposes, the outcrop/subcrop of the Laramie-Fox Hills aquifer defines the margins of the Basin. Initial estimates of the total recoverable groundwater reserves in storage, under this 6,700-mi² area, were 295 million acre-ft. Recent geologic evidence indicates that the aquifers are very heterogeneous and their composition varies significantly with distance from the source area of the sediments. As a result, available recoverable reserves may be one-third less than previously estimated. There is no legal protection for pressure levels in the aquifer, and water managers are becoming increasingly concerned about the rapid water level declines (30 ft/yr). Approximately 33,700 wells of record have been completed in the sedimentary rock aquifers of the Denver Basin for municipal, industrial, agricultural, and domestic uses.
Estimation of old field ecosystem biomass using low altitude imagery
NASA Technical Reports Server (NTRS)
Nor, S. M.; Safir, G.; Burton, T. M.; Hook, J. E.; Schultink, G.
1977-01-01
Color-infrared photography was used to evaluate the biomass of experimental plots in an old-field ecosystem that was treated with different levels of waste water from a sewage treatment facility. Cibachrome prints at a scale of approximately 1:1,600 produced from 35 mm color infrared slides were used to analyze density patterns using prepared tonal density scales and multicell grids registered to ground panels shown on the photograph. Correlation analyses between tonal density and vegetation biomass obtained from ground samples and harvests were carried out. Correlations between mean tonal density and harvest biomass data gave consistently high coefficients ranging from 0.530 to 0.896 at the 0.001 significance level. Corresponding multiple regression analysis resulted in higher correlation coefficients. The results of this study indicate that aerial infrared photography can be used to estimate standing crop biomass on waste water irrigated old field ecosystems. Combined with minimal ground truth data, this technique could enable managers of wastewater irrigation projects to precisely time harvest of such systems for maximal removal of nutrients in harvested biomass.
Higher order corrections to mixed QCD-EW contributions to Higgs boson production in gluon fusion
NASA Astrophysics Data System (ADS)
Bonetti, Marco; Melnikov, Kirill; Tancredi, Lorenzo
2018-03-01
We present an estimate of the next-to-leading-order (NLO) QCD corrections to mixed QCD-electroweak contributions to the Higgs boson production cross section in gluon fusion, combining the recently computed three-loop virtual corrections and the approximate treatment of real emission in the soft approximation. We find that the NLO QCD corrections to the mixed QCD-electroweak contributions are nearly identical to NLO QCD corrections to QCD Higgs production. Our result confirms an earlier estimate of these O(α α_s²) effects by Anastasiou et al. [J. High Energy Phys. 04 (2009) 003, 10.1088/1126-6708/2009/04/003] and provides further support for the factorization approximation of QCD and electroweak corrections.
Branching-ratio approximation for the self-exciting Hawkes process
NASA Astrophysics Data System (ADS)
Hardiman, Stephen J.; Bouchaud, Jean-Philippe
2014-12-01
We introduce a model-independent approximation for the branching ratio of Hawkes self-exciting point processes. Our estimator requires knowing only the mean and variance of the event count in a sufficiently large time window, statistics that are readily obtained from empirical data. The method we propose greatly simplifies the estimation of the Hawkes branching ratio, recently proposed as a proxy for market endogeneity and formerly estimated using numerical likelihood maximization. We employ our method to support recent theoretical and experimental results indicating that the best fitting Hawkes model to describe S&P futures price changes is in fact critical (now and in the recent past) in light of the long memory of financial market activity.
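In the large-window limit the estimator described above reduces to a one-line formula: for a stationary Hawkes process, Var(N)/E[N] ≈ 1/(1 − n)², so n ≈ 1 − sqrt(mean/variance) of the per-window event counts. A minimal sketch, omitting the finite-window corrections the authors derive:

```python
import math

def branching_ratio(window_counts):
    """Estimate the Hawkes branching ratio n from per-window event counts.

    For sufficiently large windows, Var(N)/E[N] ~ 1/(1 - n)^2, giving
    n ~ 1 - sqrt(mean/variance).  Finite-window corrections from the
    paper are omitted in this sketch.
    """
    k = len(window_counts)
    mean = sum(window_counts) / k
    var = sum((c - mean) ** 2 for c in window_counts) / k
    if var <= mean:  # at or below Poisson dispersion: no measurable excitation
        return 0.0
    return 1.0 - math.sqrt(mean / var)
```

A Poisson process (variance equal to mean) yields n = 0; over-dispersed counts yield n > 0, approaching 1 as the process nears criticality.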
A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.
2017-02-05
Implicit–Explicit (IMEX) schemes are widely used time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.
Electromagnetic wave scattering from some vegetation samples
NASA Technical Reports Server (NTRS)
Karam, Mostafa A.; Fung, Adrian K.; Antar, Yahia M.
1988-01-01
For an incident plane wave, the field inside a thin scatterer (disk and needle) is estimated by the generalized Rayleigh-Gans (GRG) approximation. This leads to a scattering amplitude tensor equal to that obtained via the Rayleigh approximation (dipole term) with a modifying function. For a finite-length cylinder, the inner field is estimated by the corresponding field for the same cylinder of infinite length. The effects of different approaches in estimating the field inside the scatterer on the backscattering cross section are illustrated numerically for a circular disk, a needle, and a finite-length cylinder as a function of the wave number and the incidence angle. Finally, the modeling predictions are compared with measurements.
Molinos-Senante, María; Mocholí-Arce, Manuel; Sala-Garrido, Ramon
2016-10-15
Water scarcity is one of the main problems faced by many regions in the 21st century. In this context, the need to reduce leakages from water distribution systems has gained almost universal acceptance. The concept of sustainable economic level of leakage (SELL) has been proposed to internalize the environmental and resource costs within economic level of leakage calculations. However, because these costs are not set by the market, they have not often been calculated. In this paper, the directional-distance function was used to estimate the shadow price of leakages as a proxy of their environmental and resource costs. This is a pioneering approach to the economic valuation of leakage externalities. An empirical application was carried out for the main Chilean water companies. The estimated results indicated that for 2014, the average shadow price of leakages was approximately 32% of the price of the water delivered. Moreover, as a sensitivity analysis, the shadow prices of the leakages were calculated from the perspectives of the water companies' managers and the regulator. The methodology and findings of this study are essential for supporting the decision process of reducing leakage, contributing to the economic, social, and environmental efficiency and sustainability of urban water supplies.
Estimating the Attack Rate of Pregnancy-Associated Listeriosis during a Large Outbreak
Imanishi, Maho; Routh, Janell A.; Klaber, Marigny; Gu, Weidong; Vanselow, Michelle S.; Jackson, Kelly A.; Sullivan-Chang, Loretta; Heinrichs, Gretchen; Jain, Neena; Albanese, Bernadette; Callaghan, William M.; Mahon, Barbara E.; Silk, Benjamin J.
2015-01-01
Background. In 2011, a multistate outbreak of listeriosis linked to contaminated cantaloupes raised concerns that many pregnant women might have been exposed to Listeria monocytogenes. Listeriosis during pregnancy can cause fetal death, premature delivery, and neonatal sepsis and meningitis. Little information is available to guide healthcare providers who care for asymptomatic pregnant women with suspected L. monocytogenes exposure. Methods. We tracked pregnancy-associated listeriosis cases using reportable diseases surveillance and enhanced surveillance for fetal death using vital records and inpatient fetal deaths data in Colorado. We surveyed 1,060 pregnant women about symptoms and exposures. We developed three methods to estimate how many pregnant women in Colorado ate the implicated cantaloupes, and we calculated attack rates. Results. One laboratory-confirmed case of listeriosis was associated with pregnancy. The fetal death rate did not increase significantly compared to preoutbreak periods. Approximately 6,500–12,000 pregnant women in Colorado might have eaten the contaminated cantaloupes, an attack rate of ~1 per 10,000 exposed pregnant women. Conclusions. Despite many exposures, the risk of pregnancy-associated listeriosis was low. Our methods for estimating attack rates may help during future outbreaks and product recalls. Our findings offer relevant considerations for management of asymptomatic pregnant women with possible L. monocytogenes exposure. PMID:25784782
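As a quick arithmetic check of the reported figures, one laboratory-confirmed case against the estimated 6,500-12,000 exposures brackets a rate of roughly 1 per 10,000 exposed pregnant women:

```python
def attack_rate_per_10k(cases, exposed):
    """Attack rate expressed per 10,000 exposed individuals."""
    return 10_000 * cases / exposed

# One confirmed case against the 6,500-12,000 estimated exposures
# brackets the reported rate of ~1 per 10,000 exposed pregnant women.
low = attack_rate_per_10k(1, 12_000)
high = attack_rate_per_10k(1, 6_500)
```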
Pioneering offshore excellence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kent, R.P.; Grattan, L.
1996-11-01
Hibernia Management and Development Company Ltd. (HMDC) was formed in 1990 by a consortium of oil companies to develop their interests in the Hibernia and Avalon reservoirs offshore Newfoundland in a safe and environmentally responsible manner. The reservoirs are located 315 km ESE of St. John's in the North Atlantic. The water depth is about 80 m. The entire Hibernia field is estimated to contain more than three billion barrels of oil in place, and the owners' development plan area is estimated to contain two billion barrels. Recoverable reserves are estimated to be approximately 615 million barrels. The Hibernia reservoir, the principal reservoir, is located at an average depth of 3,700 m. HMDC is building a large concrete gravity-based structure (GBS) that will support the platform drilling and processing facilities and living quarters for 280 personnel. In 1997 the platform will be towed to the production site, and production will commence in late 1997. Oil will be exported by a 2 km long pipeline to an offshore loading system. Dynamically positioned tankers will then take the oil to market. Average daily production is expected to plateau between 125,000 and 135,000 BOPD. It will be the first major development on the east coast of Canada and is located in an area that is prone to pack ice and icebergs.
Towards Canine Rabies Elimination in South-Eastern Tanzania: Assessment of Health Economic Data.
Hatch, B; Anderson, A; Sambo, M; Maziku, M; Mchau, G; Mbunda, E; Mtema, Z; Rupprecht, C E; Shwiff, S A; Nel, L
2017-06-01
An estimated 59,000 people die annually from rabies, keeping this zoonosis on the forefront of neglected diseases, especially in the developing world. Most deaths occur after being bitten by a rabid dog. Those exposed to a suspect rabid animal should receive appropriate post-exposure prophylaxis (PEP) or risk death. However, vaccination of dogs to control and eliminate canine rabies at the source has been implemented in many places around the world. Here, we analysed the vaccination and cost data for one such campaign in the area surrounding and including Dar es Salaam, Tanzania and estimated the cost per dog vaccinated. We also estimated the cost of human PEP. We found that the cost per dog vaccinated ranged from $2.50 to $22.49 across districts and phases, with the phase average ranging from $7.30 to $11.27. These figures were influenced by over-purchase of vaccine in the early phases of the programme and the significant costs associated with purchasing equipment for a programme starting from scratch. The cost per human PEP course administered was approximately $24.41, with the average patient receiving 2.5 of the recommended four vaccine doses per suspect bite. This study provides valuable financial insights for programme managers and policymakers working towards rabies elimination.
Mass properties survey of solar array technologies
NASA Technical Reports Server (NTRS)
Kraus, Robert
1991-01-01
An overview of the technologies, electrical performance, and mass characteristics of many of the presently available and the more advanced developmental space solar array technologies is presented. Qualitative trends and quantitative mass estimates as total array output power is increased from 1 kW to 5 kW at End of Life (EOL) from a single wing are shown. The array technologies are part of a database supporting an ongoing solar power subsystem model development for top level subsystem and technology analyses. The model is used to estimate the overall electrical and thermal performance of the complete subsystem, and then calculate the mass and volume of the array, batteries, power management, and thermal control elements as an initial sizing. The array types considered here include planar rigid panel designs, flexible and rigid fold-out planar arrays, and two concentrator designs, one with one critical axis and the other with two critical axes. Solar cell technologies of Si, GaAs, and InP were included in the analyses. Comparisons were made at the array level; hinges, booms, harnesses, support structures, power transfer, and launch retention mountings were included. It is important to note that the results presented are approximations, and in some cases revised or modified performance and mass estimates of specific designs.
Nonpolar Solvation Free Energy from Proximal Distribution Functions
Ou, Shu-Ching; Drake, Justin A.; Pettitt, B. Montgomery
2017-01-01
Using precomputed near neighbor or proximal distribution functions (pDFs) that approximate solvent density about atoms in a chemically bonded context, one can estimate the solvation structures around complex solutes and the corresponding solute–solvent energetics. In this contribution, we extend this technique to calculate the solvation free energies (ΔG) of a variety of solutes. In particular we use pDFs computed for small peptide molecules to estimate ΔG for larger peptide systems. We separately compute the nonpolar (ΔGvdW) and electrostatic (ΔGelec) components of the underlying potential model. Here we show how the former can be estimated by thermodynamic integration using pDF-reconstructed solute–solvent interaction energy. The electrostatic component can be approximated with Linear Response theory as half of the electrostatic solute–solvent interaction energy. We test the method by calculating the solvation free energies of butane, propanol, polyalanine, and polyglycine and by comparing with traditional free energy simulations. Results indicate that the pDF-reconstruction algorithm approximately reproduces ΔGvdW calculated by benchmark free energy simulations to within ~ kcal/mol accuracy. The use of transferable pDFs for each solute atom allows for a rapid estimation of ΔG for arbitrary molecular systems. PMID:27992228
NASA Astrophysics Data System (ADS)
Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.
2017-11-01
In Hezaveh et al. we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational-lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single variational parameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that the application of approximate Bayesian neural networks to astrophysical modeling problems can be a fast alternative to Monte Carlo Markov Chains, allowing orders of magnitude improvement in speed.
NATIVE PLANTS FOR OPTIMIZING CARBON SEQUESTRATION IN RECLAIMED LANDS
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. UNKEFER; M. EBINGER; ET AL
Carbon emissions and atmospheric concentrations are expected to continue to increase through the next century unless major changes are made in the way carbon is managed. Managing carbon has emerged as a pressing national energy and environmental need that will drive national policies and treaties through the coming decades. Addressing carbon management is now a major priority for DOE and the nation. One way to manage carbon is to use energy more efficiently to reduce our need for major energy and carbon source-fossil fuel combustion. Another way is to increase our use of low-carbon and carbon free fuels and technologies.more » A third way, and the focus of this proposal, is carbon sequestration, in which carbon is captured and stored thereby mitigating carbon emissions. Sequestration of carbon in the terrestrial biosphere has emerged as the principle means by which the US will meet its near-term international and economic requirements for reducing net carbon emissions (DOE Carbon Sequestration: State of the Science. 1999; IGBP 1998). Terrestrial carbon sequestration provides three major advantages. First, terrestrial carbon pools and fluxes are of sufficient magnitude to effectively mitigate national and even global carbon emissions. The terrestrial biosphere stores {approximately}2060 GigaTons of carbon and transfers approximately 120 GigaTons of carbon per year between the atmosphere and the earth's surface, whereas the current global annual emissions are about 6 GigaTons. Second, we can rapidly and readily modify existing management practices to increase carbon sequestration in our extensive forest, range, and croplands. Third, increasing soil carbon is without negative environment consequences and indeed positively impacts land productivity. The terrestrial carbon cycle is dependent on several interrelationships between plants and soils. 
Because the soil carbon pool ({approximately}1500 Giga Tons) is approximately three times that in terrestrial vegetation ({approximately}560 GigaTons), the principal focus of terrestrial sequestration efforts is to increase soil carbon. But soil carbon ultimately derives from vegetation and therefore must be managed indirectly through aboveground management of vegetation and nutrients. Hence, the response of whole ecosystems must be considered in terrestrial carbon sequestration strategies.« less
Multi-level methods and approximating distribution functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E.
2016-07-15
Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically, and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie's direct method. These algorithms often come with high computational costs, so approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie's direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
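The telescoping idea behind the multi-level method can be sketched on a toy problem. The example below estimates E[S_T] for a geometric Brownian motion rather than a chemical reaction network (the paper's setting), and the parameters and per-level sample counts are illustrative assumptions, not an optimal allocation:

```python
import random

def mlmc_mean(levels, samples_per_level, seed=0):
    """Sketch of the multi-level Monte Carlo telescoping estimator.

    Estimates E[S_T] for geometric Brownian motion dS = mu*S dt + sigma*S dW
    via Euler-Maruyama, where level l uses 2**l timesteps.  The telescoping
    sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] is sampled with coupled
    coarse/fine paths sharing Brownian increments, so the correction terms
    have small variance and need few samples.  All parameters are toy values.
    """
    rng = random.Random(seed)
    mu, sigma, big_t, s0 = 0.05, 0.2, 1.0, 1.0

    def coupled_terminal_values(l):
        """Fine (2**l steps) and coarse (2**(l-1) steps) paths, same noise."""
        n_fine = 2 ** l
        dt = big_t / n_fine
        s_fine, s_coarse, dw_coarse = s0, s0, 0.0
        for step in range(n_fine):
            dw = rng.gauss(0.0, dt ** 0.5)
            s_fine += mu * s_fine * dt + sigma * s_fine * dw
            dw_coarse += dw
            if l > 0 and step % 2 == 1:   # two fine steps = one coarse step
                s_coarse += mu * s_coarse * (2 * dt) + sigma * s_coarse * dw_coarse
                dw_coarse = 0.0
        return s_fine, s_coarse

    estimate = 0.0
    for l in range(levels + 1):
        n = samples_per_level[l]
        level_sum = 0.0
        for _ in range(n):
            fine, coarse = coupled_terminal_values(l)
            level_sum += fine if l == 0 else fine - coarse
        estimate += level_sum / n
    return estimate
```

Most samples go to the cheap level-0 paths; the expensive fine levels only estimate small corrections. The exact answer here is e^(mu*T) ≈ 1.0513, which the estimator approaches as sample counts grow.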
From neurons to circuits: linear estimation of local field potentials.
Rasch, Malte; Logothetis, Nikos K; Kreiman, Gabriel
2009-11-04
Extracellular physiological recordings are typically separated into two frequency bands: local field potentials (LFPs) (a circuit property) and spiking multiunit activity (MUA). Recently, there has been increased interest in LFPs because of their correlation with functional magnetic resonance imaging blood oxygenation level-dependent measurements and the possibility of studying local processing and neuronal synchrony. To further understand the biophysical origin of LFPs, we asked whether it is possible to estimate their time course based on the spiking activity from the same electrode or nearby electrodes. We used "signal estimation theory" to show that a linear filter operation on the activity of one or a few neurons can explain a significant fraction of the LFP time course in the macaque monkey primary visual cortex. The linear filter used to estimate the LFPs had a stereotypical shape characterized by a sharp downstroke at negative time lags and a slower positive upstroke for positive time lags. The filter was similar across different neocortical regions and behavioral conditions, including spontaneous activity and visual stimulation. The estimations had a spatial resolution of approximately 1 mm and a temporal resolution of approximately 200 ms. By considering a causal filter, we observed a temporal asymmetry such that the positive time lags in the filter contributed more to the LFP estimation than the negative time lags. Additionally, we showed that spikes occurring within approximately 10 ms of spikes from nearby neurons yielded better estimation accuracies than nonsynchronous spikes. In summary, our results suggest that at least some circuit-level local properties of the field potentials can be predicted from the activity of one or a few neurons.
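A minimal version of this linear estimation can be reproduced on synthetic data: regress an LFP-like trace on lagged copies of a spike train and recover the filter by least squares. The kernel shape (sharp negative lobe followed by a slower positive lobe, echoing the stereotypical filter described above), spike rate, and noise level are assumptions for illustration, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground truth: an LFP-like trace driven by a spike train through
# a linear kernel with a sharp negative and a slower positive lobe.
T, lags = 5000, 21                       # samples; filter spans +/-10 lags
spikes = (rng.random(T) < 0.05).astype(float)
t = np.arange(lags) - lags // 2
true_filter = -np.exp(-((t + 2) ** 2) / 4) + 0.6 * np.exp(-((t - 4) ** 2) / 12)
lfp = np.convolve(spikes, true_filter, mode="same") + 0.1 * rng.standard_normal(T)

# Least-squares estimate of the linear filter: regress the LFP on lagged
# copies of the spike train (a discrete Wiener-filter-style estimate).
X = np.column_stack([np.roll(spikes, k)
                     for k in range(-(lags // 2), lags // 2 + 1)])
est_filter, *_ = np.linalg.lstsq(X, lfp, rcond=None)

lfp_hat = X @ est_filter                  # reconstructed LFP time course
r = np.corrcoef(lfp, lfp_hat)[0, 1]       # fraction of LFP explained: r**2
```

With enough samples and a favorable signal-to-noise ratio the recovered filter closely matches the generating kernel; on real recordings, as the abstract notes, a single neuron explains only a significant fraction, not all, of the LFP time course.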