Horwitz, Irwin B; McCall, Brian P
2004-10-01
This study estimated injury and illness rates, risk factors, and costs associated with construction work in Oregon from 1990-1997 using all accepted workers' compensation claims by Oregon construction employees (N = 20,680). Claim rates and risks were estimated using a baseline calculated from Current Population Survey data for the Oregon workforce. The average annual rate of lost-time claims was 3.5 per 100 workers. More than 50% of claims were by workers under 35 years of age with less than 1 year of tenure. The majority of claimants (96.1%) were male. There were 52 total fatalities reported over the period examined, representing an average annual death rate of 8.5 per 100,000 construction workers. The average claim cost was $10,084 and mean indemnity time was 57.3 days. Structural metal workers had the highest average days of indemnity of all workers (72.1), the highest average cost per claim ($16,472), and the highest odds ratio of injury of all occupations examined. Sprains were the most frequently reported injury type, constituting 46.4% of all claims. The greatest accident risk occurred during the third hour of work. Training interventions should be extensively utilized for inexperienced workers, and pre-work exercises could potentially reduce injury frequency and severity.
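A minimal sketch of the rate calculations quoted above; denominators would come from Current Population Survey workforce estimates, and the figures below are hypothetical, not the Oregon claim counts.

```python
# Rate calculations of the kind reported above (hypothetical single-year figures).
def lost_time_rate_per_100(claims, workers):
    return 100.0 * claims / workers

def fatality_rate_per_100k(deaths, workers):
    return 100_000.0 * deaths / workers

print(lost_time_rate_per_100(2_660, 76_000))   # lost-time claims per 100 construction workers
print(fatality_rate_per_100k(6, 76_000))       # deaths per 100,000 construction workers
```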
Estimation of annual average daily traffic for off-system roads in Florida
DOT National Transportation Integrated Search
1999-07-28
Estimation of Annual Average Daily Traffic (AADT) is extremely important in traffic planning and operations for the state departments of transportation (DOTs), because AADT provides information for the planning of new road construction, determination...
Estimating procedure for major highway construction bid item cost : final report.
DOT National Transportation Integrated Search
1978-06-01
The present procedure for estimating construction bid item cost makes use of the quarterly weighted average unit price report coupled with engineering judgement. The limitation to this method is that this report format provides only the lowest bid da...
Doubly robust nonparametric inference on the average treatment effect.
Benkeser, D; Carone, M; Laan, M J Van Der; Gilbert, P B
2017-12-01
Doubly robust estimators are widely used to draw inference about the average effect of a treatment. Such estimators are consistent for the effect of interest if either one of two nuisance parameters is consistently estimated. However, if flexible, data-adaptive estimators of these nuisance parameters are used, double robustness does not readily extend to inference. We present a general theoretical study of the behaviour of doubly robust estimators of an average treatment effect when one of the nuisance parameters is inconsistently estimated. We contrast different methods for constructing such estimators and investigate the extent to which they may be modified to also allow doubly robust inference. We find that while targeted minimum loss-based estimation can be used to solve this problem very naturally, common alternative frameworks appear to be inappropriate for this purpose. We provide a theoretical study and a numerical evaluation of the alternatives considered. Our simulations highlight the need for and usefulness of these approaches in practice, while our theoretical developments have broad implications for the construction of estimators that permit doubly robust inference in other problems.
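As a rough illustration of the double-robustness property discussed above, the sketch below computes an augmented inverse-probability-weighted (AIPW) estimate of the average treatment effect with off-the-shelf nuisance models; the logistic and linear regressions, and the naive plug-in standard error, are illustrative assumptions rather than the paper's targeted minimum loss-based approach.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_ate(X, A, Y):
    """Augmented inverse-probability-weighted (doubly robust) ATE estimate.
    Consistent if either the propensity model or the outcome regressions are correct."""
    ps = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]  # propensity score
    mu1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)            # E[Y | X, A=1]
    mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)            # E[Y | X, A=0]
    psi = mu1 - mu0 + A * (Y - mu1) / ps - (1 - A) * (Y - mu0) / (1 - ps)    # influence-style terms
    return psi.mean(), psi.std(ddof=1) / np.sqrt(len(Y))                     # estimate, naive SE

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
A = (rng.random(2000) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
Y = X @ np.array([1.0, -0.5, 0.2]) + 1.5 * A + rng.normal(size=2000)
print(aipw_ate(X, A, Y))   # should be near the true effect of 1.5
```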
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. When the signal and measurement-error statistics are only approximately specified, the resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates using the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes suboptimal estimation a viable practical alternative to the composite average method generally employed at present.
ERIC Educational Resources Information Center
Matlock, Ki Lynn; Turner, Ronna
2016-01-01
When constructing multiple test forms, the number of items and the total test difficulty are often equivalent. Not all test developers match the number of items and/or average item difficulty within subcontent areas. In this simulation study, six test forms were constructed having an equal number of items and average item difficulty overall.…
23 CFR 635.127 - Agreement provisions regarding overruns in contract time.
Code of Federal Regulations, 2012 CFR
2012-04-01
... ENGINEERING AND TRAFFIC OPERATIONS CONSTRUCTION AND MAINTENANCE Contract Procedures § 635.127 Agreement... types. These rates shall, as a minimum, be established to cover the estimated average daily construction... proportional share, as used in this section, is the ratio of the final contract construction costs eligible for...
23 CFR 635.127 - Agreement provisions regarding overruns in contract time.
Code of Federal Regulations, 2013 CFR
2013-04-01
... ENGINEERING AND TRAFFIC OPERATIONS CONSTRUCTION AND MAINTENANCE Contract Procedures § 635.127 Agreement... types. These rates shall, as a minimum, be established to cover the estimated average daily construction... proportional share, as used in this section, is the ratio of the final contract construction costs eligible for...
23 CFR 635.127 - Agreement provisions regarding overruns in contract time.
Code of Federal Regulations, 2010 CFR
2010-04-01
... ENGINEERING AND TRAFFIC OPERATIONS CONSTRUCTION AND MAINTENANCE Contract Procedures § 635.127 Agreement... types. These rates shall, as a minimum, be established to cover the estimated average daily construction... proportional share, as used in this section, is the ratio of the final contract construction costs eligible for...
23 CFR 635.127 - Agreement provisions regarding overruns in contract time.
Code of Federal Regulations, 2014 CFR
2014-04-01
... ENGINEERING AND TRAFFIC OPERATIONS CONSTRUCTION AND MAINTENANCE Contract Procedures § 635.127 Agreement... types. These rates shall, as a minimum, be established to cover the estimated average daily construction... proportional share, as used in this section, is the ratio of the final contract construction costs eligible for...
23 CFR 635.127 - Agreement provisions regarding overruns in contract time.
Code of Federal Regulations, 2011 CFR
2011-04-01
... ENGINEERING AND TRAFFIC OPERATIONS CONSTRUCTION AND MAINTENANCE Contract Procedures § 635.127 Agreement... types. These rates shall, as a minimum, be established to cover the estimated average daily construction... proportional share, as used in this section, is the ratio of the final contract construction costs eligible for...
Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon
2015-03-30
The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.
Estimation of construction waste generation and management in Thailand.
Kofoworola, Oyeshola Femi; Gheewala, Shabbir H
2009-02-01
This study examines construction waste generation and management in Thailand. It is estimated that between 2002 and 2005, an average of 1.1 million tons of construction waste was generated per year in Thailand. This constitutes about 7.7% of the total amount of waste disposed of in both landfills and open dumpsites annually during the same period. Although construction waste constitutes a major source of waste in terms of volume and weight, its management and recycling are yet to be effectively practiced in Thailand. Recently, the management of construction waste has been given attention due to its rapidly increasing unregulated dumping in undesignated areas, and recycling is being promoted as a method of managing this waste. If effectively implemented, its potential economic and social benefits are immense. It was estimated that between 70 and 4,000 jobs would have been created between 2002 and 2005, if all construction wastes in Thailand had been recycled. Additionally, it would have contributed average savings of about 3.0 x 10^5 GJ per year in the final energy consumed by the construction sector of the nation within the same period, based on the recycling scenario analyzed. The current national integrated waste management plan could enhance the effective recycling of construction and demolition waste in Thailand when enforced. It is recommended that an inventory of all construction waste generated in the country be carried out in order to assess the feasibility of large scale recycling of construction and demolition waste.
Optimizing traffic counting procedures.
DOT National Transportation Integrated Search
1986-01-01
Estimates of annual average daily traffic volumes are important in the planning and operations of state highway departments. These estimates are used in the planning of new construction and improvement of existing facilities, and, in some cases, in t...
Robust estimation of event-related potentials via particle filter.
Fukami, Tadanori; Watanabe, Jun; Ishikawa, Fumito
2016-03-01
In clinical examinations and brain-computer interface (BCI) research, a short electroencephalogram (EEG) measurement time is ideal. The use of event-related potentials (ERPs) relies on both estimation accuracy and processing time. We tested a particle filter that uses a large number of particles to construct a probability distribution. We constructed a simple model for recording EEG comprising three components: ERPs approximated via a trend model, background waves constructed via an autoregressive model, and noise. We evaluated the performance of the particle filter based on mean squared error (MSE), P300 peak amplitude, and latency. We then compared our filter with the Kalman filter and a conventional simple averaging method. To confirm the efficacy of the filter, we used it to estimate ERP elicited by a P300 BCI speller. A 400-particle filter produced the best MSE. We found that the merit of the filter increased when the original waveform already had a low signal-to-noise ratio (SNR) (i.e., the power ratio between ERP and background EEG). We calculated the amount of averaging necessary after applying a particle filter that produced a result equivalent to that associated with conventional averaging, and determined that the particle filter yielded a maximum 42.8% reduction in measurement time. The particle filter performed better than both the Kalman filter and conventional averaging for a low SNR in terms of both MSE and P300 peak amplitude and latency. For EEG data produced by the P300 speller, we were able to use our filter to obtain ERP waveforms that were stable compared with averages produced by a conventional averaging method, irrespective of the amount of averaging. We confirmed that particle filters are efficacious in reducing the measurement time required during simulations with a low SNR. Additionally, particle filters can perform robust ERP estimation for EEG data produced via a P300 speller.
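A minimal bootstrap particle filter of the kind described, simplified to a random-walk trend observed in noise; the paper's model additionally includes an explicit autoregressive background component, which is lumped into the observation noise here.

```python
import numpy as np

def bootstrap_particle_filter(y, n_particles=400, q=0.05, r=1.0, rng=None):
    """State model:       s_t = s_{t-1} + w_t,  w_t ~ N(0, q)   (slowly varying ERP trend)
    Observation model: y_t = s_t + v_t,      v_t ~ N(0, r)   (background EEG + noise, lumped)
    Returns the posterior-mean trend estimate at each time step."""
    rng = np.random.default_rng() if rng is None else rng
    particles = rng.normal(0.0, 1.0, n_particles)                 # initial state particles
    estimate = np.empty(len(y))
    for t, obs in enumerate(y):
        particles = particles + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        log_w = -0.5 * (obs - particles) ** 2 / r                          # Gaussian likelihood
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        estimate[t] = np.dot(w, particles)                                 # posterior mean
        particles = particles[rng.choice(n_particles, n_particles, p=w)]   # multinomial resampling
    return estimate

# toy usage: noisy "ERP-like" bump
t = np.linspace(0, 1, 200)
true_erp = 5.0 * np.exp(-0.5 * ((t - 0.3) / 0.05) ** 2)
y = true_erp + np.random.default_rng(0).normal(0, 2.0, t.size)
erp_hat = bootstrap_particle_filter(y, n_particles=400, q=0.05, r=4.0)
```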
Time prediction of failure of a type of lamp using a general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses estimation of a basic survival model to obtain the average predicted failure time of a lamp. The estimate is for a parametric model, the general composite hazard rate model. The random failure-time variable is modeled with an exponential distribution as the basis, which has a constant hazard function. We discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by fitting its parameters through construction of the survival function and the empirical cumulative distribution function. The fitted model is then used to predict the average failure time for the lamp type. The data are grouped into several intervals, the average failure value is computed for each interval, and the average failure time of the model is calculated on each interval; the p value obtained from the test result is 0.3296.
Mitigation Study. Lake Pontchartrain, Louisiana, and Vicinity. Hurricane Protection Project.
1988-03-01
the reduction of the photic zone. Reductions in plankton populations are possible as a result of clumping and flocculation. Phytoplankton and algae ...from the wetland preservation and dike construction. Temporary turbidity and a slight loss of benthic productivity would occur during construction of...construction (average annual acres) and the estimated importance of the nearshore lake habitat and benthic food chain to sport fish production (Rogillio
Couch, James R; Petersen, Martin; Rice, Carol; Schubauer-Berigan, Mary K
2011-05-01
To construct a job-exposure matrix (JEM) for an Ohio beryllium processing facility between 1953 and 2006 and to evaluate temporal changes in airborne beryllium exposures. Quantitative area- and breathing-zone-based exposure measurements of airborne beryllium were made between 1953 and 2006 and used by plant personnel to estimate daily weighted average (DWA) exposure concentrations for sampled departments and operations. These DWA measurements were used to create a JEM with 18 exposure metrics, which was linked to the plant cohort consisting of 18,568 unique job, department and year combinations. The exposure metrics ranged from quantitative metrics (annual arithmetic/geometric average DWA exposures, maximum DWA and peak exposures) to descriptive qualitative metrics (chemical beryllium species and physical form) to qualitative assignment of exposure to other risk factors (yes/no). Twelve collapsed job titles with long-term consistent industrial hygiene samples were evaluated using regression analysis for time trends in DWA estimates. Annual arithmetic mean DWA estimates (overall plant-wide exposures including administration, non-production, and production estimates) for the data by decade ranged from a high of 1.39 μg/m³ in the 1950s to a low of 0.33 μg/m³ in the 2000s. Of the 12 jobs evaluated for temporal trend, the average arithmetic DWA mean was 2.46 μg/m³ and the average geometric mean DWA was 1.53 μg/m³. After the DWA calculations were log-transformed, 11 of the 12 had a statistically significant (p < 0.05) decrease in reported exposure over time. The constructed JEM successfully differentiated beryllium exposures across jobs and over time. This is the only quantitative JEM containing exposure estimates (average and peak) for the entire plant history.
Whole stand volume tables for quaking aspen in the Rocky Mountains
Wayne D. Shepperd; H. Todd Mowrer
1984-01-01
Linear regression equations were developed to predict stand volumes for aspen given average stand basal area and average stand height. Tables constructed from these equations allow easy field estimation of gross merchantable volume per acre in cubic feet and board feet (Scribner rule), and in cubic meters per hectare, using simple prism cruise data.
NASA Astrophysics Data System (ADS)
Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio
The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider anatomical European human phantoms and plane-wave exposure in the 2 GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.
Psychometric Evaluation of Lexical Diversity Indices: Assessing Length Effects.
Fergadiotis, Gerasimos; Wright, Heather Harris; Green, Samuel B
2015-06-01
Several novel techniques have been developed recently to assess the breadth of a speaker's vocabulary exhibited in a language sample. The specific aim of this study was to increase our understanding of the validity of the scores generated by different lexical diversity (LD) estimation techniques. Four techniques were explored: D, Maas, measure of textual lexical diversity, and moving-average type-token ratio. Four LD indices were estimated for language samples on 4 discourse tasks (procedures, eventcasts, story retell, and recounts) from 442 adults who are neurologically intact. The resulting data were analyzed using structural equation modeling. The scores for measure of textual lexical diversity and moving-average type-token ratio were stronger indicators of the LD of the language samples. The results for the other 2 techniques were consistent with the presence of method factors representing construct-irrelevant sources. These findings offer a deeper understanding of the relative validity of the 4 estimation techniques and should assist clinicians and researchers in the selection of LD measures of language samples that minimize construct-irrelevant sources.
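The moving-average type-token ratio is straightforward to compute; a minimal sketch follows, where the window length and the short-sample fallback are assumptions, not the study's settings.

```python
def mattr(tokens, window=50):
    """Moving-average type-token ratio: mean TTR over all windows of fixed length."""
    if len(tokens) < window:
        window = len(tokens)   # fallback for short samples (assumption, not from the paper)
    ttrs = [len(set(tokens[i:i + window])) / window
            for i in range(len(tokens) - window + 1)]
    return sum(ttrs) / len(ttrs)

sample = "the quick brown fox jumps over the lazy dog the fox".split()
print(round(mattr(sample, window=5), 3))
```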
NASA Astrophysics Data System (ADS)
Wayson, Michael B.; Bolch, Wesley E.
2018-04-01
Internal radiation dose estimates for diagnostic nuclear medicine procedures are typically calculated for a reference individual. As a result, there is uncertainty when determining organ doses for patients who are not at the 50th percentile for height or weight. This study aims to better personalize internal radiation dose estimates for individual patients by modifying the dose estimates calculated for reference individuals based on easily obtainable morphometric characteristics of the patient. Phantoms of different sitting heights and waist circumferences were constructed based on computational reference phantoms for the newborn, 10 year-old, and adult. Monoenergetic photons and electrons were then simulated separately at 15 energies. Photon and electron specific absorbed fractions (SAFs) were computed for the newly constructed non-reference phantoms and compared to SAFs previously generated for the age-matched reference phantoms. Differences in SAFs were correlated to changes in sitting height and waist circumference to develop scaling factors that could be applied to reference SAFs as morphometry corrections. A further set of arbitrary non-reference phantoms were then constructed and used in validation studies for the SAF scaling factors. Both photon and electron dose scaling methods were found to increase average accuracy when sitting height was used as the scaling parameter (~11%). Photon waist circumference-based scaling factors showed modest increases in average accuracy (~7%) for underweight individuals, but not for overweight individuals. Electron waist circumference-based scaling factors did not show increases in average accuracy. When sitting height and waist circumference scaling factors were combined, modest average gains in accuracy were observed for photons (~6%), but not for electrons. Both photon and electron absorbed doses are more reliably scaled using scaling factors computed in this study. They can be effectively scaled using sitting height alone as the patient-specific morphometric parameter.
Ockerman, Darwin J.
2005-01-01
The U.S. Geological Survey, in cooperation with the San Antonio Water System, constructed three watershed models using the Hydrological Simulation Program—FORTRAN (HSPF) to simulate streamflow and estimate recharge to the Edwards aquifer in the Hondo Creek, Verde Creek, and San Geronimo Creek watersheds in south-central Texas. The three models were calibrated and tested with available data collected during 1992–2003. Simulations of streamflow and recharge were done for 1951–2003. The approach to construct the models was to first calibrate the Hondo Creek model (with an hourly time step) using 1992–99 data and test the model using 2000–2003 data. The Hondo Creek model parameters then were applied to the Verde Creek and San Geronimo Creek watersheds to construct the Verde Creek and San Geronimo Creek models. The simulated streamflows for Hondo Creek are considered acceptable. Annual, monthly, and daily simulated streamflows adequately match measured values, but simulated hourly streamflows do not. The accuracy of streamflow simulations for Verde Creek is uncertain. For San Geronimo Creek, the match of measured and simulated annual and monthly streamflows is acceptable (or nearly so); but for daily and hourly streamflows, the calibration is relatively poor. Simulated average annual total streamflow for 1951–2003 to Hondo Creek, Verde Creek, and San Geronimo Creek is 45,400; 32,400; and 11,100 acre-feet, respectively. Simulated average annual streamflow at the respective watershed outlets is 13,000; 16,200; and 6,920 acre-feet. The difference between total streamflow and streamflow at the watershed outlet is streamflow lost to channel infiltration. Estimated average annual Edwards aquifer recharge for Hondo Creek, Verde Creek, and San Geronimo Creek watersheds for 1951–2003 is 37,900 acre-feet (5.04 inches), 26,000 acre-feet (3.36 inches), and 5,940 acre-feet (1.97 inches), respectively. Most of the recharge (about 77 percent for the three watersheds together) occurs as streamflow channel infiltration. Diffuse recharge (direct infiltration of rainfall to the aquifer) accounts for the remaining 23 percent of recharge. For the Hondo Creek watershed, the HSPF recharge estimates for 1992–2003 averaged about 22 percent less than those estimated by the Puente method, a method the U.S. Geological Survey has used to compute annual recharge to the Edwards aquifer since 1978. HSPF recharge estimates for the Verde Creek watershed averaged about 40 percent less than those estimated by the Puente method.
A comparison of alternative methods for measuring cigarette prices.
Chaloupka, Frank J; Tauras, John A; Strasser, Julia H; Willis, Gordon; Gibson, James T; Hartman, Anne M
2015-05-01
Government agencies, public health organisations and tobacco control researchers rely on accurate estimates of cigarette prices for a variety of purposes. Since the 1950s, the Tax Burden on Tobacco (TBOT) has served as the most widely used source of this price data despite its limitations. This paper compares the prices and collection methods of the TBOT retail-based data and the 2003 and 2006/2007 waves of the population-based Tobacco Use Supplement to the Current Population Survey (TUS-CPS). From the TUS-CPS, we constructed multiple state-level measures of cigarette prices, including weighted average prices per pack (based on average prices for single-pack purchases and average prices for carton purchases) and compared these with the weighted average price data reported in the TBOT. We also constructed several measures of tax avoidance from the TUS-CPS self-reported data. For the 2003 wave, the average TUS-CPS price was 71 cents per pack less than the average TBOT price; for the 2006/2007 wave, the difference was 47 cents. TUS-CPS and TBOT prices were also significantly different at the state level. However, these differences varied widely by state due to tax avoidance opportunities, such as cross-border purchasing. The TUS-CPS can be used to construct valid measures of cigarette prices. Unlike the TBOT, the TUS-CPS captures the effect of price-reducing marketing strategies, as well as tax avoidance practices and non-traditional types of purchasing. Thus, self-reported data like TUS-CPS appear to have advantages over TBOT in estimating the 'real' price that smokers face.
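A sketch of one way to form such a weighted average pack price from single-pack and carton purchases; the packs-per-carton value and the weighting scheme are illustrative assumptions, not necessarily the TUS-CPS procedure.

```python
def weighted_pack_price(pack_price, pack_count, carton_price, carton_count, packs_per_carton=10):
    """Average price per pack, weighting each purchase type by the number of packs it supplies."""
    carton_packs = carton_count * packs_per_carton
    total_packs = pack_count + carton_packs
    return (pack_price * pack_count + (carton_price / packs_per_carton) * carton_packs) / total_packs

# hypothetical state-level inputs: 600 single packs at $5.00, 50 cartons at $40.00
print(round(weighted_pack_price(5.00, 600, 40.00, 50), 2))   # average price per pack
```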
NASA Astrophysics Data System (ADS)
Magintan, D.; Shukorb, M. N.; Lihan, Tukimat; Campos, Ahimza-arceiz; Saaban, Salman; Husin, Shahril Mohd; Ahmad, Mohd Noh
2016-11-01
Home ranges and movement patterns of elephants were studied during construction of hydroelectric dams in Hulu Terengganu, Terengganu, Peninsular Malaysia. Two elephants from two herds were captured, collared and released in the catchment area four to five months before inundation started in early October 2014. The two elephants were identified as Puah (female) and Sireh (male). The home range of each individual during dam construction was estimated at 96.53 km2 for Puah and 367.99 km2 for Sireh. Monthly ranging estimates for Puah were between 5.1 km2 and 38.4 km2, with an average monthly range of 19.2 ± 4.7 km2, while for Sireh the monthly ranging estimates were between 20.6 km2 and 184.7 km2, with an average monthly range of 79.9 ± 34.7 km2. The mean movement rates for Puah and Sireh were 1.3 ± 0.1 km and 1.9 ± 0.1 km per day, respectively. Puah moved 0.88 km on the first day after collaring, whereas Sireh moved 0.02 km on the first day after collaring. The total distance travelled before inundation was 226.18 km for Puah and 267.38 km for Sireh.
Drift-mine reclamation in Big Four Hollow near Lake Hope, Ohio; a preliminary data report
Nichols, Vance E.
1983-01-01
A subsurface clay dike and hydraulic seals were constructed in 1979 by the Ohio Department of Natural Resources, Division of Reclamation, to reduce acid mine drainage from an abandoned drift mine into Big Four Hollow Creek; Big Four Hollow Creek flows into Sandy Run, the major tributary to Lake Hope. A monitoring program was established in 1979 by the U.S. Geological Survey, Water Resources Division to evaluate sealing effects on surface-water and ground-water systems of the Big Four Hollow Creek and Sandy Run area just below the mine. Data were collected by private consultants in 1970-71 near the mouth of Big Four Hollow Creek (U.S. Geological Survey station 03201700). Results showed an average pH of 3.1 (calculated from mean hydrogen-ion concentration in moles per liter) and a pH range of 2.7 to 4.8. The estimated sulfate load was 1,000 pounds per day, and the estimated iron load was 100 pounds per day. Data collected in 1979, before dike construction at this site, showed a daily mean pH range of 3.4 to 5.4 with an average of 3.7, and a daily mean specific-conductance range of 160 to 600 micromhos per centimeter at 25 degrees Celsius (µmho/cm), averaging 400. Again, the estimated sulfate load was 1,000 pounds per day, but the estimated iron load had decreased to 50 pounds per day. The first 6 months of postconstruction data from the site in 1980 showed a daily mean pH range of 4.5 to 6.8 with an average of 4.9, and a daily mean conductance range of 175 to 405 µmho/cm with an average of 300. The estimated sulfate load had decreased to 570 pounds per day and the iron load to 8.5 pounds per day. Data collected during the first 6 months after construction indicate moderate improvement in water quality. However, acidic water is still being impounded behind the dike and seals and has not yet been flushed out by infiltrating rain and ground water. Because the system has not yet stabilized, no interpretation or conclusive statement can be made at this time.
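Averaging pH through the mean hydrogen-ion concentration, as the report does, can be sketched as follows (the input values are illustrative, not the station record).

```python
import numpy as np

def average_ph(ph_values):
    """Average pH computed from the mean hydrogen-ion concentration,
    rather than by averaging pH values directly (pH is logarithmic)."""
    h = 10.0 ** (-np.asarray(ph_values, dtype=float))   # mol/L
    return -np.log10(h.mean())

print(round(average_ph([2.7, 3.0, 3.2, 4.8]), 1))   # dominated by the most acidic samples
```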
Oil and gas pipeline construction cost analysis and developing regression models for cost estimation
NASA Astrophysics Data System (ADS)
Thaduri, Ravi Kiran
In this study, cost data for 180 pipelines and 136 compressor stations have been analyzed. On the basis of the distribution analysis, regression models have been developed. Material, labor, ROW, and miscellaneous costs make up the total cost of a pipeline construction. The pipelines are analyzed based on different pipeline lengths, diameters, locations, pipeline volumes, and years of completion. In a pipeline construction, labor costs dominate the total costs with a share of about 40%. Multiple non-linear regression models are developed to estimate the component costs of pipelines for various cross-sectional areas, lengths and locations. The compressor stations are analyzed based on capacity, year of completion, and location. Unlike the pipeline costs, material costs dominate the total costs in the construction of a compressor station, with an average share of about 50.6%. Land costs have very little influence on the total costs. Similar regression models are developed to estimate the component costs of compressor stations for various capacities and locations.
Chan, Kwun Chuen Gary; Yam, Sheung Chi Phillip; Zhang, Zheng
2015-01-01
The estimation of average treatment effects based on observational data is extremely important in practice and has been studied by generations of statisticians under different frameworks. Existing globally efficient estimators require non-parametric estimation of a propensity score function, an outcome regression function or both, but their performance can be poor in practical sample sizes. Without explicitly estimating either function, we consider a wide class of calibration weights constructed to attain an exact three-way balance of the moments of observed covariates among the treated, the control, and the combined group. The wide class includes exponential tilting, empirical likelihood and generalized regression as important special cases, and extends survey calibration estimators to different statistical problems and with important distinctions. Global semiparametric efficiency for the estimation of average treatment effects is established for this general class of calibration estimators. The results show that efficiency can be achieved by solely balancing the covariate distributions without resorting to direct estimation of propensity score or outcome regression function. We also propose a consistent estimator for the efficient asymptotic variance, which does not involve additional functional estimation of either the propensity score or the outcome regression functions. The proposed variance estimator outperforms existing estimators that require a direct approximation of the efficient influence function. PMID: 27346982
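A minimal sketch of one member of this class, exponential tilting (entropy balancing) toward the combined-group covariate means; which moments are balanced and how the dual is solved are simplified relative to the paper.

```python
import numpy as np
from scipy.optimize import minimize

def calibration_weights(X_group, target_means):
    """Exponential-tilting weights: reweight one group so its weighted covariate
    means exactly match target_means (here, the combined-sample means)."""
    Xc = X_group - target_means                  # center on the target
    def dual(lam):                               # convex dual; gradient is the weighted mean of Xc
        return np.log(np.exp(Xc @ lam).sum())
    lam = minimize(dual, np.zeros(X_group.shape[1]), method="BFGS").x
    w = np.exp(Xc @ lam)
    return w / w.sum()

def calibrated_ate(X, A, Y):
    m = X.mean(axis=0)                           # combined-group moments
    w1 = calibration_weights(X[A == 1], m)
    w0 = calibration_weights(X[A == 0], m)
    return np.dot(w1, Y[A == 1]) - np.dot(w0, Y[A == 0])

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
A = (rng.random(500) < 0.5).astype(int)
Y = X @ np.array([1.0, 0.5, -0.5]) + 2.0 * A + rng.normal(size=500)
print(calibrated_ate(X, A, Y))   # should be near the true effect of 2
```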
Lumber and plywood used in California apartment construction, 1969
George B. Harpole
1973-01-01
The volume of lumber and plywood products used in apartment construction in California was estimated from a sample of apartments for which architectural plans were completed in 1969. Excluding wood mouldings, doors, cabinets, and shelving, an average of 4.85 board feet of lumber and 2.03 square feet (3/8-inch basis) of plywood per square foot of floor area were used in...
NASA Astrophysics Data System (ADS)
Rui, Zhenhua
This study analyzes historical cost data of 412 pipelines and 220 compressor stations. On the basis of this analysis, the study also evaluates the feasibility of an Alaska in-state gas pipeline using Monte Carlo simulation techniques. Analysis of pipeline construction costs shows that component costs, shares of cost components, and learning rates for material and labor costs vary by diameter, length, volume, year, and location. Overall average learning rates for pipeline material and labor costs are 6.1% and 12.4%, respectively. Overall average cost shares for pipeline material, labor, miscellaneous, and right of way (ROW) are 31%, 40%, 23%, and 7%, respectively. Regression models are developed to estimate pipeline component costs for different lengths, cross-sectional areas, and locations. An analysis of inaccuracy in pipeline cost estimation demonstrates that the cost estimation of pipeline cost components is biased except in the case of total costs. Overall overrun rates for pipeline material, labor, miscellaneous, ROW, and total costs are 4.9%, 22.4%, -0.9%, 9.1%, and 6.5%, respectively, and project size, capacity, diameter, location, and year of completion have differing degrees of impact on cost overruns of pipeline cost components. Analysis of compressor station costs shows that component costs, shares of cost components, and learning rates for material and labor costs vary in terms of capacity, year, and location. Average learning rates for compressor station material and labor costs are 12.1% and 7.48%, respectively. Overall average cost shares of material, labor, miscellaneous, and ROW are 50.6%, 27.2%, 21.5%, and 0.8%, respectively. Regression models are developed to estimate compressor station component costs in different capacities and locations. An investigation into inaccuracies in compressor station cost estimation demonstrates that the cost estimation for compressor stations is biased except in the case of material costs. Overall average overrun rates for compressor station material, labor, miscellaneous, land, and total costs are 3%, 60%, 2%, -14%, and 11%, respectively, and cost overruns for cost components are influenced by location and year of completion to different degrees. Monte Carlo models are developed and simulated to evaluate the feasibility of an Alaska in-state gas pipeline by assigning triangular distributions to the values of economic parameters. Simulated results show that the construction of an Alaska in-state natural gas pipeline is feasible under three scenarios: 500 million cubic feet per day (mmcfd), 750 mmcfd, and 1000 mmcfd.
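A sketch of the Monte Carlo feasibility step with triangular distributions; every numeric range below is a placeholder assumption, not a value from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# (min, mode, max) triangular distributions for the economic parameters (hypothetical values)
capex  = rng.triangular(6e9, 8e9, 11e9, n)     # construction cost, $
tariff = rng.triangular(1.5, 2.0, 2.5, n)      # transport tariff, $ per Mcf
volume = rng.triangular(450, 500, 550, n)      # throughput, MMcf per day
opex   = rng.triangular(0.15, 0.20, 0.30, n)   # operating cost, $ per Mcf

years, discount = 25, 0.08
annual_cash = (tariff - opex) * volume * 1e3 * 365                    # $ per year
npv = -capex + annual_cash * (1 - (1 + discount) ** -years) / discount

print(f"P(NPV > 0) = {(npv > 0).mean():.2f}")
print(f"mean NPV   = {npv.mean() / 1e9:.1f} $B")
```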
ERIC Educational Resources Information Center
Breierova, Lucia; Duflo, Esther
2003-01-01
This paper takes advantage of a massive school construction program that took place in Indonesia between 1973 and 1978 to estimate the effect of education on fertility and child mortality. Time and region varying exposure to the school construction program generates instrumental variables for the average education in the household, and the…
Cochran, Kimberly; Townsend, Timothy; Reinhart, Debra; Heck, Howell
2007-01-01
Methodology for the accounting, generation, and composition of building-related construction and demolition (C&D) debris at a regional level was explored. Six specific categories of debris were examined: residential construction, nonresidential construction, residential demolition, nonresidential demolition, residential renovation, and nonresidential renovation. Debris produced from each activity was calculated as the product of the total area of activity and waste generated per unit area of activity. Similarly, composition was estimated as the product of the total area of activity and the amount of each waste component generated per unit area. The area of activity was calculated using statistical data, and individual site studies were used to assess the average amount of waste generated per unit area. The application of the methodology was illustrated using Florida, USA: approximately 3,750,000 metric tons of building-related C&D debris were estimated to have been generated in Florida in 2000. Of that amount, concrete represented 56%, wood 13%, drywall 11%, miscellaneous debris 8%, asphalt roofing materials 7%, metal 3%, cardboard 1%, and plastic 1%. This model differs from others because it accommodates regional construction styles and available data. The resulting generation amount per capita is less than the US estimate, attributable to the high-construction, low-demolition activity seen in Florida.
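The accounting reduces to area-times-rate products; a sketch with placeholder activity areas and per-area rates (the composition shares echo the Florida percentages quoted above, the other numbers are invented for illustration).

```python
# Generation = activity area x per-area generation rate; composition = total x mass fractions.
activities = {                      # thousand m2 of activity (placeholder)
    "residential_construction": 1200.0,
    "nonresidential_demolition": 300.0,
}
rate = {                            # kg of debris per m2 of activity (placeholder)
    "residential_construction": 25.0,
    "nonresidential_demolition": 900.0,
}
composition = {                     # mass fractions, simplified from the shares cited above
    "concrete": 0.56, "wood": 0.13, "drywall": 0.11, "other": 0.20,
}

total_kg = sum(1e3 * area * rate[a] for a, area in activities.items())
by_component = {c: f * total_kg for c, f in composition.items()}
print(f"total: {total_kg / 1e9:.2f} million metric tons")
print(by_component)
```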
NASA Astrophysics Data System (ADS)
Mundermann, Lars; Mundermann, Annegret; Chaudhari, Ajit M.; Andriacchi, Thomas P.
2005-01-01
Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine and sports. Recent technological advancements provide methods for constructing 3D surfaces directly. Of these new technologies, visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull reconstruction are unknown. The purpose of this study was to evaluate the conditions that influence the accuracy of 3D shape-from-silhouette reconstruction of body segments dependent on number of cameras, camera resolution and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of 1.0 mm. In general, setups with fewer than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 or more cameras provided good volume estimations. Body segment volumes were obtained with an average error of 10% at a 640×480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with an increasing number of cameras (33.3% using 4 cameras; 10.5% using 8 cameras; 4.1% using 16 cameras; 1.2% using 64 cameras).
F-35 Risk during Department of Defense Financial Crisis
2013-03-01
...program, an approach intended to save time and money by launching construction at an early stage and at the same time the aircraft was put through...
Regional patterns of future runoff changes from Earth system models constrained by observation
NASA Astrophysics Data System (ADS)
Yang, Hui; Zhou, Feng; Piao, Shilong; Huang, Mengtian; Chen, Anping; Ciais, Philippe; Li, Yue; Lian, Xu; Peng, Shushi; Zeng, Zhenzhong
2017-06-01
In the recent Intergovernmental Panel on Climate Change assessment, multimodel ensembles (arithmetic model averaging, AMA) were constructed with equal weights given to Earth system models, without considering the performance of each model at reproducing current conditions. Here we use Bayesian model averaging (BMA) to construct a weighted model ensemble for runoff projections. Higher weights are given to models with better performance in estimating historical decadal mean runoff. Using the BMA method, we find that by the end of this century, the increase in global runoff (9.8 ± 1.5%) under Representative Concentration Pathway 8.5 is significantly lower than estimated from AMA (12.2 ± 1.3%). BMA presents a less severe runoff increase than AMA at northern high latitudes and a more severe decrease in Amazonia. Runoff decrease in Amazonia is stronger than the intermodel difference. The intermodel difference in runoff changes is caused not only by precipitation differences among models but also by evapotranspiration differences at the high northern latitudes.
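A toy illustration of performance-weighted (BMA-style) versus equal-weight (AMA-style) averaging of model projections; the Gaussian likelihood weighting and the error scale sigma are assumptions, not the paper's exact implementation.

```python
import numpy as np

def bma_weights(model_hist, obs, sigma):
    """Weight each model by its Gaussian likelihood of reproducing the observed
    historical decadal-mean runoff (a simplified stand-in for the paper's BMA)."""
    log_lik = -0.5 * ((np.asarray(model_hist) - obs) / sigma) ** 2
    w = np.exp(log_lik - log_lik.max())
    return w / w.sum()

hist   = [2.1, 2.8, 3.4, 2.5]    # modeled historical decadal-mean runoff (arbitrary units)
future = [2.6, 3.5, 4.4, 3.0]    # projected future runoff from the same models
w = bma_weights(hist, obs=2.6, sigma=0.4)
print("AMA (equal-weight) projection:", np.mean(future))
print("BMA (performance-weight) projection:", np.dot(w, future))
```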
Costs of occupational injuries in construction in the United States.
Waehrer, Geetha M; Dong, Xiuwen S; Miller, Ted; Haile, Elizabeth; Men, Yurong
2007-11-01
This paper presents costs of fatal and nonfatal injuries for the construction industry using 2002 national incidence data from the Bureau of Labor Statistics and a comprehensive cost model that includes direct medical costs, indirect losses in wage and household productivity, as well as an estimate of the quality of life costs due to injury. Costs are presented at the three-digit industry level, by worker characteristics, and by detailed source and event of injury. The total costs of fatal and nonfatal injuries in the construction industry were estimated at $11.5 billion in 2002, 15% of the costs for all private industry. The average cost per case of fatal or nonfatal injury is $27,000 in construction, almost double the per-case cost of $15,000 for all industry in 2002. Five industries accounted for over half the industry's total fatal and nonfatal injury costs. They were miscellaneous special trade contractors (SIC 179), followed by plumbing, heating and air-conditioning (SIC 171), electrical work (SIC 173), heavy construction except highway (SIC 162), and residential building construction (SIC 152), each with over $1 billion in costs.
Shoreline development and degradation of coastal fish reproduction habitats.
Sundblad, Göran; Bergström, Ulf
2014-12-01
Coastal development has severely affected habitats and biodiversity during the last century, but quantitative estimates of the impacts are usually lacking. We utilize predictive habitat modeling and mapping of human pressures to estimate the cumulative long-term effects of coastal development in relation to fish habitats. Based on aerial photographs since the 1960s, shoreline development rates were estimated in the Stockholm archipelago in the Baltic Sea. By combining shoreline development rates with spatial predictions of fish reproduction habitats, we estimated annual habitat degradation rates for three of the most common coastal fish species, northern pike (Esox lucius), Eurasian perch (Perca fluviatilis) and roach (Rutilus rutilus). The results showed that shoreline constructions were concentrated in the reproduction habitats of these species. The estimated degradation rates, where a degraded habitat was defined as having ≥3 constructions per 100 m of shoreline, were on average 0.5% of available habitats per year and about 1% in areas close to larger population centers. Approximately 40% of available habitats were already degraded in 2005. These results provide an example of how many small construction projects over time may have a vast impact on coastal fish populations.
The link between judgments of comparative risk and own risk: further evidence.
Gold, Ron S
2007-03-01
Individuals typically believe that they are less likely than the average person to experience negative events, a phenomenon termed "unrealistic optimism". The direct method of assessing unrealistic optimism employs a question of the form, "Compared with the average person, what is the chance that X will occur to you?". However, it has been proposed that responses to such a question (direct-estimates) are based essentially just on estimates that X will occur to the self (self-estimates). If this is so, any factors that affect one of these estimates should also affect the other. This prediction was tested in two experiments. In each, direct- and self-estimates for an unfamiliar health threat - homocysteine-related heart problems - were recorded. It was found that both types of estimate were affected in the same way by varying the stated probability of having unsafe levels of homocysteine (Study 1, N=149) and varying the stated probability that unsafe levels of homocysteine will lead to heart problems (Study 2, N=111). The results are consistent with the proposal that direct-estimates are constructed just from self-estimates.
NASA Astrophysics Data System (ADS)
Zou, Hai-Long; Yu, Zu-Guo; Anh, Vo; Ma, Yuan-Lin
2018-05-01
In recent years, researchers have proposed several methods to transform time series (such as those of fractional Brownian motion) into complex networks. In this paper, we construct horizontal visibility networks (HVNs) based on α-stable Lévy motion. We aim to study how the multifractal and Laplacian spectra of the transformed networks depend on the parameters of the α-stable Lévy motion. First, we employ the sandbox algorithm to compute the mass exponents and multifractal spectrum to investigate the multifractality of these HVNs. Then we perform least squares fits to find possible relations of the average fractal dimension, the average information dimension and the average correlation dimension against these parameters, using several methods of model selection. We also investigate possible dependence relations of the eigenvalues and energy, calculated from the Laplacian and normalized Laplacian operators of the constructed HVNs, on these parameters. All of these constructions and estimates help to evaluate the validity and usefulness of mappings between time series and networks, especially between time series of α-stable Lévy motions and HVNs.
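A minimal construction of a horizontal visibility graph from a time series; here a heavy-tailed surrogate series stands in for α-stable Lévy motion, and the multifractal and Laplacian-spectrum analyses would then be computed on the resulting graph.

```python
import itertools
import networkx as nx
import numpy as np

def horizontal_visibility_graph(x):
    """Nodes i < j are linked iff every sample strictly between them is lower
    than both endpoints (the horizontal visibility criterion)."""
    g = nx.Graph()
    g.add_nodes_from(range(len(x)))
    for i, j in itertools.combinations(range(len(x)), 2):
        if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
            g.add_edge(i, j)
    return g

rng = np.random.default_rng(1)
series = np.cumsum(rng.standard_cauchy(200))   # heavy-tailed surrogate for a Lévy-type walk
hvn = horizontal_visibility_graph(series)
print(hvn.number_of_nodes(), hvn.number_of_edges())
```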
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Benzene concentration. An estimate of the average gasoline benzene concentration corresponding to the time... engineering and permitting, Procurement and Construction, and Commissioning and startup. (7) Basic information regarding the selected technology pathway for compliance (e.g., precursor re-routing or other technologies...
Fitting a function to time-dependent ensemble averaged data.
Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias
2018-05-03
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least square fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
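For a linear fit the correlation-aware error estimate has a closed (sandwich) form; the sketch below is that linear special case with an assumed exponential error covariance, in the spirit of WLS-ICE rather than a reproduction of the published software.

```python
import numpy as np

def wls_with_correlated_errors(A, y, C):
    """Weighted least squares for y ~ A @ p. The point estimate uses inverse-variance
    weights; the parameter covariance uses the full error covariance C, so temporal
    correlations enter the error bars."""
    W = np.diag(1.0 / np.diag(C))            # inverse-variance weights
    M = np.linalg.inv(A.T @ W @ A)
    p = M @ (A.T @ W @ y)
    cov_p = M @ (A.T @ W @ C @ W @ A) @ M    # correlation-aware covariance of p
    return p, np.sqrt(np.diag(cov_p))

# toy example: fit an ensemble-averaged MSD, <x^2(t)> = 2*D*t, with correlated fluctuations
t = np.linspace(0.1, 1.0, 10)
A = t[:, None]
C = 0.01 * np.exp(-np.abs(t[:, None] - t[None, :]) / 0.3)   # assumed error covariance
rng = np.random.default_rng(0)
y = 2 * 0.5 * t + rng.multivariate_normal(np.zeros(len(t)), C)
slope, slope_err = wls_with_correlated_errors(A, y, C)
print("D =", slope[0] / 2, "+/-", slope_err[0] / 2)
```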
Channel Characterization for Free-Space Optical Communications
2012-07-01
parameters. From the path-average parameters, a Cn² profile model, called the HAP model, was constructed so that the entire channel from air to ground...(SR), both of which are required to estimate the Power in the Bucket (PIB) and Power in the Fiber (PIF) associated with the FOENEX data beam. UCF was...of the path-average values of Cn², the resulting HAP Cn² profile model led to values of ground level Cn² that compared very well with actual
ERIC Educational Resources Information Center
Jolicoeur, Mark; Kahl, Melanie
2010-01-01
More than 10 years ago, the U.S. Department of Education estimated that the average age of American school facilities was 40 years. With the slowing education construction market, one can assume that this age is continuing to rise. And this system of aging school facilities begs the question: Renovate or replace? When schools are looking at their…
Power flow prediction in vibrating systems via model reduction
NASA Astrophysics Data System (ADS)
Li, Xianhui
This dissertation focuses on power flow prediction in vibrating systems. Reduced order models (ROMs) are built based on rational Krylov model reduction, which preserves power flow information from the original systems over a specified frequency band. Stiffness and mass matrices of the ROMs are obtained by projecting the original system matrices onto the subspaces spanned by forced responses. A matrix-free algorithm is designed to construct ROMs directly from the power quantities at selected interpolation frequencies. Strategies for parallel implementation of the algorithm via message passing interface are proposed. The quality of ROMs is iteratively refined according to the error estimate based on residual norms. Band capacity is proposed to provide an a priori estimate of the sizes of good-quality ROMs. Frequency averaging is recast as ensemble averaging, and a Cauchy distribution is used to simplify the computation. Besides model reduction for deterministic systems, details of constructing ROMs for parametric and nonparametric random systems are also presented. Case studies have been conducted on testbeds from the Harwell-Boeing collection. Input and coupling power flow are computed for the original systems and the ROMs. Good agreement is observed in all cases.
An annual quasidifference approach to water price elasticity
NASA Astrophysics Data System (ADS)
Bell, David R.; Griffin, Ronald C.
2008-08-01
The preferred price specification for retail water demand estimation has not been fully settled by prior literature. Empirical consistency of price indices is necessary to enable testing of competing specifications. Available methods of unbiasing the price index are summarized here. Using original rate information from several hundred Texas utilities, new indices of marginal and average price change are constructed. Marginal water price change is shown to explain consumption variation better than average water price change, based on standard information criteria. Annual change in quantity consumed per month is estimated with differences in climate variables and the new quasidifference marginal price index. As expected, the annual price elasticity of demand is found to vary with daily high and low temperatures and the frequency of precipitation.
Pittman, Jeremy Joshua; Arnall, Daryl Brian; Interrante, Sindy M.; Moffet, Corey A.; Butler, Twain J.
2015-01-01
Non-destructive biomass estimation of vegetation has been performed via remote sensing as well as physical measurements. An effective method for estimating biomass must have accuracy comparable to the accepted standard of destructive removal. Estimation or measurement of height is commonly employed to create a relationship between height and mass. This study examined several types of ground-based mobile sensing strategies for forage biomass estimation. Forage production experiments consisting of alfalfa (Medicago sativa L.), bermudagrass [Cynodon dactylon (L.) Pers.], and wheat (Triticum aestivum L.) were employed to examine sensor biomass estimation (laser, ultrasonic, and spectral) as compared to physical measurements (plate meter and meter stick) and the traditional harvest method (clipping). Predictive models were constructed via partial least squares regression, and modeled estimates were compared to the physically measured biomass. Least-significant-difference-separated mean estimates were examined to evaluate differences in the physical measurements and sensor estimates for canopy height and biomass. Differences between methods were minimal (average percent error of 11.2% between predicted values and harvester- and quadrat-harvested biomass values of 1.64 and 4.91 t·ha−1, respectively), except at the lowest measured biomass (average percent error of 89% for harvester- and quadrat-harvested biomass < 0.79 t·ha−1) and the greatest measured biomass (average percent error of 18% for harvester- and quadrat-harvested biomass > 6.4 t·ha−1). These data suggest that using mobile sensor-based biomass estimation models could be an effective alternative to the traditional clipping method for rapid, accurate in-field biomass estimation. PMID:25635415
Evaluation of statistical models for forecast errors from the HBV model
NASA Astrophysics Data System (ADS)
Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur
2010-04-01
Summary: Three statistical models for the forecast errors for inflow into the Langvatn reservoir in northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) the median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors. The parameters were conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately. The errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow, and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency (Reff) increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that their distributions were less reliable than that of Model 3. For Model 3 the median values did not fit well, since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time, Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
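A minimal sketch of the first model's structure, as described above (Box-Cox transformation followed by a first-order autoregressive model for the forecast errors), is given below; the inflow numbers are invented and the conditioning on weather classes is omitted.

```python
import numpy as np
from scipy.stats import boxcox

# Sketch of Model 1 (assumed structure): Box-Cox transform inflows,
# then fit an AR(1) model to the transformed forecast errors by least squares.
obs = np.array([12.0, 15.3, 9.8, 20.1, 18.4, 14.2, 11.7, 16.9])   # example observed inflows
fcst = np.array([11.2, 16.0, 10.5, 18.8, 19.1, 13.0, 12.4, 15.5])  # example forecasts

obs_t, lam = boxcox(obs)                  # estimate lambda from the observations
fcst_t = (fcst ** lam - 1.0) / lam        # apply the same transform to the forecasts
err = obs_t - fcst_t                      # transformed forecast errors

phi = np.polyfit(err[:-1], err[1:], 1)[0]           # AR(1) coefficient via least squares
sigma = np.std(err[1:] - phi * err[:-1], ddof=1)    # residual standard deviation
print(f"lambda={lam:.2f}, phi={phi:.2f}, residual sd={sigma:.2f}")
```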
NASA Astrophysics Data System (ADS)
Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.
2005-05-01
A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination - we find reliability-based reweighting but not statistically optimal cue combination.
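The statistically optimal combination referenced above weights each cue inversely to its variance; the sketch below shows that minimum-variance rule with invented slant estimates and variances, not data from the experiments.

```python
import numpy as np

def optimal_combination(estimates, variances):
    """Minimum-variance unbiased combination of independent cue estimates.

    Weights are inversely proportional to each cue's variance, so the more
    reliable cue dominates. The numbers used below are illustrative only.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances
    w /= w.sum()
    combined = np.dot(w, estimates)
    combined_var = 1.0 / (1.0 / variances).sum()
    return combined, combined_var, w

# Example: a visual slant estimate (low variance) and a haptic estimate (high variance)
slant, var, weights = optimal_combination([32.0, 40.0], [4.0, 16.0])
print(slant, var, weights)   # 33.6 deg, variance 3.2, weights [0.8, 0.2]
```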
Calculating sediment discharge from a highway construction site in central Pennsylvania
Reed, L.A.; Ward, J.R.; Wetzel, K.L.
1985-01-01
The Pennsylvania Department of Transportation, the Federal Highway Administration, and the U.S. Geological Survey have cooperated in a study to evaluate two methods of predicting sediment yields during highway construction. Sediment yields were calculated using the Universal Soil Loss and the Younkin Sediment Prediction Equations. Results were compared to the actual measured values, and standard errors and coefficients of correlation were calculated. Sediment discharge from the construction area was determined for storms that occurred during construction of Interstate 81 in a 0.38-square mile basin near Harrisburg, Pennsylvania. Precipitation data tabulated included total rainfall, maximum 30-minute rainfall, kinetic energy, and the erosive index of the precipitation. Highway construction data tabulated included the area disturbed by clearing and grubbing, the area in cuts and fills, the average depths of cuts and fills, the area seeded and mulched, and the area paved. Using the Universal Soil Loss Equation, sediment discharge from the construction area was calculated for storms. The standard error of estimate was 0.40 (about 105 percent), and the coefficient of correlation was 0.79. Sediment discharge from the construction area was also calculated using the Younkin Equation. The standard error of estimate of 0.42 (about 110 percent), and the coefficient of correlation of 0.77 are comparable to those from the Universal Soil Loss Equation.
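For reference, the Universal Soil Loss Equation applied above multiplies rainfall erosivity, soil erodibility, slope length-steepness, cover, and support-practice factors; the sketch below uses placeholder factor values, not those calibrated for the Interstate 81 site.

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: A = R * K * LS * C * P.

    A  - average soil loss (tons/acre in the customary US formulation)
    R  - rainfall-runoff erosivity factor
    K  - soil erodibility factor
    LS - slope length and steepness factor
    C  - cover-management factor (near 1 for bare cuts and fills)
    P  - support practice factor (mulching, sediment controls, etc.)
    """
    return R * K * LS * C * P

# Illustrative values only, not taken from the Interstate 81 study
print(usle_soil_loss(R=150.0, K=0.32, LS=2.5, C=1.0, P=0.8))  # ~96 tons/acre
```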
NASA Astrophysics Data System (ADS)
Matsunaga, Y.; Sugita, Y.
2018-06-01
A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
1980-08-01
dam. 2.2 Construction Data. No record of original construction is available for this dam. A general location plan prepared by Reino E. Hyypa ... and overuse. The slopes of the shoreline are flat and generally well covered with grass and vegetation to preclude sloughing and shoreline erosion ... roadways. It is estimated that the water depths would average 9.8 feet and that velocities of flow could cause erosion, stripping of vegetation and
Wallace, Dorothy; Prosper, Olivia; Savos, Jacob; Dunham, Ann M; Chipman, Jonathan W; Shi, Xun; Ndenga, Bryson; Githeko, Andrew
2017-03-01
A dynamical model of Anopheles gambiae larval and adult populations is constructed that matches temperature-dependent maturation times and mortality measured experimentally as well as larval instar and adult mosquito emergence data from field studies in the Kenya Highlands. Spectral classification of high-resolution satellite imagery is used to estimate household density. Indoor resting densities collected over a period of one year combined with predictions of the dynamical model give estimates of both aquatic habitat and total adult mosquito densities. Temperature and precipitation patterns are derived from monthly records. Precipitation patterns are compared with average and extreme habitat estimates to estimate available aquatic habitat in an annual cycle. These estimates are coupled with the original model to produce estimates of adult and larval populations dependent on changing aquatic carrying capacity for larvae and changing maturation and mortality dependent on temperature. This paper offers a general method for estimating the total area of aquatic habitat in a given region, based on larval counts, emergence rates, indoor resting density data, and number of households. Altering the average daily temperature and the average daily rainfall simulates the effect of climate change on annual cycles of prevalence of An. gambiae adults. We show that small increases in average annual temperature have a large impact on adult mosquito density, whether measured at model equilibrium values for a single square meter of habitat or tracked over the course of a year of varying habitat availability and temperature. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Kim, Hyokyung
2016-01-01
For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from measurements of the normalized surface cross section, sigma 0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the Global Precipitation Measurement satellite, the nominal table consists of the statistics of the rain-free sigma 0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step the cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
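A hedged reading of the stepwise expansion described above is sketched below: starting from one grid cell, neighbouring cells are added greedily, each time choosing the candidate that minimizes the variance of the pooled rain-free sigma 0 samples, until a minimum sample count is reached. The grid, data, and threshold are invented for illustration and do not reproduce the operational processing.

```python
import numpy as np

def expand_region(samples, start, min_samples):
    """Greedy variable-averaging sketch for rain-free sigma-0 statistics.

    `samples` maps grid cell (i, j) -> 1-D array of sigma-0 measurements.
    Starting from `start`, neighbouring cells are added one at a time,
    each step choosing the candidate that minimizes the variance of the
    pooled data, until `min_samples` is reached.
    """
    region = {start}
    pooled = list(samples[start])
    while len(pooled) < min_samples:
        # candidate cells adjacent to the current region
        cands = {(i + di, j + dj) for (i, j) in region
                 for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]} - region
        cands = [c for c in cands if c in samples]
        if not cands:
            break
        best = min(cands, key=lambda c: np.var(pooled + list(samples[c])))
        region.add(best)
        pooled.extend(samples[best])
    return region, np.mean(pooled), np.std(pooled)

# Tiny example on a 2x2 grid of synthetic measurements
rng = np.random.default_rng(0)
grid = {(i, j): rng.normal(10 + i, 1.0, size=5) for i in range(2) for j in range(2)}
print(expand_region(grid, start=(0, 0), min_samples=15))
```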
Shrinkage Estimators for a Composite Measure of Quality Conceptualized as a Formative Construct
Shwartz, Michael; Peköz, Erol A; Christiansen, Cindy L; Burgess, James F; Berlowitz, Dan
2013-01-01
Objective To demonstrate the value of shrinkage estimators when calculating a composite quality measure as the weighted average of a set of individual quality indicators. Data Sources Rates of 28 quality indicators (QIs) calculated from the minimum dataset from residents of 112 Veterans Health Administration nursing homes in fiscal years 2005–2008. Study Design We compared composite scores calculated from the 28 QIs using both observed rates and shrunken rates derived from a Bayesian multivariate normal-binomial model. Principal Findings Shrunken-rate composite scores, because they take into account unreliability of estimates from small samples and the correlation among QIs, have more intuitive appeal than observed-rate composite scores. Facilities can be profiled based on more policy-relevant measures than point estimates of composite scores, and interval estimates can be calculated without assuming the QIs are independent. Usually, shrunken-rate composite scores in 1 year are better able to predict the observed total number of QI events or the observed-rate composite scores in the following year than the initial year observed-rate composite scores. Conclusion Shrinkage estimators can be useful when a composite measure is conceptualized as a formative construct. PMID:22716650
NASA Astrophysics Data System (ADS)
Tessler, Zachary D.; Vörösmarty, Charles J.; Overeem, Irina; Syvitski, James P. M.
2018-03-01
Modern deltas are dependent on human-mediated freshwater and sediment fluxes. Changes to these fluxes impact delta biogeophysical functioning and affect the long-term sustainability of these landscapes for human and for natural systems. Here we present contemporary estimates of long-term mean sediment balance and relative sea level rise across 46 global deltas. We model scenarios of contemporary and future water resource management schemes and hydropower infrastructure in upstream river basins to explore how changing sediment fluxes impact relative sea level rise in delta systems. Model results show that contemporary sediment fluxes, anthropogenic drivers of land subsidence, and sea level rise result in delta relative sea level rise rates that average 6.8 mm/y. Assessment of impacts of planned and under-construction dams on relative sea level rise rates suggests increases on the order of 1 mm/y in deltas with new upstream construction. Sediment fluxes are estimated to decrease by up to 60% in the Danube and 21% in the Ganges-Brahmaputra-Meghna if all currently planned dams are constructed. Reduced sediment retention on deltas caused by increased river channelization and management has a larger impact, increasing relative sea level rise on average by nearly 2 mm/y. Long-term delta sustainability requires a more complete understanding of how geophysical and anthropogenic change impact delta geomorphology. Local and regional strategies for sustainable delta management that focus on local and regional drivers of change, especially groundwater and hydrocarbon extraction and upstream dam construction, can be highly impactful even in the context of global climate-induced sea level rise.
Rauch, Geraldine; Brannath, Werner; Brückner, Matthias; Kieser, Meinhard
2018-05-01
In many clinical trial applications, the endpoint of interest corresponds to a time-to-event endpoint. In this case, group differences are usually expressed by the hazard ratio. Group differences are commonly assessed by the logrank test, which is optimal under the proportional hazards assumption. However, there are many situations in which this assumption is violated. Especially in applications where a full population and several subgroups, or a composite time-to-first-event endpoint and several components, are considered, the proportional hazards assumption usually does not simultaneously hold true for all test problems under investigation. As an alternative effect measure, Kalbfleisch and Prentice proposed the so-called 'average hazard ratio'. The average hazard ratio is based on a flexible weighting function to modify the influence of time and has a meaningful interpretation even in the case of non-proportional hazards. Despite this favorable property, it is hardly ever used in practice, whereas the standard hazard ratio is commonly reported in clinical trials regardless of whether the proportional hazards assumption holds true or not. There exist two main approaches to construct corresponding estimators and tests for the average hazard ratio, where the first relies on weighted Cox regression and the second on a simple plug-in estimator. The aim of this work is to give a systematic comparison of these two approaches and the standard logrank test for different time-to-event settings with proportional and non-proportional hazards and to illustrate the pros and cons in application. We conduct a systematic comparative study based on Monte-Carlo simulations and a real clinical trial example. Our results suggest that the properties of the average hazard ratio depend on the underlying weighting function. The two approaches to construct estimators and related tests show very similar performance for adequately chosen weights. In general, the average hazard ratio defines a more valid effect measure than the standard hazard ratio under non-proportional hazards, and the corresponding tests provide a power advantage over the common logrank test. As non-proportional hazards are often met in clinical practice and the average hazard ratio tests often outperform the common logrank test, this approach should be used more routinely in applications. Schattauer GmbH.
Underground storage of imported water in the San Gorgonio Pass area, southern California
Bloyd, Richard M.
1971-01-01
The San Gorgonio Pass ground-water basin is divided into the Beaumont, Banning, Cabazon, San Timoteo, South Beaumont, Banning Bench, and Singleton storage units. The Beaumont storage unit, centrally located in the agency area, is the largest in volume of the storage units. Estimated long-term average annual precipitation in the San Gorgonio Pass Water Agency drainage area is 332,000 acre-feet, and estimated average annual recoverable water is 24,000 acre-feet, less than 10 percent of the total precipitation. Estimated average annual surface outflow is 1,700 acre-feet, and estimated average annual ground-water recharge is 22,000 acre-feet. Projecting back to probable steady-state conditions, of the 22,000 acre-feet of recharge, 16,000 acre-feet per year became subsurface outflow into Coachella Valley, 6,000 acre-feet into the Redlands area, and 220 acre-feet into Potrero Canyon. After extensive development, estimated subsurface outflow from the area in 1967 was 6,000 acre-feet into the Redlands area, 220 acre-feet into Potrero Canyon, and 800 acre-feet into the fault systems south of the Banning storage unit, unwatered during construction of a tunnel. Subsurface outflow into Coachella Valley in 1967 is probably less than 50 percent of the steady-state flow. An anticipated 17,000 acre-feet of water per year will be imported by 1980. Information developed in this study indicates it is technically feasible to store imported water in the eastern part of the Beaumont storage unit without causing waterlogging in the storage area and without losing any significant quantity of stored water.
Deng, Cai; Zhang, Wanchang
2018-05-30
As the backland of the Qinghai-Tibet Plateau, the river source region is highly sensitive to changes in global climate. Air temperature estimation using remote sensing satellites provides a new way of conducting studies in the field of climate change. A geographically weighted regression model was applied to estimate synchronic air temperature from 2001 to 2015 using Moderate Resolution Imaging Spectroradiometer (MODIS) data. The results were R² = 0.913 and RMSE = 2.47 °C, which confirmed the feasibility of the estimation. The spatial distribution and variation characteristics of the average annual and seasonal air temperature were analyzed. The findings are as follows: (1) the distribution of average annual air temperature has significant terrain characteristics. The reduction in average annual air temperature along the elevation of the region is 0.19 °C/km, whereas the reduction in the average annual air temperature along the latitude is 0.04 °C/degree. (2) The average annual air temperature increase in the region is 0.37 °C/decade. The average air temperature increase could be arranged in the following decreasing order: Yangtze River Basin > Mekong River Basin > Nujiang River Basin > Yarlung Zangbo River Basin > Yellow River Basin. The fastest, namely, the Yangtze River Basin, is 0.47 °C/decade. (3) The average air temperature rise in spring, summer, and winter generally increases with higher altitude. The average annual air temperature in different types of land, in decreasing order, is as follows: wetland > construction land > bare land > glacier > shrub grassland > arable land > forest land > water body, and that of the fastest one, wetland, is 0.13 °C/year.
A Visual-Based Approach for Indoor Radio Map Construction Using Smartphones.
Liu, Tao; Zhang, Xing; Li, Qingquan; Fang, Zhixiang
2017-08-04
Localization of users in indoor spaces is a common issue in many applications. Among various technologies, a Wi-Fi fingerprinting based localization solution has attracted much attention, since it can be easily deployed using the existing off-the-shelf mobile devices and wireless networks. However, the collection of the Wi-Fi radio map is quite labor-intensive, which limits its potential for large-scale application. In this paper, a visual-based approach is proposed for the construction of a radio map in anonymous indoor environments. This approach collects multi-sensor data, e.g., Wi-Fi signals, video frames, inertial readings, when people are walking in indoor environments with smartphones in their hands. Then, it spatially recovers the trajectories of people by using both visual and inertial information. Finally, it estimates the location of fingerprints from the trajectories and constructs a Wi-Fi radio map. Experiment results show that the average location error of the fingerprints is about 0.53 m. A weighted k-nearest neighbor method is also used to evaluate the constructed radio map. The average localization error is about 3.2 m, indicating that the quality of the constructed radio map is at the same level as those constructed by site surveying. However, this approach can greatly reduce the human labor cost, which increases the potential for applying it to large indoor environments.
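The weighted k-nearest neighbour evaluation mentioned above can be sketched generically as follows; the distance metric, inverse-distance weighting, and toy radio map are assumptions for illustration rather than the paper's exact implementation.

```python
import numpy as np

def wknn_locate(radio_map, positions, rss_query, k=3):
    """Weighted k-nearest-neighbour localization against a Wi-Fi radio map.

    radio_map : (n_fingerprints, n_aps) RSS values at surveyed points
    positions : (n_fingerprints, 2) x/y coordinates of those points
    rss_query : (n_aps,) observed RSS vector to localize
    A generic WKNN sketch; handling of missing access points and the exact
    distance metric used in the paper may differ.
    """
    d = np.linalg.norm(radio_map - rss_query, axis=1)     # signal-space distance
    idx = np.argsort(d)[:k]                               # k closest fingerprints
    w = 1.0 / (d[idx] + 1e-6)                             # inverse-distance weights
    w /= w.sum()
    return w @ positions[idx]                             # weighted average location

# Toy radio map: 4 fingerprints, 3 access points (values in dBm, invented)
rm = np.array([[-40, -60, -70], [-45, -55, -72], [-60, -50, -65], [-70, -48, -60]])
pos = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 1.0], [3.0, 2.0]])
print(wknn_locate(rm, pos, np.array([-46, -56, -71]), k=2))
```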
The Fiscal Effects of School Choice Programs on Public School Districts. National Research
ERIC Educational Resources Information Center
Scafidi, Benjamin
2012-01-01
In this report, the author constructs the first ever estimates for each state and the District of Columbia of the short-run fixed costs of educating children in public schools. He endeavors to make cautious overestimates of these short-run fixed costs. The United States' average spending per student was $12,450 in 2008-09. The author estimates…
NASA Technical Reports Server (NTRS)
Houlahan, Padraig; Scalo, John
1992-01-01
A new method of image analysis is described, in which images partitioned into 'clouds' are represented by simplified skeleton images, called structure trees, that preserve the spatial relations of the component clouds while disregarding information concerning their sizes and shapes. The method can be used to discriminate between images of projected hierarchical (multiply nested) and random three-dimensional simulated collections of clouds constructed on the basis of observed interstellar properties, and even intermediate systems formed by combining random and hierarchical simulations. For a given structure type, the method can distinguish between different subclasses of models with different parameters and reliably estimate their hierarchical parameters: average number of children per parent, scale reduction factor per level of hierarchy, density contrast, and number of resolved levels. An application to a column density image of the Taurus complex constructed from IRAS data is given. Moderately strong evidence for a hierarchical structural component is found, and parameters of the hierarchy, as well as the average volume filling factor and mass efficiency of fragmentation per level of hierarchy, are estimated. The existence of nested structure contradicts models in which large molecular clouds are supposed to fragment, in a single stage, into roughly stellar-mass cores.
Residential building codes, affordability, and health protection: a risk-tradeoff approach.
Hammitt, J K; Belsky, E S; Levy, J I; Graham, J D
1999-12-01
Residential building codes intended to promote health and safety may produce unintended countervailing risks by adding to the cost of construction. Higher construction costs increase the price of new homes and may increase health and safety risks through "income" and "stock" effects. The income effect arises because households that purchase a new home have less income remaining for spending on other goods that contribute to health and safety. The stock effect arises because suppression of new-home construction leads to slower replacement of less safe housing units. These countervailing risks are not presently considered in code debates. We demonstrate the feasibility of estimating the approximate magnitude of countervailing risks by combining the income effect with three relatively well understood and significant home-health risks. We estimate that a code change that increases the nationwide cost of constructing and maintaining homes by $150 (0.1% of the average cost to build a single-family home) would induce offsetting risks yielding between 2 and 60 premature fatalities or, including morbidity effects, between 20 and 800 lost quality-adjusted life years (both discounted at 3%) each year the code provision remains in effect. To provide a net health benefit, the code change would need to reduce risk by at least this amount. Future research should refine these estimates, incorporate quantitative uncertainty analysis, and apply a full risk-tradeoff approach to real-world case studies of proposed code changes.
Hangar Fire Suppression Utilizing Novec 1230
2018-01-01
... fuel fires in aircraft hangars. A 30×30×8-ft concrete-and-steel test structure was constructed for this test series. Four discharge assemblies ... structure. System discharge parameters (discharge time, discharge rate, and quantity of agent discharged) were adjusted to produce the desired Novec 1230
Pannullo, Francesca; Lee, Duncan; Neal, Lucy; Dalvi, Mohit; Agnew, Paul; O'Connor, Fiona M; Mukhopadhyay, Sabyasachi; Sahu, Sujit; Sarran, Christophe
2017-03-27
Estimating the long-term health impact of air pollution in a spatio-temporal ecological study requires representative concentrations of air pollutants to be constructed for each geographical unit and time period. Averaging concentrations in space and time is commonly carried out, but little is known about how robust the estimated health effects are to different aggregation functions. A second under-researched question is what impact air pollution is likely to have in the future. We conducted a study for England between 2007 and 2011, investigating the relationship between respiratory hospital admissions and different pollutants: nitrogen dioxide (NO2); ozone (O3); particulate matter, the latter including particles with an aerodynamic diameter less than 2.5 micrometers (PM2.5) and less than 10 micrometers (PM10); and sulphur dioxide (SO2). Bayesian Poisson regression models accounting for localised spatio-temporal autocorrelation were used to estimate the relative risks (RRs) of pollution on disease risk, and for each pollutant four representative concentrations were constructed using combinations of spatial and temporal averages and maximums. The estimated RRs were then used to make projections of the numbers of likely respiratory hospital admissions in the 2050s attributable to air pollution, based on emission projections from a number of Representative Concentration Pathways (RCPs). NO2 exhibited the largest association with respiratory hospital admissions out of the pollutants considered, with estimated increased risks of between 0.9 and 1.6% for a one standard deviation increase in concentrations. In the future the projected numbers of respiratory hospital admissions attributable to NO2 in the 2050s are lower than present-day rates under 3 Representative Concentration Pathways (RCPs): 2.6, 6.0, and 8.5, which is due to projected reductions in future NO2 emissions and concentrations. NO2 concentrations exhibit consistent substantial present-day health effects regardless of how a representative concentration is constructed in space and time. Thus, as concentrations are predicted to remain above limits set by European Union legislation until the 2030s in parts of urban England, NO2 will remain a substantial health risk for some time.
The effect of airline deregulation on automobile fatalities.
Bylow, L F; Savage, I
1991-10-01
This paper attempts to quantify the effects of airline deregulation in the United States on intercity automobile travel and consequently on the number of highway fatalities. A demand model is constructed for auto travel, which includes variables representing the price and availability of air service. A reduced form model of the airline market is then estimated. Finding that deregulation has decreased airfares and increased flights, it is estimated that auto travel has been reduced by 2.2% per year on average. Given assumptions on the characteristics of drivers switching modes and the types of roads they drove on, the number of automobile fatalities averted since 1978 is estimated to be in the range 200-300 per year.
Chung, Chen-Yuan; Heebner, Joseph; Baskaran, Harihara; Welter, Jean F.; Mansour, Joseph M.
2015-01-01
Tissue-engineered (TE) cartilage constructs tend to develop inhomogeneously, thus, to predict the mechanical performance of the tissue, conventional biomechanical testing, which yields average material properties, is of limited value. Rather, techniques for evaluating regional and depth-dependent properties of TE cartilage, preferably non-destructively, are required. The purpose of this study was to build upon our previous results and to investigate the feasibility of using ultrasound elastography to non-destructively assess the depth-dependent biomechanical characteristics of TE cartilage while in a sterile bioreactor. As a proof-of-concept, and to standardize an assessment protocol, a well-characterized three-layered hydrogel construct was used as a surrogate for TE cartilage, and was studied under controlled incremental compressions. The strain field of the construct predicted by elastography was then validated by comparison with a poroelastic finite-element analysis (FEA). On average, the differences between the strains predicted by elastography and the FEA were within 10%. Subsequently engineered cartilage tissue was evaluated in the same test fixture. Results from these examinations showed internal regions where the local strain was 1–2 orders of magnitude greater than that near the surface. These studies document the feasibility of using ultrasound to evaluate the mechanical behaviors of maturing TE constructs in a sterile environment. PMID:26077987
Empirical Bayes estimation of undercount in the decennial census.
Cressie, N
1989-12-01
Empirical Bayes methods are used to estimate the extent of the undercount at the local level in the 1980 U.S. census. "Grouping of like subareas from areas such as states, counties, and so on into strata is a useful way of reducing the variance of undercount estimators. By modeling the subareas within a stratum to have a common mean and variances inversely proportional to their census counts, and by taking into account sampling of the areas (e.g., by dual-system estimation), empirical Bayes estimators that compromise between the (weighted) stratum average and the sample value can be constructed. The amount of compromise is shown to depend on the relative importance of stratum variance to sampling variance. These estimators are evaluated at the state level (51 states, including Washington, D.C.) and stratified on race/ethnicity (3 strata) using data from the 1980 postenumeration survey (PEP 3-8, for the noninstitutional population)." excerpt
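The compromise between the sample value and the (weighted) stratum average described above has the standard empirical Bayes shrinkage form; the sketch below shows that form with invented numbers and omits the census-count scaling and dual-system estimation used in the paper.

```python
def eb_shrinkage(sample_value, stratum_mean, sampling_var, stratum_var):
    """Empirical Bayes compromise between a subarea's sample value and its
    stratum mean. The shrinkage factor B grows with the sampling variance,
    so noisy subareas are pulled harder toward the stratum average.
    Illustrative form only; the paper additionally scales variances inversely
    with census counts and obtains sample values by dual-system estimation.
    """
    B = sampling_var / (sampling_var + stratum_var)
    return (1.0 - B) * sample_value + B * stratum_mean

# Example: undercount rate of 2.5% measured noisily, stratum average 1.0%
print(eb_shrinkage(sample_value=2.5, stratum_mean=1.0,
                   sampling_var=0.8, stratum_var=0.4))   # -> 1.5
```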
Cascading Oscillators in Decoding Speech: Reflection of a Cortical Computation Principle
2016-09-06
Combining an experimental paradigm based on Ghitza and Greenberg (2009) for speech with the approach of Farbood et al. (2013) to timing in key ... Fuglsang, 2015). A model was developed which uses modulation spectrograms to construct an oscillating time series synchronized with the slowly varying ...
Koo, Choongwan; Hong, Taehoon; Lee, Minhyun; Park, Hyo Seon
2013-05-07
The photovoltaic (PV) system is considered an unlimited source of clean energy, whose amount of electricity generation changes according to the monthly average daily solar radiation (MADSR). It is revealed that the MADSR distribution in South Korea has very diverse patterns due to the country's climatic and geographical characteristics. This study aimed to develop a MADSR estimation model for locations without measured MADSR data, using an advanced case-based reasoning (CBR) model, which is a hybrid methodology combining CBR with an artificial neural network, multi-regression analysis, and a genetic algorithm. The average prediction accuracy of the advanced CBR model was very high at 95.69%, and the standard deviation of the prediction accuracy was 3.67%, showing a significant improvement in prediction accuracy and consistency. A case study was conducted to verify the proposed model. The proposed model could be useful for an owner or construction manager in charge of determining whether or not to introduce the PV system and where to install it. It would also benefit contractors in a competitive bidding process by allowing them to accurately estimate the electricity generation of the PV system in advance and to conduct an economic and environmental feasibility study from the life cycle perspective.
Effective transient behaviour of inclusions in diffusion problems
NASA Astrophysics Data System (ADS)
Brassart, Laurence; Stainier, Laurent
2018-06-01
This paper is concerned with the effective transport properties of heterogeneous media in which there is a high contrast between the phase diffusivities. In this case the transient response of the slow phase induces a memory effect at the macroscopic scale, which needs to be included in a macroscopic continuum description. This paper focuses on the slow phase, which we take as a dispersion of inclusions of arbitrary shape. We revisit the linear diffusion problem in such inclusions in order to identify the structure of the effective (average) inclusion response to a chemical load applied on the inclusion boundary. We identify a chemical creep function (similar to the creep function of viscoelasticity), from which we construct estimates with a reduced number of relaxation modes. The proposed estimates admit an equivalent representation based on a finite number of internal variables. These estimates allow us to predict the average inclusion response under arbitrary time-varying boundary conditions at very low computational cost. A heuristic generalisation to concentration-dependent diffusion coefficient is also presented. The proposed estimates for the effective transient response of an inclusion can serve as a building block for the formulation of multi-inclusion homogenisation schemes.
[Carbon footprint of buildings in the urban agglomeration of central Liaoning, China].
Shi, Yu; Yun, Ying Xia; Liu, Chong; Chu, Ya Qi
2017-06-18
With the development of urbanization in China, buildings consume large amounts of material and energy. How to estimate the carbon emissions of buildings is an important scientific problem. The carbon footprint of the central Liaoning urban agglomeration was studied with a carbon footprint approach, geographic information system (GIS), and high-resolution remote sensing (HRRS) technology. The results showed that the construction carbon footprint coefficient of the central Liaoning urban agglomeration was 269.16 kg·m⁻². The approach of interpreting total building area and spatial distribution with HRRS was effective, and the accuracy was 89%. The extraction approach was critical for estimating the total carbon footprint and its spatial distribution. The building area and total carbon footprint of the central Liaoning urban agglomeration in descending order were Shenyang, Anshan, Fushun, Liaoyang, Yingkou, Tieling, and Benxi. The annual average increment of footprint from 2011 to 2013 in descending order was Shenyang, Benxi, Fushun, Anshan, Tieling, Yingkou, and Liaoyang. The accurate estimation of the construction carbon footprint and its spatial distribution is of significance for the planning and optimization of carbon emission reduction.
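As a hedged arithmetic illustration of how such a coefficient is applied, the total construction footprint scales as the per-area coefficient times the building area extracted from imagery; the area value below is invented, not a figure from the study.

```python
# Total construction carbon footprint as the per-area coefficient
# (269.16 kg per m^2 of building area, from the abstract) times an
# illustrative remotely sensed floor area.
COEFF_KG_PER_M2 = 269.16

def construction_footprint_tonnes(building_area_m2):
    return COEFF_KG_PER_M2 * building_area_m2 / 1000.0   # kg -> tonnes

print(construction_footprint_tonnes(1_500_000))  # ~403,740 t for 1.5 million m^2 of floor area
```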
Cost analysis of the built environment: the case of bike and pedestrian trails in Lincoln, Neb.
Wang, Guijing; Macera, Caroline A; Scudder-Soucie, Barbara; Schmid, Tom; Pratt, Michael; Buchner, David; Heath, Gregory
2004-04-01
We estimated the annual cost of bike and pedestrian trails in Lincoln, Neb., using construction and maintenance costs provided by the Department of Parks and Recreation of Nebraska. We obtained the number of users of 5 trails from a 1998 census report. The annual construction cost of each trail was calculated by using 3%, 5%, and 10% discount rates for a period of useful life of 10, 30, and 50 years. The average cost per mile and per user was calculated. Trail length averaged 3.6 miles (range = 1.6-4.6 miles). Annual cost in 2002 dollars ranged from $25,762 to $248,479 (mean = $124,927; median = $171,064). The cost per mile ranged from $5,735 to $54,017 (mean = $35,355; median = $37,994). The annual cost per user was $235 (range = $83-$592), whereas the per capita annual medical cost of inactivity was $622. Construction of trails fits a wide range of budgets and may be a viable health amenity for most communities. To increase trail cost-effectiveness, efforts to decrease cost and increase the number of users should be considered.
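Annualizing a one-time construction cost over a useful life at a discount rate is commonly done with the capital recovery factor; the sketch below illustrates that standard calculation, consistent with the rates and lifetimes described above, but the dollar inputs are invented and the study's exact formula is an assumption.

```python
def annualized_cost(construction_cost, rate, years, annual_maintenance=0.0):
    """Annualize a one-time construction cost with the capital recovery factor
    and add yearly maintenance. Illustrative of the approach described
    (3%, 5%, or 10% discount rates over 10-, 30-, or 50-year useful lives);
    the input dollar figures are not taken from the Lincoln trail data.
    """
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return construction_cost * crf + annual_maintenance

print(round(annualized_cost(1_000_000, rate=0.05, years=30, annual_maintenance=20_000)))
```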
Freethey, G.W.; Spangler, L.E.; Monheiser, W.J.
1994-01-01
A 48-square-mile area in the southeastern part of the Salt Lake Valley, Utah, was studied to determine if generalized information obtained from geologic maps, water-level maps, and drillers' logs could be used to estimate hydraulic conductivity, porosity, and slope of the potentiometric surface: the three properties needed to calculate average linear velocity of ground water. Estimated values of these properties could be used by water-management and regulatory agencies to compute values of average linear velocity, which could be further used to estimate travel time of ground water along selected flow lines, and thus to determine wellhead protection areas around public-supply wells. The methods used to estimate the three properties are based on assumptions about the drillers' descriptions, the depositional history of the sediments, and the boundary conditions of the hydrologic system. These assumptions were based on geologic and hydrologic information determined from previous investigations. The reliability of the estimated values for hydrologic properties and average linear velocity depends on the accuracy of these assumptions. Hydraulic conductivity of the principal aquifer was estimated by calculating the thickness-weighted average of values assigned to different drillers' descriptions of material penetrated during the construction of 98 wells. Using these 98 control points, the study area was divided into zones representing approximate hydraulic-conductivity values of 20, 60, 100, 140, 180, 220, and 250 feet per day. This range of values is about the same range of values used in developing a ground-water flow model of the principal aquifer in the early 1980s. Porosity of the principal aquifer was estimated by compiling the range of porosity values determined or estimated during previous investigations of basin-fill sediments, and then using five different values ranging from 15 to 35 percent to delineate zones in the study area that were assumed to be underlain by similar deposits. Delineation of the zones was based on depositional history of the area and the distribution of sediments shown on a surficial geologic map. Water levels in wells were measured twice in 1990: during late winter when ground-water withdrawals were the least and water levels the highest, and again in late summer, when ground-water withdrawals were the greatest and water levels the lowest. These water levels were used to construct potentiometric-contour maps and subsequently to determine the variability of the slope in the potentiometric surface in the area. Values for the three properties, derived from the described sources of information, were used to produce a map showing the general distribution of average linear velocity of ground water moving through the principal aquifer of the study area. Velocity derived ranged from 0.06 to 144 feet per day with a median of about 3 feet per day. Values were slightly faster for late summer 1990 than for late winter 1990, mainly because increased withdrawal of water during the summer created slightly steeper hydraulic-head gradients between the recharge area near the mountain front and the well fields farther to the west. The fastest average linear-velocity values were located at the mouth of Little Cottonwood Canyon and south of Dry Creek near the mountain front, where the hydraulic conductivity was estimated to be the largest because the drillers described the sediments to be predominantly clean and coarse grained.
Both of these areas also had steep slopes in the potentiometric surface. Other areas where average linear velocity was fast included small areas near pumping wells where the slope in the potentiometric surface was locally steepened. No apparent relation between average linear velocity and porosity could be seen in the mapped distributions of these two properties. Calculation of travel time along a flow line to a well in the southwestern part of the study area during the sum
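The three properties estimated above combine through the familiar average-linear-velocity relation v = K i / n, with hydraulic conductivity taken as a thickness-weighted average over a driller's log; the sketch below uses invented log intervals, gradient, and porosity, not values from the Salt Lake Valley study.

```python
import numpy as np

def thickness_weighted_K(thicknesses_ft, K_values_ft_per_day):
    """Thickness-weighted average hydraulic conductivity from a driller's log."""
    t = np.asarray(thicknesses_ft, dtype=float)
    K = np.asarray(K_values_ft_per_day, dtype=float)
    return (t * K).sum() / t.sum()

def average_linear_velocity(K_ft_per_day, gradient, porosity):
    """Average linear velocity v = K * i / n (Darcy flux divided by porosity)."""
    return K_ft_per_day * gradient / porosity

# Illustrative log: 20 ft of clean gravel (250 ft/d), 30 ft of sand (100 ft/d),
# 10 ft of clayey sand (20 ft/d); the gradient and porosity are also made up.
K_avg = thickness_weighted_K([20, 30, 10], [250, 100, 20])
print(K_avg)                                                          # ~136.7 ft/d
print(average_linear_velocity(K_avg, gradient=0.01, porosity=0.25))   # ~5.5 ft/d
```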
Beamforming using subspace estimation from a diagonally averaged sample covariance.
Quijano, Jorge E; Zurk, Lisa M
2017-08-01
The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
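The diagonal-averaging step described above can be sketched as follows; only the Toeplitz-constraint step is shown, while the maximum-entropy extrapolation and signal-subspace beamforming from the paper are omitted, and the array size and snapshots are invented.

```python
import numpy as np

def toeplitz_from_diagonal_average(R):
    """Average a sample covariance matrix along its (sub)diagonals to obtain
    a Toeplitz-constrained estimate. Only this redistribution step is shown;
    the maximum-entropy extrapolation and subspace projection used in the
    paper are not reproduced here.
    """
    n = R.shape[0]
    first_col = np.array([np.mean(np.diag(R, -k)) for k in range(n)])
    T = np.empty_like(R)
    for i in range(n):
        for j in range(n):
            # Hermitian Toeplitz structure: entry depends only on i - j
            T[i, j] = first_col[i - j] if i >= j else np.conj(first_col[j - i])
    return T

# Toy example: sample covariance from a few snapshots of a 4-element array
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 6))          # 4 sensors, 6 snapshots
R = X @ X.conj().T / X.shape[1]
print(toeplitz_from_diagonal_average(R))
```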
2017-03-20
computation, Prime Implicates, Boolean Abstraction, real-time embedded software, software synthesis, correct-by-construction software design, model ... "types for time-dependent data-flow networks". J.-P. Talpin, P. Jouvelot, S. Shukla. ACM-IEEE Conference on Methods and Models for System Design ...
1987-06-01
... Administration and U.S. Navy in cooperation with Bethlehem Steel Corporation Marine Construction Division ...
System Analysis and Design of a Low-Cost Micromechanical Seeker System
2008-06-01
... who devoted a valuable amount of time to advising me with academic coursework as well as thesis research. Dan, your attention to detail and ability ... never have come to be. Many thanks to Sean George, who sacrificed his valuable time to guide me through constructing the projectile flight simulation
Montuno, Michael A; Kohner, Andrew B; Foote, Kelly D; Okun, Michael S
2013-01-01
Deep brain stimulation (DBS) is an effective technique that has been utilized to treat advanced and medication-refractory movement and psychiatric disorders. In order to avoid implanted pulse generator (IPG) failure and consequent adverse symptoms, a better understanding of IPG battery longevity and management is necessary. Existing methods for battery estimation lack the specificity required for clinical incorporation. Technical challenges prevent higher accuracy longevity estimations, and a better approach to managing end of DBS battery life is needed. The literature was reviewed and DBS battery estimators were constructed by the authors and made available on the web at http://mdc.mbi.ufl.edu/surgery/dbs-battery-estimator. A clinical algorithm for management of DBS battery life was constructed. The algorithm takes into account battery estimations and clinical symptoms. Existing methods of DBS battery life estimation utilize an interpolation of averaged current drains to calculate how long a battery will last. Unfortunately, this technique can only provide general approximations. There are inherent errors in this technique, and these errors compound with each iteration of the battery estimation. Some of these errors cannot be accounted for in the estimation process, and some of the errors stem from device variation, battery voltage dependence, battery usage, battery chemistry, impedance fluctuations, interpolation error, usage patterns, and self-discharge. We present web-based battery estimators along with an algorithm for clinical management. We discuss the perils of using a battery estimator without taking into account the clinical picture. Future work will be needed to provide more reliable management of implanted device batteries; however, implementation of a clinical algorithm that accounts for both estimated battery life and for patient symptoms should improve the care of DBS patients. © 2012 International Neuromodulation Society.
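The interpolation-of-averaged-current-drains approach critiqued above reduces, in its simplest form, to dividing remaining battery capacity by an averaged current drain. The sketch below is a generic illustration of that simple estimate, not the authors' web-based estimator, and the capacity and current values are invented.

```python
def estimated_longevity_years(remaining_capacity_mAh, average_current_uA):
    """Rough battery-longevity estimate: remaining capacity divided by an
    averaged current drain. This is the generic interpolation-style estimate
    the review critiques; it ignores voltage dependence, impedance drift,
    self-discharge, and usage patterns, and the numbers are illustrative only.
    """
    hours = remaining_capacity_mAh / (average_current_uA / 1000.0)  # mAh / mA
    return hours / (24 * 365.25)

print(round(estimated_longevity_years(2000, average_current_uA=75.0), 1))  # ~3.0 years
```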
Doran, Christopher M; Ling, Rod; Gullestrup, Jorgen; Swannell, Sarah; Milner, Allison
2016-03-01
Little research has been conducted into the cost and prevention of self-harm in the workplace. To quantify the economic cost of self-harm and suicide among New South Wales (NSW) construction industry (CI) workers and to examine the potential economic impact of implementing Mates in Construction (MIC). Direct and indirect costs were estimated. Effectiveness was measured using the relative risk ratio (RRR). In Queensland (QLD), relative suicide risks were estimated for 5-year periods before and after the commencement of MIC. For NSW, the difference between the expected (i.e., using NSW pre-MIC [2008-2012] suicide risk) and counterfactual suicide cases (i.e., applying QLD RRR) provided an estimate of potential suicide cases averted in the post-MIC period (2013-2017). Results were adjusted using the average uptake (i.e., 9.4%) of MIC activities in QLD. Economic savings from averted cases were compared with the cost of implementing MIC. The cost of self-harm and suicide in the NSW CI was AU $527 million in 2010. MIC could potentially avert 0.4 suicides, 1.01 full incapacity cases, and 4.92 short absences, generating annual savings of AU $3.66 million. For every AU $1 invested, the economic return is approximately AU $4.6. MIC represents a positive economic investment in workplace safety.
You, Wei; Cretu, Edmond; Rohling, Robert
2013-11-01
This paper investigates a low computational cost, super-resolution ultrasound imaging method that leverages the asymmetric vibration mode of CMUTs. Instead of focusing on the broadband received signal on the entire CMUT membrane, we utilize the differential signal received on the left and right part of the membrane obtained by a multi-electrode CMUT structure. The differential signal reflects the asymmetric vibration mode of the CMUT cell excited by the nonuniform acoustic pressure field impinging on the membrane, and has a resonant component in immersion. To improve the resolution, we propose an imaging method as follows: a set of manifold matrices of CMUT responses for multiple focal directions are constructed off-line with a grid of hypothetical point targets. During the subsequent imaging process, the array sequentially steers to multiple angles, and the amplitudes (weights) of all hypothetical targets at each angle are estimated in a maximum a posteriori (MAP) process with the manifold matrix corresponding to that angle. Then, the weight vector undergoes a directional pruning process to remove the false estimation at other angles caused by the side lobe energy. Ultrasound imaging simulation is performed on ring and linear arrays with a simulation program adapted with a multi-electrode CMUT structure capable of obtaining both average and differential received signals. Because the differential signals from all receiving channels form a more distinctive temporal pattern than the average signals, better MAP estimation results are expected than using the average signals. The imaging simulation shows that using differential signals alone or in combination with the average signals produces better lateral resolution than the traditional phased array or using the average signals alone. This study is an exploration into the potential benefits of asymmetric CMUT responses for super-resolution imaging.
One-shot estimate of MRMC variance: AUC.
Gallas, Brandon D
2006-03-01
One popular study design for estimating the area under the receiver operating characteristic curve (AUC) is the one in which a set of readers reads a set of cases: a fully crossed design in which every reader reads every case. The variability of the subsequent reader-averaged AUC has two sources: the multiple readers and the multiple cases (MRMC). In this article, we present a nonparametric estimate for the variance of the reader-averaged AUC that is unbiased and does not use resampling tools. The one-shot estimate is based on the MRMC variance derived by the mechanistic approach of Barrett et al. (2005), as well as the nonparametric variance of a single-reader AUC derived in the literature on U statistics. We investigate the bias and variance properties of the one-shot estimate through a set of Monte Carlo simulations with simulated model observers and images. The different simulation configurations vary numbers of readers and cases, amounts of image noise and internal noise, as well as how the readers are constructed. We compare the one-shot estimate to a method that uses the jackknife resampling technique with an analysis of variance model at its foundation (Dorfman et al. 1992). The name one-shot highlights that resampling is not used. The one-shot and jackknife estimators behave similarly, with the one-shot being marginally more efficient when the number of cases is small. We have derived a one-shot estimate of the MRMC variance of AUC that is based on a probabilistic foundation with limited assumptions, is unbiased, and compares favorably to an established estimate.
Prediction Model for Impulsive Noise on Structures
2012-09-01
construction usually have an interior wall finish of: a) gypsum wallboard (also called plasterboard or drywall), b) plaster, or c) wood paneling ... [table fragment: Gypsum Plaster, Wall Board 11,67 0.04 NA] For simply-supported beams vibrating in their fundamental mode, the value of KS is needed for ... Dev of log10(f0) for wood panel interior to be average for wood walls with plaster or gypsum board interior. (8) L(w) based on estimated standard
Two-Stage Bayesian Model Averaging in Endogenous Variable Models*
Lenkoski, Alex; Eicher, Theo S.; Raftery, Adrian E.
2013-01-01
Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471
Lin, Chaohsin; Hsu, Shuofen
2014-12-01
It is well known that the differences-in-differences (DD) estimator rests on the assumption that, in the absence of treatment, the average outcomes for the treated and control groups would follow a common trend over time. That can be problematic, especially when selection into treatment is influenced by unobserved individual behavior correlated with medical utilization. The aim of this study was to develop an index for controlling a patient's unobserved heterogeneous response to reform, in order to improve the comparability of treatment assignment. This study showed that a DD estimator of the reform effects can be decomposed into effects induced by moral hazard and by changes in health risk within the same treated/untreated group. This article also presented evidence that the constructed index of the price elasticity of the adjusted clinical group has good statistical properties for identifying the impact of reform. © The Author(s) 2012.
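For readers unfamiliar with the baseline contrast being decomposed, here is a minimal two-period differences-in-differences computation under the common-trend assumption; the group means are invented for illustration and do not come from the study.

```python
# Minimal two-group, two-period differences-in-differences contrast
# (illustrative numbers only; not the study's data).
pre_treated, post_treated = 10.0, 16.0   # average outcome, treated group
pre_control, post_control = 9.0, 12.0    # average outcome, control group

# Under the common-trend assumption, the control group's change stands in
# for the treated group's counterfactual change.
dd_estimate = (post_treated - pre_treated) - (post_control - pre_control)
print(dd_estimate)  # 3.0
```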
Kessel, Eric D; Ketcheson, Scott J; Price, Jonathan S
2018-07-15
Post-mine landscape reclamation of the Athabasca Oil Sands Region requires the use of tailings sand, an abundant mine-waste material that often contains large amounts of sodium (Na⁺). Due to the mobility of Na⁺ in groundwater and its effects on vegetation, water quality is a concern when incorporating mine waste materials, especially when attempting to construct groundwater-fed peatlands. This research is the first published account of Na⁺ redistribution in groundwater from a constructed tailings sand upland to an adjacent constructed fen peat deposit (Nikanotee Fen). A permeable petroleum coke layer underlying the fen, extending partway into the upland, was important in directing flow and Na⁺ beneath the peat, as designed. Initially, Na⁺ concentration was highest in the tailings sand (average of 232 mg L⁻¹) and lowest in fen peat (96 mg L⁻¹). Precipitation-driven recharge to the upland controlled the mass flux of Na⁺ from upland to fen, which ranged from 2 to 13 tons of Na⁺ per year. The mass flux was highest in the driest summer, in part because dry-period flowpaths direct groundwater with higher concentrations of Na⁺ into the coke layer, and in part because of the high evapotranspiration loss from the fen in dry periods, which induces upward water flow. With the estimated flux rates of 336 mm yr⁻¹, the Na⁺ arrival time to the fen surface was estimated to be between 4 and 11 years. Over the four-year study, average Na⁺ concentrations within the fen rooting zone increased from 87 to 200 mg L⁻¹, and in the tailings sand decreased to 196 mg L⁻¹. The planting of more salt-tolerant vegetation in the fen is recommended, given the potential for Na⁺ accumulation. This study shows that reclamation designs can use layered flow systems to control the rate, pattern, and timing of solute interactions with surface soil systems. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
Pang, Shih-Hao; Frey, H Christopher; Rasdorf, William J
2009-08-15
Substitution of soy-based biodiesel fuels for petroleum diesel will alter life cycle emissions for construction vehicles. A life cycle inventory was used to estimate fuel cycle energy consumption and emissions of selected pollutants and greenhouse gases. Real-world measurements using a portable emission measurement system (PEMS) were made for five backhoes, four front-end loaders, and six motor graders on both fuels, from which fuel consumption and tailpipe emission factors of CO, HC, NO(x), and PM were estimated. Life cycle fossil energy reductions are estimated at 9% for B20 and 42% for B100 versus petroleum diesel based on the current national energy mix. Fuel cycle emissions will contribute a larger share of total life cycle emissions as new engines enter the in-use fleet. The average differences in life cycle emissions for B20 versus diesel are 3.5% higher for NO(x), 11.8% lower for PM, 1.6% higher for HC, and 4.1% lower for CO. Local urban tailpipe emissions are estimated to be 24% lower for HC, 20% lower for CO, 17% lower for PM, and 0.9% lower for NO(x). Thus, there are environmental trade-offs, such as between rural and urban areas. The key sources of uncertainty in the B20 LCI are the vehicle emission factors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, C; Zhong, Y; Wang, T
2015-06-15
Purpose: To investigate the accuracy of estimating the mean glandular dose (MGD) for homogeneous breast phantoms by converting from the average breast dose using the F-factor in cone beam breast CT. Methods: EGSnrc-based Monte Carlo codes were used to estimate the MGDs. Hemi-ellipsoids 13 cm in diameter and 10 cm high were used to simulate pendant-geometry breasts. Two different types of hemi-ellipsoidal models were employed: voxels in quasi-homogeneous phantoms were designated as either adipose or glandular tissue, while voxels in homogeneous phantoms were designated as a mixture of adipose and glandular tissues. Breast compositions of 25% and 50% volume glandular fractions (VGFs), defined as the ratio of glandular tissue voxels to entire breast voxels in the quasi-homogeneous phantoms, were studied. These VGFs were converted into glandular fractions by weight and used to construct the corresponding homogeneous phantoms. 80 kVp x-rays with a mean energy of 47 keV were used in the simulation. A total of 10⁹ photons were used to image the phantoms and the energies deposited in the phantom voxels were tallied. Breast doses in homogeneous phantoms were averaged over all voxels and then used to calculate the MGDs using the F-factors evaluated at the mean energy of the x-rays. The MGDs for quasi-homogeneous phantoms were computed directly by averaging the doses over all glandular tissue voxels. The MGDs estimated for the two types of phantoms were normalized to the free-in-air dose at the iso-center and compared. Results: The normalized MGDs were 0.756 and 0.732 mGy/mGy for the 25% and 50% VGF homogeneous breasts and 0.761 and 0.733 mGy/mGy for the corresponding quasi-homogeneous breasts, respectively. The MGDs estimated for the two types of phantoms agreed within 1% in this study. Conclusion: MGDs for homogeneous breast models may be adequately estimated by converting from the average breast dose using the F-factor.
NASA Astrophysics Data System (ADS)
Wang, Dong; Tse, Peter W.
2015-05-01
Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Secondly, the state space model of the proposed health index is constructed. A general sequential Monte Carlo method is employed to derive the parameters of the state space model. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.
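A rough sketch of the two ingredients named in the abstract, a moving-average degradation index and extrapolation to an alert threshold, is given below. The real method fits a state space model with a sequential Monte Carlo (particle) filter; the linear extrapolation, window lengths, threshold, and data here are simplifying assumptions.

```python
import numpy as np

def moving_average(x, window=5):
    """Simple moving average used as a smoothed degradation index."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

# Hypothetical raw wear indicator sampled once per day
rng = np.random.default_rng(1)
raw = 0.02 * np.arange(200) + rng.normal(0, 0.3, 200)
index = moving_average(raw, window=10)

# Crude stand-in for the paper's state-space extrapolation: fit a linear trend
# to the recent index values and extrapolate to the alert threshold.
t = np.arange(len(index))
slope, _ = np.polyfit(t[-50:], index[-50:], 1)
threshold = 5.0
rul_days = (threshold - index[-1]) / slope if slope > 0 else np.inf
print(f"estimated remaining useful life: {rul_days:.1f} days")
```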
Household's willingness to pay for arsenic safe drinking water in Bangladesh.
Khan, Nasreen Islam; Brouwer, Roy; Yang, Hong
2014-10-01
This study examines willingness to pay (WTP) in Bangladesh for arsenic (As) safe drinking water across different As-risk zones, applying a double bound discrete choice value elicitation approach. The study aims to provide a robust estimate of the benefits of As safe drinking water supply, which is compared to the results from a similar study published almost 10 years ago using a single bound estimation procedure. Tests show that the double bound valuation design does not suffer from anchoring or incentive incompatibility effects. Health risk awareness levels are high, and households are willing to pay on average about 5 percent of their average annual disposable household income for As safe drinking water. Important factors influencing WTP include the bid amount to construct a communal deep tubewell for As safe water supply, the risk zone where respondents live, household income, water consumption, awareness of water source contamination, whether household members are affected by As contamination, and whether they already take mitigation measures. Copyright © 2014 Elsevier Ltd. All rights reserved.
Bayesian source term estimation of atmospheric releases in urban areas using LES approach.
Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo
2018-05-05
The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process of the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained by the Reynolds-averaged Navier-Stokes (RANS) equations, which yields building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the predictions of both airflow and dispersion. Therefore, it is important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on LES approach using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of the existing method using a RANS model. The results show that the proposed method reduces the errors of source location and releasing strength by 77% and 28%, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.
Stratigraphic framework for Pliocene paleoclimate reconstruction: The correlation conundrum
Dowsett, H.J.; Robinson, M.M.
2006-01-01
Pre-Holocene paleoclimate reconstructions face a correlation conundrum because complications inherent in the stratigraphic record impede the development of synchronous reconstruction. The Pliocene Research, Interpretation and Synoptic Mapping (PRISM) paleoenvironmental reconstructions have carefully balanced temporal resolution and paleoclimate proxy data to achieve a useful and reliable product and are the most comprehensive pre-Pleistocene data sets available for analysis of warmer-than-present climate and for climate modeling experiments. This paper documents the stratigraphic framework for the mid-Pliocene sea surface temperature (SST) reconstruction of the North Atlantic and explores the relationship between stratigraphic/temporal resolution and various paleoceanographic estimates of SST. The magnetobiostratigraphic framework for the PRISM North Atlantic region is constructed from planktic foraminifer, calcareous nannofossil and paleomagnetic reversal events recorded in deep-sea cores and calibrated to age. Planktic foraminifer census data from multiple samples within the mid-Pliocene yield multiple SST estimates for each site. Extracting a single SST value at each site from multiple estimates, given the limitations of the material and stratigraphic resolution, is problematic but necessary for climate model experiments. The PRISM reconstruction, unprecedented in its integration of many different types of data at a focused stratigraphic interval, utilizes a time slab approach and is based on warm peak average temperatures. A greater understanding of the dynamics of the climate system and significant advances in models now mandate more precise, globally distributed yet temporally synchronous SST estimates than are available through averaging techniques. Regardless of the precision used to correlate between sequences within the mid-Pliocene, a truly synoptic reconstruction in the temporal sense is unlikely. SST estimates from multiple proxies promise to further refine paleoclimate reconstructions but must consider the complications associated with each method, what each proxy actually records, and how these different proxies compare in time-averaged samples.
Construction and demolition waste indicators.
Mália, Miguel; de Brito, Jorge; Pinheiro, Manuel Duarte; Bravo, Miguel
2013-03-01
The construction industry is one of the biggest and most active sectors of the European Union (EU), consuming more raw materials and energy than any other economic activity. Furthermore, construction waste is the commonest waste produced in the EU. Current EU legislation sets out to implement construction and demolition waste (CDW) prevention and recycling measures. However it lacks tools to accelerate the development of a sector as bound by tradition as the building industry. The main objective of the present study was to determine indicators to estimate the amount of CDW generated on site both globally and by waste stream. CDW generation was estimated for six specific sectors: new residential construction, new non-residential construction, residential demolition, non-residential demolition, residential refurbishment, and non-residential refurbishment. The data needed to develop the indicators was collected through an exhaustive survey of previous international studies. The indicators determined suggest that the average composition of waste generated on site is mostly concrete and ceramic materials. Specifically, for new residential and new non-residential construction the production of concrete waste in buildings with a reinforced concrete structure lies between 17.8 and 32.9 kg/m² and between 18.3 and 40.1 kg/m², respectively. For the residential and non-residential demolition sectors the production of this waste stream in buildings with a reinforced concrete structure varies from 492 to 840 kg/m² and from 401 to 768 kg/m², respectively. For the residential and non-residential refurbishment sectors the production of concrete waste in buildings lies between 18.9 and 45.9 kg/m² and between 18.9 and 191.2 kg/m², respectively.
Influence of the time scale on the construction of financial networks.
Emmert-Streib, Frank; Dehmer, Matthias
2010-09-30
In this paper we investigate the definition and formation of financial networks. Specifically, we study the influence of the time scale on their construction. For our analysis we use correlation-based networks obtained from the daily closing prices of stock market data. More precisely, we use the stocks that currently comprise the Dow Jones Industrial Average (DJIA) and estimate financial networks where nodes correspond to stocks and edges correspond to non-vanishing correlation coefficients. That is, we include an edge in the network only if the correlation coefficient is statistically significantly different from zero. This construction procedure results in unweighted, undirected networks. By separating the time series of stock prices into non-overlapping intervals, we obtain one network per interval. The length of these intervals corresponds to the time scale of the data, whose influence on the construction of the networks is studied in this paper. Numerical analysis of four different measures in dependence on the time scale for the construction of networks allows us to gain insights about the intrinsic time scale of the stock market with respect to a meaningful graph-theoretical analysis.
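A minimal sketch of the construction procedure described above: split a return series into non-overlapping windows and, within each window, connect two stocks only when their correlation is statistically significantly different from zero. The significance test (a Pearson p-value at alpha = 0.05), the synthetic prices, and the window length are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def correlation_network(returns, alpha=0.05):
    """Unweighted, undirected adjacency matrix for one window: an edge is
    included only if the pairwise correlation differs significantly from zero."""
    n = returns.shape[1]
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            r, p = stats.pearsonr(returns[:, i], returns[:, j])
            if p < alpha:
                adj[i, j] = adj[j, i] = 1
    return adj

# Hypothetical daily closing prices for 5 stocks over 500 days, split into
# non-overlapping windows of 50 days (the "time scale").
rng = np.random.default_rng(2)
prices = 100.0 * np.exp(np.cumsum(rng.normal(0, 0.01, (500, 5)), axis=0))
returns = np.diff(np.log(prices), axis=0)
window = 50
networks = [correlation_network(returns[k:k + window])
            for k in range(0, len(returns) - window + 1, window)]
print(len(networks), "networks; edges in first window:", networks[0].sum() // 2)
```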
NASA Astrophysics Data System (ADS)
Ospennikov, E. N.; Hilimonjuk, V. Z.
2009-04-01
Economic development of northern oil- and gas-bearing regions, even when carried out by shift-work methods, is accompanied by the construction of linear transport systems, including roads and railways. Building such roads carries risks from a whole complex of hazards defined by the environmental features of the region, including flat, strongly paludified terrain, peat development, fine-grained and easily eroded unconsolidated sediments, and complicated geocryological conditions. The geocryological conditions of Western Siberia are highly heterogeneous, with strong variability in the distribution, thickness, and continuity of permafrost soils, in the depths of seasonal thawing and frost penetration, and in the ongoing development of geocryological processes and phenomena. Thermokarst, thermal erosion and thermal abrasion develop under natural conditions; these processes are caused by partial degradation of permafrost, and frost heave also occurs during seasonal or long-term freezing of soils. Disturbance of the environment, which always accompanies road construction, causes a reorganization of geocryological systems that is accompanied by hazardous geocryological processes, such as technogenic thermokarst (forming negative relief features ranging from minor subsidence to small and medium-sized lakes), frost heave (forming frost mounds 0.5-1.5 meters high and more), and thermal erosion (gullies and ravines moving up to several thousand cubic meters of material). The development of these destructive processes within road corridors leads to emergencies through deformation and destruction of the roadbed, and to failure of natural tundra and forest-tundra ecosystems. Methodical approaches based on typification and zoning of the area by its environmental setting have been developed for estimating geocryological hazards in linear construction. The estimation was based on analysis of the development of geocryological processes under natural conditions and under particular types of geocryological conditions; the character of the failures caused by construction and operation of roads; and the severity of the hazard posed by destructive processes to particular road geotechnical systems. Three categories of territory were specified on the basis of hazard severity: very complex, complex and simple. Very complex territories are characterized by average annual soil temperatures close to 0 °C, the presence of massive ground ice and ice wedges, widespread high-ice-content ground, and active present-day development of thermokarst, thermal erosion and frost heave. Simple territories have low average annual soil temperatures (below -4 °C), no massive underground ice, and weak development of geocryological processes. All other territories, which represent a potential hazard under adverse environmental change, are classified as complex territories.
Thompson, Ryan F.
2002-01-01
A wetland was constructed in the Skunk Creek flood plain near Lyons in southeast South Dakota to mitigate for wetland areas that were filled during construction of a municipal golf course for the city of Sioux Falls. A water-rights permit was obtained to allow the city to pump water from Skunk Creek into the wetland during times when the wetland would be dry. The amount of water seeping through the wetland and recharging the underlying Skunk Creek aquifer was not known. The U.S. Geological Survey, in cooperation with the city of Sioux Falls, conducted a study during 1997-2000 to evaluate recharge to the Skunk Creek aquifer from the constructed wetland. Three methods were used to estimate recharge from the wetland to the aquifer: (1) analysis of the rate of water-level decline during periods of no inflow; (2) flow-net analysis; and (3) analysis of the hydrologic budget. The hydrologic budget also was used to evaluate the efficiency of recharge from the wetland to the aquifer. Recharge rates estimated by analysis of shut-off events ranged from 0.21 to 0.82 foot per day, but these estimates may be influenced by possible errors in volume calculations. Recharge rates determined by flow-net analysis were calculated using selected values of hydraulic conductivity and ranged from 566,000 gallons per day using a hydraulic conductivity of 0.5 foot per day to 1,684,000 gallons per day using a hydraulic conductivity of 1.0 foot per day. Recharge rates from the hydrologic budget varied from 0.74 to 0.85 foot per day, and averaged 0.79 foot per day. The amount of water lost to evapotranspiration at the study wetland is very small compared to the amount of water seeping from the wetland into the aquifer. Based on the hydrologic budget, the average recharge efficiency was estimated as 97.9 percent, which indicates that recharging the Skunk Creek aquifer by pumping water into the study wetland is highly efficient. Because the Skunk Creek aquifer is composed of sand and gravel, the 'recharge mound' is less distinct than might be found in an aquifer composed of finer materials. However, water levels recorded from piezometers in and around the wetland do show a higher water table than periods when the wetland was dry. The largest increases in water level occur between the wetland channel and Skunk Creek. The results of this study demonstrate that artificially recharged wetlands can be useful in recharging underlying aquifers and increasing water levels in these aquifers.
PockDrug: A Model for Predicting Pocket Druggability That Overcomes Pocket Estimation Uncertainties.
Borrel, Alexandre; Regad, Leslie; Xhaard, Henri; Petitjean, Michel; Camproux, Anne-Claude
2015-04-27
Predicting protein druggability is a key interest in the target identification phase of drug discovery. Here, we assess the pocket estimation methods' influence on druggability predictions by comparing statistical models constructed from pockets estimated using different pocket estimation methods: a proximity of either 4 or 5.5 Å to a cocrystallized ligand or DoGSite and fpocket estimation methods. We developed PockDrug, a robust pocket druggability model that copes with uncertainties in pocket boundaries. It is based on a linear discriminant analysis from a pool of 52 descriptors combined with a selection of the most stable and efficient models using different pocket estimation methods. PockDrug retains the best combinations of three pocket properties which impact druggability: geometry, hydrophobicity, and aromaticity. It results in an average accuracy of 87.9% ± 4.7% using a test set and exhibits higher accuracy (∼5-10%) than previous studies that used an identical apo set. In conclusion, this study confirms the influence of pocket estimation on pocket druggability prediction and proposes PockDrug as a new model that overcomes pocket estimation variability.
Energy performance of net-zero and near net-zero energy homes in New England
NASA Astrophysics Data System (ADS)
Thomas, Walter D.
Net-Zero Energy Homes (NZEHs) are homes that consume no more energy than they produce on site during the course of a year. They are well insulated and sealed, use energy efficient appliances, lighting, and mechanical equipment, are designed to maximize the benefits from day lighting, and most often use a combination of solar hot water, passive solar and photovoltaic (PV) panels to produce their on-site energy. To date, NZEHs make up a minuscule percentage of homes in the United States, and of those, few have had their actual performance measured and analyzed once built and occupied. This research focused on 19 NZEHs and near net-zero energy homes (NNZEHs) built in New England. This set of homes had varying designs, numbers of occupants, and installed technologies for energy production, space heating and cooling, and domestic hot water systems. The author worked with participating homeowners to collect construction and systems specifications, occupancy information, and twelve months of energy consumption, production and cost measurements, in order to determine whether the homes reached their respective energy performance design goals. The author found that six out of ten NZEHs achieved net-zero energy or better, while all nine of the NNZEHs achieved an energy density (kWh/ft²/person) no more than half that of the control house, also built in New England. The median construction cost for the 19 homes was $155/ft² vs. $110/ft² for the US average, their average monthly energy cost was 84% below the average for homes in New England, and their estimated CO2 emissions averaged 90% below estimated CO2 emissions from the control house. Measured energy consumption averaged 14% below predictions for the NZEHs and 38% above predictions for the NNZEHs, while generated energy was within +/- 10% of predicted for 17 out of 18 on-site PV systems. Based on these results, the author concludes that these types of homes can meet or exceed their designed energy performance (depending on occupant behavior), can be affordably built, and will have very low energy costs and CO2 emissions compared to conventional homes. In short, they are very suitable for New England.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreiber, Leslie A.; Hansen, Christopher P.; Rumble, Mark A.
Greater sage-grouse Centrocercus urophasianus populations across North America have been declining due to degradation and fragmentation of sagebrush habitat. As part of a study quantifying greater sage-grouse demographics prior to construction of a wind energy facility, we estimated apparent net nest productivity and survival rate of chicks associated with radio-equipped female sage-grouse in Carbon County, Wyoming, USA. We estimated apparent net nest productivity using a weighted mean of the average brood size and used a modified logistic-exposure method to estimate daily chick survival over a 70-day time period. Apparent nest productivity was 2.79 chicks per female (95% CI: 1.46–4.12) in 2011, 2.00 chicks per female (95% CI: 1.00–3.00) in 2012, and 1.54 chicks per female (95% CI: 0.62–2.46) in 2013. Chick survival to 70 days post-hatch was 19.10% (95% CI: 6.22–37.42%) in 2011, 4.20% (95% CI: 0.84–12.31%) in 2012, and 16.05% (95% CI: 7.67–27.22%) in 2013. These estimates were low, yet within the range of other published survival rates. Chick survival was primarily associated with year and chick age, with minor effects of average temperature between surveys and hatch date. The variability in chick survival rates across years of our study suggests annual weather patterns may have large impacts on chick survival. Thus, management actions that increase the availability of food and cover for chicks may be necessary, especially during years with drought and above-average spring temperatures.
Scale Matters: A Cost-Outcome Analysis of an m-Health Intervention in Malawi.
Larsen-Cooper, Erin; Bancroft, Emily; Rajagopal, Sharanya; O'Toole, Maggie; Levin, Ann
2016-04-01
The primary objectives of this study are to determine cost per user and cost per contact with users of a mobile health (m-health) intervention. The secondary objectives are to map costs to changes in maternal, newborn, and child health (MNCH) and to estimate costs of alternate implementation and usage scenarios. A base cost model, constructed from recurrent costs and selected capital costs, was used to estimate average cost per user and per contact of an m-health intervention. This model was mapped to statistically significant changes in MNCH intermediate outcomes to determine the cost of improvements in MNCH indicators. Sensitivity analyses were conducted to estimate costs in alternate scenarios. The m-health intervention cost $29.33 per user and $4.33 per successful contact. The average cost for each user experiencing a change in an MNCH indicator ranged from $67 to $355. The sensitivity analyses showed that cost per user could be reduced by 48% if the service were to operate at full capacity. We believe that the intervention, operating at scale, has potential to be a cost-effective method for improving maternal and child health indicators.
Scale Matters: A Cost-Outcome Analysis of an m-Health Intervention in Malawi
Bancroft, Emily; Rajagopal, Sharanya; O'Toole, Maggie; Levin, Ann
2016-01-01
Abstract Background: The primary objectives of this study are to determine cost per user and cost per contact with users of a mobile health (m-health) intervention. The secondary objectives are to map costs to changes in maternal, newborn, and child health (MNCH) and to estimate costs of alternate implementation and usage scenarios. Materials and Methods: A base cost model, constructed from recurrent costs and selected capital costs, was used to estimate average cost per user and per contact of an m-health intervention. This model was mapped to statistically significant changes in MNCH intermediate outcomes to determine the cost of improvements in MNCH indicators. Sensitivity analyses were conducted to estimate costs in alternate scenarios. Results: The m-health intervention cost $29.33 per user and $4.33 per successful contact. The average cost for each user experiencing a change in an MNCH indicator ranged from $67 to $355. The sensitivity analyses showed that cost per user could be reduced by 48% if the service were to operate at full capacity. Conclusions: We believe that the intervention, operating at scale, has potential to be a cost-effective method for improving maternal and child health indicators. PMID:26348994
Downs, S.C.; Appel, David H.
1986-01-01
Construction of the four-lane Appalachian Corridor G highway disturbed about 2 sq mi in the Coal River basin and 0.35 sq mi of the 4.75 sq mi Trace Fork basin in southern West Virginia. Construction had a negligible effect on runoff and suspended-sediment load in the Coal River and its major tributaries, the Little Coal and Big Coal Rivers. Drainage areas of the mainstem sites in the Coal River basin ranged from 269 to 862 sq mi, and average annual suspended-sediment yields ranged from 535 to 614 tons/sq mi for the 1975-81 water years. Suspended-sediment load in the smaller Trace Fork basin (4.72 sq mi) was significantly affected by the highway construction. Based on data from undisturbed areas upstream from construction, the normal background load at Trace Fork downstream from construction during the period July 1980 to September 1981 was estimated to be 830 tons; the measured load was 2,385 tons. Runoff from the 0.35 sq mi area disturbed by highway construction transported approximately 1,550 tons of sediment. Suspended-sediment loads from the construction zone were also higher than normal background loads during storms. (USGS)
Qureshi, A A; Manzoor, S; Younis, H; Shah, K H; Ahmed, T
2018-01-01
Natural radioactivity was measured in Bunair Granite using a high-purity germanium gamma-ray spectrometer and compared to the world's granites and building materials to assess its suitability for construction purposes. Average gamma-activities of 226Ra, 232Th and 40K were found to be 52.41, 58.41 and 1130.12 Bq kg⁻¹, respectively. Indoor and outdoor radiation indices, including the excess lifetime cancer risk (ELCR), were calculated. The average indoor ELCR was estimated as 3.49 × 10⁻³, and the average outdoor ELCR was assessed as 0.46 × 10⁻³. As a basic building material, Bunair Granite should be given low priority; for flooring, facing buildings, and as table tops in kitchens and other utilities, it is safe. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
The maximum intelligible range of the human voice
NASA Astrophysics Data System (ADS)
Boren, Braxton
This dissertation examines the acoustics of the spoken voice at high levels and the maximum number of people that could hear such a voice unamplified in the open air. In particular, it examines an early auditory experiment by Benjamin Franklin which sought to determine the maximum intelligible crowd for the Anglican preacher George Whitefield in the eighteenth century. Using Franklin's description of the experiment and a noise source on Front Street, the geometry and diffraction effects of such a noise source are examined to more precisely pinpoint Franklin's position when Whitefield's voice ceased to be intelligible. Based on historical maps, drawings, and prints, the geometry and material of Market Street is constructed as a computer model which is then used to construct an acoustic cone tracing model. Based on minimal values of the Speech Transmission Index (STI) at Franklin's position, Whitefield's on-axis Sound Pressure Level (SPL) at 1 m is determined, leading to estimates centering around 90 dBA. Recordings are carried out on trained actors and singers to determine their maximum time-averaged SPL at 1 m. This suggests that the greatest average SPL achievable by the human voice is 90-91 dBA, similar to the median estimates for Whitefield's voice. The sites of Whitefield's largest crowds are acoustically modeled based on historical evidence and maps. Based on Whitefield's SPL, the minimal STI value, and the crowd's background noise, this allows a prediction of the minimally intelligible area for each site. These yield maximum crowd estimates of 50,000 under ideal conditions, while crowds of 20,000 to 30,000 seem more reasonable when the crowd was reasonably quiet and Whitefield's voice was near 90 dBA.
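A back-of-the-envelope version of the dissertation's reasoning, assuming ideal free-field spherical spreading (6 dB loss per doubling of distance) and ignoring crowd absorption, reflections, and directivity. The 90 dBA source level comes from the abstract, while the 45 dBA intelligibility threshold and the listener density of one person per square metre are hypothetical choices.

```python
import math

def spl_at_distance(spl_1m, r_m):
    """Free-field spherical spreading: level drops 20*log10(r) dB relative to 1 m."""
    return spl_1m - 20.0 * math.log10(r_m)

spl_source = 90.0      # dBA at 1 m (the ceiling reported for trained voices)
required_level = 45.0  # hypothetical minimum level for intelligibility over a quiet crowd

max_radius = 10 ** ((spl_source - required_level) / 20.0)   # ~178 m
area = 0.5 * math.pi * max_radius ** 2                      # half-disc in front of the speaker
density = 1.0                                               # listeners per square metre (assumed)
print(f"radius ≈ {max_radius:.0f} m, audience ≈ {area * density:,.0f} people")
```

With these assumptions the half-disc in front of the speaker holds on the order of 50,000 listeners, consistent with the upper-bound crowd estimate quoted above.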
Basic repository environmental assessment design basis, Lavender Canyon site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1988-01-01
This study examines the engineering factors and costs associated with the construction, operation, and decommissioning of a high-level nuclear waste repository in salt in the Paradox Basin in Lavender Canyon, Utah. The study assumes a repository capacity of 36,000 metric tons of heavy metal (MTHM) of unreprocessed spent fuel and 36,000 MTHM of commercial high-level reprocessing waste, along with 7020 canisters of defense high-level reprocessing waste and associated quantities of remote- and contact-handled transuranic waste (TRU). With the exception of TRU, all the waste forms are placed in 300- to 1000-year-life carbon-steel waste packages in a collocated waste handling and packaging facility (WHPF), which is also described. The construction, operation, and decommissioning of the proposed repository are estimated to cost approximately $5.51 billion. Costs include those for the collocated WHPF, engineering, and contingency, but exclude waste form assembly and shipment to the site and waste package fabrication and shipment to the site. These costs reflect the relative average wage rates of the region and the relatively sound nature of the salt at this site. Construction would require an estimated 7.75 years. Engineering factors and costs are not strongly influenced by environmental considerations. 51 refs., 24 figs., 20 tabs.
Lead Coolant Test Facility Systems Design, Thermal Hydraulic Analysis and Cost Estimate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soli Khericha; Edwin Harvego; John Svoboda
2012-01-01
The Idaho National Laboratory prepared preliminary technical and functional requirements (T&FR), a thermal hydraulic design, and a cost estimate for a lead coolant test facility. The purpose of this small scale facility is to simulate lead coolant fast reactor (LFR) coolant flow in an open lattice geometry core using seven electrical rods and liquid lead or lead-bismuth eutectic coolant. Based on review of current world lead or lead-bismuth test facilities and research needs listed in the Generation IV Roadmap, five broad areas of requirements were identified as listed: (1) Develop and Demonstrate Feasibility of Submerged Heat Exchanger; (2) Develop and Demonstrate Open-lattice Flow in Electrically Heated Core; (3) Develop and Demonstrate Chemistry Control; (4) Demonstrate Safe Operation; and (5) Provision for Future Testing. This paper discusses the preliminary design of systems, thermal hydraulic analysis, and simplified cost estimate. The facility thermal hydraulic design is based on the maximum simulated core power using seven electrical heater rods of 420 kW; average linear heat generation rate of 300 W/cm. The core inlet temperature for liquid lead or Pb/Bi eutectic is 420 °C. The design includes approximately seventy-five data measurements such as pressure, temperature, and flow rates. The preliminary estimated cost of construction of the facility is $3.7M (in 2006 $). It is also estimated that the facility will require two years to be constructed and ready for operation.
Sophocleous, M.
2000-01-01
A practical methodology for recharge characterization was developed based on several years of field-oriented research at 10 sites in the Great Bend Prairie of south-central Kansas. This methodology combines the soil-water budget on a storm-by-storm year-round basis with the resulting watertable rises. The estimated 1985-1992 average annual recharge was less than 50 mm/year, with a range from 15 mm/year (during the 1988 drought) to 178 mm/year (during the 1993 flood year). Most of this recharge occurs during the spring months. To regionalize these site-specific estimates, an additional methodology based on multiple (forward) regression analysis combined with classification and GIS overlay analyses was developed and implemented. The multiple regression analysis showed that the most influential variables were, in order of decreasing importance, total annual precipitation, average maximum springtime soil-profile water storage, average shallowest springtime depth to watertable, and average springtime precipitation rate. Therefore, four GIS (ARC/INFO) data "layers" or coverages were constructed for the study region based on these four variables, and each such coverage was classified into the same number of data classes to avoid biasing the results. The normalized regression coefficients were employed to weigh the class rankings of each recharge-affecting variable. This approach resulted in recharge zonations that agreed well with the site recharge estimates. During the "Great Flood of 1993," when rainfall totals exceeded normal levels by ~200% in the northern portion of the study region, the developed regionalization methodology was tested against such extreme conditions, and proved to be both practical, based on readily available or easily measurable data, and robust. It was concluded that the combination of multiple regression and GIS overlay analyses is a powerful and practical approach to regionalizing small samples of recharge estimates.
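The regionalization step can be pictured as a weighted GIS overlay: each coverage is classified into ranks and the ranks are combined using the normalized regression coefficients as weights. The sketch below uses hypothetical weights and a toy 3x3 grid; the actual coefficient values and class breaks are not reported in this summary.

```python
import numpy as np

# Hypothetical normalized regression weights for the four variables, in the
# order of decreasing importance reported: annual precipitation, springtime
# soil-profile water storage, springtime depth to water table, and springtime
# precipitation rate (values are illustrative, not the study's).
weights = np.array([0.45, 0.25, 0.20, 0.10])

# Class ranks (1 = low, 5 = high) for each GIS coverage on a small 3x3 grid.
ranks = np.array([
    [[5, 4, 3], [4, 3, 2], [3, 2, 1]],   # total annual precipitation
    [[4, 4, 3], [3, 3, 2], [2, 2, 1]],   # soil-profile water storage
    [[5, 3, 2], [4, 3, 2], [3, 2, 2]],   # depth to water table
    [[3, 3, 3], [2, 2, 2], [1, 1, 1]],   # springtime precipitation rate
])

# Weighted overlay: each cell's recharge score is the weighted sum of its class ranks.
recharge_score = np.tensordot(weights, ranks, axes=1)
print(np.round(recharge_score, 2))
```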
Integrated Water Resources Simulation Model for Rural Community
NASA Astrophysics Data System (ADS)
Li, Y.-H.; Liao, W.-T.; Tung, C.-P.
2012-04-01
The purpose of this study is to develop several water resources simulation models for residential houses, constructed wetlands and farms, and then to integrate these models for a rural community. Domestic and irrigation water uses are the major water demands in a rural community. To build a model estimating domestic water demand for residential houses, the average water use per person per day must first be accounted for, including water uses for the kitchen, bathroom, toilet and laundry. On the other hand, rice is the major crop in the study region, and its productivity sometimes depends on the quantity of irrigation water. The irrigation demand can be estimated from crop water use, field leakage and water distribution losses. Irrigation water comes from rainfall, the water supply system and reclaimed water treated by constructed wetlands. In recent years, constructed wetlands have played an important role in water recycling: they can purify domestic wastewater for recycling and reuse. After treatment in constructed wetlands, the reclaimed water can be reused for flushing toilets, watering gardens and irrigating farms. Constructed wetlands are a highly economical means of treating wastewater by imitating the processes of natural wetlands. In general, the treatment efficiency of constructed wetlands is determined by evapotranspiration, inflow, and water temperature. This study uses system dynamics modeling to develop models for the different water resource components in a rural community, and these models are then integrated into a whole system. The model is used not only to simulate how water moves through the different components, including residential houses, constructed wetlands and farms, but also to evaluate the efficiency of water use. By analyzing the flow of water, the water resource simulation model can optimize water resource distribution under different scenarios, and the results can provide suggestions for designing the water resource system of a rural community. Keywords: Water Resources, Simulation Model, Domestic Water, Irrigation, Constructed Wetland, Rural Community
Sequential deconvolution from wave-front sensing using bivariate simplex splines
NASA Astrophysics Data System (ADS)
Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai
2015-05-01
Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence-degraded images based on simultaneous recording of short-exposure images and wave-front sensor data. This paper applies the multivariate spline method to sequential DWFS: first, a bivariate simplex-spline-based average-slope measurement model is built for the Shack-Hartmann wave-front sensor; next, a well-conditioned least-squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; the distorted wave-front is then uniquely determined by the estimated spline coefficients; and the object image is finally obtained by non-blind deconvolution. Simulated experiments at different turbulence strengths show that our method yields superior image restoration and noise rejection, especially when extracting multidirectional phase derivatives.
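The core estimation step is an ordinary least-squares solve for the spline coefficients from stacked Shack-Hartmann slope measurements. The sketch below uses a random design matrix as a stand-in for the bivariate simplex-spline slope model, so it illustrates only the linear-algebra step, not the actual basis construction.

```python
import numpy as np

# Stack slope measurements from several Shack-Hartmann frames row-wise and
# solve A c ≈ s for the coefficient vector c. Here A is random and purely
# illustrative; in the paper it encodes the simplex-spline slope responses.
rng = np.random.default_rng(3)
n_measurements, n_coeffs = 400, 30
A = rng.normal(size=(n_measurements, n_coeffs))
c_true = rng.normal(size=n_coeffs)
s = A @ c_true + rng.normal(scale=0.05, size=n_measurements)   # noisy slopes

c_hat, *_ = np.linalg.lstsq(A, s, rcond=None)
print("max coefficient error:", np.abs(c_hat - c_true).max())
```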
Geohydrology and simulation of ground-water flow in the aquifer system near Calvert City, Kentucky
Starn, J.J.; Arihood, L.D.; Rose, M.F.
1995-01-01
The U.S. Geological Survey, in cooperation with the Kentucky Natural Resources and Environmental Protection Cabinet, constructed a two-dimensional, steady-state ground-water-flow model to estimate hydraulic properties, contributing areas to discharge boundaries, and the average linear velocity at selected locations in an aquifer system near Calvert City, Ky. Nonlinear regression was used to estimate values of model parameters and the reliability of the parameter estimates. The regression minimizes the weighted difference between observed and calculated hydraulic heads and rates of flow. The calibrated model generally was better than alternative models considered, and although adding transmissive faults in the bedrock produced a slightly better model, fault transmissivity was not estimated reliably. The average transmissivity of the aquifer was 20,000 feet squared per day. Recharge to two outcrop areas, the McNairy Formation of Cretaceous age and the alluvium of Quaternary age, were 0.00269 feet per day (11.8 inches per year) and 0.000484 feet per day (2.1 inches per year), respectively. Contributing areas to wells at the Calvert City Water Company in 1992 did not include the Calvert City Industrial Complex. Since completing the fieldwork for this study in 1992, the Calvert City Water Company discontinued use of their wells and began withdrawing water from new wells that were located 4.5 miles east-southeast of the previous location; the contributing area moved farther from the industrial complex. The extent of the alluvium contributing water to wells was limited by the overlying lacustrine deposits. The average linear ground-water velocity at the industrial complex ranged from 0.90 feet per day to 4.47 feet per day with a mean of 1.98 feet per day.
Greater sage-grouse apparent nest productivity and chick survival in Carbon County, Wyoming
Schreiber, Leslie A.; Hansen, Christopher P.; Rumble, Mark A.; ...
2016-03-01
Greater sage-grouse Centrocercus urophasianus populations across North America have been declining due to degradation and fragmentation of sagebrush habitat. As part of a study quantifying greater sage-grouse demographics prior to construction of a wind energy facility, we estimated apparent net nest productivity and survival rate of chicks associated with radio-equipped female sage-grouse in Carbon County, Wyoming, USA. We estimated apparent net nest productivity using a weighted mean of the average brood size and used a modified logistic-exposure method to estimate daily chick survival over a 70-day time period. Apparent nest productivity was 2.79 chicks per female (95% CI: 1.46–4.12) in 2011, 2.00 chicks per female (95% CI: 1.00–3.00) in 2012, and 1.54 chicks per female (95% CI: 0.62–2.46) in 2013. Chick survival to 70 days post-hatch was 19.10% (95% CI: 6.22–37.42%) in 2011, 4.20% (95% CI: 0.84–12.31%) in 2012, and 16.05% (95% CI: 7.67–27.22%) in 2013. These estimates were low, yet within the range of other published survival rates. Chick survival was primarily associated with year and chick age, with minor effects of average temperature between surveys and hatch date. The variability in chick survival rates across years of our study suggests annual weather patterns may have large impacts on chick survival. Thus, management actions that increase the availability of food and cover for chicks may be necessary, especially during years with drought and above-average spring temperatures.
NASA Astrophysics Data System (ADS)
Siirila, E. R.; Fernandez-Garcia, D.; Sanchez-Vila, X.
2014-12-01
Particle tracking (PT) techniques, often considered favorable over Eulerian techniques due to artificial smoothening in breakthrough curves (BTCs), are evaluated in a risk-driven framework. Recent work has shown that given relatively few particles (np), PT methods can yield well-constructed BTCs with kernel density estimators (KDEs). This work compares KDE and non-KDE BTCs simulated as a function of np (10²-10⁸) and averaged as a function of the exposure duration, ED. Results show that regardless of BTC shape complexity, un-averaged PT BTCs show a large bias over several orders of magnitude in concentration (C) when compared to the KDE results, remarkably even when np is as low as 10². With the KDE, several orders of magnitude fewer np are required to obtain the same global error in BTC shape as the PT technique. PT and KDE BTCs are averaged as a function of the ED with standard and new methods incorporating the optimal h (ANA). The lowest-error curve is obtained through the ANA method, especially for smaller EDs. Percent error of the peak of averaged BTCs, important in a risk framework, is approximately zero for all scenarios and all methods for np ≥ 10⁵, but varies between the ANA and PT methods when np is lower. For fewer np, the ANA solution provides a lower-error fit except when C oscillations are present during a short time frame. We show that obtaining a representative average exposure concentration relies on an accurate representation of the BTC, especially when data are scarce.
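To make the PT-versus-KDE comparison concrete, the sketch below builds a breakthrough curve from the same hypothetical particle arrival times in two ways: a binned (raw particle-tracking) BTC and a Gaussian-kernel KDE BTC. The arrival-time distribution, particle count, and kernel are assumptions; the study's optimal-bandwidth averaging over exposure durations is not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical particle arrival times at a control plane (log-normal spread).
rng = np.random.default_rng(4)
arrivals = rng.lognormal(mean=3.0, sigma=0.4, size=10**3)   # np = 1000 particles

t = np.linspace(0, 80, 400)

# Raw particle-tracking BTC: a normalized histogram of arrival times.
hist, edges = np.histogram(arrivals, bins=40, range=(0, 80), density=True)

# KDE BTC: a smooth concentration history from the same particles.
btc_kde = gaussian_kde(arrivals)(t)

print("histogram bin width:", edges[1] - edges[0],
      "| peak (hist vs KDE):", hist.max().round(4), btc_kde.max().round(4))
```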
Bhatia, Triptish; Gettig, Elizabeth A; Gottesman, Irving I; Berliner, Jonathan; Mishra, N N; Nimgaonkar, Vishwajit L; Deshpande, Smita N
2016-12-01
Schizophrenia (SZ) has an estimated heritability of 64-88%, with the higher values based on twin studies. Conventionally, family history of psychosis is the best individual-level predictor of risk, but reliable risk estimates are unavailable for Indian populations. Genetic, environmental, and epigenetic factors are equally important and should be considered when predicting risk in 'at risk' individuals. The aim was to estimate risk based on an Indian schizophrenia participant's family history combined with selected demographic factors. To incorporate variables in addition to family history, and to stratify risk, we constructed a regression equation that included demographic variables in addition to family history. The equation was tested in two independent Indian samples: (i) an initial sample of SZ participants (N=128) with one sibling or offspring; (ii) a second, independent sample consisting of multiply affected families (N=138 families, with two or more sibs/offspring affected with SZ). The overall estimated risk was 4.31±0.27 (mean±standard deviation). There were 19 (14.8%) individuals in the high-risk group, 75 (58.6%) in the moderate-risk group, and 34 (26.6%) in the above-average-risk group (in Sample A). In the validation sample, risks were distributed as high (45%), moderate (38%), and above average (17%). Consistent risk estimates were obtained from both samples using the regression equation. Familial risk can be combined with demographic factors to estimate risk for SZ in India. If replicated, the proposed stratification of risk may be easier and more realistic for family members. Copyright © 2016. Published by Elsevier B.V.
Influence of the Time Scale on the Construction of Financial Networks
Emmert-Streib, Frank; Dehmer, Matthias
2010-01-01
Background In this paper we investigate the definition and formation of financial networks. Specifically, we study the influence of the time scale on their construction. Methodology/Principal Findings For our analysis we use correlation-based networks obtained from the daily closing prices of stock market data. More precisely, we use the stocks that currently comprise the Dow Jones Industrial Average (DJIA) and estimate financial networks where nodes correspond to stocks and edges correspond to non-vanishing correlation coefficients. That is, we include an edge in the network only if the correlation coefficient is statistically significantly different from zero. This construction procedure results in unweighted, undirected networks. By separating the time series of stock prices into non-overlapping intervals, we obtain one network per interval. The length of these intervals corresponds to the time scale of the data, whose influence on the construction of the networks is studied in this paper. Conclusions/Significance Numerical analysis of four different measures in dependence on the time scale for the construction of networks allows us to gain insights about the intrinsic time scale of the stock market with respect to a meaningful graph-theoretical analysis. PMID:20949124
NASA Astrophysics Data System (ADS)
Harrington, Seán T.; Harrington, Joseph R.
2013-03-01
This paper presents an assessment of the suspended sediment rating curve approach for load estimation on the Rivers Bandon and Owenabue in Ireland. The rivers, located in the South of Ireland, are underlain by sandstones, limestones and mudstones, and the catchments are primarily agricultural. A comprehensive database of suspended sediment data is not available for rivers in Ireland. For such situations, it is common to estimate suspended sediment concentrations from the flow rate using the suspended sediment rating curve approach. These rating curves are most commonly constructed by applying linear regression to the logarithms of flow and suspended sediment concentration or by applying a power curve to normal data. Both methods are assessed in this paper for the Rivers Bandon and Owenabue. Turbidity-based suspended sediment loads are presented for each river based on continuous (15 min) flow data, and the use of turbidity as a surrogate for suspended sediment concentration is investigated. A database of paired flow rate and suspended sediment concentration values, collected between the years 2004 and 2011, is used to generate rating curves for each river. From these, suspended sediment load estimates using the rating curve approach are calculated and compared to the turbidity-based loads for each river. Loads are also estimated using stage and seasonally separated rating curves and daily flow data, for comparison purposes. The most accurate load estimate on the River Bandon is found using a stage-separated power curve, while the most accurate load estimate on the River Owenabue is found using a general power curve. Maximum full monthly errors of -76% to +63% are found on the River Bandon, with errors of -65% to +359% found on the River Owenabue. The average monthly error on the River Bandon is -12%, with an average error of +87% on the River Owenabue. The use of daily flow data in the load estimation process does not result in a significant loss of accuracy on either river. Historic load estimates (with a 95% confidence interval) were hindcast from the flow record, and average annual loads of 7253 ± 673 tonnes on the River Bandon and 1935 ± 325 tonnes on the River Owenabue were estimated to be passing the gauging stations.
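A minimal sketch of the general rating-curve workflow referred to above: fit C = a Q^b by linear regression on the logarithms of paired flow and concentration data, then apply the curve to a continuous 15-minute flow record and integrate to a load. All numbers are synthetic; the bias corrections, stage separation, and seasonal separation considered in the paper are omitted.

```python
import numpy as np

# Hypothetical paired observations of flow Q (m^3/s) and suspended sediment
# concentration C (mg/L); the study's curves were fitted to 2004-2011 data.
rng = np.random.default_rng(5)
Q_obs = rng.uniform(0.5, 20.0, 120)
C_obs = 2.0 * Q_obs ** 1.1 * rng.lognormal(0.0, 0.3, 120)

# Rating curve C = a * Q^b from linear regression on the logarithms.
b, log_a = np.polyfit(np.log(Q_obs), np.log(C_obs), 1)
a = np.exp(log_a)

# Apply the curve to a continuous 15-minute flow record to estimate annual load.
Q_series = rng.uniform(0.5, 20.0, 4 * 24 * 365)     # one year of 15-min flows (m^3/s)
C_series = a * Q_series ** b                        # mg/L, i.e. g/m^3
dt = 15 * 60                                        # seconds per 15-min record
load_tonnes = np.sum(C_series * Q_series) * dt / 1e6
print(f"C = {a:.2f} * Q^{b:.2f}; annual load ≈ {load_tonnes:,.0f} tonnes")
```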
Recharge studies on the High Plains in northern Lea County, New Mexico
Havens, John S.
1966-01-01
The area described in this report is that part of the southern High Plains principally within northern Lea County, N. Mex.; it comprises about 1,400,000 acres. Hydrologic boundaries isolate the main aquifer of the area, the Ogallala Formation, from outside sources of natural recharge other than precipitation on the area. Natural recharge to this aquifer from the 15-inch average annual precipitation for the period 1949-60 is estimated to be about 95,000 acre-ft (acre-feet), which is between the 59,000 and 118,000 acre-ft a year obtained from the Theis (1934) estimate of ½ to 1 inch a year. About one-sixth of the water pumped for irrigation, or an average of about 23,000 acre-ft a year in the period 1949-60, returns to the aquifer. The estimated long-term (1939-60) average annual recharge to the aquifer is about 77,000 acre-ft. Discharge from the aquifer is by pumping and underflow from the area. Gross pumpage averaged about 151,000 acre-ft a year in the period 1949-60. Underflow from the area is estimated to have been about 36,000 acre-ft a year. Thus, the estimated average annual discharge from the aquifer was about 187,000 acre-ft a year, and this exceeded recharge by about 69,000 acre-ft a year. This overdraft is reflected in a general net decline of the water table of 10 ft in the period 1950-60 and net declines of as much as 30 feet in local areas. Data obtained during this study indicate that about 100,000 acre-ft of water collects in closed depressions on the surface of the High Plains in years when precipitation is normal. Studies of water losses from ponds in selected depressions indicate that between 20 and 80 percent of this loss recharges the groundwater body and the balance is lost to evapotranspiration, principally evaporation. Artificial recharge facilities constructed in the depressions could put at least 50,000 acre-ft of water underground annually that otherwise would be lost to evaporation. Recharging through pits or spreading ponds would cost less per unit volume of water than recharge through wells.
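A quick arithmetic check of the recharge figures quoted above, assuming the ½- and 1-inch-per-year depths are applied uniformly over the 1,400,000-acre area (one acre-foot is one acre covered to a depth of one foot):

```python
# volume (acre-ft) = area (acres) * depth (inches) / 12
area_acres = 1_400_000

for depth_in in (0.5, 1.0):
    vol = area_acres * depth_in / 12
    print(f"{depth_in} in/yr over the area -> {vol:,.0f} acre-ft/yr")
# 0.5 in/yr ->  58,333 acre-ft/yr (close to the 59,000 figure)
# 1.0 in/yr -> 116,667 acre-ft/yr (close to the 118,000 figure)
```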
Savard, L; Li, P; Strauss, S H; Chase, M W; Michaud, M; Bousquet, J
1994-01-01
We have estimated the time for the last common ancestor of extant seed plants by using molecular clocks constructed from the sequences of the chloroplastic gene coding for the large subunit of ribulose-1,5-bisphosphate carboxylase/oxygenase (rbcL) and the nuclear gene coding for the small subunit of rRNA (Rrn18). Phylogenetic analyses of nucleotide sequences indicated that the earliest divergence of extant seed plants is likely represented by a split between conifer-cycad and angiosperm lineages. Relative-rate tests were used to assess homogeneity of substitution rates among lineages, and annual angiosperms were found to evolve at a faster rate than other taxa for rbcL and, thus, these sequences were excluded from construction of molecular clocks. Five distinct molecular clocks were calibrated using substitution rates for the two genes and four divergence times based on fossil and published molecular clock estimates. The five estimated times for the last common ancestor of extant seed plants were in agreement with one another, with an average of 285 million years and a range of 275-290 million years. This implies a substantially more recent ancestor of all extant seed plants than suggested by some theories of plant evolution. PMID:8197201
Encoding probabilistic brain atlases using Bayesian inference.
Van Leemput, Koen
2009-06-01
This paper addresses the problem of creating probabilistic brain atlases from manually labeled training data. Probabilistic atlases are typically constructed by counting the relative frequency of occurrence of labels in corresponding locations across the training images. However, such an "averaging" approach generalizes poorly to unseen cases when the number of training images is limited, and provides no principled way of aligning the training datasets using deformable registration. In this paper, we generalize the generative image model implicitly underlying standard "average" atlases, using mesh-based representations endowed with an explicit deformation model. Bayesian inference is used to infer the optimal model parameters from the training data, leading to a simultaneous group-wise registration and atlas estimation scheme that encompasses standard averaging as a special case. We also use Bayesian inference to compare alternative atlas models in light of the training data, and show how this leads to a data compression problem that is intuitive to interpret and computationally feasible. Using this technique, we automatically determine the optimal amount of spatial blurring, the best deformation field flexibility, and the most compact mesh representation. We demonstrate, using 2-D training datasets, that the resulting models are better at capturing the structure in the training data than conventional probabilistic atlases. We also present experiments of the proposed atlas construction technique in 3-D, and show the resulting atlases' potential in fully-automated, pulse sequence-adaptive segmentation of 36 neuroanatomical structures in brain MRI scans.
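For reference, the "averaging" construction that the paper generalizes amounts to counting per-voxel label frequencies across co-registered, manually labeled training images; a minimal sketch, with array shapes and label values chosen purely for illustration:

```python
import numpy as np

def frequency_atlas(label_maps, num_labels):
    """Standard probabilistic atlas: per-voxel relative frequency of each label
    across spatially aligned training segmentations (2-D or 3-D arrays)."""
    counts = np.zeros(label_maps[0].shape + (num_labels,))
    for lab in label_maps:
        for k in range(num_labels):
            counts[..., k] += (lab == k)
    return counts / len(label_maps)  # per-voxel probabilities sum to 1 over labels

# Toy example: three 2-D "segmentations" with labels {0, 1}.
train = [np.array([[0, 1], [1, 1]]),
         np.array([[0, 1], [0, 1]]),
         np.array([[0, 0], [1, 1]])]
atlas = frequency_atlas(train, num_labels=2)
print(atlas[..., 1])  # probability of label 1 at each voxel
```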
Asset allocation using option-implied moments
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.; Tolos, S. M.
2017-09-01
This study uses an option-implied distribution as the input for asset allocation. The computation of the risk-neutral densities (RND) is based on the Dow Jones Industrial Average (DJIA) index option and its constituents. Since the RND estimation does not incorporate a risk premium, the conversion of the RND into a risk-world density (RWD) is required. The RWD is obtained through parametric calibration using beta distributions. The mean, volatility, and covariance are then calculated to construct the portfolio. The performance of the portfolio is evaluated using portfolio volatility and the Sharpe ratio.
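A minimal sketch of the final step described above, turning estimated means and covariances into portfolio weights and a Sharpe ratio; the moments, risk-free rate and tangency-portfolio rule are placeholders for illustration, not the paper's calibrated RWD results:

```python
import numpy as np

# Placeholder annualized moments for three assets (in the paper's workflow these
# would come from the calibrated densities).
mu = np.array([0.08, 0.10, 0.06])            # expected returns
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.03]])          # covariance matrix
rf = 0.02                                     # risk-free rate

# Unconstrained mean-variance (tangency) weights: w proportional to inv(cov) @ (mu - rf).
w = np.linalg.solve(cov, mu - rf)
w = w / w.sum()

port_ret = w @ mu
port_vol = np.sqrt(w @ cov @ w)
sharpe = (port_ret - rf) / port_vol
print(w, port_ret, port_vol, sharpe)
```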
Finite-Dimensional Representations for Controlled Diffusions with Delay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Federico, Salvatore, E-mail: salvatore.federico@unimi.it; Tankov, Peter, E-mail: tankov@math.univ-paris-diderot.fr
2015-02-15
We study stochastic delay differential equations (SDDE) where the coefficients depend on the moving averages of the state process. As a first contribution, we provide sufficient conditions under which the solution of the SDDE and a linear path functional of it admit a finite-dimensional Markovian representation. As a second contribution, we show how approximate finite-dimensional Markovian representations may be constructed when these conditions are not satisfied, and provide an estimate of the error corresponding to these approximations. These results are applied to optimal control and optimal stopping problems for stochastic systems with delay.
Crowdsourcing-Assisted Radio Environment Database for V2V Communication.
Katagiri, Keita; Sato, Koya; Fujii, Takeo
2018-04-12
In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, the recognition of radio propagation becomes an important technology. However, in the current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of the radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of the radio environment estimation in the V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI) related to the transmission/reception locations from V2V systems. By using the datasets, the average received power maps linked with transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in the real environment to observe RSSI for the database construction. Our results show that the proposed method has higher accuracy of the radio propagation estimation than the conventional path loss model-based estimation.
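A minimal sketch of the database step described above, assuming crowdsourced records of transmitter location, receiver location and RSSI, and a simple grid binning; the cell size, coordinate handling and example values are illustrative assumptions:

```python
from collections import defaultdict

def grid_key(lat, lon, cell_deg=0.0005):
    """Quantize a location to a grid cell (roughly tens of metres at mid-latitudes)."""
    return (round(lat / cell_deg), round(lon / cell_deg))

def build_rssi_map(records, cell_deg=0.0005):
    """Average measured RSSI per (transmitter cell, receiver cell) pair."""
    sums, counts = defaultdict(float), defaultdict(int)
    for tx_lat, tx_lon, rx_lat, rx_lon, rssi_dbm in records:
        key = (grid_key(tx_lat, tx_lon, cell_deg), grid_key(rx_lat, rx_lon, cell_deg))
        sums[key] += rssi_dbm
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

# Toy measurements: (tx_lat, tx_lon, rx_lat, rx_lon, RSSI in dBm).
records = [(35.0001, 139.0002, 35.0010, 139.0011, -72.0),
           (35.0001, 139.0002, 35.0010, 139.0011, -70.0),
           (35.0003, 139.0006, 35.0012, 139.0015, -81.0)]
print(build_rssi_map(records))
```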
ESTIMATION OF THE NUMBER OF INFECTIOUS BACTERIAL OR VIRAL PARTICLES BY THE DILUTION METHOD
Seligman, Stephen J.; Mickey, M. Ray
1964-01-01
Seligman, Stephen J. (University of California, Los Angeles), and M. Ray Mickey. Estimation of the number of infectious bacterial or viral particles by the dilution method. J. Bacteriol. 88:31–36. 1964.—For viral or bacterial systems in which discrete foci of infection are not obtainable, it is possible to obtain an estimate of the number of infectious particles by use of the quantal response if the assay system is such that one infectious particle can elicit the response. Unfortunately, the maximum likelihood estimate is difficult to calculate, but, by the use of a modification of Haldane's approximation, it is possible to construct a table which facilitates calculation of both the average number of infectious particles and its relative error. Additional advantages of the method are that the number of test units per dilution can be varied, the dilutions need not bear any fixed relation to each other, and the one-particle hypothesis can be readily tested. PMID:14197902
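A minimal sketch of the maximum likelihood estimate underlying the dilution method, using the one-particle assumption that a test unit responds when it receives at least one infectious particle; the assay layout, volumes and the golden-section search are illustrative choices, not the tabulated Haldane-type approximation of the paper:

```python
import math

def neg_log_lik(conc, dilutions):
    """dilutions: list of (volume per test unit, number of units, number positive).
    Under the one-particle hypothesis, P(positive) = 1 - exp(-conc * volume)."""
    nll = 0.0
    for vol, n, pos in dilutions:
        p = 1.0 - math.exp(-conc * vol)
        p = min(max(p, 1e-12), 1 - 1e-12)
        nll -= pos * math.log(p) + (n - pos) * math.log(1.0 - p)
    return nll

def mle_concentration(dilutions, lo=1e-6, hi=1e6, iters=200):
    """Golden-section search on log-concentration for the maximum likelihood estimate."""
    g = (math.sqrt(5) - 1) / 2
    a, b = math.log(lo), math.log(hi)
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if neg_log_lik(math.exp(c), dilutions) < neg_log_lik(math.exp(d), dilutions):
            b = d
        else:
            a = c
    return math.exp((a + b) / 2)

# Toy assay: (inoculum volume in mL, number of test units, number positive).
# Note that the number of units and the dilution steps need not be uniform.
assay = [(1.0, 5, 5), (0.1, 5, 3), (0.01, 5, 1)]
print(mle_concentration(assay))  # estimated infectious particles per mL
```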
Chou, I.-Ming; Rostam-Abadi, M.; Lytle, J.M.; Achorn, F.P.
1996-01-01
Costs for constructing and operating a conceptual plant based on a proposed process that converts flue gas desulfurization (FGD)-gypsum to ammonium sulfate fertilizer have been calculated and used to estimate a market price for the product. The average market price of granular ammonium sulfate ($138/ton) exceeds the rough estimated cost of ammonium sulfate from the proposed process ($111/ton) by 25 percent, if granular-size ammonium sulfate crystals of 1.2 to 3.3 millimeters in diameter can be produced by the proposed process. However, there was at least a ±30% margin in the cost estimate calculations. The additional costs for compaction, if needed to create granules of the required size, would make the process uneconomical unless considerable efficiency gains are achieved to balance the additional costs. This study suggests the need both to refine the crystallization process and to find potential markets for the calcium carbonate produced by the process.
NASA Astrophysics Data System (ADS)
Sharpe, Saxon E.
2002-05-01
Five Neotoma spp. (packrat) middens are analyzed from Sand Canyon Alcove, Dinosaur National Monument, Colorado. Plant remains in middens dated at approximately 9870, 9050, 8460, 3000, and 0 14C yr B.P. are used to estimate Holocene seasonal temperature and precipitation values based on modern plant tolerances published by Thompson et al. (1999a, 1999b). Early Holocene vegetation at the alcove shows a transition from a cool/mesic to a warmer, more xeric community between 9050 and 8460 14C yr B.P. Picea pungens, Pinus flexilis, and Juniperus communis exhibit an average minimum elevational displacement of 215 m. Picea pungens and Pinus flexilis are no longer found in the monument. Estimates based on modern plant parameters (Thompson et al., 1999a) suggest that average temperatures at 9870 14C yr B.P. may have been at least 1° to 3°C colder in January and no greater than 3° to 10°C colder in July than modern at this site. Precipitation during this time may have been at least 2 times modern in January and 2 to 3 times modern in July. Discrepancies in estimated temperature and precipitation tolerances between last occurrence and first occurrence taxa in the midden record suggest that midden assemblages may include persisting relict vegetation.
New geothermal site identification and qualification. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2004-04-01
This study identifies remaining undeveloped geothermal resources in California and western Nevada, and it estimates the development costs of each. It has relied on public-domain information and such additional data as geothermal developers have chosen to make available. Reserve estimation has been performed by volumetric analysis with a probabilistic approach to uncertain input parameters. Incremental geothermal reserves in the California/Nevada study area have a minimum value of 2,800 gross MW and a most-likely value of 4,300 gross MW. For the state of California alone, these values are 2,000 and 3,000 gross MW, respectively. These estimates may be conservative to the extent that they do not take into account resources about which little or no public-domain information is available. The capital cost of incremental generation capacity is estimated to average $3,100/kW for the California/Nevada study area and $2,950/kW for the state of California alone. These cost estimates include exploration, confirmation drilling, development drilling, plant construction, and transmission-line costs. For the purposes of this study, a capital cost of $2,400/kW is considered competitive with other renewable resources. The amount of incremental geothermal capacity available at or below $2,400/kW is about 1,700 gross MW for the California/Nevada study area, and the same amount (within 50-MW rounding) for the state of California alone. The capital cost estimates are only approximate, because each developer would bring its own experience, bias, and opportunities to the development process. Nonetheless, the overall costs per project estimated in this study are believed to be reasonable.
Ford, M.; Ferguson, C.C.
1985-01-01
In south-west Ireland, hydrothermally formed arsenopyrite crystals in a Devonian mudstone have responded to Variscan deformation by brittle extension fracture and fragment separation. The interfragment gaps and terminal extension zones of each crystal are infilled with fibrous quartz. Stretches within the cleavage plane have been calculated by the various methods available, most of which can be modified to incorporate terminal extension zones. The Strain Reversal Method is the most accurate currently available but still gives a minimum estimate of the overall strain. The more direct Hossain method, which gives only slightly lower estimates with this data, is more practical for field use. A strain ellipse can be estimated from each crystal rosette composed of three laths (assuming the original interlimb angles were all 60??) and, because actual rather than relative stretches are estimated, this provides a lower bound to the area increase in the plane of cleavage. Based on the average of our calculated strain ellipses this area increase is at least 114% and implies an average shortening across the cleavage of at least 53%. However, several lines of evidence suggest that the cleavage deformation was more intense and more oblate than that calculated, and we argue that a 300% area increase in the cleavage plane and 75% shortening across the cleavage are more realistic estimates of the true strain. Furthermore, the along-strike elongation indicated is at least 80%, which may be regionally significant. Estimates of orogenic contraction derived from balanced section construction should therefore take into account the possibility of a substantial strike elongation, and tectonic models that can accommodate such elongations need to be developed. ?? 1985.
Assessing operating characteristics of CAD algorithms in the absence of a gold standard
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy Choudhury, Kingshuk; Paik, David S.; Yi, Chin A.
2010-04-15
Purpose: The authors examine potential bias when using a reference reader panel as a "gold standard" for estimating operating characteristics of CAD algorithms for detecting lesions. As an alternative, the authors propose latent class analysis (LCA), which does not require an external gold standard to evaluate diagnostic accuracy. Methods: A binomial model for multiple reader detections using different diagnostic protocols was constructed, assuming conditional independence of readings given true lesion status. Operating characteristics of all protocols were estimated by maximum likelihood LCA. Reader panel and LCA based estimates were compared using data simulated from the binomial model for a range of operating characteristics. LCA was applied to 36 thin section thoracic computed tomography data sets from the Lung Image Database Consortium (LIDC): Free search markings of four radiologists were compared to markings from four different CAD assisted radiologists. For real data, bootstrap-based resampling methods, which accommodate dependence in reader detections, are proposed to test hypotheses of differences between detection protocols. Results: In simulation studies, reader panel based sensitivity estimates had an average relative bias (ARB) of -23% to -27%, significantly higher (p-value <0.0001) than LCA (ARB -2% to -6%). Specificity was well estimated by both reader panel (ARB -0.6% to -0.5%) and LCA (ARB 1.4%-0.5%). Among the 1145 lesion candidates LIDC considered, the LCA estimated sensitivity of reference readers (55%) was significantly lower (p-value 0.006) than that of CAD assisted readers (68%). Average false positives per patient for reference readers (0.95) were not significantly lower (p-value 0.28) than for CAD assisted readers (1.27). Conclusions: Whereas a gold standard based on a consensus of readers may substantially bias sensitivity estimates, LCA may be a significantly more accurate and consistent means for evaluating diagnostic accuracy.
Distribution of materials in construction and demolition waste in Portugal.
Coelho, André; de Brito, Jorge
2011-08-01
It may not be enough simply to know the global volume of construction and demolition waste (CDW) generated in a certain region or country if one wants to estimate, for instance, the revenue accruing from separating several types of materials from the input entering a given CDW recycling plant. A more detailed determination of the distribution of the materials within the generated CDW is needed, and the present paper addresses this issue, distinguishing different buildings and types of operation (new construction, retrofitting and demolition). This has been achieved by measuring the materials from buildings of different ages within the Portuguese building stock, and by using direct data from demolition/retrofitting sites and new-construction average values reported in the literature. An attempt to establish a benchmark with other countries is also presented. This knowledge may also benefit industry management, especially that related to CDW recycling, helping to optimize procedures, equipment size and operation, and even industrial plant spatial distribution. In an extremely competitive market where, as in Portugal, low-tech and high-environmental-impact procedures remain the norm in the construction industry (in particular, the construction waste industry), the introduction of a successful recycling industry is only possible with highly optimized processes and a knowledge-based approach to problems.
Zeemering, Stef; Bonizzi, Pietro; Maesen, Bart; Peeters, Ralf; Schotten, Ulrich
2015-01-01
Spatiotemporal complexity of atrial fibrillation (AF) patterns is often quantified by annotated intracardiac contact mapping. We introduce a new approach that applies recurrence plot (RP) construction followed by recurrence quantification analysis (RQA) to epicardial atrial electrograms, recorded with a high-density grid of electrodes. In 32 patients with no history of AF (aAF, n=11), paroxysmal AF (PAF, n=12) and persistent AF (persAF, n=9), RPs were constructed using a phase space electrogram embedding dimension equal to the estimated AF cycle length. Spatial information was incorporated by 1) averaging the recurrence over all electrodes, and 2) by applying principal component analysis (PCA) to the matrix of embedded electrograms and selecting the first principal component as a representation of spatial diversity. Standard RQA parameters were computed on the constructed RPs and correlated to the number of fibrillation waves per AF cycle (NW). Averaged RP RQA parameters showed no correlation with NW. Correlations improved when applying PCA, with maximum correlation achieved between RP threshold and NW (RR1%, r=0.68, p < 0.001) and RP determinism (DET, r=-0.64, p < 0.001). All studied RQA parameters based on the PCA RP were able to discriminate between persAF and aAF/PAF (DET persAF 0.40 ± 0.11 vs. 0.59 ± 0.14/0.62 ± 0.16, p < 0.01). RP construction and RQA combined with PCA provide a quick and reliable tool to visualize dynamical behaviour and to assess the complexity of contact mapping patterns in AF.
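A minimal sketch of recurrence plot construction and two standard RQA measures (recurrence rate and determinism) for a single embedded signal; the toy signal, embedding dimension and threshold are illustrative and do not reproduce the electrogram-specific settings above:

```python
import numpy as np

def embed(x, dim, tau=1):
    """Time-delay embedding of a 1-D signal into dim-dimensional phase space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def recurrence_plot(x, dim=10, tau=1, threshold=0.2):
    """Binary recurrence matrix: 1 where embedded states are closer than a
    threshold expressed as a fraction of the maximum pairwise distance."""
    emb = embed(x, dim, tau)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < threshold * d.max()).astype(int)

def rqa_measures(rp, lmin=2):
    """Recurrence rate and determinism (fraction of recurrent points lying on
    diagonal lines of length >= lmin, main diagonal excluded)."""
    n = rp.shape[0]
    off = rp.copy()
    np.fill_diagonal(off, 0)
    rr = off.sum() / (n * n - n)
    det_points, total = 0, off.sum()
    for k in [k for k in range(-(n - 1), n) if k != 0]:
        run = 0
        for v in np.append(np.diagonal(off, k), 0):  # trailing 0 closes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    det_points += run
                run = 0
    det = det_points / total if total else 0.0
    return rr, det

# Toy quasi-periodic signal with noise.
t = np.linspace(0, 8 * np.pi, 400)
signal = np.sin(t) + 0.1 * np.random.randn(400)
print(rqa_measures(recurrence_plot(signal)))
```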
Epic Erosion Along Newly Constructed Roads in Yunnan, China
NASA Astrophysics Data System (ADS)
Sidle, R. C.; Kono, Y.; Yamaguchi, T.
2007-05-01
The recent expansion and construction of new mountain roads in northwestern Yunnan Province, China, poses problems related to landslides and surface erosion that are impacting the headwaters of three great river systems: the Salween, Mekong, and Yangtze. Many of these newer roads are simply blasted into unstable hillsides with virtually no attention paid to optimal road location, construction practices, and erosion control measures. During summer 2006, seven people traveling in a minivan along a newly constructed road to Weixi were killed by a landslide. A survey conducted along this 23.5 km road section (4 yr old) in the headwaters of the Mekong River revealed epic levels of landslides and surface erosion. Based on a preliminary survey, the road erosion was categorized as moderately severe, severe, or very severe, and a representative 0.75 to 0.90 km stretch of road was then surveyed for both landslide (based on dimensional analysis) and surface erosion (based on soil pedestal height). Average mass wasting rates (9608 t ha⁻¹ yr⁻¹) along the road were more than 13 times higher than surface erosion (720 t ha⁻¹ yr⁻¹), even though surface erosion rates are among the highest reported for disturbed lands. Dry ravel constituted a minor proportion of the mass wasting: 4% in the severe erosion section of the road and 0.5-0.6% in the moderately severe and very severe sections. For the very severe erosion road section (6 km long), estimated landslide erosion alone was > 33,000 t ha⁻¹ yr⁻¹, 620 times the average landslide erosion from forest roads built in unstable terrain in western North America. These levels of landslide erosion along the Weixi road are the highest ever documented and are somewhat representative of erosion along new mountain roads in this region of Yunnan. Sediment produced from roads is highly connected to fluvial systems; we estimate that 80-95% of the direct sediment contributions into the headwaters of these rivers are attributable to road erosion and landslides. These epic sediment loads represent cumulative effects that may persist in these important transnational rivers for decades.
Wu, Haiming; Zhang, Jian; Wei, Rong; Liang, Shuang; Li, Cong; Xie, Huijun
2013-01-01
Nitrogen removal processing in different constructed wetlands treating different kinds of wastewater often varies, and the contribution to nitrogen removal by various pathways remains unclear. In this study, the seasonal nitrogen removal and transformations as well as nitrogen balance in wetland microcosms treating slightly polluted river water was investigated. The results showed that the average total nitrogen removal rates varied in different seasons. According to the mass balance approach, plant uptake removed 8.4-34.3 % of the total nitrogen input, while sediment storage and N(2)O emission contributed 20.5-34.4 % and 0.6-1.9 % of nitrogen removal, respectively. However, the percentage of other nitrogen loss such as N(2) emission due to nitrification and denitrification was estimated to be 2.0-23.5 %. The results indicated that plant uptake and sediment storage were the key factors limiting nitrogen removal besides microbial processes in surface constructed wetland for treating slightly polluted river water.
Image construction from the IRAS survey and data fusion
NASA Technical Reports Server (NTRS)
Bontekoe, Tj. R.
1990-01-01
The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector for each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51 with an average spatial resolution of 45 arc seconds is presented, using 60 micron survey data. This exceeds the telescope diffraction limit of 1 minute of arc at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Direct estimates of the physical parameters, temperature, density and composition, can be made from the data without prior image (re-)construction. An increase in the accuracy of these parameters is expected as the result of this more systematic approach.
Automatic Construction of Wi-Fi Radio Map Using Smartphones
NASA Astrophysics Data System (ADS)
Liu, Tao; Li, Qingquan; Zhang, Xing
2016-06-01
Indoor positioning could provide interesting services and applications. As one of the most popular indoor positioning methods, location fingerprinting determines the location of mobile users by matching the received signal strength (RSS), which is location dependent. However, fingerprinting-based indoor positioning requires calibration and updating of the fingerprints, which is labor-intensive and time-consuming. In this paper, we propose a visual-based approach for the construction of a radio map for anonymous indoor environments without any prior knowledge. This approach collects multi-sensor data, e.g. video, accelerometer, gyroscope, Wi-Fi signals, etc., while people (with smartphones) walk freely in indoor environments. It then uses the multi-sensor data to reconstruct the trajectories of people based on an integrated structure from motion (SFM) and image matching method, and finally estimates the locations of sampling points on the trajectories and constructs the Wi-Fi radio map. Experiment results show that the average location error of the fingerprints is about 0.53 m.
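Once such a radio map exists, fingerprinting-based positioning typically matches an observed RSS vector against the stored fingerprints; a minimal weighted k-nearest-neighbour sketch, with access points, coordinates and values assumed for illustration:

```python
import numpy as np

# Radio map: fingerprint location (x, y in metres) -> RSS vector over a fixed
# set of access points (dBm); values here are placeholders.
radio_map = {
    (0.0, 0.0): np.array([-40.0, -70.0, -80.0]),
    (5.0, 0.0): np.array([-55.0, -60.0, -75.0]),
    (5.0, 5.0): np.array([-70.0, -50.0, -65.0]),
    (0.0, 5.0): np.array([-60.0, -65.0, -55.0]),
}

def locate(rss_observed, radio_map, k=3):
    """Weighted k-NN in signal space: fingerprints closer in RSS get larger weights."""
    locs = list(radio_map.keys())
    dists = np.array([np.linalg.norm(rss_observed - radio_map[p]) for p in locs])
    idx = np.argsort(dists)[:k]
    weights = 1.0 / (dists[idx] + 1e-6)
    coords = np.array([locs[i] for i in idx])
    return tuple((weights[:, None] * coords).sum(axis=0) / weights.sum())

print(locate(np.array([-50.0, -62.0, -72.0]), radio_map))
```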
Aykamis, Ahmet S; Turhan, Seref; Aysun Ugur, F; Baykan, Umut N; Kiliç, Ahmet M
2013-11-01
It is very important to determine the levels of the natural radioactivity in construction materials and radon exhalation rate from these materials for assessing potential exposure risks for the residents. The present study deals with 22 different granite samples employed as decoration stones in constructions in Turkey. The natural radioactivity in granite samples was measured by gamma-ray spectrometry with an HPGe detector. The activity concentrations of (226)Ra, (232)Th and (40)K were found to be in the range of 10-187, 16-354 and 104-1630 Bq kg(-1), respectively. The radon surface exhalation rate and the radon mass exhalation rate estimated from the measured values of (226)Ra content and material properties varied from 1.3 to 24.8 Bq m(-2) h(-1) with a mean of 10.5±1.5 Bq m(-2) h(-1) and 0.03-0.64 Bq kg(-1) h(-1) with a mean of 0.27±0.04 Bq kg(-1) h(-1), respectively. Radon concentrations in the room caused from granite samples estimated using a mass balance equation varied from 23 to 461 Bq m(-3) with a mean of 196±27 Bq m(-3). Also the gamma index (Iγ), external indoor annual effective dose (Eγ) and annual effective dose due to the indoor radon exposure (ERn) were estimated as the average value of 1.1±0.1, 0.16±0.02 mSv and 5.0±0.7 mSv, respectively, for the granite samples.
APPLICATION OF THE 3D MODEL OF RAILWAY VIADUCTS TO COST ESTIMATION AND CONSTRUCTION
NASA Astrophysics Data System (ADS)
Fujisawa, Yasuo; Yabuki, Nobuyoshi; Igarashi, Zenichi; Yoshino, Hiroyuki
Three-dimensional models of civil engineering structures are used only partially, in either design or construction but not both. Research on integrating design, cost estimation and construction through 3D models has not yet been reported in the civil engineering domain. Using a 3D product model of a structure continuously from design through estimation to construction should improve efficiency and decrease the occurrence of mistakes, hence enhancing quality. In this research, we investigated the current practices of the flow from design to construction, particularly focusing on cost estimation. Then, we identified advantages and issues in the utilization of 3D design models for estimation and construction by applying 3D models to an actual railway construction project.
Wake characteristics of an eight-leg tower for a MOD-0 type wind turbine
NASA Technical Reports Server (NTRS)
Savino, J. M.; Wagner, L. H.; Sinclair, D.
1977-01-01
Low speed wind tunnel tests were conducted to determine the flow characteristics of the wake downwind of a 1/25th scale, all tubular eight leg tower concept suitable for application to the DOE-NASA MOD-0 wind power turbine. Measurements were made of wind speed profiles, and from these were determined the wake local minimum velocity, average velocity, and width for several wind approach angles. These data are presented herein along with tower shadow photographs and comparisons with data from an earlier lattice type, four leg tower model constructed of tubular members. Values of average wake velocity defect ratio and average ratio of wake width to blade radius for the eight leg model were estimated to be around 0.17 and 0.30, respectively, at the plane of the rotor blade. These characteristics suggest that the tower wake of the eight leg concept is slightly less than that of the four leg design.
Confidence in Altman-Bland plots: a critical review of the method of differences.
Ludbrook, John
2010-02-01
1. Altman and Bland argue that the virtue of plotting differences against averages in method-comparison studies is that 95% confidence limits for the differences can be constructed. These allow authors and readers to judge whether one method of measurement could be substituted for another. 2. The technique is often misused. So I have set out, by statistical argument and worked examples, to advise pharmacologists and physiologists how best to construct these limits. 3. First, construct a scattergram of differences on averages, then calculate the line of best fit for the linear regression of differences on averages. If the slope of the regression is shown to differ from zero, there is proportional bias. 4. If there is no proportional bias and if the scatter of differences is uniform (homoscedasticity), construct 'classical' 95% confidence limits. 5. If there is proportional bias yet homoscedasticity, construct hyperbolic 95% confidence limits (prediction interval) around the line of best fit. 6. If there is proportional bias and the scatter of values for differences increases progressively as the average values increase (heteroscedasticity), log-transform the raw values from the two methods and replot differences against averages. If this eliminates proportional bias and heteroscedasticity, construct 'classical' 95% confidence limits. Otherwise, construct horizontal V-shaped 95% confidence limits around the line of best fit of differences on averages or around the weighted least products line of best fit to the original data. 7. In designing a method-comparison study, consult a qualified biostatistician, obey the rules of randomization and make replicate observations.
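A minimal sketch of the basic procedure in points 3-5 above: regress differences on averages to test for proportional bias, and construct "classical" 95% limits of agreement when the slope does not differ from zero; the paired measurements are illustrative:

```python
import numpy as np
from scipy import stats

# Paired measurements of the same quantity by two methods (illustrative values).
method_a = np.array([10.2, 12.1, 9.8, 15.3, 11.0, 13.4, 14.1, 10.9])
method_b = np.array([10.0, 12.5, 9.5, 15.0, 11.4, 13.1, 14.6, 10.6])

diffs = method_a - method_b
means = (method_a + method_b) / 2

# Line of best fit for differences on averages: a slope different from zero
# indicates proportional bias.
slope, intercept, r, p_slope, se = stats.linregress(means, diffs)

if p_slope >= 0.05:
    # 'Classical' 95% limits of agreement: mean difference +/- 1.96 SD of differences.
    bias = diffs.mean()
    loa = 1.96 * diffs.std(ddof=1)
    print(f"bias = {bias:.3f}, 95% limits = [{bias - loa:.3f}, {bias + loa:.3f}]")
else:
    print(f"proportional bias detected (slope = {slope:.3f}, p = {p_slope:.3f}); "
          "construct limits around the regression line instead")
```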
Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao
2016-03-01
Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is the Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. The Markov chain model with a transition probability matrix was adopted to reconstruct structures of hydrofacies for deriving spatial deposit information. The geophysical and hydro-chemical data were used to estimate the porosity distribution through Archie's law. Results show that the stochastically simulated hydrofacies model reflects the sedimentary features, with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude as the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling.
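A minimal sketch of the two empirical relations invoked above: Archie's law to estimate porosity from resistivity, and a Kozeny-Carman-type formula to estimate hydraulic conductivity from grain size and porosity; the coefficients and input values are generic textbook-style assumptions, not the study's calibration:

```python
def porosity_from_archie(rt, rw, a=1.0, m=2.0):
    """Archie's law for a fully saturated formation: Rt = a * Rw / phi**m,
    solved for porosity phi. rt: formation resistivity, rw: pore-water resistivity."""
    return (a * rw / rt) ** (1.0 / m)

def kozeny_carman_K(d10_m, phi, g=9.81, nu=1.0e-6, c=8.3e-3):
    """Kozeny-Carman-type hydraulic conductivity (m/s):
    K = c * (g / nu) * phi**3 / (1 - phi)**2 * d10**2, with d10 in metres."""
    return c * (g / nu) * phi**3 / (1.0 - phi)**2 * d10_m**2

phi = porosity_from_archie(rt=200.0, rw=20.0)   # ~0.32 for these illustrative resistivities
K = kozeny_carman_K(d10_m=2.0e-4, phi=phi)      # ~2e-4 m/s, typical of a medium sand
print(phi, K)
```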
Investing in health: is social housing value for money? A cost-utility analysis.
Lawson, K D; Kearns, A; Petticrew, M; Fenwick, E A L
2013-10-01
There is a healthy public policy agenda investigating the health impacts of improving living conditions. However, there are few economic evaluations, to date, assessing value for money. We conducted the first cost-effectiveness analysis of a nationwide intervention transferring social and private tenants to new-build social housing, in Scotland. A quasi-experimental prospective study was undertaken involving 205 intervention households and 246 comparison households, over 2 years. A cost-utility analysis assessed the average cost per change in health utility (a single score summarising overall health-related quality of life), generated via the SF-6D algorithm. Construction costs for new builds were included. Analysis was conducted for all households, and by family, adult and elderly households, with estimates adjusted for baseline confounders. Outcomes were annuitised and discounted at 3.5%. The average discounted cost was £18,708 per household, at a national programme cost of £28.4 million. The average changes in health utility scores in the intervention group attributable to the intervention were +0.001 for all households, +0.001 for family households, -0.04 for adult households and -0.03 for elderly households. All estimates were statistically insignificant. At face value, the interventions were not value for money in health terms. However, because the policy rationale was the amenity provision of housing for disadvantaged groups, impacts extend beyond health and may be fully realised over the long term. Before making general value-for-money inferences, economic evaluation should attempt to estimate the full social value of interventions, model long-term impacts and explicitly incorporate equity considerations.
Estimating Dense Cardiac 3D Motion Using Sparse 2D Tagged MRI Cross-sections*
Ardekani, Siamak; Gunter, Geoffrey; Jain, Saurabh; Weiss, Robert G.; Miller, Michael I.; Younes, Laurent
2015-01-01
In this work, we describe a new method, an extension of the Large Deformation Diffeomorphic Metric Mapping to estimate three-dimensional deformation of tagged Magnetic Resonance Imaging Data. Our approach relies on performing non-rigid registration of tag planes that were constructed from set of initial reference short axis tag grids to a set of deformed tag curves. We validated our algorithm using in-vivo tagged images of normal mice. The mapping allows us to compute root mean square distance error between simulated tag curves in a set of long axis image planes and the acquired tag curves in the same plane. Average RMS error was 0.31±0.36(SD) mm, which is approximately 2.5 voxels, indicating good matching accuracy. PMID:25571140
Raj, Pradeep
2011-07-01
Water table fluctuation (δh) can be used to rapidly assess changes in groundwater storage. However, δh gives acceptable results only if the point of observation is ideally located in the catchment of interest and yields the average δh of the area, a condition which is rarely met. If a large number of observation wells is located within a basin (a catchment), the average δh can be computed and used. A better way is to use the points of shallowest and deepest water level to construct a wedge of water table fluctuation across the area of interest; the mean height of this wedge can be taken as the mean δh of the area. When there is only one observation well, the fact that the water table is a subdued replica of the topography is used to construct the wedge of water table fluctuation. Results from some randomly selected observations in a typical semi-arid, hard-rock environment in Andhra Pradesh show that, through this approach, the mean δh can be used effectively to obtain the change in groundwater storage in an area. The mean recharge obtained in this study is on the order of 75 mm/a and the mean draft is 58 mm/a, while the mean recharge and draft obtained by the conventional technique are 66 and 54 mm/a, respectively. The most likely specific yield around the middle reaches of a catchment ranges between 0.012 and 0.041, which is within the range given by the Groundwater Estimation Committee of India for hard rocks.
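A minimal sketch of the water table fluctuation method that the approach above builds on: the storage change is the specific yield times the mean water-level change over the area; the rises and the wedge construction here are illustrative, not the study's observations:

```python
def storage_change_mm(delta_h_m, specific_yield):
    """Change in groundwater storage expressed as a water depth (mm) over the area:
    dS = Sy * mean water-table rise (m) * 1000 mm/m."""
    return specific_yield * delta_h_m * 1000.0

# Illustrative wedge: mean of the shallowest and deepest seasonal water-level rises.
shallowest_rise_m, deepest_rise_m = 1.0, 4.0
mean_delta_h = (shallowest_rise_m + deepest_rise_m) / 2.0  # mean height of the wedge

for sy in (0.012, 0.041):  # specific-yield range quoted for hard rock
    print(sy, storage_change_mm(mean_delta_h, sy), "mm")
```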
NASA Astrophysics Data System (ADS)
Zhao, Y.; Hu, Q.
2017-09-01
Continuous development of the urban road traffic system demands higher standards for the road ecological environment. The ecological benefits of street trees are receiving more attention. Carbon sequestration of street trees refers to the carbon stock of street trees, which can serve as a measure of their ecological benefits. Estimating carbon sequestration in the traditional way is costly and inefficient. To address these problems, a carbon sequestration estimation approach for street trees based on 3D point clouds from a vehicle-borne laser scanning system is proposed in this paper. The method can measure the geometric parameters of a street tree, including tree height, crown width and diameter at breast height (DBH), by processing and analyzing the point cloud data of an individual tree. Four Chinese scholartrees and four camphor trees are selected for the experiment. The root mean square error (RMSE) of tree height is 0.11 m for Chinese scholartree and 0.02 m for camphor. Crown widths in the X direction and Y direction, as well as the average crown width, are calculated, and the RMSE of average crown width is 0.22 m for Chinese scholartree and 0.10 m for camphor. The last calculated parameter is DBH; its RMSE is 0.5 cm for both Chinese scholartree and camphor. Combining the measured geometric parameters with an appropriate carbon sequestration calculation model, an individual tree's carbon sequestration can be estimated. The proposed method can help enlarge the application range of vehicle-borne laser point cloud data, improve the efficiency of estimating carbon sequestration, and support the construction of the urban ecological environment and landscape management.
Haas, Patrick J; Bishop, Charles E; Gao, Yan; Griswold, Michael E; Schweinfurth, John M
2016-10-01
To evaluate the relationships among measures of physical activity and hearing in the Jackson Heart Study. Prospective cohort study. We assessed hearing on 1,221 Jackson Heart Study participants who also had validated physical activity questionnaire data on file. Hearing thresholds were measured across frequency octaves from 250 to 8,000 Hz, and various frequency pure-tone averages (PTAs) were constructed, including PTA4 (average of 500, 1,000, 2,000, and 4,000 Hz), PTA-high (average of 4,000 and 8,000 Hz), PTA-mid (average of 1,000 and 2,000 Hz), and PTA-low (average of 250 and 500 Hz). Hearing loss was defined for pure tones and pure-tone averages as >25 dB HL in either ear and averaged between the ears. Associations between physical activity and hearing were estimated using linear regression, reporting changes in decibel hearing level, and logistic regression, reporting odds ratios (OR) of hearing loss. Physical activity exhibited a statistically significant but small inverse relationship with PTA4, -0.20 dB HL per doubling of activity (95% confidence interval [CI]: -0.35, -0.04; P = .016), as well as with PTA-low and pure tones at 250, 2,000, and 4,000 Hz in adjusted models. Multivariable logistic regression modeling supported a decrease in the odds of high-frequency hearing loss among participants who reported at least some moderate weekly physical activity (PTA-high, OR: 0.69 [95% CI: 0.52, 0.92]; P = .011 and 4000 Hz, OR: 0.75 [95% CI: 0.57, 0.99]; P = .044). Our study provides further evidence that physical activity is related to better hearing; however, the clinical significance of this relationship cannot be estimated given the nature of the cross-sectional study design. 2b Laryngoscope, 126:2376-2381, 2016. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
On the Coplanar Integrable Case of the Twice-Averaged Hill Problem with Central Body Oblateness
NASA Astrophysics Data System (ADS)
Vashkov'yak, M. A.
2018-01-01
The twice-averaged Hill problem with the oblateness of the central planet is considered in the case where its equatorial plane coincides with the plane of its orbital motion relative to the perturbing body. A qualitative study of this so-called coplanar integrable case was begun by Y. Kozai in 1963 and continued by M.L. Lidov and M.V. Yarskaya in 1974. However, no rigorous analytical solution of the problem can be obtained due to the complexity of the integrals. In this paper we obtain some quantitative evolution characteristics and propose an approximate constructive-analytical solution of the evolution system in the form of explicit time dependences of satellite orbit elements. The methodical accuracy has been estimated for several orbits of artificial lunar satellites by comparison with the numerical solution of the evolution system.
Brink, LuAnn L; Talbott, Evelyn O; Burks, J Alton; Palmer, Catherine V
2002-01-01
Noise induced hearing loss (NIHL) is among the 10 leading occupational diseases, afflicting between 7.4 and 10.2 million people who work in noise above 85 dBA. Although mandatory hearing conservation programs (HCPs) have been in effect since 1972, this problem persists, as hearing protectors are not consistently used by workers, or may not attenuate to manufacturer's estimates in real world conditions. In this study, information from noise and hearing protection use measurements taken at an automobile assembly plant was used to construct average lifetime noise exposure and hearing protection compliance estimates for use in modeling to predict both total hearing loss and onset of two accepted definitions of hearing loss. There were 301 males and females in this cohort; their mean age was 42.6 (7.2) years, and mean tenure was 14.3 (3.5) years. Average length of follow-up was 14.0 years. There were 16 members of this cohort who had hearing loss at the speech frequencies (defined as an average hearing level ≥25 dB at 500, 1000, and 2000 Hz). In cross-sectional multivariate analyses, years of employment, male gender, and proportion of time wearing hearing protection were the factors most associated with hearing loss at the average of 2000, 3000, and 4000 Hz (p < 0.0001), controlling for age, transfer status (as a surrogate for previous noise exposure), race, and lifetime average noise exposure. The most consistent predictor of hearing loss in both univariate and multivariate analyses was percentage of time having used hearing protection during the workers' tenure.
Quality planning in Construction Project
NASA Astrophysics Data System (ADS)
Othman, I.; Shafiq, Nasir; Nuruddin, M. F.
2017-12-01
The purpose of this paper is to investigate more deeply the factors that contribute to the effectiveness of quality planning, identifying the common problems encountered in quality planning, current practices, and ways to improve quality planning for construction projects. The paper draws on data collected from construction company representatives across Malaysia, obtained through semi-structured interviews as well as questionnaire distributions. Results show that design of experiments (average index: 4.61), inspection (average index: 4.45) and quality audit as well as other methods (average index: 4.26) rank as the first, second and third most important factors, respectively.
Groves-Kirkby, C J; Denman, A R; Phillips, P S; Crockett, R G M; Woolridge, A C; Tornberg, R
2006-05-01
Although United Kingdom (UK) Building Regulations applicable to houses constructed since 1992 in Radon Affected Areas address the health issues arising from the presence of radon in domestic properties and specify the installation of radon-mitigation measures during construction, no legislative requirement currently exists for monitoring the effectiveness of such remediation once construction is completed and the houses are occupied. To assess the relative effectiveness of During-Construction radon reduction and Post-Construction remediation, radon concentration data from houses constructed before and after 1992 in Northamptonshire, UK, a designated Radon Affected Area, was analysed. Post-Construction remediation of 73 pre-1992 houses using conventional fan-assisted sump technology proved to be extremely effective, with radon concentrations reduced to the Action Level, or below, in all cases. Of 64 houses constructed since 1992 in a well-defined geographical area, and known to have had radon-barrier membranes installed during construction, 11% exhibited radon concentrations in excess of the Action Level. This compares with the estimated average for all houses in the same area of 17%, suggesting that, in some 60% of the houses surveyed, installation of a membrane has not resulted in reduction of mean annual radon concentrations to below the Action Level. Detailed comparison of the two data sets reveals marked differences in the degree of mitigation achieved by remediation. There is therefore an ongoing need for research to resolve definitively the issue of radon mitigation and to define truly effective anti-radon measures, readily installed in domestic properties at the time of construction. It is therefore recommended that mandatory testing be introduced for all new houses in Radon Affected Areas.
NASA Astrophysics Data System (ADS)
Chen, Liang; Zhao, Qile; Hu, Zhigang; Jiang, Xinyuan; Geng, Changjiang; Ge, Maorong; Shi, Chuang
2018-01-01
The large number of ambiguities in the un-differenced (UD) model lowers computational efficiency, which is not appropriate for high-frequency real-time GNSS clock estimation, such as at 1 Hz. A mixed differenced model fusing UD pseudo-range and epoch-differenced (ED) phase observations has been introduced into real-time clock estimation. In this contribution, we extend the mixed differenced model to realize multi-GNSS real-time high-frequency clock updating, and a rigorous comparison and analysis under the same conditions are performed to achieve the best real-time clock estimation performance, taking efficiency, accuracy, consistency and reliability into consideration. Based on the multi-GNSS real-time data streams provided by the multi-GNSS Experiment (MGEX) and Wuhan University, a GPS + BeiDou + Galileo global real-time augmentation positioning prototype system is designed and constructed, including real-time precise orbit determination, real-time precise clock estimation, real-time Precise Point Positioning (RT-PPP) and real-time Standard Point Positioning (RT-SPP). The statistical analysis of the 6 h-predicted real-time orbits shows that the root mean square (RMS) error in the radial direction is about 1-5 cm for GPS, BeiDou MEO and Galileo satellites and about 10 cm for BeiDou GEO and IGSO satellites. Using the mixed differenced estimation model, the prototype system can realize highly efficient real-time satellite absolute clock estimation with no constant clock bias and can be used for high-frequency augmentation message updating (such as 1 Hz). The real-time augmentation message signal-in-space ranging error (SISRE), a comprehensive measure of orbit and clock accuracy that affects the users' actual positioning performance, is introduced to evaluate and analyze the performance of the GPS + BeiDou + Galileo global real-time augmentation positioning system. The statistical analysis of real-time augmentation message SISRE gives about 4-7 cm for GPS, 10 cm for BeiDou IGSO/MEO and Galileo, and about 30 cm for BeiDou GEO satellites. The real-time positioning results prove that GPS + BeiDou + Galileo RT-PPP, compared to GPS-only, can effectively accelerate convergence time by about 60%, improve positioning accuracy by about 30% and obtain an averaged RMS of 4 cm in horizontal and 6 cm in vertical; additionally, RT-SPP in the prototype system can realize positioning accuracy with an averaged RMS of about 1 m in horizontal and 1.5-2 m in vertical, improvements of 60% and 70%, respectively, over SPP based on the broadcast ephemeris.
Solution of the sign problem in the Potts model at fixed fermion number
NASA Astrophysics Data System (ADS)
Alexandru, Andrei; Bergner, Georg; Schaich, David; Wenger, Urs
2018-06-01
We consider the heavy-dense limit of QCD at finite fermion density in the canonical formulation and approximate it by a three-state Potts model. In the strong-coupling limit, the model is free of the sign problem. Away from strong coupling, the sign problem is solved by employing a cluster algorithm which allows one to average each cluster over the Z(3) sectors. Improved estimators for physical quantities can be constructed by taking into account the triality of the clusters, that is, their transformation properties with respect to Z(3) transformations.
Femoral anatomical frame: assessment of various definitions.
Della Croce, U; Camomilla, V; Leardini, A; Cappozzo, A
2003-06-01
The reliability of the estimate of joint kinematic variables and the relevant functional interpretation are affected by the uncertainty with which bony anatomical landmarks and the underlying bony segment anatomical frames are determined. When a stereo-photogrammetric system is used for in vivo studies, minimising and compensating for this uncertainty is crucial. This paper deals with the propagation of the errors associated with the location of both internal and palpable femoral anatomical landmarks to the estimation of the orientation of the femoral anatomical frame and to the knee joint angles during movement. Given eight anatomical landmarks, and the precision with which they can be identified experimentally, 12 different rules were defined for the construction of the anatomical frame and submitted to comparative assessment. Results showed that using more than three landmarks allows for more repeatable anatomical frame orientation and knee joint kinematics estimation. Novel rules are proposed that use optimization algorithms. On average, the femoral frame orientation dispersion had a standard deviation of 2, 2.5 and 1.5 degrees for the frontal, transverse, and sagittal plane, respectively. However, a proper choice of the relevant construction rule allowed for a reduction of these inaccuracies in selected planes to 1 degree rms. The dispersion of the knee adduction-abduction and internal-external rotation angles could also be limited to 1 degree rms irrespective of the flexion angle value.
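As an illustration of the simplest class of rule referred to above, a minimal sketch of building a right-handed orthonormal anatomical frame from three landmark positions with cross products; the landmark names and coordinates are placeholders and this is not one of the paper's 12 rules:

```python
import numpy as np

def anatomical_frame(origin, p_axis1, p_axis2):
    """Right-handed orthonormal frame from three landmarks:
    x along origin -> p_axis1, z perpendicular to the landmark plane, y = z x x.
    Returns a 3x3 rotation matrix (axes as columns) and the origin."""
    x = p_axis1 - origin
    x = x / np.linalg.norm(x)
    z = np.cross(x, p_axis2 - origin)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    return np.column_stack([x, y, z]), origin

# Placeholder landmark positions (e.g. from stereo-photogrammetry), in mm.
lateral_epicondyle = np.array([60.0, 0.0, 400.0])
medial_epicondyle = np.array([-30.0, 5.0, 405.0])
femoral_head_centre = np.array([10.0, 20.0, 820.0])

R, o = anatomical_frame(medial_epicondyle, lateral_epicondyle, femoral_head_centre)
print(R)  # columns are the frame's x, y, z axes expressed in the global frame
```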
Integrity Testing of Pile Cover Using Distributed Fibre Optic Sensing
Rui, Yi; Kechavarzi, Cedric; O’Leary, Frank; Barker, Chris; Nicholson, Duncan; Soga, Kenichi
2017-01-01
The integrity of cast-in-place foundation piles is a major concern in geotechnical engineering. In this study, distributed fibre optic sensing (DFOS) cables, embedded in a pile during concreting, are used to measure the changes in concrete curing temperature profile to infer concrete cover thickness through modelling of heat transfer processes within the concrete and adjacent ground. A field trial was conducted at a high-rise building construction site in London during the construction of a 51 m long test pile. DFOS cables were attached to the reinforcement cage of the pile at four different axial directions to obtain distributed temperature change data along the pile. The monitoring data shows a clear development of concrete hydration temperature with time and the pattern of the change varies due to small changes in concrete cover. A one-dimensional axisymmetric heat transfer finite element (FE) model is used to estimate the pile geometry with depth by back analysing the DFOS data. The results show that the estimated pile diameter varies with depth in the range between 1.40 and 1.56 m for this instrumented pile. This average pile diameter profile compares well to that obtained with the standard Thermal Integrity Profiling (TIP) method. A parametric study is conducted to examine the sensitivity of concrete and soil thermal properties on estimating the pile geometry. PMID:29257094
A Comprehensive Linkage Map of the Dog Genome
Wong, Aaron K.; Ruhe, Alison L.; Dumont, Beth L.; Robertson, Kathryn R.; Guerrero, Giovanna; Shull, Sheila M.; Ziegle, Janet S.; Millon, Lee V.; Broman, Karl W.; Payseur, Bret A.; Neff, Mark W.
2010-01-01
We have leveraged the reference sequence of a boxer to construct the first complete linkage map for the domestic dog. The new map improves access to the dog's unique biology, from human disease counterparts to fascinating evolutionary adaptations. The map was constructed with ∼3000 microsatellite markers developed from the reference sequence. Familial resources afforded 450 mostly phase-known meioses for map assembly. The genotype data supported a framework map with ∼1500 loci. An additional ∼1500 markers served as map validators, contributing modestly to estimates of recombination rate but supporting the framework content. Data from ∼22,000 SNPs informing on a subset of meioses supported map integrity. The sex-averaged map extended 21 M and revealed marked region- and sex-specific differences in recombination rate. The map will enable empiric coverage estimates and multipoint linkage analysis. Knowledge of the variation in recombination rate will also inform on genomewide patterns of linkage disequilibrium (LD), and thus benefit association, selective sweep, and phylogenetic mapping approaches. The computational and wet-bench strategies can be applied to the reference genome of any nonmodel organism to assemble a de novo linkage map. PMID:19966068
Sequential estimation of surface water mass changes from daily satellite gravimetry data
NASA Astrophysics Data System (ADS)
Ramillien, G. L.; Frappart, F.; Gratton, S.; Vasseur, X.
2015-03-01
We propose a recursive Kalman filtering approach to map regional spatio-temporal variations of terrestrial water mass over large continental areas, such as South America. Instead of correcting hydrology model outputs by the GRACE observations using a Kalman filter estimation strategy, regional 2-by-2 degree water mass solutions are constructed by integration of daily potential differences deduced from GRACE K-band range rate (KBRR) measurements. Recovery of regional water mass anomaly averages obtained by accumulation of information of daily noise-free simulated GRACE data shows that convergence is relatively fast and yields accurate solutions. In the case of cumulating real GRACE KBRR data contaminated by observational noise, the sequential method of step-by-step integration provides estimates of water mass variation for the period 2004-2011 by considering a set of suitable a priori error uncertainty parameters to stabilize the inversion. Spatial and temporal averages of the Kalman filter solutions over river basin surfaces are consistent with the ones computed using global monthly/10-day GRACE solutions from official providers CSR, GFZ and JPL. They are also highly correlated to in situ records of river discharges (70-95 %), especially for the Obidos station where the total outflow of the Amazon River is measured. The sparse daily coverage of the GRACE satellite tracks limits the time resolution of the regional Kalman filter solutions, and thus the detection of short-term hydrological events.
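A minimal generic sketch of the sequential (Kalman-type) measurement update that such a recursive scheme performs for each daily batch of observations; the state, observation operator and noise levels are illustrative and do not reproduce the paper's regional formulation:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One measurement update: state estimate x with covariance P,
    observation z with model z = H x + noise of covariance R."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy example: two regional water-mass anomalies (cm of equivalent water height),
# each "day" observed through a simple averaging operator.
x = np.zeros(2)                 # a priori anomalies
P = np.eye(2) * 25.0            # a priori uncertainty
H = np.array([[0.5, 0.5]])      # daily observation: average of both regions
R = np.array([[4.0]])           # observation noise covariance

for z_day in ([3.0], [3.4], [2.8]):
    x, P = kalman_update(x, P, np.array(z_day), H, R)
print(x, np.diag(P))
```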
Development, Production and Validation of the NOAA Solar Irradiance Climate Data Record
NASA Astrophysics Data System (ADS)
Coddington, O.; Lean, J.; Pilewskie, P.; Snow, M. A.; Lindholm, D. M.
2015-12-01
A new climate data record of Total Solar Irradiance (TSI) and Solar Spectral Irradiance (SSI), including source code and supporting documentation is now publicly available as part of the National Oceanographic and Atmospheric Administration's (NOAA) National Centers for Environmental Information (NCEI) Climate Data Record (CDR) Program. Daily and monthly averaged values of TSI and SSI, with associated time and wavelength dependent uncertainties, are estimated from 1882 to the present with yearly averaged values since 1610, updated quarterly for the foreseeable future. The new Solar Irradiance Climate Data Record, jointly developed by the University of Colorado at Boulder's Laboratory for Atmospheric and Space Physics (LASP) and the Naval Research Laboratory (NRL), is constructed from solar irradiance models that determine the changes from quiet Sun conditions when bright faculae and dark sunspots are present on the solar disk. The magnitudes of the irradiance changes that these features produce are determined from linear regression of the proxy Mg II index and sunspot area indices against the approximately decade-long solar irradiance measurements made by instruments on the SOlar Radiation and Climate Experiment (SORCE) spacecraft. We describe the model formulation, uncertainty estimates, operational implementation and validation approach. Future efforts to improve the uncertainty estimates of the Solar Irradiance CDR arising from model assumptions, and augmentation of the solar irradiance reconstructions with direct measurements from the Total and Spectral Solar Irradiance Sensor (TSIS: launch date, July 2017) are also discussed.
Estimation of Full-Body Poses Using Only Five Inertial Sensors: An Eager or Lazy Learning Approach?
Wouda, Frank J.; Giuberti, Matteo; Bellusci, Giovanni; Veltink, Peter H.
2016-01-01
Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, therefore allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups, by use of data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible with an average joint position error of approximately 7 cm, and average joint angle error of 7∘. Additionally, the effects of magnetic disturbances typical in orientation tracking on the estimation of full-body poses was also investigated, where nearest neighbor search showed better performance for such disturbances. PMID:27983676
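A lazy-learning (nearest neighbor) version of the mapping could look like the sketch below, which regresses full-body joint positions on sparse orientation features with scikit-learn; the feature dimensions and data are placeholders rather than the sensor setup used in the study.

    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(2)
    n_frames = 5000
    X = rng.normal(size=(n_frames, 5 * 4))     # 5 IMU orientations as quaternions
    Y = rng.normal(size=(n_frames, 23 * 3))    # full-body pose: 23 joint positions

    knn = KNeighborsRegressor(n_neighbors=5, weights="distance")
    knn.fit(X[:4000], Y[:4000])                # "training" = storing examples

    pred = knn.predict(X[4000:])               # look up nearest poses at run time
    err = np.mean(np.linalg.norm((pred - Y[4000:]).reshape(-1, 3), axis=1))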
Spatial probabilistic pulsatility model for enhancing photoplethysmographic imaging systems
NASA Astrophysics Data System (ADS)
Amelard, Robert; Clausi, David A.; Wong, Alexander
2016-11-01
Photoplethysmographic imaging (PPGI) is a widefield noncontact biophotonic technology able to remotely monitor cardiovascular function over anatomical areas. Although spatial context can provide insight into physiologically relevant sampling locations, existing PPGI systems rely on coarse spatial averaging with no anatomical priors for assessing arterial pulsatility. Here, we developed a continuous probabilistic pulsatility model for importance-weighted blood pulse waveform extraction. Using a data-driven approach, the model was constructed using a 23 participant sample with a large demographic variability (11/12 female/male, age 11 to 60 years, BMI 16.4 to 35.1 kg·m-2). Using time-synchronized ground-truth blood pulse waveforms, spatial correlation priors were computed and projected into a coaligned importance-weighted Cartesian space. A modified Parzen-Rosenblatt kernel density estimation method was used to compute the continuous resolution-agnostic probabilistic pulsatility model. The model identified locations that consistently exhibited pulsatility across the sample. Blood pulse waveform signals extracted with the model exhibited significantly stronger temporal correlation (W=35,p<0.01) and spectral SNR (W=31,p<0.01) compared to uniform spatial averaging. Heart rate estimation was in strong agreement with true heart rate [r2=0.9619, error (μ,σ)=(0.52,1.69) bpm].
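The pulsatility model can be thought of as a correlation-weighted kernel density estimate over normalized spatial coordinates; a minimal sketch using SciPy's weighted Gaussian KDE (bandwidth choice and coordinates are illustrative, not the modified Parzen-Rosenblatt estimator of the paper) is:

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(3)
    # (x, y) sampling locations in a coaligned, normalized anatomical frame
    locs = rng.uniform(0, 1, size=(2, 1000))
    # importance weights = temporal correlation with the ground-truth waveform
    weights = np.clip(rng.normal(0.3, 0.2, 1000), 1e-3, None)

    pulsatility = gaussian_kde(locs, weights=weights)   # resolution-agnostic model

    # Evaluate on a grid to locate consistently pulsatile regions
    gx, gy = np.mgrid[0:1:64j, 0:1:64j]
    density = pulsatility(np.vstack([gx.ravel(), gy.ravel()])).reshape(64, 64)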
Estimating moisture transport over oceans using space-based observations
NASA Technical Reports Server (NTRS)
Liu, W. Timothy; Wenqing, Tang
2005-01-01
The moisture transport integrated over the depth of the atmosphere (Θ) is estimated over oceans using satellite data. The transport is the product of the precipitable water and an equivalent velocity (ue), which, by definition, is the depth-averaged wind velocity weighted by humidity. An artificial neural network is employed to construct a relation between the surface wind velocity measured by the spaceborne scatterometer and coincident ue derived using humidity and wind profiles measured by rawinsondes and produced by reanalysis of operational numerical weather prediction (NWP). On the basis of this relation, Θ fields are produced over global tropical and subtropical oceans (40°N-40°S) at 0.25° latitude-longitude and twice-daily resolution from August 1999 to December 2003, using surface wind vectors from QuikSCAT and precipitable water from the Tropical Rainfall Measuring Mission. The derived ue were found to capture the major temporal variability when compared with radiosonde measurements. The average error over global oceans, when compared with NWP data, was comparable with the instrument accuracy specification of space-based scatterometers. The global distribution exhibits the known characteristics of, and reveals more detailed variability than in, previous data.
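A hedged sketch of the neural-network step (surface wind components in, humidity-weighted equivalent velocity out), using a small scikit-learn multilayer perceptron in place of whatever architecture the authors used; all arrays are synthetic.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    n = 20000
    surface_wind = rng.normal(0, 6, size=(n, 2))          # u, v from the scatterometer
    # "true" equivalent velocity from collocated rawinsonde/NWP profiles (synthetic)
    u_e = 0.7 * surface_wind + rng.normal(0, 1.0, size=(n, 2))

    net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=500, random_state=0)
    net.fit(surface_wind[:15000], u_e[:15000])
    u_e_hat = net.predict(surface_wind[15000:])

    # Moisture transport Theta = precipitable water W times equivalent velocity
    W = rng.gamma(5.0, 8.0, size=(5000, 1))               # kg m-2, placeholder values
    theta = W * u_e_hat                                    # kg m-1 s-1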
ANNUAL WATER BUDGETS FOR A FORESTED SINKHOLE WETLAND
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, Dr. Andrew Jason; Neary, Vincent S
2012-01-01
Annual water budgets spanning two years, 2004 and 2005, are constructed for a sinkhole wetland in the Tennessee Highland Rim following conversion of 13 % of its watershed to impervious surfaces. The effect of watershed development on the hydrology of the study wetland was significant. Surface runoff was the dominant input, with a contribution of 61.4 % of the total. An average of 18.9 % of gross precipitation was intercepted by the canopy and evaporated. Seepage from the surface water body to the local groundwater system accounted for 83.1 % of the total outflow. Deep recharge varied from 43.2 % (2004) to 12.1 % (2005) of total outflow. Overall, evapotranspiration accounted for 72.4 % of the total losses, with an average of 65.7 % lost from soil profile storage. The annual water budgets indicate that deep recharge is a significant hydrologic function performed by isolated sinkhole wetlands, or karst pans, on the Tennessee Highland Rim. Continued hydrologic monitoring of sinkhole wetlands is needed to evaluate hydrologic function and response to anthropogenic impacts. The regression technique developed to estimate surface runoff entering the wetland is shown to provide reasonable annual runoff estimates, but further testing is needed.
NASA Astrophysics Data System (ADS)
Fishman, M. M.
1985-01-01
The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
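A simplified numerical sketch in the spirit of such sequential procedures: accumulate log-likelihoods for several hypothesized signals observation by observation and stop when one hypothesis exceeds the mean-weighted likelihood of the others by a threshold. The Gaussian signal models and the threshold are illustrative, not those analysed in the paper.

    import numpy as np

    def sequential_discern(observations, means, sigma=1.0, threshold=5.0):
        """Return (chosen hypothesis index, samples used) or (None, n)."""
        loglik = np.zeros(len(means))
        for n, x in enumerate(observations, start=1):
            loglik += -0.5 * ((x - np.asarray(means)) / sigma) ** 2
            for k in range(len(means)):
                others = np.delete(loglik, k)
                # log of the mean likelihood of the competing hypotheses
                log_mean_others = np.logaddexp.reduce(others) - np.log(len(others))
                if loglik[k] - log_mean_others > threshold:
                    return k, n
        return None, len(observations)

    rng = np.random.default_rng(5)
    data = rng.normal(2.0, 1.0, 200)           # process with mean 2 is occurring
    print(sequential_discern(data, means=[0.0, 1.0, 2.0, 3.0]))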
Song, Yongze; Ge, Yong; Wang, Jinfeng; Ren, Zhoupeng; Liao, Yilan; Peng, Junhuan
2016-07-07
Malaria is one of the most severe parasitic diseases in the world. Spatial distribution estimation of malaria and its future scenarios are important issues for malaria control and elimination. Furthermore, sophisticated nonlinear relationships for prediction between malaria incidence and potential variables have not been well constructed in previous research. This study aims to estimate these nonlinear relationships and predict future malaria scenarios in northern China. Nonlinear relationships between malaria incidence and predictor variables were constructed using a genetic programming (GP) method, to predict the spatial distributions of malaria under climate change scenarios. For this, monthly average malaria incidence in each county of northern China from 2004 to 2010 was used. Among the five county-level variables, precipitation rate and temperature are used for projections, while elevation, water density index, and gross domestic product are held at their present-day values. Average malaria incidence was 0.107 ‰ per annum in northern China, with significant spatial clustering. The GP-based model fit the relationships with an average relative error (ARE) of 8.127 % for training data (R² = 0.825) and 17.102 % for test data (R² = 0.532). The fit of the GP results is significantly better than that of generalized additive models (GAM) and linear regressions. Using the future precipitation rate and temperature conditions of the Special Report on Emissions Scenarios (SRES) B1, A1B and A2 scenarios, spatial distributions of and changes in malaria incidence in 2020, 2030, 2040 and 2050 were predicted and mapped. The GP method increases the precision of predicting the spatial distribution of malaria incidence. With precipitation rate and temperature allowed to vary and the other variables held constant, the relationships between incidence and the varying variables show sophisticated nonlinearity and spatial differentiation. Under the projected precipitation fluctuations and temperature increases, median malaria incidence in 2020, 2030, 2040 and 2050 would increase significantly, by an estimated 19 to 29 % in 2020; however, China is currently in the malaria elimination phase, indicating that effective strategies and actions have been taken. Mean incidence may not increase, and may even decline, because reductions in high-risk regions offset the simultaneous expansion of high-risk areas.
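The fit statistics quoted above (average relative error and R²) can be reproduced for any predictive model with a few lines; the arrays below are placeholders for observed and GP-predicted county-level incidence.

    import numpy as np

    def average_relative_error(y_obs, y_pred):
        return 100.0 * np.mean(np.abs(y_pred - y_obs) / np.abs(y_obs))

    def r_squared(y_obs, y_pred):
        ss_res = np.sum((y_obs - y_pred) ** 2)
        ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    y_obs = np.array([0.08, 0.12, 0.10, 0.15, 0.09])   # monthly incidence (per mille), made up
    y_pred = np.array([0.09, 0.11, 0.10, 0.13, 0.10])
    print(average_relative_error(y_obs, y_pred), r_squared(y_obs, y_pred))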
Auditing of suppliers as the requirement of quality management systems in construction
NASA Astrophysics Data System (ADS)
Harasymiuk, Jolanta; Barski, Janusz
2017-07-01
The choice of a supplier of construction materials can be an important factor in increasing or reducing the cost of building works. Construction materials represent 40% to 70% of the value of an investment task, depending on the kind of works to be carried out. Suppliers need to be assessed both from the point of view of the effectiveness of the construction undertaking and from the point of view of the conformity of contractors' operations with the quality management systems implemented in their organizations. The assessment of suppliers of construction materials and subcontractors of specialist works is a formal requirement of quality management systems that meet the ISO 9001 standard. The aim of this paper is to show the possibilities of using an audit to assess the credibility and reliability of a supplier of construction materials. The article describes the kinds of audits carried out within quality management systems, with particular attention to second-party audits. The criteria for assessing supplier quality capability and the method of selecting a supplier of construction materials are characterized. The paper also proposes exemplary questions to be assessed in the audit process, the way this assessment is conducted, and the factors that condition it.
Square-lashing technique in segmental spinal instrumentation: a biomechanical study.
Arlet, Vincent; Draxinger, Kevin; Beckman, Lorne; Steffen, Thomas
2006-07-01
Sublaminar wires have been used for many years for segmental spinal instrumentation in scoliosis surgery. More recently, stainless steel wires have been replaced by titanium cables. However, in rigid scoliotic curves, sublaminar wires or simple cables can either break or pull out. The square-lashing technique was devised to avoid complications such as cable breakage or lamina cutout. The purpose of the study was therefore to test biomechanically the pullout and failure mode of simple sublaminar constructs versus the square-lashing technique. Individual vertebrae were subjected to pullout testing with one of two different constructs (single loop and square lashing) using either monofilament wire or multifilament cables. Four different methods of fixation were therefore tested: single wire construct, square-lashing wiring construct, single cable construct, and square-lashing cable construct. Ultimate failure load and failure mechanism were recorded. For the single wire the construct failed 12/16 times by wire breakage with an average ultimate failure load of 793 N. For the square-lashing wire the construct failed with pedicle fracture in 14/16, one bilateral lamina fracture, and one wire breakage. The average ultimate failure load was 1,239 N. For the single cable the construct failed 12/16 times due to cable breakage (average force 1,162 N); 10/12 of these breakages were where the cable looped over the rod. For the square-lashing cable all of these constructs (16/16) failed by fracture of the pedicle with an average ultimate failure load of 1,388 N. The square-lashing construct had a higher pullout strength than the single loop and almost no cutting out from the lamina. The square-lashing technique with cables may therefore represent a new advance in segmental spinal instrumentation.
Towards universal hybrid star formation rate estimators
NASA Astrophysics Data System (ADS)
Boquien, M.; Kennicutt, R.; Calzetti, D.; Dale, D.; Galametz, M.; Sauvage, M.; Croxall, K.; Draine, B.; Kirkpatrick, A.; Kumari, N.; Hunt, L.; De Looze, I.; Pellegrini, E.; Relaño, M.; Smith, J.-D.; Tabatabaei, F.
2016-06-01
Context. To compute the star formation rate (SFR) of galaxies from the rest-frame ultraviolet (UV), it is essential to take the obscuration by dust into account. To do so, one of the most popular methods consists in combining the UV with the emission from the dust itself in the infrared (IR). Yet, different studies have derived different estimators, showing that no such hybrid estimator is truly universal. Aims: In this paper we aim at understanding and quantifying what physical processes fundamentally drive the variations between different hybrid estimators. In so doing, we aim at deriving new universal UV+IR hybrid estimators to correct the UV for dust attenuation at local and global scales, taking the intrinsic physical properties of galaxies into account. Methods: We use the CIGALE code to model the spatially resolved far-UV to far-IR spectral energy distributions of eight nearby star-forming galaxies drawn from the KINGFISH sample. This allows us to determine their local physical properties, and in particular their UV attenuation, average SFR, average specific SFR (sSFR), and their stellar mass. We then examine how hybrid estimators depend on said properties. Results: We find that hybrid UV+IR estimators strongly depend on the stellar mass surface density (in particular at 70 μm and 100 μm) and on the sSFR (in particular at 24 μm and the total infrared). Consequently, the IR scaling coefficients for UV obscuration can vary by almost an order of magnitude: from 1.55 to 13.45 at 24 μm for instance. This result contrasts with other groups who found relatively constant coefficients with small deviations. We exploit these variations to construct a new class of adaptative hybrid estimators based on observed UV to near-IR colours and near-IR luminosity densities per unit area. We find that they can reliably be extended to entire galaxies. Conclusions: The new estimators provide better estimates of attenuation-corrected UV emission than classical hybrid estimators published in the literature. Taking naturally variable impact of dust heated by old stellar populations into account, they constitute an important step towards universal estimators.
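In practice a hybrid estimator corrects the observed UV luminosity with a scaled IR term before converting to an SFR; a toy version, using illustrative coefficients within the 1.55-13.45 range quoted above and a placeholder luminosity-to-SFR calibration (not the calibrations derived in the paper), is:

    def corrected_uv(l_uv_obs, l_ir, k_ir):
        """Attenuation-corrected UV luminosity: observed UV + k_IR * IR."""
        return l_uv_obs + k_ir * l_ir

    l_fuv = 1.0e42   # erg/s, observed (attenuated) FUV luminosity, placeholder
    l_24 = 5.0e42    # erg/s, 24-micron luminosity, placeholder
    for k in (1.55, 6.0, 13.45):          # span of 24-micron coefficients reported above
        l_corr = corrected_uv(l_fuv, l_24, k)
        sfr = 1.0e-43 * l_corr            # illustrative linear calibration only
        print(k, sfr)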
Hewett, Paul; Morey, Sandy Z; Holen, Brian M; Logan, Perry W; Olsen, Geary W
2012-01-01
A study was conducted to construct a job exposure matrix for the roofing granule mine and mill workers at four U.S. plants. Each plant mined different minerals and had unique departments and jobs. The goal of the study was to generate accurate estimates of the mean exposure to respirable crystalline silica for each cell of the job exposure matrix, that is, every combination of plant, department, job, and year represented in the job histories of the study participants. The objectives of this study were to locate, identify, and collect information on all exposure measurements ever collected at each plant, statistically analyze the data to identify deficiencies in the database, identify and resolve questionable measurements, identify all important process and control changes for each plant-department-job combination, construct a time line for each plant-department combination indicating periods where the equipment and conditions were unchanged, and finally, construct a job exposure matrix. After evaluation, 1871 respirable crystalline silica measurements and estimates remained. The primary statistic of interest was the mean exposure for each job exposure matrix cell. The average exposure for each of the four plants was 0.042 mg/m(3) (Belle Mead, N.J.), 0.106 mg/m(3) (Corona, Calif.), 0.051 mg/m(3) (Little Rock, Ark.), and 0.152 mg/m(3) (Wausau, Wis.), suggesting that there may be substantial differences in the employee cumulative exposures. Using the database and the available plant information, the study team assigned an exposure category and mean exposure for every plant-department-job and time interval combination. Despite a fairly large database, the mean exposure for > 95% of the job exposure matrix cells, or specific plant-department-job-year combinations, were estimated by analogy to similar jobs in the plant for which sufficient data were available. This approach preserved plant specificity, hopefully improving the usefulness of the job exposure matrix.
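Conceptually the job exposure matrix is a table of mean exposures keyed by plant, department, job, and year; a compact way to build the data-rich cells (before filling the remaining cells by analogy, as described above) is a pandas group-by, sketched below with made-up measurements.

    import pandas as pd

    measurements = pd.DataFrame({
        "plant":       ["Belle Mead", "Belle Mead", "Corona", "Corona"],
        "department":  ["Mill",       "Mill",       "Mine",    "Mine"],
        "job":         ["Operator",   "Operator",   "Driller", "Driller"],
        "year":        [1985,         1985,         1990,      1990],
        "silica_mgm3": [0.03,         0.05,         0.10,      0.12],   # hypothetical values
    })

    # Mean exposure per JEM cell; cells without data would later be filled by analogy
    jem = (measurements
           .groupby(["plant", "department", "job", "year"])["silica_mgm3"]
           .agg(["mean", "count"]))
    print(jem)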
Sanusi, M S M; Ramli, A T; Hassan, W M S W; Lee, M H; Izham, A; Said, M N; Wagiran, H; Heryanshah, A
2017-07-01
Kuala Lumpur has been undergoing a rapid urbanisation process, mainly in infrastructure development. The opening of new townships and residential areas in former tin mining areas, particularly in the heavy mineral- or tin-bearing alluvial soil of Kuala Lumpur, is a contentious subject in land-use regulation. Construction practices in these areas, i.e. reclamation and dredging, have the potential to enhance the radioactivity levels of soil and subsequently increase the existing background gamma radiation levels. This situation is worsened by the utilisation of tin tailings as construction materials, in addition to unavoidable soil pollution due to naturally occurring radioactive materials in construction materials, e.g. granitic aggregate, cement and red clay brick. This study was conducted to assess the urbanisation impacts on background gamma radiation in Kuala Lumpur. The study found that the mean measured dose rate was 251 ± 6 nGy h⁻¹ (156-392 nGy h⁻¹), about 4 times higher than the world average value. High radioactivity levels of ²³⁸U (95 ± 12 Bq kg⁻¹), ²³²Th (191 ± 23 Bq kg⁻¹) and ⁴⁰K (727 ± 130 Bq kg⁻¹) in soil were identified as the major source of the high radiation exposure. Based on statistical ANOVA, t-tests, and analyses of the cumulative probability distribution, this study statistically verified the dose enhancement in the background radiation. The effective dose was estimated to be 0.31 ± 0.01 mSv y⁻¹ per person. The recommended ICRP reference level (1-20 mSv y⁻¹) is applicable to the existing exposure situation considered in this study. The estimated effective dose is lower than the ICRP reference level and too low to cause deterministic radiation effects. Nevertheless, based on estimations of lifetime radiation exposure risk, this study found a small probability of an individual in Kuala Lumpur being diagnosed with cancer and dying of cancer.
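For orientation, the quoted effective dose is consistent with the standard UNSCEAR-style conversion from an outdoor absorbed dose rate; the occupancy factor (0.2) and the conversion coefficient (0.7 Sv/Gy) below are the usual default values, assumed here rather than taken from the paper.

    dose_rate_ngy_per_h = 251        # mean measured outdoor dose rate from the study
    hours_per_year = 8760
    occupancy_outdoor = 0.2          # fraction of time spent outdoors (assumed default)
    sv_per_gy = 0.7                  # absorbed-dose-to-effective-dose coefficient (assumed default)

    effective_dose_msv = (dose_rate_ngy_per_h * 1e-6   # nGy/h -> mGy/h
                          * hours_per_year
                          * occupancy_outdoor
                          * sv_per_gy)
    print(round(effective_dose_msv, 2))  # ~0.31 mSv per year, matching the reported estimate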
Image reconstruction of IRAS survey scans
NASA Technical Reports Server (NTRS)
Bontekoe, Tj. Romke
1990-01-01
The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector for each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51 with an average spatial resolution of 45 arc seconds is presented, using 60 micron survey data. This exceeds the telescope diffraction limit of 1 minute of arc at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Estimates of the physical parameters (temperature, density, and composition) can be made directly from the data without prior image (re-)construction. An increase in the accuracy of these parameters is expected as the result of this more systematic approach.
Jones, Rachael M; Simmons, Catherine; Boelter, Fred
2011-06-01
Drywall finishing is a dusty construction activity. We describe a mathematical model that predicts the time-weighted average concentration of respirable and total dusts in the personal breathing zone of the sander, and in the area surrounding joint compound sanding activities. The model represents spatial variation in dust concentrations using two zones, and temporal variation using an exponential function. Interzone flux and the relationships between respirable and total dusts are described using empirical factors. For model evaluation, we measured dust concentrations in two field studies, including three workers from a commercial contracting crew, and one unskilled worker. Data from the field studies confirm that the model assumptions and parameterization are reasonable and thus validate the modeling approach. Predicted time-weighted average dust concentrations were in concordance with measured values for the contracting crew, but underestimated measured values for the unskilled worker. Further characterization of skill-related exposure factors is indicated.
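The two-zone (near-field/far-field) construct can be illustrated with the standard steady-state equations, in which the sander's breathing-zone concentration exceeds the room average by the emission rate divided by the interzone airflow; the emission rate, interzone airflow, and room ventilation values below are hypothetical, not the parameters fitted in the study.

    # Steady-state two-zone (near-field / far-field) exposure model, illustrative values
    G = 5.0       # dust emission rate, mg/min (hypothetical)
    beta = 5.0    # interzone airflow between near and far field, m3/min (hypothetical)
    Q = 30.0      # room supply/exhaust airflow, m3/min (hypothetical)

    c_far = G / Q                 # far-field (area) concentration, mg/m3
    c_near = G / Q + G / beta     # near-field (sander's breathing zone), mg/m3
    print(c_near, c_far)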
Basu, Anirban
2014-01-01
This paper builds on the methods of local instrumental variables developed by Heckman and Vytlacil (1999, 2001, 2005) to estimate person-centered treatment (PeT) effects that are conditioned on the person's observed characteristics and averaged over the potential conditional distribution of unobserved characteristics that lead them to their observed treatment choices. PeT effects are more individualized than conditional treatment effects from a randomized setting with the same observed characteristics. PeT effects can be easily aggregated to construct any of the mean treatment effect parameters and, more importantly, are well suited to comprehend individual-level treatment effect heterogeneity. The paper presents the theory behind PeT effects, and applies it to study the variation in individual-level comparative effects of prostate cancer treatments on overall survival and costs. PMID:25620844
Finite-time output feedback control of uncertain switched systems via sliding mode design
NASA Astrophysics Data System (ADS)
Zhao, Haijuan; Niu, Yugang; Song, Jun
2018-04-01
The problem of sliding mode control (SMC) is investigated for a class of uncertain switched systems subject to unmeasurable states and an assigned finite (possibly short) time constraint. A key issue is how to ensure the finite-time boundedness (FTB) of the system state during the reaching phase and the sliding motion phase. To this end, a state observer is constructed to estimate the unmeasured states. A state-estimate-based SMC law is then designed such that the state trajectories can be driven onto the specified integral sliding surface during the assigned finite time interval. By means of a partitioning strategy, FTB over both the reaching phase and the sliding motion phase is guaranteed, and sufficient conditions are derived via the average dwell time technique. Finally, an example is given to illustrate the proposed method.
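A stripped-down illustration of observer-based sliding mode control on a second-order system (not the switched-system design of the paper): a Luenberger observer reconstructs the unmeasured velocity, and the control drives a sliding variable built from the estimated state to zero. All gains and the disturbance are assumed values.

    import numpy as np

    A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (illustrative plant)
    B = np.array([0.0, 1.0])
    C = np.array([1.0, 0.0])                  # only position is measured
    Lg = np.array([6.0, 9.0])                 # Luenberger observer gain (assumed)
    lam, k = 2.0, 3.0                         # sliding-surface slope, switching gain
    dt = 1e-3

    x = np.array([1.0, 0.0])                  # true state (unknown to the controller)
    xhat = np.zeros(2)                        # observer estimate
    for i in range(5000):
        y = C @ x                             # measurement
        s = lam * xhat[0] + xhat[1]           # sliding variable from the estimated state
        u = -lam * xhat[1] - k * np.sign(s)   # equivalent control + switching term
        d = 0.2 * np.sin(2.0 * i * dt)        # bounded matched disturbance
        x = x + dt * (A @ x + B * (u + d))
        xhat = xhat + dt * (A @ xhat + B * u + Lg * (y - C @ xhat))
    print(x)                                  # state driven near the origin despite d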
42 CFR 447.255 - Related information.
Code of Federal Regulations, 2011 CFR
2011-10-01
... assurances described in § 447.253(a), the following information: (a) The amount of the estimated average... which that estimated average rate increased or decreased relative to the average payment rate in effect... and, to the extent feasible, long-term effect the change in the estimated average rate will have on...
Covariance analysis for evaluating head trackers
NASA Astrophysics Data System (ADS)
Kang, Donghoon
2017-10-01
Existing methods for evaluating the performance of head trackers usually rely on publicly available face databases, which contain facial images and the ground truths of their corresponding head orientations. However, most of the existing publicly available face databases are constructed by assuming that a frontal head orientation can be determined by compelling the person under examination to look straight ahead at the camera on the first video frame. Since nobody can accurately direct one's head toward the camera, this assumption may be unrealistic. Rather than obtaining estimation errors, we present a method for computing the covariance of estimation error rotations to evaluate the reliability of head trackers. As an uncertainty measure of estimators, the Schatten 2-norm of a square root of error covariance (or the algebraic average of relative error angles) can be used. The merit of the proposed method is that it does not disturb the person under examination by asking him to direct his head toward certain directions. Experimental results using real data validate the usefulness of our method.
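A sketch of the proposed uncertainty measure, assuming the error rotations are available as rotation vectors or matrices: convert each error rotation to an axis-angle vector, form the covariance, and take the Frobenius (Schatten 2-) norm of its matrix square root. The simulated error magnitudes are placeholders.

    import numpy as np
    from scipy.linalg import sqrtm
    from scipy.spatial.transform import Rotation as R

    rng = np.random.default_rng(6)
    # Error rotations R_est * R_true^-1, here simulated as small random rotations
    err = R.from_rotvec(rng.normal(0, 0.05, size=(500, 3)))   # radians

    rotvecs = err.as_rotvec()                   # axis-angle error vectors
    cov = np.cov(rotvecs.T)                     # 3x3 covariance of error rotations
    uncertainty = np.linalg.norm(sqrtm(cov).real, ord="fro")
    print(np.degrees(uncertainty))              # scalar reliability measure, in degrees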
Lung lobe modeling and segmentation with individualized surface meshes
NASA Astrophysics Data System (ADS)
Blaffert, Thomas; Barschdorf, Hans; von Berg, Jens; Dries, Sebastian; Franz, Astrid; Klinder, Tobias; Lorenz, Cristian; Renisch, Steffen; Wiemker, Rafael
2008-03-01
An automated segmentation of lung lobes in thoracic CT images is of interest for various diagnostic purposes like the quantification of emphysema or the localization of tumors within the lung. Although the separating lung fissures are visible in modern multi-slice CT-scanners, their contrast in the CT-image often does not separate the lobes completely. This makes it impossible to build a reliable segmentation algorithm without additional information. Our approach uses general anatomical knowledge represented in a geometrical mesh model to construct a robust lobe segmentation, which even gives reasonable estimates of lobe volumes if fissures are not visible at all. The paper describes the generation of the lung model mesh including lobes by an average volume model, its adaptation to individual patient data using a special fissure feature image, and a performance evaluation over a test data set showing an average segmentation accuracy of 1 to 3 mm.
Top-Down CO Emissions Based On IASI Observations and Hemispheric Constraints on OH Levels
NASA Astrophysics Data System (ADS)
Müller, J.-F.; Stavrakou, T.; Bauwens, M.; George, M.; Hurtmans, D.; Coheur, P.-F.; Clerbaux, C.; Sweeney, C.
2018-02-01
Assessments of carbon monoxide emissions through inverse modeling are dependent on the modeled abundance of the hydroxyl radical (OH), which controls both the primary sink of CO and its photochemical source through hydrocarbon oxidation. However, most chemistry transport models (CTMs) fall short of reproducing constraints on hemispherically averaged OH levels derived from methylchloroform (MCF) observations. Here we construct five different OH fields compatible with MCF-based analyses, and we prescribe those fields in a global CTM to infer CO fluxes based on Infrared Atmospheric Sounding Interferometer (IASI) CO columns. Each OH field leads to a different set of optimized emissions. Comparisons with independent data (surface, ground-based remotely sensed, aircraft) indicate that the inversion adopting the lowest average OH level in the Northern Hemisphere (7.8 × 10⁵ molec cm⁻³, ~18% lower than the best estimate based on MCF measurements) provides the best overall agreement with all tested observation data sets.
Risks of a lifetime in construction. Part II: Chronic occupational diseases.
Ringen, Knut; Dement, John; Welch, Laura; Dong, Xiuwen Sue; Bingham, Eula; Quinn, Patricia S
2014-11-01
We developed working-life estimates of risk for dust-related occupational lung disease, COPD, and hearing loss based on the experience of the Building Trades National Medical Screening Program in order to (1) demonstrate the value of estimates of lifetime risk, and (2) make lifetime risk estimates for common conditions among construction workers. Estimates of lifetime risk were performed based on 12,742 radiographic evaluations, 12,679 spirometry tests, and 11,793 audiograms. Over a 45-year working life, 16% of construction workers developed COPD, 11% developed parenchymal radiological abnormality, and 73.8% developed hearing loss. The risk for occupationally related disease over a lifetime in a construction trade was 2-6 times greater than the risk in non-construction workers. When compared with estimates from annualized cross-sectional data, lifetime risk estimates are highly useful for risk expression, and should help to inform stakeholders in the construction industry as well as policy-makers about magnitudes of risk. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Demmers, T. G. M.; Burgess, L. R.; Short, J. L.; Phillips, V. R.; Clark, J. A.; Wathes, C. M.
A method has been developed to measure the emission rate of ammonia from naturally ventilated U.K. livestock buildings. The method is based on measurements of ammonia concentration and estimates of the ventilation rate of the building by continuous release of carbon monoxide tracer within the building. The tracer concentration is measured at nine positions in openings around the perimeter of the building, as well as around a ring sampling line. Two criteria were evaluated to decide whether, at any given time, a given opening in the building acted as an air inlet or as an air outlet. Carbon monoxide concentration difference across an opening was found to be a better criterion than the temperature difference across the opening. Ammonia concentrations were measured continuously at the sampling points using a chemiluminescence analyser. The method was applied to a straw-bedded beef unit and to a slurry-based dairy unit. Both buildings were of space-boarded construction. Ventilation rates estimated by the ring line sample were consistently higher than by the perimeter samples. During calm weather, the ventilation estimates by both samples were similar (10-20 air changes h⁻¹). However, during windy conditions (>5 m s⁻¹) the ventilation rate was overestimated by the ring line sample (average 100 air changes h⁻¹) compared to the perimeter samples (average 50 air changes h⁻¹). The difference was caused by incomplete mixing of the tracer within the building. The ventilation rate estimated from the perimeter samples was used for the calculation of the emission rate. Preliminary estimates of the ammonia emission factor were 6.0 kg NH₃ (500 kg live-weight)⁻¹ (190 d)⁻¹ for the slurry-based dairy unit and 3.7 for the straw-bedded beef unit.
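The core of the constant-injection tracer method is a mass balance: the ventilation rate is the tracer release rate divided by the concentration rise above background, and the ammonia emission rate is then that airflow times the ammonia concentration difference. The numbers below are placeholders, not measurements from the two units.

    # Constant-injection tracer gas mass balance (illustrative values only)
    q_co = 0.5             # CO tracer release rate, m3/h
    c_co_inside = 10.0e-6  # mean CO volume fraction at outlet openings
    c_co_outside = 0.2e-6  # background CO volume fraction

    ventilation = q_co / (c_co_inside - c_co_outside)     # building airflow, m3/h

    c_nh3_inside = 4.0     # ammonia concentration at outlet openings, mg/m3 (placeholder)
    c_nh3_outside = 0.1    # background ammonia concentration, mg/m3 (placeholder)
    emission_mg_per_h = ventilation * (c_nh3_inside - c_nh3_outside)
    print(ventilation, emission_mg_per_h)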
Using safety inspection data to estimate shaking intensity for the 1994 Northridge earthquake
Thywissen, K.; Boatwright, J.
1998-01-01
We map the shaking intensity suffered in Los Angeles County during the 17 January 1994, Northridge earthquake using municipal safety inspection data. The intensity is estimated from the number of buildings given red, yellow, or green tags, aggregated by census tract. Census tracts contain from 200 to 4000 residential buildings and have an average area of 6 km² but are as small as 2 and 1 km² in the most densely populated areas of the San Fernando Valley and downtown Los Angeles, respectively. In comparison, the zip code areas on which standard MMI intensity estimates are based are six times larger, on average, than the census tracts. We group the buildings by age (before and after 1940 and 1976), by number of housing units (one, two to four, and five or more), and by construction type, and we normalize the tags by the total number of similar buildings in each census tract. We analyze the seven most abundant building categories. The fragilities (the fraction of buildings in each category tagged within each intensity level) for these seven building categories are adjusted so that the intensity estimates agree. We calibrate the shaking intensity to correspond with the modified Mercalli intensities (MMI) estimated and compiled by Dewey et al. (1995); the shapes of the resulting isoseismals are similar, although we underestimate the extent of the MMI = 6 and 7 areas. The fragility varies significantly between different building categories (by factors of 10 to 20) and building ages (by factors of 2 to 6). The post-1940 wood-frame multi-family (≥5 units) dwellings make up the most fragile building category, and the post-1940 wood-frame single-family dwellings make up the most resistant building category.
The trade-off between hospital cost and quality of care. An exploratory empirical analysis.
Morey, R C; Fine, D J; Loree, S W; Retzlaff-Roberts, D L; Tsubakitani, S
1992-08-01
The debate concerning quality of care in hospitals, its "value" and affordability, is increasingly of concern to providers, consumers, and purchasers in the United States and elsewhere. We undertook an exploratory study to estimate the impact on hospital-wide costs if quality-of-care levels were varied. To do so, we obtained costs and service output data regarding 300 U.S. hospitals, representing approximately a 5% cross section of all hospitals operating in 1983; both inpatient and outpatient services were included. The quality-of-care measure used for the exploratory analysis was the ratio of actual deaths in the hospital for the year in question to the forecasted number of deaths for the hospital; the hospital mortality forecaster had earlier (and elsewhere) been built from analyses of 6 million discharge abstracts, and took into account each hospital's actual individual admissions, including key patient descriptors for each admission. Such adjusted death rates have increasingly been used as potential indicators of quality, with recent research lending support for the viability of that linkage. The authors then utilized the economic construct of allocative efficiency relying on "best practices" concepts and peer groupings, built using the "envelopment" philosophy of Data Envelopment Analysis and Pareto efficiency. These analytical techniques estimated the efficiently delivered costs required to meet prespecified levels of quality of care. The marginal additional cost per each death deferred in 1983 was estimated to be approximately $29,000 (in 1990 dollars) for the average efficient hospital. Also, over a feasible range, a 1% increase in the level of quality of care delivered was estimated to increase hospital cost by an average of 1.34%. This estimated elasticity of quality on cost also increased with the number of beds in the hospital.
NASA Astrophysics Data System (ADS)
Kim, Hanna; Xie, Linmao; Min, Ki-Bok; Bae, Seongho; Stephansson, Ove
2017-12-01
It is desirable to combine the stress measurement data produced by different methods to obtain a more reliable estimation of in situ stress. We present a regional case study of integrated in situ stress estimation by hydraulic fracturing, observations of borehole breakouts and drilling-induced fractures, and numerical modeling of a 1 km-deep borehole (EXP-1) in Pohang, South Korea. Prior to measuring the stress, World Stress Map (WSM) and modern field data in the Korean Peninsula are used to construct a best estimate stress model in this area. Then, new stress data from hydraulic fracturing and borehole observations is added to determine magnitude and orientation of horizontal stresses. Minimum horizontal principal stress is estimated from the shut-in pressure of the hydraulic fracturing measurement at a depth of about 700 m. The horizontal stress ratios (S_Hmax/S_hmin) derived from hydraulic fracturing, borehole breakout, and drilling-induced fractures are 1.4, 1.2, and 1.1-1.4, respectively, and the average orientations of the maximum horizontal stresses derived by field methods are N138°E, N122°E, and N136°E, respectively. The results of hydraulic fracturing and borehole observations are integrated with a result of numerical modeling to produce a final rock stress model. The results of the integration give in situ stress ratios of 1.3/1.0/0.8 (S_Hmax/S_V/S_hmin) with an average azimuth of S_Hmax in the orientation range of N130°E-N136°E.
1990-09-01
The COCOMO model, which stands for COnstructive COst MOdel, was developed by Barry Boehm. The report describes a cost estimation model that uses an expert system to automate the Intermediate COnstructive COst MOdel (COCOMO) developed by Barry W. Boehm.
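The Intermediate COCOMO effort equation itself is short: nominal effort is a power law in delivered source size, multiplied by an effort adjustment factor formed from the cost-driver ratings. The coefficients below are the commonly published semi-detached-mode values and should be treated as illustrative defaults, not figures from this report.

    def intermediate_cocomo(kloc, eaf=1.0, a=3.0, b=1.12):
        """Effort in person-months = a * (KLOC)**b * EAF.
        a and b here are the classic semi-detached-mode coefficients (assumed)."""
        return a * kloc ** b * eaf

    # Example: a 32 KLOC project with a slightly unfavourable cost-driver product
    print(round(intermediate_cocomo(32, eaf=1.10), 1))   # estimated person-months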
Emission inventory estimation of an intercity bus terminal.
Qiu, Zhaowen; Li, Xiaoxia; Hao, Yanzhao; Deng, Shunxi; Gao, H Oliver
2016-06-01
Intercity bus terminals are hotspots of air pollution due to concentrated activities of diesel buses. In order to evaluate the bus terminals' impact on air quality, it is necessary to estimate the associated mobile emission inventories. Since the vehicles' operating condition at the bus terminal varies significantly, conventional calculation of the emissions based on average emission factors suffers a loss of accuracy. In this study, we examined a typical intercity bus terminal, the Southern City Bus Station of Xi'an, China, using a multi-scale emission model (US EPA's MOVES model) to quantify the vehicle emission inventory. A representative operating cycle for buses within the station is constructed. The emission inventory was then estimated using detailed inputs including vehicle ages, operating speeds, operating schedules, and operating mode distribution, as well as meteorological data (temperature and humidity). Five functional areas (bus yard, platforms, disembarking area, bus travel routes within the station, and bus entrance/exit routes) at the terminal were identified, and the bus operation cycle was established using the micro-trip cycle construction method. Results of our case study showed that switching to compressed natural gas (CNG) from diesel fuel could reduce PM2.5 and CO emissions by 85.64 and 6.21 %, respectively, in the microenvironment of the bus terminal. When CNG is used, tail pipe exhaust PM2.5 emission is significantly reduced, even less than brake wear PM2.5. The estimated bus operating cycles can also offer researchers and policy makers important information for emission evaluation in the planning and design of any typical intercity bus terminals of a similar scale.
Estimation of construction and demolition waste using waste generation rates in Chennai, India.
Ram, V G; Kalidindi, Satyanarayana N
2017-06-01
A large amount of construction and demolition waste is being generated owing to rapid urbanisation in Indian cities. A reliable estimate of construction and demolition waste generation is essential to create awareness about this stream of solid waste among the government bodies in India. However, the required data to estimate construction and demolition waste generation in India are unavailable or not explicitly documented. This study proposed an approach to estimate construction and demolition waste generation using waste generation rates and demonstrated it by estimating construction and demolition waste generation in Chennai city. The demolition waste generation rates of primary materials were determined through regression analysis using waste generation data from 45 case studies. Materials, such as wood, electrical wires, doors, windows and reinforcement steel, were found to be salvaged and sold on the secondary market. Concrete and masonry debris were dumped in either landfills or unauthorised places. The total quantity of construction and demolition debris generated in Chennai city in 2013 was estimated to be 1.14 million tonnes. The proportion of masonry debris was found to be 76% of the total quantity of demolition debris. Construction and demolition debris forms about 36% of the total solid waste generated in Chennai city. A gross underestimation of construction and demolition waste generation in some earlier studies in India has also been shown. The methodology proposed could be utilised by government bodies, policymakers and researchers to generate reliable estimates of construction and demolition waste in other developing countries facing similar challenges of limited data availability.
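The waste-generation-rate approach boils down to multiplying activity data (floor area built or demolished) by per-square-metre rates for each material; the rates and areas below are hypothetical placeholders, not the regression coefficients estimated from the 45 case studies.

    # Hypothetical per-unit-area waste generation rates, kg per m2 of floor area
    demolition_rates = {"concrete": 400, "masonry": 600, "wood": 30, "steel": 25}
    construction_rates = {"concrete": 18, "masonry": 25, "wood": 3, "steel": 2}

    demolished_area_m2 = 1.2e6      # annual demolished floor area (placeholder)
    constructed_area_m2 = 4.5e6     # annual constructed floor area (placeholder)

    total_tonnes = sum(
        (demolition_rates[m] * demolished_area_m2
         + construction_rates[m] * constructed_area_m2) / 1000.0
        for m in demolition_rates)
    print(round(total_tonnes))      # citywide C&D waste estimate, tonnes per year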
Sharpe, Tim; Farren, Paul; Howieson, Stirling; Tuohy, Paul; McQuillan, Jonathan
2015-07-21
The need to reduce carbon emissions and fuel poverty has led to increased building envelope air tightness, intended to reduce uncontrolled ventilation heat losses. Ventilation strategies in dwellings still allow the use of trickle ventilators in window frames for background ventilation. The extent to which this results in "healthy" Indoor Air Quality (IAQ) in recently constructed dwellings was a concern of regulators in Scotland. This paper describes research to explore this. First a review of literature was conducted, then data on occupant interactions with ventilation provisions (windows, doors, trickle vents) gathered through an interview-based survey of 200 recently constructed dwellings, and measurements made on a sample of 40 of these. The main measured parameter discussed here is CO2 concentration. It was concluded after the literature review that 1000 ppm absolute was a reasonable threshold to use for "adequate" ventilation. The occupant survey found that there was very little occupant interaction with the trickle ventilators e.g., in bedrooms 63% were always closed, 28% always open, and in only 9% of cases occupants intervened to make occasional adjustments. In the measured dwellings average bedroom CO2 levels of 1520 ppm during occupied (night time) hours were observed. Where windows were open the average bedroom CO2 levels were 972 ppm. With windows closed, the combination of "trickle ventilators open plus doors open" gave an average of 1021 ppm. "Trickle ventilators open" gave an average of 1571 ppm. All other combinations gave averages of 1550 to 2000 ppm. Ventilation rates and air change rates were estimated from measured CO2 levels, for all dwellings calculated ventilation rate was less than 8 L/s/p, in 42% of cases calculated air change rate was less than 0.5 ach. It was concluded that trickle ventilation as installed and used is ineffective in meeting desired ventilation rates, evidenced by high CO2 levels reported across the sampled dwellings. Potential implications of the results are discussed.
Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao
2016-01-01
Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is the Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. The Markov chain model with transition probability matrix was adopted to reconstruct structures of hydrofacies for deriving spatial deposit information. The geophysical and hydro-chemical data were used to estimate the porosity distribution through Archie's law. Results show that the stochastic simulated hydrofacies model reflects the sedimentary features with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude as the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling. PMID:26927886
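The two building blocks mentioned above can be sketched directly: Archie's law turns formation resistivity into porosity, and the Kozeny-Carman relation turns porosity and a representative grain size into hydraulic conductivity. The cementation exponent, grain diameter, resistivities, and fluid properties below are assumed typical values, not those calibrated for the Chaobai fan.

    def porosity_archie(rt, rw, a=1.0, m=2.0):
        """Archie's law for a water-saturated formation: Rt/Rw = a * phi**(-m)."""
        return (a * rw / rt) ** (1.0 / m)

    def kozeny_carman_K(phi, d50, rho=1000.0, g=9.81, mu=1.0e-3):
        """Hydraulic conductivity (m/s) from porosity and median grain size (m)."""
        k = (d50 ** 2 / 180.0) * phi ** 3 / (1.0 - phi) ** 2   # intrinsic permeability, m2
        return k * rho * g / mu

    phi = porosity_archie(rt=200.0, rw=20.0)        # ohm-m values, placeholders
    print(phi, kozeny_carman_K(phi, d50=2.0e-4))    # e.g. fine sand, d50 = 0.2 mm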
Reduced rank models for travel time estimation of low order mode pulses.
Chandrayadula, Tarun K; Wage, Kathleen E; Worcester, Peter F; Dzieciuch, Matthew A; Mercer, James A; Andrew, Rex K; Howe, Bruce M
2013-10-01
Mode travel time estimation in the presence of internal waves (IWs) is a challenging problem. IWs perturb the sound speed, which results in travel time wander and mode scattering. A standard approach to travel time estimation is to pulse compress the broadband signal, pick the peak of the compressed time series, and average the peak time over multiple receptions to reduce variance. The peak-picking approach implicitly assumes there is a single strong arrival and does not perform well when there are multiple arrivals due to scattering. This article presents a statistical model for the scattered mode arrivals and uses the model to design improved travel time estimators. The model is based on an Empirical Orthogonal Function (EOF) analysis of the mode time series. Range-dependent simulations and data from the Long-range Ocean Acoustic Propagation Experiment (LOAPEX) indicate that the modes are represented by a small number of EOFs. The reduced-rank EOF model is used to construct a travel time estimator based on the Matched Subspace Detector (MSD). Analysis of simulation and experimental data show that the MSDs are more robust to IW scattering than peak picking. The simulation analysis also highlights how IWs affect the mode excitation by the source.
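A reduced-rank matched subspace statistic can be sketched as follows: build an EOF basis from training mode time series by SVD, then score each candidate arrival-time window of a new reception by the fraction of its energy captured by that basis. The synthetic signals, window length, and rank are placeholders for the LOAPEX processing.

    import numpy as np

    rng = np.random.default_rng(7)
    n_train, n_samp, rank = 200, 256, 4
    training = rng.normal(size=(n_train, n_samp))        # pulse-compressed mode arrivals (synthetic)
    U, s, Vt = np.linalg.svd(training - training.mean(0), full_matrices=False)
    eof_basis = Vt[:rank]                                 # rank-r EOF subspace (r x n_samp)

    def msd_statistic(window):
        """Energy fraction of the window lying in the EOF subspace."""
        proj = eof_basis @ window
        return float(proj @ proj) / float(window @ window)

    received = rng.normal(size=4096)                      # a new reception (placeholder)
    scores = [msd_statistic(received[t:t + n_samp]) for t in range(0, 4096 - n_samp)]
    travel_time_index = int(np.argmax(scores))            # sample index of the best match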
Alternatives to the Moving Average
Paul C. van Deusen
2001-01-01
There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...
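For reference, the default 5-year moving-average estimator is just the unweighted mean of the most recent five annual panel estimates; a one-line version with synthetic annual values is shown below.

    import numpy as np

    annual_estimates = np.array([102.0, 98.0, 105.0, 110.0, 107.0, 111.0, 115.0])
    window = 5
    moving_avg = np.convolve(annual_estimates, np.ones(window) / window, mode="valid")
    print(moving_avg)   # one smoothed estimate per year once 5 annual panels are available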
NASA Astrophysics Data System (ADS)
Sakano, Toshikazu; Yamaguchi, Takahiro; Fujii, Tatsuya; Okumura, Akira; Furukawa, Isao; Ono, Sadayasu; Suzuki, Junji; Ando, Yutaka; Kohda, Ehiichi; Sugino, Yoshinori; Okada, Yoshiyuki; Amaki, Sachi
2000-05-01
We constructed a high-speed medical information network testbed, which is one of the largest testbeds in Japan, and applied it to practical medical checkups for the first time. The constructed testbed, which we call IMPACT, consists of a Super-High Definition Imaging system, a video conferencing system, a remote database system, and a 6 - 135 Mbps ATM network. The interconnected facilities include the School of Medicine in Keio University, a company's clinic, and an NTT R&D center, all in and around Tokyo. We applied IMPACT to the mass screening of the upper gastrointestinal (UGI) tract at the clinic. All 5419 radiographic images acquired at them clinic for 523 employees were digitized (2048 X 1698 X 12 bits) and transferred to a remote database in NTT. We then picked up about 50 images from five patients and sent them to nine radiological specialists at Keio University. The processing, which includes film digitization, image data transfer, and database registration, took 574 seconds per patient in average. The average reading time at Keio Univ. was 207 seconds. The overall processing time was estimated to be 781 seconds per patient. From these experimental results, we conclude that quasi-real time tele-medical checkups are possible with our prototype system.
Capture-recapture to estimate the number of street children in a city in Brazil
Gurgel, R; da Fonseca, J D C; Neyra-Castaneda, D; Gill, G; Cuevas, L
2004-01-01
Background: Street children are an increasing problem in Latin America. It is however difficult to estimate the number of children in the street as this is a highly mobile population. Aims: To estimate the number of street children in Aracaju, northeast Brazil, and describe the characteristics of this population. Methods: Three independent lists of street children were constructed from a non-governmental organisation and cross-sectional surveys. The number of street children was estimated using the capture-recapture method. The characteristics of the children were recorded during the surveys. Results: The estimated number of street children was 1456. The estimated number of street children before these surveys was 526, although non-official estimates suggested that there was a much larger population. Most street children are male, maintain contact with their families, and are attending school. Children contribute to the family budget a weekly average of R$21.2 (£4.25, €6.0, US$7.5) for boys and R$17.7 (£3.55, €5.0, US$6.3) for girls. Conclusion: Street children of Aracaju have similar characteristics to street children from other cities in Brazil. The capture-recapture method could be a useful method to estimate the size of this highly mobile population. The major advantage of the method is its reproducibility, which makes it more acceptable than estimates from interested parties. PMID:14977695
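With two of the lists, the classic Lincoln-Petersen logic conveys the flavour of the estimate: the population size is roughly the product of the list sizes divided by the overlap. The counts below are invented for illustration (the study itself combined three lists).

    def chapman_estimate(n1, n2, m):
        """Bias-corrected (Chapman) Lincoln-Petersen estimator for two lists:
        n1, n2 = children on each list, m = children appearing on both."""
        n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
        var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))
        return n_hat, var ** 0.5

    n_hat, se = chapman_estimate(n1=420, n2=510, m=150)   # hypothetical counts
    print(round(n_hat), round(1.96 * se))                 # point estimate, ~95% CI half-width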
Time series modelling of increased soil temperature anomalies during long period
NASA Astrophysics Data System (ADS)
Shirvani, Amin; Moradi, Farzad; Moosavi, Ali Akbar
2015-10-01
Soil temperature just beneath the soil surface is highly dynamic, has a direct impact on plant seed germination, and is probably the most distinct and recognisable factor governing emergence. An autoregressive integrated moving average (ARIMA) stochastic model was developed to predict the weekly soil temperature anomalies at 10 cm depth, one of the most important soil parameters. The weekly soil temperature anomalies for the periods of January 1986-December 2011 and January 2012-December 2013 were used to construct and test the ARIMA models. The proposed ARIMA(2,1,1) model had the minimum Akaike information criterion value, and its estimated coefficients were different from zero at the 5% significance level. Prediction of the weekly soil temperature anomalies during the test period using this model showed a high correlation coefficient between the observed and predicted data, 0.99 for a lead time of 1 week. Linear trend analysis indicated that the soil temperature anomalies warmed significantly, by 1.8°C, during the period 1986-2011.
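Fitting and forecasting an ARIMA(2,1,1) model of the kind selected above takes only a few lines with statsmodels; the weekly anomaly series here is simulated rather than the observed soil temperature record.

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(8)
    weekly_anomaly = np.cumsum(rng.normal(0, 0.3, 1352))   # synthetic weekly series (~26 years)

    model = ARIMA(weekly_anomaly, order=(2, 1, 1))          # ARIMA(p=2, d=1, q=1)
    fit = model.fit()
    print(fit.aic)                                          # criterion used for model selection
    print(fit.forecast(steps=4))                            # lead times of 1 to 4 weeks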
Wrinkle ridges of Arcadia Planitia, Mars
NASA Technical Reports Server (NTRS)
Plescia, J. B.
1993-01-01
Wrinkle ridges of Arcadia Planitia were examined to determine their morphology, spatial distribution, and the amount of crustal shortening and strain they accommodate. Ridges trend generally northward, but their orientation and distribution are strongly controlled by the relief of the underlying knobby material. Ridges begin or end at inselbergs of older terrain and are associated with buried craters. Arcadia Planitia ridges have an average width of 3425 m and accommodate an average folding shortening of 3 m and a faulting shortening of 55 m; mean total shortening is 57 m. Three east-west transects were constructed at 20 deg, 25 deg, and 28 deg N to estimate regional shortening and strain. Average total shortening across the transects is about 900 m, corresponding to a regional compressive strain of 0.06 percent. The total shortening and compression across Arcadia Planitia are less than in Lunae Planum. Faults associated with the Arcadia ridges are inferred to have a westward dip compared with an eastward dip for Lunae Planum ridges. The general levels of compression and symmetric orientation of the ridges suggest a regionally organized stress system.
2013-01-01
Background This study addresses the growing academic and policy interest in the appropriate provision of local healthcare services to meet the healthcare needs of local populations, in order to increase health status and decrease healthcare costs. However, for most local areas, information on both the demand for and the supply of primary care is missing. The research goal is to examine the construction of a decision tool which enables healthcare planners to analyse local supply and demand in order to arrive at a better match. Methods National sample-based medical record data of general practitioners (GPs) were used to predict the local demand for GP care based on local populations using a synthetic estimation technique. Next, the surplus or deficit in local GP supply was calculated using the national GP registry. Subsequently, a dynamic internet tool was built to present demand, supply and the confrontation between supply and demand regarding GP care for local areas and their surroundings in the Netherlands. Results Regression analysis showed a significant relationship between sociodemographic predictors of postcode areas and GP consultation time (F [14, 269,467] = 2,852.24; P <0.001). The statistical model could estimate GP consultation time for every postcode area with >1,000 inhabitants in the Netherlands, covering 97% of the total population. Confronting these estimated demand figures with the actual GP supply yielded the average GP workload and the number of full-time equivalent (FTE) GPs in surplus or deficit for local areas to cover the demand for GP care. An estimated shortage of one FTE GP or more was prevalent in about 19% of the postcode areas with >1,000 inhabitants if the surrounding postcode areas were taken into consideration. Underserved areas were mainly found in rural regions. Conclusions The constructed decision tool is freely accessible on the Internet and can be used as a starting point in the discussion on primary care service provision in local communities, and it can make a considerable contribution to a primary care system which provides care when and where people need it. PMID:24161015
Daily estimates of soil ingestion in children.
Stanek, E J; Calabrese, E J
1995-01-01
Soil ingestion estimates play an important role in risk assessment of contaminated sites, and estimates of soil ingestion in children are of special interest. Current estimates of soil ingestion are trace-element specific and vary widely among elements. Although expressed as daily estimates, the actual estimates have been constructed by averaging soil ingestion over a study period of several days. The wide variability has resulted in uncertainty as to which method of estimation of soil ingestion is best. We developed a methodology for calculating a single estimate of soil ingestion for each subject for each day. Because the daily soil ingestion estimate represents the median estimate of eligible daily trace-element-specific soil ingestion estimates for each child, this median estimate is not trace-element specific. Summary estimates for individuals and weeks are calculated using these daily estimates. Using this methodology, the median daily soil ingestion estimate for 64 children participating in the 1989 Amherst soil ingestion study is 13 mg/day or less for 50% of the children and 138 mg/day or less for 95% of the children. Mean soil ingestion estimates (for up to an 8-day period) were 45 mg/day or less for 50% of the children, whereas 95% of the children reported a mean soil ingestion of 208 mg/day or less. Daily soil ingestion estimates were used subsequently to estimate the mean and variance in soil ingestion for each child and to extrapolate a soil ingestion distribution over a year, assuming that soil ingestion followed a log-normal distribution. PMID:7768230
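The per-child, per-day calculation described above amounts to taking the median of the eligible trace-element-specific estimates for that day and then summarizing those daily values per child; a small pandas sketch with made-up numbers:

    import pandas as pd

    # mg/day soil ingestion implied by each trace element, per child and study day (invented)
    est = pd.DataFrame({
        "child":      [1, 1, 1, 1, 2, 2, 2, 2],
        "day":        [1, 1, 2, 2, 1, 1, 2, 2],
        "element":    ["Al", "Si", "Al", "Si", "Al", "Si", "Al", "Si"],
        "mg_per_day": [20.0, 35.0, 5.0, 12.0, 150.0, 90.0, 60.0, 75.0],
    })

    daily = est.groupby(["child", "day"])["mg_per_day"].median()   # one value per child-day
    per_child_mean = daily.groupby(level="child").mean()            # study-period mean per child
    print(daily, per_child_mean, sep="\n")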
Cost-effectiveness in fall prevention for older women.
Hektoen, Liv F; Aas, Eline; Lurås, Hilde
2009-08-01
The aim of this study was to estimate the cost-effectiveness of implementing an exercise-based fall prevention programme for home-dwelling women in the > or = 80-year age group in Norway. The impact of the home-based individual exercise programme on the number of falls is based on a New Zealand study. On the basis of the cost estimates and the estimated reduction in the number of falls obtained with the chosen programme, we calculated the incremental costs and the incremental effect of the exercise programme as compared with no prevention. The calculation of the average healthcare cost of falling was based on assumptions regarding the distribution of fall injuries reported in the literature, four constructed representative case histories, assumptions regarding healthcare provision associated with the treatment of the specified cases, and estimated unit costs from Norwegian cost data. We calculated the average healthcare costs per fall for the first year. We found that the reduction in healthcare costs per individual for treating fall-related injuries was 1.85 times higher than the cost of implementing a fall prevention programme. The reduction in healthcare costs more than offset the cost of the prevention programme for women aged > or = 80 years living at home, which indicates that health authorities should increase their focus on prevention. The main intention of this article is to set out the costs associated with falls among the elderly in a transparent way and to make the whole cost picture visible. Cost-effectiveness analysis is a health policy tool that makes politicians and other health policymakers aware of this complexity.
Survival and recovery rates of American woodcock banded in Michigan
Krementz, David G.; Hines, James E.; Luukkonen, David R.
2003-01-01
American woodcock (Scolopax minor) population indices have declined since U.S. Fish and Wildlife Service (USFWS) monitoring began in 1968. Management to stop and/or reverse this population trend has been hampered by the lack of recent information on woodcock population parameters. Without recent information on survival rate trends, managers have had to assume that the recent declines in recruitment indices are the only parameter driving woodcock declines. Using program MARK, we estimated annual survival and recovery rates of adult and juvenile American woodcock, and estimated summer survival of local (young incapable of sustained flight) woodcock banded in Michigan between 1978 and 1998. We constructed a set of candidate models from a global model with age (local, juvenile, adult) and time (year)-dependent survival and recovery rates to no age or time-dependent survival and recovery rates. Five models were supported by the data, with all models suggesting that survival rates differed among age classes, and four models had survival rates that were constant over time. The fifth model suggested that juvenile and adult survival rates were linear on a logit scale over time. Survival rates averaged over likelihood-weighted model results were 0.8784 +/- 0.1048 (SE) for locals, 0.2646 +/- 0.0423 (SE) for juveniles, and 0.4898 +/- 0.0329 (SE) for adults. Weighted average recovery rates were 0.0326 +/- 0.0053 (SE) for juveniles and 0.0313 +/- 0.0047 (SE) for adults. Estimated differences between our survival estimates and those from prior years were small, and our confidence around those differences was variable and uncertain. Juvenile survival rates were low.
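The likelihood-weighted averaging over candidate models can be sketched as Akaike-weight model averaging; the AICc values and survival estimates below are invented for illustration, and the unconditional standard error follows the common Burnham-Anderson form, which may differ in detail from the authors' computation.

```python
import numpy as np

aicc = np.array([1012.3, 1013.1, 1014.0, 1015.6, 1016.2])   # candidate models (invented)
adult_s = np.array([0.49, 0.48, 0.50, 0.47, 0.51])           # adult survival per model
adult_se = np.array([0.033, 0.035, 0.031, 0.036, 0.030])

delta = aicc - aicc.min()
w = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()        # Akaike weights

s_bar = np.sum(w * adult_s)                                   # model-averaged estimate
# Unconditional SE includes the between-model variance component.
se_bar = np.sum(w * np.sqrt(adult_se**2 + (adult_s - s_bar)**2))
print(f"model-averaged adult survival {s_bar:.4f} +/- {se_bar:.4f}")
```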
Averaging the Equations of a Planetary Problem in an Astrocentric Reference Frame
NASA Astrophysics Data System (ADS)
Mikryukov, D. V.
2018-05-01
A system of averaged equations of planetary motion around a central star is constructed. An astrocentric coordinate system is used. The two-planet problem is considered, but all constructions are easily generalized to an arbitrary number N of planets. The motion is investigated in modified (complex) Poincaré canonical elements. The averaging is performed by the Hori-Deprit method over the fast mean longitudes to the second order relative to the planetary masses. An expansion of the disturbing function is constructed using the Laplace coefficients. Some terms of the expansion of the disturbing function and the first terms of the expansion of the averaged Hamiltonian are given. The results of this paper can be used to investigate the evolution of orbits with moderate eccentricities and inclinations in various planetary systems.
NASA Astrophysics Data System (ADS)
Lu, Xixi; Ran, Lishan
2015-04-01
The Yellow River system used to have very high sediment export to the ocean (around 1.5 Gt/yr in the 1950s) because of severe soil erosion on the Loess Plateau. However, its sediment export has declined to <0.25 Gt/yr in recent years (the 2000s), mainly due to human activities such as the construction of reservoirs and check dams and other soil and water conservation measures such as terrace construction and vegetation restoration. Such a drastic reduction in soil erosion and sediment flux, and consequently in the associated Particulate Organic Carbon (POC) transport, can potentially play a significant role in carbon cycling. Using the sediment flux budget, we examined the POC budget and carbon sequestration through vegetation restoration and various soil and water conservation measures, including reservoir construction, over the past decades in the Yellow River system. Landsat imagery was used to delineate the reservoirs and check dams for estimating sediment trapping. The reservoirs and check dams trapped a total of 0.94 Gt/yr of sediment, equivalent to 6.5 Mt C. Soil erosion control through vegetation restoration and terrace construction reduced soil erosion by 1.82 Gt/yr, equivalent to 12 Mt C. The annual NPP increased from 0.150 Gt C in 2000 to 0.1889 Gt C in 2010, an average increment of 3.4 Mt C per year over the recent decade (2000 to 2010) through vegetation restoration. The total carbon stabilized on slope systems through soil erosion control (12 Mt C per year) was much higher than the direct carbon sequestration via vegetation restoration (3.4 Mt C per year), indicating the importance of horizontal carbon mobilization in carbon cycling, albeit with high estimation uncertainty.
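A back-of-envelope check of the sediment-to-carbon conversion implied by the reported figures; the organic-carbon fraction is inferred from those figures rather than measured.

```python
# Organic-carbon fraction implied by 6.5 Mt C per 0.94 Gt of trapped sediment.
sediment_trapped_gt_per_yr = 0.94      # Gt/yr trapped by reservoirs and check dams
erosion_reduced_gt_per_yr = 1.82       # Gt/yr reduced by slope measures

oc_fraction = 6.5e-3 / 0.94            # ~0.7% organic carbon by mass (inferred)
poc_trapped_mt = sediment_trapped_gt_per_yr * 1000 * oc_fraction
poc_stabilized_mt = erosion_reduced_gt_per_yr * 1000 * oc_fraction
print(f"OC fraction ~{oc_fraction:.3%}; trapped {poc_trapped_mt:.1f} Mt C/yr; "
      f"stabilized on slopes {poc_stabilized_mt:.1f} Mt C/yr")
```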
Connolly, Mark P; Tashjian, Cole; Kotsopoulos, Nikolaos; Bhatt, Aomesh; Postma, Maarten J
2017-07-01
Numerous approaches are used to estimate indirect productivity losses using various wage estimates applied to poor health in working-aged adults. Considering the different wage estimation approaches observed in the published literature, we sought to assess variation in productivity loss estimates when using average wages compared with age-specific wages. Published estimates for average and age-specific wages for combined male/female wages were obtained from the UK Office of National Statistics. A polynomial interpolation was used to convert 5-year age-banded wage data into annual age-specific wage estimates. To compare indirect cost estimates, average wages and age-specific wages were used to project productivity losses at various stages of life based on the human capital approach. Discount rates of 0, 3, and 6 % were applied to projected age-specific and average wage losses. Using average wages was found to overestimate lifetime wages in conditions afflicting those aged 1-27 and 57-67, while underestimating lifetime wages in those aged 27-57. The difference was most significant for children, where the average wage overestimated wages by 15 %, and for 40-year-olds, where it underestimated wages by 14 %. Large differences in projecting productivity losses exist when using the average wage applied over a lifetime. Specifically, use of average wages overestimates productivity losses by between 8 and 15 % for childhood illnesses. Furthermore, during prime working years, use of average wages will underestimate productivity losses by 14 %. We suggest that to achieve more precise estimates of productivity losses, age-specific wages should become the standard analytic approach.
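A hedged sketch of the two wage approaches under the human capital method: polynomial interpolation of 5-year age-banded wages into annual age-specific wages, followed by discounted lifetime-loss projections. The banded wages and discount rate are placeholders, not ONS figures.

```python
import numpy as np

band_midpoints = np.array([22, 27, 32, 37, 42, 47, 52, 57, 62])      # 5-year bands
band_wages = np.array([21, 26, 30, 33, 34, 34, 33, 31, 28]) * 1e3    # GBP/yr (invented)

coeffs = np.polyfit(band_midpoints, band_wages, deg=3)    # polynomial interpolation
age_specific = np.polyval(coeffs, np.arange(20, 66))      # annual wages, ages 20-65
average_wage = band_wages.mean()

def lifetime_loss(onset_age, wages_by_age, r=0.03):
    """Discounted lost wages from onset to age 65 (human capital approach)."""
    ages = np.arange(onset_age, 66)
    w = (wages_by_age[ages - 20] if np.ndim(wages_by_age)
         else np.full(ages.size, wages_by_age))
    return np.sum(w / (1 + r) ** (ages - onset_age))

for onset in (20, 40, 60):
    print(onset, round(lifetime_loss(onset, age_specific)),
          round(lifetime_loss(onset, average_wage)))
```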
NASA Astrophysics Data System (ADS)
Sutawanir
2015-12-01
Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Well-known mortality tables include the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan Mortality Table. For actuarial applications, some tables are constructed under different settings such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article aims to discuss the statistical approach to mortality table construction. The distributional assumptions are the uniform death distribution (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE); however, they do not use the complete mortality data. Maximum likelihood exploits all available information in mortality estimation, although some MLE equations are complicated and must be solved using numerical methods. The article focuses on single-decrement estimation using moment and maximum likelihood estimation; an extension to double decrement is also introduced. A simple dataset is used to illustrate the mortality estimation and the resulting mortality table.
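One common textbook formulation of the single-decrement case is sketched below, contrasting a moment (actuarial) estimate of the one-year mortality probability with the constant-force maximum likelihood estimate; the data are synthetic and the article's exact estimators may differ.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
true_mu = 0.02                                   # constant force of mortality
death_time = rng.exponential(1 / true_mu, n)     # time of death measured from age x
died = death_time < 1.0                          # deaths within the year of age
exposure = np.where(died, death_time, 1.0)       # central exposure in [x, x+1)

q_moment = died.sum() / n                        # moment (binomial) estimate of q_x
mu_mle = died.sum() / exposure.sum()             # MLE of the constant force mu
q_mle = 1 - np.exp(-mu_mle)                      # implied q_x under constant force
print(f"q (moment) = {q_moment:.5f}, q (MLE, constant force) = {q_mle:.5f}")
```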
NASA Technical Reports Server (NTRS)
Cakmur, R. V.; Miller, R. L.; Tegen, Ina; Hansen, James E. (Technical Monitor)
2001-01-01
The seasonal cycle and interannual variability of two estimates of soil (or 'mineral') dust aerosols are compared: Advanced Very High Resolution Radiometer (AVHRR) aerosol optical thickness (AOT) and Total Ozone Mapping Spectrometer (TOMS) aerosol index (AI). Both data sets, comprising more than a decade of global, daily images, are commonly used to evaluate aerosol transport models. The present comparison is based upon monthly averages, constructed from daily images of each data set for the period between 1984 and 1990, a period that excludes contamination from volcanic eruptions. The comparison focuses upon the Northern Hemisphere subtropical Atlantic Ocean, where soil dust aerosols make the largest contribution to the aerosol load, and are assumed to dominate the variability of each data set. While each retrieval is sensitive to a different aerosol radiative property - absorption for the TOMS AI versus reflectance for the AVHRR AOT - the seasonal cycles of dust loading implied by each retrieval are consistent, if seasonal variations in the height of the aerosol layer are taken into account when interpreting the TOMS AI. On interannual time scales, the correlation is low at most locations. It is suggested that the poor interannual correlation is at least partly a consequence of data availability. When the monthly averages are constructed using only days common to both data sets, the correlation is substantially increased: this consistency suggests that both TOMS and AVHRR accurately measure the aerosol load in any given scene. However, the two retrievals have only a few days in common per month so that these restricted monthly averages have a large uncertainty. Calculations suggest that at least 7 to 10 daily images are needed to estimate reliably the average dust load during any particular month, a threshold that is rarely satisfied by the AVHRR AOT due to the presence of clouds in the domain. By rebinning each data set onto a coarser grid, the availability of the AVHRR AOT is increased during any particular month, along with its interannual correlation with the TOMS AI. The latter easily exceeds the sampling threshold due to its greater ability to infer the aerosol load in the presence of clouds. Whether the TOMS AI should be regarded as a more reliable indicator of interannual variability depends upon the extent of contamination by sub-pixel clouds.
Quantum metrology of spatial deformation using arrays of classical and quantum light emitters
NASA Astrophysics Data System (ADS)
Sidhu, Jasminder S.; Kok, Pieter
2017-06-01
We introduce spatial deformations to an array of light sources and study how the estimation precision of the interspacing distance d changes with the sources of light used. The quantum Fisher information (QFI) is used as the figure of merit in this work to quantify the amount of information we have on the estimation parameter. We derive the generator of translations Ĝ in d due to an arbitrary homogeneous deformation applied to the array. We show how the variance of the generator can be used to easily consider how different deformations and light sources can affect the estimation precision. The single-parameter estimation problem is applied to the array, and we report on the optimal state that maximizes the QFI for d. Contrary to what may have been expected, classical states with higher average mode occupancies perform better in estimating d than single-photon emitters (SPEs). The optimal entangled state is constructed from the eigenvectors of the generator and found to outperform all these states. We also find the existence of multiple optimal estimators for the measurement of d. Our results find applications in evaluating stresses and strains, fracture prevention in materials expressing great sensitivities to deformations, and selecting frequency-distinguished quantum sources from an array of reference sources.
Students' Preference for Science Careers: International comparisons based on PISA 2006
NASA Astrophysics Data System (ADS)
Kjærnsli, Marit; Lie, Svein
2011-01-01
This article deals with 15-year-old students' tendencies to consider a future science-related career. Two aspects have been the focus of our investigation. The first is based on the construct called 'future science orientation', an affective construct consisting of four Likert scale items that measure students' consideration of being involved in future education and careers in science-related areas. Due to the well-known evidence for Likert scales providing culturally biased estimates, the aim has been to go beyond the comparison of simple country averages. In a series of regression and correlation analyses, we have investigated how well the variance of this construct in each of the participating countries can be accounted for by other Programme for International Student Assessment (PISA) student data. The second aspect is based on a question about students' future jobs. By separating science-related jobs into what we have called 'soft' and 'hard' science-related types of jobs, we have calculated and compared country percentages within each category. In particular, gender differences are discussed, and interesting international patterns have been identified. The results in this article have been reported not only for individual countries, but also for groups of countries. These cluster analyses of countries are based on item-by-item patterns of (residual values of) national average values for the combination of cognitive and affective items. The emerging cluster structure of countries has turned out to contribute to the literature of similarities and differences between countries and the factors behind the country clustering both in science education and more generally.
The development and characterisation of a bacterial artificial chromosome library for Fragaria vesca
Bonet, Julio; Girona, Elena Lopez; Sargent, Daniel J; Muñoz-Torres, Monica C; Monfort, Amparo; Abbott, Albert G; Arús, Pere; Simpson, David W; Davik, Jahn
2009-01-01
Background The cultivated strawberry Fragaria ×ananassa is one of the most economically-important soft-fruit species. Few structural genomic resources have been reported for Fragaria and there exists an urgent need for the development of physical mapping resources for the genus. The first stage in the development of a physical map for Fragaria is the construction and characterisation of a high molecular weight bacterial artificial chromosome (BAC) library. Methods A BAC library, consisting of 18,432 clones, was constructed from Fragaria vesca f. semperflorens accession 'Ali Baba'. BAC DNA from individual library clones was pooled to create a PCR-based screening assay for the library, whereby individual clones could be identified with just 34 PCR reactions. These pools were used to screen the BAC library and anchor individual clones to the diploid Fragaria reference map (FV×FN). Findings Clones from the BAC library had an average insert size of 85 kb, representing over seven genome equivalents. The pools and superpools developed were used to identify a set of BAC clones containing 70 molecular markers previously mapped to the diploid Fragaria FV×FN reference map. The number of positive colonies identified for each marker suggests the library represents between 4× and 10× coverage of the diploid Fragaria genome, which is in accordance with the estimate of library coverage based on average insert size. Conclusion This BAC library will be used for the construction of a physical map for F. vesca and the superpools will permit physical anchoring of molecular markers using PCR. PMID:19772672
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-17
... Request for the Production Estimate, Quarterly Construction Sand and Gravel and Crushed and Broken Stone... Production Estimate, Quarterly Construction Sand and Gravel and Crushed and Broken Stone. This collection... Construction Sand and Gravel and Crushed and Broken Stone. Type of Request: Extension of a currently approved...
36 CFR 223.83 - Contents of prospectus.
Code of Federal Regulations, 2010 CFR
2010-07-01
... specified roads to be constructed. (16) The estimated road construction cost and the estimated public works..., the prospectus shall also include: (1) The road standards applicable to construction of permanent... permanent roads. (3) A statement explaining how the Forest Service intends to perform road construction by...
Study of Wetland Ecosystem Vegetation Using Satellite Data
NASA Astrophysics Data System (ADS)
Dyukarev, E. A.; Alekseeva, M. N.; Golovatskaya, E. A.
2017-12-01
The normalized difference vegetation index (NDVI) is used to estimate the aboveground net production (ANP) of wetland ecosystems for the key area in the South Taiga zone of West Siberia. The vegetation index and aboveground production are related by a linear dependence that is specific to each wetland ecosystem. The NDVI grows with an increase in the ANP in wooded oligotrophic ecosystems. Open oligotrophic bogs and eutrophic wetlands are characterized by an opposite relation. Maps of aboveground production for wetland ecosystems are constructed for each study year and for the whole period of studies. The average aboveground production for all wetland ecosystems of the key area, estimated taking into account the area each ecosystem occupies and using satellite measurements of the vegetation index, is 305 g C/m2/yr. The total annual carbon accumulation in aboveground wetland vegetation in the key area is 794,600 t.
Highway Cost Index Estimator Tool
DOT National Transportation Integrated Search
2017-10-01
To plan and program highway construction projects, the Texas Department of Transportation requires accurate construction cost data. However, due to the number of, and uncertainty of, variables that affect highway construction costs, estimating future...
[A site index model for Larix principis-rupprechtii plantation in Saihanba, north China].
Wang, Dong-zhi; Zhang, Dong-yan; Jiang, Feng-ling; Bai, Ye; Zhang, Zhi-dong; Huang, Xuan-rui
2015-11-01
It is often difficult to estimate site indices for different types of plantation by using an ordinary site index model. The objective of this paper was to establish a site index model for plantations in varied site conditions, and assess the site qualities. In this study, a nonlinear mixed site index model was constructed based on data from the second class forest resources inventory and 173 temporary sample plots. The results showed that the main limiting factors for height growth of Larix principis-rupprechtii were elevation, slope, soil thickness and soil type. A linear regression model was constructed for the main constraining site factors and dominant tree height, with the coefficient of determination being 0.912, and the baseline age of Larix principis-rupprechtii determined as 20 years. The nonlinear mixed site index model parameters for the main site types were estimated (R2 > 0.85, the error between the predicted value and the actual value was in the range of -0.43 to 0.45, with an average root mean squared error (RMSE) in the range of 0.907 to 1.148). The estimation error between the predicted value and the actual value of dominant tree height for the main site types was in the confidence interval of [-0.95, 0.95]. The site quality of the high altitude-shady-sandy loam-medium soil layer was the highest and that of low altitude-sunny-sandy loam-medium soil layer was the lowest, while the other two sites were moderate.
Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method
NASA Astrophysics Data System (ADS)
Pei-Jui, Wu; Hwa-Lung, Yu
2016-04-01
Heavy rainfall from typhoons is the main cause of natural disasters in Taiwan and leads to significant losses of human life and property. On average, 3.5 typhoons strike Taiwan every year, and the most serious typhoon in recorded history, Morakot, struck Taiwan in 2009. Because the duration, path and intensity of a typhoon also affect the temporal and spatial rainfall pattern in a specific region, identifying the characteristics of typhoon rainfall types is advantageous when estimating rainfall amounts. This study developed a rainfall prediction model that can be divided into three parts. First, the extended empirical orthogonal function (EEOF) method is used to classify the typhoon events, decomposing the standardized rainfall pattern of all stations for each typhoon event into EOFs and principal components (PCs), so that typhoon events with similar temporal and spatial variation can be classified as similar typhoon types. Next, based on this classification, we construct the probability density function (PDF) across space and time using the multivariate maximum entropy method with the first to fourth statistical moments, which gives the probability at each station for each time. Finally, we use the Bayesian Maximum Entropy (BME) method to construct the typhoon rainfall prediction model and to estimate rainfall for the case of the GaoPing River, located in southern Taiwan. This study could be useful for future typhoon rainfall prediction and could support government efforts in typhoon disaster prevention.
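The EOF decomposition underlying the classification step can be sketched with a singular value decomposition; the rainfall matrix below is synthetic (events by station-time grid), and grouping events by their leading principal components stands in for the EEOF-based typing.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.gamma(shape=2.0, scale=10.0, size=(30, 200))   # 30 typhoon events x 200 station-times
X_anom = X - X.mean(axis=0)                             # remove the mean rainfall pattern

U, s, Vt = np.linalg.svd(X_anom, full_matrices=False)
eofs = Vt                     # spatial-temporal rainfall patterns
pcs = U * s                   # loading of each event on each pattern
explained = s**2 / np.sum(s**2)
print("variance explained by first 3 modes:", explained[:3].round(3))

# Events with similar leading PC scores can then be grouped as similar typhoon
# types, e.g. by clustering on pcs[:, :3].
```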
Seo, Seongwon; Hwang, Yongwoo
1999-08-01
Construction and demolition (C&D) debris is generated at the site of various construction activities. However, the amount of the debris is usually so large that it is necessary to estimate the amount of C&D debris as accurately as possible for effective waste management and control in urban areas. In this paper, an effective estimation method using a statistical model was proposed. The estimation process was composed of five steps: estimation of the life span of buildings; estimation of the floor area of buildings to be constructed and demolished; calculation of individual intensity units of C&D debris; and estimation of the future C&D debris production. This method was also applied in the city of Seoul as an actual case, and the estimated amount of C&D debris in Seoul in 2021 was approximately 24 million tons. Of this total amount, 98% was generated by demolition, and the main components of debris were concrete and brick.
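A simplified sketch of the floor-area times intensity-unit logic; all areas and intensity units are illustrative, not the Seoul figures.

```python
# Projected floor areas for a target year (m2) and debris intensity units (t/m2),
# all values invented for illustration.
floor_area_demolished_m2 = 9_000_000
floor_area_constructed_m2 = 12_000_000

intensity_demolition = {"concrete": 1.50, "brick": 0.40, "other": 0.15}
intensity_construction = {"concrete": 0.03, "brick": 0.01, "other": 0.01}

debris_t = (sum(floor_area_demolished_m2 * v for v in intensity_demolition.values())
            + sum(floor_area_constructed_m2 * v for v in intensity_construction.values()))
print(f"estimated C&D debris: {debris_t / 1e6:.1f} million tons")
```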
NASA Astrophysics Data System (ADS)
Hellaby, Charles
2012-01-01
A new method for constructing exact inhomogeneous universes is presented, that allows variation in 3 dimensions. The resulting spacetime may be statistically uniform on average, or have random, non-repeating variation. The construction utilises the Darmois junction conditions to join many different component spacetime regions. In the initial simple example given, the component parts are spatially flat and uniform, but much more general combinations should be possible. Further inhomogeneity may be added via Swiss-cheese vacuoles and inhomogeneous metrics. This model is used to explore the proposal that observers are located in bound, non-expanding regions, while the universe is actually in the process of becoming void dominated, and thus its average expansion rate is increasing. The model confirms qualitatively that the faster expanding components come to dominate the average, and that inhomogeneity results in average parameters which evolve differently from those of any one component, but more realistic modelling of the effect will need this construction to be generalised.
Smith, Matthew R; Micha, Renata; Golden, Christopher D; Mozaffarian, Dariush; Myers, Samuel S
2016-01-01
Insufficient data exist for accurate estimation of global nutrient supplies. Commonly used global datasets contain key weaknesses: 1) data with global coverage, such as the FAO food balance sheets, lack specific information about many individual foods and no information on micronutrient supplies nor heterogeneity among subnational populations, while 2) household surveys provide a closer approximation of consumption, but are often not nationally representative, do not commonly capture many foods consumed outside of the home, and only provide adequate information for a few select populations. Here, we attempt to improve upon these datasets by constructing a new model--the Global Expanded Nutrient Supply (GENuS) model--to estimate nutrient availabilities for 23 individual nutrients across 225 food categories for thirty-four age-sex groups in nearly all countries. Furthermore, the model provides historical trends in dietary nutritional supplies at the national level using data from 1961-2011. We determine supplies of edible food by expanding the food balance sheet data using FAO production and trade data to increase food supply estimates from 98 to 221 food groups, and then estimate the proportion of major cereals being processed to flours to increase to 225. Next, we estimate intake among twenty-six demographic groups (ages 20+, both sexes) in each country by using data taken from the Global Dietary Database, which uses nationally representative surveys to relate national averages of food consumption to individual age and sex-groups; for children and adolescents where GDD data does not yet exist, average calorie-adjusted amounts are assumed. Finally, we match food supplies with nutrient densities from regional food composition tables to estimate nutrient supplies, running Monte Carlo simulations to find the range of potential nutrient supplies provided by the diet. To validate our new method, we compare the GENuS estimates of nutrient supplies against independent estimates by the USDA for historical US nutrition and find very good agreement for 21 of 23 nutrients, though sodium and dietary fiber will require further improvement.
John R. Brooks
2007-01-01
A technique for estimating stand average dominant height based solely on field inventory data is investigated. Using only 45.0919 percent of the largest trees per acre in the diameter distribution resulted in estimates of average dominant height that were within 4.3 feet of the actual value, when averaged over stands of very different structure and history. Cubic foot...
The Nation's top 25 construction aggregates producers
Willett, Jason Christopher
2013-01-01
U.S. production of construction aggregates in 2011 was 2.17 billion short tons, valued at $17.2 billion, free on board (f.o.b.) at plant. Construction aggregates production decreased by 37 percent, and the associated value decreased by 25 percent, compared with the record highs reported in 2006. In 2011, construction aggregates production increased for the first time since 2006, owing to a very slight increase in the production of both construction sand and gravel and crushed stone. The average unit value, which is the f.o.b. at plant price of a ton of material, increased slightly, but is still less than the average unit value of two years prior.
Pairwise measures of causal direction in the epidemiology of sleep problems and depression.
Rosenström, Tom; Jokela, Markus; Puttonen, Sampsa; Hintsanen, Mirka; Pulkki-Råback, Laura; Viikari, Jorma S; Raitakari, Olli T; Keltikangas-Järvinen, Liisa
2012-01-01
Depressive mood is often preceded by sleep problems, suggesting that they increase the risk of depression. Sleep problems can also reflect a prodromal symptom of depression, thus temporal precedence alone is insufficient to confirm causality. The authors applied recently introduced statistical causal-discovery algorithms that can estimate causality from cross-sectional samples in order to infer the direction of causality between the two sets of symptoms from a novel perspective. Two population-based samples were used; one from the Young Finns study (690 men and 997 women, average age 37.7 years, range 30-45), and another from the Wisconsin Longitudinal study (3101 men and 3539 women, average age 53.1 years, range 52-55). These included three depression questionnaires (two in Young Finns data) and two sleep problem questionnaires. Three different causality estimates were constructed for each data set, tested on benchmark data with a (practically) known causality, and tested for assumption violations using simulated data. Causality algorithms performed well in the benchmark data and simulations, and a prediction was drawn for future empirical studies to confirm: for minor depression/dysphoria, sleep problems cause significantly more dysphoria than dysphoria causes sleep problems. The situation may change as depression becomes more severe, or more severe levels of symptoms are evaluated; also, artefacts due to severe depression being less well represented in the population data than minor depression may interfere with the estimation for depression scales that emphasize severe symptoms. The findings are consistent with other emerging epidemiological and biological evidence.
Assessment of historical exposures in a nickel refinery in Norway.
Grimsrud, T K; Berge, S R; Resmann, F; Norseth, T; Andersen, A
2000-08-01
The aim of the study was, on the basis of new information on nickel species and exposure levels, to generate a specific exposure matrix for epidemiologic analyses in a cohort of Norwegian nickel-refinery workers with a known excess of respiratory cancer. A department-time-exposure matrix was constructed with average exposure to total nickel estimated as the arithmetic mean of personal measurements for periods between 1973 and 1994. From 1972 back to the start of production in 1910, exposure concentrations were estimated through retrograde calculation with multiplication factors developed on the basis of reported changes in the metallurgical process and work environment. The relative distribution of water-soluble nickel salts (sulfates and chlorides), metallic nickel, and particulates with limited solubility (sulfides and oxides) was mainly derived from speciation analyses conducted in the 1990s. The average concentration of nickel in the breathing zone was < or = 0.7 mg/m3 for all workers after 1978. Exposure levels for smelter and roaster day workers were 2-6 mg/m3 before 1970, while workers in nickel electrolysis and electrolyte purification were exposed to concentrations in the range of 0.15-1.2 mg/m3. The level of water-soluble nickel was of the same order for workers in the smelting and roasting departments as in some of the electrolyte purification departments. Compared with earlier estimates, the present matrix probably offers a more reliable description of past exposures at the plant.
Earthquake prediction analysis based on empirical seismic rate: the M8 algorithm
NASA Astrophysics Data System (ADS)
Molchan, G.; Romashkova, L.
2010-12-01
The quality of space-time earthquake prediction is usually characterized by a 2-D error diagram (n, τ), where n is the fraction of failures-to-predict and τ is the local rate of alarm averaged in space. The most reasonable averaging measure for analysis of a prediction strategy is the normalized rate of target events λ(dg) in a subarea dg. In that case the quantity H = 1 - (n + τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n, τ)-parameters of the random strategies. We use our approach to analyse the results from prediction of M >= 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The model of λ(dg) based on the events Mw >= 5.5, 1977-2004, and the magnitude range of target events 8.0 <= M < 8.5 are considered as basic to this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. However, our results argue in favour of non-triviality of the M8 prediction algorithm.
Lower Charles River Bathymetry: 108 Years of Fresh Water
NASA Astrophysics Data System (ADS)
Yoder, M.; Sacarny, M.
2017-12-01
The Lower Charles River is a heavily utilized urban river that runs between Cambridge and Boston in Massachusetts. The recreational usage of the river is dependent on adequate water depths, but there have been no definitive prior studies on the sedimentation rate of the Lower Charles River. The river transitioned from tidal to a freshwater basin in 1908 due to the construction of the (old) Charles River Dam. Water surface height on the Lower Charles River is maintained within ±1 foot through controlled discharge at the new Charles River Dam. The current study area for historical comparisons is from the old Charles River Dam to the Boston University Bridge. This study conducted a bathymetric survey of the Lower Charles River, digitized three prior surveys in the study area, calculated volumes and depth distributions for each survey, and estimated sedimentation rates from fits to the volumes over time. The oldest chart digitized was produced in 1902 during dam construction deliberations. The average sedimentation rate is estimated as 5-10 mm/year, which implies 1.8-3.5 feet sedimentation since 1908. Sedimentation rates and distributions are necessary to develop comprehensive management plans for the river and there is evidence to suggest that sedimentation rates in the shallow upstream areas are higher than the inferred rates in the study area.
Integrated Model for Performance Analysis of All-Optical Multihop Packet Switches
NASA Astrophysics Data System (ADS)
Jeong, Han-You; Seo, Seung-Woo
2000-09-01
The overall performance of an all-optical packet switching system is usually determined by two criteria, i.e., switching latency and packet loss rate. In some real-time applications, however, in which packets arriving later than a timeout period are discarded as lost, the packet loss rate becomes the most dominant criterion for system performance. Here we focus on evaluating the performance of all-optical packet switches in terms of the packet loss rate, which normally arises from insufficient hardware or from degradation of the optical signal. Considering both aspects, we propose what we believe is a new analysis model for the packet loss rate that reflects the complicated interactions between physical impairments and system-level parameters. On the basis of the estimation model for signal quality degradation in a multihop path we construct an equivalent analysis model of a switching network for evaluating an average bit error rate. With the model constructed we then propose an integrated model for estimating the packet loss rate in three architectural examples of multihop packet switches, each of which is based on a different switching concept. We also derive the bounds on the packet loss rate induced by bit errors. Finally, it is verified through simulation studies that our analysis model accurately predicts system performance.
NASA Astrophysics Data System (ADS)
Rotzoll, K.; Izuka, S. K.; Nishikawa, T.; Fienen, M. N.; El-Kadi, A. I.
2015-12-01
The volcanic-rock aquifers of Kauai, Oahu, and Maui are heavily developed, leading to concerns related to the effects of groundwater withdrawals on saltwater intrusion and streamflow. A numerical modeling analysis using the most recently available data (e.g., information on recharge, withdrawals, hydrogeologic framework, and conceptual models of groundwater flow) will substantially advance current understanding of groundwater flow and provide insight into the effects of human activity and climate change on Hawaii's water resources. Three island-wide groundwater-flow models were constructed using MODFLOW 2005 coupled with the Seawater-Intrusion Package (SWI2), which simulates the transition between saltwater and freshwater in the aquifer as a sharp interface. This approach allowed relatively fast model run times without ignoring the freshwater-saltwater system at the regional scale. Model construction (FloPy3), automated-parameter estimation (PEST), and analysis of results were streamlined using Python scripts. Model simulations included pre-development (1870) and current (average of 2001-10) scenarios for each island. Additionally, scenarios for future withdrawals and climate change were simulated for Oahu. We present our streamlined approach and preliminary results showing estimated effects of human activity on the groundwater resource by quantifying decline in water levels, reduction in stream base flow, and rise of the freshwater-saltwater interface.
Choi, J.; Harvey, J.W.
2000-01-01
Developing a more thorough understanding of water and chemical budgets in wetlands depends in part on our ability to quantify time-varying interactions between ground water and surface water. We used a combined water and solute mass balance approach to estimate time-varying ground-water discharge and recharge in the Everglades Nutrient Removal project (ENR), a relatively large constructed wetland (1544 hectares) built for removing nutrients from agricultural drainage in the northern Everglades in South Florida, USA. Over a 4-year period (1994 through 1998), ground-water recharge averaged 13.4 hectare-meters per day (ha-m/day) or 0.9 cm/day, which is approximately 31% of surface water pumped into the ENR for treatment. In contrast, ground-water discharge was much smaller (1.4 ha-m/day, or 0.09 cm/day, or 2.8% of water input to ENR for treatment). Using a water-balance approach alone only allowed net ground-water exchange (discharge - recharge) to be estimated (-12 ± 2.4 ha-m/day). Discharge and recharge were individually determined by combining a chloride mass balance with the water balance. For a variety of reasons, the ground-water discharge estimated by the combined mass balance approach was not reliable (1.4 ± 37 ha-m/day). As a result, ground-water interactions could only be reliably estimated by comparing the mass-balance results with other independent approaches, including direct seepage-meter measurements and previous estimates using ground-water modeling. All three independent approaches provided similar estimates of average ground-water recharge, ranging from 13 to 14 ha-m/day. There was also relatively good agreement between ground-water discharge estimates for the mass balance and seepage meter methods, 1.4 and 0.9 ha-m/day, respectively. However, ground-water-flow modeling provided an average discharge estimate that was approximately a factor of four higher (5.4 ha-m/day) than the other two methods. Our study developed an initial understanding of how the design and operation of the ENR increases interactions between ground water and surface water. A considerable portion of recharged ground water (73%) was collected and returned to the ENR by a seepage canal. Additional recharge that was not captured by the seepage canal only occurred when pumped inflow rates to ENR (and ENR water levels) were relatively high. Management of surface water in the northern Everglades therefore clearly has the potential to increase interactions with ground water.
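The separation of discharge and recharge from a paired water and chloride mass balance amounts to solving two linear equations in two unknowns; the sketch below uses invented fluxes and concentrations, not the ENR data.

```python
import numpy as np

# Invented placeholder values (ha-m/day for fluxes, mg/L for concentrations).
Q_in, Q_out = 43.0, 30.0        # pumped inflow, surface outflow
P, ET = 2.0, 4.0                # rainfall, evapotranspiration
dS_dt = -1.0                    # change in surface-water storage
C_sw, C_gw, C_in = 90.0, 150.0, 80.0   # chloride in surface water, ground water, inflow
dM_dt = -800.0                  # change in chloride storage minus surface-water terms omitted below

# Water balance:    D - R = dS/dt - (Q_in - Q_out + P - ET)
# Chloride balance: C_gw*D - C_sw*R = dM/dt   (ET carries no chloride; recharge leaves
#                                              at surface-water concentration)
A = np.array([[1.0, -1.0],
              [C_gw, -C_sw]])
b = np.array([dS_dt - (Q_in - Q_out + P - ET),
              dM_dt])
D, R = np.linalg.solve(A, b)
print(f"discharge {D:.1f} ha-m/day, recharge {R:.1f} ha-m/day, net {D - R:.1f}")
```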
Jishi, Tomohiro; Matsuda, Ryo; Fujiwara, Kazuhiro
2018-06-01
Square-wave pulsed light is characterized by three parameters, namely average photosynthetic photon flux density (PPFD), pulsed-light frequency, and duty ratio (the ratio of light-period duration to that of the light-dark cycle). In addition, the light-period PPFD is determined by the averaged PPFD and duty ratio. We investigated the effects of these parameters and their interactions on net photosynthetic rate (Pn) of cos lettuce leaves for every combination of parameters. Averaged PPFD values were 0-500 µmol m-2 s-1. Frequency values were 0.1-1000 Hz. White LED arrays were used as the light source. Every parameter affected Pn and interactions between parameters were observed for all combinations. The Pn under pulsed light was lower than that measured under continuous light of the same averaged PPFD, and this difference was enhanced with decreasing frequency and increasing light-period PPFD. A mechanistic model was constructed to estimate the amount of stored photosynthetic intermediates over time under pulsed light. The results indicated that all effects of parameters and their interactions on Pn were explainable by consideration of the dynamics of accumulation and consumption of photosynthetic intermediates.
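A toy pool model (not the authors' fitted model) illustrates why Pn falls at low pulse frequency: intermediates accumulate toward a capacity during the light period and are consumed continuously, so at low frequency much of the light-period flux is wasted once the pool saturates. All parameter values are invented.

```python
import numpy as np

def mean_pn(avg_ppfd, freq_hz, duty, capacity=100.0, k_use=0.5, dt=1e-3, t_end=60.0):
    """Time-averaged consumption of intermediates as a proxy for net assimilation."""
    light_ppfd = avg_ppfd / duty            # light-period PPFD set by average and duty
    period = 1.0 / freq_hz
    pool, used, t = 0.0, 0.0, 0.0
    while t < t_end:
        on = (t % period) < duty * period
        fill = light_ppfd * (1 - pool / capacity) if on else 0.0   # saturating uptake
        use = k_use * pool                                          # continuous consumption
        pool += (fill - use) * dt
        used += use * dt
        t += dt
    return used / t_end

for f in (0.1, 1.0, 10.0, 100.0):
    print(f, round(mean_pn(300, f, 0.5), 2))
```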
Probability distribution functions for intermittent scrape-off layer plasma fluctuations
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.
2018-03-01
A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
An AFLP genetic linkage map of pacific abalone ( Haliotis discus hannai)
NASA Astrophysics Data System (ADS)
Qi, Li; Yanhong, Xu; Ruihai, Yu; Akihiro, Kijima
2007-07-01
A genetic linkage map of Pacific abalone (Haliotis discus hannai) was constructed using AFLP markers based on a two-way pseudo-testcross strategy in a full-sib family. With 33 primer combinations, a total of 455 markers (225 from the female parent and 230 from the male parent) segregated in a 1:1 ratio, corresponding to DNA polymorphisms heterozygous in one parent and null in the other. The female framework map consisted of 174 markers distributed in 18 linkage groups, equivalent to the H. discus hannai haploid chromosome number, and spanning a total length of 2031.4 cM, with an average interval of 13.0 cM between adjacent markers. The male framework map consisted of 195 markers mapped on 19 linkage groups, spanning a total length of 2273.4 cM, with an average spacing of 12.9 cM between adjacent markers. The estimated coverage for the framework linkage maps was 81.2% for the female and 82.1% for the male, on the basis of two estimates of genome length. Fifty-two markers (11.4%) remained unlinked. The level of segregation distortion observed in this cross was 20.4%. These linkage maps will serve as a starting point for linkage studies in the Pacific abalone with potential application for marker-assisted selection in breeding programs.
Eruption history of the Tharsis shield volcanoes, Mars
NASA Technical Reports Server (NTRS)
Plescia, J. B.
1993-01-01
The Tharsis Montes volcanoes and Olympus Mons are giant shield volcanoes. Although estimates of their average surface age have been made using crater counts, the length of time required to build the shields has not been considered. Crater counts for the volcanoes indicate the constructs are young; average ages are Amazonian to Hesperian. In relative terms, Arsia Mons is the oldest, Pavonis Mons intermediate, and Ascraeus Mons the youngest of the Tharsis Montes shields; Olympus Mons is the youngest of the group. Depending upon the calibration, absolute ages range from 730 Ma to 3100 Ma for Arsia Mons and 25 Ma to 100 Ma for Olympus Mons. These absolute chronologies are highly model dependent, and indicate only the time surficial volcanism ceased, not the time over which the volcano was built. The problem of estimating the time necessary to build the volcanoes can be attacked in two ways. First, eruption rates from terrestrial and extraterrestrial examples can be used to calculate the required period of time to build the shields. Second, some relation of eruptive activity between the volcanoes can be assumed, such as that they all began at a specific time or were active sequentially, and the eruptive rate calculated. Volumes of the shield volcanoes were derived from topographic/volume data.
Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis
Peng, Zhenyun; Zhang, Yaohui
2014-01-01
Hair is a salient feature of the human face region and is one of the important cues for face analysis. Accurate detection and presentation of the hair region is one of the key components for automatic synthesis of human facial caricature. In this paper, an automatic hair detection algorithm for the application of automatic synthesis of facial caricature based on a single image is proposed. Firstly, hair regions in training images are labeled manually and then the hair position prior distributions and hair color likelihood distribution function are estimated from these labels efficiently. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is further optimized using the graph cuts technique, and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system. Experiments proved that with our proposed hair segmentation algorithm the facial caricatures are vivid and satisfying. PMID:24592182
Marcum, Jennifer L; Foley, Michael; Adams, Darrin; Bonauto, Dave
2018-06-01
Construction is a high-hazard industry, and continually ranks among those with the highest workers' compensation (WC) claim rates in Washington State (WA). However, not all construction firms are at equal risk. We tested the ability to identify those construction firms most at risk for future claims using only administrative WC and unemployment insurance data. We collected information on construction firms with 10-50 average full-time equivalent (FTE) employees from the WA unemployment insurance and WC data systems (n=1228). Negative binomial regression was used to test the ability of firm characteristics measured during 2011-2013 to predict time-loss claim rates in the following year, 2014. Claim rates in 2014 varied by construction industry groups, ranging from 0.7 (Land Subdivision) to 4.6 (Foundation, Structure, and Building Construction) claims per 100 FTE. Construction firms with higher average WC premium rates, a history of WC claims, an increasing number of quarterly FTEs, and lower average wage rates during 2011-2013 were predicted to have higher WC claim rates in 2014. We demonstrate the ability to leverage administrative data to identify construction firms predicted to have future WC claims. This study should be repeated to determine if these results are applicable to other high-hazard industries. Practical Applications: This study identified characteristics that may be used to further refine targeted outreach and prevention to construction firms at risk. Published by Elsevier Ltd.
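The firm-level prediction can be sketched as a negative binomial regression of claim counts with an FTE exposure offset, here using synthetic data in place of the Washington WC and unemployment insurance records.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
firms = pd.DataFrame({
    "fte": rng.uniform(10, 50, n),
    "premium_rate": rng.lognormal(0.5, 0.3, n),
    "prior_claims": rng.poisson(1.0, n),
    "avg_wage": rng.normal(25, 5, n),
})
# Generate synthetic claim counts with a log link and FTE as exposure.
lin = (-3.0 + 0.3 * np.log(firms["premium_rate"]) + 0.25 * firms["prior_claims"]
       - 0.02 * firms["avg_wage"] + np.log(firms["fte"]))
firms["claims"] = rng.poisson(np.exp(lin))

X = sm.add_constant(firms[["premium_rate", "prior_claims", "avg_wage"]])
model = sm.GLM(firms["claims"], X,
               family=sm.families.NegativeBinomial(alpha=1.0),
               offset=np.log(firms["fte"]))
print(model.fit().summary())
```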
Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model
NASA Astrophysics Data System (ADS)
Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato
2018-02-01
This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
Cost estimators for construction of forest roads in the central Appalachians
Deborah A. Layton; Chris O. LeDoux; Curt C. Hassler
1992-01-01
Regression equations were developed for estimating the total cost of road construction in the central Appalachian region. Estimators include methods for predicting total costs for roads constructed using hourly rental methods and roads built on a total-job bid basis. Results show that total-job bid roads cost up to five times as much as roads built when equipment...
Shin, Yoonseok
2015-01-01
Among the recent data mining techniques available, the boosting approach has attracted a great deal of attention because of its effective learning algorithm and strong boundaries in terms of its generalization performance. However, the boosting approach has yet to be used in regression problems within the construction domain, including cost estimations, but has been actively utilized in other domains. Therefore, a boosting regression tree (BRT) is applied to cost estimations at the early stage of a construction project to examine the applicability of the boosting approach to a regression problem within the construction domain. To evaluate the performance of the BRT model, its performance was compared with that of a neural network (NN) model, which has been proven to have a high performance in cost estimation domains. The BRT model has shown results similar to those of NN model using 234 actual cost datasets of a building construction project. In addition, the BRT model can provide additional information such as the importance plot and structure model, which can support estimators in comprehending the decision making process. Consequently, the boosting approach has potential applicability in preliminary cost estimations in a building construction project.
Daniels, Joan S. (Thullen); Cade, Brian S.; Sartoris, James J.
2010-01-01
Assessment of emergent vegetation biomass can be time consuming and labor intensive. To establish a less onerous, yet accurate, method for determining emergent plant biomass than direct measurements, we collected vegetation data over a six-year period and modeled biomass using easily obtained variables: culm (stem) diameter, culm height and culm density. From 1998 through 2005, we collected emergent vegetation samples (Schoenoplectus californicus and Schoenoplectus acutus) at a constructed treatment wetland in San Jacinto, California during spring and fall. Various statistical models were run on the data to determine the strongest relationships. We found that the nonlinear relationship CB = β0(DH)^β1·10^ε, where CB was dry culm biomass (g m⁻²), DH was density of culms × average height of culms in a plot, and β0 and β1 were parameters to estimate, proved to be the best fit for predicting dried-live above-ground biomass of the two Schoenoplectus species. The random error distribution, ε, was either assumed to be normally distributed for mean regression estimates or assumed to be an unspecified continuous distribution for quantile regression estimates.
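A small sketch of fitting the mean-regression form of the reported relationship CB = β0(DH)^β1·10^ε on synthetic data; the values of β0, β1, and the noise level below are assumptions, not the study's estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
DH = rng.uniform(50, 500, 120)                               # culm density x average height
CB = 2.0 * DH ** 1.1 * 10 ** rng.normal(0, 0.05, DH.size)    # synthetic biomass, g m^-2

# Fitting log10(CB) = log10(b0) + b1*log10(DH) keeps the error additive, matching
# the multiplicative 10^eps error term in the model above.
slope, intercept = np.polyfit(np.log10(DH), np.log10(CB), 1)
print("log-linear fit:", 10 ** intercept, slope)

# Direct nonlinear least squares on the original scale, for comparison.
def power_model(dh, b0, b1):
    return b0 * dh ** b1

popt, _ = curve_fit(power_model, DH, CB, p0=[1.0, 1.0])
print("nonlinear fit: ", popt)
```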
Szczegielniak, Jan; Łuniewski, Jacek; Stanisławski, Rafał; Bogacz, Katarzyna; Krajczy, Marcin; Rydel, Marek
2018-01-01
Background The six-minute walk test (6MWT) is considered to be a simple and inexpensive tool for the assessment of functional tolerance of submaximal effort. The aim of this work was 1) to characterize the nonlinear nature of the energy expenditure process due to physical activity, 2) to compare the results/scores of the submaximal treadmill exercise test and those of the 6MWT in pulmonary patients and 3) to develop nonlinear mathematical models relating the two. Methods The study group included patients with COPD. All patients were subjected to a submaximal exercise test and a 6MWT. To develop an optimal mathematical solution and compare the results of the exercise test and the 6MWT, the least squares and genetic algorithms were employed to estimate parameters of polynomial expansion and piecewise linear models. Results Mathematical analysis enabled the construction of nonlinear models for estimating the MET result of the submaximal exercise test based on average walk velocity (or distance) in the 6MWT. Conclusions Submaximal effort tolerance in COPD patients can be effectively estimated from new, rehabilitation-oriented, nonlinear models based on the generalized MET concept and the 6MWT. PMID:29425213
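A minimal sketch of the least-squares side of such an approach: fitting a quadratic polynomial-expansion model of treadmill METs on 6MWT average walk velocity. The data and coefficients are synthetic and purely illustrative, not the study's model.

```python
import numpy as np

rng = np.random.default_rng(2)
velocity = rng.uniform(0.6, 1.8, 60)    # 6MWT average walk speed, m/s (synthetic)
mets = 1.5 + 2.0 * velocity + 1.2 * velocity ** 2 + rng.normal(0, 0.3, velocity.size)

# Design matrix for a quadratic polynomial expansion: columns [1, v, v^2].
X = np.vander(velocity, N=3, increasing=True)
coef, *_ = np.linalg.lstsq(X, mets, rcond=None)
print("estimated coefficients:", coef)

# Predicted METs for a patient walking 1.2 m/s on average during the 6MWT.
print("predicted METs at 1.2 m/s:", np.polyval(coef[::-1], 1.2))
```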
Syntax and reading comprehension: a meta-analysis of different spoken-syntax assessments.
Brimo, Danielle; Lund, Emily; Sapp, Alysha
2018-05-01
Syntax is a language skill purported to support children's reading comprehension. However, researchers who have examined whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments report inconsistent results. To determine if differences in how syntax is measured affect whether children with average and below-average reading comprehension score significantly differently on spoken-syntax assessments. Studies that included a group comparison design, children with average and below-average reading comprehension, and a spoken-syntax assessment were selected for review. Fourteen articles from a total of 1281 reviewed met the inclusion criteria. The 14 articles were coded for the age of the children, score on the reading comprehension assessment, type of spoken-syntax assessment, type of syntax construct measured and score on the spoken-syntax assessment. A random-effects model was used to analyze the difference between the effect sizes of the types of spoken-syntax assessments and the difference between the effect sizes of the syntax construct measured. There was a significant difference between children with average and below-average reading comprehension on spoken-syntax assessments. Those with average and below-average reading comprehension scored significantly differently on spoken-syntax assessments when norm-referenced and researcher-created assessments were compared. However, when the type of construct was compared, children with average and below-average reading comprehension scored significantly differently on assessments that measured knowledge of spoken syntax, but not on assessments that measured awareness of spoken syntax. The results of this meta-analysis confirmed that the type of spoken-syntax assessment, whether norm-referenced or researcher-created, did not explain why some researchers reported that there were no significant differences between children with average and below-average reading comprehension, but the syntax construct, awareness or knowledge, did. Thus, when selecting how to measure syntax among school-age children, researchers and practitioners should evaluate whether they are measuring children's awareness of spoken syntax or knowledge of spoken syntax. Other differences, such as participant diagnosis and the format of items on the spoken-syntax assessments, also were discussed as possible explanations for why researchers found that children with average and below-average reading comprehension did not score significantly differently on spoken-syntax assessments. © 2017 Royal College of Speech and Language Therapists.
Sobel, Michael E; Lindquist, Martin A
2014-07-01
Functional magnetic resonance imaging (fMRI) has facilitated major advances in understanding human brain function. Neuroscientists are interested in using fMRI to study the effects of external stimuli on brain activity and causal relationships among brain regions, but have not stated what is meant by causation or defined the effects they purport to estimate. Building on Rubin's causal model, we construct a framework for causal inference using blood oxygenation level dependent (BOLD) fMRI time series data. In the usual statistical literature on causal inference, potential outcomes, assumed to be measured without systematic error, are used to define unit and average causal effects. However, in general the potential BOLD responses are measured with stimulus dependent systematic error. Thus we define unit and average causal effects that are free of systematic error. In contrast to the usual case of a randomized experiment where adjustment for intermediate outcomes leads to biased estimates of treatment effects (Rosenbaum, 1984), here the failure to adjust for task dependent systematic error leads to biased estimates. We therefore adjust for systematic error using measured "noise covariates", fitting a linear mixed model to estimate the effects and the systematic error. Our results are important for neuroscientists, who typically do not adjust for systematic error. They should also prove useful to researchers in other areas where responses are measured with error and in fields where large amounts of data are collected on relatively few subjects. To illustrate our approach, we re-analyze data from a social evaluative threat task, comparing the findings with results that ignore systematic error.
Guest, Julian F; Jenssen, Trond; Houge, Gunnar; Aaseboe, Willy; Tøndel, Camilla; Svarstad, Einar
2010-12-01
The aim of this study was to estimate the resource implications and budget impact of managing adults with Fabry disease in Norway, from the perspective of the publicly funded healthcare system. A decision model was constructed using published clinical outcomes and clinician-derived resource utilization estimates. The model was used to estimate the annual healthcare cost of managing a cohort of 64 adult Fabry patients in an average year. The expected annual cost of managing 60 existing Fabry patients and four new patients in Norway each year was estimated to be NOK 55·8 million (€6·7 million). In an average year, patients receiving enzyme replacement therapy (ERT) with agalsidase alfa (Replagal(®)) at 0·2 mg kg⁻¹ or agalsidase beta (Fabrazyme(®)) at 1·0 mg kg⁻¹ are collectively expected to make 586 attendances to their family practitioner's office for their infusions, which equates to 128 eight-hour days associated with ERT. Encouraging more patients to undergo home-based infusions has substantial potential to free-up community-based resources. In comparison, the community-related benefit that can be obtained by switching from agalsidase beta (1·0 mg kg⁻¹) to agalsidase alpha (0·2 mg kg⁻¹) is marginal, and dependent on the two doses being clinically equivalent. Maximizing the proportion of adults with Fabry disease undergoing home-based infusions has the potential to release community-based resources for alternative use by non-Fabry patients, thereby improving the efficiency of the publicly funded healthcare system in Norway. © 2010 The Authors. European Journal of Clinical Investigation © 2010 Stichting European Society for Clinical Investigation Journal Foundation.
London, L.; Myers, J. E.
1998-01-01
RATIONALE: Job exposure matrices (JEMs) are widely used in occupational epidemiology, particularly when biological or environmental monitoring data are scanty. However, as with most exposure estimates, JEMs may be vulnerable to misclassification. OBJECTIVES: To estimate the long term exposure of farm workers based on a JEM developed for use in a study of the neurotoxic effects of organophosphates and to evaluate the repeatability and validity of the JEM. METHODS: A JEM was constructed with secondary data from industry and expert opinion of the estimate of agrichemical exposure within every possible job activity in the JEM to weight job days for exposure to organophosphates. Cumulative lifetime and average intensity exposure of organophosphate exposure were calculated for 163 pesticide applicators and 84 controls. Repeat questionnaires were given to 29 participants three months later to test repeatability of measurements. The ability of JEM based exposure to predict a known marker of organophosphate exposure was used to validate the JEM. RESULTS: Cumulative lifetime exposure, as measured in kg organophosphate exposure, was significantly associated with erythrocyte cholinesterase concentrations (partial r2 = 5%; p < 0.01), controlled for a range of confounders. Repeatability in a subsample of 29 workers of the estimates of cumulative (Pearson's r = 0.67; 95% confidence interval (95% CI) 0.41 to 0.83), and average lifetime intensity of exposure (Pearson's r = 0.60; 95% CI 0.31 to 0.79) was adequate. CONCLUSION: The JEM seems promising for farming settings, particularly in developing countries where data on chemical application and biological monitoring are unavailable. PMID:9624271
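A toy sketch of the JEM bookkeeping described above: job days weighted by an activity-specific intensity are summed into cumulative and average lifetime exposure. The activities, weights, and application rates below are invented for illustration, not values from the study.

```python
# (activity, days worked, organophosphate intensity weight, kg applied per day) - all hypothetical
job_history = [
    ("mixing/loading", 400, 1.0, 0.20),
    ("knapsack spraying", 900, 0.8, 0.10),
    ("field re-entry work", 2000, 0.1, 0.01),
]

cumulative_kg = sum(days * kg_per_day for _, days, _, kg_per_day in job_history)
weighted_days = sum(days * weight for _, days, weight, _ in job_history)
average_intensity = weighted_days / sum(days for _, days, _, _ in job_history)

print(f"cumulative lifetime exposure: {cumulative_kg:.1f} kg")
print(f"average lifetime intensity:   {average_intensity:.2f}")
```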
NASA Astrophysics Data System (ADS)
Lizurek, Grzegorz; Marmureanu, Alexandru; Wiszniowski, Jan
2017-03-01
Bucharest, with a population of approximately 2 million people, has suffered damage from earthquakes in the Vrancea seismic zone, which is located about 170 km from Bucharest, at a depth of 80-200 km. Consequently, an earthquake early warning system (Bucharest Rapid earthquake Early Warning System or BREWS) was constructed to provide some warning about impending shaking from large earthquakes in the Vrancea zone. In order to provide quick estimates of magnitude, seismic moment was first determined from P-waves and then a moment magnitude was determined from the moment. However, this magnitude may not be consistent with previous estimates of magnitude from the Romanian Seismic Network. This paper introduces the algorithm using P-wave spectral levels and compares them with catalog estimates. The testing procedure used waveforms from about 90 events with catalog magnitudes from 3.5 to 5.4. Corrections to the P-wave-determined magnitudes according to the dominant intermediate-depth event mechanism were tested for the November 22, 2014, M5.6 and October 17, M6 events. The corrections worked well, but unveiled an overestimation of the average magnitude result of about 0.2 magnitude units in the case of a shallow-depth event (H < 60 km). The P-wave spectral approach allows for relatively fast estimates of magnitude for use in BREWS. The average correction taking into account the most common focal mechanism for the radiation pattern coefficient may lead to overestimation of the magnitude for shallow events by about 0.2 magnitude units. However, in the case of events of intermediate depth of M6 the resulting Mw is underestimated by about 0.1-0.2. We conclude that our P-wave spectral approach is sufficiently robust for the needs of BREWS for both shallow and intermediate-depth events.
Twining, Brian V.; Bartholomay, Roy C.; Hodges, Mary K.V.
2014-01-01
In 2013, the U.S. Geological Survey, in cooperation with the U.S. Department of Energy, drilled and constructed boreholes USGS 140 and USGS 141 for stratigraphic framework analyses and long-term groundwater monitoring of the eastern Snake River Plain aquifer at the Idaho National Laboratory in southeast Idaho. Borehole USGS 140 initially was cored to collect continuous geologic data, and then re-drilled to complete construction as a monitor well. Borehole USGS 141 was drilled and constructed as a monitor well without coring. Boreholes USGS 140 and USGS 141 are separated by about 375 feet (ft) and have similar geologic layers and hydrologic characteristics based on geophysical and aquifer test data collected. The final construction for boreholes USGS 140 and USGS 141 required 6-inch (in.) diameter carbon-steel well casing and 5-in. diameter stainless-steel well screen; the screened monitoring interval was completed about 50 ft into the eastern Snake River Plain aquifer, between 496 and 546 ft below land surface (BLS) at both sites. Following construction and data collection, dedicated pumps and water-level access lines were placed to allow for aquifer testing, for collecting periodic water samples, and for measuring water levels. Borehole USGS 140 was cored continuously, starting from land surface to a depth of 543 ft BLS. Excluding surface sediment, recovery of basalt and sediment core at borehole USGS 140 was about 98 and 65 percent, respectively. Based on visual inspection of core and geophysical data, about 32 basalt flows and 4 sediment layers were collected from borehole USGS 140 between 34 and 543 ft BLS. Basalt texture for borehole USGS 140 generally was described as aphanitic, phaneritic, and porphyritic; rubble zones and flow mold structure also were described in recovered core material. Sediment layers, starting near 163 ft BLS, generally were composed of fine-grained sand and silt with a lesser amount of clay; however, between 223 and 228 ft BLS, silt with gravel was described. Basalt flows generally ranged in thickness from 3 to 76 ft (average of 14 ft) and varied from highly fractured to dense with high to low vesiculation. Geophysical and borehole video logs were collected during certain stages of the drilling and construction process at boreholes USGS 140 and USGS 141. Geophysical logs were examined synergistically with the core material for borehole USGS 140; additionally, geophysical data were examined to confirm geologic and hydrologic similarities between boreholes USGS 140 and USGS 141 because core was not collected for borehole USGS 141. Geophysical data suggest the occurrence of fractured and (or) vesiculated basalt, dense basalt, and sediment layering in both the saturated and unsaturated zones in borehole USGS 141. Omni-directional density measurements were used to assess the completeness of the grout annular seal behind 6-in. diameter well casing. Furthermore, gyroscopic deviation measurements were used to measure horizontal and vertical displacement at all depths in boreholes USGS 140 and USGS 141. Single-well aquifer tests were done following construction at wells USGS 140 and USGS 141 and data examined after the tests were used to provide estimates of specific-capacity, transmissivity, and hydraulic conductivity. The specific capacity, transmissivity, and hydraulic conductivity for well USGS 140 were estimated at 2,370 gallons per minute per foot [(gal/min)/ft], 4.06 × 10⁵ feet squared per day (ft²/d), and 740 feet per day (ft/d), respectively.
The specific capacity, transmissivity, and hydraulic conductivity for well USGS 141 were estimated at 470 (gal/min)/ft, 5.95 × 10⁴ ft²/d, and 110 ft/d, respectively. Measured flow rates remained relatively constant in well USGS 140 with averages of 23.9 and 23.7 gal/min during the first and second aquifer tests, respectively, and in well USGS 141 with an average of 23.4 gal/min. Water samples were analyzed for cations, anions, metals, nutrients, volatile organic compounds, stable isotopes, and radionuclides. Water samples from both wells indicated that concentrations of tritium, sulfate, and chromium were affected by wastewater disposal practices at the Advanced Test Reactor Complex. Most constituents in water from wells USGS 140 and USGS 141 had concentrations similar to concentrations in well USGS 136, which is upgradient from wells USGS 140 and USGS 141.
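As a back-of-the-envelope sketch of the single-well quantities reported above (with hypothetical numbers, not the USGS test data): specific capacity is the pumping rate divided by drawdown, and hydraulic conductivity follows from transmissivity divided by an effective aquifer thickness.

```python
# Hypothetical single-well aquifer-test arithmetic (values chosen only for illustration).
pumping_rate_gpm = 23.9          # gal/min, similar magnitude to the tests described above
drawdown_ft = 0.01               # ft, hypothetical
specific_capacity = pumping_rate_gpm / drawdown_ft             # (gal/min)/ft

transmissivity_ft2_d = 4.0e5     # ft^2/d, hypothetical
aquifer_thickness_ft = 540.0     # ft, hypothetical effective thickness
hydraulic_conductivity = transmissivity_ft2_d / aquifer_thickness_ft   # K = T / b, ft/d

print(f"specific capacity:      {specific_capacity:,.0f} (gal/min)/ft")
print(f"hydraulic conductivity: {hydraulic_conductivity:.0f} ft/d")
```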
Scheduling on the basis of the research of dependences among the construction process parameters
NASA Astrophysics Data System (ADS)
Romanovich, Marina; Ermakov, Alexander; Mukhamedzhanova, Olga
2017-10-01
The dependences among the construction process parameters are investigated in the article: the average integrated qualification value of the shift, the number of workers per shift, and the average daily amount of completed work are considered on the basis of correlation coefficients. Basic data for the research of dependences among the above-stated parameters were collected during the construction of two standard objects A and B (monolithic houses) over four months of construction (October, November, December, January). The Cobb-Douglas production function yielded correlation coefficients close to 1; the function is simple to use and is well suited to describing the considered dependences. A development function describing the relationship among the considered parameters of the construction process is derived. The development function makes it possible to select the optimal quantitative and qualitative (qualification) composition of the crew for work during the next period of time, according to a preset amount of work. A function of the optimized amounts of work, which reflects the interrelation of the key parameters of the construction process, is also developed. Values of the function of the optimized amounts of work should be used as the average standard for scheduling the peak periods of construction.
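A minimal sketch of fitting a Cobb-Douglas relation of the kind described, output = A·qualification^a·crew_size^b, by log-linear least squares; the synthetic data, coefficients, and planning example are illustrative assumptions, not the site measurements.

```python
import numpy as np

rng = np.random.default_rng(3)
qualification = rng.uniform(2.0, 5.0, 120)             # average crew qualification grade
crew_size = rng.integers(4, 16, 120).astype(float)     # workers per shift
output = 1.8 * qualification ** 0.6 * crew_size ** 0.8 * np.exp(rng.normal(0, 0.05, 120))

# Cobb-Douglas is linear in logs: ln(output) = ln(A) + a*ln(q) + b*ln(n).
X = np.column_stack([np.ones(120), np.log(qualification), np.log(crew_size)])
coef, *_ = np.linalg.lstsq(X, np.log(output), rcond=None)
A, a, b = np.exp(coef[0]), coef[1], coef[2]
print(f"A={A:.2f}, a={a:.2f}, b={b:.2f}")

# Planning use: crew size needed for a target daily output at a given qualification level.
target, q = 40.0, 3.5
needed_crew = (target / (A * q ** a)) ** (1.0 / b)
print(f"crew size for {target} units/day at qualification {q}: {needed_crew:.1f}")
```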
Wang, Wei; Griswold, Michael E
2016-11-30
The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
Estimation of average annual streamflows and power potentials for Alaska and Hawaii
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verdin, Kristine L.
2004-05-01
This paper describes the work done to develop average annual streamflow estimates and power potential for the states of Alaska and Hawaii. The Elevation Derivatives for National Applications (EDNA) database was used, along with climatic datasets, to develop flow and power estimates for every stream reach in the EDNA database. Estimates of average annual streamflows were derived using state-specific regression equations, which were functions of average annual precipitation, precipitation intensity, drainage area, and other elevation-derived parameters. Power potential was calculated through the use of the average annual streamflow and the hydraulic head of each reach, which is calculated from the EDNA digital elevation model. In all, estimates of streamflow and power potential were calculated for over 170,000 stream segments in the Alaskan and Hawaiian datasets.
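A small sketch of the reach-level power calculation: hydraulic power from average annual streamflow and hydraulic head, P = ρ·g·Q·H. The flow, head, and efficiency values are hypothetical, and the unit conversions are included only for convenience.

```python
RHO = 1000.0      # density of water, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2

def reach_power_kw(flow_m3s: float, head_m: float, efficiency: float = 1.0) -> float:
    """Gross (efficiency=1) or net hydraulic power of a stream reach, in kW."""
    return RHO * G * flow_m3s * head_m * efficiency / 1000.0

CFS_TO_M3S = 0.0283168
FT_TO_M = 0.3048

# Hypothetical reach: 350 cfs average annual flow over 40 ft of hydraulic head.
print(f"{reach_power_kw(350 * CFS_TO_M3S, 40 * FT_TO_M):.0f} kW")
```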
Robust estimation for class averaging in cryo-EM Single Particle Reconstruction.
Huang, Chenxi; Tagare, Hemant D
2014-01-01
Single Particle Reconstruction (SPR) for Cryogenic Electron Microscopy (cryo-EM) aligns and averages the images extracted from micrographs to improve the Signal-to-Noise ratio (SNR). Outliers compromise the fidelity of the averaging. We propose a robust cross-correlation-like w-estimator for combating the effect of outliers on the average images in cryo-EM. The estimator accounts for the natural variation of signal contrast among the images and eliminates the need for a threshold for outlier rejection. We show that the influence function of our estimator is asymptotically bounded. Evaluations of the estimator on simulated and real cryo-EM images show good performance in the presence of outliers.
Veas, Alejandro; Gilar, Raquel; Miñano, Pablo; Castejón, Juan-Luis
2016-01-01
There are very few studies in Spain that treat underachievement rigorously, and those that do are typically related to gifted students. The present study examined the proportion of underachieving students using the Rasch measurement model. A sample of 643 first-year high school students (mean age = 12.09; SD = 0.47) from 8 schools in the province of Alicante (Spain) completed the Battery of Differential and General Skills (Badyg), and these students' General Points Average (GPAs) were recovered by teachers. Dichotomous and Partial credit Rasch models were performed. After adjusting the measurement instruments, the individual underachievement index provided a total sample of 181 underachieving students, or 28.14% of the total sample across the ability levels. This study confirms that the Rasch measurement model can accurately estimate the construct validity of both the intelligence test and the academic grades for the calculation of underachieving students. Furthermore, the present study constitutes a pioneer framework for the estimation of the prevalence of underachievement in Spain. PMID:26973586
Clément, D; Lanaud, C; Sabau, X; Fouet, O; Le Cunff, L; Ruiz, E; Risterucci, A M; Glaszmann, J C; Piffanelli, P
2004-05-01
We have constructed and validated the first cocoa (Theobroma cacao L.) BAC library, with the aim of developing molecular resources to study the structure and evolution of the genome of this perennial crop. This library contains 36,864 clones with an average insert size of 120 kb, representing approximately ten haploid genome equivalents. It was constructed from the genotype Scavina-6 (Sca-6), a Forastero clone highly resistant to cocoa pathogens and a parent of existing mapping populations. Validation of the BAC library was carried out with a set of 13 genetically-anchored single copy and one duplicated markers. An average of nine BAC clones per probe was identified, giving an initial experimental estimation of the genome coverage represented in the library. Screening of the library with a set of resistance gene analogues (RGAs), previously mapped in cocoa and co-localizing with QTL for resistance to Phytophthora traits, confirmed at the physical level the tight clustering of RGAs in the cocoa genome and provided the first insights into the relationships between genetic and physical distances in the cocoa genome. This library represents an available BAC resource for structural genomic studies or map-based cloning of genes corresponding to important QTLs for agronomic traits such as resistance genes to major cocoa pathogens like Phytophthora spp. (palmivora and megakarya), Crinipellis perniciosa and Moniliophthora roreri.
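The library statistics quoted above imply the coverage arithmetic sketched below; the haploid genome size used here is an assumed round figure, not a value taken from the paper.

```python
clones = 36_864
avg_insert_kb = 120
genome_size_kb = 440_000          # assumed haploid cocoa genome size, roughly 440 Mb

coverage = clones * avg_insert_kb / genome_size_kb
print(f"library depth: {coverage:.1f} haploid genome equivalents")

# For a single-copy probe, the expected number of positive BAC clones equals the
# coverage, so the observed ~9 hits per probe is an empirical estimate of the same quantity.
print(f"expected clones per single-copy probe: {coverage:.1f}")
```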
Gravity anomaly and geoid undulation results in local areas from GEOS-3 altimeter data
NASA Technical Reports Server (NTRS)
Rapp, R. H.
1979-01-01
The adjusted GEOS-3 altimeter data, taken as averages within a data frame, have been used to construct free air anomaly and geoid undulation profiles and maps in areas of geophysical interest. Profiles were constructed across the Philippine Trench (at a latitude of 6 deg) and across the Bonin Trench (at a latitude of 28 deg). In the latter case an anomaly variation of 443 mgals in 143 km was derived from the altimeter data. These variations agreed reasonably with terrestrial estimates, considering the predicted point accuracy was about + or - 27 mgals. An area over the Patton Seamounts was also investigated, with the altimeter anomaly field agreeing well with the terrestrial data except for the point directly over the top of the seamount. It is concluded that the GEOS-3 altimeter data is valuable not only for determining 5 deg and 1 deg x 1 deg mean anomalies, but also can be used to describe more local anomaly variations.
Kayen, Robert E.; Barnhardt, Walter A.; Ashford, Scott; Rollins, Kyle
2000-01-01
A ground penetrating radar (GPR) experiment at the Treasure Island Test Site [TILT] was performed to non-destructively image the soil column for changes in density prior to, and following, a liquefaction event. The intervening liquefaction was achieved by controlled blasting. A geotechnical borehole radar technique was used to acquire high-resolution 2-D radar velocity data. This method of non-destructive site characterization uses radar trans-illumination surveys through the soil column and tomographic data manipulation techniques to construct radar velocity tomograms, from which averaged void ratios can be derived at 0.25 - 0.5m pixel footprints. Tomograms of void ratio were constructed through the relation between soil porosity and dielectric constant. Both pre- and post-blast tomograms were collected and indicate that liquefaction related densification occurred at the site. Volumetric strains estimated from the tomograms correlate well with the observed settlement at the site. The 2-D imagery of void ratio can serve as high-resolution data layers for numerical site response analysis.
Estimating diesel fuel consumption and carbon dioxide emissions from forest road construction
Dan Loeffler; Greg Jones; Nikolaus Vonessen; Sean Healey; Woodam Chung
2009-01-01
Forest access road construction is a necessary component of many on-the-ground forest vegetation treatment projects. However, the fuel energy requirements and associated carbon dioxide emissions from forest road construction are unknown. We present a method for estimating diesel fuel consumed and related carbon dioxide emissions from constructing forest roads using...
Passive quantum error correction of linear optics networks through error averaging
NASA Astrophysics Data System (ADS)
Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.
2018-02-01
We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, related work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
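A minimal sketch of scalar-weighted quaternion averaging in the spirit of this Note: the average is taken as the dominant eigenvector of the weighted outer-product matrix, which is insensitive to the q versus −q sign ambiguity. The example attitudes and weights are arbitrary.

```python
import numpy as np

def average_quaternion(quats: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """quats: (n, 4) array of unit quaternions; weights: (n,) positive scalars."""
    M = np.zeros((4, 4))
    for q, w in zip(quats, weights):
        M += w * np.outer(q, q)                # accumulate weighted outer products
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, np.argmax(eigvals)]      # unit eigenvector; overall sign is arbitrary

# Two nearby attitudes reported with opposite quaternion signs still average sensibly.
q1 = np.array([0.0, 0.0, np.sin(0.10), np.cos(0.10)])
q2 = -np.array([0.0, 0.0, np.sin(0.12), np.cos(0.12)])
print(average_quaternion(np.vstack([q1, q2]), np.array([1.0, 1.0])))
```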
Southwell, Colin; Emmerson, Louise; Newbery, Kym; McKinlay, John; Kerry, Knowles; Woehler, Eric; Ensor, Paul
2015-01-01
Seabirds and other land-breeding marine predators are considered to be useful and practical indicators of the state of marine ecosystems because of their dependence on marine prey and the accessibility of their populations at breeding colonies. Historical counts of breeding populations of these higher-order marine predators are one of few data sources available for inferring past change in marine ecosystems. However, historical abundance estimates derived from these population counts may be subject to unrecognised bias and uncertainty because of variable attendance of birds at breeding colonies and variable timing of past population surveys. We retrospectively accounted for detection bias in historical abundance estimates of the colonial, land-breeding Adélie penguin through an analysis of 222 historical abundance estimates from 81 breeding sites in east Antarctica. The published abundance estimates were de-constructed to retrieve the raw count data and then re-constructed by applying contemporary adjustment factors obtained from remotely operating time-lapse cameras. The re-construction process incorporated spatial and temporal variation in phenology and attendance by using data from cameras deployed at multiple sites over multiple years and propagating this uncertainty through to the final revised abundance estimates. Our re-constructed abundance estimates were consistently higher and more uncertain than published estimates. The re-constructed estimates alter the conclusions reached for some sites in east Antarctica in recent assessments of long-term Adélie penguin population change. Our approach is applicable to abundance data for a wide range of colonial, land-breeding marine species including other penguin species, flying seabirds and marine mammals.
A Method for the Estimation of p-Mode Parameters from Averaged Solar Oscillation Power Spectra
NASA Astrophysics Data System (ADS)
Reiter, J.; Rhodes, E. J., Jr.; Kosovichev, A. G.; Schou, J.; Scherrer, P. H.; Larson, T. P.
2015-04-01
A new fitting methodology is presented that is equally well suited for the estimation of low-, medium-, and high-degree mode parameters from m-averaged solar oscillation power spectra of widely differing spectral resolution. This method, which we call the “Windowed, MuLTiple-Peak, averaged-spectrum” or WMLTP Method, constructs a theoretical profile by convolving the weighted sum of the profiles of the modes appearing in the fitting box with the power spectrum of the window function of the observing run, using weights from a leakage matrix that takes into account observational and physical effects, such as the distortion of modes by solar latitudinal differential rotation. We demonstrate that the WMLTP Method makes substantial improvements in the inferences of the properties of the solar oscillations in comparison with a previous method, which employed a single profile to represent each spectral peak. We also present an inversion for the internal solar structure, which is based upon 6366 modes that we computed using the WMLTP method on the 66 day 2010 Solar and Heliospheric Observatory/MDI Dynamics Run. To improve both the numerical stability and reliability of the inversion, we developed a new procedure for the identification and correction of outliers in a frequency dataset. We present evidence for a pronounced departure of the sound speed in the outer half of the solar convection zone and in the subsurface shear layer from the radial sound speed profile contained in Model S of Christensen-Dalsgaard and his collaborators that existed in the rising phase of Solar Cycle 24 during mid-2010.
NASA Astrophysics Data System (ADS)
Li, Jianyong; Dodson, John; Yan, Hong; Cheng, Bo; Zhang, Xiaojian; Xu, Qinghai; Ni, Jian; Lu, Fengyan
2017-05-01
Quantitative information regarding the long-term variability of precipitation and vegetation during the period covering both the Late Glacial and the Holocene on the Qinghai-Tibetan Plateau (QTP) is scarce. Herein, we provide new and numerical reconstructions for annual mean precipitation (PANN) and vegetation history over the last 18,000 years using high-resolution pollen data from Lakes Dalianhai and Qinghai on the northeastern QTP. To this end, five calibration techniques including weighted averaging, weighted average-partial least squares regression, modern analogue technique, locally weighted weighted averaging regression, and maximum likelihood were first employed to construct robust inference models and to produce reliable PANN estimates on the QTP. The biomization method was applied for reconstructing the vegetation dynamics. The study area was dominated by steppe and characterized by a highly variable, relatively dry climate at 18,000-11,000 cal years B.P. PANN increased since the early Holocene, obtained a maximum at 8000-3000 cal years B.P. with coniferous-temperate mixed forest as the dominant biome, and thereafter declined to present. The PANN reconstructions are broadly consistent with other proxy-based paleoclimatic records from the northeastern QTP and the northern region of monsoonal China. The possible mechanisms behind the precipitation changes may be tentatively attributed to the internal feedback processes of higher latitude (e.g., North Atlantic) and lower latitude (e.g., subtropical monsoon) competing climatic regimes, which are primarily modulated by solar energy output as the external driving force. These findings may provide important insights into understanding the future Asian precipitation dynamics under the projected global warming.
Sharif Nia, Hamid; Pahlevan Sharif, Saeed; Koocher, Gerald P; Yaghoobzadeh, Ameneh; Haghdoost, Ali Akbar; Mar Win, Ma Thin; Soleimani, Mohammad Ali
2017-01-01
This study aimed to evaluate the validity and reliability of the Persian version of the Death Anxiety Scale-Extended (DAS-E). A total of 507 patients with end-stage renal disease completed the DAS-E. The factor structure of the scale was evaluated using exploratory factor analysis with an oblique rotation and confirmatory factor analysis. The content and construct validity of the DAS-E were assessed. Average variance extracted, maximum shared squared variance, and average shared squared variance were estimated to assess discriminant and convergent validity. Reliability was assessed using Cronbach's alpha coefficient (α = .839 and .831), composite reliability (CR = .845 and .832), Theta (θ = .893 and .867), and McDonald Omega (Ω = .796 and .743). The analysis indicated a two-factor solution. Reliability and discriminant validity of the factors were established. Findings revealed that the present scale is a valid and reliable instrument that can be used in the assessment of death anxiety in Iranian patients with end-stage renal disease.
A Semantics-Based Approach to Construction Cost Estimating
ERIC Educational Resources Information Center
Niknam, Mehrdad
2015-01-01
A construction project requires collaboration of different organizations such as owner, designer, contractor, and resource suppliers. These organizations need to exchange information to improve their teamwork. Understanding the information created in other organizations requires specialized human resources. Construction cost estimating is one of…
Medical Spending and the Health of the Elderly
Hadley, Jack; Waidmann, Timothy; Zuckerman, Stephen; Berenson, Robert A
2011-01-01
Objective To estimate the relationship between variations in medical spending and health outcomes of the elderly. Data Sources 1992–2002 Medicare Current Beneficiary Surveys. Study Design We used instrumental variable (IV) estimation to identify the relationships between alternative measures of elderly Medicare beneficiaries' medical spending over a 3-year observation period and health status, measured by the Health and Activity Limitation Index (HALex) and survival status at the end of the 3 years. We used the Dartmouth Atlas End-of-Life Expenditure Index defined for hospital referral regions in 1996 as the exogenous identifying variable to construct the IVs for medical spending. Data Collection/Extraction Methods The analysis sample includes 17,438 elderly (age >64) beneficiaries who entered the Medicare Current Beneficiary Survey in the fall of each year from 1991 to 1999, were not institutionalized at baseline, stayed in fee-for-service Medicare for the entire observation period, and survived for at least 2 years. Measures of baseline health were constructed from information obtained in the fall of the year the person entered the survey, and changes in health were from subsequent interviews over the entire observation period. Medicare and total medical spending were constructed from Medicare claims and self-reports of other spending over the entire observation period. Principal Findings IV estimation results in a positive and statistically significant relationship between medical spending and better health: 10 percent greater medical spending over the prior 3 years (mean = U.S.$2,709) is associated with a 1.9 percent larger HALex value (p = .045; range 1.2–2.2 percent depending on medical spending measure) and a 1.5 percent greater survival probability (p = .039; range 1.2–1.7 percent). Conclusions On average, greater medical spending is associated with better health status of Medicare beneficiaries, implying that across-the-board reductions in Medicare spending may result in poorer health for some beneficiaries. PMID:21609331
NASA Astrophysics Data System (ADS)
Tessler, Z. D.; Vorosmarty, C. J.; Overeem, I.; Syvitski, J. P.
2017-12-01
Modern deltas are dependent on human-mediated freshwater and sediment fluxes. Changes to these fluxes impact delta biogeophysical functioning, and affect the long-term sustainability of these landscapes for both human and natural systems. Here we present contemporary estimates of long-term mean sediment balance and relative sea-level rise across 46 global deltas. We model ongoing development and scenarios of future water resource management and hydropower infrastructure in upstream river basins to explore how changing sediment fluxes impact relative sea-level in coastal delta systems. Model results show that contemporary sediment fluxes, anthropogenic drivers of land subsidence, and sea-level rise result in relative sea-level rise rates in deltas that average 6.8 mm/year. Currently planned or under-construction dams can be expected to increase rates of relative sea-level rise on the order of 1 mm/year. Some deltas systems, including the Magdalena, Orinoco, and Indus, are highly sensitive to future impoundment of river basins, with RSLR rates increasing up to 4 mm/year in a high-hydropower-utilization scenario. Sediment fluxes may be reduced by up to 60% in the Danube and 21% in the Ganges-Brahmaputra-Megnha if all currently planned dams are constructed. Reduced sediment retention on deltas due to increased river channelization and local flood controls increases RSLR on average by nearly 2 mm/year. Long-term delta sustainability requires a more complete understanding of how geophysical and anthropogenic change impact delta geomorphology. Strategies for sustainable delta management that focus on local and regional drivers of change, especially groundwater and hydrocarbon extraction and upstream dam construction, can be highly impactful even in the context of global climate-induced sea-level rise.
Pei, Shiling; van de Lindt, John W.; Hartzell, Stephen; Luco, Nicolas
2014-01-01
Earthquake damage to light-frame wood buildings is a major concern for North America because of the volume of this construction type. In order to estimate wood building damage using synthetic ground motions, we need to verify the ability of synthetically generated ground motions to simulate realistic damage for this structure type. Through a calibrated damage potential indicator, four different synthetic ground motion models are compared with the historically recorded ground motions at corresponding sites. We conclude that damage for sites farther from the fault (>20 km) is under-predicted on average and damage at closer sites is sometimes over-predicted.
Proof of concept and dose estimation with binary responses under model uncertainty.
Klingenberg, B
2009-01-30
This article suggests a unified framework for testing Proof of Concept (PoC) and estimating a target dose for the benefit of a more comprehensive, robust and powerful analysis in phase II or similar clinical trials. From a pre-specified set of candidate models, we choose the ones that best describe the observed dose-response. To decide which models, if any, significantly pick up a dose effect, we construct the permutation distribution of the minimum P-value over the candidate set. This allows us to find critical values and multiplicity adjusted P-values that control the familywise error rate of declaring any spurious effect in the candidate set as significant. Model averaging is then used to estimate a target dose. Popular single or multiple contrast tests for PoC, such as the Cochran-Armitage, Dunnett or Williams tests, are only optimal for specific dose-response shapes and do not provide target dose estimates with confidence limits. A thorough evaluation and comparison of our approach to these tests reveal that its power is as good or better in detecting a dose-response under various shapes with many more additional benefits: It incorporates model uncertainty in PoC decisions and target dose estimation, yields confidence intervals for target dose estimates and extends to more complicated data structures. We illustrate our method with the analysis of a Phase II clinical trial. Copyright (c) 2008 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Vitásek, Stanislav; Matějka, Petr
2017-09-01
The article deals with problematic parts of automated processing of quantity takeoff (QTO) from data generated in BIM model. It focuses on models of road constructions, and uses volumes and dimensions of excavation work to create an estimate of construction costs. The article uses a case study and explorative methods to discuss possibilities and problems of data transfer from a model to a price system of construction production when such transfer is used for price estimates of construction works. Current QTOs and price tenders are made with 2D documents. This process is becoming obsolete because more modern tools can be used. The BIM phenomenon enables partial automation in processing volumes and dimensions of construction units and matching the data to units in a given price scheme. Therefore price of construction can be estimated and structured without lengthy and often imprecise manual calculations. The use of BIM for QTO is highly dependent on local market budgeting systems, therefore proper push/pull strategy is required. It also requires proper requirements specification, compatible pricing database and software.
Tidal-flow, circulation, and flushing changes caused by dredge and fill in Hillsborough Bay, Florida
Goodwin, Carl R.
1991-01-01
Hillsborough Bay, Florida, underwent extensive physical changes between 1880 and 1972 because of the construction of islands, channels, and shoreline fills. These changes resulted in a progressive reduction in the quantity of tidal water that enters and leaves the bay. Dredging and filling also changed the magnitude and direction of tidal flow in most of the bay. A two-dimensional, finite-difference hydrodynamic model was used to simulate flood, ebb, and residual water transport for physical conditions in Hillsborough Bay and the northeastern part of Middle Tampa Bay during 1880, 1972, and 1985. The calibrated and verified model was used to evaluate cumulative water-transport changes resulting from construction in the study area between 1880 and 1972. The model also was used to evaluate water-transport changes as a result of a major Federal dredging project completed in 1985. The model indicates that transport changes resulting from the Federal dredging project are much less areally extensive than the corresponding transport changes resulting from construction between 1880 and 1972. Dredging-caused changes of more than 50 percent in flood and ebb water transport were computed to occur over only about 8 square miles of the 65-square-mile study area between 1972 and 1985. Model results indicate that construction between 1880 and 1972 caused changes of similar magnitude over about 23 square miles. Dredging-caused changes of more than 50 percent in residual water transport were computed to occur over only 17 square miles between 1972 and 1985. Between 1880 and 1972, changes of similar magnitude were computed to occur over an area of 45 square miles. Model results also reveal historical tide-induced circulation patterns. The patterns consist of a series of about 8 interconnected circulatory features in 1880 and as many as 15 in 1985. Dredging- and construction-caused changes in number, size, position, shape, and intensity of the circulatory features increase tide-induced circulation throughout the bay. Circulation patterns for 1880, 1972, and 1985 levels of development differ in many details, but all exhibit residual landward flow of water in the deep, central part of the bay and residual seaward flow in the shallows along the bay margins. This general residual flow pattern is confirmed by both computed transport of a hypothetical constituent and long-term salinity observations in Hillsborough Bay. The concept has been used to estimate the average time it takes a particle to move from the head to the mouth of the bay. The mean transit time was computed to be 58 days in 1880 and 29 days in 1972 and 1985. This increase in circulation and decrease in transit time since 1880 is estimated to have caused an increase in average salinity of Hillsborough Bay of about 2 parts per thousand. Dredge and fill construction is concluded to have significantly increased circulation and flushing between 1880 and 1972. Little circulation or flushing change is attributed to dredging activity since 1972.
[Challenges in building a surgical obesity center].
Fischer, L; El Zein, Z; Bruckner, T; Hünnemeyer, K; Rudofsky, G; Reichenberger, M; Schommer, K; Gutt, C N; Büchler, M W; Müller-Stich, B P
2014-04-01
It is estimated that approximately 1 million adults in Germany suffer from grade III obesity. The aim of this article is to describe the challenges faced when constructing an operative obesity center. The inflow of patients as well as the personnel and infrastructure of the interdisciplinary Diabetes and Obesity Center in Heidelberg were analyzed. The distribution of continuous data was described by mean values and standard deviation and analyzed using variance analysis. The interdisciplinary Diabetes and Obesity Center in Heidelberg was founded in 2006 and offers conservative therapeutic treatment and all currently available operative procedures. For every operative intervention carried out, an average of 1.7 expert reports and 0.3 counter-expertises were necessary. The time period from the initial presentation of patients in the department of surgery to an operation was on average 12.8 months (standard deviation SD ± 4.5 months). The 47 patients for whom remuneration for treatment was initially refused had an average body mass index (BMI) of 49.2 kg/m² and of these 39 had at least one comorbidity requiring treatment. Of the 45 patients for whom the reason for the refusal of treatment costs was given as a lack of conservative treatment, 30 had undertaken a medically supervised attempt at losing weight over at least 6 months. Additionally, 19 of these patients could document participation in a course at a rehabilitation center, a Xenical® or Reduktil® therapy, or had undertaken the Optifast® program. For the 20 patients who supposedly lacked a psychosomatic evaluation, an adequate psychosomatic evaluation had in fact been carried out in all cases. The establishment of an operative obesity center can take several years. An essential prerequisite for success seems to be constructive and targeted cooperation with the health insurance companies.
Waterfowl nesting on small man-made islands in prairie wetlands
Johnson, R.F.; Woodward, R.O.; Kirsch, L.M.
1978-01-01
Small islands constructed in prairie wetlands were attractive nesting sites for mallards (Anas platyrhynchos) and Canada geese (Branta canadensis). Nest densities of mallards on islands averaged 135 per ha compared to 0.03 per ha on adjacent upland habitats. Construction time averaged 2 hours per island and cost $50. No maintenance was required during the first 10 years.
36 CFR 223.82 - Contents of advertisement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... sale which includes specified road construction with total estimated construction costs of $50,000 or more, the advertisement shall also include: (1) The total estimated construction cost of the permanent roads. (2) A statement extending to small business concerns qualified for preferential bidding on timber...
NASA Technical Reports Server (NTRS)
Curtis, Scott; Huffman, George; Nelkin, Eric
1999-01-01
Satellite estimates and gauge observations of precipitation are useful in understanding the water cycle, analyzing climatic variability, and validating climate models. The Global Precipitation Climatology Project (GPCP) released a community merged precipitation data set for the period July 1987 through the present, and has recently extended that data set back to 1986. One objective of this study is to use GPCP estimates to describe and quantify the seasonal variation of precipitation, with emphasis on the Asian summer monsoon. Another focus is the 1997-98 El Nino Southern Oscillation (ENSO) and associated extreme precipitation events. The summer monsoon tends to be drier than normal in El Nino years. This was not observed for 1997 or 1998, while for 1997 the NCEP model produced the largest summer rain rates over India in years. This inconsistency will be examined. The average annual global precipitation rate is 2.7 mm/day as estimated by GPCP, which is similar to values computed from long-term climatologies. From 30 deg N to 30 deg S the average precipitation rate is 2.7 mm/day over land with a maximum in the annual cycle occurring in February-March, when the Amazon basin receives abundant rainfall. The average precipitation rate is 3.1 mm/day over the tropical oceans, with a peak earlier in the season (November-December), corresponding with the transition from a strong Pacific Intertropical Convergence Zone (ITCZ) from June to November to a strong South Pacific Convergence Zone (SPCZ) from December to March. In the seasonal evolution, the Asian summer monsoon stands out with rains in excess of 15 mm/day off the coast of Burma in June. The GPROF pentad data also capture the onset of the tropical Pacific rainfall patterns associated with the 1997-98 ENSO. From February to October 1997 at least four rain-producing systems traveled from west to east in the equatorial corridor. A rapid transition from El Nino to La Nina conditions occurred in May-June 1998. GPCP and GPROF were used to construct precipitation-based ENSO indices to monitor El Ninos (EL) and La Ninas (LI).
2SLS versus 2SRI: Appropriate methods for rare outcomes and/or rare exposures.
Basu, Anirban; Coe, Norma B; Chapman, Cole G
2018-06-01
This study used Monte Carlo simulations to examine the ability of the two-stage least squares (2SLS) estimator and two-stage residual inclusion (2SRI) estimators with varying forms of residuals to estimate the local average and population average treatment effect parameters in models with binary outcome, endogenous binary treatment, and single binary instrument. The rarity of the outcome and the treatment was varied across simulation scenarios. Results showed that 2SLS generated consistent estimates of the local average treatment effects (LATE) and biased estimates of the average treatment effects (ATE) across all scenarios. 2SRI approaches, in general, produced biased estimates of both LATE and ATE under all scenarios. 2SRI using generalized residuals minimized the bias in ATE estimates. Use of 2SLS and 2SRI is illustrated in an empirical application estimating the effects of long-term care insurance on a variety of binary health care utilization outcomes among the near-elderly using the Health and Retirement Study. Copyright © 2018 John Wiley & Sons, Ltd.
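A toy simulation in the spirit of the comparison above (not the paper's data-generating process or the HRS analysis): a linear-probability 2SLS estimate alongside a 2SRI estimate that keeps the observed treatment, adds the first-stage residual to a logistic second stage, and averages predicted probabilities to obtain an ATE-style quantity. All parameter values are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 20_000
z = rng.binomial(1, 0.5, n)                                      # binary instrument
u = rng.normal(0, 1, n)                                          # unobserved confounder
d = (0.5 * z + u + rng.normal(0, 1, n) > 0.8).astype(float)      # endogenous binary treatment
y = (0.6 * d + u + rng.normal(0, 1, n) > 1.0).astype(float)      # binary outcome

def ols(X, t):
    return np.linalg.lstsq(X, t, rcond=None)[0]

# First stage: treatment on the instrument (linear probability form).
X1 = np.column_stack([np.ones(n), z])
d_hat = X1 @ ols(X1, d)
resid = d - d_hat

# 2SLS: second stage on the first-stage fitted values (risk-difference scale).
beta_2sls = ols(np.column_stack([np.ones(n), d_hat]), y)[1]

# 2SRI: logit of y on d and the residual, then average p(d=1) - p(d=0) over the sample.
logit = LogisticRegression(C=1e6, max_iter=1000).fit(np.column_stack([d, resid]), y)
p1 = logit.predict_proba(np.column_stack([np.ones(n), resid]))[:, 1]
p0 = logit.predict_proba(np.column_stack([np.zeros(n), resid]))[:, 1]

print(f"2SLS estimate:     {beta_2sls:.3f}")
print(f"2SRI ATE estimate: {np.mean(p1 - p0):.3f}")
```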
Pedicle screw versus hybrid posterior instrumentation for dystrophic neurofibromatosis scoliosis.
Wang, Jr-Yi; Lai, Po-Liang; Chen, Wen-Jer; Niu, Chi-Chien; Tsai, Tsung-Ting; Chen, Lih-Huei
2017-06-01
Surgical management of severe rigid dystrophic neurofibromatosis (NF) scoliosis is technically demanding and produces varying results. In the current study, we reviewed 9 patients who were treated with combined anterior and posterior fusion using different types of instrumentation (i.e., pedicle screw, hybrid, and all-hook constructs) at our institute. Between September 2001 and July 2010 at our institute, 9 patients received anterior release/fusion and posterior fusion with different types of instrumentation, including a pedicle screw construct (n = 5), a hybrid construct (n = 3), and an all-hook construct (n = 1). We compared the pedicle screw group with the hybrid group to analyze differences in preoperative curve angle, immediate postoperative curve reduction, and latest follow-up curve angle. The mean follow-up period was 9.5 ± 2.9 years. The average age at surgery was 10.3 ± 3.9 years. The average preoperative scoliosis curve was 61.3 ± 13.8°, and the average preoperative kyphosis curve was 39.8 ± 19.7°. The average postoperative scoliosis and kyphosis curves were 29.7 ± 10.7° and 21.0 ± 13.5°, respectively. The most recent follow-up scoliosis and kyphosis curves were 43.4 ± 17.3° and 29.4 ± 18.9°, respectively. There was no significant difference in the correction angle (either coronal or sagittal), and there was no significant difference in the loss of sagittal correction between the pedicle screw construct group and the hybrid construct group. However, the patients who received pedicle screw constructs had significantly less loss of coronal correction (P < .05). Two patients with posterior instrumentation, one with an all-hook construct and the other with a hybrid construct, required surgical revision because of progression of deformity. It is difficult to intraoperatively correct dystrophic deformity and to maintain this correction after surgery. Combined anterior release/fusion and posterior fusion using either a pedicle screw construct or a hybrid construct provide similar curve corrections both sagittally and coronally. After long-term follow-up, sagittal correction was maintained with both constructs. However, patients treated with posterior instrumentation using pedicle screw constructs had significantly less loss of coronal correction.
Method for detection and correction of errors in speech pitch period estimates
NASA Technical Reports Server (NTRS)
Bhaskar, Udaya (Inventor)
1989-01-01
A method of detecting and correcting received values of a pitch period estimate of a speech signal for use in a speech coder or the like. An average is calculated of the nonzero values of received pitch period estimate since the previous reset. If a current pitch period estimate is within a range of 0.75 to 1.25 times the average, it is assumed correct, while if not, a correction process is carried out. If correction is required successively for more than a preset number of times, which will most likely occur when the speaker changes, the average is discarded and a new average calculated.
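A minimal sketch of the accept/correct/reset logic described above. The substitution used for an out-of-range estimate (here, the running average) is a simplification of the correction process, and the reset threshold is an assumed value.

```python
class PitchTracker:
    """Sketch of the pitch-period acceptance logic: keep a running average of
    accepted nonzero estimates, accept estimates within 0.75-1.25 times that
    average, correct the others, and reset after too many successive corrections."""

    def __init__(self, max_successive_corrections=3):
        self.values = []           # accepted nonzero estimates since the last reset
        self.successive = 0        # count of successive corrections
        self.max_successive = max_successive_corrections

    def update(self, estimate):
        if estimate == 0:          # unvoiced frame: pass through, ignore for the average
            return 0
        if not self.values:        # nothing accepted yet: accept as-is
            self.values.append(estimate)
            return estimate
        avg = sum(self.values) / len(self.values)
        if 0.75 * avg <= estimate <= 1.25 * avg:
            self.values.append(estimate)       # within range: assume correct
            self.successive = 0
            return estimate
        # Out of range: apply a correction (substitute the running average here).
        self.successive += 1
        if self.successive > self.max_successive:
            # Too many successive corrections (likely a speaker change): reset.
            self.values = [estimate]
            self.successive = 0
            return estimate
        return avg

tracker = PitchTracker()
for p in [80, 82, 81, 160, 79, 83]:            # 160 is a likely pitch-doubling error
    print(tracker.update(p))
```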
Antarctic Circumpolar Current Transport Variability during 2003-05 from GRACE
NASA Technical Reports Server (NTRS)
Zlotnicki, Victor; Wahr, John; Fukumori, Ichiro; Song, Yuhe T.
2006-01-01
Gravity Recovery and Climate Experiment (GRACE) gravity data spanning January 2003 - November 2005 are used as proxies for ocean bottom pressure (BP) averaged over 1 month, spherical Gaussian caps 500 km in radius, and along paths bracketing the Antarctic Circumpolar Current's various fronts. The GRACE BP signals are compared with those derived from the Estimating the Circulation and Climate of the Ocean (ECCO) ocean modeling-assimilation system, and to a non-Boussinesq version of the Regional Ocean Model System (ROMS). The discrepancy found between GRACE and the models is 1.7 cm(sub H2O) (1 cm(sub H2O) similar to 1 hPa), slightly lower than the 1.9 cm(sub H2O) estimated by the authors independently from propagation of GRACE errors. The northern signals are weak and uncorrelated among basins. The southern signals are strong, with a common seasonality. The seasonal cycles observed by GRACE in the Pacific and Indian Ocean sectors of the ACC are consistent, with annual and semiannual amplitudes of 3.6 and 0.6 cm(sub H2O) (1.1 and 0.6 cm(sub H2O) with ECCO); the average over the full southern path peaks (stronger ACC) in the southern winter, on days of year 197 and 97 for the annual and semiannual components, respectively; the Atlantic Ocean annual peak is 20 days earlier. An approximate conversion factor of 3.1 Sv (Sv equivalent to 10(exp 6) m(exp 3) s(exp -1)) of barotropic transport variability per cm(sub H2O) of BP change is estimated. Wind stress data time series from the Quick Scatterometer (QuikSCAT), averaged monthly, zonally, and over the latitude band 40 deg - 65 deg S, are also constructed and subsampled at the same months as with the GRACE data. The annual and semiannual harmonics of the wind stress peak on days 198 and 82, respectively. A decreasing trend over the 3 yr is observed in the three data types.
Gupta, Manoj; Gupta, T C
2017-10-01
The present study aims to accurately estimate inertial, physical, and dynamic parameters of a human body vibratory model that is consistent with the physical structure of the human body and also replicates its dynamic response. A 13 degree-of-freedom (DOF) lumped parameter model for a standing person subjected to support excitation is established. Model parameters are determined from anthropometric measurements, uniform mass density, elastic modulus of individual body segments, and modal damping ratios. Elastic moduli of ellipsoidal body segments are initially estimated by comparing the stiffness of spring elements, calculated from a detailed scheme, with values available in the literature. These values are further optimized by minimizing the difference between the theoretically calculated platform-to-head transmissibility ratio (TR) and experimental measurements. Modal damping ratios are estimated from the experimental transmissibility response using two dominant peaks in the frequency range of 0-25 Hz. From a comparison between the dynamic response determined from modal analysis and experimental results, a set of elastic moduli for different segments of the human body and a novel scheme to determine modal damping ratios from TR plots are established. An acceptable match between transmissibility values calculated from the vibratory model and experimental measurements for the 50th percentile U.S. male, except at very low frequencies, validates the human body model developed. Also, the reasonable agreement obtained between the theoretical response curve and the experimental response envelope for the average Indian male affirms the technique used for constructing the vibratory model of a standing person. The present work attempts to develop an effective technique for constructing a subject-specific damped vibratory model based on physical measurements.
NASA Technical Reports Server (NTRS)
Brown, J. A.
1983-01-01
Kennedy Space Center data aid in efficient construction-cost management. Report discusses development and use of NASA TR-1508, the Kennedy Space Center Aerospace Construction price book, for preparing conceptual budget, funding cost estimating, and preliminary cost engineering reports. Report is based on actual bid prices and Government estimates.
75 FR 41556 - Proposed Collection Renewal; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-16
... global education in the classroom. Estimated annual number of respondents: 300. Estimated average time to... the annual World Wise Schools Conference. The information is used as a record of attendance. 2. Title... global education in the classroom. Estimated annual number of responses: 300. Estimated average time to...
Savoca, Mark E.; Senay, Gabriel B.; Maupin, Molly A.; Kenny, Joan F.; Perry, Charles A.
2013-01-01
Remote-sensing technology and surface-energy-balance methods can provide accurate and repeatable estimates of actual evapotranspiration (ETa) when used in combination with local weather datasets over irrigated lands. Estimates of ETa may be used to provide a consistent, accurate, and efficient approach for estimating regional water withdrawals for irrigation and associated consumptive use (CU), especially in arid cropland areas that require supplemental water due to insufficient natural supplies from rainfall, soil moisture, or groundwater. ETa in these areas is considered equivalent to CU, and represents the part of applied irrigation water that is evaporated and/or transpired, and is not available for immediate reuse. A recent U.S. Geological Survey study demonstrated the application of the remote-sensing-based Simplified Surface Energy Balance (SSEB) model to estimate 10-year average ETa at 1-kilometer resolution on national and regional scales, and compared those ETa values to the U.S. Geological Survey’s National Water-Use Information Program’s 1995 county estimates of CU. The operational version of the SSEB method (SSEBop) is now used to construct monthly, county-level ETa maps of the conterminous United States for the years 2000, 2005, and 2010. The performance of the SSEBop was evaluated using eddy covariance flux tower datasets compiled for 2005, and the results showed a strong linear relationship in different land cover types across diverse ecosystems in the conterminous United States (correlation coefficient [r] ranging from 0.75 to 0.95). For example, r was 0.75 for woody savannas, 0.75 for grassland, 0.82 for forest, 0.84 for cropland, 0.89 for shrubland, and 0.95 for urban areas. A comparison of the remote-sensing SSEBop method for estimating ETa and the Hamon temperature method for estimating potential ET (ETp) also was conducted, using regressions of all available county averages of ETa for 2005 and 2010, and yielded correlations of r = 0.60 and r = 0.71, respectively. Correlations generally are stronger in the Southeast where ETa is close to ETp. SSEBop ETa provides more spatial detail and accuracy in the Southwest where irrigation is practiced in a smaller proportion of the region.
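The core scaling step in SSEB-type methods computes an ET fraction from land surface temperature relative to hot (dry) and cold (well-watered) references and multiplies it by a reference ET. The sketch below is only an illustration of that idea under assumed reference values; the actual SSEBop parameterization derives its references differently and is not reproduced here.

```python
import numpy as np

def sseb_eta(ts, t_cold, t_hot, et_ref):
    """ET fraction from land surface temperature relative to cold/hot references,
    scaled by reference ET (result in the same units as et_ref, e.g. mm/month)."""
    etf = (t_hot - ts) / (t_hot - t_cold)
    etf = np.clip(etf, 0.0, 1.05)            # keep the fraction in a plausible range
    return etf * et_ref

# Illustrative values only (temperatures in K, reference ET in mm/month).
ts = np.array([300.0, 310.0, 318.0])
print(sseb_eta(ts, t_cold=298.0, t_hot=320.0, et_ref=180.0))
```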
Montes-Restrepo, Victoria; Carrette, Evelien; Strobbe, Gregor; Gadeyne, Stefanie; Vandenberghe, Stefaan; Boon, Paul; Vonck, Kristl; Mierlo, Pieter van
2016-07-01
We investigated the influence of different skull modeling approaches on EEG source imaging (ESI), using data of six patients with refractory temporal lobe epilepsy who later underwent successful epilepsy surgery. Four realistic head models with different skull compartments, based on finite difference methods, were constructed for each patient: (i) Three models had skulls with compact and spongy bone compartments as well as air-filled cavities, segmented from either computed tomography (CT), magnetic resonance imaging (MRI) or a CT-template and (ii) one model included a MRI-based skull with a single compact bone compartment. In all patients we performed ESI of single and averaged spikes marked in the clinical 27-channel EEG by the epileptologist. To analyze at which time point the dipole estimations were closer to the resected zone, ESI was performed at two time instants: the half-rising phase and peak of the spike. The estimated sources for each model were validated against the resected area, as indicated by the postoperative MRI. Our results showed that single spike analysis was highly influenced by the signal-to-noise ratio (SNR), yielding estimations with smaller distances to the resected volume at the peak of the spike. Although averaging reduced the SNR effects, it did not always result in dipole estimations lying closer to the resection. The proposed skull modeling approaches did not lead to significant differences in the localization of the irritative zone from clinical EEG data with low spatial sampling density. Furthermore, we showed that a simple skull model (MRI-based) resulted in similar accuracy in dipole estimation compared to more complex head models (based on CT- or CT-template). Therefore, all the considered head models can be used in the presurgical evaluation of patients with temporal lobe epilepsy to localize the irritative zone from low-density clinical EEG recordings.
Using computational modeling of river flow with remotely sensed data to infer channel bathymetry
Nelson, Jonathan M.; McDonald, Richard R.; Kinzel, Paul J.; Shimizu, Y.
2012-01-01
As part of an ongoing investigation into the use of computational river flow and morphodynamic models for the purpose of correcting and extending remotely sensed river datasets, a simple method for inferring channel bathymetry is developed and discussed. The method is based on an inversion of the equations expressing conservation of mass and momentum to develop equations that can be solved for depth given known values of vertically-averaged velocity and water-surface elevation. The ultimate goal of this work is to combine imperfect remotely sensed data on river planform, water-surface elevation and water-surface velocity in order to estimate depth and other physical parameters of river channels. In this paper, the technique is examined using synthetic data sets that are developed directly from the application of forward two- and three-dimensional flow models. These data sets are constrained to satisfy conservation of mass and momentum, unlike typical remotely sensed field data sets. This provides a better understanding of the process and also allows assessment of how simple inaccuracies in remotely sensed estimates might propagate into depth estimates. The technique is applied to three simple cases: First, depth is extracted from a synthetic dataset of vertically averaged velocity and water-surface elevation; second, depth is extracted from the same data set but with a normally-distributed random error added to the water-surface elevation; third, depth is extracted from a synthetic data set for the same river reach using computed water-surface velocities (in place of depth-integrated values) and water-surface elevations. In each case, the extracted depths are compared to the actual measured depths used to construct the synthetic data sets (with two- and three-dimensional flow models). Even very small errors in water-surface elevation and velocity degrade depth estimates and cannot be recovered. Errors in depth estimates associated with assuming water-surface velocities equal to depth-integrated velocities are substantial, but can be reduced with simple corrections.
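The paper inverts the full mass and momentum equations; the simplest special case, sketched below under an assumption of steady, one-dimensional flow with known discharge, recovers depth from vertically averaged velocity through continuity alone. This is an illustration of the idea, not the authors' method.

```python
def depth_from_continuity(discharge, width, u_avg):
    """Depth from conservation of mass for steady 1-D flow:
    Q = u * h * w  =>  h = Q / (u * w). Illustrative special case only."""
    return discharge / (u_avg * width)

# Illustrative numbers: 120 m^3/s through a 40 m wide section moving at 1.5 m/s.
print(depth_from_continuity(discharge=120.0, width=40.0, u_avg=1.5))  # 2.0 m
```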
Mullins, C Daniel; Wang, Junling; Cooke, Jesse L; Blatt, Lisa; Baquet, Claudia R
2004-01-01
Projecting future breast cancer treatment expenditure is critical for budgeting purposes, medical decision making and the allocation of resources in order to maximise the overall impact on health-related outcomes of care. Currently, both longitudinal and cross-sectional methodologies are used to project the economic burden of cancer. This pilot study examined the differences in estimates that were obtained using these two methods, focusing on Maryland, US Medicaid reimbursement data for chemotherapy and prescription drugs for the years 1999-2000. Two different methodologies for projecting life cycles of cancer expenditure were considered. The first examined expenditure according to chronological time (calendar quarter) for all cancer patients in the database in a given quarter. The second examined only the most recent quarter and constructed a hypothetical expenditure life cycle by taking into consideration the number of quarters since the respective patient had her first claim. We found different average expenditures using the same data and over the same time period. The longitudinal measurement had less extreme peaks and troughs, and yielded average expenditure in the final period that was 60% higher than that produced using the cross-sectional analysis; however, the longitudinal analysis had intermediate periods with significantly lower estimated expenditure than the cross-sectional data. These disparate results signify that each of the methods has merit. The longitudinal method tracks changes over time while the cross-sectional approach reflects more recent data, e.g. current practice patterns. Thus, this study reiterates the importance of considering the methodology when projecting future cancer expenditure.
Water resources of the Port Madison Indian Reservation, Washington
Lum, W.E.
1979-01-01
The study summarized in this report was made to provide Suquamish Tribal leaders with information on the reservation's surface- and ground-water resources. The Tribal leaders need this information to help manage and protect their water resources against over-development. The quantity of ground water estimated to be available for withdrawal on a long-term basis is about 600 million gallons per year in the western part of the reservation and 400 million gallons per year in the eastern part of the reservation. It should be possible, economically and practically, to capture at least 40 percent of this ground water with properly constructed and located wells before it is discharged into the sea. This is enough water to supply at least 5,000 and 3,500 people with domestic water in these respective areas--about four times the present population. Of nine stream sites that were studied, the lowest average streamflows for a 7-day period estimated to occur an average of once in 2 years were 1.3 cubic feet per second or less. Streams at three of the sites have been observed dry at least once. The short period of data collection during this study limits the accuracy of statistical estimates of low flows. Both surface and ground water are of good quality with no unusual or harmful constituents; there was no evidence of major pollution in 1977. In the future, seawater intrusion into the ground-water system and pollution of the surface water by improperly treated sewage waste water could become problems. (Woodard-USGS).
Pairwise Measures of Causal Direction in the Epidemiology of Sleep Problems and Depression
Rosenström, Tom; Jokela, Markus; Puttonen, Sampsa; Hintsanen, Mirka; Pulkki-Råback, Laura; Viikari, Jorma S.; Raitakari, Olli T.; Keltikangas-Järvinen, Liisa
2012-01-01
Depressive mood is often preceded by sleep problems, suggesting that they increase the risk of depression. Sleep problems can also reflect a prodromal symptom of depression, thus temporal precedence alone is insufficient to confirm causality. The authors applied recently introduced statistical causal-discovery algorithms that can estimate causality from cross-sectional samples in order to infer the direction of causality between the two sets of symptoms from a novel perspective. Two common-population samples were used; one from the Young Finns study (690 men and 997 women, average age 37.7 years, range 30–45), and another from the Wisconsin Longitudinal study (3101 men and 3539 women, average age 53.1 years, range 52–55). These included three depression questionnaires (two in Young Finns data) and two sleep problem questionnaires. Three different causality estimates were constructed for each data set, tested in benchmark data with a (practically) known causality, and tested for assumption violations using simulated data. Causality algorithms performed well in the benchmark data and simulations, and a prediction was drawn for future empirical studies to confirm: for minor depression/dysphoria, sleep problems cause significantly more dysphoria than dysphoria causes sleep problems. The situation may change as depression becomes more severe, or more severe levels of symptoms are evaluated; also, artefacts due to severe depression being less well represented in the population data than minor depression may interfere with the estimation for depression scales that emphasize severe symptoms. The findings are consistent with other emerging epidemiological and biological evidence. PMID:23226400
Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data
Gebert, Warren A.; Walker, John F.; Kennedy, James L.
2011-01-01
Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
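The core calculation described above is simple: average annual base flow divided by drainage area approximates recharge. A minimal sketch follows, including the standard unit conversion from cubic feet per second over square miles to inches per year; the input numbers are illustrative, not values from the study.

```python
def recharge_in_per_yr(baseflow_cfs, drainage_area_sqmi):
    """Recharge (inches/year) ~ average annual base flow / drainage area.
    One cfs sustained for a year over 1 mi^2 is about 13.58 inches of water."""
    seconds_per_year = 365.25 * 24 * 3600
    ft3_per_year = baseflow_cfs * seconds_per_year
    area_ft2 = drainage_area_sqmi * 5280.0 ** 2
    return ft3_per_year / area_ft2 * 12.0        # feet of water -> inches

# Illustrative: 30 cfs of average annual base flow from a 50 mi^2 basin.
print(round(recharge_in_per_yr(30.0, 50.0), 1))  # about 8.2 in/yr
```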
48 CFR 36.214 - Special procedures for price negotiation in construction contracting.
Code of Federal Regulations, 2013 CFR
2013-10-01
... price negotiation in construction contracting. 36.214 Section 36.214 Federal Acquisition Regulations... negotiation in construction contracting. (a) Agencies shall follow the policies and procedures in part 15 when... scope of the work. If negotiations reveal errors in the Government estimate, the estimate shall be...
48 CFR 36.214 - Special procedures for price negotiation in construction contracting.
Code of Federal Regulations, 2011 CFR
2011-10-01
... price negotiation in construction contracting. 36.214 Section 36.214 Federal Acquisition Regulations... negotiation in construction contracting. (a) Agencies shall follow the policies and procedures in part 15 when... scope of the work. If negotiations reveal errors in the Government estimate, the estimate shall be...
48 CFR 36.214 - Special procedures for price negotiation in construction contracting.
Code of Federal Regulations, 2012 CFR
2012-10-01
... price negotiation in construction contracting. 36.214 Section 36.214 Federal Acquisition Regulations... negotiation in construction contracting. (a) Agencies shall follow the policies and procedures in part 15 when... scope of the work. If negotiations reveal errors in the Government estimate, the estimate shall be...
48 CFR 36.214 - Special procedures for price negotiation in construction contracting.
Code of Federal Regulations, 2014 CFR
2014-10-01
... price negotiation in construction contracting. 36.214 Section 36.214 Federal Acquisition Regulations... negotiation in construction contracting. (a) Agencies shall follow the policies and procedures in part 15 when... scope of the work. If negotiations reveal errors in the Government estimate, the estimate shall be...
48 CFR 36.214 - Special procedures for price negotiation in construction contracting.
Code of Federal Regulations, 2010 CFR
2010-10-01
... price negotiation in construction contracting. 36.214 Section 36.214 Federal Acquisition Regulations... negotiation in construction contracting. (a) Agencies shall follow the policies and procedures in part 15 when... scope of the work. If negotiations reveal errors in the Government estimate, the estimate shall be...
Montague, Marjorie; van Garderen, Delinda
2003-01-01
This study investigated students' mathematics achievement, estimation ability, use of estimation strategies, and academic self-perception. Students with learning disabilities (LD), average achievers, and intellectually gifted students (N = 135) in fourth, sixth, and eighth grade participated in the study. They were assessed to determine their mathematics achievement, ability to estimate discrete quantities, knowledge and use of estimation strategies, and perception of academic competence. The results indicated that the students with LD performed significantly lower than their peers on the math achievement measures, as expected, but viewed themselves to be as academically competent as the average achievers did. Students with LD and average achievers scored significantly lower than gifted students on all estimation measures, but they differed significantly from one another only on the estimation strategy use measure. Interestingly, even gifted students did not seem to have a well-developed understanding of estimation and, like the other students, did poorly on the first estimation measure. The accuracy of their estimates seemed to improve, however, when students were asked open-ended questions about the strategies they used to arrive at their estimates. Although students with LD did not differ from average achievers in their estimation accuracy, they used significantly fewer effective estimation strategies. Implications for instruction are discussed.
ERIC Educational Resources Information Center
Malloch, Douglas C.; Michael, William B.
1981-01-01
This study was designed to determine whether an unweighted linear combination of community college students' scores on standardized achievement tests and a measure of motivational constructs derived from Vroom's expectancy theory model of motivation was predictive of academic success (grade point average earned during one quarter of an academic…
DIDA - Dynamic Image Disparity Analysis.
1982-12-31
register the image only where the disparity estimates are believed to be correct. Therefore, in our implementation we register in proportion to the...average motion is computed as the average of neighbors' motions weighted by their confidence. Since estimates contribute only in proportion to their...confidence statistics in the same proportion as they contribute to the average disparity estimate. Two confidences are derived from the weighted
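The excerpt above describes combining neighboring motion/disparity estimates as a confidence-weighted average. Below is a minimal sketch of that weighting; the names and the pooled-confidence choice are illustrative assumptions, not taken from the report.

```python
import numpy as np

def weighted_disparity(neighbor_disparities, confidences):
    """Confidence-weighted average of neighboring disparity estimates; each
    estimate contributes in proportion to its confidence."""
    d = np.asarray(neighbor_disparities, dtype=float)
    c = np.asarray(confidences, dtype=float)
    if c.sum() == 0:
        return 0.0, 0.0                        # no usable information
    avg = np.sum(c * d) / np.sum(c)
    combined_conf = c.mean()                   # one simple choice of pooled confidence
    return avg, combined_conf

# Three reliable neighbors and one low-confidence outlier.
print(weighted_disparity([1.8, 2.1, 5.0, 2.0], [0.9, 0.8, 0.1, 0.7]))
```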
Survival of European mouflon (Artiodactyla: Bovidae) in Hawai'i based on tooth cementum lines
Hess, S.C.; Stephens, R.M.; Thompson, T.L.; Danner, R.M.; Kawakami, B.
2011-01-01
Reliable techniques for estimating age of ungulates are necessary to determine population parameters such as age structure and survival. Techniques that rely on dentition, horn, and facial patterns have limited utility for European mouflon sheep (Ovis gmelini musimon), but tooth cementum lines may offer a useful alternative. Cementum lines may not be reliable outside temperate regions, however, because lack of seasonality in diet may affect annulus formation. We evaluated the utility of tooth cementum lines for estimating age of mouflon in Hawai'i in comparison to dentition. Cementum lines were present in mouflon from Mauna Loa, island of Hawai'i, but were less distinct than in North American sheep. The two age-estimation methods provided similar estimates for individuals aged ≤3 yr by dentition (the maximum age estimable by dentition), with exact matches in 51% (18/35) of individuals, and an average difference of 0.8 yr (range 0-4). Estimates of age from cementum lines were higher than those from dentition in 40% (14/35) and lower in 9% (3/35) of individuals. Discrepancies in age estimates between techniques and between paired tooth samples estimated by cementum lines were related to certainty categories assigned by the clarity of cementum lines, reinforcing the importance of collecting a sufficient number of samples to compensate for samples of lower quality, which in our experience, comprised approximately 22% of teeth. Cementum lines appear to provide relatively accurate age estimates for mouflon in Hawai'i, allow estimating age beyond 3 yr, and they offer more precise estimates than tooth eruption patterns. After constructing an age distribution, we estimated annual survival with a log-linear model to be 0.596 (95% CI 0.554-0.642) for this heavily controlled population. © 2011 by University of Hawai'i Press.
Pandit, Maharaj K; Grumbine, R Edward
2012-12-01
Indian Himalayan basins are earmarked for widespread dam building, but aggregate effects of these dams on terrestrial ecosystems are unknown. We mapped distribution of 292 dams (under construction and proposed) and projected effects of these dams on terrestrial ecosystems under different scenarios of land-cover loss. We analyzed land-cover data of the Himalayan valleys, where dams are located. We estimated dam density on fifth- through seventh-order rivers and compared these estimates with current global figures. We used a species-area relation model (SAR) to predict short- and long-term species extinctions driven by deforestation. We used scatter plots and correlation studies to analyze distribution patterns of species and dams and to reveal potential overlap between species-rich areas and dam sites. We investigated effects of disturbance on community structure of undisturbed forests. Nearly 90% of Indian Himalayan valleys would be affected by dam building and 27% of these dams would affect dense forests. Our model projected that 54,117 ha of forests would be submerged and 114,361 ha would be damaged by dam-related activities. A dam density of 0.3247/1000 km(2) would be nearly 62 times greater than current average global figures; the average of 1 dam for every 32 km of river channel would be 1.5 times higher than figures reported for U.S. rivers. Our results show that most dams would be located in species-rich areas of the Himalaya. The SAR model projected that by 2025, deforestation due to dam building would likely result in extinction of 22 angiosperm and 7 vertebrate taxa. Disturbance due to dam building would likely reduce tree species richness by 35%, tree density by 42%, and tree basal cover by 30% in dense forests. These results, combined with relatively weak national environmental impact assessment and implementation, point toward significant loss of species if all proposed dams in the Indian Himalaya are constructed. ©2012 Society for Conservation Biology.
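Projections like these rest on the species-area relation S = c·A^z, under which the fraction of species expected to persist after habitat loss is (A_new/A_old)^z. Below is a minimal sketch; the exponent z = 0.25 is a commonly used default and is an assumption here, not the value used in the study, and the input numbers are illustrative.

```python
def species_remaining(n_species, area_old, area_new, z=0.25):
    """Species-area relation S = c * A**z: species expected to persist after
    habitat shrinks from area_old to area_new. z=0.25 is an assumed default."""
    return n_species * (area_new / area_old) ** z

def projected_extinctions(n_species, area_old, area_new, z=0.25):
    return n_species - species_remaining(n_species, area_old, area_new, z)

# Illustrative only: 1000 taxa, 10% of forest habitat lost to dam building.
print(round(projected_extinctions(n_species=1000, area_old=500_000, area_new=450_000), 1))
```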
Shrub Abundance Mapping in Arctic Tundra with Misr
NASA Astrophysics Data System (ADS)
Duchesne, R.; Chopping, M. J.; Wang, Z.; Schaaf, C.; Tape, K. D.
2013-12-01
Over the last 60 years an increase in shrub abundance has been observed in the Arctic tundra in connection with a rapid surface warming trend. Rapid shrub expansion may have consequences in terms of ecosystem structure and function, albedo, and feedbacks to climate; however, its rate is not yet known. The goal of this research effort is thus to map large scale changes in Arctic tundra vegetation by exploiting the structural signal in moderate resolution satellite remote sensing images from NASA's Multiangle Imaging SpectroRadiometer (MISR), mapped onto a 250m Albers Conic Equal Area grid. We present here large area shrub mapping supported by reference data collated using extensive field inventory data and high resolution panchromatic imagery. MISR Level 1B2 Terrain radiance scenes from the Terra satellite from 15 June-31 July, 2000 - 2010 were converted to surface bidirectional reflectance factors (BRF) using MISR Toolkit routines and the MISR 1 km LAND product BRFs. The red band data in all available cameras were used to invert the RossThick-LiSparse-Reciprocal BRDF model to retrieve kernel weights, model-fitting RMSE, and Weights of Determination. The reference database was constructed using aerial survey, three field campaigns (field inventory for shrub count, cover, mean radius and height), and high resolution imagery. Tall shrub number, mean crown radius, cover, and mean height estimates were obtained from QuickBird and GeoEye panchromatic image chips using the CANAPI algorithm, and calibrated using field-based estimates, thus extending the database to over eight hundred locations. Tall shrub fractional cover maps for the North Slope of Alaska were constructed using the bootstrap forest machine learning algorithm that exploits the surface information provided by MISR. The reference database was divided into two datasets for training and validation. The derived model used a set of 19 independent variables (the three kernel weights, ratios and interaction terms; white and black sky albedos; and blue, green, red, and NIR nadir camera BRFs) to grow a forest of decision trees. The final estimate is the average of the predicted values from each tree. Observations not used in constructing the trees were used in validation. The model was applied with a large volume of MISR data and the resulting fractional cover estimates were combined into annual maps using a compositing algorithm that flags results affected by cloud, cloud shadow, surface water, extreme outliers, topographic shading, and burned areas. The maps show that shrub cover is lower on the north slope in comparison to the southern part, as expected; however, a preliminary assessment of the fractional cover change over the last decade, achieved by averaging fractional cover values for 2000-2002 and 2008-2010 and then calculating the change between the two periods, revealed that there are large areas for which we cannot determine the sign of the change with high confidence, as the precision of our estimate is close to the magnitude of the cover values. Additional research is thus required to reliably map shrub cover in this environment at annual intervals.
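The kernel-driven BRDF model named above (RossThick-LiSparse-Reciprocal) is linear in its three kernel weights, so the inversion from multi-angle BRFs is ordinary least squares. A minimal sketch is given below, assuming the volumetric and geometric kernel values for each view angle have already been computed (kernel evaluation itself is omitted, and the numbers are not real kernel values).

```python
import numpy as np

def invert_rtlsr(brf, k_vol, k_geo):
    """Least-squares retrieval of the RossThick-LiSparse-Reciprocal kernel
    weights (f_iso, f_vol, f_geo) from multi-angle reflectances:
        BRF = f_iso + f_vol * K_vol + f_geo * K_geo.
    Returns the kernel weights and the model-fitting RMSE."""
    brf = np.asarray(brf, float)
    A = np.column_stack([np.ones_like(brf),
                         np.asarray(k_vol, float),
                         np.asarray(k_geo, float)])
    weights, *_ = np.linalg.lstsq(A, brf, rcond=None)
    rmse = np.sqrt(np.mean((A @ weights - brf) ** 2))
    return weights, rmse

# Illustrative values for nine MISR-like view angles (not real kernel values).
k_vol = np.array([-0.02, 0.01, 0.05, 0.10, 0.16, 0.10, 0.05, 0.01, -0.02])
k_geo = np.array([-1.9, -1.6, -1.4, -1.2, -1.1, -1.2, -1.4, -1.6, -1.9])
brf   = 0.08 + 0.05 * k_vol + 0.02 * k_geo
print(invert_rtlsr(brf, k_vol, k_geo))
```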
Wheeler, Matthew W; Bailer, A John
2007-06-01
Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD dose estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO(2) dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
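A minimal sketch of the approach described: fit a few dichotomous dose-response models by maximum likelihood, average the fitted curves with AIC-based weights, and read the BMD off the averaged curve as the dose giving a specified extra risk. The model forms, starting values, and data below are simplified assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize, brentq
from scipy.special import expit

# Illustrative dichotomous dose-response data (dose, animals tested, responders).
dose = np.array([0.0, 10.0, 50.0, 100.0])
n    = np.array([50,  50,   50,    50])
y    = np.array([2,   5,    14,    30])

def logistic(d, p):                 # p = (intercept, slope)
    return expit(p[0] + p[1] * d)

def quantal_linear(d, p):           # p = (background on logit scale, log slope)
    g = expit(p[0])
    return g + (1.0 - g) * (1.0 - np.exp(-np.exp(p[1]) * d))

models = [(logistic, np.array([-3.0, 0.02])),
          (quantal_linear, np.array([-3.0, -4.0]))]

def neg_loglik(p, f):
    pr = np.clip(f(dose, p), 1e-8, 1 - 1e-8)
    return -np.sum(y * np.log(pr) + (n - y) * np.log(1 - pr))

fits, aics = [], []
for f, p0 in models:
    res = minimize(neg_loglik, p0, args=(f,), method="Nelder-Mead")
    fits.append((f, res.x))
    aics.append(2 * len(p0) + 2 * res.fun)

w = np.exp(-0.5 * (np.array(aics) - min(aics)))
w /= w.sum()                                          # AIC weights

def averaged_risk(d):
    return sum(wi * f(d, p) for wi, (f, p) in zip(w, fits))

def extra_risk(d, bmr=0.10):                          # extra risk over background
    p0 = averaged_risk(0.0)
    return (averaged_risk(d) - p0) / (1.0 - p0) - bmr

bmd = brentq(extra_risk, 1e-6, dose.max())            # dose giving 10% extra risk
print("model-averaged BMD:", round(bmd, 2))
```

The BMDL described in the abstract would come from repeating this whole procedure on bootstrap resamples of the data and taking a lower percentile of the resulting BMDs; that step is omitted here for brevity.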
Kauppinen, Timo; Uuksulainen, Sanni; Saalo, Anja; Mäkinen, Ilpo; Pukkala, Eero
2014-04-01
This paper reviews the use of the Finnish Information System on Occupational Exposure (Finnish job-exposure matrix, FINJEM) in different applications in Finland and other countries. We describe and discuss studies on FINJEM and studies utilizing FINJEM in regard to the validity of exposure estimates, occupational epidemiology, hazard surveillance and prevention, the assessment of health risks and the burden of disease, the assessment of exposure trends and future hazards, and the construction of job-exposure matrices (JEMs) in countries other than Finland. FINJEM can be used as an exposure assessment tool in occupational epidemiology, particularly in large register-based studies. It also provides information for hazard surveillance at the national level. It is able to identify occupations with high average exposures to chemical agents and can therefore serve the priority setting of prevention. However, it has only limited use at the workplace level due to the variability of exposure between workplaces. The national estimates of exposure and their temporal trends may contribute to the assessment of both the recent and future burden of work-related health outcomes. FINJEM has also proved to be useful in the construction of other national JEMs, for example in the Nordic Occupational Cancer study in the Nordic countries. FINJEM is a quantitative JEM, which can serve many purposes and its comprehensive documentation also makes it potentially useful in countries other than Finland.
NASA Astrophysics Data System (ADS)
Rotzoll, K.; Izuka, S. K.; Nishikawa, T.; Fienen, M. N.; El-Kadi, A. I.
2016-12-01
Some of the volcanic-rock aquifers of the islands of Hawaii are substantially developed, leading to concerns related to the effects of groundwater withdrawals on saltwater intrusion and stream base-flow reduction. A numerical modeling analysis using recent available information (e.g., recharge, withdrawals, hydrogeologic framework, and conceptual models of groundwater flow) advances current understanding of groundwater flow and provides insight into the effects of human activity and climate change on Hawaii's water resources. Three island-wide groundwater-flow models (Kauai, Oahu, and Maui) were constructed using MODFLOW 2005 coupled with the Seawater-Intrusion Package (SWI2), which simulates the transition between saltwater and freshwater in the aquifer as a sharp interface. This approach allowed coarse vertical discretization (maximum of two layers) without ignoring the freshwater-saltwater system at the regional scale. Model construction (FloPy3), parameter estimation (PEST), and analysis of results were streamlined using Python scripts. Model simulations included pre-development (1870) and recent (average of 2001-10) scenarios for each island. Additionally, scenarios for future withdrawals and climate change were simulated for Oahu. We present our streamlined approach and results showing estimated effects of human activity on the groundwater resource by quantifying decline in water levels, rise of the freshwater-saltwater interface, and reduction in stream base flow. Water-resource managers can use this information to evaluate consequences of groundwater development that can constrain future groundwater availability.
Arzola, Cristian; Carvalho, Jose C A; Cubillos, Javier; Ye, Xiang Y; Perlas, Anahi
2013-08-01
Focused assessment of the gastric antrum by ultrasound is a feasible tool to evaluate the quality of the stomach content. We aimed to determine the amount of training an anesthesiologist would need to achieve competence in the bedside ultrasound technique for qualitative assessment of gastric content. Six anesthesiologists underwent a teaching intervention followed by a formative assessment; then learning curves were constructed. Participants received didactic teaching (reading material, picture library, and lecture) and an interactive hands-on workshop on live models directed by an expert sonographer. The participants were instructed on how to perform a systematic qualitative assessment to diagnose one of three distinct categories of gastric content (empty, clear fluid, solid) in healthy volunteers. Individual learning curves were constructed using the cumulative sum method, and competence was defined as a 90% success rate in a series of ultrasound examinations. A predictive model was further developed based on the entire cohort performance to determine the number of cases required to achieve a 95% success rate. Each anesthesiologist performed 30 ultrasound examinations (a total of 180 assessments), and three of the six participants achieved competence. The average number of cases required to achieve 90% and 95% success rates was estimated to be 24 and 33, respectively. With appropriate training and supervision, it is estimated that anesthesiologists will achieve a 95% success rate in bedside qualitative ultrasound assessment after performing approximately 33 examinations.
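A minimal sketch of one common cumulative sum (CUSUM) construction for learning curves: the score rises with failures and falls with successes, so a sustained downward trend indicates performance better than the acceptable failure rate. The acceptable rate and the outcome sequence below are illustrative assumptions, not the study's parameters.

```python
def cusum_learning_curve(outcomes, acceptable_failure_rate=0.10):
    """Simple CUSUM score: each failure adds (1 - p0), each success subtracts p0,
    where p0 is the acceptable failure rate."""
    p0 = acceptable_failure_rate
    score, curve = 0.0, []
    for success in outcomes:
        score += (1.0 - p0) if not success else -p0
        curve.append(score)
    return curve

# Illustrative sequence of ultrasound assessments (True = correct diagnosis).
outcomes = [False, True, True, False, True, True, True, True, True, True]
print([round(s, 2) for s in cusum_learning_curve(outcomes)])
```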
Daban, J R
2000-04-11
The local concentration of DNA in metaphase chromosomes of different organisms has been determined in several laboratories. The average of these measurements is 0.17 g/mL. In the first level of chromosome condensation, DNA is wrapped around histones forming nucleosomes. This organization limits the DNA concentration in nucleosomes to 0.3-0.4 g/mL. Furthermore, in the structural models suggested in different laboratories for the 30-40 nm chromatin fiber, the estimated DNA concentration is significantly reduced; it ranges from 0.04 to 0.27 g/mL. The DNA concentration is further reduced when the fiber is folded into the successive higher order structures suggested in different models for metaphase chromosomes; the estimated minimum decrease of DNA concentration represents an additional 40%. These observations suggest that most of the models proposed for the 30-40 nm chromatin fiber are not dense enough for the construction of metaphase chromosomes. In contrast, it is well-known that the linear packing ratio increases dramatically in each level of DNA folding in chromosomes. Thus, the consideration of the linear packing ratio is not enough for the study of chromatin condensation; the constraint resulting from the actual DNA concentration in metaphase chromosomes must be considered for the construction of models for condensed chromatin.
Roberts, Laura N. Robinson
1991-01-01
The coal-bearing Upper Cretaceous Fruitland Formation occupies an area of about 14 square miles in the extreme southeast corner of the Ute Mountain Ute Indian Reservation in San Juan County, New Mexico. In this area, the Fruitland Formation contains an estimated 252 million short tons of coal in beds that range from 1.2 to 14 feet thick. About 100 million short tons of coal occur under less than 500 feet of overburden in the Ute Canyon, Upper Main, and Main coal beds. These three coal beds reach a cumulative coal thickness of about 18 feet in a stratigraphic interval that averages about 120 feet thick in the prospecting permit area, which is located in the extreme southwestern part of the study area. The southwestern part of the study area is probably best suited for surface mining, although steep dips may reduce minability locally. A major haul road that was recently constructed across the eastern half of the study area greatly improves the potential for surface mining. Core sample analyses indicate that the apparent rank of the Ute Canyon, Upper Main, and Main coal beds is high-volatile C bituminous. Average heat-of-combustion on an as-received basis is 10,250 British thermal units per pound, average ash content is 15.5 percent, and average sulfur content is 1.0 percent.
Uncertainties in Estimates of Fleet Average Fuel Economy : A Statistical Evaluation
DOT National Transportation Integrated Search
1977-01-01
Research was performed to assess the current Federal procedure for estimating the average fuel economy of each automobile manufacturer's new car fleet. Test vehicle selection and fuel economy estimation methods were characterized statistically and so...
Sherwood, James M.; Huitger, Carrie A.; Ebner, Andrew D.; Koltun, G.F.
2008-01-01
The USGS, in cooperation with the Ohio Emergency Management Agency, conducted a study in the Wheeling Creek Basin to (1) evaluate and contrast land-cover characteristics from 2001 with characteristics from 1979 and 1992; (2) compare current streambed elevation, slope, and geometry with conditions present in the late 1980s; (3) look for evidence of channel filling and over widening in selected undredged reaches; (4) estimate flood elevations for existing conditions in both undredged and previously dredged reaches; (5) evaluate the height of the levees required to contain floods with selected recurrence intervals in previously dredged reaches; and (6) estimate flood elevations for several hypothetical dredging and streambed aggradation scenarios in undredged reaches. The amount of barren land in the Wheeling Creek watershed has decreased from 20 to 1 percent of the basin area based on land-cover characteristics from 1979 and 2001. Barren lands appear to have been converted primarily to pasture, presumably as a result of surface-mine reclamation. Croplands also decreased from 13 to 8 percent of the basin area. The combined decrease in barren lands and croplands is approximately offset by the increase in pasture. Stream-channel surveys conducted in 1987 and again in 2006 at 21 sites in four previously dredged reaches of Wheeling Creek indicate little change in the elevation, slope, and geometry of the channel at most sites. The mean change in width-averaged bed and thalweg elevations for the 21 cross sections was 0.1 feet. Bankfull widths, mean depths, and cross-sectional areas measured at 12 sites in undredged reaches were compared to estimates determined from regional equations. The mean percentage difference between measured and estimated bankfull widths was -0.2 percent, suggesting that bankfull widths in the Wheeling Creek Basin are generally about the same as regional averages for undisturbed basins of identical drainage area. For bankfull mean depth and cross-sectional area, the mean percentage differences between the measured and estimated values were -16.0 and -11.2, respectively. The predominantly negative bias in differences between the measured and estimated values indicates that bankfull mean depths and cross-sectional areas in studied reaches generally are smaller than the regional trend. This may be an indication of channel filling and over widening or it may reflect insufficient representation in the regional dataset of basins with characteristics like that of Wheeling Creek. Step-backwater models were constructed for four previously dredged reaches to determine the height of levees required to contain floods with recurrence intervals of 2, 10, 50, and 100 years. Existing levees (all of which are uncertified) were found to contain the 100-year flood at only 20 percent of the surveyed cross sections. At the other 80 percent of the surveyed cross sections, levee heights would have to be raised an average of 2.5 feet and as much as 6.3 feet to contain the 100-year flood. Step-backwater models also were constructed for three undredged reaches to assess the impacts of selected dredging and streambed aggradation scenarios on water-surface elevations corresponding to the 2-, 10-, 50-, and 100-year floods. Those models demonstrated that changes in water-surface elevations associated with a given depth of dredging were proportionately smaller for larger floods due to the fact that more of the flood waters are outside of the main channel. 
For example, 2.0 feet of dredging in the three study reaches would lower the water-surface elevation an average of 1.30 feet for the 2-year flood and 0.64 feet for the 100-year flood.
W. H. Reid; D. B. McKeever
Estimates of the amounts of wood products used in constructing civil conservation and development projects by the Corps of Engineers in the United States are presented for the years 1962 and 1978. Amounts of lumber, laminated lumber, poles and piling, and plywood used in construction are stratified by five construction categories, and three types of uses. Estimates of...
Code of Federal Regulations, 2012 CFR
2012-07-01
... averages into the appropriate averaging times and units? 60.3042 Section 60.3042 Protection of Environment... Construction On or Before December 9, 2004 Model Rule-Monitoring § 60.3042 How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to...
Code of Federal Regulations, 2010 CFR
2010-07-01
... averages into the appropriate averaging times and units? 60.3042 Section 60.3042 Protection of Environment... Construction On or Before December 9, 2004 Model Rule-Monitoring § 60.3042 How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to...
Code of Federal Regulations, 2013 CFR
2013-07-01
... averages into the appropriate averaging times and units? 60.3042 Section 60.3042 Protection of Environment... Construction On or Before December 9, 2004 Model Rule-Monitoring § 60.3042 How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to...
Code of Federal Regulations, 2011 CFR
2011-07-01
... averages into the appropriate averaging times and units? 60.3042 Section 60.3042 Protection of Environment... Construction On or Before December 9, 2004 Model Rule-Monitoring § 60.3042 How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to...
Code of Federal Regulations, 2014 CFR
2014-07-01
... averages into the appropriate averaging times and units? 60.3042 Section 60.3042 Protection of Environment... Construction On or Before December 9, 2004 Model Rule-Monitoring § 60.3042 How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to...
Semi-Degradable Scaffold for Articular Cartilage Replacement
Charlton, DC; Peterson, MGE; Spiller, K; Lowman, A; Torzilli, PA; Maher, SA
2009-01-01
The challenge of designing a construct for the repair of focal cartilage defects such that it mimics the mechanical properties of and can integrate with native cartilage has not been met by existing technologies. Herein we describe a novel construct consisting of a non-degradable poly(vinyl alcohol) (PVA) scaffold to provide long-term mechanical stability, interconnected pores to allow for the infiltration of chondrocytes, and poly(lactic-co-glycolic acid) (PLGA) microspheres for the incorporation of growth factors to enhance cellular migration. The objective of this study was to characterize the morphological features and mechanical properties of our porous PVA-PLGA construct as a function of PLGA content. Varying the PLGA content was found to have a significant effect on the morphological features of the construct. As PLGA content increased from 10 – 75%, samples exhibited a six-fold increase in average percent porosity, an increase in average microsphere diameter from 8 – 34 µm, and an increase in average pore diameter from 29 – 111 µm. The effect of PLGA content on Aggregate Modulus and Permeability was less profound. Our findings suggest that the morphology of the construct can be tailored to optimize cellular infiltration and the dynamic mechanical response. PMID:18333818
Age-dependence of the average and equivalent refractive indices of the crystalline lens
Charman, W. Neil; Atchison, David A.
2013-01-01
Lens average and equivalent refractive indices are required for purposes such as lens thickness estimation and optical modeling. We modeled the refractive index gradient as a power function of the normalized distance from lens center. Average index along the lens axis was estimated by integration. Equivalent index was estimated by raytracing through a model eye to establish ocular refraction, and then backward raytracing to determine the constant refractive index yielding the same refraction. Assuming center and edge indices remained constant with age, at 1.415 and 1.37 respectively, average axial refractive index increased (1.408 to 1.411) and equivalent index decreased (1.425 to 1.420) with age increase from 20 to 70 years. These values agree well with experimental estimates based on different techniques, although the latter show considerable scatter. The simple model of index gradient gives reasonable estimates of average and equivalent lens indices, although refinements in modeling and measurements are required. PMID:24466474
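When the index gradient is modeled as a power function of the normalized distance from the lens center, the average axial index has a simple closed form obtained by integration. The sketch below uses the center and edge indices quoted in the abstract; the exponent values are illustrative assumptions standing in for the age-dependent exponent.

```python
import numpy as np

def average_axial_index(n_center=1.415, n_edge=1.370, p=6.0):
    """Average refractive index along the lens axis when the gradient is modeled
    as a power function of the normalized distance r from the lens center:
        n(r) = n_center - (n_center - n_edge) * r**p,  0 <= r <= 1.
    Integrating over r gives n_center - (n_center - n_edge) / (p + 1)."""
    return n_center - (n_center - n_edge) / (p + 1.0)

# Numerical check of the closed form, and the age-related trend: a flatter
# central plateau (larger p) raises the average axial index.
r = np.linspace(0.0, 1.0, 200001)
print(abs((1.415 - 0.045 * r ** 6.0).mean() - average_axial_index(p=6.0)) < 1e-4)
for p in (4.0, 8.0, 12.0):
    print(p, round(average_axial_index(p=p), 4))
```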
Kontosic, I; Vukelić, M; Pancić, M; Kunisek, J
1994-12-01
Physical work load was estimated in a female conveyor-belt worker in a bottling plant. Estimation was based on continuous measurement and on calculation of average heart rate values in three-minute and one-hour periods and during the total measuring period. The thermal component of the heart rate was calculated by means of the corrected effective temperature, for the one-hour periods. The average heart rate at rest was also determined. The work component of the heart rate was calculated by subtraction of the resting heart rate and the heart rate measured at 50 W, using a regression equation. The average estimated gross energy expenditure during the work was 9.6 +/- 1.3 kJ/min corresponding to the category of light industrial work. The average estimated oxygen uptake was 0.42 +/- 0.06 L/min. The average performed mechanical work was 12.2 +/- 4.2 W, i.e. the energy expenditure was 8.3 +/- 1.5%.
NASA Astrophysics Data System (ADS)
Titov, O. A.; Lopez, Yu. R.
2018-03-01
We consider a method of reconstructing the structure delay of extended radio sources without constructing their radio images. The residuals derived after the adjustment of geodetic VLBI observations are used for this purpose. We show that the simplest model of a radio source consisting of two point components can be represented by four parameters (the angular separation of the components, the mutual orientation relative to the poleward direction, the flux-density ratio, and the spectral index difference) that are determined for each baseline of a multi-baseline VLBI network. The efficiency of this approach is demonstrated by estimating the coordinates of the radio source 0014+813 observed during the two-week CONT14 program organized by the International VLBI Service (IVS) in May 2014. Large systematic deviations have been detected in the residuals of the observations for the radio source 0014+813. The averaged characteristics of the radio structure of 0014+813 at a frequency of 8.4 GHz can be calculated from these deviations. Our modeling using four parameters has confirmed that the source consists of two components at an angular separation of 0.5 mas in the north-south direction. Using the structure delay when adjusting the CONT14 observations leads to a correction of the average declination estimate for the radio source 0014+813 by 0.070 mas.
Linkage map of the honey bee, Apis mellifera, based on RAPD markers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, G.J.; Page, R.E. Jr.
A linkage map was constructed for the honey bee based on the segregation of 365 random amplified polymorphic DNA (RAPD) markers in haploid male progeny of a single female bee. The X locus for sex determination and genes for black body color and malate dehydrogenase were mapped to separate linkage groups. RAPD markers were very efficient for mapping, with an average of about 2.8 loci mapped for each 10-nucleotide primer that was used in polymerase chain reactions. The mean interval size between markers on the map was 9.1 cM. The map covered 3110 cM of linked markers on 26 linkage groups. We estimate the total genome size to be approximately 3450 cM. The size of the map indicated a very high recombination rate for the honey bee. The relationship of physical to genetic distance was estimated at 52 kb/cM, suggesting that map-based cloning of genes will be feasible for this species. 71 refs., 6 figs., 1 tab.
An efficient approach to ARMA modeling of biological systems with multiple inputs and delays
NASA Technical Reports Server (NTRS)
Perrott, M. H.; Cohen, R. J.
1996-01-01
This paper presents a new approach to AutoRegressive Moving Average (ARMA or ARX) modeling which automatically seeks the best model order to represent investigated linear, time invariant systems using their input/output data. The algorithm seeks the ARMA parameterization which accounts for variability in the output of the system due to input activity and contains the fewest number of parameters required to do so. The unique characteristics of the proposed system identification algorithm are its simplicity and efficiency in handling systems with delays and multiple inputs. We present results of applying the algorithm to simulated data and experimental biological data. In addition, a technique for assessing the error associated with the impulse responses calculated from estimated ARMA parameterizations is presented. The mapping from ARMA coefficients to impulse response estimates is nonlinear, which complicates any effort to construct confidence bounds for the obtained impulse responses. Here a method for obtaining a linearization of this mapping is derived, which leads to a simple procedure to approximate the confidence bounds.
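A minimal sketch of least-squares ARX fitting with two inputs and input delays, followed by reading an impulse response out of the fitted parameters by simulating the difference equation. Model orders, delays, and names here are illustrative; the paper's contributions (automatic order selection and linearized confidence bounds) are not reproduced.

```python
import numpy as np

def fit_arx(y, inputs, na, nb, delays):
    """Least-squares fit of an ARX model
        y[t] = sum_i a_i*y[t-i] + sum over inputs of sum_j b_ij*u[t-delay-j] + e[t]."""
    start = max([na] + [d + b for d, b in zip(delays, nb)])
    rows, targets = [], []
    for t in range(start, len(y)):
        row = [y[t - i] for i in range(1, na + 1)]
        for u, b, d in zip(inputs, nb, delays):
            row += [u[t - d - j] for j in range(b)]
        rows.append(row)
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta

def impulse_response(theta, na, nb, delays, which_input=0, length=30):
    """Impulse response from one input to the output, by simulating the fitted
    difference equation with a unit impulse on that input."""
    a, b_all, k = theta[:na], [], na
    for nbi in nb:
        b_all.append(theta[k:k + nbi]); k += nbi
    y, u = np.zeros(length), np.zeros(length)
    u[0] = 1.0
    for t in range(length):
        acc = sum(a[i - 1] * y[t - i] for i in range(1, na + 1) if t - i >= 0)
        d, b = delays[which_input], b_all[which_input]
        acc += sum(b[j] * u[t - d - j] for j in range(len(b)) if t - d - j >= 0)
        y[t] = acc
    return y

# Illustrative use with synthetic data from a known two-input system.
rng = np.random.default_rng(1)
n = 2000
u1, u2 = rng.normal(size=n), rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t-1] + 0.8 * u1[t-1] + 0.3 * u2[t-2] + 0.05 * rng.normal()
theta = fit_arx(y, [u1, u2], na=1, nb=[1, 1], delays=[1, 2])
print(np.round(theta, 3))                               # close to [0.5, 0.8, 0.3]
print(np.round(impulse_response(theta, 1, [1, 1], [1, 2])[:5], 3))
```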
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
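To make the "false impression of bias" point concrete, the toy simulation below uses a hypothetical two-stage design (stop at n1 if the interim mean exceeds a threshold, otherwise continue to n2). Conditionally on the realized sample size the sample average looks clearly biased, while marginally it stays much closer to the true mean, with a small finite-sample deviation that shrinks as the stage sizes grow. The stopping rule and numbers are illustrative assumptions, not the designs analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n1, n2, reps = 0.0, 50, 100, 100_000
cond = {n1: [], n2: []}
marginal = []

for _ in range(reps):
    stage1 = rng.normal(mu, 1.0, n1)
    # hypothetical stopping rule: stop at n1 if the interim mean is "large enough"
    if stage1.mean() > 0.1:
        est, n = stage1.mean(), n1
    else:
        full = np.concatenate([stage1, rng.normal(mu, 1.0, n2 - n1)])
        est, n = full.mean(), n2
    cond[n].append(est)
    marginal.append(est)

print("E[mean | N=n1] =", np.mean(cond[n1]))    # noticeably above mu
print("E[mean | N=n2] =", np.mean(cond[n2]))    # below mu
print("marginal E[mean] =", np.mean(marginal))  # much closer to mu
```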
Applications of Kelly's Personal Construct Theory to Vocational Guidance
ERIC Educational Resources Information Center
Paszkowska-Rogacz, Anna; Kabzinska, Zofia
2012-01-01
This paper outlines selected applications of Kelly's Personal Construct Theory to vocational guidance. The authors elicited personal constructs using the Rep Test (Role Construct Repertory Test) and compared them with Holland's occupational typology. The sample (N = 136, F = 85, M = 51, average age of 21.97) was composed of students of various…
77 FR 42696 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-20
... construction awards, 30 requests for amendments to non-construction awards, 2 project service maps). Average Hours Per Response: 2 hours for an amendment to a construction award, 1 hour for an amendment to a non-construction award, 6 hours for a project service map. Burden Hours: 1,242. Needs and Uses: A recipient must...
The Oval Female Facial Shape--A Study in Beauty.
Goodman, Greg J
2015-12-01
Our understanding of who is beautiful seems to be innate but has been argued to conform to mathematical principles and proportions. One aspect of beauty is facial shape that is gender specific. In women, an oval facial shape is considered attractive. The aim was to study the facial shape of beautiful actors, pageant title winners, and performers across ethnicities and in different time periods, and to construct an ideal oval shape based on the average of their facial shape dimensions. Twenty-one full-face photographs of purportedly beautiful female actors, performers, and pageant winners were analyzed and an oval constructed from their facial parameters. Only 3 of the 21 faces were totally symmetrical; in most of the others, the left upper and lower face was the larger side. The average oval was subsequently constructed from an average bizygomatic distance (horizontal parameter) of 4.3 times their intercanthal distance (ICD) and a vertical dimension that averaged 6.3 times their ICD. This average oval could be fitted to many of the individual subjects, showing a smooth flow from the forehead through temples, cheeks, jaw angle, jawline, and chin with all these facial aspects abutting the oval. Where they did not abut, treatment may have improved these subjects.
A robust method of thin plate spline and its application to DEM construction
NASA Astrophysics Data System (ADS)
Chen, Chuanfa; Li, Yanyan
2012-11-01
In order to avoid the ill-conditioning problem of thin plate spline (TPS), the orthogonal least squares (OLS) method was introduced, and a modified OLS (MOLS) was developed. The MOLS of TPS (TPS-M) can not only select significant points, termed knots, from large and dense sampling data sets, but also easily compute the weights of the knots in terms of back-substitution. For interpolating large sampling points, we developed a local TPS-M, where some neighbor sampling points around the point being estimated are selected for computation. Numerical tests indicate that irrespective of sampling noise level, the average performance of TPS-M is comparable with that of smoothing TPS. Under the same simulation accuracy, the computational time of TPS-M decreases with the increase of the number of sampling points. The smooth fitting results on lidar-derived noisy data indicate that TPS-M has an obvious smoothing effect, which is on par with smoothing TPS. The example of constructing a series of large scale DEMs, located in Shandong province, China, was employed to comparatively analyze the estimation accuracies of the two versions of TPS and the classical interpolation methods including inverse distance weighting (IDW), ordinary kriging (OK) and universal kriging with the second-order drift function (UK). Results show that regardless of sampling interval and spatial resolution, TPS-M is more accurate than the classical interpolation methods, except for the smoothing TPS at the finest sampling interval of 20 m, and the two versions of kriging at the spatial resolution of 15 m. In conclusion, TPS-M, which avoids the ill-conditioning problem, is considered a robust method for DEM construction.
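For readers who want a baseline to compare against, the snippet below builds an ordinary smoothing thin plate spline surface with SciPy's RBFInterpolator and evaluates it on a DEM grid. This is the standard smoothing TPS, not the TPS-M knot-selection method of the paper; the synthetic terrain, smoothing value, and grid are placeholders.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(42)
# synthetic noisy "terrain" samples (stand-ins for lidar points)
xy = rng.uniform(0, 1000, size=(2000, 2))
z = 50 * np.sin(xy[:, 0] / 200) + 30 * np.cos(xy[:, 1] / 150) + rng.normal(0, 1.0, 2000)

# smoothing thin plate spline; `smoothing` trades fidelity for smoothness
tps = RBFInterpolator(xy, z, kernel='thin_plate_spline', smoothing=1.0)

# evaluate on a regular DEM grid
gx, gy = np.meshgrid(np.linspace(0, 1000, 201), np.linspace(0, 1000, 201))
dem = tps(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print(dem.shape)
```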
Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-05-29
Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. And for a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter could vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and enhanced signal-to-noise ratio.
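A schematic of the selection-then-average step described above: grid points whose ensemble spread is smallest are treated as the "good" estimates and averaged into one globally uniform parameter value. The fraction of points kept, and the toy fields, are assumptions for illustration rather than the published configuration.

```python
import numpy as np

def adaptive_spatial_average(post_mean, post_spread, keep_frac=0.3):
    """post_mean, post_spread: 2-D fields (lat x lon) of the ensemble posterior
    parameter mean and ensemble spread. Grid points with the smallest spread are
    treated as 'good' estimates and averaged into one global parameter value."""
    flat_mean = post_mean.ravel()
    flat_spread = post_spread.ravel()
    n_keep = max(1, int(keep_frac * flat_mean.size))
    good = np.argsort(flat_spread)[:n_keep]          # lowest-spread points
    return flat_mean[good].mean()

# toy fields
rng = np.random.default_rng(1)
mean_field = 2.0 + 0.1 * rng.standard_normal((90, 180))
spread_field = np.abs(rng.standard_normal((90, 180)))
print(adaptive_spatial_average(mean_field, spread_field))
```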
Zhang, Guomin; Sandanayake, Malindu; Setunge, Sujeeva; Li, Chunqing; Fang, Jun
2017-02-01
Emissions from equipment usage and transportation at the construction stage are classified as the direct emissions which include both greenhouse gas (GHG) and non-GHG emissions due to partial combustion of fuel. Unavailability of a reliable and complete inventory restricts an accurate emission evaluation on construction work. The study attempts to review emission factor standards readily available worldwide for estimating emissions from construction equipment. Emission factors published by United States Environmental Protection Agency (US EPA), Australian National Greenhouse Accounts (AUS NGA), Intergovernmental Panel on Climate Change (IPCC) and European Environmental Agency (EEA) are critically reviewed to identify their strengths and weaknesses. A selection process based on the availability and applicability is then developed to help identify the most suitable emission factor standards for estimating emissions from construction equipment in the Australian context. A case study indicates that a fuel based emission factor is more suitable for GHG emission estimation and a time based emission factor is more appropriate for estimation of non-GHG emissions. However, the selection of emission factor standards also depends on factors like the place of analysis (country of origin), data availability and the scope of analysis. Therefore, suitable modifications and assumptions should be incorporated in order to represent these factors. Copyright © 2016 Elsevier Ltd. All rights reserved.
Estimating physical activity in children: impact of pedometer wear time and metric.
Laurson, Kelly R; Welk, Gregory J; Eisenmann, Joey C
2015-01-01
The purpose of this study was to provide a practical demonstration of the impact of monitoring frame and metric when assessing pedometer-determined physical activity (PA) in youth. Children (N = 1111) were asked to wear pedometers over a 7-day period, during which wear time and steps were recorded each day. Varying data-exclusion criteria were used to demonstrate changes in estimates of PA. Steps were expressed using several metrics and criteria, and construct validity was demonstrated via correlations with adiposity. Meaningful fluctuations in average steps per day and percentage meeting PA recommendations were apparent when different criteria were used. Children who wore the pedometer longer appeared more active, with each minute the pedometer was worn each day accounting for an approximate increase of 11 and 8 steps for boys and girls, respectively (P < .05). Using more restrictive exclusion criteria led to stronger correlations between indices of steps per day, steps per minute, steps per leg length, steps per minute per leg length, and obesity. Wear time has a meaningful impact on estimates of PA. This should be considered when determining exclusion criteria and making comparisons between studies. Results also suggest that incorporating wear time per day and leg length into the metric may increase validity of PA estimates.
A patient-specific segmentation framework for longitudinal MR images of traumatic brain injury
NASA Astrophysics Data System (ADS)
Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Chambers, Micah C.; Vespa, Paul M.; Van Horn, John D.; Gerig, Guido
2012-02-01
Traumatic brain injury (TBI) is a major cause of death and disability worldwide. Robust, reproducible segmentations of MR images with TBI are crucial for quantitative analysis of recovery and treatment efficacy. However, this is a significant challenge due to severe anatomy changes caused by edema (swelling), bleeding, tissue deformation, skull fracture, and other effects related to head injury. In this paper, we introduce a multi-modal image segmentation framework for longitudinal TBI images. The framework is initialized through manual input of primary lesion sites at each time point, which are then refined by a joint approach composed of Bayesian segmentation and construction of a personalized atlas. The personalized atlas construction estimates the average of the posteriors of the Bayesian segmentation at each time point and warps the average back to each time point to provide the updated priors for Bayesian segmentation. The difference between our approach and segmenting longitudinal images independently is that we use the information from all time points to improve the segmentations. Given a manual initialization, our framework automatically segments healthy structures (white matter, grey matter, cerebrospinal fluid) as well as different lesions such as hemorrhagic lesions and edema. Our framework can handle different sets of modalities at each time point, which provides flexibility in analyzing clinical scans. We show results on three subjects with acute baseline scans and chronic follow-up scans. The results demonstrate that joint analysis of all the points yields improved segmentation compared to independent analysis of the two time points.
Jiang, Likun; You, Weiwei; Zhang, Xiaojun; Xu, Jian; Jiang, Yanliang; Wang, Kai; Zhao, Zixia; Chen, Baohua; Zhao, Yunfeng; Mahboob, Shahid; Al-Ghanim, Khalid A; Ke, Caihuan; Xu, Peng
2016-02-01
The small abalone (Haliotis diversicolor) is one of the most important aquaculture species in East Asia. To facilitate gene cloning and characterization, genome analysis, and genetic breeding of this species, we constructed a large-insert bacterial artificial chromosome (BAC) library, which is an important genetic tool for advanced genetics and genomics research. The small abalone BAC library includes 92,610 clones with an average insert size of 120 Kb, equivalent to approximately 7.6× coverage of the small abalone genome. We set up three-dimensional pools and super pools of 18,432 BAC clones for target gene screening using a PCR method. To assess the approach, we screened 12 target genes in these 18,432 BAC clones and identified 16 positive BAC clones. Eight positive BAC clones were then sequenced and assembled with the next generation sequencing platform. The assembled contigs representing these 8 BAC clones spanned 928 Kb of the small abalone genome, providing the first batch of genome sequences for genome evaluation and characterization. The average GC content of the small abalone genome was estimated as 40.33%. A total of 21 protein-coding genes, including 7 target genes, were annotated into the 8 BACs, which proved the feasibility of the PCR screening approach with three-dimensional pools in the small abalone BAC library. One hundred fifty microsatellite loci were also identified from the sequences for marker development in the future. The BAC library and clone pools provided valuable resources and tools for genetic breeding and conservation of H. diversicolor.
Liu, Changqing; Bai, Chunyu; Guo, Yu; Liu, Dan; Lu, Taofeng; Li, Xiangchen; Ma, Jianzhang; Ma, Yuehui; Guan, Weijun
2014-01-01
Bacterial artificial chromosome (BAC) libraries are extremely valuable for the genome-wide genetic dissection of complex organisms. The Siberian tiger, one of the most well-known wild primitive carnivores in China, is an endangered animal. In order to promote research on its genome, a high-redundancy BAC library of the Siberian tiger was constructed and characterized. The library is divided into two sub-libraries prepared from blood cells and two sub-libraries prepared from fibroblasts. This BAC library contains 153,600 individually archived clones; for PCR-based screening of the library, BACs were placed into 40 superpools of 10 × 384-deep well microplates. The average insert size of BAC clones was estimated to be 116.5 kb, representing approximately 6.46 genome equivalents of the haploid genome and affording a 98.86% statistical probability of obtaining at least one clone containing a unique DNA sequence. Screening the library with 19 microsatellite markers and a SRY sequence revealed that each of these markers were present in the library; the average number of positive clones per marker was 6.74 (range 2 to 12), consistent with 6.46 coverage of the tiger genome. Additionally, we identified 72 microsatellite markers that could potentially be used as genetic markers. This BAC library will serve as a valuable resource for physical mapping, comparative genomic study and large-scale genome sequencing in the tiger. PMID:24608928
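The coverage and screening-probability figures quoted in these two BAC-library abstracts follow from standard arithmetic: coverage is clones × insert size / genome size, and the chance that a unique locus is represented is commonly approximated by the Clarke-Carbon formula 1 - e^(-coverage). The genome sizes below are back-calculated assumptions chosen to reproduce the stated coverages, and this simple formula will not exactly match every probability quoted in the abstracts.

```python
import math

def bac_library_stats(n_clones, insert_kb, genome_mb):
    """Genome coverage and Clarke-Carbon probability of recovering any unique locus."""
    coverage = n_clones * insert_kb / (genome_mb * 1000.0)
    p_hit = 1.0 - math.exp(-coverage)
    return coverage, p_hit

# illustrative inputs; the genome sizes here are assumptions, not from the abstracts
for name, clones, insert, genome in [("small abalone", 92_610, 120, 1_460),
                                     ("Siberian tiger", 153_600, 116.5, 2_770)]:
    cov, p = bac_library_stats(clones, insert, genome)
    print(f"{name}: {cov:.2f}x coverage, P(locus present) ~ {p:.4f}")
```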
Two means of sampling sexual minority women: how different are the samples of women?
Boehmer, Ulrike; Clark, Melissa; Timm, Alison; Ozonoff, Al
2008-01-01
We compared 2 sampling approaches of sexual minority women in 1 limited geographic area to better understand the implications of these 2 sampling approaches. Sexual minority women identified through the Census did not differ on average age or the prevalence of raising children from those sampled using nonrandomized methods. Women in the convenience sample were better educated and lived in smaller households. Modeling the likelihood of disability in this population resulted in contradictory parameter estimates by sampling approach. The degree of variation observed both between sampling approaches and between different parameters suggests that the total population of sexual minority women is still unmeasured. Thoroughly constructed convenience samples will continue to be a useful sampling strategy to further research on this population.
Node-node correlations and transport properties in scale-free networks
NASA Astrophysics Data System (ADS)
Obregon, Bibiana; Guzman, Lev
2011-03-01
We study some transport properties of complex networks. We focus our attention on transport properties of scale-free and small-world networks and compare two types of transport: electric and max-flow cases. In particular, we construct scale-free networks, with a given degree sequence, to estimate the distribution of conductances for different values of assortative/dissortative mixing. For the electric case we find that the distributions of conductances are affected by the assortative mixing of the network, whereas for the max-flow case the distributions show almost no change when node-node correlations are altered. Finally, we compare local and global transport in terms of the average conductance for the small-world (Watts-Strogatz) model.
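A sketch of the kind of numerical experiment described: generate a scale-free degree sequence, wire it with the configuration model, and compare the electric (effective conductance, via resistance distance) and max-flow transport measures, along with the degree assortativity. It is an illustrative setup using networkx, not the authors' procedure for tuning assortative mixing.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
# power-law degree sequence (scale-free); force an even sum for the configuration model
deg = [max(2, int(d)) for d in nx.utils.powerlaw_sequence(500, exponent=2.5)]
if sum(deg) % 2:
    deg[0] += 1
G = nx.Graph(nx.configuration_model(deg, seed=0))   # collapse multi-edges
G.remove_edges_from(nx.selfloop_edges(G))
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
nx.set_edge_attributes(G, 1, "capacity")             # unit capacities for max-flow

nodes = list(G.nodes)
pairs = [tuple(rng.choice(nodes, 2, replace=False)) for _ in range(100)]
conductance = [1.0 / nx.resistance_distance(G, a, b) for a, b in pairs]   # electric case
maxflow = [nx.maximum_flow_value(G, a, b) for a, b in pairs]              # max-flow case

print("assortativity:", nx.degree_assortativity_coefficient(G))
print("mean conductance:", np.mean(conductance), " mean max-flow:", np.mean(maxflow))
```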
Are preservice teachers prepared to teach struggling readers?
Washburn, Erin K; Joshi, R Malatesha; Binks Cantrell, Emily
2011-06-01
Reading disabilities such as dyslexia, a specific learning disability that affects an individual's ability to process written language, are estimated to affect 15-20% of the general population. Consequently, elementary school teachers encounter students who struggle with inaccurate or slow reading, poor spelling, poor writing, and other language processing difficulties. However, recent evidence may suggest that teacher preparation programs are not providing preservice teachers with information about basic language constructs and other components related to scientifically based reading instruction. As a consequence preservice teachers have not exhibited explicit knowledge of such concepts in previous studies. Few studies have sought to assess preservice teachers' knowledge about dyslexia in conjunction with knowledge of basic language concepts. The purpose of the present study was to examine elementary school preservice teachers' knowledge of basic language constructs and their perceptions and knowledge about dyslexia. Findings from the present study suggest that preservice teachers, on average, are able to display implicit skills related to certain basic language constructs (i.e., syllable counting), but fail to demonstrate explicit knowledge of others (i.e., phonics principles). Also, preservice teachers seem to hold the common misconception that dyslexia is a visual perception deficit rather than a problem with phonological processing. Implications for future research as well as teacher preparation are discussed.
Life Cycle Energy Assessment of a Multi-storey Residential Building
NASA Astrophysics Data System (ADS)
Mehta, Sourabh; Chandur, Arjun; Palaniappan, Sivakumar
2017-06-01
This study presents the findings of life cycle energy assessment of two multi-storey residential buildings. These buildings consist of a total of 60 homes. The usable floor area is 43.14 m2 (463.36 ft2) per home. A detailed estimation of embodied energy is carried out by considering the use of materials during building construction. Major contributors of embodied energy are found to be steel, cement and aluminum. Monthly building operation energy was assessed using a total of 2520 data samples corresponding to 3 years of building operation. Analysis of a base case scenario, with 50 years of service life and average monthly operation energy, indicates that the embodied energy and the operation energy account for 16 and 84% of the life cycle energy respectively. Sensitivity analysis is carried out to study the influence of service life and operation energy on the relative contribution of embodied energy and operation energy. It is found that the embodied energy represents as high as 31% of the life cycle energy depending upon the variation in the operation energy and the service life. Hence, strategies towards sustainable building construction should also focus on reducing the embodied energy in the design and construction phases in addition to operation energy.
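The base-case split and the sensitivity described above amount to a simple share calculation, sketched below with made-up embodied and operation energy figures (not the study's values) so the effect of service life and operation-energy variation on the embodied share is visible.

```python
# Hypothetical figures (GJ) for illustration only; not the values from the study.
embodied = 8000.0            # one-off embodied energy of the building
monthly_operation = 70.0     # average monthly operation energy

for service_life in (30, 50, 75):
    for op_scale in (0.5, 1.0, 1.5):            # vary operation energy by +/-50%
        operation = monthly_operation * op_scale * 12 * service_life
        share = embodied / (embodied + operation)
        print(f"{service_life} yr, operation x{op_scale}: embodied share = {share:.0%}")
```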
Nitrogen management in reservoir catchments through constructed wetland systems.
Tunçiper, B; Ayaz, S C; Akça, L; Samsunlu, A
2005-01-01
In this study, nitrogen removal was investigated in pilot-scale subsurface flow (SSF) and in free water surface flow (FWS) constructed wetlands installed in the campus of TUBITAK-Marmara Research Center, Gebze, near Istanbul, Turkey. The main purposes of this study are to apply constructed wetlands for the protection of water reservoirs and to reuse wastewater. Experiments were carried out at continuous flow reactors. The effects of the type of plants on the removal were investigated by using emergent (Canna, Cyperus, Typha spp., Phragmites spp., Juncus, Poaceae, Paspalum and Iris), submerged (Elodea, Egeria) and floating (Pistia, Salvinia and Lemna) marsh plants at different conditions. During the study period the hydraulic loading rates (HLRs) were 30, 50, 70, 80 and 120 L m(-2)d(-1). The average annual NH4-N, NO(3)-N, organic N and TN treatment efficiencies in SSF and FWS wetlands are 81% and 68%, 37% and 49%, 75% and 68%, 47% and 53%, respectively. Nitrification, denitrification and ammonification rate constant (k20) values in SSF and FWS systems have been found as 0.898 d(-1) and 0.541 d(-1), 0.488 d(-1) and 0.502 d(-1), 0.986 d(-1) and 0.908 d(-1), respectively. Two types of models (first-order plug flow and multiple regression) were tried to estimate the system performances.
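For context, the first-order plug-flow relation referred to above is typically written C_out = C_in·exp(-kT·HRT) with kT = k20·θ^(T-20). The sketch below applies it with hypothetical influent, depth, and porosity values; it is a generic illustration, not the fitted model from this study.

```python
import numpy as np

def plug_flow_effluent(c_in, k20, hlr_l_m2_d, depth_m=0.4, porosity=0.4, temp_c=20.0, theta=1.06):
    """First-order plug-flow model: C_out = C_in * exp(-kT * HRT).
    kT = k20 * theta**(T-20); HRT [d] = water depth * porosity / hydraulic loading rate."""
    k_t = k20 * theta ** (temp_c - 20.0)
    hrt_days = depth_m * porosity * 1000.0 / hlr_l_m2_d   # 1 m3/m2 = 1000 L/m2
    return c_in * np.exp(-k_t * hrt_days)

# hypothetical example: NH4-N at 30 mg/L, k20 = 0.9 d^-1, HLR = 50 L m^-2 d^-1
print(plug_flow_effluent(30.0, 0.9, 50.0))
```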
Construction and performance of a long-term earthen liner experiment
Cartwright, Keros; Krapac, Ivan G.; Bonaparte, Rudolph
1990-01-01
In land burial schemes, compacted soil barriers with low hydraulic conductivity are commonly used in cover and liner systems to control the movement of liquids and prevent groundwater contamination. An experimental liner measuring 8 x 15 x 0.9 m was constructed with design criteria and equipment to simulate construction of soil liners built at waste disposal facilities. The surface of the liner was flooded with a 29.5 cm deep pond on April 12, 1988. Infiltration of water into the liner has been monitored for two years using 4 large-ring (1.5 m OD) and 32 small-ring (0.28 m OD) infiltrometers, and a water-balance that accounts for total infiltration and evaporation. Average long-term infiltration fluxes based on two years of monitoring are 5.8 × 10⁻⁹ cm/s, 6.0 × 10⁻⁸ cm/s and 5.6 × 10⁻⁸ cm/s for the large-ring, small-ring, and water-balance data, respectively. The saturated hydraulic conductivity of the liner based on small-ring data, estimated using Darcy's Law and the Green-Ampt Approximation, is 3 × 10⁻⁸ and 4 × 10⁻⁸ cm/s, respectively. All sets of data indicate that the liner's performance exceeds that required by the U.S. EPA.
Imaging the Lower Crust and Moho Beneath Long Beach, CA Using Autocorrelations
NASA Astrophysics Data System (ADS)
Clayton, R. W.
2017-12-01
Three-dimensional images of the lower crust and Moho in a 10x10 km region beneath Long Beach, CA are constructed from autocorrelations of ambient noise. The results show the Moho at a depth of 15 km at the coast and dipping at 45 degrees inland to a depth of 25 km. The shape of the Moho interface is irregular in both the coast perpendicular and parallel directions. The lower crust appears as a zone of enhanced reflectivity with numerous small-scale structures. The autocorrelations are constructed from virtual source gathers computed from the dense Long Beach array used in the Lin et al. (2013) study. All near zero-offset traces within a 200 m disk are stacked to produce a single autocorrelation at that point. The stack typically is over 50-60 traces. To convert the autocorrelation to reflectivity as in Claerbout (1968), the noise source autocorrelation, which is estimated as the average of all autocorrelations, is subtracted from each trace. The subsurface image is then constructed with a 0.1-2 Hz filter and AGC scaling. The main features of the image are confirmed with broadband receiver functions from the LASSIE survey (Ma et al., 2016). The use of stacked autocorrelations extends ambient noise imaging into the lower crust.
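The processing chain described (local stacking, subtraction of the array-average autocorrelation as a source estimate, 0.1-2 Hz band-pass, AGC) can be sketched in a few lines of NumPy/SciPy. The array shapes, sampling rate, and AGC window below are placeholders, and the random input merely stands in for real autocorrelation gathers.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def image_point(local_autocorrs, array_mean_autocorr, fs, fmin=0.1, fmax=2.0, agc_win_s=5.0):
    """Turn stacked autocorrelations into a reflectivity-style trace (Claerbout-style).

    local_autocorrs     : (n_traces, n_samples) autocorrelations near one surface point
    array_mean_autocorr : (n_samples,) average autocorrelation over the whole array,
                          used as an estimate of the noise-source autocorrelation
    """
    stack = local_autocorrs.mean(axis=0) - array_mean_autocorr   # remove source signature
    b, a = butter(4, [fmin / (fs / 2), fmax / (fs / 2)], btype="band")
    trace = filtfilt(b, a, stack)
    win = max(1, int(agc_win_s * fs))
    rms = np.sqrt(np.convolve(trace ** 2, np.ones(win) / win, mode="same")) + 1e-12
    return trace / rms                                           # simple AGC scaling

# toy usage with random data (stand-in for the 50-60 traces per 200 m disk)
fs = 20.0
local = np.random.randn(55, 2400)
array_mean = np.random.randn(2400) * 0.1
print(image_point(local, array_mean, fs).shape)
```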
Lacourt, Aude; Pintos, Javier; Lavoué, Jérôme; Richardson, Lesley; Siemiatycki, Jack
2015-09-22
Given the large number of workers in the construction industry, it is important to derive accurate and valid estimates of cancer risk, and in particular lung cancer risk. In most previous studies, risks among construction workers were compared with general populations including blue and white collar workers. The main objectives of this study were to assess whether construction workers experience excess lung cancer risk, and whether exposure to selected construction industry exposures carries excess risks. We wished to address these objectives within the sub-population of blue collar workers. Two case-control studies were conducted in Montreal. Combined, they included 1593 lung cancer cases and 1427 controls, of whom 1304 cases and 1081 controls had been blue collar workers. Detailed lifetime job histories were obtained and translated by experts into histories of exposure to chemical agents. The two key analyses were to estimate odds ratio (OR) estimates of lung cancer risk: a) for all blue-collar construction workers compared with other blue-collar workers, and b) for construction workers exposed to each of 20 exposure agents found in the construction industry compared with construction workers unexposed to those agents. All analyses were conducted using unconditional logistic regression adjusted for socio-demographic factors and smoking history. The OR for all construction workers combined was 1.11 (95 % CI: 0.90-1.38), based on 381 blue collar construction workers. Analyses of specific exposures were hampered by small numbers and imprecise estimates. While none of 20 occupational agents examined was significantly associated with lung cancer, the following agents manifested non-significantly elevated ORs: asbestos, silica, Portland cement, soil dust, calcium oxide and calcium sulfate. Compared with other blue collar workers, there was only a slight increased risk of lung cancer for subjects who ever held an occupation in the construction industry. The analyses of agents within the construction industry produced imprecise estimates of risk, but nevertheless pointed to some plausible associations. Excess risks for asbestos and silica were in line with previous knowledge. The possible excess risks with the other inorganic dusts require further corroboration.
On decentralized estimation. [for large linear systems
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Vukcevic, M. B.
1978-01-01
A multilevel scheme is proposed to construct decentralized estimators for large linear systems. The scheme is numerically attractive since only observability tests of low-order subsystems are required. Equally important is the fact that the constructed estimators are reliable under structural perturbations and can tolerate a wide range of nonlinearities in coupling among the subsystems.
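The low-order observability tests the scheme relies on are just Kalman rank checks applied subsystem by subsystem, as in the sketch below; the two subsystem matrices are hypothetical examples.

```python
import numpy as np

def is_observable(A, C):
    """Kalman rank test on the observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    obs = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(obs) == n

# two hypothetical low-order subsystems of a larger interconnected system
A1 = np.array([[0.0, 1.0], [-2.0, -3.0]]); C1 = np.array([[1.0, 0.0]])
A2 = np.array([[-1.0, 0.5], [1.0, -4.0]]); C2 = np.array([[0.0, 1.0]])
print(is_observable(A1, C1), is_observable(A2, C2))
```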
Joint channel/frequency offset estimation and correction for coherent optical FBMC/OQAM system
NASA Astrophysics Data System (ADS)
Wang, Daobin; Yuan, Lihua; Lei, Jingli; wu, Gang; Li, Suoping; Ding, Runqi; Wang, Dongye
2017-12-01
In this paper, we focus on analysis of the preamble-based joint estimation for channel and laser-frequency offset (LFO) in coherent optical filter bank multicarrier systems with offset quadrature amplitude modulation (CO-FBMC/OQAM). In order to reduce the noise impact on the estimation accuracy, we proposed an estimation method based on inter-frame averaging. This method averages the cross-correlation function of real-valued pilots within multiple FBMC frames. The laser-frequency offset is estimated according to the phase of this average. After correcting LFO, the final channel response is also acquired by averaging channel estimation results within multiple frames. The principle of the proposed method is analyzed theoretically, and the preamble structure is thoroughly designed and optimized to suppress the impact of inherent imaginary interference (IMI). The effectiveness of our method is demonstrated numerically using different fiber and LFO values. The obtained results show that the proposed method can improve transmission performance significantly.
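As a generic illustration of estimating a frequency offset from the phase of an averaged correlation, the sketch below correlates two repeated pilot blocks in each frame, sums the correlations across frames (inter-frame averaging), and reads the offset off the resulting phase. This is a simplified stand-in for the preamble design in the paper; the block length, pilot, and offset are assumed values.

```python
import numpy as np

def estimate_cfo(frames, d):
    """Normalized frequency offset (cycles/sample) from repeated pilot blocks.

    frames: received complex baseband frames, each containing two identical pilot
    blocks whose starts are separated by d samples. Averaging the per-frame
    correlations before taking the angle suppresses noise (inter-frame averaging)."""
    corr = 0j
    for r in frames:
        block1, block2 = r[:d], r[d:2 * d]
        corr += np.vdot(block1, block2)        # sum of conj(block1) * block2
    return np.angle(corr) / (2 * np.pi * d)

# toy check: synthesize frames with a known offset of 1e-3 cycles/sample
rng = np.random.default_rng(3)
true_eps, d = 1e-3, 128
pilot = np.exp(1j * 2 * np.pi * rng.random(d))
frames = []
for _ in range(10):
    tx = np.tile(pilot, 2)
    n = np.arange(2 * d)
    noise = 0.05 * (rng.standard_normal(2 * d) + 1j * rng.standard_normal(2 * d))
    frames.append(tx * np.exp(1j * 2 * np.pi * true_eps * n) + noise)
print(estimate_cfo(frames, d))   # ~ 1e-3
```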
Reliability of mobile systems in construction
NASA Astrophysics Data System (ADS)
Narezhnaya, Tamara; Prykina, Larisa
2017-10-01
The purpose of the article is to analyze the influence of the mobility of construction production, taking into account the properties of reliability and readiness. Based on the studied systems, their effectiveness and efficiency are estimated. The construction system is considered to be the complete organizational structure providing creation or updating of construction facilities. At the same time, the production sphere of these systems includes production on the building site itself, the material and technical resources of construction production, and live labour in these spheres within the dynamics of construction. The author concludes that estimating the degree of mobility of construction production systems has a strong positive effect on the project.
NASA Astrophysics Data System (ADS)
Yanai, R. D.; Bae, K.; Levine, C. R.; Lilly, P.; Vadeboncoeur, M. A.; Fatemi, F. R.; Blum, J. D.; Arthur, M.; Hamburg, S.
2013-12-01
Ecosystem nutrient budgets are difficult to construct and even more difficult to replicate. As a result, uncertainty in the estimates of pools and fluxes is rarely reported, and opportunities to assess confidence through replicated measurements are rare. In this study, we report nutrient concentrations and contents of soil and biomass pools in northern hardwood stands in replicate plots within replicate stands in 3 age classes (14-19 yr, 26-29 yr, and > 100 yr) at the Bartlett Experimental Forest, USA. Soils were described by quantitative soil pits in three plots per stand, excavated by depth increment to the C horizon and analyzed by a sequential extraction procedure. Variation in soil mass among pits within stands averaged 28% (coefficient of variation); variation among stands within an age class ranged from 9-25%. Variation in nutrient concentrations was higher still (averaging 38%, within element, depth increment, and extraction type), perhaps because the depth increments contained varying proportions of genetic horizons. To estimate nutrient contents of aboveground biomass, we propagated model uncertainty through allometric equations, and found errors ranging from 3-7%, depending on the stand. The variation in biomass among plots within stands (6-19%) was always larger than the allometric uncertainties. Measured nutrient concentrations of tree tissues were more variable than the uncertainty in biomass. Foliage had the lowest variability (averaging 16% for Ca, Mg, K, N and P within age class and species), and wood had the highest (averaging 30%), when reported in proportion to the mean, because concentrations in wood are low. For Ca content of aboveground biomass, sampling variation was the greatest source of uncertainty. Coefficients of variation among plots within a stand averaged 16%; stands within an age class ranged from 5-25% CV, including uncertainties in tree allometry and tissue chemistry. Uncertainty analysis can help direct research effort to areas most in need of improvement. In systems such as the one we studied, more intensive sampling would be the best approach to reducing uncertainty, as natural spatial variation was higher than model or measurement uncertainties.
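One way to propagate allometric and tissue-chemistry uncertainty of the kind described is a simple Monte Carlo loop over parameter draws, sketched below. The allometric coefficients, their standard errors, and the concentration CV are placeholders, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(7)
dbh_cm = rng.uniform(10, 40, 200)                 # tree diameters on one plot (hypothetical)

def stand_ca_content(n_draws=5000):
    """Propagate allometric-parameter and Ca-concentration uncertainty to plot Ca content."""
    out = np.empty(n_draws)
    for i in range(n_draws):
        # ln(biomass) = a + b*ln(DBH); parameter values and SEs are placeholders
        a = rng.normal(-2.0, 0.05)
        b = rng.normal(2.4, 0.03)
        biomass_kg = np.exp(a + b * np.log(dbh_cm))
        ca_conc = rng.normal(1.0, 0.16)           # g Ca per kg biomass, ~16% CV
        out[i] = (biomass_kg * ca_conc).sum() / 1000.0   # kg Ca on the plot
    return out

draws = stand_ca_content()
print(f"plot Ca: {draws.mean():.1f} kg, CV = {draws.std() / draws.mean():.1%}")
```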
Structural Equation Modeling: A Framework for Ocular and Other Medical Sciences Research
Christ, Sharon L.; Lee, David J.; Lam, Byron L.; Diane, Zheng D.
2017-01-01
Structural equation modeling (SEM) is a modeling framework that encompasses many types of statistical models and can accommodate a variety of estimation and testing methods. SEM has been used primarily in social sciences but is increasingly used in epidemiology, public health, and the medical sciences. SEM provides many advantages for the analysis of survey and clinical data, including the ability to model latent constructs that may not be directly observable. Another major feature is simultaneous estimation of parameters in systems of equations that may include mediated relationships, correlated dependent variables, and in some instances feedback relationships. SEM allows for the specification of theoretically holistic models because multiple and varied relationships may be estimated together in the same model. SEM has recently expanded by adding generalized linear modeling capabilities that include the simultaneous estimation of parameters of different functional form for outcomes with different distributions in the same model. Therefore, mortality modeling and other relevant health outcomes may be evaluated. Random effects estimation using latent variables has been advanced in the SEM literature and software. In addition, SEM software has increased estimation options. Therefore, modern SEM is quite general and includes model types frequently used by health researchers, including generalized linear modeling, mixed effects linear modeling, and population average modeling. This article does not present any new information. It is meant as an introduction to SEM and its uses in ocular and other health research. PMID:24467557
NASA Astrophysics Data System (ADS)
Liang, Liang; Martin, Caitlin; Wang, Qian; Sun, Wei; Duncan, James
2016-03-01
Aortic valve (AV) disease is a significant cause of morbidity and mortality. The preferred treatment modality for severe AV disease is surgical resection and replacement of the native valve with either a mechanical or tissue prosthetic. In order to develop effective and long-lasting treatment methods, computational analyses, e.g., structural finite element (FE) and computational fluid dynamic simulations, are very effective for studying valve biomechanics. These computational analyses are based on mesh models of the aortic valve, which are usually constructed from 3D CT images through many hours of manual annotation, and therefore an automatic valve shape reconstruction method is desired. In this paper, we present a method for estimating the aortic valve shape from 3D cardiac CT images, which is represented by triangle meshes. We propose a pipeline for aortic valve shape estimation which includes novel algorithms for building local shape dictionaries and for building landmark detectors and curve detectors using local shape dictionaries. The method is evaluated on a real patient image dataset using a leave-one-out approach and achieves an average accuracy of 0.69 mm. The work will facilitate automatic patient-specific computational modeling of the aortic valve.
Peck, Jay; Oluwole, Oluwayemisi O; Wong, Hsi-Wu; Miake-Lye, Richard C
2013-03-01
To provide accurate input parameters to the large-scale global climate simulation models, an algorithm was developed to estimate the black carbon (BC) mass emission index for engines in the commercial fleet at cruise. Using a high-dimensional model representation (HDMR) global sensitivity analysis, relevant engine specification/operation parameters were ranked, and the most important parameters were selected. Simple algebraic formulas were then constructed based on those important parameters. The algorithm takes the cruise power (alternatively, fuel flow rate), altitude, and Mach number as inputs, and calculates BC emission index for a given engine/airframe combination using the engine property parameters, such as the smoke number, available in the International Civil Aviation Organization (ICAO) engine certification databank. The algorithm can be interfaced with state-of-the-art aircraft emissions inventory development tools, and will greatly improve the global climate simulations that currently use a single fleet average value for all airplanes. An algorithm to estimate the cruise condition black carbon emission index for commercial aircraft engines was developed. Using the ICAO certification data, the algorithm can evaluate the black carbon emission at given cruise altitude and speed.
A method for estimating fall adult sex ratios from production and survival data
Wight, H.M.; Heath, R.G.; Geis, A.D.
1965-01-01
This paper presents a method of utilizing data relating to the production and survival of a bird population to estimate a basic fall adult sex ratio. This basic adult sex ratio is an average value derived from average production and survival rates. It is an estimate of the average sex ratio about which the fall adult ratios will fluctuate according to annual variations in production and survival. The basic fall adult sex ratio has been calculated as an asymptotic value which is the limit of an infinite series wherein average population characteristics are used as constants. Graphs are provided that allow the determination of basic sex ratios from production and survival data of a population. Where the respective asymptote has been determined, it may be possible to estimate various production and survival rates by use of variations of the formula for estimating the asymptote.
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effects size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
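A sketch of the standardization step discussed above: compute each predictor's variance inflation factor, form a partial standard deviation of the form s_j·sqrt(1/VIF_j)·sqrt((n-1)/(n-p)) (the Bring-style quantity referred to here; treat the exact degrees-of-freedom factor as an assumption), and scale the fitted coefficients by it. The toy data are illustrative.

```python
import numpy as np

def partial_sd(X):
    """Partial standard deviations of predictors, used to put regression
    coefficients on commensurate scales under multicollinearity."""
    n, p = X.shape
    sds = X.std(axis=0, ddof=1)
    vif = np.empty(p)
    for j in range(p):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ coef
        r2 = 1.0 - resid.var() / X[:, j].var()
        vif[j] = 1.0 / (1.0 - r2)
    return sds * np.sqrt(1.0 / vif) * np.sqrt((n - 1) / (n - p))

# toy data with strongly collinear predictors
rng = np.random.default_rng(5)
x1 = rng.normal(size=300)
x2 = 0.9 * x1 + 0.3 * rng.normal(size=300)
X = np.column_stack([x1, x2])
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=300)
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(300), X]), y, rcond=None)
print("coefficients standardized by partial SD:", beta[1:] * partial_sd(X))
```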
Begum, Rawshan Ara; Siwar, Chamhuri; Pereira, Joy Jacqueline; Jaafar, Abdul Hamid
2007-01-01
Malaysia is facing an increase in the generation of waste and of accompanying problems with the disposal of this waste. In the last two decades, extensive building and infrastructure development projects have led to an increase in the generation of construction waste material. The construction industry has a substantial impact on the environment, and its environmental effects are in direct relation to the quality and quantity of the waste it generates. This paper discusses general characteristics of the construction contractors, the contractors' willingness to pay (WTP) for improved construction waste management, determining factors which affect the amount of their willingness to pay, and suggestions and policy implications in the perspective of construction waste management in Malaysia. The data in this study is based on contractors registered with the construction industry development board (CIDB) of Malaysia. Employing the open ended contingent valuation method, the study assessed the contractors' average maximum WTP for improved construction waste management to be RM69.88 (1 US$ = 3.6 RM) per tonne of waste. The result shows that the average maximum WTP is higher for large contractors than for medium and small contractors. The highest average maximum WTP value is RM88.00 for Group A (large contractors), RM78.25 for Group B (medium-size contractors), and RM55.80 for Group C (small contractors). One of the contributions of this study is to highlight how the WTP for improved construction waste management differs by CIDB registration grade. It is found that contractors' WTP for improved waste collection and disposal services increases with the increase in contractors' current paid up capital. The identified factors and determinants of the WTP will assist the formulation of appropriate policies in addressing the construction waste problem in Malaysia and indirectly improve the quality of construction in the country.
NASA Technical Reports Server (NTRS)
Whitney, J. M.
1983-01-01
The notch strength of composites is discussed. The point stress and average stress criteria relate the notched strength of a laminate to the average strength of a relatively long tensile coupon. Tests of notched specimens in which microstrain gages have been placed at or near the edges of the holes have measured strains much larger than those measured in an unnotched tensile coupon. Orthotropic stress concentration analyses of failed notched laminates have also indicated that failure occurred at strains much larger than those experienced on tensile coupons with normal gage lengths. This suggests that the high strains at the edge of a hole can be related to the very short length of fiber subjected to these strains. Lockheed has attempted to correlate a series of tests of several laminates with holes ranging from 0.19 to 0.50 in. Although the average stress criterion correlated well with test results for hole sizes equal to or greater than 0.50 in., it over-estimated the laminate strength in the range of hole sizes from 0.19 to 0.38 in. It thus appears that a theory is needed that is based on the mechanics of failure and is more generally applicable to the range of hole sizes and the varieties of laminates found in aircraft construction.
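For reference, the closed-form average stress criterion for a circular hole in an infinite orthotropic plate (the Whitney-Nuismer form) can be evaluated directly; the characteristic length a0 and the stress concentration factor in the example below are hypothetical, not Lockheed's fitted values.

```python
def notched_strength_ratio(radius, a0, kt_inf):
    """Average stress criterion for a circular hole in an infinite orthotropic
    laminate: ratio of notched to unnotched strength."""
    xi = radius / (radius + a0)
    denom = 2 - xi**2 - xi**4 + (kt_inf - 3.0) * (xi**6 - xi**8)
    return 2.0 * (1.0 - xi) / denom

# hypothetical laminate: Kt_inf = 3.0, characteristic length a0 = 0.15 in
for d_hole in (0.19, 0.25, 0.38, 0.50):
    r = d_hole / 2.0
    print(f"hole {d_hole:.2f} in: sigma_N/sigma_0 = {notched_strength_ratio(r, 0.15, 3.0):.2f}")
```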
Li, Jining; Kosugi, Tomoya; Riya, Shohei; Hashimoto, Yohey; Hou, Hong; Terada, Akihiko; Hosomi, Masaaki
2018-01-01
Leaching of hazardous trace elements from excavated urban soils during construction of cities has received considerable attention in recent years in Japan. A new concept, the pollution potential leaching index (PPLI), was applied to assess the risk of arsenic (As) leaching from excavated soils. Sequential leaching tests (SLT) with two liquid-to-solid (L/S) ratios (10 and 20 L kg⁻¹) were conducted to determine the PPLI values, which represent the critical cumulative L/S ratios at which the average As concentrations in the cumulative leachates are reduced to critical values (10 or 5 µg L⁻¹). Two models (a logarithmic function model and an empirical two-site first-order leaching model) were compared to estimate the PPLI values. The fractionations of As before and after SLT were extracted according to a five-step sequential extraction procedure. Ten alkaline excavated soils were obtained from different construction projects in Japan. Although their total As contents were low (from 6.75 to 79.4 mg kg⁻¹), the As leaching was not negligible. Different L/S ratios at each step of the SLT had little influence on the cumulative As release or PPLI values. Experimentally determined PPLI values were in agreement with those from model estimations. A five-step SLT with an L/S of 10 L kg⁻¹ at each step, combined with a logarithmic function fitting was suggested for the easy estimation of PPLI. Results of the sequential extraction procedure showed that large portions of more labile As fractions (non-specifically and specifically sorbed fractions) were removed during long-term leaching and so were small, but non-negligible, portions of strongly bound As fractions. Copyright © 2017 Elsevier Inc. All rights reserved.
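A sketch of the logarithmic-function route to the PPLI: fit cumulative release versus cumulative L/S with a·ln(L/S) + b, then solve for the L/S at which the cumulative average concentration (release divided by L/S) drops to the critical value. The leaching data below are synthetic and the bracket for the root search is an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

# synthetic sequential-leaching data: cumulative L/S (L/kg) vs cumulative As release (ug/kg)
ls = np.array([10, 20, 30, 40, 50], dtype=float)
release = np.array([180, 260, 310, 345, 370], dtype=float)   # hypothetical values

def log_model(x, a, b):
    return a * np.log(x) + b

(a, b), _ = curve_fit(log_model, ls, release)

c_crit = 5.0   # ug/L critical average concentration
# PPLI: cumulative L/S where (cumulative release)/(cumulative L/S) falls to c_crit;
# the search bracket extrapolates the fitted model beyond the measured range
f = lambda x: log_model(x, a, b) / x - c_crit
ppli = brentq(f, ls[0], 1e4)
print(f"PPLI ~ {ppli:.0f} L/kg")
```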
Creating "Intelligent" Ensemble Averages Using a Process-Based Framework
NASA Astrophysics Data System (ADS)
Baker, Noel; Taylor, Patrick
2014-05-01
The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and several radiative forcing Representative Concentration Pathway (RCP) scenarios. Ultimately, the goal of the framework is to advise better methods for ensemble averaging models and create better climate predictions.
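One simple way to turn a process-metric score into an unequal-weight ensemble average, purely as an illustration of the idea (the actual weighting scheme in the framework is not specified here): weight each model inversely to its squared metric error and compare with the equal-weight mean.

```python
import numpy as np

rng = np.random.default_rng(11)
n_models, n_grid = 12, 1000
projections = rng.normal(3.0, 0.8, size=(n_models, n_grid))   # e.g. projected warming fields
metric_error = rng.uniform(0.2, 2.0, size=n_models)            # process-metric error per model

equal_mean = projections.mean(axis=0)

# hypothetical weighting choice: inverse squared metric error, normalized to sum to 1
w = 1.0 / metric_error**2
w /= w.sum()
weighted_mean = (w[:, None] * projections).sum(axis=0)

print("max |weighted - equal| over the grid:", np.max(np.abs(weighted_mean - equal_mean)))
```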
A character network study of two Sci-Fi TV series
NASA Astrophysics Data System (ADS)
Tan, M. S. A.; Ujum, E. A.; Ratnavelu, K.
2014-03-01
This work is an analysis of the character networks in two science fiction television series: Stargate and Star Trek. These networks are constructed on the basis of scene co-occurrence between characters to indicate the presence of a connection. Global network structure measures such as the average path length, graph density, network diameter, average degree, median degree, maximum degree, and average clustering coefficient are computed as well as individual node centrality scores. The two fictional networks constructed are found to be quite similar in structure which is astonishing given that Stargate only ran for 18 years in comparison to the 48 years for Star Trek.
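The global measures listed can be computed directly from a scene co-occurrence edge list with networkx, as in the sketch below; the character pairs shown are a tiny made-up example, not the series' actual networks.

```python
import networkx as nx
import statistics

# hypothetical scene co-occurrence edges (characters appearing in a scene together)
edges = [("O'Neill", "Carter"), ("O'Neill", "Jackson"), ("Carter", "Jackson"),
         ("Carter", "Teal'c"), ("Jackson", "Teal'c"), ("Hammond", "O'Neill")]
G = nx.Graph(edges)

degrees = [d for _, d in G.degree()]
print("average path length:", nx.average_shortest_path_length(G))
print("density:", nx.density(G))
print("diameter:", nx.diameter(G))
print("average degree:", sum(degrees) / len(degrees))
print("median degree:", statistics.median(degrees), " max degree:", max(degrees))
print("average clustering:", nx.average_clustering(G))
```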
A profile of the nonresidential nonbuilding construction market for lumber and plywood
H. N. Spelter
Estimates of the amounts of lumber and plywood used in constructing nonresidential nonbuilding structures in 1982 are presented. The market is stratified by six construction types. Lumber and plywood use is stratified by two end-use categories. Total lumber use is estimated at 507 million board feet. Total plywood use is estimated at 362 million square feet (3/8-in. basis).
Military Construction and Family Housing Program. Fiscal Year (FY) 2001 Budget Estimates
2000-02-01
Justification data for the Department of the Air Force Military Construction and Military Family Housing Program, Fiscal Year (FY) 2001 budget estimates: appropriation and authorization request summary (dollars in thousands), tabulated by state/country, installation, and title.
Fontana, Marianna; Asaria, Perviz; Moraldo, Michela; Finegold, Judith; Hassanally, Khalil; Manisty, Charlotte H; Francis, Darrel P
2014-06-17
Primary prevention guidelines focus on risk, often assuming negligible aversion to medication, yet most patients discontinue primary prevention statins within 3 years. We quantify real-world distribution of medication disutility and separately calculate the average utilities for a range of risk strata. We randomly sampled 360 members of the general public in London. Medication aversion was quantified as the gain in lifespan required by each individual to offset the inconvenience (disutility) of taking an idealized daily preventative tablet. In parallel, we constructed tables of expected gain in lifespan (utility) from initiating statin therapy for each age group, sex, and cardiovascular risk profile in the population. This allowed comparison of the widths of the distributions of medication disutility and of group-average expectation of longevity gain. Observed medication disutility ranged from 1 day to >10 years of life being required by subjects (median, 6 months; interquartile range, 1-36 months) to make daily preventative therapy worthwhile. Average expected longevity benefit from statins at ages ≥50 years ranges from 3.6 months (low-risk women) to 24.3 months (high-risk men). We can no longer assume that medication disutility is almost zero. Over one-quarter of subjects had disutility exceeding the group-average longevity gain from statins expected even for the highest-risk (ie, highest-gain) group. Future primary prevention studies might explore medication disutility in larger populations. Patients may differ more in disutility than in prospectively definable utility (which provides only group-average estimates). Consultations could be enriched by assessing disutility and exploring its reasons. © 2014 American Heart Association, Inc.
Comparative dynamics of avian communities across edges and interiors of North American ecoregions
Karanth, K.K.; Nichols, J.D.; Sauer, J.R.; Hines, J.E.
2006-01-01
Aim: Based on a priori hypotheses, we developed predictions about how avian communities might differ at the edges vs. interiors of ecoregions. Specifically, we predicted lower species richness and greater local turnover and extinction probabilities for regional edges. We tested these predictions using North American Breeding Bird Survey (BBS) data across nine ecoregions over a 20-year time period. Location: Data from 2238 BBS routes within nine ecoregions of the United States were used. Methods: The estimation methods used accounted for species detection probabilities < 1. Parameter estimates for species richness, local turnover and extinction probabilities were obtained using the program COMDYN. We examined the difference in community-level parameters estimated from within exterior edges (the habitat interface between ecoregions), interior edges (the habitat interface between two bird conservation regions within the same ecoregion) and interior (habitat excluding interfaces). General linear models were constructed to examine sources of variation in community parameters for five ecoregions (containing all three habitat types) and all nine ecoregions (containing two habitat types). Results: Analyses provided evidence that interior habitats and interior edges had on average higher bird species richness than exterior edges, providing some evidence of reduced species richness near habitat edges. Lower average extinction probabilities and turnover rates in interior habitats (five-region analysis) provided some support for our predictions about these quantities. However, analyses directed at all three response variables, i.e. species richness, local turnover, and local extinction probability, provided evidence of an interaction between habitat and region, indicating that the relationships did not hold in all regions. Main conclusions: The overall predictions of lower species richness, higher local turnover and extinction probabilities in regional edge habitats, as opposed to interior habitats, were generally supported. However, these predicted tendencies did not hold in all regions.
NASA Astrophysics Data System (ADS)
Hallez, Hans; Staelens, Steven; Lemahieu, Ignace
2009-10-01
EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.
Hough, S.E.; Page, M.
2011-01-01
At the heart of the conundrum of seismogenesis in the New Madrid Seismic Zone is the apparently substantial discrepancy between low strain rate and high recent seismic moment release. In this study we revisit the magnitudes of the four principal 1811–1812 earthquakes using intensity values determined from individual assessments from four experts. Using these values and the grid search method of Bakun and Wentworth (1997), we estimate magnitudes around 7.0 for all four events, values that are significantly lower than previously published magnitude estimates based on macroseismic intensities. We further show that the strain rate predicted from postglacial rebound is sufficient to produce a sequence with the moment release of one Mmax6.8 every 500 years, a rate that is much lower than previous estimates of late Holocene moment release. However, Mw6.8 is at the low end of the uncertainty range inferred from analysis of intensities for the largest 1811–1812 event. We show that Mw6.8 is also a reasonable value for the largest main shock given a plausible rupture scenario. One can also construct a range of consistent models that permit a somewhat higher Mmax, with a longer average recurrence rate. It is thus possible to reconcile predicted strain and seismic moment release rates with alternative models: one in which 1811–1812 sequences occur every 500 years, with the largest events being Mmax∼6.8, or one in which sequences occur, on average, less frequently, with Mmax of ∼7.0. Both models predict that the late Holocene rate of activity will continue for the next few to 10 thousand years.
Webb, R.M.T.; Wieczorek, M.E.; Nolan, B.T.; Hancock, T.C.; Sandstrom, M.W.; Barbash, J.E.; Bayless, E.R.; Healy, R.W.; Linard, J.
2008-01-01
Pesticide leaching through variably thick soils beneath agricultural fields in Morgan Creek, Maryland was simulated for water years 1995 to 2004 using LEACHM (Leaching Estimation and Chemistry Model). Fifteen individual models were constructed to simulate five depths and three crop rotations with associated pesticide applications. Unsaturated zone thickness averaged 4.7 m but reached a maximum of 18.7 m. Average annual recharge to ground water decreased from 15.9 to 11.1 cm as the unsaturated zone increased in thickness from 1 to 10 m. These point estimates of recharge are at the lower end of previously published values, which used methods that integrate over larger areas capturing focused recharge in the numerous detention ponds in the watershed. The total amount of applied and leached masses for five parent pesticide compounds and seven metabolites were estimated for the 32-km² Morgan Creek watershed by associating each hectare to the closest one-dimensional model analog of model depth and crop rotation scenario as determined from land-use surveys. LEACHM parameters were set such that branched, serial, first-order decay of pesticides and metabolites was realistically simulated. Leaching is predicted to be greatest for shallow soils and for persistent compounds with low sorptivity. Based on simulation results, percent parent compounds leached within the watershed can be described by a regression model of the form e^(−depth)·(a·ln t½ − b·ln K_OC), where t½ is the degradation half-life in aerobic soils, K_OC is the organic carbon normalized sorption coefficient, and a and b are fitted coefficients (R² = 0.86, p-value = 7 × 10⁻⁹).
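A minimal sketch of how the reported regression form could be evaluated, assuming the multiplicative reading e^(−depth)·(a·ln t½ − b·ln K_OC); the coefficients a and b below are illustrative placeholders, not the fitted values from the study:

```python
import numpy as np

# Hedged sketch of the reported regression form for percent of applied parent
# compound leached: exp(-depth) * (a*ln(t_half) - b*ln(Koc)).  The coefficients
# are illustrative placeholders, not the study's fitted values.
a, b = 12.0, 3.0

def percent_leached(depth_m, t_half_days, koc):
    return np.exp(-depth_m) * (a * np.log(t_half_days) - b * np.log(koc))

# Shallow soil and a persistent, weakly sorbed compound leach the most
for depth in (1.0, 5.0, 10.0):
    print(depth, round(percent_leached(depth, t_half_days=60.0, koc=100.0), 3))
```

With this reading, predicted leaching increases with half-life, decreases with K_OC, and decays with unsaturated zone thickness, consistent with the qualitative result reported above.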
Modeling particle number concentrations along Interstate 10 in El Paso, Texas
Olvera, Hector A.; Jimenez, Omar; Provencio-Vasquez, Elias
2014-01-01
Annual average daily particle number concentrations around a highway were estimated with an atmospheric dispersion model and a land use regression model. The dispersion model was used to estimate particle concentrations along Interstate 10 at 98 locations within El Paso, Texas. This model employed annual averaged wind speed and annual average daily traffic counts as inputs. A land use regression model with vehicle kilometers traveled as the predictor variable was used to estimate local background concentrations away from the highway to adjust the near-highway concentration estimates. Estimated particle number concentrations ranged between 9.8 × 10³ particles/cc and 1.3 × 10⁵ particles/cc, and averaged 2.5 × 10⁴ particles/cc (SE 421.0). Estimates were compared against values measured at seven sites located along I10 throughout the region. The average fractional error was 6% and ranged between -1% and -13% across sites. The largest bias of -13% was observed at a semi-rural site where traffic was lowest. The average bias amongst urban sites was 5%. The accuracy of the estimates depended primarily on the emission factor and the adjustment to local background conditions. An emission factor of 1.63 × 10¹⁴ particles/veh-km was based on a value proposed in the literature and adjusted with local measurements. The integration of the two modeling techniques ensured that the particle number concentration estimates captured the impact of traffic along both the highway and arterial roadways. The performance and economical aspects of the two modeling techniques used in this study show that producing particle concentration surfaces along major roadways would be feasible in urban regions where traffic and meteorological data are readily available. PMID:25313294
Jafaruddin; Indratno, Sapto W; Nuraini, Nuning; Supriatna, Asep K; Soewono, Edy
2015-01-01
Estimating the basic reproductive ratio ℛ₀ of dengue fever has continued to be an ever-increasing challenge among epidemiologists. In this paper we propose two different constructions to estimate ℛ₀, both derived from a dynamical system of a host-vector dengue transmission model. The construction is based on the original assumption that in the early stages of an epidemic the infected human compartment increases exponentially at the same rate as the infected mosquito compartment (previous work). In the first proposed construction, we modify previous works by assuming that the rates of infection for the mosquito and human compartments might be different. In the second construction, we add an improvement by including more realistic conditions, in which the dynamics of the infected human compartment are influenced by the dynamics of the infected mosquito compartment, and vice versa. We apply our construction to real dengue epidemic data from SB Hospital, Bandung, Indonesia, during the outbreak period Nov. 25, 2008-Dec. 2012. We also propose two scenarios to determine the take-off rate of infection at the beginning of a dengue epidemic for constructing the estimates of ℛ₀: scenario I uses the equation of new cases of dengue with respect to time (daily), and scenario II uses the equation of new cases of dengue with respect to the cumulative number of new cases. The results show that our first construction of ℛ₀ accommodates the take-off rate differences between mosquitoes and humans. Our second construction of the ℛ₀ estimate takes into account the presence of infective mosquitoes in the early growth rate of infective humans and vice versa. We conclude that the second approach is more realistic, compared with our first approach and the previous work.
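As a hedged illustration of the first step in either scenario, the sketch below estimates the take-off rate by fitting the logarithm of daily new cases during the early growth phase (in the spirit of scenario I); the case counts are hypothetical, and converting the rate into ℛ₀ still requires the model parameters of the paper's constructions.

```python
import numpy as np

def take_off_rate(new_cases, t=None):
    """Estimate the early-epidemic take-off rate (per day) by fitting
    log(new cases) ~ intercept + rate * t over the initial growth phase."""
    new_cases = np.asarray(new_cases, dtype=float)
    if t is None:
        t = np.arange(new_cases.size)
    mask = new_cases > 0                       # log requires positive counts
    rate, intercept = np.polyfit(t[mask], np.log(new_cases[mask]), 1)
    return rate

# Hypothetical daily case counts from the first two weeks of an outbreak
cases = [2, 3, 3, 5, 7, 9, 13, 17, 24, 31, 42, 55, 74, 98]
print(f"estimated take-off rate: {take_off_rate(cases):.3f} per day")
```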
NASA Astrophysics Data System (ADS)
Meng, Deyuan; Tao, Guoliang; Liu, Hao; Zhu, Xiaocong
2014-07-01
Friction compensation is particularly important for motion trajectory tracking control of pneumatic cylinders at low speed movement. However, most of the existing model-based friction compensation schemes use simple classical models, which are not enough to address applications with high-accuracy position requirements. Furthermore, the friction force in the cylinder is time-varying, and there exist rather severe unmodelled dynamics and unknown disturbances in the pneumatic system. To deal with these problems effectively, an adaptive robust controller with LuGre model-based dynamic friction compensation is constructed. The proposed controller employs on-line recursive least squares estimation (RLSE) to reduce the extent of parametric uncertainties, and utilizes the sliding mode control method to attenuate the effects of parameter estimation errors, unmodelled dynamics and disturbances. In addition, in order to realize LuGre model-based friction compensation, the modified dual-observer structure for estimating immeasurable friction internal state is developed. Therefore, a prescribed motion tracking transient performance and final tracking accuracy can be guaranteed. Since the system model uncertainties are unmatched, the recursive backstepping design technology is applied. In order to solve the conflicts between the sliding mode control design and the adaptive control design, the projection mapping is used to condition the RLSE algorithm so that the parameter estimates are kept within a known bounded convex set. Finally, the proposed controller is tested for tracking sinusoidal trajectories and smooth square trajectory under different loads and sudden disturbance. The testing results demonstrate that the achievable performance of the proposed controller is excellent and is much better than most other studies in literature. Especially when a 0.5 Hz sinusoidal trajectory is tracked, the maximum tracking error is 0.96 mm and the average tracking error is 0.45 mm. This paper constructs an adaptive robust controller which can compensate the friction force in the cylinder.
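Since the controller here is built around the friction model named above, a minimal LuGre sketch may help; the stiffness, damping, and Stribeck parameters below are illustrative placeholders rather than the values identified for the pneumatic cylinder, and the adaptive robust controller and dual observer are not reproduced.

```python
import numpy as np

# Minimal LuGre dynamic friction sketch (parameters are illustrative placeholders).
sigma0, sigma1, sigma2 = 1.0e5, 300.0, 50.0   # bristle stiffness, bristle damping, viscous coeff.
Fc, Fs, vs = 20.0, 30.0, 0.01                 # Coulomb force, static force, Stribeck velocity

def lugre_step(v, z, dt):
    """One explicit Euler step of the LuGre internal bristle state z.
    Returns (friction force, updated z)."""
    g = Fc + (Fs - Fc) * np.exp(-(v / vs) ** 2)     # Stribeck curve g(v)
    zdot = v - sigma0 * abs(v) / g * z
    z = z + zdot * dt
    return sigma0 * z + sigma1 * zdot + sigma2 * v, z

dt, z, v = 1e-4, 0.0, 0.005                          # constant slow sliding velocity [m/s]
for k in range(3001):
    F, z = lugre_step(v, z, dt)
    if k % 1000 == 0:
        print(f"t = {k*dt:.2f} s, friction force = {F:.2f} N")
```

At constant slow velocity the bristle state settles and the force approaches the Stribeck level g(v) plus the viscous term, which is the kind of slowly varying friction the friction observer has to track.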
NASA Astrophysics Data System (ADS)
Gaschnig, Richard M.; Rudnick, Roberta L.; McDonough, William F.; Kaufman, Alan J.; Valley, John W.; Hu, Zhaochu; Gao, Shan; Beck, Michelle L.
2016-08-01
The composition of the fine-grained matrix of glacial diamictites from the Mesoarchean, Paleoproterozoic, Neoproterozoic, and Paleozoic, collected from four modern continents, reflects the secular evolution of the average composition of the upper continental crust (UCC). The effects of localized provenance are present in some cases, but distinctive geochemical signatures exist in diamictites of the same age from different localities, suggesting that these are global signatures. Archean UCC, dominated by greenstone basalts and to a lesser extent komatiites, was more mafic, based on major elements and transition metal trace elements. Temporal changes in oxygen isotope ratios, rare earth elements, and high field strength elements indicate that the UCC became more differentiated and that tonalite-trondhjemite-granodiorite suites became less important with time, findings consistent with previous studies. We also document the concentrations of siderophile and chalcophile elements (Ga, Ge, Cd, In, Sn, Sb, W, Tl, Bi) and lithophile Be in the UCC through time, and use the data for the younger diamictites to construct a new estimate of average UCC along with associated uncertainties.
Variation in geographic access to specialist inpatient hospices in England and Wales.
Gatrell, Anthony C; Wood, D Justin
2012-07-01
We seek to map and describe variation in geographic access to the set of 189 specialist adult inpatient hospices in England and Wales. Using almost 35,000 small Census areas (Local Super Output Areas: LSOAs) as our units of analysis, the locations of hospices, and estimated drive times from LSOAs to hospices we construct an accessibility 'score' for each LSOA, for England and Wales as a whole. Data on cancer mortality are used as a proxy for the 'demand' for hospice care and we then identify that subset of small areas in which accessibility (service supply) is relatively poor yet the potential 'demand' for hospice services is above average. That subset is then filtered according to the deprivation score for each LSOA, in order to identify those LSOAs which are also above average in terms of deprivation. While urban areas are relatively well served, large parts of England and Wales have poor access to hospices, and there is a risk that the needs of those living in relatively deprived areas may be unmet. Copyright © 2012 Elsevier Ltd. All rights reserved.
No clustering for linkage map based on low-copy and undermethylated microsatellites.
Zhou, Yi; Gwaze, David P; Reyes-Valdés, M Humberto; Bui, Thomas; Williams, Claire G
2003-10-01
Clustering has been reported for conifer genetic maps based on hypomethylated or low-copy molecular markers, resulting in uneven marker distribution. To test this, a framework genetic map was constructed from three types of microsatellites: low-copy, undermethylated, and genomic. These Pinus taeda L. microsatellites were mapped using a three-generation pedigree with 118 progeny. The microsatellites were highly informative; of the 32 markers in intercross configuration, 29 were segregating for three or four alleles in the progeny. The sex-averaged map placed 51 of the 95 markers in 15 linkage groups at LOD > 4.0. No clustering or uneven distribution across the genome was observed. The three types of P. taeda microsatellites were randomly dispersed within each linkage group. The 51 microsatellites covered a map distance of 795 cM, an average distance of 21.8 cM between markers, roughly half of the estimated total map length. The minimum and maximum distances between any two bins were 4.4 and 45.3 cM, respectively. These microsatellites provided anchor points for framework mapping for polymorphism in P. taeda and other closely related hard pines.
Leigh, J Paul; Du, Juan; McCurdy, Stephen A
2014-04-01
Debate surrounds the accuracy of U.S. government's estimates of job-related injuries and illnesses in agriculture. Whereas studies have attempted to estimate the undercount for all industries combined, none have specifically addressed agriculture. Data were drawn from the U.S. government's premier sources for workplace injuries and illnesses and employment: the Bureau of Labor Statistics databanks for the Survey of Occupational Injuries and Illnesses (SOII), the Quarterly Census of Employment and Wages, and the Current Population Survey. Estimates were constructed using transparent assumptions; for example, that the rate (cases-per-employee) of injuries and illnesses on small farms was the same as on large farms (an assumption we altered in sensitivity analysis). We estimated 74,932 injuries and illnesses for crop farms and 68,504 for animal farms, totaling 143,436 cases in 2011. We estimated that SOII missed 73.7% of crop farm cases and 81.9% of animal farm cases for an average of 77.6% for all agriculture. Sensitivity analyses suggested that the percent missed ranged from 61.5% to 88.3% for all agriculture. We estimate considerable undercounting of nonfatal injuries and illnesses in agriculture and believe the undercounting is larger than any other industry. Reasons include: SOII's explicit exclusion of employees on small farms and of farmers and family members and Quarterly Census of Employment and Wages's undercounts of employment. Undercounting limits our ability to identify and address occupational health problems in agriculture, affecting both workers and society. Copyright © 2014 Elsevier Inc. All rights reserved.
An eye model for uncalibrated eye gaze estimation under variable head pose
NASA Astrophysics Data System (ADS)
Hnatow, Justin; Savakis, Andreas
2007-04-01
Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
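A hedged geometric sketch of the kind of calculation such an eye model enables: the horizontal gaze angle is recovered from the mid-pupil offset relative to the corner midpoint, with image scale set by an anthropometric eye-corner distance. The corner distance, eyeball radius, and the assumption that the eye centre lies under the corner midpoint are placeholders, not the paper's exact model.

```python
import numpy as np

# Anthropometric placeholders (assumptions, not the paper's calibration-free parameters)
EYE_CORNER_DIST_MM = 30.0   # assumed palpebral fissure width
EYE_RADIUS_MM = 12.0        # assumed eyeball radius

def gaze_angle_deg(corner_left_px, corner_right_px, midpupil_px):
    width_px = abs(corner_right_px - corner_left_px)
    mm_per_px = EYE_CORNER_DIST_MM / width_px              # image scale from anthropometry
    center_px = 0.5 * (corner_left_px + corner_right_px)   # assume eye centre under corner midpoint
    offset_mm = (midpupil_px - center_px) * mm_per_px
    return np.degrees(np.arcsin(np.clip(offset_mm / EYE_RADIUS_MM, -1.0, 1.0)))

print(gaze_angle_deg(100.0, 160.0, 138.0))   # pupil shifted right of centre -> positive angle
```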
Base Reutilization Status: An Assessment
2000-03-01
substantially as office buildings and retail buildings are constructed. No data are available on payrolls for the new jobs. However, earnings are likely to...date is 30.3 The average earnings from the new jobs are above the state average. The average earnings from the existing number of biotechnology
[Influence of traffic restriction on road and construction fugitive dust].
Tian, Gang; Li, Gang; Qin, Jian-Ping; Fan, Shou-Bin; Huang, Yu-Hu; Nie, Lei
2009-05-15
By monitoring road and construction dust fall continuously during the "Good Luck Beijing" sport events, the reduction of road and construction dust fall caused by traffic restriction was studied. The contribution rate of road and construction dust to particulate matter in Beijing's atmospheric environment, and the ratio of its emission to total local PM10 emission, were analyzed. The results show that the traffic restriction reduced road and construction dust fall significantly. The average dust fall value on ring roads was 0.27 g·(m²·d)⁻¹ during the traffic restriction period, compared with 0.81 and 0.59 g·(m²·d)⁻¹ one month and 7 days before, respectively. The average dust fall value on major and minor arterial roads was 0.21 g·(m²·d)⁻¹ during the traffic restriction period, compared with 0.54 and 0.58 g·(m²·d)⁻¹ one month and 7 days before, respectively. Road emissions were reduced by 60%-70% compared with before the traffic restriction. The average dust fall values at civil architecture and utility architecture sites were 0.61 and 1.06 g·(m²·d)⁻¹ during the traffic restriction period, compared with 1.15 and 1.55 g·(m²·d)⁻¹ 20 days before. Construction dust was reduced by 30%-47% compared with 20 days before the traffic restriction. Road and construction dust emissions are the main sources of atmospheric particulate matter in Beijing, and their contribution to ambient PM10 concentration is 21%-36%. PM10 emitted from roads and construction sites accounts for 42%-72% and 30%-51% of local emissions, respectively, while local PM10 accounts for 50% and 70% of the total emission.
10 CFR Appendix D to Subpart D of... - Classes of Actions That Normally Require EISs
Code of Federal Regulations, 2010 CFR
2010-01-01
... average megawatts or more over a 12 month period. This applies to power marketing operations and to siting... Systems D2. Siting/construction/operation/decommissioning of nuclear fuel reprocessing facilities D3. Siting/construction/operation/decommissioning of uranium enrichment facilities D4. Siting/construction...
Nicoletti, S; Battevi, N; Colafemmina, G; Di Leone, G; Satriani, G; Ragone, P; Occhipinti, E
2013-01-01
The Basilicata Regional Headquarters of the Italian Institute for Insurance against Occupational Accidents and Disease (INAIL) and the Basilicata association of small building enterprises (Edilcassa di Basilicata) promoted a research project to assess the risk of manual lifting and manual transport in construction enterprises in the Basilicata Region and estimate the prevalence of related diseases. Manual lifting risk assessment was performed by calculating the VLI of 204 working days in as many building workers. Manual transport risk assessment was carried out comparing the weights transported (on the 204 days tested) with the reference values of the "Snook and Ciriello" tables. Manual lifting risk was present on 195 of the 204 days, with an average value of VLI equal to 2.1 (min 0.4, max 8.5), with higher values in the restructuring sector (VLI average of 2.3, min 0.4, max 8.5), and no significant differences between the different tasks. Manual transport risk was present on 129 of the 204 days, with average values of 1.2 (min 0.2, max 3.3), with no significant differences between the different tasks analyzed. For both risks additional factors were present that were not analyzed by the methods of assessment used (for manual lifting: 8.8% of the geometries in the critical area; for manual transport: 39% of transport on shoulders, 42.5% on a route with uneven surface and 31.9% on a sloping route), so it is likely that the actual risk is greater than that indicated by the synthetic indices of exposure. The medical questionnaire showed from the case histories that 148 out of 546 subjects were positive for the threshold for pain or discomfort in the lumbosacral spine area and 99 out of 546 subjects reported suffering from an already diagnosed herniated spinal disk. Only 18% of osteoarticular diseases were reported to the Insurance Institute, although there was widespread awareness that the diseases in question might be related to work. Diseases of the spine were responsible for 1.9% of absenteeism, equal to 30-40% of total absenteeism of workers enrolled in "Edilcassa di Basilicata". The method used provides a solid basis for evaluating the two risks in the construction industry, where employment is subject to extreme organizational, environmental and structural (machines, tools, operators involved) variability. Employment in the construction industry involves significant exposure to the two risks, accounting for 30-40% of total absenteeism in this sector.
High order cell-centered scheme totally based on cell average
NASA Astrophysics Data System (ADS)
Liu, Ze-Yu; Cai, Qing-Dong
2018-05-01
This work clarifies the concept of cell average by pointing out the differences between the cell average and the cell centroid value, which are the averaged cell-centered value and the pointwise cell-centered value, respectively. An interpolation based on cell averages is constructed, and a high order QUICK-like numerical scheme is designed for this interpolation. A new approach to error analysis, similar to Taylor expansion, is introduced in this work.
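A small numeric check of the distinction drawn here, assuming a smooth one-dimensional test function: the averaged cell-centered value differs from the pointwise cell-centered (centroid) value at second order in the cell width, which is why interpolation formulas built for point values cannot simply be reused on cell averages at high order.

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.sin(3.0 * x)          # smooth test function
xc, h = 0.4, 0.1                       # cell centre and cell width

cell_avg = quad(f, xc - h/2, xc + h/2)[0] / h   # averaged cell-centered value
centroid_val = f(xc)                            # pointwise cell-centered value

print(cell_avg, centroid_val, cell_avg - centroid_val)
print("predicted O(h^2) gap:", (h**2 / 24) * (-9.0 * np.sin(3.0 * xc)))  # f''(xc) * h^2 / 24
```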
Method of estimating natural recharge to the Edwards Aquifer in the San Antonio area, Texas
Puente, Celso
1978-01-01
The principal errors in the estimates of annual recharge are related to errors in estimating runoff in ungaged areas, which represent about 30 percent of the infiltration area. The estimated long-term average annual recharge in each basin, however, is probably representative of the actual recharge because the averaging procedure tends to cancel out the major errors.
NASA Astrophysics Data System (ADS)
Namysłowska-Wilczyńska, Barbara; Wynalek, Janusz
2017-12-01
Geostatistical methods make the analysis of measurement data possible. This article addresses the use of geostatistics in the spatial analysis of displacements based on geodetic monitoring. Using methods of applied (spatial) statistics, the research deals with current issues connected to space-time analysis and the modeling of displacements and deformations, as applied to any large-area objects on which geodetic monitoring is conducted (e.g., water dams, urban areas in the vicinity of deep excavations, areas at a macro-regional scale subject to anthropogenic influences caused by mining, etc.). These problems are crucial, especially for safety assessment of important hydrotechnical constructions, as well as for modeling and estimating mining damage. Based on the geodetic monitoring data, a substantial body of empirical material was compiled, comprising many years of research results concerning displacements of controlled points situated on the crown and foreland of an exemplary earth dam, and used to assess the behaviour and safety of the object during its whole operating period. A research method at a macro-regional scale was applied to investigate phenomena connected with the operation of the analysed large hydrotechnical construction. Applying a semivariogram function enabled the spatial variability analysis of displacements. Isotropic empirical semivariograms were calculated and then the parameters of theoretical analytical functions approximating this empirical variability measure were determined. Using ordinary (block) kriging at the nodes of an elementary spatial grid covering the analysed object, the estimated mean displacements Z* were calculated together with the accompanying measure of estimation uncertainty, the standard deviation of estimation σk. Raster maps of the distribution of the estimated averages Z* and raster maps of the estimation standard deviations σk (in perspective) were obtained for selected years (1995 and 2007), taking into account the ground elevation of 136 m a.s.l. To calculate raster maps of interpolated Z* values, quick interpolation methods were also used, such as inverse distance squared weighting, a linear kriging model and spline kriging, which made it possible to recognize the general background of displacements without assessing the accuracy of the Z* estimates, i.e., the value of σk. These maps are also related to 1995 and 2007 and to the elevation. As a result of applying these techniques, clear boundaries of subsidence, uplift and horizontal displacements on the examined hydrotechnical object were marked out, which can be interpreted as areas of local deformation of the object, important for the safety of the construction. The results of the geostatistical research conducted, including the structural analysis, semivariogram modeling and estimation of the displacements of the hydrotechnical object, are a rich set of cartographic products (semivariograms, raster maps, block diagrams) that provide a spatial visualization of the various analyses of the monitored displacements.
The prepared geostatistical model (3D) of displacement variability (analysed within the area of the dam, during its operating period and including its height) will be useful not only in the correct assessment of displacements and deformations, but it will also make it possible to forecast these phenomena, which is crucial when the operating safety of such constructions is taken into account.
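A minimal sketch of the first step described above, the isotropic empirical semivariogram; the benchmark coordinates and displacement values are synthetic stand-ins for the dam monitoring data, and semivariogram model fitting and block kriging are not shown.

```python
import numpy as np

def empirical_semivariogram(coords, values, lags, tol):
    """Isotropic empirical semivariogram: gamma(h) = 0.5 * mean[(z_i - z_j)^2]
    over point pairs whose separation falls within each lag bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    dz2 = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # each pair counted once
    d, dz2 = d[iu], dz2[iu]
    return np.array([0.5 * dz2[(d > h - tol) & (d <= h + tol)].mean() for h in lags])

# Hypothetical monitored displacements at benchmark points on a dam crest
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(60, 2))
disp = 0.02 * coords[:, 0] + rng.normal(0, 0.3, 60)   # trend plus noise, in mm
print(empirical_semivariogram(coords, disp, lags=np.arange(10, 60, 10), tol=5.0))
```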
SU-E-I-07: An Improved Technique for Scatter Correction in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, S; Wang, Y; Lue, K
2014-06-01
Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to real scatter amounts by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique to accurately estimate the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SFs were calculated using the empirical transformation function. Hence, the real scatter amount can be obtained by scaling the SSS distribution with the predicted SFs. The simulation was conducted using SimSET. The Siemens Biograph™ 6 PET scanner was modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of our proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For the reconstructed images of our technique and SSS, the normalized standard deviations were 0.053 and 0.182, respectively; the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to the absolute scatter amounts using SF. This method can avoid the bias caused by insufficient tail information and therefore improve the accuracy of scatter estimation.
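A hedged sketch of the calibration idea as described: the SSS distribution is rescaled so that its total matches the scatter fraction predicted from the average attenuation coefficient. The SF-versus-μ relation and the sinograms below are hypothetical placeholders, not the empirically fitted function or the SimSET/STIR data from the study.

```python
import numpy as np

def predicted_sf(mu_avg):
    # Assumed stand-in for the paper's empirical SF-vs-average-attenuation function
    return np.clip(0.1 + 2.0 * mu_avg, 0.0, 0.6)

def scale_sss(sss_sinogram, measured_sinogram, mu_avg):
    """Scale the computed SSS distribution so its total equals SF * total counts."""
    sf = predicted_sf(mu_avg)
    target_scatter = sf * measured_sinogram.sum()     # SF = scatter / total counts
    return sss_sinogram * (target_scatter / sss_sinogram.sum())

measured = np.random.poisson(50.0, size=(192, 192)).astype(float)
sss_shape = np.ones((192, 192))                        # placeholder SSS distribution
print(scale_sss(sss_shape, measured, mu_avg=0.096).sum() / measured.sum())
```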
Hydrology of the Valley-fill and carbonate-rock reservoirs, Pahrump Valley, Nevada-California
Malmberg, Glenn T.
1967-01-01
This is the second appraisal of the water supply of Pahrump Valley, made 15 years after the first cooperative study. In the first report the average recharge was estimated to be 23,000 acre-feet per year, only 1,000 acre-feet more than the estimate made in this report. All this recharge was considered to be available for development. Because of the difficulty in salvaging the subsurface outflow from the deep carbonate-rock reservoir, this report concludes that the perennial yield may be only 25,000 acre-feet. In 1875, Bennetts and Manse Springs reportedly discharged a total of nearly 10,000 acre-feet of water from the valley-fill reservoir. After the construction of several flowing wells in 1910, the spring discharge began to decline. In the mid-1940's many irrigation wells were drilled, and large-capacity pumps were installed. During the 4-year period of this study (1959-62), the net pumping draft averaged about 25,000 acre-feet per year, or about twice the estimated yield. In 1962 Bennetts Spring was dry, and the discharge from Manse Spring was only 1,400 acre-feet. During the period February 1959-February 1962, pumping caused an estimated storage depletion of 45,000 acre-feet, or 15,000 acre-feet per year. If the overdraft is maintained, depletion of stored water will continue and pumping costs will increase. Water levels in the vicinity of the Pahrump, Manse, and Fowler Ranches declined more than 10 feet in response to the pumping during this period, and they can be expected to continue to decline at the projected rate of more than 3 feet per year. The chemical quality of the pumped water has been satisfactory for irrigation and domestic use. Recycling of water pumped for irrigation, however, could result in deterioration of the water quality with time.
Mills, Patrick C.; Healy, Richard W.
1993-01-01
The movement of water and tritium through the unsaturated zone was studied at a low-level radioactive-waste disposal site near Sheffield, Bureau County, Illinois, from 1981 to 1985. Water and tritium movement occurred in an annual, seasonally timed cycle; recharge to the saturated zone generally occurred in the spring and early summer. Mean annual precipitation (1982-85) was 871 mm (millimeters); mean annual recharge to the disposal trenches (July 1982 through June 1984) was estimated to be 107 mm. Average annual tritium flux below the study trenches was estimated to be 3.4 mCi/yr (millicuries per year). Site geology, climate, and waste-disposal practices influenced the spatial and temporal variability of water and tritium movement. Of the components of the water budget, evapotranspiration contributed most to the temporal variability of water and tritium movement. Disposal trenches are constructed in complexly layered glacial and postglacial deposits that average 17 m (meters) in thickness and overlie a thick sequence of Pennsylvanian shale. The horizontal saturated hydraulic conductivity of the clayey-silt to sand-sized glacial and postglacial deposits ranges from 4.8×10⁻¹ to 3.4×10⁴ mm/d (millimeters per day). A 120-m-long horizontal tunnel provided access for hydrologic measurements and collection of sediment and water samples from the unsaturated and saturated geologic deposits below four disposal trenches. Trench-cover and subtrench deposits were monitored with soil-moisture tensiometers, vacuum and gravity lysimeters, piezometers, and a nuclear soil-moisture gage. A cross-sectional, numerical ground-water-flow model was used to simulate water movement in the variably saturated geologic deposits in the tunnel area. Concurrent studies at the site provided water-budget data for estimating recharge to the disposal trenches. Vertical water movement directly above the trenches was impeded by a zone of compaction within the clayey-silt trench covers. Water entered the trenches primarily at the trench edges where the compacted zone was absent and the cover was relatively thin. Collapse holes in the trench covers that resulted from inadequate compaction of wastes within the trenches provided additional preferential pathways for surface-water drainage into the trenches; drainage into one collapse hole during a rainstorm was estimated to be 1,700 L (liters). Till deposits near trench bases induced lateral water and tritium movement. Limited temporal variation in water movement and small flow gradients (relative to the till deposits) were detected in the unsaturated subtrench sand deposit; maximum gradients during the spring recharge period averaged 1.62 mm/mm (millimeter per millimeter). Time-of-travel of water moving from the trench covers to below the trenches was estimated to be as rapid as 41 days (assuming individual water molecules move this distance in one recharge cycle). Tritium concentrations in water from the unsaturated zone ranged from 200 (background) to 10,000,000 pCi/L (picocuries per liter). Tritium concentrations generally were higher below trench bases (averaging 91,000 pCi/L) than below intertrench sediments (averaging 3,300 pCi/L), and in the subtrench Toulon Member of the Glasford Formation (sand) (averaging 110,000 pCi/L) than in the Hulick Till Member of the Glasford Formation (clayey silt) (averaging 59,000 pCi/L). Average subtrench tritium concentration increased from 28,000 to 100,000 pCi/L during the study period.
Within the trench covers, there was a strong seasonal trend in tritium concentrations; the highest concentrations occurred in late summer when soil-moisture contents were at a minimum. Subtrench tritium movement occurred in association with the annual cycle of water movement, as well as independently of the cycle, in apparent response to continuous water movement through the subtrench sand deposits and to the deterioration of trench-waste containers. The increase in concen
Mills, Patrick C.; Healy, R.W.
1991-01-01
The movement of water and tritium through the unsaturated zone was studied at a low-level radioactive-waste disposal site near Sheffield, Bureau County, Illinois, from 1981 to 1985. Water and tritium movement occurred in an annual, seasonally timed cycle; recharge to the saturated zone generally occurred in the spring and early summer. Mean annual precipitation (1982-85) was 871 millimeters; mean annual recharge to the disposal trenches (July 1982 through June 1984) was estimated to be 107 millimeters. Average annual tritium flux below the study trenches was estimated to be 3.4 millicuries per year. Site geology, climate, and waste-disposal practices influenced the spatial and temporal variability of water and tritium movement. Of the components of the water budget, evapotranspiration contributed most to the temporal variability of water and tritium movement. Disposal trenches are constructed in complexly layered glacial and postglacial deposits that average 17 meters in thickness and overlie a thick sequence of Pennsylvanian shale. The horizontal saturated hydraulic conductivity of the clayey-silt to sand-sized glacial and postglacial deposits ranges from 4.8x10^-1 to 3.4x10^4 millimeters per day. A 120-meter-long horizontal tunnel provided access for hydrologic measurements and collection of sediment and water samples from the unsaturated and saturated geologic deposits below four disposal trenches. Trench-cover and subtrench deposits were monitored with soil-moisture tensiometers, vacuum and gravity lysimeters, piezometers, and a nuclear soil-moisture gage. A cross-sectional, numerical ground-water-flow model was used to simulate water movement in the variably saturated geologic deposits in the tunnel area. Concurrent studies at the site provided water-budget data for estimating recharge to the disposal trenches. Vertical water movement directly above the trenches was impeded by a zone of compaction within the clayey-silt trench covers. Water entered the trenches primarily at the trench edges where the compacted zone was absent and the cover was relatively thin. Collapse holes in the trench covers that resulted from inadequate compaction of wastes within the trenches provided additional preferential pathways for surface-water drainage into the trenches; drainage into one collapse hole during a rainstorm was estimated to be 1,700 liters. Till deposits near trench bases induced lateral water and tritium movement. Limited temporal variation in water movement and small flow gradients (relative to the till deposits) were detected in the unsaturated subtrench sand deposit; maximum gradients during the spring recharge period averaged 1.62 millimeters per millimeter. Time-of-travel of water moving from the trench covers to below the trenches was estimated to be as rapid as 41 days (assuming individual water molecules move this distance in one recharge cycle). Tritium concentrations in water from the unsaturated zone ranged from 200 (background) to 10,000,000 pCi/L (picocuries per liter). Tritium concentrations generally were higher below trench bases (averaging 91,000 pCi/L) than below intertrench sediments (averaging 3,300 pCi/L), and in the subtrench Toulon Member of the Glasford Formation (sand) (averaging 110,000 pCi/L) than in the Hulick Till Member of the Glasford Formation (clayey silt) (averaging 59,000 pCi/L). Average subtrench tritium concentration increased from 28,000 to 100,000 pCi/L during the study period. 
Within the trench covers, there was a strong seasonal trend in tritium concentrations; the highest concentrations occurred in late summer when soil-moisture contents were at a minimum. Subtrench tritium movement occurred in association with the annual cycle of water movement, as well as independently of the cycle, in apparent response to continuous water movement through the subtrench sand deposits and to the deterioration of trench-waste containers. The increase in concentrations of tritium with incre
Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane
2017-07-12
The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section–averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
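As a hedged illustration of the standard fine-grid route mentioned above, the sketch below applies Richardson extrapolation and a grid convergence index (GCI) to time-averaged values from three grids; the refinement ratio, safety factor, and sample values are illustrative, and the paper's alternative coarse-grid method is not reproduced.

```python
import numpy as np

def richardson_gci(f1, f2, f3, r, Fs=1.25):
    """Richardson extrapolation and grid convergence index (GCI) from
    time-averaged solutions on fine (f1), medium (f2) and coarse (f3) grids
    with a constant refinement ratio r."""
    p = np.log(abs(f3 - f2) / abs(f2 - f1)) / np.log(r)   # observed order of accuracy
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)               # extrapolated value
    gci_fine = Fs * abs((f1 - f2) / f1) / (r**p - 1.0)    # relative uncertainty on the fine grid
    return p, f_exact, gci_fine

# Hypothetical time-averaged void fractions from three successively refined grids
p, f_ex, gci = richardson_gci(0.452, 0.446, 0.434, r=2.0)
print(f"observed order p = {p:.2f}, extrapolated value = {f_ex:.4f}, GCI = {100*gci:.2f}%")
```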
Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking
NASA Astrophysics Data System (ADS)
Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.
2012-08-01
We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
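A hedged sketch of how such an interleaved estimate is typically extracted: both decay curves are fitted to the standard A·p^m + B model and the gate error is taken as (d−1)/d·(1 − p_interleaved/p_reference); the survival data below are synthetic, and the theoretical bounds discussed in the paper are not computed.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    return A * p**m + B                      # standard randomized-benchmarking model

def interleaved_gate_error(m, F_ref, F_int, d=2):
    """Estimate the error of the interleaved gate from reference and
    interleaved RB survival probabilities (single qubit: d = 2)."""
    (_, p_ref, _), _ = curve_fit(rb_decay, m, F_ref, p0=[0.5, 0.99, 0.5])
    (_, p_int, _), _ = curve_fit(rb_decay, m, F_int, p0=[0.5, 0.99, 0.5])
    return (d - 1) / d * (1.0 - p_int / p_ref)

# Hypothetical survival probabilities versus Clifford sequence length
m = np.array([2, 4, 8, 16, 32, 64, 128])
F_ref = 0.5 * 0.995**m + 0.5
F_int = 0.5 * 0.992**m + 0.5
print(f"interleaved gate error estimate: {interleaved_gate_error(m, F_ref, F_int):.4f}")
```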
Testing the Wisconsin Phosphorus Index with year-round, field-scale runoff monitoring.
Good, Laura W; Vadas, Peter; Panuska, John C; Bonilla, Carlos A; Jokela, William E
2012-01-01
The Wisconsin Phosphorus Index (WPI) is one of several P indices in the United States that use equations to describe actual P loss processes. Although for nutrient management planning the WPI is reported as a dimensionless whole number, it is calculated as average annual dissolved P (DP) and particulate P (PP) mass delivered per unit area. The WPI calculations use soil P concentration, applied manure and fertilizer P, and estimates of average annual erosion and average annual runoff. We compared WPI estimated P losses to annual P loads measured in surface runoff from 86 field-years on crop fields and pastures. As the erosion and runoff generated by the weather in the monitoring years varied substantially from the average annual estimates used in the WPI, the WPI and measured loads were not well correlated. However, when measured runoff and erosion were used in the WPI field loss calculations, the WPI accurately estimated annual total P loads with a Nash-Sutcliffe Model Efficiency (NSE) of 0.87. The DP loss estimates were not as close to measured values (NSE = 0.40) as the PP loss estimates (NSE = 0.89). Some errors in estimating DP losses may be unavoidable due to uncertainties in estimating on-farm manure P application rates. The WPI is sensitive to field management that affects its erosion and runoff estimates. Provided that the WPI methods for estimating average annual erosion and runoff are accurately reflecting the effects of management, the WPI is an accurate field-level assessment tool for managing runoff P losses. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
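For reference, the Nash-Sutcliffe model efficiency used to score the comparisons above can be computed as below; the measured and estimated P loads are hypothetical values, not data from the monitored field-years.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of the observations."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical annual P loads (kg/ha): measured vs. index-estimated for a few field-years
measured  = [0.8, 1.6, 0.4, 2.3, 1.1]
estimated = [0.7, 1.4, 0.6, 2.0, 1.2]
print(f"NSE = {nash_sutcliffe(measured, estimated):.2f}")
```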
Cost estimation using ministerial regulation of public work no. 11/2013 in construction projects
NASA Astrophysics Data System (ADS)
Arumsari, Putri; Juliastuti; Khalifah Al'farisi, Muhammad
2017-12-01
One of the first tasks in starting a construction project is to estimate the total cost of building the project. In Indonesia there are several standards that are used to calculate the cost estimate of a project. One of the standards used is based on the Ministerial Regulation of Public Work No. 11/2013. However, in a construction project, contractors often have their own cost estimates based on their own calculations. This research aimed to compare the total construction project cost calculated according to the Ministerial Regulation of Public Work No. 11/2013 against the contractors' calculations. Two projects were used as case studies to compare the results. The projects were a 4 storey building located in the Pantai Indah Kapuk area (West Jakarta) and a warehouse located in Sentul (West Java), which were built by 2 different contractors. The cost estimates from both contractors' calculations were compared to the one based on the Ministerial Regulation of Public Work No. 11/2013. It was found that the differences between the two calculations were around 1.80%-3.03% of the total cost, with the cost estimate based on the Ministerial Regulation being higher than the contractors' calculations.
Villoria Sáez, Paola; del Río Merino, Mercedes; Porras-Amores, César
2012-02-01
The management planning of construction and demolition (C&D) waste uses a single indicator which does not provide enough detailed information. Therefore, other, more innovative and precise indicators should be determined and implemented. The aim of this research work is to improve existing C&D waste quantification tools for the construction of new residential buildings in Spain. For this purpose, several housing projects were studied to estimate the C&D waste generated during their construction process. This paper determines the values of three indicators to estimate the generation of C&D waste in new residential buildings in Spain, itemized by type of waste and construction stage. The inclusion of two more accurate indicators, in addition to the global one commonly in use, provides a significant improvement in C&D waste quantification tools and management planning.
Estimating stand age for Douglas-fir.
Floyd A. Johnson
1954-01-01
Stand age for Douglas-fir has been defined as the average age of dominant and codominant trees. It is commonly estimated by measuring the age of several dominants and codominants and computing their arithmetic average.
Why do vulnerability cycles matter in financial networks?
NASA Astrophysics Data System (ADS)
Silva, Thiago Christiano; Tabak, Benjamin Miranda; Guerra, Solange Maria
2017-04-01
We compare two widely employed models that estimate systemic risk: DebtRank and Differential DebtRank. We show that not only network cyclicality but also the average vulnerability of banks are essential concepts that contribute to widening the gap in the systemic risk estimates of both approaches. We find that systemic risk estimates are the same whenever the network has no cycles. However, in case the network presents cyclicality, then we need to inspect the average vulnerability of banks to estimate the underestimation gap. We find that the gap is small regardless of the cyclicality of the network when its average vulnerability is large. In contrast, the observed gap follows a quadratic behavior when the average vulnerability is small or intermediate. We show results using an econometric exercise and draw guidelines both on artificial and real-world financial networks.
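For context, a minimal sketch of the basic DebtRank propagation (in the spirit of Battiston et al.) is given below; the impact matrix and value weights are toy numbers, and the Differential DebtRank variant and the vulnerability analysis of the paper are not reproduced.

```python
import numpy as np

def debtrank(W, v, shocked, h0=1.0):
    """Minimal DebtRank propagation sketch.
    W[i, j]: impact of node i's distress on node j (in [0, 1]);
    v: economic value weights (sum to 1); shocked: indices hit initially."""
    n = len(v)
    h = np.zeros(n); h[shocked] = h0            # distress levels
    state = np.zeros(n, dtype=int)              # 0 undistressed, 1 distressed, 2 inactive
    state[shocked] = 1
    initial = float(np.dot(h, v))
    while (state == 1).any():
        active = np.where(state == 1)[0]
        h = np.minimum(1.0, h + h[active] @ W[active, :])   # propagate one step
        state[active] = 2                                    # distressed nodes become inactive
        state[(h > 0) & (state == 0)] = 1
    return float(np.dot(h, v)) - initial

W = np.array([[0.0, 0.4, 0.1],
              [0.2, 0.0, 0.3],
              [0.0, 0.5, 0.0]])
v = np.array([0.5, 0.3, 0.2])
print(debtrank(W, v, shocked=[0]))
```

Because each node propagates its distress at most once in this scheme, repeated propagation along cycles is cut off, which is essentially why the two estimators compared above can diverge on cyclic networks.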
Annual forest inventory estimates based on the moving average
Francis A. Roesch; James R. Steinman; Michael T. Thompson
2002-01-01
Three interpretations of the simple moving average estimator, as applied to the USDA Forest Service's annual forest inventory design, are presented. A corresponding approach to composite estimation over arbitrarily defined land areas and time intervals is given for each interpretation, under the assumption that the investigator is armed with only the spatial/...
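One of the simplest readings of the estimator discussed here is an unweighted mean of the most recent annual panel estimates; the sketch below uses hypothetical panel values and a five-panel window, and does not implement composite estimation over arbitrary land areas and time intervals.

```python
import numpy as np

def moving_average_estimate(panel_estimates, window=5):
    """Simple moving average over the most recent annual panel estimates,
    one interpretation of the annual-inventory MA estimator."""
    recent = np.asarray(panel_estimates, float)[-window:]
    return recent.mean()

# Hypothetical annual panel estimates of growing-stock volume (million cubic feet)
panels = [812.0, 805.5, 821.3, 818.9, 827.4]
print(moving_average_estimate(panels))
```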
B. Lane Rivenbark; C. Rhett Jackson
2004-01-01
Regional average evapotranspiration estimates developed by water balance techniques are frequently used to estimate average discharge in ungaged streams. However, the lower stream size range for the validity of these techniques has not been explored. Flow records were collected and evaluated for 16 small streams in the Southern Appalachians to test whether the...
NASA Astrophysics Data System (ADS)
Arabzadeh, Vida; Niaki, S. T. A.; Arabzadeh, Vahid
2017-10-01
One of the most important processes in the early stages of construction projects is to estimate the cost involved. This process involves a wide range of uncertainties, which make it a challenging task. Because of unknown issues, using the experience of experts or looking for similar cases are the conventional methods to deal with cost estimation. The current study presents data-driven methods for cost estimation based on the application of artificial neural network (ANN) and regression models. The learning algorithms of the ANN are Levenberg-Marquardt and Bayesian regularization. Moreover, the regression models are hybridized with a genetic algorithm to obtain better estimates of the coefficients. The methods are applied in a real case, where the input parameters of the models are assigned based on the key issues involved in a spherical tank construction. The results reveal that, while a high correlation between the estimated cost and the real cost exists, both ANNs perform better than the hybridized regression models. In addition, the ANN with the Levenberg-Marquardt learning algorithm (LMNN) obtains a better estimate than the ANN with the Bayesian-regularized learning algorithm (BRNN). The correlation between real data and estimated values is over 90%, while the mean square error is around 0.4. The proposed LMNN model can be effective in reducing uncertainty and complexity in the early stages of a construction project.
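A hedged sketch of the general idea of fitting a small neural network cost model with a Levenberg-Marquardt least-squares solver; the features, synthetic costs, and network size are placeholders, and neither the Bayesian-regularized variant nor the GA-hybridized regression is shown.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical normalised project features (e.g. tank diameter, plate thickness, labour rate)
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(40, 3))
y = 2.0 + 1.5 * X[:, 0] + 0.8 * X[:, 1] ** 2 + 0.1 * rng.normal(size=40)   # synthetic cost

n_hidden = 4
def unpack(w):
    W1 = w[: 3 * n_hidden].reshape(3, n_hidden)
    b1 = w[3 * n_hidden: 4 * n_hidden]
    W2 = w[4 * n_hidden: 5 * n_hidden]
    b2 = w[-1]
    return W1, b1, W2, b2

def predict(w, X):
    W1, b1, W2, b2 = unpack(w)
    return np.tanh(X @ W1 + b1) @ W2 + b2      # one hidden layer, linear output

residuals = lambda w: predict(w, X) - y
w0 = 0.1 * rng.normal(size=3 * n_hidden + n_hidden + n_hidden + 1)
fit = least_squares(residuals, w0, method="lm")  # Levenberg-Marquardt solver
print("training RMSE:", np.sqrt(np.mean(fit.fun ** 2)))
```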
Carcinogens in the construction industry.
Järvholm, Bengt
2006-09-01
The construction industry is a complex work environment. The work sites are temporary and rapidly changing. Asbestos has been widely used in the construction industry, but the risks were primarily detected in specialized trades, such as insulation workers and plumbers. Today, the majority of cases related to asbestos exposure will occur in other occupational groups in the construction industry. In a large cohort of Swedish construction workers, insulators and plumbers constituted 37% of all cases of pleural mesothelioma between 1975 and 1984, while they constituted 21% of the cases between 1998 and 2002. It is estimated that 25-40% of all male cases of pleural mesothelioma in Sweden are caused by asbestos exposure in the construction trades. There are many other known carcinogens occurring in the construction industry, including PAHs, diesel exhausts, silica, asphalt fumes, solvents, etc., but it is difficult to estimate exposures and thus the size of the risk. The risk of cancer is less easy to detect with traditional epidemiological methods in the construction industry than in other industrial sectors. It is not sufficient to rely upon broad epidemiological data to estimate the risk of cancer due to chemicals in the construction industry. Thus, a strategy to decrease exposure, e.g., to dust, seems a feasible way to reduce the risk.
On the construction of a time base and the elimination of averaging errors in proxy records
NASA Astrophysics Data System (ADS)
Beelaerts, V.; De Ridder, F.; Bauwens, M.; Schmitz, N.; Pintelon, R.
2009-04-01
Proxies are sources of climate information which are stored in natural archives (e.g. ice-cores, sediment layers on ocean floors and animals with calcareous marine skeletons). Measuring these proxies produces very short records and mostly involves sampling solid substrates, which is subject to the following two problems: Problem 1: Natural archives are equidistantly sampled at a distance grid along their accretion axis. Starting from these distance series, a time series needs to be constructed, as comparison of different data records is only meaningful on a time grid. The time series will be non-equidistant, as the accretion rate is non-constant. Problem 2: A typical example of sampling solid substrates is drilling. Because of the dimensions of the drill, the holes drilled will not be infinitesimally small. Consequently, samples are not taken at a point in distance, but rather over a volume in distance. This holds for most sampling methods in solid substrates. As a consequence, when the continuous proxy signal is sampled, it will be averaged over the volume of the sample, resulting in an underestimation of the amplitude. Whether this averaging effect is significant depends on the volume of the sample and the variations of interest of the proxy signal. Starting from the measured signal, the continuous signal needs to be reconstructed in order to eliminate these averaging errors. The aim is to provide an efficient identification algorithm to identify the non-linearities in the distance-time relationship, called time base distortions, and to correct for the averaging effects. Because this is a parametric method, an assumption about the proxy signal needs to be made: the proxy record on a time base is assumed to be harmonic; this is an obvious assumption because natural archives often exhibit a seasonal cycle. In a first approach the averaging effects are assumed to be in one direction only, i.e. the direction of the axis on which the measurements were performed. The measured averaged proxy signal is modeled by the following signal model: ȳ(n, θ) = (Δ/δ) ∫_{n−δ/(2Δ)}^{n+δ/(2Δ)} y(m, θ) dm, where m is the position, x(m) = Δm, θ are the unknown parameters and y(m, θ) is the proxy signal we want to identify (the proxy signal as found in the natural archive), which we model as: y(m, θ) = A₀ + Σ_{k=1}^{H} [A_k sin(kωt(m)) + A_{k+H} cos(kωt(m))], with t(m) = m·T_S + g(m)·T_S. Here T_S = 1/f_S is the sampling period, f_S the sampling frequency, and g(m) the unknown time base distortion (TBD). In this work a splines approximation of the TBD is chosen: g(m) = Σ_l b_l φ_l(m), where b is the vector of unknown time base distortion parameters and φ is a set of splines. The estimates of the unknown parameters were obtained with a nonlinear least squares algorithm. The vessel density measured in the mangrove tree R. mucronata was used to illustrate the method. The vessel density is a proxy for rainfall in tropical regions. The proxy data on the newly constructed time base showed a yearly periodicity; this is what we expected, and the correction for the averaging effect increased the amplitude by 11.18%.
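A small numeric illustration of the averaging error described in Problem 2 (not the paper's estimator): averaging a harmonic signal over a finite sample width δ attenuates its amplitude by the factor sin(ωδ/2)/(ωδ/2), which is the underestimation the identification algorithm is designed to remove.

```python
import numpy as np

w = 2 * np.pi                      # one cycle per year
delta = 0.2                        # sample width, in years of accreted record
t = np.linspace(0, 3, 2000)
signal = np.sin(w * t)

# numerically average the signal over a window of width delta centred on each t
avg = np.array([np.mean(np.sin(w * np.linspace(ti - delta/2, ti + delta/2, 200))) for ti in t])

theory = np.sin(w * delta / 2) / (w * delta / 2)
print("measured amplitude ratio :", avg.max() / signal.max())
print("theoretical attenuation  :", theory)
```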
Robust w-Estimators for Cryo-EM Class Means.
Huang, Chenxi; Tagare, Hemant D
2016-02-01
A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the class mean, improves the signal-to-noise ratio in single-particle reconstruction. The averaging step is often compromised because of the outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods are done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a w-estimator of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and influence function are investigated. An extension of the estimator to images with different contrast transfer functions is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers.
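A generic illustration of a w-estimator of a class mean, using iteratively reweighted averaging with Huber-type weights on each image's residual norm; the weight function, the synthetic image stack, and the absence of contrast-transfer-function handling are assumptions for the sketch, not the estimator proposed in the paper.

```python
import numpy as np

def robust_mean_images(images, n_iter=20, c=1.345):
    """Generic w-estimator of a class mean: iteratively reweighted average with
    Huber-type weights on each image's residual norm (no hard rejection threshold)."""
    imgs = np.asarray(images, float)
    mean = imgs.mean(axis=0)
    for _ in range(n_iter):
        resid = np.linalg.norm((imgs - mean).reshape(len(imgs), -1), axis=1)
        scale = np.median(resid) / 0.6745 + 1e-12        # crude robust scale of residual norms
        u = resid / scale
        w = np.where(u <= c, 1.0, c / u)                 # Huber weights
        mean = np.tensordot(w, imgs, axes=1) / w.sum()
    return mean

# Hypothetical aligned particle images plus two gross outliers (contaminants)
rng = np.random.default_rng(0)
clean = rng.normal(0, 1, size=(50, 16, 16)) + 2.0
outliers = rng.normal(10, 1, size=(2, 16, 16))
stack = np.concatenate([clean, outliers])
print(robust_mean_images(stack).mean(), stack.mean())    # robust mean stays near 2
```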
O'Loughlin, Declan; Oliveira, Bárbara L; Elahi, Muhammad Adnan; Glavin, Martin; Jones, Edward; Popović, Milica; O'Halloran, Martin
2017-12-06
Inaccurate estimation of average dielectric properties can have a tangible impact on microwave radar-based breast images. Despite this, recent patient imaging studies have used a fixed estimate although this is known to vary from patient to patient. Parameter search algorithms are a promising technique for estimating the average dielectric properties from the reconstructed microwave images themselves without additional hardware. In this work, qualities of accurately reconstructed images are identified from point spread functions. As the qualities of accurately reconstructed microwave images are similar to the qualities of focused microscopic and photographic images, this work proposes the use of focal quality metrics for average dielectric property estimation. The robustness of the parameter search is evaluated using experimental dielectrically heterogeneous phantoms on the three-dimensional volumetric image. Based on a very broad initial estimate of the average dielectric properties, this paper shows how these metrics can be used as suitable fitness functions in parameter search algorithms to reconstruct clear and focused microwave radar images.
Robust Alternatives to the Standard Deviation in Processing of Physics Experimental Data
NASA Astrophysics Data System (ADS)
Shulenin, V. P.
2016-10-01
Properties of robust estimators of the scale parameter are studied. It is noted that the median of absolute deviations and the modified estimator of the average Gini differences have asymptotically normal distributions and bounded influence functions, are B-robust estimators, and hence, unlike the standard deviation estimator, are protected from the presence of outliers in the sample. Results of a comparison of scale parameter estimators are given for a Gaussian model with contamination. An adaptive variant of the modified estimator of the average Gini differences is considered.
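A minimal sketch of the two classical estimators mentioned, each rescaled to be consistent for the Gaussian standard deviation; the modified and adaptive variants discussed in the paper are not reproduced, and the contaminated sample is made up.

import numpy as np

def mad_scale(x):
    x = np.asarray(x, dtype=float)
    return 1.4826 * np.median(np.abs(x - np.median(x)))      # consistency factor for a Gaussian

def gini_mean_difference_scale(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    gmd = 2.0 * np.sum((2 * i - n - 1) * x) / (n * (n - 1))  # Gini's mean difference E|X - Y|
    return gmd * np.sqrt(np.pi) / 2.0                        # Gaussian-consistent rescaling

x = np.concatenate([np.random.default_rng(1).normal(0, 1, 95), [8, -9, 10, 12, -11]])
print(np.std(x, ddof=1), mad_scale(x), gini_mean_difference_scale(x))  # SD inflated by outliers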
NASA Astrophysics Data System (ADS)
Cao, B.; Domke, G. M.; Russell, M.; McRoberts, R. E.; Walters, B. F.
2017-12-01
Forest ecosystems contribute substantially to carbon (C) storage. The dynamics of litter decomposition, translocation and stabilization into soil layers are essential processes in the functioning of forest ecosystems, as they control the cycling of soil organic matter and the accumulation and release of C to the atmosphere. Therefore, the spatial distributions of litter and soil C stocks are important in greenhouse gas estimation and reporting and inform land management decisions, policy, and climate change mitigation strategies. In this study, we explored the effects of spatial aggregation of climatic, biotic, topographic and soil input data on national estimates of litter and soil C stocks and characterized the spatial distribution of litter and soil C stocks in the conterminous United States. Data from the Forest Inventory and Analysis (FIA) program within the US Forest Service were used with vegetation phenology data estimated from LANDSAT imagery (30 m) and raster data describing relevant environmental parameters (e.g. temperature, precipitation, topographic properties) for the entire conterminous US. Litter and soil C stocks were estimated and mapped through geostatistical analysis and statistical uncertainty bounds on the pixel level predictions were constructed using a Monte Carlo-bootstrap technique, by which credible variance estimates for the C stocks were calculated. The sensitivity of model estimates to spatial aggregation depends on geographic region. Further, using long-term (30-year) climate averages during periods with strong climatic trends results in large differences in litter and soil C stock estimates. In addition, results suggest that local topographic aspect is an important variable in litter and soil C estimation at the continental scale.
An assessment of air pollutant exposure methods in Mexico City, Mexico.
Rivera-González, Luis O; Zhang, Zhenzhen; Sánchez, Brisa N; Zhang, Kai; Brown, Daniel G; Rojas-Bracho, Leonora; Osornio-Vargas, Alvaro; Vadillo-Ortega, Felipe; O'Neill, Marie S
2015-05-01
Geostatistical interpolation methods to estimate individual exposure to outdoor air pollutants can be used in pregnancy cohorts where personal exposure data are not collected. Our objectives were to a) develop four assessment methods (citywide average (CWA); nearest monitor (NM); inverse distance weighting (IDW); and ordinary Kriging (OK)), and b) compare daily metrics and cross-validations of interpolation models. We obtained 2008 hourly data from Mexico City's outdoor air monitoring network for PM10, PM2.5, O3, CO, NO2, and SO2 and constructed daily exposure metrics for 1,000 simulated individual locations across five populated geographic zones. Descriptive statistics from all methods were calculated for dry and wet seasons, and by zone. We also evaluated IDW and OK methods' ability to predict measured concentrations at monitors using cross validation and a coefficient of variation (COV). All methods were performed using SAS 9.3, except ordinary Kriging which was modeled using R's gstat package. Overall, mean concentrations and standard deviations were similar among the different methods for each pollutant. Correlations between methods were generally high (r=0.77 to 0.99). However, ranges of estimated concentrations determined by NM, IDW, and OK were wider than the ranges for CWA. Root mean square errors for OK were consistently equal to or lower than for the IDW method. OK standard errors varied considerably between pollutants and the computed COVs ranged from 0.46 (least error) for SO2 and PM10 to 3.91 (most error) for PM2.5. OK predicted concentrations measured at the monitors better than IDW and NM. Given the similarity in results for the exposure methods, OK is preferred because this method alone provides predicted standard errors which can be incorporated in statistical models. The daily estimated exposures calculated using these different exposure methods provide flexibility to evaluate multiple windows of exposure during pregnancy, not just trimester or pregnancy-long exposures. Many studies evaluating associations between outdoor air pollution and adverse pregnancy outcomes rely on outdoor air pollution monitoring data linked to information gathered from large birth registries, and often lack residence location information needed to estimate individual exposure. This study simulated 1,000 residential locations to evaluate four air pollution exposure assessment methods, and describes possible exposure misclassification from using spatial averaging versus geostatistical interpolation models. An implication of this work is that policies to reduce air pollution and exposure among pregnant women based on epidemiologic literature should take into account possible error in estimates of effect when spatial averages alone are evaluated.
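Of the four exposure assessment methods compared, inverse distance weighting is simple to sketch; the Python fragment below estimates concentrations at simulated residential locations from monitor values, with all coordinates and values made up for illustration.

import numpy as np

def idw(xy_monitors, values, xy_points, power=2.0, eps=1e-9):
    """Estimate pollutant concentration at xy_points from monitor values by inverse distance weighting."""
    d = np.linalg.norm(xy_points[:, None, :] - xy_monitors[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values[None, :]).sum(axis=1) / w.sum(axis=1)

monitors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # monitor coordinates (km)
pm25 = np.array([35.0, 50.0, 20.0])                           # daily means at the monitors
homes = np.array([[2.0, 3.0], [8.0, 1.0]])                    # simulated residential locations
print(idw(monitors, pm25, homes))                             # daily exposure estimate per location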
Estimation of average daily traffic on local roads in Kentucky.
DOT National Transportation Integrated Search
2016-07-01
Kentucky Transportation Cabinet (KYTC) officials use annual average daily traffic (AADT) to estimate intersection : performance across the state maintained highway system. KYTC currently collects AADTs for state maintained : roads but frequently lack...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-05
... review of the ALJ's determination concerning the ALJ's findings on claim construction, infringement... Commission has also determined to review the ID's construction of the ``extracting'' limitation of claim 8 as... construction of the claim limitation ``accumulatively averaging working conditions of lots previously processed...
Tracy, Sally K; Tracy, Mark B
2003-08-01
To estimate the cost of "the cascade" of obstetric interventions introduced during labour for low risk women. A cost formula derived from population data. New South Wales, Australia. All 171,157 women having a live baby during 1996 and 1997. Four groups of interventions that occur during labour were identified. A cost model was constructed using the known age-adjusted rates for low risk women having one of three birth outcomes following these pre-specified interventions. Costs were based on statewide averages for the cost of labour and birth in hospital. The outcome measure is an "average cost unit per woman" for low risk women, predicted by the level of intervention during labour. Obstetric care is classified as either private obstetric care in a private or public hospital, or routine public hospital care. The relative cost of birth increased by up to 50% for low risk primiparous women and up to 36% for low risk multiparous women as labour interventions accumulated. An epidural was associated with a sharp increase in cost of up to 32% for some primiparous low risk women, and up to 36% for some multiparous low risk women. Private obstetric care increased the overall relative cost by 9% for primiparous low risk women and 4% for multiparous low risk women. The initiation of a cascade of obstetric interventions during labour for low risk women is costly to the health system. Private obstetric care adds further to the cost of care for low risk women.
DOT National Transportation Integrated Search
2010-09-01
Tools are proposed for carbon footprint estimation of transportation construction projects and decision support : for construction firms that must make equipment choice and usage decisions that affect profits, project duration : and greenhouse gas em...
NASA Astrophysics Data System (ADS)
Tuan, Nguyen Huy; Van Au, Vo; Khoa, Vo Anh; Lesnic, Daniel
2017-05-01
The identification of the population density of a logistic equation backwards in time associated with nonlocal diffusion and nonlinear reaction, motivated by the biology and ecology fields, is investigated. The diffusion depends on an integral average of the population density whilst the reaction term is a global or local Lipschitz function of the population density. After discussing the ill-posedness of the problem, we apply the quasi-reversibility method to construct stable approximation problems. It is shown that the regularized solutions stemming from this method not only depend continuously on the final data, but also converge strongly to the exact solution in the L²-norm. New error estimates together with stability results are obtained. Furthermore, numerical examples are provided to illustrate the theoretical results.
Adaptive aperture for Geiger mode avalanche photodiode flash ladar systems.
Wang, Liang; Han, Shaokun; Xia, Wenze; Lei, Jieyu
2018-02-01
Although the Geiger-mode avalanche photodiode (GM-APD) flash ladar system offers the advantages of high sensitivity and simple construction, its detection performance is influenced not only by the incoming signal-to-noise ratio but also by the absolute number of noise photons. In this paper, we deduce a hyperbolic approximation to estimate the noise-photon number from the false-firing percentage in a GM-APD flash ladar system under dark conditions. By using this hyperbolic approximation function, we introduce a method to adapt the aperture to reduce the number of incoming background-noise photons. Finally, the simulation results show that the adaptive-aperture method decreases the false probability in all cases, increases the detection probability provided that the signal exceeds the noise, and decreases the average ranging error per frame.
Estimating the D-Region Ionospheric Electron Density Profile Using VLF Narrowband Transmitters
NASA Astrophysics Data System (ADS)
Gross, N. C.; Cohen, M.
2016-12-01
The D-region ionospheric electron density profile plays an important role in many applications, including long-range and transionospheric communications, coupling between the lower atmosphere and the upper ionosphere, and estimation of very low frequency (VLF) wave propagation within the earth-ionosphere waveguide. However, measuring the D-region ionospheric density profile has been a challenge. The D-region, at about 60 to 90 km in altitude, is higher than planes and balloons can fly but lower than satellites can orbit. Researchers have previously used VLF remote sensing techniques, from either narrowband transmitters or sferics, to estimate the density profile, but these estimates typically cover a short time frame and a single propagation path. We report on an effort to construct estimates of the D-region ionospheric electron density profile over multiple narrowband transmission paths for long periods of time. Measurements from multiple transmitters at multiple receivers are analyzed concurrently to minimize false solutions and improve accuracy. Likewise, time averaging is used to remove short transient noise at the receivers. The cornerstone of the algorithm is an artificial neural network (ANN), whose inputs are the received amplitude and phase for the narrowband transmitters and whose outputs are h' and beta, the parameters of the commonly used two-parameter exponential electron density profile. Training data for the ANN are generated using the Navy's Long-Wavelength Propagation Capability (LWPC) model. Results show the algorithm performs well under smooth ionospheric conditions and when proper geometries for the transmitters and receivers are used.
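A schematic of the ANN step might look like the following sketch, with random arrays standing in for LWPC-generated training data and for the time-averaged receiver measurements; the network size, feature layout and parameter ranges are assumptions, not the study's configuration.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_paths = 6                                            # transmitter-receiver pairs observed together
X_train = rng.normal(size=(5000, 2 * n_paths))         # amplitude and phase per path (placeholder for LWPC output)
y_train = np.column_stack([rng.uniform(70, 85, 5000),      # h' in km (placeholder)
                           rng.uniform(0.3, 0.6, 5000)])   # beta in km^-1 (placeholder)
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X_train, y_train)

X_obs = rng.normal(size=(1, 2 * n_paths))              # time-averaged measurements from the receivers
print(ann.predict(X_obs))                              # estimated [h', beta]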
Soil Moisture Content Estimation using GPR Reflection Travel Time
NASA Astrophysics Data System (ADS)
Lunt, I. A.; Hubbard, S. S.; Rubin, Y.
2003-12-01
Ground-penetrating radar (GPR) reflection travel time data were used to estimate changes in soil water content under a range of soil saturation conditions throughout the growing season at a California winery. Data were collected during four data acquisition campaigns over an 80 by 180 m area using 100 MHz surface GPR antennae. GPR reflections were associated with a thin, low permeability clay layer located between 0.8 to 1.3 m below the ground surface that was calibrated with borehole information and mapped across the study area. Field infiltration tests and neutron probe logs suggest that the thin clay layer inhibited vertical water flow, and was coincident with high volumetric water content (VWC) values. The GPR reflection two-way travel time and the depth of the reflector at borehole locations were used to calculate an average dielectric constant for soils above the reflector. A site-specific relationship between the dielectric constant and VWC was then used to estimate the depth-averaged VWC of the soils above the reflector. Compared to average VWC measurements from calibrated neutron probe logs over the same depth interval, the average VWC estimates obtained from GPR reflections had an RMS error of 2 percent. We also investigated the estimation of VWC using reflections associated with an advancing water front, and found that estimates of average VWC to the water front could be obtained with similar accuracy. These results suggested that the two-way travel time to a GPR reflection associated with a geological surface or wetting front can be used under natural conditions to obtain estimates of average water content when borehole control is available. The GPR reflection method therefore has potential for monitoring soil water content over large areas and under variable hydrological conditions.
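The estimation chain described above can be summarized in a few lines: the two-way travel time to a reflector of known depth gives an average velocity, hence a dielectric constant, hence a depth-averaged VWC through a petrophysical relation. Topp's equation is used below only as a stand-in for the site-specific relationship calibrated in the study; the input numbers are illustrative.

import numpy as np

c = 0.2998                                     # speed of light in m/ns

def average_vwc_from_twt(twt_ns, reflector_depth_m):
    v = 2.0 * reflector_depth_m / twt_ns       # average velocity above the reflector (m/ns)
    k = (c / v) ** 2                           # average dielectric constant of the soil column
    # Topp et al. (1980) relation, used here only as an illustrative substitute
    return -5.3e-2 + 2.92e-2 * k - 5.5e-4 * k ** 2 + 4.3e-6 * k ** 3

print(average_vwc_from_twt(twt_ns=18.0, reflector_depth_m=1.0))   # depth-averaged VWC (~0.13)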
A NEW INSAR DERIVED DEM OF BLACK RAPIDS GLACIER
NASA Astrophysics Data System (ADS)
Shugar, D. H.; Rabus, B.; Clague, J. J.
2009-12-01
We have constructed a new digital elevation model representing the 1995 surface of surge-type Black Rapids Glacier and the surrounding central Alaska Range, using ERS-1/2 repeat-pass interferometry. First, we isolated the topographic phase from three interferograms with contrasting perpendicular baselines. Next we attempted to automatically unwrap this topographic phase but encountered numerous errors due to the terrain containing areas of poor coherence from fringe aliasing, radar layover or shadow. We then consistently corrected these persistent phase-unwrapping errors in all three interferograms using an iterative semi-automated approach that capitalizes on the multi-baseline nature of the data set. Over the surface of Black Rapids Glacier, the accuracy of the new DEM is estimated at better than +/- 12 m. Ground-surveyed spot elevations from 1995 corroborate this accuracy estimate. Comparison of the new DEM with a 1951 U.S. Geological Survey topographic map, and with ground survey data from other years, shows the gradual return of Black Rapids Glacier to pre-surge conditions. In the 44-year period between 1951 and 1995 the observed average steepening of the longitudinal profile is ~0.6°. The maximum elevation changes in the ablation and accumulation zones are -256 m and +75 m, respectively, suggesting corresponding average rates of elevation change of about -5.8 m/yr and +1.7 m/yr. These rates are 1.5-2 times higher than those indicated by the ground survey spot elevation measurements over the period 1975 to 2005. Considering the significant overlap of the two periods of measurement, the inferred average rates for 1951-1975 would have to be very large (-7.5 m/yr and +2.3 m/yr, respectively) for these two findings to be consistent. A second comparison with the recently released ASTER G-DEM (data from 2001) led to no glaciologically usable results due to major artifacts in the ASTER G-DEM. We therefore conclude that the 1951 U.S. Geological Survey map and the ASTER G-DEM both appear biased over the Black Rapids Glacier surface and caution is advised when using either for quantitative estimates of elevation change over the glacier surface.
Development of Economic Factors in Tunnel Construction
DOT National Transportation Integrated Search
1977-12-01
The escalating cost of underground construction of urban transportation systems has made transit planning, especially construction cost estimating, difficult. This is a study of the cost of construction of underground, rapid transit tunnels in soft g...
Groupwise registration of MR brain images with tumors.
Tang, Zhenyu; Wu, Yihong; Fan, Yong
2017-08-04
A novel groupwise image registration framework is developed for registering MR brain images with tumors. Our method iteratively estimates a normal-appearance counterpart for each tumor image to be registered and constructs a directed graph (digraph) of normal-appearance images to guide the groupwise image registration. In particular, our method maps each tumor image to its normal-appearance counterpart by identifying and inpainting brain tumor regions with intensity information estimated using a low-rank plus sparse matrix decomposition based image representation technique. The estimated normal-appearance images are groupwise registered to a group center image, guided by a digraph of images chosen so that the total length of 'image registration paths' is minimized, and the original tumor images are then warped to the group center image using the resulting deformation fields. We have evaluated our method on both simulated and real MR brain tumor images. The registration results were evaluated with overlap measures of corresponding brain regions and the average entropy of image intensity information, and Wilcoxon signed rank tests were adopted to compare different methods with respect to their regional overlap measures. Compared with a groupwise image registration method applied to normal-appearance images estimated using the traditional low-rank plus sparse matrix decomposition based image inpainting, our method achieved higher image registration accuracy with statistical significance (p = 7.02 × 10^-9).
Sauvé, Jean-François; Beaudry, Charles; Bégin, Denis; Dion, Chantal; Gérin, Michel; Lavoué, Jérôme
2012-09-01
A quantitative determinants-of-exposure analysis of respirable crystalline silica (RCS) levels in the construction industry was performed using a database compiled from an extensive literature review. Statistical models were developed to predict work-shift exposure levels by trade. Monte Carlo simulation was used to recreate exposures derived from summarized measurements which were combined with single measurements for analysis. Modeling was performed using Tobit models within a multimodel inference framework, with year, sampling duration, type of environment, project purpose, project type, sampling strategy and use of exposure controls as potential predictors. 1346 RCS measurements were included in the analysis, of which 318 were non-detects and 228 were simulated from summary statistics. The model containing all the variables explained 22% of total variability. Apart from trade, sampling duration, year and strategy were the most influential predictors of RCS levels. The use of exposure controls was associated with an average decrease of 19% in exposure levels compared to none, and increased concentrations were found for industrial, demolition and renovation projects. Predicted geometric means for year 1999 were the highest for drilling rig operators (0.238 mg m(-3)) and tunnel construction workers (0.224 mg m(-3)), while the estimated exceedance fraction of the ACGIH TLV by trade ranged from 47% to 91%. The predicted geometric means in this study indicated important overexposure compared to the TLV. However, the low proportion of variability explained by the models suggests that the construction trade is only a moderate predictor of work-shift exposure levels. The impact of the different tasks performed during a work shift should also be assessed to provide better management and control of RCS exposure levels on construction sites.
Curtis L. VanderSchaaf; Harold E. Burkhart
2010-01-01
Maximum size-density relationships (MSDR) provide natural resource managers useful information about the relationship between tree density and average tree size. Obtaining a valid estimate of how maximum tree density changes as average tree size changes is necessary to accurately describe these relationships. This paper examines three methods to estimate the slope of...
Variation in leader length of bitterbrush
Richard L. Hubbard; David. Dunaway
1958-01-01
The estimation of herbage production and utilization in browse plants has been a problem for many years. Most range technicians have simply estimated the average length of twigs or leaders, then expressed use by deer and livestock as a percentage thereof based on the estimated average length left after grazing. Riordan used this method on mountain mahogany (
SU-C-207-02: A Method to Estimate the Average Planar Dose From a C-Arm CBCT Acquisition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Supanich, MP
2015-06-15
Purpose: The planar average dose in a C-arm Cone Beam CT (CBCT) acquisition had been estimated in the past by averaging the four peripheral dose measurements in a CTDI phantom and then using the standard 2/3rds peripheral and 1/3 central CTDIw method (hereafter referred to as Dw). The accuracy of this assumption has not been investigated and the purpose of this work is to test the presumed relationship. Methods: Dose measurements were made in the central plane of two consecutively placed 16cm CTDI phantoms using a 0.6cc ionization chamber at each of the 4 peripheral dose bores and in the central dose bore for a C-arm CBCT protocol. The same setup was scanned with a circular cut-out of radiosensitive gafchromic film positioned between the two phantoms to capture the planar dose distribution. Calibration curves for color pixel value after scanning were generated from film strips irradiated at different known dose levels. The planar average dose for red and green pixel values was calculated by summing the dose values in the irradiated circular film cut out. Dw was calculated using the ionization chamber measurements and film dose values at the location of each of the dose bores. Results: The planar average dose using both the red and green pixel color calibration curves was within 10% agreement of the planar average dose estimated using the Dw method of film dose values at the bore locations. Additionally, an average of the planar average doses calculated using the red and green calibration curves differed from the ionization chamber Dw estimate by only 5%. Conclusion: The method of calculating the planar average dose at the central plane of a C-arm CBCT non-360 rotation by calculating Dw from peripheral and central dose bore measurements is a reasonable approach to estimating the planar average dose. Research Grant, Siemens AG.
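For reference, the weighted-dose estimate that the film measurements are compared against is a simple combination of the CTDI-phantom readings; a quick sketch with made-up numbers, not the study's data:

def weighted_planar_dose(peripheral_mgy, central_mgy):
    """peripheral_mgy: doses at the 4 peripheral bores; central_mgy: central bore dose."""
    peripheral_avg = sum(peripheral_mgy) / len(peripheral_mgy)
    return (2.0 / 3.0) * peripheral_avg + (1.0 / 3.0) * central_mgy   # the standard Dw weighting

print(weighted_planar_dose([6.1, 5.8, 4.9, 5.2], 3.7))   # Dw in mGy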
Estimating soil water content from ground penetrating radar coarse root reflections
NASA Astrophysics Data System (ADS)
Liu, X.; Cui, X.; Chen, J.; Li, W.; Cao, X.
2016-12-01
Soil water content (SWC) is an indispensable variable for understanding the organization of natural ecosystems and biodiversity. Especially in semiarid and arid regions, soil moisture is the plants' primary source of water and largely determines their strategies for growth and survival, such as root depth, distribution and competition between them. Ground penetrating radar (GPR), a noninvasive geophysical technique, has been regarded over the past decades as an accurate tool for measuring soil water content at intermediate scale. For soil water content estimation with surface GPR, the fixed-antenna-offset reflection method has been considered to have potential to obtain the average soil water content between the land surface and reflectors, and to provide high resolution with little measurement time. In this study, a 900 MHz surface GPR antenna was used to estimate SWC with the fixed offset reflection method; plant coarse roots (with diameters greater than 5 mm) were regarded as reflectors; and an advanced GPR data interpretation method, HADA (hyperbola automatic detection algorithm), was introduced to automatically obtain the average velocity by recognizing coarse-root hyperbolic reflection signals on GPR radargrams. In addition, a formula was deduced to determine the interval average SWC between two roots at different depths. We examined the performance of the proposed method on a dataset simulated under different scenarios. Results showed that HADA could provide a reasonable average velocity to estimate SWC without knowledge of root depth, and that the interval average SWC could also be determined. When the proposed method was applied to a real-field measurement dataset, a very small vertical soil water content gradient of about 0.006 with depth was captured as well. Therefore, the proposed method can be used to estimate average soil water content from GPR coarse-root reflections and to obtain the interval average SWC between two roots at different depths. It is very promising for measuring root-zone soil moisture and mapping soil moisture distribution around a shrub or even at field plot scale.
Hall, Justin M; Azar, Frederick M; Miller, Robert H; Smith, Richard; Throckmorton, Thomas W
2014-09-01
We compared accuracy and reliability of a traditional method of measurement (most cephalad vertebral spinous process that can be reached by a patient with the extended thumb) to estimates made with the shoulder in abduction to determine if there were differences between the two methods. Six physicians with fellowship training in sports medicine or shoulder surgery estimated measurements in 48 healthy volunteers. Three were randomly chosen to make estimates of both internal rotation measurements for each volunteer. An independent observer made objective measurements on lateral scoliosis films (spinous process method) or with a goniometer (abduction method). Examiners were blinded to objective measurements as well as to previous estimates. Intraclass coefficients for interobserver reliability for the traditional method averaged 0.75, indicating good agreement among observers. The difference in vertebral level estimated by the examiner and the actual radiographic level averaged 1.8 levels. The intraclass coefficient for interobserver reliability for the abduction method averaged 0.81 for all examiners, indicating near-perfect agreement. Confidence intervals indicated that estimates were an average of 8° different from the objective goniometer measurements. Pearson correlation coefficients of intraobserver reliability for the abduction method averaged 0.94, indicating near-perfect agreement within observers. Confidence intervals demonstrated repeated estimates between 5° and 10° of the original. Internal rotation estimates made with the shoulder abducted demonstrated interobserver reliability superior to that of spinous process estimates, and reproducibility was high. On the basis of this finding, we now take glenohumeral internal rotation measurements with the shoulder in abduction and use a goniometer to maximize accuracy and objectivity. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
Statistical strategies for averaging EC50 from multiple dose-response experiments.
Jiang, Xiaoqi; Kopp-Schneider, Annette
2015-11-01
In most dose-response studies, repeated experiments are conducted to determine the EC50 value for a chemical, requiring averaging of EC50 estimates from a series of experiments. Two statistical strategies, the mixed-effect modeling and the meta-analysis approach, can be applied to estimate the average behavior of EC50 values over all experiments by considering the variabilities within and among experiments. We investigated these two strategies in two common situations for multiple dose-response experiments: (a) complete and explicit dose-response relationships are observed in all experiments, and (b) they are observed only in a subset of experiments. In case (a), the meta-analysis strategy is a simple and robust method to average EC50 estimates. In case (b), all experimental data sets can first be screened using the dose-response screening plot, which allows visualization and comparison of multiple dose-response experimental results. As long as more than three experiments provide information about complete dose-response relationships, the experiments that cover incomplete relationships can be excluded from the meta-analysis strategy of averaging EC50 estimates. If there are only two experiments containing complete dose-response information, the mixed-effects model approach is suggested. We subsequently provide a web application for non-statisticians to implement the proposed meta-analysis strategy of averaging EC50 estimates from multiple dose-response experiments.
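A minimal sketch of the meta-analysis strategy, assuming per-experiment EC50 estimates are pooled on the log scale by inverse-variance weighting; random-effects corrections and the mixed-model alternative are omitted, and the numbers are illustrative.

import numpy as np

def meta_average_ec50(ec50, se_log_ec50):
    """ec50: per-experiment EC50 estimates; se_log_ec50: standard errors of log(EC50)."""
    log_ec50 = np.log(np.asarray(ec50, dtype=float))
    w = 1.0 / np.asarray(se_log_ec50, dtype=float) ** 2   # inverse-variance weights
    pooled = np.sum(w * log_ec50) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return np.exp(pooled), pooled_se

print(meta_average_ec50([1.2, 0.9, 1.5], [0.10, 0.15, 0.20]))   # pooled EC50, SE on the log scale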
Estimating sales and sales market share from sales rank data for consumer appliances
NASA Astrophysics Data System (ADS)
Touzani, Samir; Van Buskirk, Robert
2016-06-01
Our motivation in this work is to find an adequate probability distribution to fit sales volumes of different appliances. This distribution allows for the translation of sales rank into sales volume. This paper shows that the log-normal distribution and specifically the truncated version are well suited for this purpose. We demonstrate that using sales proxies derived from a calibrated truncated log-normal distribution function can be used to produce realistic estimates of market average product prices, and product attributes. We show that the market averages calculated with the sales proxies derived from the calibrated, truncated log-normal distribution provide better market average estimates than sales proxies estimated with simpler distribution functions.
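As an illustration of the rank-to-volume translation, one can assign the product at rank r among N products the corresponding upper quantile of a fitted log-normal; the truncation and calibration steps of the paper are omitted, and the distribution parameters below are assumptions.

import numpy as np
from scipy.stats import lognorm

def sales_proxy_from_rank(rank, n_products, sigma=1.5, scale=200.0):
    """sigma/scale are assumed log-normal parameters; in practice they would be calibrated."""
    q = 1.0 - (np.asarray(rank, dtype=float) - 0.5) / n_products   # quantile implied by the rank
    return lognorm.ppf(q, s=sigma, scale=scale)

ranks = np.array([1, 10, 100, 1000])
proxy = sales_proxy_from_rank(ranks, n_products=5000)
print(proxy)                                               # lower rank number -> larger sales proxy
print(np.average([300, 450, 600, 800], weights=proxy))     # sales-weighted market average price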
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1977-01-01
Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
Evaluation of Techniques Used to Estimate Cortical Feature Maps
Katta, Nalin; Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.
2011-01-01
Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected. PMID:21889537
Waldron, Marcus C.; Archfield, Stacey A.
2006-01-01
Factors affecting reservoir firm yield, as determined by application of the Massachusetts Department of Environmental Protection's Firm Yield Estimator (FYE) model, were evaluated, modified, and tested on 46 streamflow-dominated reservoirs representing 15 Massachusetts drinking-water supplies. The model uses a mass-balance approach to determine the maximum average daily withdrawal rate that can be sustained during a period of record that includes the 1960s drought-of-record. The FYE methodology to estimate streamflow to the reservoir at an ungaged site was tested by simulating streamflow at two streamflow-gaging stations in Massachusetts and comparing the simulated streamflow to the observed streamflow. In general, the FYE-simulated flows agreed well with observed flows. There were substantial deviations from the measured values for extreme high and low flows. A sensitivity analysis determined that the model's streamflow estimates are most sensitive to input values for average annual precipitation, reservoir drainage area, and the soil-retention number-a term that describes the amount of precipitation retained by the soil in the basin. The FYE model currently provides the option of using a 1,000-year synthetic record constructed by randomly sampling 2-year blocks of concurrent streamflow and precipitation records 500 times; however, the synthetic record has the potential to generate records of precipitation and streamflow that do not reflect the worst historical drought in Massachusetts. For reservoirs that do not have periods of drawdown greater than 2 years, the bootstrap does not offer any additional information about the firm yield of a reservoir than the historical record does. For some reservoirs, the use of a synthetic record to determine firm yield resulted in as much as a 30-percent difference between firm-yield values from one simulation to the next. Furthermore, the assumption that the synthetic traces of streamflow are statistically equivalent to the historical record is not valid. For multiple-reservoir systems, the firm-yield estimate was dependent on the reservoir system's configuration. The firm yield of a system is sensitive to how the water is transferred from one reservoir to another, the capacity of the connection between the reservoirs, and how seasonal variations in demand are represented in the FYE model. Firm yields for 25 (14 single-reservoir systems and 11 multiple-reservoir systems) reservoir systems were determined by using the historical records of streamflow and precipitation. Current water-use data indicate that, on average, 20 of the 25 reservoir systems in the study were operating below their estimated firm yield; during months with peak demands, withdrawals exceeded the firm yield for 8 reservoir systems.
HIV infection in the South African construction industry.
Bowen, Paul; Govender, Rajen; Edwards, Peter; Lake, Antony
2018-06-01
South Africa has one of the highest HIV prevalences in the world, and compared with other sectors of the national economy, the construction industry is disproportionately adversely affected. Using data collected nationally from more than 57,000 construction workers, HIV infection among South African construction workers was estimated, together with an assessment of the association between worker HIV serostatus and worker characteristics of gender, age, nature of employment, occupation, and HIV testing history. The HIV infection of construction workers was estimated to be lower than that found in a smaller 2008 sample. All worker characteristics are significantly associated with HIV serostatus. In terms of most at-risk categories: females are more at risk of HIV infection than males; workers in the 30-49 year old age group are more at risk than other age groups; workers employed on a less permanent basis are more at risk; as are workers not having recently tested for HIV. Among occupations in the construction industry, general workers, artisans, and operator/drivers are those most at risk. Besides yielding more up-to-date estimated infection statistics, this research also identifies vulnerable sub-groups as valuable pointers for more targeted workplace interventions by construction firms.
Hevesi, Joseph A.; Flint, Alan L.; Istok, Jonathan D.
1992-01-01
Values of average annual precipitation (AAP) may be important for hydrologic characterization of a potential high-level nuclear-waste repository site at Yucca Mountain, Nevada. Reliable measurements of AAP are sparse in the vicinity of Yucca Mountain, and estimates of AAP were needed for an isohyetal mapping over a 2600-square-mile watershed containing Yucca Mountain. Estimates were obtained with a multivariate geostatistical model developed using AAP and elevation data from a network of 42 precipitation stations in southern Nevada and southeastern California. An additional 1531 elevations were obtained to improve estimation accuracy. Isohyets representing estimates obtained using univariate geostatistics (kriging) defined a smooth and continuous surface. Isohyets representing estimates obtained using multivariate geostatistics (cokriging) defined an irregular surface that more accurately represented expected local orographic influences on AAP. Cokriging results included a maximum estimate within the study area of 335 mm at an elevation of 7400 ft, an average estimate of 157 mm for the study area, and an average estimate of 172 mm at eight locations in the vicinity of the potential repository site. Kriging estimates tended to be lower in comparison because the increased AAP expected for remote mountainous topography was not adequately represented by the available sample. Regression results between cokriging estimates and elevation were similar to regression results between measured AAP and elevation. The position of the cokriging 250-mm isohyet relative to the boundaries of pinyon pine and juniper woodlands provided indirect evidence of improved estimation accuracy because the cokriging result agreed well with investigations by others concerning the relationship between elevation, vegetation, and climate in the Great Basin. Calculated estimation variances were also mapped and compared to evaluate improvements in estimation accuracy. Cokriging estimation variances were reduced by an average of 54% relative to kriging variances within the study area. Cokriging reduced estimation variances at the potential repository site by 55% relative to kriging. The usefulness of an existing network of stations for measuring AAP within the study area was evaluated using cokriging variances, and twenty additional stations were located for the purpose of improving the accuracy of future isohyetal mappings. Using the expanded network of stations, the maximum cokriging estimation variance within the study area was reduced by 78% relative to the existing network, and the average estimation variance was reduced by 52%.
Automated side-chain model building and sequence assignment by template matching.
Terwilliger, Thomas C
2003-01-01
An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
NASA Astrophysics Data System (ADS)
Wong, Pak-kin; Vong, Chi-man; Wong, Hang-cheong; Li, Ke
2010-05-01
Modern automotive spark-ignition (SI) power performance usually refers to output power and torque, which are significantly affected by the setup of control parameters in the engine management system (EMS). EMS calibration is done empirically through tests on the dynamometer (dyno) because no exact mathematical engine model is yet available. With least squares support vector machines (LS-SVM), an emerging nonlinear function estimation technique, the approximate power performance model of an SI engine can be determined by training on sample data acquired from the dyno. A novel incremental algorithm based on the typical LS-SVM is also proposed in this paper, so the power performance models built from the incremental LS-SVM can be updated whenever new training data arrive. By updating the models, their accuracy can be continuously increased. The predicted results from the models estimated with the incremental LS-SVM are in good agreement with the actual test results, with almost the same average accuracy as retraining the models from scratch, but the incremental algorithm can significantly shorten the model construction time when new training data arrive.
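For context, a compact batch LS-SVM regression sketch is shown below; the incremental update proposed in the paper is not reproduced, and the kernel choice, hyperparameters and synthetic dyno data are assumptions.

import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    # LS-SVM dual problem: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                       # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Dyno samples: engine control settings -> measured torque (synthetic placeholders)
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (50, 2))                   # e.g. normalised ignition advance, air-fuel ratio
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.02, 50)
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X, b, alpha, X[:3]))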
Adhikari, S; Biswas, A; Bandyopadhyay, T K; Ghosh, P D
2014-06-01
Pointed gourd (Trichosanthes dioica Roxb.) is an economically important cucurbit and is extensively propagated through vegetative means, viz. vine and root cuttings. As the accessions are poorly characterized, it is important at the beginning of a breeding programme to discriminate among available genotypes to establish the level of genetic diversity. The genetic diversity of 10 pointed gourd races, referred to as accessions, was evaluated. DNA profiling was generated using 10 sequence-independent RAPD markers. A total of 58 scorable loci were observed, of which 18 (31.03%) were considered polymorphic. Genetic diversity parameters [average and effective number of alleles, Shannon's index, percent polymorphism, Nei's gene diversity, polymorphic information content (PIC)] for RAPD, along with UPGMA clustering based on Jaccard's coefficient, were estimated. The UPGMA dendrogram constructed from the RAPD analysis grouped the 10 pointed gourd accessions in a single cluster, which may represent members of one heterotic group. RAPD analysis showed promise as an effective tool for estimating genetic polymorphism among different accessions of pointed gourd.
Use of vegetation health data for estimation of aus rice yield in bangladesh.
Rahman, Atiqur; Roytman, Leonid; Krakauer, Nir Y; Nizamuddin, Mohammad; Goldberg, Mitch
2009-01-01
Rice is a vital staple crop for Bangladesh and surrounding countries, with interannual variation in yields depending on climatic conditions. We compared Bangladesh yield of aus rice, one of the main varieties grown, from official agricultural statistics with Vegetation Health (VH) Indices [Vegetation Condition Index (VCI), Temperature Condition Index (TCI) and Vegetation Health Index (VHI)] computed from Advanced Very High Resolution Radiometer (AVHRR) data covering a period of 15 years (1991-2005). A strong correlation was found between aus rice yield and VCI and VHI during the critical period of aus rice development that occurs during March-April (weeks 8-13 of the year), several months in advance of the rice harvest. Stepwise principal component regression (PCR) was used to construct a model to predict yield as a function of critical-period VHI. The model reduced the yield prediction error variance by 62% compared with a prediction of average yield for each year. Remote sensing is a valuable tool for estimating rice yields well in advance of harvest and at a low cost.
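A principal component regression of this kind can be sketched in a few lines; the arrays below are random placeholders for the 15 years of critical-period VHI and the official yield statistics, so the fitted coefficients are meaningless except as an illustration of the workflow.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
vhi = rng.uniform(20, 80, size=(15, 6))                    # years x weeks 8-13 of the year (placeholder)
yield_t_ha = 1.2 + 0.01 * vhi.mean(axis=1) + rng.normal(0, 0.05, 15)   # placeholder yields

pca = PCA(n_components=2).fit(vhi)                         # reduce weekly VHI to leading components
model = LinearRegression().fit(pca.transform(vhi), yield_t_ha)
print(model.predict(pca.transform(vhi[-1:])))              # in-sample prediction for the last year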
Harley, Brendan A; Freyman, Toby M; Wong, Matthew Q; Gibson, Lorna J
2007-10-15
Cell-mediated contraction plays a critical role in many physiological and pathological processes, notably organized contraction during wound healing. Implantation of an appropriately formulated (i.e., mean pore size, chemical composition, degradation rate) three-dimensional scaffold into an in vivo wound site effectively blocks the majority of organized wound contraction and results in induced regeneration rather than scar formation. Improved understanding of cell contraction within three-dimensional constructs therefore represents an important area of study in tissue engineering. Studies of cell contraction within three-dimensional constructs typically calculate an average contractile force from the gross deformation of a macroscopic substrate by a large cell population. In this study, cellular solids theory has been applied to conventional column buckling relationships to quantify the magnitude of individual cell contraction events within a three-dimensional, collagen-glycosaminoglycan scaffold. This new technique can be used for studying cell mechanics with a wide variety of porous scaffolds that resemble low-density, open-cell foams. It extends previous methods for analyzing cell buckling of two-dimensional substrates to three-dimensional constructs. From data available in the literature, the mean contractile force (Fc) generated by individual dermal fibroblasts within the collagen-glycosaminoglycan scaffold was calculated to range between 11 and 41 nN (Fc=26+/-13 nN, mean+/-SD), with an upper bound of cell contractility estimated at 450 nN.
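The order of magnitude of such an estimate can be reproduced with the Euler buckling load of a pin-ended column, F = pi^2 * E * I / L^2, which a contracting cell must at least match to buckle a scaffold strut; the strut modulus and dimensions below are illustrative assumptions, not the paper's measured values.

import numpy as np

def euler_buckling_force(E_pa, strut_diameter_m, strut_length_m):
    I = np.pi * strut_diameter_m ** 4 / 64.0        # second moment of area of a circular strut
    return np.pi ** 2 * E_pa * I / strut_length_m ** 2

# Assumed strut: 3 um diameter, 100 um length, 5 MPa modulus (illustrative only)
F = euler_buckling_force(E_pa=5e6, strut_diameter_m=3e-6, strut_length_m=100e-6)
print(F * 1e9, "nN")                                 # on the order of tens of nN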
Do drug-free workplace programs prevent occupational injuries? Evidence from Washington State.
Wickizer, Thomas M; Kopjar, Branko; Franklin, Gary; Joesch, Jutta
2004-02-01
To evaluate the effect of a publicly sponsored drug-free workplace program on reducing the risk of occupational injuries. Workers' compensation claims data from the Washington State Department of Labor and Industries covering the period 1994 through 2000 and work-hours data reported by employers served as the data sources for the analysis. We used a pre-post design with a nonequivalent comparison group to assess the impact of the intervention on injury risk, measured in terms of differences in injury incidence rates. Two hundred and sixty-one companies that enrolled in the drug-free workplace program during the latter half of 1996 were compared with approximately 20,500 nonintervention companies. We tested autoregressive, integrated moving-average (ARIMA) models to assess the robustness of our findings. The drug-free workplace intervention was associated (p < .05) with a statistically significant decrease in injury rates for three industry groups: construction, manufacturing, and services. It was associated (p < .05) with a reduction in the incidence rate of more serious injuries involving four or more days of lost work time for two industry groups: construction and services. The ARIMA analysis supported these findings. The drug-free workplace program we studied was associated with a selective, industry-specific preventive effect. The strongest evidence of an intervention effect was for the construction industry. Estimated net cost savings for this industry were positive though small in magnitude.
dos Anjos, Daniela Brianne Martins; Rodrigues, Roberta Cunha Matheus; Padilha, Kátia Melissa; Pedrosa, Rafaela Batista dos Santos; Gallani, Maria Cecília Bueno Jayme
2016-01-01
ABSTRACT Objective: to evaluate the practicality, acceptability and floor and ceiling effects, estimate the reliability, and verify the convergent construct validity of the Heart Valve Disease Impact on Daily Life (IDCV) instrument in patients with mitral and/or aortic heart valve disease. Method: data were obtained from 86 heart valve disease patients in three phases: a face-to-face interview for sociodemographic and clinical characterization, followed by two telephone interviews for application of the instrument (test and retest). Results: regarding practicality and acceptability, the instrument was applied in an average time of 9.9 minutes and with 110% of responses, respectively. Ceiling and floor effects were observed for all domains, especially floor effects. Reliability was tested using the test-retest design to provide evidence of the temporal stability of the measurement. Significant negative correlations of moderate to strong magnitude were found between the score of the generic question about the impact of the disease and the IDCV scores, which points to the convergent construct validity of the instrument. Conclusion: the instrument to measure the impact of valve heart disease on the patient's daily life showed evidence of reliability and validity when applied to patients with heart valve disease. PMID:27992024
Children's Sense of Being a Writer: Identity Construction in Second Grade Writers Workshop
ERIC Educational Resources Information Center
Seban, Demet; Tavsanli, Ömer Faruk
2015-01-01
Literacy activities in which children invest in and understand literacy creates spaces for them to construct their identity as readers/writers and build their personal theories of literacy. This study presents the identity construction of second grade students who identified as successful, average or struggling in their first time engagement with…
Effects of a constructed wetland and pond system upon shallow groundwater quality
Ying Ouyang
2013-01-01
Constructed wetland (CW) and constructed pond (CP) are commonly utilized for removal of excess nutrients and certain pollutants from stormwater. This study characterized shallow groundwater quality for pre- and post-CW and CP system conditions using data from monitoring wells. Results showed that the average concentrations of groundwater phosphorus (P) decreased from...
[Simulation study of air quality health index in 5 cities in China: 2013-2015].
Wang, W T; Sun, Q H; Qin, J; Li, T T; Shi, X M
2017-03-10
Objective: To construct an air quality health index (AQHI) incorporating the air pollutants PM(2.5) and O(3) in Guangzhou, Shanghai, Xi'an, Beijing and Shenyang, and to explore the scientific soundness and feasibility of its application in China. Methods: The daily average concentrations of PM(2.5) and O(3) in air and the daily average mortality from 2013 to 2015 in the 5 cities in China, together with the exposure-response coefficients of PM(2.5) and O(3) for total mortality from meta-analyses in China, were used to construct local AQHI. The health risk levels of air pollution in the 5 cities were calculated and compared with the characteristics of the single-pollutant concentrations of PM(2.5) or O(3). Results: Among the 5 cities, the average concentration of PM(2.5) was highest in Beijing (82 μg/m(3)) and lowest in Guangzhou (46 μg/m(3)), and the average concentration of O(3) was highest in Shanghai (72 μg/m(3)) and lowest in Xi'an (45 μg/m(3)). In all the cities, the average concentration of PM(2.5) was highest in winter and lowest in summer; in summer, the average concentration of O(3) was lowest. The AQHI health risk levels showed that, on average, the 5 cities most frequently fell into the low or medium risk categories. Beijing had the highest frequency of high risk in summer (5.69%), and Xi'an had the highest frequency of extremely high risk in winter (1.63%). Conclusions: In this study, AQHI could be constructed using PM(2.5) and O(3) concentration data that can be obtained in many areas of China. The application of this index is scientifically sound and feasible in China.
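Schematically, an AQHI of this type converts each pollutant's daily concentration into an excess mortality risk via its exposure-response coefficient and sums the contributions; the coefficients and scaling constant below are placeholders, not the values derived in the study.

import numpy as np

def aqhi(pm25, o3, beta_pm25=0.0006, beta_o3=0.0004, scale=200.0):
    """pm25, o3: daily mean concentrations in ug/m3; betas: assumed log-risk per ug/m3."""
    excess_risk = (np.exp(beta_pm25 * pm25) - 1.0) + (np.exp(beta_o3 * o3) - 1.0)
    return scale * excess_risk                    # scaled index, then binned into risk levels

print(aqhi(pm25=82.0, o3=45.0))                   # index value to be classed as low/medium/high risk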
Estimation of Rainfall Sampling Uncertainty: A Comparison of Two Diverse Approaches
NASA Technical Reports Server (NTRS)
Steiner, Matthias; Zhang, Yu; Baeck, Mary Lynn; Wood, Eric F.; Smith, James A.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)
2002-01-01
The spatial and temporal intermittence of rainfall causes the averages of satellite observations of rain rate to differ from the "true" average rain rate over any given area and time period, even if the satellite observations are perfectly accurate. The difference between satellite averages based on occasional observations by satellite systems and the continuous-time average of rain rate is referred to as sampling error. In this study, rms sampling error estimates are obtained for average rain rates over boxes 100 km, 200 km, and 500 km on a side, for averaging periods of 1 day, 5 days, and 30 days. The study uses a multi-year, merged radar data product provided by Weather Services International Corp. at a resolution of 2 km in space and 15 min in time, over an area of the central U.S. extending from 35N to 45N in latitude and 100W to 80W in longitude. The intervals between satellite observations are assumed to be equal, and similar in size to what present and future satellite systems are able to provide (from 1 h to 12 h). The sampling error estimates are obtained using a resampling method called "resampling by shifts," and are compared to sampling error estimates proposed by Bell based on earlier work by Laughlin. The resampling estimates are found to scale with areal size and time period as the theory predicts. The dependence on average rain rate and time interval between observations is also similar to what the simple theory suggests.
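A small sketch of a shift-based estimate of rms sampling error: compare the full-resolution time average with the averages obtained by visiting the area every few time slots at each possible phase shift. The synthetic rain series below stands in for the merged radar product, and the visit interval is an assumption.

import numpy as np

def rms_sampling_error(rain, step):
    """rain: 1-D array of area-average rain rate per time slot; step: slots between satellite visits."""
    truth = rain.mean()                                        # continuous-time average
    shifted = [rain[s::step].mean() for s in range(step)]      # one sampled estimate per phase shift
    return np.sqrt(np.mean((np.array(shifted) - truth) ** 2))

rng = np.random.default_rng(2)
rain = np.where(rng.random(30 * 96) < 0.1, rng.exponential(2.0, 30 * 96), 0.0)  # 30 days of 15-min slots
print(rms_sampling_error(rain, step=12))                       # e.g. one overpass every 3 hours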
19 CFR 351.414 - Comparison of normal value with export price (constructed export price).
Code of Federal Regulations, 2010 CFR
2010-04-01
... export price). (a) Introduction. The Secretary normally will average prices used as the basis for normal... calculate weighted averages for such shorter period as the Secretary deems appropriate. (e) Application of...
Motor vehicle fatalities in the United States construction industry.
Ore, T; Fosbroke, D E
1997-09-01
A death certificate-based surveillance system was used to identify 2144 work-related motor vehicle fatalities among civilian workers in the United States construction industry over the years 1980-92. Construction workers were twice as likely to be killed by a motor vehicle as the average worker, with an annual crude mortality rate of 2.3/100,000 workers. Injury prevention efforts in construction have had limited effect on motor vehicle-related deaths, with death rates falling by only 11% during the 13-year period, compared with 43% for falls, 54% for electrocutions and 48% for machinery. In all industries combined, motor vehicle fatality rates dropped by 47%. The largest proportion of motor vehicle deaths (40%) occurred among pedestrians, with construction accounting for more than one-fourth of all pedestrian deaths. A minimum of 54 (6%) of these pedestrian fatalities were flaggers or surveyors. Flaggers accounted for half the 34 pedestrian fatalities among women, compared with only 3% among men. Along with previous studies and recent trends in the amount and type of road construction, these results underscore the need for better traffic control management in construction work areas to reduce pedestrian fatalities. Because motor vehicle incidents are the second leading cause of traumatic death in construction, accounting for an annual average of 15% of total deaths and exceeded only by falls, research on preventing work-related motor vehicle fatalities should become a greater priority in the construction industry.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-11
... to the form are to allow applicants to pay the transfer tax by credit or debit card, and combine... amount of time estimated for an average respondent to respond: It is estimated that 9,662 respondents will take an average of approximately 1.69 hours to complete. (6) An estimate of the total burden (in...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-11
... and local law. The changes to the form are to allow the applicant to pay the transfer tax by credit or...) An estimate of the total number of respondents and the amount of time estimated for an average respondent to respond: It is estimated that 65,085 respondents will take an average of 1.68 hours to complete...
ERIC Educational Resources Information Center
Saupe, Joe L.; Eimers, Mardy T.
2013-01-01
The purpose of this paper is to explore differences in the reliabilities of cumulative college grade point averages (GPAs), estimated for unweighted and weighted, one-semester, 1-year, 2-year, and 4-year GPAs. Using cumulative GPAs for a freshman class at a major university, we estimate internal consistency (coefficient alpha) reliabilities for…
Construction and demolition waste generation rates for high-rise buildings in Malaysia.
Mah, Chooi Mei; Fujiwara, Takeshi; Ho, Chin Siong
2016-12-01
Construction and demolition waste continues to sharply increase in step with the economic growth of less developed countries. Though the construction industry is large, it is composed of small firms with individual waste management practices, often leading to deleterious environmental outcomes. Quantifying construction and demolition waste generation allows policy makers and stakeholders to understand the true internal and external costs of construction, providing a necessary foundation for waste management planning that may overcome deleterious environmental outcomes and may be both economically and environmentally optimal. This study offers a theoretical method for estimating the construction and demolition project waste generation rate by utilising available data, including waste disposal truck size and number, and waste volume and composition. This method is proposed as a less burdensome and more broadly applicable alternative to waste estimation by on-site hand sorting and weighing. The developed method is applied to 11 projects across Malaysia as the case study. This study quantifies the waste generation rate and illustrates how the construction method influences it, estimating that the conventional construction method has a waste generation rate of 9.88 t per 100 m², the mixed-construction method has a waste generation rate of 3.29 t per 100 m², and demolition projects have a waste generation rate of 104.28 t per 100 m². © The Author(s) 2016.
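In the spirit of the method described, waste tonnage can be estimated from truck trips, truck volume, and a volume-weighted composition, then normalised by gross floor area. The sketch below is a minimal, hedged illustration; the bulk densities, composition fractions, and project figures are placeholder assumptions, not values from the 11 Malaysian case-study projects.

```python
def waste_generation_rate(truck_loads, truck_volume_m3, composition, bulk_density, gfa_m2):
    """Estimate a project's waste generation rate (t per 100 m2 of floor area).

    truck_loads     : number of disposal truck trips recorded for the project
    truck_volume_m3 : nominal volume of one truck load (m3)
    composition     : dict of material -> volume fraction (should sum to 1.0)
    bulk_density    : dict of material -> bulk density (t/m3); assumed values
    gfa_m2          : gross floor area of the project (m2)
    """
    total_volume = truck_loads * truck_volume_m3
    tonnes = sum(total_volume * frac * bulk_density[mat] for mat, frac in composition.items())
    return tonnes / (gfa_m2 / 100.0)

# Hypothetical project: 120 truck loads of 10 m3 each, 5,000 m2 gross floor area
composition = {"concrete": 0.55, "brick": 0.25, "timber": 0.10, "mixed": 0.10}
bulk_density = {"concrete": 1.5, "brick": 1.2, "timber": 0.4, "mixed": 0.8}  # assumed t/m3
print(waste_generation_rate(120, 10.0, composition, bulk_density, 5000.0))
```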
Warren, Sam A; Huszti, Ella; Bradley, Steven M; Chan, Paul S; Bryson, Chris L; Fitzpatrick, Annette L; Nichol, Graham
2014-03-01
Expert guidelines for treatment of cardiac arrest recommend administration of adrenaline (epinephrine) every three to five minutes. However, the effects of different dosing periods of epinephrine remain unclear. We sought to evaluate the association between epinephrine average dosing period and survival to hospital discharge in adults with an in-hospital cardiac arrest (IHCA). We performed a retrospective review of prospectively collected data on 20,909 IHCA events from 505 hospitals participating in the Get With The Guidelines-Resuscitation (GWTG-R) quality improvement registry. Epinephrine average dosing period was defined as the time between the first epinephrine dose and the resuscitation endpoint, divided by the total number of epinephrine doses received subsequent to the first epinephrine dose. Associations with survival to hospital discharge were assessed by using generalized estimating equations to construct multivariable logistic regression models. Compared to a referent epinephrine average dosing period of 4 to <5 min per dose, survival to hospital discharge was significantly higher in patients with the following epinephrine average dosing periods: for 6 to <7 min/dose, adjusted odds ratio [OR], 1.41 (95%CI: 1.12, 1.78); for 7 to <8 min/dose, adjusted OR, 1.30 (95%CI: 1.02, 1.65); for 8 to <9 min/dose, adjusted OR, 1.79 (95%CI: 1.38, 2.32); for 9 to <10 min/dose, adjusted OR, 2.17 (95%CI: 1.62, 2.92). This pattern was consistent for both shockable and non-shockable cardiac arrest rhythms. Less frequent average epinephrine dosing than recommended by consensus guidelines was associated with improved survival of in-hospital cardiac arrest. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
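The dosing-period definition given in the abstract translates directly into a small helper; the timestamps and dose count in the example are hypothetical.

```python
def average_dosing_period(first_dose_min, endpoint_min, total_doses):
    """Epinephrine average dosing period as defined in the study:
    time from the first dose to the resuscitation endpoint, divided by the
    number of doses received subsequent to the first dose."""
    subsequent_doses = total_doses - 1
    if subsequent_doses < 1:
        raise ValueError("at least two doses are needed to define a dosing period")
    return (endpoint_min - first_dose_min) / subsequent_doses

# Hypothetical event: first dose at minute 2, endpoint at minute 26, 4 doses total
print(average_dosing_period(2, 26, 4))  # 8.0 min/dose -> the '8 to <9' category
```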
On estimating attenuation from the amplitude of the spectrally whitened ambient seismic field
NASA Astrophysics Data System (ADS)
Weemstra, Cornelis; Westra, Willem; Snieder, Roel; Boschi, Lapo
2014-06-01
Measuring attenuation on the basis of interferometric, receiver-receiver surface waves is a non-trivial task: the amplitude, more than the phase, of ensemble-averaged cross-correlations is strongly affected by non-uniformities in the ambient wavefield. In addition, ambient noise data are typically pre-processed in ways that affect the amplitude itself. Some authors have recently attempted to measure attenuation in receiver-receiver cross-correlations obtained after the usual pre-processing of seismic ambient-noise records, including, most notably, spectral whitening. Spectral whitening replaces the cross-spectrum with a unit amplitude spectrum. It is generally assumed that cross-terms have cancelled each other prior to spectral whitening. Cross-terms are peaks in the cross-correlation due to simultaneously acting noise sources, that is, spurious traveltime delays due to constructive interference of signal coming from different sources. Cancellation of these cross-terms is a requirement for the successful retrieval of interferometric receiver-receiver signal and results from ensemble averaging. In practice, ensemble averaging is replaced by integrating over sufficiently long time or averaging over several cross-correlation windows. Contrary to the general assumption, we show in this study that cross-terms are not required to cancel each other prior to spectral whitening, but may also cancel each other after the whitening procedure. Specifically, we derive an analytic approximation for the amplitude difference associated with the reversed order of cancellation and normalization. Our approximation shows that an amplitude decrease results from the reversed order. This decrease is predominantly non-linear at small receiver-receiver distances: at distances smaller than approximately two wavelengths, whitening prior to ensemble averaging causes a significantly stronger decay of the cross-spectrum.
Fischer, Jason L.; Bennion, David; Roseman, Edward F.; Manny, Bruce A.
2015-01-01
Lake sturgeon (Acipenser fulvescens) populations have suffered precipitous declines in the St. Clair–Detroit River system, following the removal of gravel spawning substrates and overfishing in the late 1800s to mid-1900s. To assist the remediation of lake sturgeon spawning habitat, three hydrodynamic models were integrated into a spatial model to identify areas in two large rivers, where water velocities were appropriate for the restoration of lake sturgeon spawning habitat. Here we use water velocity data collected with an acoustic Doppler current profiler (ADCP) to assess the ability of the spatial model and its sub-models to correctly identify areas where water velocities were deemed suitable for restoration of fish spawning habitat. ArcMap 10.1 was used to create raster grids of water velocity data from model estimates and ADCP measurements which were compared to determine the percentage of cells similarly classified as unsuitable, suitable, or ideal for fish spawning habitat remediation. The spatial model categorized 65% of the raster cells the same as depth-averaged water velocity measurements from the ADCP and 72% of the raster cells the same as surface water velocity measurements from the ADCP. Sub-models focused on depth-averaged velocities categorized the greatest percentage of cells similar to ADCP measurements where 74% and 76% of cells were the same as depth-averaged water velocity measurements. Our results indicate that integrating depth-averaged and surface water velocity hydrodynamic models may have biased the spatial model and overestimated suitable spawning habitat. A model solely integrating depth-averaged velocity models could improve identification of areas suitable for restoration of fish spawning habitat.
NASA Technical Reports Server (NTRS)
Scruggs, T.; Moraguez, M.; Patankar, K.; Fitz-Coy, N.; Liou, J.-C.; Sorge, M.; Huynh, T.
2016-01-01
Debris fragments from the hypervelocity impact testing of DebriSat are being collected and characterized for use in updating existing satellite breakup models. One of the key parameters utilized in these models is the ballistic coefficient of the fragment, which is directly related to its area-to-mass ratio. However, since the attitude of fragments varies during their orbital lifetime, it is customary to use the average cross-sectional area in the calculation of the area-to-mass ratio. The average cross-sectional area is defined as the average of the projected surface areas perpendicular to the direction of motion and has been shown to be equal to one-fourth of the total surface area of a convex object. Unfortunately, numerous fragments obtained from the DebriSat experiment show significant concavity (i.e., shadowing) and thus we have explored alternate methods for computing the average cross-sectional area of the fragments. An imaging system based on the volumetric reconstruction of a 3D object from multiple 2D photographs of the object was developed for use in determining the size characteristic (i.e., characteristic length) of the DebriSat fragments. For each fragment, the imaging system generates N images from varied azimuth and elevation angles and processes them using a space-carving algorithm to construct a 3D point cloud of the fragment. This paper describes two approaches for calculating the average cross-sectional area of debris fragments based on the 3D imager. Approach A utilizes the constructed 3D object to generate equally distributed cross-sectional area projections and then averages them to determine the average cross-sectional area. Approach B utilizes a weighted average of the area of the 2D photographs to directly compute the average cross-sectional area. A comparison of the accuracy and computational needs of each approach is described as well as preliminary results of an analysis to determine the "optimal" number of images needed for the 3D imager to accurately measure the average cross-sectional area of objects with known dimensions.
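For the convex case that underlies the one-fourth-of-surface-area result, Approach A can be sketched as follows: project the shape along many uniformly distributed directions (for a convex polyhedron the projected area along direction d is half the sum over faces of |n·d| times face area) and average. This is only an illustration for convex hulls of point clouds; concave fragments, as the abstract notes, require true silhouette projections rather than this face-sum shortcut.

```python
import numpy as np
from scipy.spatial import ConvexHull

def average_cross_section(points, n_dirs=2000, seed=0):
    """Monte-Carlo estimate of the average projected area of a convex hull.

    For a convex body the projected area along a unit direction d equals
    0.5 * sum_f area_f * |n_f . d|; averaged over uniformly distributed
    directions this tends to total_surface_area / 4 (Cauchy's formula).
    """
    hull = ConvexHull(points)
    tri = points[hull.simplices]                               # (n_faces, 3, 3)
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    face_area = 0.5 * np.linalg.norm(cross, axis=1)
    normals = cross / np.linalg.norm(cross, axis=1, keepdims=True)

    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = 0.5 * np.abs(dirs @ normals.T) @ face_area          # area per direction
    return proj.mean(), hull.area / 4.0                        # Monte-Carlo vs. Cauchy

pts = np.random.default_rng(1).uniform(-1, 1, size=(500, 3))
print(average_cross_section(pts))  # the two values should nearly agree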
Harwell, Glenn R.
2012-01-01
Organizations responsible for the management of water resources, such as the U.S. Army Corps of Engineers (USACE), are tasked with estimation of evaporation for water-budgeting and planning purposes. The USACE has historically used Class A pan evaporation data (pan data) to estimate evaporation from reservoirs but many USACE Districts have been experimenting with other techniques for an alternative to collecting pan data. The energy-budget method generally is considered the preferred method for accurate estimation of open-water evaporation from lakes and reservoirs. Complex equations to estimate evaporation, such as the Penman, DeBruin-Keijman, and Priestley-Taylor, perform well when compared with energy-budget method estimates when all of the important energy terms are included in the equations and ideal data are collected. However, sometimes nonideal data are collected and energy terms, such as the change in the amount of stored energy and advected energy, are not included in the equations. When this is done, the corresponding errors in evaporation estimates are not quantifiable. Much simpler methods, such as the Hamon method and a method developed by the U.S. Weather Bureau (USWB) (renamed the National Weather Service in 1970), have been shown to provide reasonable estimates of evaporation when compared to energy-budget method estimates. Data requirements for the Hamon and USWB methods are minimal and sometimes perform well with remotely collected data. The Hamon method requires average daily air temperature, and the USWB method requires daily averages of air temperature, relative humidity, wind speed, and solar radiation. Estimates of annual lake evaporation from pan data are frequently within 20 percent of energy-budget method estimates. Results of evaporation estimates from the Hamon method and the USWB method were compared against historical pan data at five selected reservoirs in Texas (Benbrook Lake, Canyon Lake, Granger Lake, Hords Creek Lake, and Sam Rayburn Lake) to evaluate their performance and to develop coefficients to minimize bias for the purpose of estimating reservoir evaporation with accuracies similar to estimates of evaporation obtained from pan data. The modified Hamon method estimates of reservoir evaporation were similar to estimates of reservoir evaporation from pan data for daily, monthly, and annual time periods. The modified Hamon method estimates of annual reservoir evaporation were always within 20 percent of annual reservoir evaporation from pan data. Unmodified and modified USWB method estimates of annual reservoir evaporation were within 20 percent of annual reservoir evaporation from pan data for about 91 percent of the years compared. Average daily differences between modified USWB method estimates and estimates from pan data as a percentage of the average amount of daily evaporation from pan data were within 20 percent for 98 percent of the months. Without any modification to the USWB method, average daily differences as a percentage of the average amount of daily evaporation from pan data were within 20 percent for 73 percent of the months. Use of the unmodified USWB method is appealing because it means estimates of average daily reservoir evaporation can be made from air temperature, relative humidity, wind speed, and solar radiation data collected from remote weather stations without the need to develop site-specific coefficients from historical pan data. 
Site-specific coefficients would need to be developed for the modified version of the Hamon method.
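As an illustration of how little input the Hamon method needs, one commonly cited variant drives the estimate with only daily mean air temperature and daylight length. The coefficient, the exact functional form, and the example inputs below are assumptions (published variants differ, and the report's modified method adds site-specific calibration coefficients not shown here).

```python
import math

def hamon_pet_mm_per_day(t_mean_c, daylight_hours, coeff=0.1651):
    """One published variant of the Hamon potential evaporation equation.

    PET = coeff * Ld * rho_sat, where Ld is daylight length in multiples of
    12 h and rho_sat is the saturated vapour density (g/m3) at the daily mean
    air temperature. The coefficient and functional form are assumptions for
    illustration, not values taken from the report.
    """
    e_sat = 6.108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))  # mb
    rho_sat = 216.7 * e_sat / (t_mean_c + 273.3)                     # g/m3
    return coeff * (daylight_hours / 12.0) * rho_sat

# Example: 28 C mean air temperature and 13.5 h of daylight
print(round(hamon_pet_mm_per_day(28.0, 13.5), 2))
```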
Using state-issued identification cards for obesity tracking.
Morris, Daniel S; Schubert, Stacey S; Ngo, Duyen L; Rubado, Dan J; Main, Eric; Douglas, Jae P
2015-01-01
Obesity prevention has emerged as one of public health's top priorities. Public health agencies need reliable data on population health status to guide prevention efforts. Existing survey data sources provide county-level estimates; obtaining sub-county estimates from survey data can be prohibitively expensive. State-issued identification cards are an alternate data source for community-level obesity estimates. We computed body mass index for 3.2 million adult Oregonians who were issued a driver license or identification card between 2003 and 2010. Statewide estimates of obesity prevalence and average body mass index were compared to the Oregon Behavioral Risk Factor Surveillance System (BRFSS). After geocoding addresses we calculated average adult body mass index for every census tract and block group in the state. Sub-county estimates reveal striking patterns in the population's weight status. Annual obesity prevalence estimates from identification cards averaged 18% lower than the BRFSS for men and 31% lower for women. Body mass index estimates averaged 2% lower than the BRFSS for men and 5% lower for women. Identification card records are a promising data source to augment tracking of obesity. People do tend to misrepresent their weight, but the consistent bias does not obscure patterns and trends. Large numbers of records allow for stable estimates for small geographic areas. Copyright © 2014 Asian Oceanian Association for the Study of Obesity. All rights reserved.
Craig, Benjamin M; Busschbach, Jan JV
2009-01-01
Background To present an episodic random utility model that unifies time trade-off and discrete choice approaches in health state valuation. Methods First, we introduce two alternative random utility models (RUMs) for health preferences: the episodic RUM and the more common instant RUM. For the interpretation of time trade-off (TTO) responses, we show that the episodic model implies a coefficient estimator, and the instant model implies a mean slope estimator. Secondly, we demonstrate these estimators and the differences between the estimates for 42 health states using TTO responses from the seminal Measurement and Valuation in Health (MVH) study conducted in the United Kingdom. Mean slopes are estimated with and without Dolan's transformation of worse-than-death (WTD) responses. Finally, we demonstrate an exploded probit estimator, an extension of the coefficient estimator for discrete choice data that accommodates both TTO and rank responses. Results By construction, mean slopes are less than or equal to coefficients, because slopes are fractions and, therefore, magnify downward errors in WTD responses. The Dolan transformation of WTD responses causes mean slopes to increase in similarity to coefficient estimates, yet they are not equivalent (i.e., absolute mean difference = 0.179). Unlike mean slopes, coefficient estimates demonstrate strong concordance with rank-based predictions (Lin's rho = 0.91). Combining TTO and rank responses under the exploded probit model improves the identification of health state values, decreasing the average width of confidence intervals from 0.057 to 0.041 compared to TTO only results. Conclusion The episodic RUM expands upon the theoretical framework underlying health state valuation and contributes to health econometrics by motivating the selection of coefficient and exploded probit estimators for the analysis of TTO and rank responses. In future MVH surveys, sample size requirements may be reduced through the incorporation of multiple responses under a single estimator. PMID:19144115
Finkelstein, Julia L; Schleinitz, Mark D; Carabin, Hélène; McGarvey, Stephen T
2008-03-05
Schistosomiasis is among the most prevalent parasitic infections worldwide. However, current Global Burden of Disease (GBD) disability-adjusted life year estimates indicate that its population-level impact is negligible. Recent studies suggest that GBD methodologies may significantly underestimate the burden of parasitic diseases, including schistosomiasis. Furthermore, strain-specific disability weights have not been established for schistosomiasis, and the magnitude of human disease burden due to Schistosoma japonicum remains controversial. We used a decision model to quantify an alternative disability weight estimate of the burden of human disease due to S. japonicum. We reviewed S. japonicum morbidity data, and constructed decision trees for all infected persons and two age-specific strata, <15 years (y) and > or =15 y. We conducted stochastic and probabilistic sensitivity analyses for each model. Infection with S. japonicum was associated with an average disability weight of 0.132, with age-specific disability weights of 0.098 (<15 y) and 0.186 (> or =15 y). Re-estimated disability weights were seven to 46 times greater than current GBD measures; no simulations produced disability weight estimates lower than 0.009. Nutritional morbidities had the greatest contribution to the S. japonicum disability weight in the <15 y model, whereas major organ pathologies were the most critical variables in the older age group. GBD disability weights for schistosomiasis urgently need to be revised, and species-specific disability weights should be established. Even a marginal increase in current estimates would result in a substantial rise in the estimated global burden of schistosomiasis, and have considerable implications for public health prioritization and resource allocation for schistosomiasis research, monitoring, and control.
Finkelstein, Julia L.; Schleinitz, Mark D.; Carabin, Hélène; McGarvey, Stephen T.
2008-01-01
Schistosomiasis is among the most prevalent parasitic infections worldwide. However, current Global Burden of Disease (GBD) disability-adjusted life year estimates indicate that its population-level impact is negligible. Recent studies suggest that GBD methodologies may significantly underestimate the burden of parasitic diseases, including schistosomiasis. Furthermore, strain-specific disability weights have not been established for schistosomiasis, and the magnitude of human disease burden due to Schistosoma japonicum remains controversial. We used a decision model to quantify an alternative disability weight estimate of the burden of human disease due to S. japonicum. We reviewed S. japonicum morbidity data, and constructed decision trees for all infected persons and two age-specific strata, <15 years (y) and ≥15 y. We conducted stochastic and probabilistic sensitivity analyses for each model. Infection with S. japonicum was associated with an average disability weight of 0.132, with age-specific disability weights of 0.098 (<15 y) and 0.186 (≥15 y). Re-estimated disability weights were seven to 46 times greater than current GBD measures; no simulations produced disability weight estimates lower than 0.009. Nutritional morbidities had the greatest contribution to the S. japonicum disability weight in the <15 y model, whereas major organ pathologies were the most critical variables in the older age group. GBD disability weights for schistosomiasis urgently need to be revised, and species-specific disability weights should be established. Even a marginal increase in current estimates would result in a substantial rise in the estimated global burden of schistosomiasis, and have considerable implications for public health prioritization and resource allocation for schistosomiasis research, monitoring, and control. PMID:18320018
Estimation of stream conditions in tributaries of the Klamath River, northern California
Manhard, Christopher V.; Som, Nicholas A.; Jones, Edward C.; Perry, Russell W.
2018-01-01
Because of their critical ecological role, stream temperature and discharge are requisite inputs for models of salmonid population dynamics. Coho Salmon inhabiting the Klamath Basin spend much of their freshwater life cycle inhabiting tributaries, but environmental data are often absent or only seasonally available at these locations. To address this information gap, we constructed daily averaged water temperature models that used simulated meteorological data to estimate daily tributary temperatures, and we used flow differentials recorded on the mainstem Klamath River to estimate daily tributary discharge. Observed temperature data were available for fourteen of the major salmon bearing tributaries, which enabled estimation of tributary-specific model parameters at those locations. Water temperature data from six mid-Klamath Basin tributaries were used to estimate a global set of parameters for predicting water temperatures in the remaining tributaries. The resulting parameter sets were used to simulate water temperatures for each of 75 tributaries from 1980-2015. Goodness-of-fit statistics computed from a cross-validation analysis demonstrated a high precision of the tributary-specific models in predicting temperature in unobserved years and of the global model in predicting temperatures in unobserved streams. Klamath River discharge has been monitored by four gages that broadly intersperse the 292 kilometers from the Iron Gate Dam to the Klamath River mouth. These gages defined the upstream and downstream margins of three reaches. Daily discharge of tributaries within a reach was estimated from 1980-2015 based on drainage-area proportionate allocations of the discharge differential between the upstream and downstream margin. Comparisons with measured discharge on Indian Creek, a moderate-sized tributary with naturally regulated flows, revealed that the estimates effectively approximated both the variability and magnitude of discharge.
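The drainage-area proportionate allocation described for tributary discharge is a simple apportionment of the mainstem gain across a reach; the sketch below encodes it directly. The reach names, areas, and flows in the example are hypothetical.

```python
def tributary_discharge(q_upstream, q_downstream, drainage_areas, tributary):
    """Allocate the mainstem discharge gain across a reach to its tributaries
    in proportion to drainage area, as described for the Klamath tributaries.

    q_upstream, q_downstream : gaged mainstem discharge at the reach margins
    drainage_areas           : dict of tributary name -> drainage area (km2)
    tributary                : name of the tributary of interest
    """
    gain = q_downstream - q_upstream
    total_area = sum(drainage_areas.values())
    return gain * drainage_areas[tributary] / total_area

# Hypothetical reach: a 40 m3/s gain shared by three tributaries
areas = {"Indian Creek": 310.0, "Elk Creek": 250.0, "Clear Creek": 190.0}
print(tributary_discharge(120.0, 160.0, areas, "Indian Creek"))
```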
Are EMS call volume predictions based on demand pattern analysis accurate?
Brown, Lawrence H; Lerner, E Brooke; Larmon, Baxter; LeGassick, Todd; Taigman, Michael
2007-01-01
Most EMS systems determine the number of crews they will deploy in their communities and when those crews will be scheduled based on anticipated call volumes. Many systems use historical data to calculate their anticipated call volumes, a method of prediction known as demand pattern analysis. To evaluate the accuracy of call volume predictions calculated using demand pattern analysis. Seven EMS systems provided 73 consecutive weeks of hourly call volume data. The first 20 weeks of data were used to calculate three common demand pattern analysis constructs for call volume prediction: average peak demand (AP), smoothed average peak demand (SAP), and 90th percentile rank (90%R). The 21st week served as a buffer. Actual call volumes in the last 52 weeks were then compared to the predicted call volumes by using descriptive statistics. There were 61,152 hourly observations in the test period. All three constructs accurately predicted peaks and troughs in call volume but not exact call volume. Predictions were accurate (+/-1 call) 13% of the time using AP, 10% using SAP, and 19% using 90%R. Call volumes were overestimated 83% of the time using AP, 86% using SAP, and 74% using 90%R. When call volumes were overestimated, predictions exceeded actual call volume by a median (interquartile range) of 4 (2-6) calls for AP, 4 (2-6) for SAP, and 3 (2-5) for 90%R. Call volumes were underestimated 4% of the time using AP, 4% using SAP, and 7% using 90%R predictions. When call volumes were underestimated, call volumes exceeded predictions by a median (interquartile range; maximum underestimation) of 1 (1-2; 18) call for AP, 1 (1-2; 18) for SAP, and 2 (1-3; 20) for 90%R. Results did not vary between systems. Generally, demand pattern analysis estimated or overestimated call volume, making it a reasonable predictor for ambulance staffing patterns. However, it did underestimate call volume between 4% and 7% of the time. Communities need to determine if these rates of over- and underestimation are acceptable given their resources and local priorities.
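Of the three constructs, the 90th percentile rank is the most straightforward to state: for each hour-of-week cell, take the 90th percentile of the historical call counts. The sketch below implements only that construct on synthetic data; the exact formulas for AP and SAP are not spelled out in the abstract, so they are deliberately omitted rather than guessed.

```python
import numpy as np

def percentile_rank_forecast(history, q=90):
    """90th-percentile-rank call-volume predictor (the '90%R' construct).

    history : array of shape (n_weeks, 168) of hourly call counts, one row per
              training week (20 weeks in the study).
    Returns, for each of the 168 hour-of-week cells, the q-th percentile of the
    historical counts; staffing to this level covers demand in roughly q% of hours.
    """
    history = np.asarray(history, dtype=float)
    return np.percentile(history, q, axis=0)

# Synthetic 20-week history with a rough daily cycle (illustrative only)
rng = np.random.default_rng(2)
base = 3 + 2 * np.sin(np.linspace(0, 14 * np.pi, 168))
history = rng.poisson(np.clip(base, 0.5, None), size=(20, 168))
print(percentile_rank_forecast(history)[:24])  # first day's hourly predictions
```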
A computerized method for the hydrologic design of culverts.
DOT National Transportation Integrated Search
1974-02-01
Nationwide, about five cents of each highway construction dollar is spent on culverts. In Iowa, average annual construction costs on the interstate, primary, and federal-aid secondary systems are about $120,000,000. Assuming the national figure...
Tadano, Shigeru; Takeda, Ryo; Miyagawa, Hiroaki
2013-01-01
This paper proposes a method for three-dimensional gait analysis using wearable sensors and quaternion calculations. Seven sensor units, each consisting of a tri-axial accelerometer and tri-axial gyro sensor, were fixed to the lower limbs. The acceleration and angular velocity data of each sensor unit were measured during level walking. The initial orientations of the sensor units were estimated using acceleration data during an upright standing position, and the angular displacements were estimated afterwards using angular velocity data during gait. Here, an algorithm based on quaternion calculation was implemented for orientation estimation of the sensor units. The orientations of the sensor units were converted to the orientations of the body segments by a rotation matrix obtained from a calibration trial. Body segment orientations were then used for constructing a three-dimensional wire frame animation of the volunteers during gait. Gait analysis was conducted on five volunteers, and results were compared with those from a camera-based motion analysis system. Comparisons were made for the joint trajectory in the horizontal and sagittal plane. The average RMSE and correlation coefficient (CC) were 10.14 deg and 0.98, 7.88 deg and 0.97, 9.75 deg and 0.78 for the hip, knee and ankle flexion angles, respectively. PMID:23877128
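The quaternion propagation step at the heart of such gyro-based orientation tracking can be sketched as below: each angular-velocity sample is turned into a small axis-angle rotation and composed onto the current orientation. This is a minimal illustration only; the paper additionally sets the initial orientation from accelerometer data during quiet standing and maps sensor frames to body segments via a calibration trial, and the sampling rate and rotation in the example are hypothetical.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_orientation(gyro_rad_s, dt, q0=np.array([1.0, 0.0, 0.0, 0.0])):
    """Propagate sensor orientation from angular-velocity samples.

    Each step converts omega*dt to a unit quaternion (axis-angle form) and
    right-multiplies it onto the current orientation, renormalising to limit
    numerical drift.
    """
    q = q0.copy()
    orientations = [q.copy()]
    for omega in gyro_rad_s:
        angle = np.linalg.norm(omega) * dt
        if angle > 0:
            axis = omega / np.linalg.norm(omega)
            dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
        else:
            dq = np.array([1.0, 0.0, 0.0, 0.0])
        q = quat_mult(q, dq)
        q /= np.linalg.norm(q)
        orientations.append(q.copy())
    return np.array(orientations)

# Example: 1 s of rotation about the z-axis at 90 deg/s, sampled at 100 Hz
gyro = np.tile([0.0, 0.0, np.deg2rad(90)], (100, 1))
print(integrate_orientation(gyro, dt=0.01)[-1])  # quaternion of a 90-deg rotation about z
```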
[Cost at the first level of care].
Villarreal-Ríos, E; Montalvo-Almaguer, G; Salinas-Martínez, M; Guzmán-Padilla, J E; Tovar-Castillo, N H; Garza-Elizondo, M E
1996-01-01
To estimate the unit cost of 15 causes of demand for primary care per health clinic in an institutional (social security) health care system, and to determine the average cost at the state level. The cost of 80% of clinic visits was estimated in 35 of 40 clinics in the social security health care system in the state of Nuevo Leon, Mexico. The methodology for fixed costs consisted of: departmentalization, inputs, cost, weights and construction of matrices. Variable costs were estimated for standard patients by type of health care sought and with the consensus of experts; the sum of fixed and variable costs gave the unit cost. A computerized model was employed for data processing. A large variation in unit cost was observed between health clinics studied for all causes of demand, in both metropolitan and non-metropolitan areas. Prenatal care ($92.26) and diarrhea ($93.76) were the least expensive while diabetes ($240.42) and hypertension ($312.54) were the most expensive. Non-metropolitan costs were higher than metropolitan costs (p < 0.05); controlling for number of physician's offices showed that this was determined by medical units with only one physician's office. Knowledge of unit costs is a tool that, when used by medical administrators, allows adequate health care planning and efficient allocation of health resources.
Study on highway transportation greenhouse effect external cost estimation in China
NASA Astrophysics Data System (ADS)
Chu, Chunchao; Pan, Fengming
2017-03-01
This paper focuses on estimating highway transportation greenhouse gas emission volumes and the associated external cost in China. First, the composition and characteristics of the greenhouse gases emitted by highway transportation were analysed. Second, an improved emission-volume model was developed on the basis of highway transportation energy consumption, which can be calculated from the main influencing factors such as the annual average operating mileage of each type of motor vehicle and its unit fuel consumption. The emission-volume model accounts for both the availability of highway transportation energy consumption statistics and the greenhouse gas emission factors for the various fuel types issued by the IPCC. Finally, an external cost estimation model for highway transportation greenhouse gas emissions was established by combining the emission volume with the unit external cost of CO2 emissions. The models were applied to China for the years 2011 to 2015. The results show that total highway transportation emission volume and greenhouse gas external cost are growing, while the external cost per unit of turnover is steadily declining. Overall, the situation regarding highway transportation greenhouse gas emissions remains serious, and a green transportation strategy should be put into effect as soon as possible.
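The model structure described (fleet activity × unit fuel consumption × emission factor, then × unit external cost) can be illustrated with a short sketch. All numeric values below, including the fleet figures, the 2.68 kg CO2 per litre diesel factor, and the carbon price, are placeholder assumptions rather than the paper's estimates.

```python
def highway_ghg_external_cost(fleet, emission_factor_kg_per_l, unit_cost_per_t_co2e):
    """Sketch of the emission-volume and external-cost model described above.

    fleet : list of dicts, one per vehicle type, with
            'count'    - number of vehicles in operation,
            'km_year'  - annual average operating distance per vehicle (km),
            'l_per_km' - unit fuel consumption (litres/km).
    emission_factor_kg_per_l : kg CO2e per litre of fuel (IPCC-style factor).
    unit_cost_per_t_co2e     : external cost per tonne of CO2e.
    """
    fuel_l = sum(v["count"] * v["km_year"] * v["l_per_km"] for v in fleet)
    emissions_t = fuel_l * emission_factor_kg_per_l / 1000.0
    return emissions_t, emissions_t * unit_cost_per_t_co2e

fleet = [
    {"count": 1_000_000, "km_year": 15_000, "l_per_km": 0.08},  # passenger cars
    {"count": 200_000, "km_year": 60_000, "l_per_km": 0.30},    # freight trucks
]
print(highway_ghg_external_cost(fleet, emission_factor_kg_per_l=2.68,
                                unit_cost_per_t_co2e=30.0))
```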
Doi, Shunsuke; Ide, Hiroo; Takeuchi, Koichi; Fujita, Shinsuke; Takabayashi, Katsuhiko
2017-01-01
Accessibility to healthcare service providers, the quantity, and the quality of them are important for national health. In this study, we focused on geographic accessibility to estimate and evaluate future demand and supply of healthcare services. We constructed a simulation model called the patient access area model (PAAM), which simulates patients’ access time to healthcare service institutions using a geographic information system (GIS). Using this model, to evaluate the balance of future healthcare services demand and supply in small areas, we estimated the number of inpatients every five years in each area and compared it with the number of hospital beds within a one-hour drive from each area. In an experiment with the Tokyo metropolitan area as a target area, when we assumed hospital bed availability to be 80%, it was predicted that over 78,000 inpatients would not receive inpatient care in 2030. However, this number would decrease if we lowered the rate of inpatient care by 10% and the average length of the hospital stay. Using this model, recommendations can be made regarding what action should be undertaken and by when to prevent a dramatic increase in healthcare demand. This method can help plan the geographical resource allocation in healthcare services for healthcare policy. PMID:29125585
Intracranial Procedures and Expected Frequency of Creutzfeldt-Jakob Disease.
Abrams, Joseph Y; Maddox, Ryan A; Schonberger, Lawrence B; Belay, Ermias D
2016-01-01
To assess the frequency and characteristics of intracranial procedures (ICPs) performed and the number of U.S. residents living with a history of ICP. These data are used to calculate the expected annual number of sporadic Creutzfeldt-Jakob disease (CJD) cases among U.S. residents with a history of ICP. The Nationwide Inpatient Sample provided data on the frequency and types of ICPs, and data from the National Center for Health Statistics were used to produce age-adjusted mortality rates. A model was constructed that estimated long-term survival and sporadic CJD rates among ICP patients based on procedure type and age. There were an estimated 2,070,488 ICPs in the United States from 1998 to 2007, an average of over 200,000 per year. There were an estimated 2,023,726 U.S. residents in 2013 with a history of ICP in the previous 30 years. In 2013, 4.1 sporadic CJD cases (95% CI 1-8) were expected among people with a history of ICP in the past 30 years. The considerable proportion of U.S. residents living with a history of ICP is important information for retrospective assessments of CJD or any other suspected long-term outcome of ICPs. © 2015 S. Karger AG, Basel.
GIS Tools to Estimate Average Annual Daily Traffic
DOT National Transportation Integrated Search
2012-06-01
This project presents five tools that were created for a geographical information system to estimate Annual Average Daily Traffic using linear regression. Three of the tools can be used to prepare spatial data for linear regression. One tool can be...
Belchansky, G.I.; Douglas, David C.; Alpatsky, I.V.; Platonov, Nikita G.
2004-01-01
Arctic multiyear sea ice concentration maps for January 1988-2001 were generated from SSM/I brightness temperatures (19H, 19V, and 37V) using modified multilayer perceptron neural networks. Learning data for the neural networks were extracted from ice maps derived from Okean and ERS satellite imagery to capitalize on the stability of active radar multiyear ice signatures. Evaluations of three learning algorithms and several topologies indicated that networks constructed with error back propagation learning and 3-20-1 topology produced the most consistent and physically plausible results. Operational neural networks were developed specifically with January learning data, and then used to estimate daily multiyear ice concentrations from daily-averaged SSM/I brightness temperatures during January. Monthly mean maps were produced for analysis by averaging the respective daily estimates. The 14-year series of January multiyear ice distributions revealed dense and persistent cover in the central Arctic surrounded by expansive regions of highly fluctuating interannual cover. Estimates of total multiyear ice area by the neural network were intermediate to those of other passive microwave algorithms, but annual fluctuations and trends were similar among all algorithms. When compared to Radarsat estimates of multiyear ice concentration in the Beaufort and Chukchi Seas (1997-1999), average discrepancies were small (0.9-2.5%) and spatial coherency was reasonable, indicating the neural network's Okean and ERS learning data facilitated passive microwave inversion that emulated backscatter signatures. During 1988-2001, total January multiyear ice area declined at a significant linear rate of -54.3 × 10³ km²/yr (-1.4%/yr). The most persistent and extensive decline in multiyear ice concentration (-3.3%/yr) occurred in the southern Beaufort and Chukchi Seas. In autumn 1996, a large multiyear ice recruitment of over 10⁶ km² (mostly in the Siberian Arctic) fully replenished the previous 8-year decline in total area, but it was followed by an accelerated and compensatory decline during the subsequent 4 years. Seventy-five percent of the interannual variation in January multiyear sea ice area was explained by linear regression on two atmospheric parameters: the previous winter's (JFM) Arctic Oscillation index as a proxy for melt duration and the previous year's average sea level pressure gradient across the Fram Strait as a proxy for annual ice export. Consecutive year changes (1994-2001) in January multiyear ice volume were significantly correlated with duration of the intervening melt season (R² = 0.73, -80.0 km³/d), emphasizing a large thermodynamic influence on the Arctic's mass sea ice balance during summers with anomalous melt durations.
How to estimate green house gas (GHG) emissions from an excavator by using CAT's performance chart
NASA Astrophysics Data System (ADS)
Hajji, Apif M.; Lewis, Michael P.
2017-09-01
Construction equipment activities are a major part of many infrastructure projects, and this type of equipment typically releases large quantities of greenhouse gas (GHG) emissions. GHG emissions come from fuel consumption, and equipment productivity in turn affects fuel consumption. Thus, an estimating tool based on the construction equipment productivity rate can accurately assess the GHG emissions resulting from equipment activities. This paper proposes a methodology to estimate the environmental impact of a common construction activity, presenting a sensitivity analysis and a case study of an excavator performing trench excavation. The methodology can be applied as a stand-alone model or as a module integrated with other emissions estimators. GHG emissions are highly correlated with diesel fuel use, at approximately 10.15 kilograms (kg) of CO2 per gallon of diesel fuel. The results showed that the productivity rate model obtained from multiple regression analysis can be used as the basis for estimating GHG emissions, and also as a framework for developing emissions footprints and understanding the environmental impact of construction equipment activities.
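The chain of reasoning in the abstract (productivity determines duration, duration determines fuel use, fuel use determines CO2 at 10.15 kg per gallon of diesel) can be written as a short calculation. The 10.15 kg/gal factor comes from the abstract; the trench volume, productivity rate, and hourly fuel use below are illustrative placeholders standing in for the paper's regression-based productivity model and the CAT performance-chart values.

```python
def excavator_co2_kg(trench_volume_m3, productivity_m3_per_hr, fuel_gal_per_hr,
                     ef_kg_per_gal=10.15):
    """CO2 from a trench-excavation activity:
    duration = volume / productivity; fuel = duration * hourly fuel use;
    emissions = fuel * 10.15 kg CO2 per gallon of diesel (factor from the paper)."""
    hours = trench_volume_m3 / productivity_m3_per_hr
    fuel_gal = hours * fuel_gal_per_hr
    return fuel_gal * ef_kg_per_gal

# Hypothetical job: 800 m3 trench, 60 m3/h productivity, 6 gal/h fuel use
print(excavator_co2_kg(800.0, 60.0, 6.0))
```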
ERIC Educational Resources Information Center
Moraleda, Jorge; Stork, David G.
2012-01-01
We introduce Lake Wobegon dice, where each die is "better than the set average." Specifically, these dice have the paradoxical property that on every roll, each die is more likely to roll greater than the set average on the roll, than less than this set average. We also show how to construct minimal optimal Lake Wobegon sets for all "n" [greater…
Revealing nonergodic dynamics in living cells from a single particle trajectory
NASA Astrophysics Data System (ADS)
Lanoiselée, Yann; Grebenkov, Denis S.
2016-05-01
We propose the improved ergodicity and mixing estimators to identify nonergodic dynamics from a single particle trajectory. The estimators are based on the time-averaged characteristic function of the increments and can thus capture additional information on the process as compared to the conventional time-averaged mean-square displacement. The estimators are first investigated and validated for several models of anomalous diffusion, such as ergodic fractional Brownian motion and diffusion on percolating clusters, and nonergodic continuous-time random walks and scaled Brownian motion. The estimators are then applied to two sets of earlier published trajectories of mRNA molecules inside live Escherichia coli cells and of Kv2.1 potassium channels in the plasma membrane. These statistical tests did not reveal nonergodic features in the former set, while some trajectories of the latter set could be classified as nonergodic. Time averages along such trajectories are thus not representative and may be strongly misleading. Since the estimators do not rely on ensemble averages, the nonergodic features can be revealed separately for each trajectory, providing a more flexible and reliable analysis of single-particle tracking experiments in microbiology.
36 CFR 223.41 - Payment when purchaser elects government road construction.
Code of Federal Regulations, 2010 CFR
2010-07-01
... government road construction. 223.41 Section 223.41 Parks, Forests, and Public Property FOREST SERVICE... Conditions and Provisions § 223.41 Payment when purchaser elects government road construction. Each contract having a provision for construction of specified roads with total estimated construction costs of $50,000...
Bettens, Ryan P A
2003-01-15
Collins' method of interpolating a potential energy surface (PES) from quantum chemical calculations for reactive systems (Jordan, M. J. T.; Thompson, K. C.; Collins, M. A. J. Chem. Phys. 1995, 102, 5647. Thompson, K. C.; Jordan, M. J. T.; Collins, M. A. J. Chem. Phys. 1998, 108, 8302. Bettens, R. P. A.; Collins, M. A. J. Chem. Phys. 1999, 111, 816) has been applied to a bound state problem. The interpolation method has been combined for the first time with quantum diffusion Monte Carlo calculations to obtain an accurate ground state zero-point energy, the vibrationally averaged rotational constants, and the vibrationally averaged internal coordinates. In particular, the system studied was fluoromethane using a composite method approximating the QCISD(T)/6-311++G(2df,2p) level of theory. The approach adopted in this work (a) is fully automated, (b) is fully ab initio, (c) includes all nine nuclear degrees of freedom, (d) requires no assumption of the functional form of the PES, (e) possesses the full symmetry of the system, (f) does not involve fitting any parameters of any kind, and (g) is generally applicable to any system amenable to quantum chemical calculations and Collins' interpolation method. The calculated zero-point energy agrees to within 0.2% of its current best estimate. A0 and B0 are within 0.9 and 0.3%, respectively, of experiment.
[Prediction of modality-specific working memory performance in kindergarten age].
Kiese-Himmel, Christiane
2018-04-10
Working memory (WM), a central cognitive construct, is a fundamental prerequisite for learning and a marker of developmental disorders, and it has received considerable attention in recent years. Here, multivariate regression analyses using generalized linear models were conducted to determine predictor variables for phonological and visuospatial WM. Phonological WM was assessed by repetition of non-words (subtest PGN of the German SETK 3-5) and number recall (K-ABC subtest); visuospatial WM was assessed by the imitation of a sequence of hand movements (K-ABC subtest hand movements). Intelligence was estimated from performance on the K-ABC scale "Simultaneous Processing". Participants were kindergarten children (N = 169; 49% boys, 51% girls), mostly with a migration background and German as a second language (mean age: 45.9 months; SD 6.2; range 36-61 months). At the time of testing they had attended kindergarten for an average of 9.9 (SD 6.9) months and had average intelligence. Independent variables were chronological age, gender, duration of kindergarten attendance before testing, intelligence, and migration background. On average, neither phonological nor visuospatial WM performance was reduced. Chronological age and simultaneous processing were significant predictors of performance in all WM tests. Between 36 and 61 months of age, both working memory systems can be described as congenital, maturation-dependent, and largely gender-nonspecific mechanisms. © Georg Thieme Verlag KG Stuttgart · New York.
DOT National Transportation Integrated Search
2014-03-01
The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG emissions associated with the construction and maintenance of transportation projects. This phase of development included techniques for estimating emiss...
Cullings, H M; Grant, E J; Egbert, S D; Watanabe, T; Oda, T; Nakamura, F; Yamashita, T; Fuchi, H; Funamoto, S; Marumo, K; Sakata, R; Kodama, Y; Ozasa, K; Kodama, K
2017-01-01
Individual dose estimates calculated by Dosimetry System 2002 (DS02) for the Life Span Study (LSS) of atomic bomb survivors are based on input data that specify location and shielding at the time of the bombing (ATB). A multi-year effort to improve information on survivors' locations ATB has recently been completed, along with comprehensive improvements in their terrain shielding input data and several improvements to computational algorithms used in combination with DS02 at RERF. Improvements began with a thorough review and prioritization of original questionnaire data on location and shielding that were taken from survivors or their proxies in the period 1949-1963. Related source documents varied in level of detail, from relatively simple lists to carefully-constructed technical drawings of structural and other shielding and surrounding neighborhoods. Systematic errors were reduced in this work by restoring the original precision of map coordinates that had been truncated due to limitations in early data processing equipment and by correcting distortions in the old (WWII-era) maps originally used to specify survivors' positions, among other improvements. Distortion errors were corrected by aligning the old maps and neighborhood drawings to orthophotographic mosaics of the cities that were newly constructed from pre-bombing aerial photographs. Random errors that were reduced included simple transcription errors and mistakes in identifying survivors' locations on the old maps. Terrain shielding input data that had been originally estimated for limited groups of survivors using older methods and data sources were completely re-estimated for all survivors using new digital terrain elevation data. Improvements to algorithms included a fix to an error in the DS02 code for coupling house and terrain shielding, a correction for elevation at the survivor's location in calculating angles to the horizon used for terrain shielding input, an improved method for truncating high dose estimates to 4 Gy to reduce the effect of dose error, and improved methods for calculating averaged shielding transmission factors that are used to calculate doses for survivors without detailed shielding input data. Input data changes are summarized and described here in some detail, along with the resulting changes in dose estimates and a simple description of changes in risk estimates for solid cancer mortality. This and future RERF publications will refer to the new dose estimates described herein as "DS02R1 doses."
NASA Astrophysics Data System (ADS)
Zengmei, L.; Guanghua, Q.; Zishen, C.
2015-05-01
The direct benefit of a waterlogging control project is reflected in the reduction or avoidance of waterlogging losses. Before and after the construction of a waterlogging control project, the disaster-inducing environment in the waterlogging-prone zone is generally different, and the category, quantity, and spatial distribution of the disaster-bearing bodies also change to some extent. Under such changing conditions, the direct benefit of a waterlogging control project should therefore be the reduction in waterlogging losses relative to conditions without the project. Moreover, the losses with and without the project should be computed as the mathematical expectations of the waterlogging losses that occur when rainstorms of all frequencies coincide with various water levels in the drainage-accepting zone. An estimation model for the direct benefit of waterlogging control is therefore proposed. First, on the basis of a copula function, the joint distribution of rainstorms and water levels is established, yielding their joint probability density function. Second, according to the two-dimensional joint probability density, the domain of integration is determined and divided into small domains; for each small domain, the probability and the difference between the average waterlogging losses with and without the project, called the regional benefit of the waterlogging control project, are calculated under the condition that rainstorms in the waterlogging-prone zone coincide with the water level in the drainage-accepting zone. Finally, the weighted mean of the regional benefits over all small domains, with probability as the weight, gives the benefit of the waterlogging control project. Taking the benefit estimation of a waterlogging control project in Yangshan County, Guangdong Province, as an example, the paper briefly explains the procedure of waterlogging control project benefit estimation. The results show that the proposed benefit estimation model is applicable to changing conditions in both the disaster-inducing environment of the waterlogging-prone zone and the disaster-bearing bodies, because it considers all cases in which rainstorms of all frequencies meet different water levels in the drainage-accepting zone. The estimation method can thus reflect the actual situation more objectively and offer a scientific basis for rational decision-making on waterlogging control projects.
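The expectation described above can be sketched numerically: build a joint density of rainstorm depth and receiving-water level from marginal distributions plus a copula, discretise it over a grid of small domains, and take the probability-weighted mean of the loss difference. The sketch below uses a Gaussian copula purely for illustration; the copula family, marginals, correlation, and loss surfaces are all assumptions, not the paper's fitted model for Yangshan County.

```python
import numpy as np
from scipy.stats import norm, gumbel_r

def gaussian_copula_density(u, v, rho):
    """Density of the bivariate Gaussian copula evaluated at (u, v)."""
    x, y = norm.ppf(u), norm.ppf(v)
    num = 2 * rho * x * y - rho**2 * (x**2 + y**2)
    return np.exp(num / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)

def waterlogging_benefit(loss_without, loss_with, rain_dist, level_dist, rho,
                         rain_grid, level_grid):
    """Expected benefit = E[loss without project - loss with project], with the
    expectation over the joint distribution of storm rainfall and receiving-water
    level (marginals + Gaussian copula), evaluated on a discretised grid of
    small domains with the cell probabilities as weights."""
    r, w = np.meshgrid(rain_grid, level_grid, indexing="ij")
    u, v = rain_dist.cdf(r), level_dist.cdf(w)
    density = gaussian_copula_density(u, v, rho) * rain_dist.pdf(r) * level_dist.pdf(w)
    weights = density * (rain_grid[1] - rain_grid[0]) * (level_grid[1] - level_grid[0])
    weights /= weights.sum()                     # normalise the discretised probabilities
    return float(np.sum(weights * (loss_without(r, w) - loss_with(r, w))))

# Illustrative marginals and loss surfaces (mm of storm rain, m of water level)
rain = gumbel_r(loc=80, scale=30)
level = norm(loc=2.0, scale=0.5)
loss_without = lambda r, w: 0.05 * np.maximum(r - 60, 0) * np.maximum(w - 1.5, 0)
loss_with = lambda r, w: 0.4 * loss_without(r, w)
print(waterlogging_benefit(loss_without, loss_with, rain, level, rho=0.4,
                           rain_grid=np.linspace(1, 300, 300),
                           level_grid=np.linspace(0.5, 4.0, 200)))
```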
NASA Astrophysics Data System (ADS)
Stolzenburg, Maribeth; Marshall, Thomas C.; Karunarathne, Sumedhe; Orville, Richard E.
2018-10-01
Using video data recorded at 50,000 frames per second for nearby negative lightning flashes, estimates are derived for the length of positive upward connecting leaders (UCLs) that presumably formed prior to new ground attachments. Return strokes were 1.7 to 7.8 km distant, yielding image resolutions of 4.25 to 19.5 m. No UCLs are imaged in these data, indicating those features were too transient or too dim compared to other lightning processes that are imaged at these resolutions. Upper bound lengths for 17 presumed UCLs are determined from the height above flat ground or water of the successful stepped leader tip in the image immediately prior to (within 20 μs before) the return stroke. Better estimates of maximum UCL lengths are determined using the downward stepped leader tip's speed of advance and the estimated return stroke time within its first frame. For 17 strokes, the upper bound length of the possible UCL averages 31.6 m and ranges from 11.3 to 50.3 m. Among the close strokes (those with spatial resolution <8 m per pixel), the five that connected to water (a salt water lagoon) have UCL upper bound estimates averaging significantly shorter (24.1 m) than the average for the three close strokes that connected to land (36.9 m). The better estimates of maximum UCL lengths for the eight close strokes average 20.2 m, with a slightly shorter average of 18.3 m for the five that connected to water. All the better estimates of UCL maximum lengths are <38 m in this dataset.
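The two estimates described reduce to a small calculation: the crude upper bound is the leader-tip height in the last pre-stroke frame, and the refined bound lets the tip keep descending at its measured speed until the estimated return-stroke time. The numeric example below uses illustrative values, not measurements from the 17 analysed strokes.

```python
def ucl_length_bounds(tip_height_m, leader_speed_m_per_s, time_to_stroke_s):
    """Upper-bound estimates of the upward-connecting-leader (UCL) length.

    tip_height_m         : stepped-leader tip height above flat ground or water
                           in the last frame before the return stroke.
    leader_speed_m_per_s : downward speed of advance of the leader tip.
    time_to_stroke_s     : estimated time from that frame to the return stroke.

    The crude bound is the tip height itself; the better bound subtracts the
    additional descent that occurs before the stroke.
    """
    crude = tip_height_m
    better = max(tip_height_m - leader_speed_m_per_s * time_to_stroke_s, 0.0)
    return crude, better

# Illustrative values: tip 30 m up, 5e5 m/s leader speed, 15 us until the stroke
print(ucl_length_bounds(30.0, 5.0e5, 15e-6))   # -> (30.0, 22.5)
```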
23 CFR 635.115 - Agreement estimate.
Code of Federal Regulations, 2010 CFR
2010-04-01
... CONSTRUCTION AND MAINTENANCE Contract Procedures § 635.115 Agreement estimate. (a) Following the award of contract, an agreement estimate based on the contract unit prices and estimated quantities shall be...
Origins of the Kuroshio and Mindanao Currents
2016-03-30
Figure caption: Objectively mapped time-averaged sea surface height (right) and surface geostrophic current (left); the mapped fields are estimated by objective mapping.
Li, Xiang; Kuk, Anthony Y C; Xu, Jinfeng
2014-12-10
Human biomonitoring of exposure to environmental chemicals is important. Individual monitoring is not viable because of low individual exposure level or insufficient volume of materials and the prohibitive cost of taking measurements from many subjects. Pooling of samples is an efficient and cost-effective way to collect data. Estimation is, however, complicated as individual values within each pool are not observed but are only known up to their average or weighted average. The distribution of such averages is intractable when the individual measurements are lognormally distributed, which is a common assumption. We propose to replace the intractable distribution of the pool averages by a Gaussian likelihood to obtain parameter estimates. If the pool size is large, this method produces statistically efficient estimates, but regardless of pool size, the method yields consistent estimates as the number of pools increases. An empirical Bayes (EB) Gaussian likelihood approach, as well as its Bayesian analog, is developed to pool information from various demographic groups by using a mixed-effect formulation. We also discuss methods to estimate the underlying mean-variance relationship and to select a good model for the means, which can be incorporated into the proposed EB or Bayes framework. By borrowing strength across groups, the EB estimator is more efficient than the individual group-specific estimator. Simulation results show that the EB Gaussian likelihood estimates outperform a previous method proposed for the National Health and Nutrition Examination Surveys with much smaller bias and better coverage in interval estimation, especially after correction of bias. Copyright © 2014 John Wiley & Sons, Ltd.
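The central idea, replacing the intractable distribution of a pool average of lognormal measurements with a Gaussian whose mean and variance match the pool-average moments, can be sketched as a simple maximum-likelihood fit. This is a minimal illustration of that idea only, not the paper's full empirical Bayes, mixed-effect formulation; the simulated pool sizes and parameters are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_lognormal_from_pools(pool_means, pool_sizes):
    """Gaussian-likelihood estimation of lognormal parameters from pooled data.

    Each average of n i.i.d. LogNormal(mu, sigma) measurements has mean
    m = exp(mu + sigma^2/2) and variance v = m^2 * (exp(sigma^2) - 1) / n.
    The intractable distribution of the pool average is replaced by N(m, v),
    and (mu, log sigma) are found by maximising that likelihood.
    """
    pool_means = np.asarray(pool_means, float)
    pool_sizes = np.asarray(pool_sizes, float)

    def neg_log_lik(theta):
        mu, log_sigma = theta
        sigma2 = np.exp(2 * log_sigma)
        m = np.exp(mu + sigma2 / 2)
        v = m**2 * np.expm1(sigma2) / pool_sizes
        return 0.5 * np.sum(np.log(2 * np.pi * v) + (pool_means - m) ** 2 / v)

    res = minimize(neg_log_lik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
    return res.x[0], float(np.exp(res.x[1]))   # (mu_hat, sigma_hat)

# Simulated check: 200 pools of size 8 drawn from LogNormal(mu=1.0, sigma=0.6)
rng = np.random.default_rng(3)
pools = rng.lognormal(1.0, 0.6, size=(200, 8)).mean(axis=1)
print(fit_lognormal_from_pools(pools, np.full(200, 8)))
```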
Current water ingestion estimates are important for the assessment of risk to human populations of exposure to water-borne pollutants. This paper reports mean and percentile estimates of the distributions of daily average per capita water ingestion for 12 age range groups. The a...
Electrical fatalities among U.S. construction workers.
Ore, T; Casini, V
1996-06-01
Over 2000 electrocution deaths were identified among U.S. construction workers from 1980 to 1991, with the highest mean annual crude mortality rate (2.5 per 100,000 people), and second highest mean age-adjusted rate (2.7 per 100,000 people) of all industries. Although the crude fatality rates showed a downward trend, construction workers are still about four times more likely to be electrocuted at work than are workers in all industries combined. Nearly 40% of the 5083 fatal electrocutions in all industries combined occurred in construction, and 80% were associated with industrial wiring, appliances, and transmission lines. Electrocutions ranked as the second leading cause of death among construction workers, accounting for an average of 15% of traumatic deaths in the industry from 1980 to 1991. The study indicates that the workers most at risk of electrical injury are male, young, nonwhite, and electricians, structural metal workers, and laborers. The most likely time of injury is 11 a.m. to 3 p.m. from June to August. Focusing prevention on these populations and characteristics through better methods of worker and supervisor electrical safety training, use of adequate protective clothing, and compliance with established procedures could minimize the average annual loss of 168 U.S. construction workers.
Workplace smoking related absenteeism and productivity costs in Taiwan
Tsai, S; Wen, C; Hu, S; Cheng, T; Huang, S
2005-01-01
Objective: To estimate productivity losses and financial costs to employers caused by cigarette smoking in the Taiwan workplace. Methods: The human capital approach was used to calculate lost productivity. Assuming the value of lost productivity was equal to the wage/salary rate and basing the calculations on smoking rate in the workforce, average days of absenteeism, average wage/salary rate, and increased risk and absenteeism among smokers obtained from earlier research, costs due to smoker absenteeism were estimated. Financial losses caused by passive smoking, smoking breaks, and occupational injuries were calculated. Results: Using a conservative estimate of excess absenteeism from work, male smokers took off an average of 4.36 sick days and male non-smokers took off an average of 3.30 sick days. Female smokers took off an average of 4.96 sick days and non-smoking females took off an average of 3.75 sick days. Excess absenteeism caused by employee smoking was estimated to cost US$178 million per annum for males and US$6 million for females at a total cost of US$184 million per annum. The time men and women spent taking smoking breaks amounted to nine days per year and six days per year, respectively, resulting in reduced output productivity losses of US$733 million. Increased sick leave costs due to passive smoking were approximately US$81 million. Potential costs incurred from occupational injuries among smoking employees were estimated to be US$34 million. Conclusions: Financial costs caused by increased absenteeism and reduced productivity from employees who smoke are significant in Taiwan. Based on conservative estimates, total costs attributed to smoking in the workforce were approximately US$1032 million. PMID:15923446
Workplace smoking related absenteeism and productivity costs in Taiwan.
Tsai, S P; Wen, C P; Hu, S C; Cheng, T Y; Huang, S J
2005-06-01
To estimate productivity losses and financial costs to employers caused by cigarette smoking in the Taiwan workplace. The human capital approach was used to calculate lost productivity. Assuming the value of lost productivity was equal to the wage/salary rate and basing the calculations on smoking rate in the workforce, average days of absenteeism, average wage/salary rate, and increased risk and absenteeism among smokers obtained from earlier research, costs due to smoker absenteeism were estimated. Financial losses caused by passive smoking, smoking breaks, and occupational injuries were calculated. Using a conservative estimate of excess absenteeism from work, male smokers took off an average of 4.36 sick days and male non-smokers took off an average of 3.30 sick days. Female smokers took off an average of 4.96 sick days and non-smoking females took off an average of 3.75 sick days. Excess absenteeism caused by employee smoking was estimated to cost USD 178 million per annum for males and USD 6 million for females at a total cost of USD 184 million per annum. The time men and women spent taking smoking breaks amounted to nine days per year and six days per year, respectively, resulting in reduced output productivity losses of USD 733 million. Increased sick leave costs due to passive smoking were approximately USD 81 million. Potential costs incurred from occupational injuries among smoking employees were estimated to be USD 34 million. Financial costs caused by increased absenteeism and reduced productivity from employees who smoke are significant in Taiwan. Based on conservative estimates, total costs attributed to smoking in the workforce were approximately USD 1032 million.
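A minimal sketch of the human capital bookkeeping behind such absenteeism figures is given below; only the excess sick-day numbers come from the abstract, while the workforce sizes, smoking rates, and daily wage are hypothetical placeholders rather than the study's actual inputs.

```python
# Hypothetical inputs; only the excess sick-day figures come from the abstract.
workforce = {"male": 5_000_000, "female": 3_000_000}        # assumed headcounts
smoking_rate = {"male": 0.40, "female": 0.05}                # assumed prevalence
excess_days = {"male": 4.36 - 3.30, "female": 4.96 - 3.75}   # smoker minus non-smoker
daily_wage_usd = 45.0                                        # assumed average daily wage

absenteeism_cost = sum(
    workforce[g] * smoking_rate[g] * excess_days[g] * daily_wage_usd
    for g in workforce
)
print(f"Estimated excess absenteeism cost: US${absenteeism_cost/1e6:.0f} million")
```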
Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C
2012-07-01
The objective of this study was to develop and evaluate a mathematical model used to estimate the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic model components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing whether they followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by the existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead with average mean absolute errors of 12.45 and 1.85%, respectively. The average mean absolute error obtained with InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the change in DFI and BW for each individual pig. The mechanistic model component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between-animal (average CV = 7%) and over-time (average CV = 14%) variation. Thus, the amino acid requirements estimated by the model are animal- and time-dependent and follow, in real time, the individual DFI and BW growth patterns. The proposed model can follow the feed intake and body weight trajectory of each individual pig in real time with good accuracy. Based on these trajectories and using classical factorial equations, the model makes it possible to estimate dynamically the AA requirements of each animal, taking into account the intake and growth changes of the animal.
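To make the factorial step concrete, the sketch below computes a daily standardized ileal digestible (SID) lysine requirement as maintenance plus growth demand and divides by estimated DFI to get a dietary concentration. The coefficient values are round illustrative numbers, not those of the proposed model or InraPorc.

```python
def sid_lys_requirement(bw_kg, dg_kg_per_d, dfi_kg_per_d,
                        maint_coef=0.036,       # g SID Lys per kg BW^0.75 per day (assumed)
                        lys_per_kg_gain=12.0,   # g SID Lys retained per kg of gain (assumed)
                        efficiency=0.72):       # marginal efficiency of Lys use (assumed)
    """Factorial estimate of one pig's daily SID lysine need and the dietary
    concentration required at its current feed intake. Illustrative only."""
    maintenance = maint_coef * bw_kg ** 0.75
    growth = lys_per_kg_gain * dg_kg_per_d / efficiency
    daily_req_g = maintenance + growth
    return daily_req_g, daily_req_g / dfi_kg_per_d   # g/day and g per kg of feed

req_g, conc_g_per_kg = sid_lys_requirement(bw_kg=60.0, dg_kg_per_d=0.9, dfi_kg_per_d=2.3)
print(req_g, conc_g_per_kg)
```

In the real-time setting described above, bw_kg, dg_kg_per_d, and dfi_kg_per_d would come from the empirical component's daily estimates rather than fixed inputs.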
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishida, Hideshi, E-mail: ishida@me.es.osaka-u.ac.jp
2014-06-15
In this study, a family of local quantities defined on each partition and its averaging on a macroscopic small region, site, are defined on a multibaker chain system. On its averaged quantities, a law of order estimation in the bulk system is proved, making it possible to estimate the order of the quantities with respect to the representative partition scale parameter Δ. Moreover, the form of the leading-order terms of the averaged quantities is obtained, and the form enables us to have the macroscopic quantity in the continuum limit, as Δ → 0, and to confirm its partitioning independency. These deliverables fully explain the numerical results obtained by Ishida, consistent with the irreversible thermodynamics.
Model averaging in linkage analysis.
Matthysse, Steven
2006-06-05
Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc.
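The detailed-balance device described above can be sketched generically: draw the first state exactly from the (unnormalized) target by rejection sampling, then run a Metropolis walk whose stationary distribution is that target, so every iterate is marginally target-distributed. The target density, envelope, and tuning constants below are purely illustrative and have nothing to do with linkage likelihoods.

```python
import numpy as np

rng = np.random.default_rng(1)

def target(x):
    # Unnormalized target density (purely illustrative, not a linkage model).
    return np.exp(-0.5 * x**2) * (1.0 + 0.5 * np.sin(3.0 * x)**2)

def envelope_pdf(x):
    # Normal(0, sd=2) proposal density used for rejection sampling.
    return np.exp(-x**2 / 8.0) / np.sqrt(8.0 * np.pi)

def rejection_draw(M=8.0):
    # M bounds target(x) / envelope_pdf(x); the first accepted draw has exactly
    # the (normalized) target distribution.
    while True:
        x = rng.normal(0.0, 2.0)
        if rng.uniform() < target(x) / (M * envelope_pdf(x)):
            return x

# Start in equilibrium, then take Metropolis steps; detailed balance keeps every
# subsequent point marginally target-distributed.
x = rejection_draw()
chain = [x]
for _ in range(5000):
    proposal = x + rng.normal(0.0, 0.8)
    if rng.uniform() < min(1.0, target(proposal) / target(x)):
        x = proposal
    chain.append(x)
```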
Runoff Analysis Considering Orographical Features Using Dual Polarization Radar Rainfall
NASA Astrophysics Data System (ADS)
Noh, Hui-seong; Shin, Hyun-seok; Kang, Na-rae; Lee, Choong-Ke; Kim, Hung-soo
2013-04-01
Recently, the necessity for rainfall estimation and forecasting using radar has been highlighted, due to the frequent occurrence of torrential rainfall resulting from abnormal changes of weather. Radar rainfall data represent temporal and spatial distributions properly and can replace existing rain gauge networks; they are also frequently applied in hydrologic research. However, radar rainfall data have an accuracy limitation, since rainfall is estimated by monitoring clouds and precipitation particles formed 1.5-3 km above the surface or higher in the atmosphere. In a country like Korea, where nearly 70% of the land is mountainous, the use of rainfall radar is restricted by beam blocking caused by topography. This study aims to analyze runoff and examine the applicability of the radar rainfall estimates (R(Z), R(ZDR) and R(KDP)) provided by the Han River Flood Control Office (HRFCO) according to basin elevation in the Nakdong river watershed. For this purpose, radar rainfall was estimated for each rainfall event in three sub-basins with average elevation above 400 m (Namgang dam, Andong dam, and Hapcheon dam) and three sub-basins with average elevation below 150 m (Waegwan, Changryeong, and Goryeong). Runoff was then simulated with a distributed model (Vflo) and the results were compared with observed runoff. Rainfall was estimated using the radar-rainfall transform formulas R(Z), R(Z,ZDR), and R(Z,ZDR,KDP) for four storm events and compared with rain gauge point rainfall; depending on the event, rainfall was over- or underestimated, with R(Z,ZDR) and R(Z,ZDR,KDP) giving the most similar results. Runoff analysis was then performed with the estimated radar rainfall, and the hydrograph components, peak flows, and total runoff volumes from estimated and observed rainfall were compared. The hydrologic components fluctuated strongly from event to event, so appropriate radar rainfall data derived from the above transform formulas must be chosen for runoff analysis. The simulated hydrographs in the three agricultural (low-elevation) basins were more similar to the observed hydrographs than those in the three mountainous basins; in particular, the peak flows and hydrograph shapes of the agricultural basins were much closer to the observations. This result reflects differences in radar rainfall with basin elevation; therefore, radar rainfall transform formulas should be examined for each rainfall event, and runoff analysis should account for basin elevation, to improve the application of radar rainfall. Acknowledgment This study was financially supported by the Construction Technology Innovation Program (08-Tech-Inovation-F01) through the Research Center of Flood Defence Technology for Next Generation in Korea Institute of Construction & Transportation Technology Evaluation and Planning (KICTEP) of the Ministry of Land, Transport and Maritime Affairs (MLTM).
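For orientation, the single-polarization step of such a workflow amounts to inverting a power-law reflectivity-rain rate relation, as in the sketch below; the default coefficients are the classic Marshall-Palmer values and are not necessarily those used in the HRFCO R(Z), R(Z,ZDR), or R(Z,ZDR,KDP) formulas.

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert a power-law Z = a * R**b relation. The default a, b are the classic
    Marshall-Palmer coefficients, not necessarily those of the HRFCO formulas."""
    z_linear = 10.0 ** (np.asarray(dbz) / 10.0)   # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z_linear / a) ** (1.0 / b)            # rain rate in mm/h

print(rain_rate_from_dbz([20.0, 35.0, 50.0]))
```

Dual-polarization estimators add ZDR and KDP terms to reduce the sensitivity of this inversion to drop-size assumptions.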
The contemporary cement cycle of the United States
Kapur, A.; Van Oss, H. G.; Keoleian, G.; Kesler, S.E.; Kendall, A.
2009-01-01
A country-level stock and flow model for cement, an important construction material, was developed based on a material flow analysis framework. Using this model, the contemporary cement cycle of the United States was constructed by analyzing production, import, and export data for different stages of the cement cycle. The United States currently supplies approximately 80% of its cement consumption through domestic production and the rest is imported. The average annual net addition of in-use new cement stock over the period 2000-2004 was approximately 83 million metric tons and amounts to 2.3 tons per capita of concrete. Nonfuel carbon dioxide emissions (42 million metric tons per year) from the calcination phase of cement manufacture account for 62% of the total 68 million tons per year of cement production residues. The end-of-life cement discards are estimated to be 33 million metric tons per year, of which between 30% and 80% is recycled. A significant portion of the infrastructure in the United States is reaching the end of its useful life and will need to be replaced or rehabilitated; this could require far more cement than might be expected from economic forecasts of demand for cement. © 2009 Springer Japan.
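The stock-and-flow accounting reduces to a simple annual mass balance, sketched below; only the roughly 83 Mt net addition, the 33 Mt of discards, and the 30-80% recycling range come from the abstract, while the production and trade flows are placeholder values.

```python
# Annual mass balance for the in-use cement stock (million metric tons per year).
# Production and trade flows are placeholders; discards and the recycling range
# are the figures quoted in the abstract.
production, imports, exports = 95.0, 22.0, 1.0
inflow_to_use = production + imports - exports            # cement entering use
end_of_life_discards = 33.0
net_stock_addition = inflow_to_use - end_of_life_discards # compare with ~83 Mt/yr reported
recycled_range = (0.30 * end_of_life_discards, 0.80 * end_of_life_discards)
print(net_stock_addition, recycled_range)
```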
1990-01-01
76,951,000 shall be available for study, planning, design, architect and engineer services, as authorized by law, unless the Secretary of Defense... DESCRIPTION OF PROPOSED CONSTRUCTION: This is a public/private venture project using 10 U.S.C. 2809...
Measuring Time-Averaged Blood Pressure
NASA Technical Reports Server (NTRS)
Rothman, Neil S.
1988-01-01
Device measures time-averaged component of absolute blood pressure in artery. Includes compliant cuff around artery and external monitoring unit. Ceramic construction in monitoring unit suppresses ebb and flow of pressure-transmitting fluid in sensor chamber. Transducer measures only static component of blood pressure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jafarov, E. E.; Parsekian, A. D.; Schaefer, K.
Ground penetrating radar (GPR) has emerged as an effective tool for estimating active layer thickness (ALT) and volumetric water content (VWC) within the active layer. In August 2013, we conducted a series of GPR and probing surveys using a 500 MHz antenna and metallic probe around Barrow, Alaska. Here, we collected about 15 km of GPR data and 1.5 km of probing data. We describe the GPR data processing workflow from raw GPR data to the estimated ALT and VWC. We then include the corresponding uncertainties for each measured and estimated parameter. The estimated average GPR-derived ALT was 41 cm, with a standard deviation of 9 cm. The average probed ALT was 40 cm, with a standard deviation of 12 cm. The average GPR-derived VWC was 0.65, with a standard deviation of 0.14.
Jafarov, E. E.; Parsekian, A. D.; Schaefer, K.; ...
2018-01-09
Ground penetrating radar (GPR) has emerged as an effective tool for estimating active layer thickness (ALT) and volumetric water content (VWC) within the active layer. In August 2013, we conducted a series of GPR and probing surveys using a 500 MHz antenna and metallic probe around Barrow, Alaska. Here, we collected about 15 km of GPR data and 1.5 km of probing data. We describe the GPR data processing workflow from raw GPR data to the estimated ALT and VWC. We then include the corresponding uncertainties for each measured and estimated parameter. The estimated average GPR-derived ALT was 41 cm, with a standard deviation of 9 cm. The average probed ALT was 40 cm, with a standard deviation of 12 cm. The average GPR-derived VWC was 0.65, with a standard deviation of 0.14.
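Conceptually, the conversion from GPR travel time to ALT and VWC rests on the radar wave speed in the thawed layer. The sketch below shows a generic version using an assumed relative permittivity and the Topp et al. (1980) empirical polynomial; these choices are illustrative and may differ from the relations actually used in this workflow.

```python
C_LIGHT = 0.299792458   # speed of light in m/ns

def alt_from_gpr(twt_ns, eps_r):
    """Active layer thickness from the two-way travel time (ns) of the permafrost-table
    reflector, given an assumed relative permittivity of the thawed layer."""
    v = C_LIGHT / eps_r**0.5        # wave speed in the active layer, m/ns
    return v * twt_ns / 2.0         # one-way depth, m

def vwc_topp(eps_r):
    """Topp et al. (1980) permittivity-to-VWC polynomial; illustrative, not
    necessarily the relation used in the paper."""
    return -5.3e-2 + 2.92e-2*eps_r - 5.5e-4*eps_r**2 + 4.3e-6*eps_r**3

print(alt_from_gpr(twt_ns=15.0, eps_r=12.0), vwc_topp(12.0))
```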
2016-03-01
...million in guaranty claims under the construction contract. LCS 3 and 4: According to Navy contracting officers, the Navy negotiated lower... their responsibility for both guaranty claims and follow-on work to correct the defects. Shipbuilders earn profit under the construction contract, and... for its ships with guarantees. On average, commercial ship buyers told us that the number of warranty claims totals 1 to 2 percent of the construction...
Robust w-Estimators for Cryo-EM Class Means
Huang, Chenxi; Tagare, Hemant D.
2016-01-01
A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the “class mean”, improves the signal-to-noise ratio in single particle reconstruction (SPR). The averaging step is often compromised because of outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods is done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a “w-estimator” of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and influence function, are investigated. An extension of the estimator to images with different contrast transfer functions (CTFs) is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers. PMID:26841397
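A threshold-free robust average can be sketched as an iteratively re-weighted mean in which each image's weight decays smoothly with its residual from the current estimate. The Cauchy-type weight below is only illustrative; it is not the specific w-function, consistency argument, or CTF handling of the paper.

```python
import numpy as np

def robust_class_mean(images, c=1.0, n_iter=20):
    """Iteratively re-weighted average: each image's weight decays smoothly with its
    residual from the current mean (Cauchy-type weights; illustrative, not the
    paper's specific w-estimator). images: array of shape (N, H, W)."""
    mean = images.mean(axis=0)
    for _ in range(n_iter):
        resid = np.linalg.norm((images - mean).reshape(len(images), -1), axis=1)
        scale = np.median(resid) + 1e-12
        w = 1.0 / (1.0 + (resid / (c * scale))**2)   # no hard outlier threshold
        mean = np.tensordot(w, images, axes=1) / w.sum()
    return mean

rng = np.random.default_rng(0)
imgs = rng.normal(size=(50, 32, 32))
imgs[:5] += 5.0                      # a few grossly deviant "outlier" images
clean_mean = robust_class_mean(imgs)
```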
A Geomagnetic Estimate of Mean Paleointensity
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
2004-01-01
To test a statistical hypothesis about Earth's magnetic field against paleomagnetism, the present field is used to estimate time-averaged paleointensity. The estimate used the modern magnetic multipole spectrum R(n), which gives the mean square induction represented by spherical harmonics of degree n averaged over the sphere of radius a = 6371.2 km. The hypothesis asserts that low-degree multipole powers of the core-source field are distributed as chi-squared with 2n+1 degrees of freedom, with expectation values involving the 3480 km radius c of Earth's core. (This is compatible with a field that is usually mainly geocentric axial dipolar.) Amplitude K is estimated by fitting theoretical to observational spectra through degree 12. The resulting calibrated expectation spectrum is summed through degree 12 to estimate the expected square intensity F². The sum also estimates F² averaged over geologic time, in so far as the present magnetic spectrum is a fair sample of that generated in the past by core geodynamic processes. Additional information is included in the original extended abstract.
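In symbols, the construction described above amounts to the following (a sketch of the stated hypothesis and summation step; the parametric form of the expectation values in terms of K and c/a is not reproduced here):

```latex
% Chi-squared hypothesis for the multipole powers and the summation to degree 12.
\[
  \frac{(2n+1)\,R_n}{\langle R_n \rangle} \sim \chi^{2}_{\,2n+1},
  \qquad
  \langle F^{2} \rangle \;\approx\; \sum_{n=1}^{12} \langle R_n \rangle .
\]
```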
Little, Callie W; Haughbrook, Rasheda; Hart, Sara A
2017-01-01
Numerous twin studies have examined the genetic and environmental etiology of reading comprehension, though it is likely that etiological estimates are influenced by unidentified sample conditions (e.g. Tucker-Drob and Bates, Psychol Sci:0956797615612727, 2015). The purpose of this meta-analysis was to average the etiological influences of reading comprehension and to explore the potential moderators influencing these estimates. Results revealed an average heritability estimate of h² = 0.59, with significant variation in estimates across studies, suggesting potential moderation. Moderation results indicated publication year, grade level, project, zygosity methods, and response type moderated heritability estimates. The average shared environmental estimate was c² = 0.16, with publication year, grade and zygosity methods acting as significant moderators. These findings support the role of genetics on reading comprehension, and a small significant role of shared environmental influences. The results suggest that our interpretation of how genes and environments influence reading comprehension should reflect aspects of study and sample.
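A bare-bones version of averaging effect sizes across studies is an inverse-variance weighted mean, sketched below with made-up per-study estimates; the actual meta-analysis, with its moderator tests, would use a random-effects or meta-regression model.

```python
import numpy as np

# Hypothetical per-study heritability estimates and standard errors.
h2 = np.array([0.55, 0.63, 0.48, 0.70, 0.59])
se = np.array([0.08, 0.05, 0.10, 0.07, 0.06])

w = 1.0 / se**2                          # inverse-variance (fixed-effect) weights
h2_avg = np.sum(w * h2) / np.sum(w)
se_avg = np.sqrt(1.0 / np.sum(w))
print(h2_avg, h2_avg - 1.96 * se_avg, h2_avg + 1.96 * se_avg)
```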
Estimation of Cloud Fraction Profile in Shallow Convection Using a Scanning Cloud Radar
Oue, Mariko; Kollias, Pavlos; North, Kirk W.; ...
2016-10-18
Large spatial heterogeneities in shallow convection result in uncertainties in estimations of domain-averaged cloud fraction profiles (CFP). This issue is addressed using large eddy simulations of shallow convection over land coupled with a radar simulator. Results indicate that zenith profiling observations are inadequate to provide reliable CFP estimates. Use of Scanning Cloud Radar (SCR), performing a sequence of cross-wind horizon-to-horizon scans, is not straightforward due to the strong dependence of radar sensitivity to target distance. An objective method for estimating domain-averaged CFP is proposed that uses observed statistics of SCR hydrometeor detection with height to estimate optimum sampling regions. This method shows good agreement with the model CFP. Results indicate that CFP estimates require more than 35 min of SCR scans to converge on the model domain average. Lastly, the proposed technique is expected to improve our ability to compare model output with cloud radar observations in shallow cumulus cloud conditions.
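In its simplest form, a cloud fraction profile from scan data is the fraction of usable radar samples at each height that contain a hydrometeor detection. The sketch below uses a fixed maximum-range cutoff as a crude stand-in for the paper's statistically chosen optimum sampling region; all arrays are synthetic.

```python
import numpy as np

def cloud_fraction_profile(detect, sample_range, max_range):
    """detect: boolean array (n_samples, n_heights) of hydrometeor detections from
    horizon-to-horizon scans; sample_range: distance of each sample from the radar,
    same shape. Samples beyond max_range are excluded (a crude stand-in for the
    paper's statistically estimated optimum sampling region)."""
    usable = sample_range <= max_range
    hits = np.logical_and(detect, usable).sum(axis=0).astype(float)
    counts = usable.sum(axis=0)
    return np.divide(hits, counts, out=np.full(hits.shape, np.nan), where=counts > 0)

rng = np.random.default_rng(0)
detect = rng.random((500, 60)) < 0.2                     # synthetic detection mask
sample_range = rng.uniform(0.0, 20.0, size=(500, 60))    # km, illustrative
cfp = cloud_fraction_profile(detect, sample_range, max_range=10.0)
```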
Pregnancy intentions-a complex construct and call for new measures.
Mumford, Sunni L; Sapra, Katherine J; King, Rosalind B; Louis, Jean Fredo; Buck Louis, Germaine M
2016-11-01
To estimate the prevalence of unintended pregnancies under relaxed assumptions regarding birth control use compared with a traditional constructed measure. Cross-sectional survey. Not applicable. Nationally representative sample of U.S. women aged 15-44 years. None. Prevalence of intended and unintended pregnancies as estimated by [1] a traditional constructed measure from the National Survey of Family Growth (NSFG), and [2] a constructed measure relaxing assumptions regarding birth control use, reasons for nonuse, and pregnancy timing. The prevalence of unintended pregnancies was 6% higher using the traditional constructed measure as compared with the approach with relaxed assumptions (NSFG: 44%, 95% confidence interval [CI] 41, 46; new construct 38%, 95% CI, 36, 41). Using the NSFG approach, only 92% of women who stopped birth control to become pregnant and 0 women who were not using contraceptives at the time of the pregnancy and reported that they did not mind getting pregnant were classified as having intended pregnancies, compared with 100% using the new construct. Current measures of pregnancy intention may overestimate rates of unintended pregnancy, with over 340,000 pregnancies in the United States misclassified as unintended using the current approach, corresponding to an estimated savings of $678 million in public health-care expenditures. Current constructs make assumptions that may not reflect contemporary reproductive practices, so improved measures are needed. Published by Elsevier Inc.
Injuries and their burden in insured construction workers in Iran, 2012.
Hatami, Seyed Esmaeil; Khanjani, Narges; Alavinia, Seyed Mohammad; Ravandi, Mohammad Reza Ghotbi
2017-03-01
The present study used disability adjusted life years (DALY) to estimate the burden of external causes of injury in construction workers insured in Iran in 2012. The Global Burden of Disease method (2010) was used to estimate the years of life lost due to death (YLL) and years of life lost due to disability (YLD). DALY was calculated as the sum of YLL and YLD. There were 5352 injured construction workers in Iran (11.25 individuals per 1000). Falls were the most common incident, involving 2490 individuals (46.53%). In total, DALY was estimated at 18,557 years for all age groups and both genders, comprising 17,821 YLD (96%) and 736 YLL (4%). The DALY related to construction work is high in Iran and it has notably affected the young. Hence, more preventive measures should be applied to reduce the overall burden of specific external causes of injury, especially in young and inexperienced workers.
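The DALY arithmetic follows the standard identity DALY = YLL + YLD, as in the toy calculation below; only the 736 YLL and 17,821 YLD totals come from the abstract, and the per-category inputs are placeholders.

```python
# DALY = YLL + YLD (Global Burden of Disease 2010 convention, no discounting or age weights).
def yll(deaths, life_expectancy_at_death):
    return deaths * life_expectancy_at_death

def yld(cases, disability_weight, duration_years):
    return cases * disability_weight * duration_years

# Placeholder inputs for a single injury category (e.g. falls); only the totals
# reported in the abstract are real figures.
daly_example = yll(deaths=20, life_expectancy_at_death=36.8) + \
               yld(cases=2490, disability_weight=0.05, duration_years=2.0)
daly_reported_total = 736 + 17_821   # = 18,557 years, as reported in the abstract
```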
NASA Astrophysics Data System (ADS)
Park, Jin-Young; Lee, Dong-Eun; Kim, Byung-Soo
2017-10-01
Due to increasing concern about climate change, efforts to reduce environmental load are continuously being made in the construction industry, and LCA (life cycle assessment) is presented as an effective method for assessing environmental load. However, because LCA requires construction quantity information for environmental load estimation, it is not utilized in environmental reviews during the early design phase, where such information is difficult to obtain. In this study, a construction quantity computation system based on the standard cross sections of road drainage facilities was developed so that the quantities required for LCA can be computed using only information available in the early design phase, and a model for estimating environmental load from these quantities was developed and its effectiveness verified. The results showed a mean absolute error rate of 13.39%, indicating that the model is effective and can be used in the early design phase.
This report, Methodology to Estimate the Quantity, Composition and Management of Construction and Demolition Debris in the US, was developed to expand access to data on CDD in the US and to support research on CDD and sustainable materials management. Since past US EPA CDD estima...
NASA Technical Reports Server (NTRS)
Brown, J. A.
1983-01-01
Kennedy Space Center Cost Index aids in conceptual design cost estimates. Report discusses development of the KSC Cost Index since January 1974. Index provides management, design engineers, and estimators an up-to-date reference for local labor and material prices. Also provides amount and rate of change in these costs used to predict future construction costs.
Lateral deflection contribution to settlement estimates : [summary].
DOT National Transportation Integrated Search
2014-12-01
The Wisconsin Department of Transportation (WisDOT) occasionally constructs : embankments and retaining walls over compressible materials using staged construction. : Staged construction is a technique used to build an embankment or retaining wall in...
Cost analysis of DAWT innovative wind energy systems
NASA Astrophysics Data System (ADS)
Foreman, K. M.
The results of a diffuser augmented wind turbine (DAWT) preliminary design study of three constructional material approaches and cost analysis of DAWT electrical energy generation are presented. Costs are estimated assuming a limited production run (100 to 500 units) of factory-built subassemblies and on-site final assembly and erection within 200 miles of regional production centers. It is concluded that with the DAWT the (busbar) cost of electricity (COE) can range between 2.0 and 3.5 cents/kW-hr for farm and REA cooperative end users, for sites with annual average wind speeds of 16 and 12 mph respectively, and 150 kW rated units. No tax credit incentives are included in these figures. For commercial end users of the same units and site characteristics, the COE ranges between 4.0 and 6.5 cents/kW-hr.
Infrared Emission and Thermal Processes in Spiral Galaxies
NASA Technical Reports Server (NTRS)
Mundy, Lee; Wolfire, Mark
1999-01-01
In this research we constructed theoretical models of the infrared and submillimeter line and continuum emission from the neutral interstellar medium in the Milky Way and external galaxies. The model line intensities were compared to observations of the Galactic disk and several galaxies to determine the average physical properties of the neutral gas, including the density, temperature, and ultraviolet radiation field which illuminates the gas. In addition we investigated the heating mechanisms in the Galactic disk and estimated the emission rate of the [C II] 158 micrometer line as a function of position in the Galaxy. We conclude that the neutral gas is heated mainly by the grain photoelectric effect and that a two-phase medium (CNM+WNM) is possible between Galactic radii R = 3 kpc and R = 18 kpc. Listings of meeting presentations and publications are included.
Long-term variability of global statistical properties of epileptic brain networks
NASA Astrophysics Data System (ADS)
Kuhnert, Marie-Therese; Elger, Christian E.; Lehnertz, Klaus
2010-12-01
We investigate the influence of various pathophysiologic and physiologic processes on global statistical properties of epileptic brain networks. We construct binary functional networks from long-term, multichannel electroencephalographic data recorded from 13 epilepsy patients, and the average shortest path length and the clustering coefficient serve as global statistical network characteristics. For time-resolved estimates of these characteristics we observe large fluctuations over time, however with some periodic temporal structure. These fluctuations can, to a large extent, be attributed to daily rhythms, while relevant aspects of the epileptic process contribute only marginally. In particular, we could not observe clear-cut changes in network states that can be regarded as predictive of an impending seizure. Our findings are of particular relevance for studies aiming at an improved understanding of the epileptic process with graph-theoretical approaches.
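A generic way to obtain these two global characteristics from a binarized functional network is shown below, using synthetic multichannel data, a simple correlation-based interdependence estimate, and an arbitrary threshold; the paper's actual interdependence measure and thresholding scheme are not reproduced.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_channels, n_samples = 13, 2000
common = rng.normal(size=n_samples)                              # shared signal so channels correlate
eeg = rng.normal(size=(n_channels, n_samples)) + 0.5 * common    # synthetic stand-in for EEG segments

corr = np.corrcoef(eeg)                     # simple interdependence estimate
adj = (np.abs(corr) > 0.15).astype(int)     # binarize with an arbitrary threshold
np.fill_diagonal(adj, 0)

G = nx.from_numpy_array(adj)
if not nx.is_connected(G):                  # path length requires a connected graph
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

print("clustering coefficient:", nx.average_clustering(G))
print("average shortest path length:", nx.average_shortest_path_length(G))
```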