NASA Astrophysics Data System (ADS)
Huang, Q. Z.; Hsu, S. Y.; Li, M. H.
2016-12-01
Long-term streamflow prediction is important not only for estimating the water storage of a reservoir but also for surface-water intakes, which supply domestic, agricultural, and industrial demands. Climatological forecasts of streamflow have traditionally been used to calculate the exceedance probability curve of streamflow for water resource management. In this study, we propose a stochastic approach to predict the exceedance probability curve of long-term streamflow from the seasonal weather outlook of the Central Weather Bureau (CWB), Taiwan. The approach incorporates a statistical downscaling weather generator and a catchment-scale hydrological model to convert the monthly outlook into daily rainfall and temperature series and to simulate the streamflow conditioned on the outlook. Moreover, we apply Bayes' theorem to derive a method for calculating the exceedance probability curve of the reservoir inflow that accounts for both the seasonal weather outlook and its imperfection. The results show that our approach can produce exceedance probability curves that reflect the three-month weather outlook and its accuracy. We also show how improvements in the weather outlook affect the predicted exceedance probability curves of the streamflow. Our approach should be useful for seasonal planning and management of water resources and the associated risk assessment.
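A minimal sketch of the Bayes step described above, assuming three outlook categories, a hypothetical reliability matrix P(outlook | true state), and placeholder per-state exceedance curves (none of these values come from the paper): the posterior exceedance curve is a mixture of the per-state curves weighted by the posterior state probabilities.

```python
import numpy as np

states = ["below", "normal", "above"]              # seasonal rainfall categories
prior = np.array([1/3, 1/3, 1/3])                  # climatological prior P(state)
# Hypothetical outlook reliability: rows = true state, cols = issued outlook
reliability = np.array([[0.50, 0.30, 0.20],
                        [0.25, 0.50, 0.25],
                        [0.20, 0.30, 0.50]])

q_grid = np.linspace(0, 500, 101)                  # inflow thresholds (m3/s)
scales = {"below": 80.0, "normal": 120.0, "above": 180.0}
# Placeholder P(Q > q | state); in the paper these come from the weather
# generator plus the catchment-scale hydrological model
exceed = np.array([np.exp(-q_grid / scales[s]) for s in states])

def posterior_exceedance(issued):
    """P(Q > q | outlook) = sum_s P(Q > q | s) P(s | outlook), via Bayes."""
    j = states.index(issued)
    post = reliability[:, j] * prior               # P(outlook | s) P(s)
    post /= post.sum()                             # normalize to P(s | outlook)
    return post @ exceed

curve = posterior_exceedance("above")              # mass shifts to wetter states
```

A perfectly reliable outlook (identity reliability matrix) collapses the mixture to a single per-state curve, which is the sense in which improving the outlook sharpens the predicted exceedance curve.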
Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches
Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.
2013-01-01
At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.
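A hedged sketch of the PLS idea with scikit-learn on synthetic data (the covariates, coefficients, and threshold offset below are invented for illustration): latent components absorb the collinearity, and a tunable decision threshold separates predicted exceedances from non-exceedances.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
# Synthetic environmental covariates (turbidity, rainfall, wave height, ...),
# deliberately collinear, and log10 FIB concentration
X = rng.normal(size=(200, 10))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)
y = 2.0 + 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200)

pls = PLSRegression(n_components=3).fit(X, y)
y_hat = pls.predict(X).ravel()

standard = np.log10(235)      # single-sample E. coli standard, CFU/100 mL
threshold = standard - 0.2    # the tuning parameter: lowering it trades
flagged = y_hat > threshold   # false alarms against missed exceedances
```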
Ahearn, Elizabeth A.
2010-01-01
Multiple linear regression equations for determining flow-duration statistics were developed to estimate select flow exceedances ranging from 25 to 99 percent for six 'bioperiods' in Connecticut: Salmonid Spawning (November), Overwinter (December-February), Habitat Forming (March-April), Clupeid Spawning (May), Resident Spawning (June), and Rearing and Growth (July-October). Regression equations also were developed to estimate the 25- and 99-percent flow exceedances without reference to a bioperiod. In total, 32 equations were developed. The predictive equations were based on regression analyses relating flow statistics from streamgages to GIS-determined basin and climatic characteristics for the drainage areas of those streamgages. Thirty-nine streamgages (and an additional 6 short-term streamgages and 28 partial-record sites for the non-bioperiod 99-percent exceedance) in Connecticut and adjacent areas of neighboring States were used in the regression analysis. Weighted least squares regression analysis was used to determine the predictive equations; weights were assigned based on record length. The basin characteristics used as explanatory variables in the equations are drainage area, percentage of area with coarse-grained stratified deposits, percentage of area with wetlands, mean monthly precipitation (November), mean seasonal precipitation (December, January, and February), and mean basin elevation. Standard errors of estimate of the 32 equations ranged from 10.7 to 156 percent with medians of 19.2 and 55.4 percent to predict the 25- and 99-percent exceedances, respectively. Regression equations to estimate high and median flows (25- to 75-percent exceedances) are better predictors (smaller variability of the residual values around the regression line) than the equations to estimate low flows (greater than 75-percent exceedance). The Habitat Forming (March-April) bioperiod had the smallest standard errors of estimate, ranging from 10.7 to 20.9 percent. In contrast, the Rearing and Growth (July-October) bioperiod had the largest standard errors, ranging from 30.9 to 156 percent. The adjusted coefficient of determination of the equations ranged from 77.5 to 99.4 percent with medians of 98.5 and 90.6 percent to predict the 25- and 99-percent exceedances, respectively. Descriptive information on the streamgages used in the regression, measured basin and climatic characteristics, and estimated flow-duration statistics are provided in this report. Flow-duration statistics and the 32 regression equations for estimating flow-duration statistics in Connecticut are stored on the U.S. Geological Survey World Wide Web application 'StreamStats' (http://water.usgs.gov/osw/streamstats/index.html). The regression equations developed in this report can be used to produce unbiased estimates of select flow exceedances statewide.
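A small illustration, on invented data, of the weighted least squares step the report describes, using statsmodels with record length as the weight (variable names and coefficients are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 39
drainage_area = rng.lognormal(3, 1, n)          # mi^2
stratified_pct = rng.uniform(0, 40, n)          # % coarse stratified deposits
record_years = rng.integers(10, 80, n)          # streamgage record length

X = sm.add_constant(np.column_stack([np.log10(drainage_area), stratified_pct]))
log_q25 = 0.9 * np.log10(drainage_area) + 0.01 * stratified_pct \
          + rng.normal(scale=0.1, size=n)       # synthetic 25-percent exceedance

wls = sm.WLS(log_q25, X, weights=record_years).fit()  # long records count more
print(wls.params, wls.bse)
```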
NASA Astrophysics Data System (ADS)
Sanders, B. F.; Gallegos, H. A.; Schubert, J. E.
2011-12-01
The Baldwin Hills dam-break flood and associated structural damage are investigated in this study. The flood caused high-velocity flows exceeding 5 m/s that destroyed 41 wood-framed residential structures, 16 of which were completely washed out. Damage is predicted by coupling a calibrated hydrodynamic flood model based on the shallow-water equations to structural damage models. The hydrodynamic and damage models are two-way coupled so that building failure is predicted upon exceedance of a hydraulic intensity parameter, which in turn triggers a localized reduction in flow resistance that affects subsequent flood intensity predictions. Several established damage models and damage correlations reported in the literature are tested to evaluate their predictive skill for two damage states defined by destruction (Level 2) and washout (Level 3). Results show that high-velocity structural damage can be predicted with a remarkable level of skill using established damage models, but only with two-way coupling of the hydrodynamic and damage models. In contrast, when structural failure predictions have no influence on flow predictions, there is a significant reduction in predictive skill. Force-based damage models compare well with a subset of the damage models which were devised for similar types of structures. Implications for emergency planning and preparedness as well as monetary damage estimation are discussed.
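A toy version of the two-way coupling, with the hydrodynamics replaced by a dummy update (the failure threshold, roughness values, and the depth-velocity intensity h·v² are illustrative assumptions, not the study's calibrated models): cells whose buildings fail get lower roughness, so later flow predictions feel the washouts.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, dv2_fail = 50, 8.0
manning_n = np.full(n_cells, 0.15)          # high roughness: intact buildings
failed = np.zeros(n_cells, dtype=bool)

def shallow_water_step(n):
    """Stand-in for the shallow-water solver: lower roughness -> faster flow."""
    depth = rng.uniform(0.5, 3.0, n_cells)
    speed = rng.uniform(0.5, 6.0, n_cells) * (0.15 / n) ** 0.5
    return depth, speed

for step in range(100):
    depth, speed = shallow_water_step(manning_n)
    newly_failed = ~failed & (depth * speed**2 > dv2_fail)  # force-type criterion
    failed |= newly_failed
    manning_n[newly_failed] = 0.02          # washout: localized resistance drop

print(f"{failed.sum()} of {n_cells} structures predicted destroyed")
```

Dropping the roughness update reproduces the one-way case the abstract describes, in which failure predictions no longer feed back into the flow.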
The Impact of Exceeding TANF Time Limits on the Access to Healthcare of Low-Income Mothers.
Narain, Kimberly; Ettner, Susan
2017-01-01
The objective of this article is to estimate the relationship of exceeding Temporary Assistance for Needy Families (TANF) time limits, with health insurance, healthcare, and health outcomes. The authors use Heckman selection models that exploit variability in state time-limit duration and timing of policy implementation as identifying exclusion restrictions to adjust the effect estimates of exceeding time limits for possible correlations between the probability of exceeding time limits and unobservable factors influencing the outcomes. The authors find that exceeding time limits decreases the predicted probability of Medicaid coverage, increases the predicted probability of being uninsured, and decreases the predicted probability of annual medical provider contact.
Henne, Melinda B; Stegmann, Barbara J; Neithardt, Adrienne B; Catherino, William H; Armstrong, Alicia Y; Kao, Tzu-Cheg; Segars, James H
2008-01-01
To predict the cost of a delivery following assisted reproductive technologies (ART). Cost analysis based on retrospective chart analysis. University-based ART program. Women aged ≥26 and
Zimmerman, Tammy M.
2008-01-01
The Lake Erie beaches in Pennsylvania are a valuable recreational resource for Erie County. Concentrations of Escherichia coli (E. coli) at monitored beaches in Presque Isle State Park in Erie, Pa., occasionally exceed the single-sample bathing-water standard of 235 colonies per 100 milliliters resulting in potentially unsafe swimming conditions and prompting beach managers to post public advisories or to close beaches to recreation. To supplement the current method for assessing recreational water quality (E. coli concentrations from the previous day), a predictive regression model for E. coli concentrations at Presque Isle Beach 2 was developed from data collected during the 2004 and 2005 recreational seasons. Model output included predicted E. coli concentrations and exceedance probabilities--the probability that E. coli concentrations would exceed the standard. For this study, E. coli concentrations and other water-quality and environmental data were collected during the 2006 recreational season at Presque Isle Beach 2. The data from 2006, an independent year, were used to test (validate) the 2004-2005 predictive regression model and compare the model performance to the current method. Using 2006 data, the 2004-2005 model yielded more correct responses and better predicted exceedances of the standard than the use of E. coli concentrations from the previous day. The differences were not pronounced, however, and more data are needed. For example, the model correctly predicted exceedances of the standard 11 percent of the time (1 out of 9 exceedances that occurred in 2006) whereas using the E. coli concentrations from the previous day did not result in any correctly predicted exceedances. After validation, new models were developed by adding the 2006 data to the 2004-2005 dataset and by analyzing the data in 2- and 3-year combinations. Results showed that excluding the 2004 data (using 2005 and 2006 data only) yielded the best model. Explanatory variables in the 2005-2006 model were log10 turbidity, bird count, and wave height. The 2005-2006 model correctly predicted when the standard would not be exceeded (specificity) with a response of 95.2 percent (178 out of 187 nonexceedances) and correctly predicted when the standard would be exceeded (sensitivity) with a response of 64.3 percent (9 out of 14 exceedances). In all cases, the results from predictive modeling produced higher percentages of correct predictions than using E. coli concentrations from the previous day. Additional data collected each year can be used to test and possibly improve the model. The results of this study will aid beach managers in more rapidly determining when waters are not safe for recreational use and, subsequently, when to close a beach or post an advisory.
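The skill scores quoted above reduce to simple counts; a minimal helper (hypothetical, not the USGS code) that reproduces the 2005-2006 figures:

```python
import numpy as np

def exceedance_skill(pred_conc, obs_conc, standard=235.0):
    """Sensitivity and specificity of predicted exceedances of the
    single-sample standard (concentrations in colonies per 100 mL)."""
    pred = np.asarray(pred_conc) > standard
    obs = np.asarray(obs_conc) > standard
    sensitivity = (pred & obs).sum() / max(obs.sum(), 1)
    specificity = (~pred & ~obs).sum() / max((~obs).sum(), 1)
    return sensitivity, specificity

# The 2005-2006 model above: 9 of 14 exceedances caught (sensitivity 64.3%)
# and 178 of 187 non-exceedances correct (specificity 95.2%)
```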
NASA Astrophysics Data System (ADS)
Neumann, D. W.; Zagona, E. A.; Rajagopalan, B.
2005-12-01
Warm summer stream temperatures due to low flows and high air temperatures are a critical water quality problem in many western U.S. river basins because they impact threatened fish species' habitat. Releases from storage reservoirs and river diversions are typically driven by human demands such as irrigation, municipal and industrial uses and hydropower production. Historically, fish needs have not been formally incorporated in the operating procedures, which do not supply adequate flows for fish in the warmest, driest periods. One way to address this problem is for local and federal organizations to purchase water rights to be used to increase flows and hence decrease temperatures. A statistical model-predictive technique for efficient and effective use of a limited supply of fish water has been developed and incorporated in a Decision Support System (DSS) that can be used in an operations mode to effectively use water acquired to mitigate warm stream temperatures. The DSS is a rule-based system that uses the empirical, statistical predictive model to predict maximum daily stream temperatures based on flows that meet the non-fish operating criteria, and to compute reservoir releases of allocated fish water when predicted temperatures exceed fish habitat temperature targets with a user-specified confidence of the temperature predictions. The empirical model is developed using a step-wise linear regression procedure to select significant predictors, and includes the computation of a prediction confidence interval to quantify the uncertainty of the prediction. The DSS also includes a strategy for managing a limited amount of water throughout the season based on degree-days, in which temperatures are allowed to exceed the preferred targets for a limited number of days that can be tolerated by the fish. The DSS is demonstrated by an example application to the Truckee River near Reno, Nevada, using historical flows from 1988 through 1994. In this case, the statistical model predicts maximum daily Truckee River stream temperatures in June, July, and August using predicted maximum daily air temperature and modeled average daily flow. The empirical relationship was created through a step-wise linear regression selection process using 1993 and 1994 data. The adjusted R² value for this relationship is 0.91. The model is validated using historic data and demonstrated in a predictive mode with a prediction confidence interval to quantify the uncertainty. Results indicate that the DSS could substantially reduce the number of target temperature violations, i.e., stream temperatures exceeding the target temperature levels detrimental to fish habitat. The results show that large volumes of water are necessary to meet a temperature target with a high degree of certainty and violations may still occur if all of the stored water is depleted. A lower degree of certainty requires less water but there is a higher probability that the temperature targets will be exceeded. Addition of the rules that consider degree-days resulted in a reduction of the number of temperature violations without increasing the amount of water used. This work is described in detail in publications referenced in the URL below.
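A compact sketch of the regression-with-prediction-interval logic (coefficients, target, and confidence level are invented): release fish water when the upper prediction bound on tomorrow's maximum stream temperature crosses the habitat target.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
air = rng.uniform(25, 40, 60)                    # forecast max air temp, degC
flow = rng.uniform(5, 30, 60)                    # modeled daily flow, m3/s
water = 8 + 0.5 * air - 0.15 * flow + rng.normal(scale=0.6, size=60)

X = np.column_stack([np.ones_like(air), air, flow])
beta, *_ = np.linalg.lstsq(X, water, rcond=None)
resid = water - X @ beta
s2 = resid @ resid / (len(water) - X.shape[1])   # residual variance
XtX_inv = np.linalg.inv(X.T @ X)

def upper_pred_bound(x_new, conf=0.90):
    """One-sided upper prediction bound for a new observation."""
    se = np.sqrt(s2 * (1 + x_new @ XtX_inv @ x_new))
    return x_new @ beta + stats.t.ppf(conf, df=len(water) - X.shape[1]) * se

x_tomorrow = np.array([1.0, 36.0, 10.0])
release_fish_water = upper_pred_bound(x_tomorrow) > 22.0   # habitat target, degC
```

Raising the confidence level widens the bound and hence requires more water, which matches the trade-off the abstract reports.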
Statistical analysis of PM₁₀ concentrations at different locations in Malaysia.
Sansuddin, Nurulilyana; Ramli, Nor Azam; Yahaya, Ahmad Shukri; Yusof, Noor Faizah Fitri Md; Ghazali, Nurul Adyani; Madhoun, Wesam Ahmed Al
2011-09-01
Malaysia has experienced several haze events since the 1980s as a consequence of the transboundary movement of air pollutants emitted from forest fires and open burning activities. Hazy episodes can also result from local activities and be categorized as "localized haze". General probability distributions (i.e., gamma and log-normal) were chosen to analyze the PM10 concentration data at two different types of locations in Malaysia: industrial (Johor Bahru and Nilai) and residential (Kota Kinabalu and Kuantan). These areas were chosen based on their frequently high PM10 concentration readings. The best models representing the areas were chosen based on their performance indicator values. The best distributions provided the probabilities of exceedance and the return periods for the actual and predicted concentrations, based on the threshold limit given by the Malaysian Ambient Air Quality Guidelines (24-h average of 150 μg/m³) for PM10 concentrations. Short-term predictions of PM10 exceedances over 14 days were obtained using an autoregressive model.
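A minimal scipy sketch of the distribution-fitting step (synthetic data; the paper also compares a gamma fit and scores both with performance indicators):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
pm10 = rng.lognormal(mean=4.0, sigma=0.5, size=365)      # synthetic daily PM10

shape, loc, scale = stats.lognorm.fit(pm10, floc=0)      # fix location at zero
p_exceed = stats.lognorm.sf(150.0, shape, loc, scale)    # P(PM10 > 150 ug/m3)
return_period_days = 1.0 / p_exceed                      # mean days between
print(f"P(exceed) = {p_exceed:.4f}, return period = {return_period_days:.0f} d")
```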
Adjusted hospital death rates: a potential screen for quality of medical care.
Dubois, R W; Brook, R H; Rogers, W H
1987-09-01
Increased economic pressure on hospitals has accelerated the need to develop a screening tool for identifying hospitals that potentially provide poor quality care. Based upon data from 93 hospitals and 205,000 admissions, we used a multiple regression model to adjust each hospital's crude death rate. The adjustment process used age, origin of patient from the emergency department or nursing home, and a hospital case-mix index based on DRGs (diagnosis-related groups). Before adjustment, hospital death rates ranged from 0.3 to 5.8 per 100 admissions. After adjustment, hospital death ratios (actual death rate divided by predicted death rate) ranged from 0.36 to 1.36. Eleven hospitals (12 per cent) were identified where the actual death rate exceeded the predicted death rate by more than two standard deviations. In nine hospitals (10 per cent), the predicted death rate exceeded the actual death rate by a similar statistical margin. The 11 hospitals with higher than predicted death rates may provide inadequate quality of care or have uniquely ill patient populations. The adjusted death rate model needs to be validated and generalized before it can be used routinely to screen hospitals. However, the remaining large differences in observed versus predicted death rates lead us to believe that important differences in hospital performance may exist.
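The screening rule reduces to a ratio plus a two-standard-deviation band; a simplified sketch (numbers invented, and the dispersion here is taken across hospitals rather than from the regression's own error model):

```python
import numpy as np

actual = np.array([3.1, 1.2, 4.8, 2.0])        # deaths per 100 admissions
predicted = np.array([2.5, 1.4, 3.0, 2.1])     # from the regression adjustment
ratio = actual / predicted                     # adjusted death ratio

sd = ratio.std(ddof=1)
flag_high = ratio > 1 + 2 * sd                 # possible quality problems
flag_low = ratio < 1 - 2 * sd                  # better than expected
```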
Predictive onboard flow control for packet switching satellites
NASA Technical Reports Server (NTRS)
Bobinsky, Eric A.
1992-01-01
We outline two alternate approaches to predicting the onset of congestion in a packet switching satellite, and argue that predictive, rather than reactive, flow control is necessary for the efficient operation of such a system. The first method discussed is based on standard statistical techniques which are used to periodically calculate a probability of near-term congestion based on arrival rate statistics. If this probability exceeds a preset threshold, the satellite would transmit a rate-reduction signal to all active ground stations. The second method discussed would utilize a neural network to periodically predict the occurrence of buffer overflow based on input data which would include, in addition to arrival rates, the distributions of packet lengths, source addresses, and destination addresses.
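A hedged sketch of the first, statistical approach, assuming Poisson arrivals over the prediction interval (the rates, buffer size, and threshold below are invented):

```python
from scipy import stats

def p_overflow(arrival_rate_pps, interval_s, free_buffer_pkts):
    """Probability that arrivals in the next interval exceed free buffer."""
    lam = arrival_rate_pps * interval_s
    return stats.poisson.sf(free_buffer_pkts, lam)   # P(N > k)

# signal rate reduction when the congestion probability crosses a preset level
if p_overflow(arrival_rate_pps=900, interval_s=0.1, free_buffer_pkts=100) > 0.05:
    print("transmit rate-reduction signal to all active ground stations")
```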
Estimating Flow-Duration and Low-Flow Frequency Statistics for Unregulated Streams in Oregon
Risley, John; Stonewall, Adam J.; Haluska, Tana
2008-01-01
Flow statistical datasets, basin-characteristic datasets, and regression equations were developed to provide decision makers with surface-water information needed for activities such as water-quality regulation, water-rights adjudication, biological habitat assessment, infrastructure design, and water-supply planning and management. The flow statistics, which included annual and monthly period of record flow durations (5th, 10th, 25th, 50th, and 95th percent exceedances) and annual and monthly 7-day, 10-year (7Q10) and 7-day, 2-year (7Q2) low flows, were computed at 466 streamflow-gaging stations at sites with unregulated flow conditions throughout Oregon and adjacent areas of neighboring States. Regression equations, created from the flow statistics and basin characteristics of the stations, can be used to estimate flow statistics at ungaged stream sites in Oregon. The study area was divided into 10 regression modeling regions based on ecological, topographic, geologic, hydrologic, and climatic criteria. In total, 910 annual and monthly regression equations were created to predict the 7 flow statistics in the 10 regions. Equations to predict the five flow-duration exceedance percentages and the two low-flow frequency statistics were created with Ordinary Least Squares and Generalized Least Squares regression, respectively. The standard errors of estimate of the equations created to predict the 5th and 95th percent exceedances had medians of 42.4 and 64.4 percent, respectively. The standard errors of prediction of the equations created to predict the 7Q2 and 7Q10 low-flow statistics had medians of 51.7 and 61.2 percent, respectively. Standard errors for regression equations for sites in western Oregon were smaller than those in eastern Oregon partly because of a greater density of available streamflow-gaging stations in western Oregon than eastern Oregon. High-flow regression equations (such as the 5th and 10th percent exceedances) also generally were more accurate than the low-flow regression equations (such as the 95th percent exceedance and 7Q10 low-flow statistic). The regression equations predict unregulated flow conditions in Oregon. Flow estimates need to be adjusted if they are used at ungaged sites that are regulated by reservoirs or affected by water-supply and agricultural withdrawals if actual flow conditions are of interest. The regression equations are installed in the USGS StreamStats Web-based tool (http://water.usgs.gov/osw/streamstats/index.html, accessed July 16, 2008). StreamStats provides users with a set of annual and monthly flow-duration and low-flow frequency estimates for ungaged sites in Oregon in addition to the basin characteristics for the sites. Prediction intervals at the 90-percent confidence level also are automatically computed.
NASA Astrophysics Data System (ADS)
Brenner, Simon; Coxon, Gemma; Howden, Nicholas J. K.; Freer, Jim; Hartmann, Andreas
2018-02-01
Chalk aquifers are an important source of drinking water in the UK. Due to their properties, they are particularly vulnerable to groundwater-related hazards like floods and droughts. Understanding and predicting groundwater levels is therefore important for effective and safe water management. Chalk is known for its high porosity and, due to its solubility, is exposed to karstification and strong subsurface heterogeneity. To cope with the karstic heterogeneity and limited data availability, specialised modelling approaches are required that balance model complexity and data availability. In this study, we present a novel approach to evaluate simulated groundwater level frequencies derived from a semi-distributed karst model that represents subsurface heterogeneity by distribution functions. Simulated groundwater storages are transferred into groundwater levels using evidence from different observation wells. Using a percentile approach, we can assess the number of days exceeding or falling below selected groundwater level percentiles. Firstly, we evaluate the performance of the model when simulating groundwater level time series using a split-sample test and parameter identifiability analysis. Secondly, we apply a split-sample test to the simulated groundwater level percentiles to explore the performance in predicting groundwater level exceedances. We show that the model provides robust simulations of discharge and groundwater levels at three observation wells at a test site in a chalk-dominated catchment in south-western England. The second split-sample test also indicates that the percentile approach is able to reliably predict groundwater level exceedances across all considered timescales up to their 75th percentile. However, when looking at the 90th percentile, it only provides acceptable predictions for long time periods and it fails when the 95th percentile of groundwater exceedance levels is considered. By modifying the historic forcings of our model according to expected future climate changes, we create simple climate scenarios and show that the projected climate changes may lead to generally lower groundwater levels and a reduction of exceedances of high groundwater level percentiles.
Nowell, Lisa H.; Crawford, Charles G.; Gilliom, Robert J.; Nakagaki, Naomi; Stone, Wesley W.; Thelin, Gail; Wolock, David M.
2009-01-01
Empirical regression models were developed for estimating concentrations of dieldrin, total chlordane, and total DDT in whole fish from U.S. streams. Models were based on pesticide concentrations measured in whole fish at 648 stream sites nationwide (1992-2001) as part of the U.S. Geological Survey's National Water Quality Assessment Program. Explanatory variables included fish lipid content, estimates (or surrogates) representing historical agricultural and urban sources, watershed characteristics, and geographic location. Models were developed using Tobit regression methods appropriate for data with censoring. Typically, the models explain approximately 50 to 70% of the variability in pesticide concentrations measured in whole fish. The models were used to predict pesticide concentrations in whole fish for streams nationwide using the U.S. Environmental Protection Agency's River Reach File 1 and to estimate the probability that whole-fish concentrations exceed benchmarks for protection of fish-eating wildlife. Predicted concentrations were highest for dieldrin in the Corn Belt, Texas, and scattered urban areas; for total chlordane in the Corn Belt, Texas, the Southeast, and urbanized Northeast; and for total DDT in the Southeast, Texas, California, and urban areas nationwide. The probability of exceeding wildlife benchmarks for dieldrin and chlordane was predicted to be low for most U.S. streams. The probability of exceeding wildlife benchmarks for total DDT is higher but varies depending on the fish taxon and on the benchmark used. Because the models in the present study are based on fish data collected during the 1990s and organochlorine pesticide residues in the environment continue to decline decades after their uses were discontinued, these models may overestimate present-day pesticide concentrations in fish.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, Anthony; Maclaurin, Galen; Roberts, Billy
Long-term variability of solar resource is an important factor in planning a utility-scale photovoltaic (PV) generation plant, and annual generation for a given location can vary significantly from year to year. Based on multiple years of solar irradiance data, an exceedance probability is the amount of energy that could potentially be produced by a power plant in any given year. An exceedance probability accounts for long-term variability and climate cycles (e.g., monsoons or changes in aerosols), which ultimately impact PV energy generation. Study results indicate that a significant bias could be associated with relying solely on typical meteorological year (TMY) resource data to capture long-term variability. While the TMY tends to under-predict annual generation overall compared to the P50, there appear to be pockets of over-prediction as well.
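Numerically, exceedance probabilities are just percentiles of the annual totals; a minimal sketch with made-up generation figures (note the convention: P90 is the output exceeded in 90% of years, i.e., the 10th percentile):

```python
import numpy as np

annual_gwh = np.array([52.1, 49.8, 55.3, 50.6, 47.9, 53.2, 51.4, 48.8,
                       54.0, 50.1])                  # invented annual totals
p50 = np.percentile(annual_gwh, 50)
p90 = np.percentile(annual_gwh, 10)                  # exceeded 9 years in 10
tmy_estimate = 49.5                                  # single TMY-based estimate
bias_vs_p50 = tmy_estimate - p50                     # negative: under-prediction
```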
Gallium arsenide solar cell efficiency: Problems and potential
NASA Technical Reports Server (NTRS)
Weizer, V. G.; Godlewski, M. P.
1985-01-01
Under ideal conditions the GaAs solar cell should be able to operate at an AMO efficiency exceeding 27 percent, whereas to date the best measured efficiencies barely exceed 19 percent. Of more concern is the fact that there has been no improvement in the past half decade, despite the expenditure of considerable effort. State-of-the-art GaAs efficiency is analyzed in an attempt to determine the feasibility of improving on the status quo. The possible gains to be had in the planar cell are examined. An attempt is also made to predict the efficiency levels that could be achieved with a grating geometry. Both the N-base and the P-base GaAs cells in their planar configurations have the potential to operate at AMO efficiencies between 23 and 24 percent. For the former the enabling technology is essentially in hand, while for the latter the problem of passivating the emitter surface remains to be solved. In the dot grating configuration, P-base efficiencies approaching 26 percent are possible with minor improvements in existing technology. N-base grating cell efficiencies comparable to those predicted for the P-base cell are achievable if the N surface can be sufficiently passivated.
Sun, Xiyang; Miao, Jiacheng; Wang, You; Luo, Zhiyuan; Li, Guang
2017-01-01
An estimate of the reliability of predictions is essential in applications of the electronic nose, but this issue has not received enough attention. An algorithm framework called conformal prediction is introduced in this work for discriminating different kinds of ginsengs with a home-made electronic nose instrument. A nonconformity measure based on k-nearest neighbors (KNN) is implemented separately as the underlying algorithm of conformal prediction. In offline mode, the conformal predictor achieves a classification rate of 84.44% based on 1NN and 80.63% based on 3NN, which is better than that of simple KNN. In addition, it provides an estimate of reliability for each prediction. In online mode, the validity of predictions is guaranteed, which means that the error rate of region predictions never exceeds the significance level set by a user. The potential of this framework for detecting borderline examples and outliers in the application of E-nose is also investigated. The result shows that conformal prediction is a promising framework for the application of electronic nose to make predictions with reliability and validity. PMID:28805721
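A simplified, self-contained sketch of a conformal predictor with the 1NN nonconformity measure (leave-one-out calibration on synthetic two-class data; the real study used e-nose features for several ginseng classes):

```python
import numpy as np

def nc_score(x, y, X, Y):
    """1NN nonconformity: distance to the nearest same-label example divided
    by distance to the nearest other-label example (larger = stranger)."""
    d = np.linalg.norm(X - x, axis=1)
    return d[Y == y].min() / d[Y != y].min()

def conformal_region(x_new, X, Y, eps=0.05):
    """Return every label whose conformal p-value exceeds eps; validity means
    the true label is excluded with frequency at most eps."""
    region = []
    for y in np.unique(Y):
        a_new = nc_score(x_new, y, X, Y)
        a = np.array([nc_score(X[i], Y[i], np.delete(X, i, 0), np.delete(Y, i))
                      for i in range(len(Y))])
        p = (np.sum(a >= a_new) + 1) / (len(Y) + 1)
        if p > eps:
            region.append(y)
    return region          # singleton = confident; multiple/empty = uncertain

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(2, 1, (30, 4))])
Y = np.array([0] * 30 + [1] * 30)
print(conformal_region(X[0] + 0.1, X, Y))
```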
Use of noise attenuation modeling in managing missile motor detonation activities.
McFarland, Michael J; Watkins, Jeffrey W; Kordich, Micheal M; Pollet, Dean A; Palmer, Glenn R
2004-03-01
The Sound Intensity Prediction System (SIPS) and Blast Operation Overpressure Model (BOOM) are semiempirical sound models that are employed by the Utah Test and Training Range (UTTR) to predict whether noise levels from the detonation of large missile motors will exceed regulatory thresholds. Field validation of SIPS confirmed that the model was effective in limiting the number of detonations of large missile motors that could potentially result in a regulatory noise exceedance. Although the SIPS accurately predicted the impact of weather on detonation noise propagation, regulators have required that the more conservative BOOM model be employed in conjunction with SIPS in evaluating peak noise levels in populated areas. By simultaneously considering the output of both models, in 2001, UTTR detonated 104 missile motors having net explosive weights (NEW) that ranged between 14,960 and 38,938 lb without a recorded public noise complaint. Based on the encouraging results, the U.S. Department of Defense is considering expanding the application of these noise models to support the detonation of missile motors having a NEW of 81,000 lb. Recent modeling results suggest that, under appropriate weather conditions, missile motors containing up to 96,000 lb NEW can be detonated at the UTTR without exceeding the regulatory noise limit of 134 decibels (dB).
Statistical Modeling of Occupational Exposure to Polycyclic Aromatic Hydrocarbons Using OSHA Data.
Lee, Derrick G; Lavoué, Jérôme; Spinelli, John J; Burstyn, Igor
2015-01-01
Polycyclic aromatic hydrocarbons (PAHs) are a group of pollutants with multiple variants classified as carcinogenic. The Occupational Safety and Health Administration (OSHA) provided access to two PAH exposure databanks of United States workplace compliance testing data collected between 1979 and 2010. Mixed-effects logistic models were used to predict the exceedance fraction (EF), i.e., the probability of exceeding OSHA's Permissible Exposure Limit (PEL = 0.2 mg/m³) for PAHs based on industry and occupation. Measurements of coal tar pitch volatiles were used as a surrogate for PAHs. Time, databank, occupation, and industry were included as fixed effects while an identifier for the compliance inspection number was included as a random effect. Analyses involved 2,509 full-shift personal measurements. Results showed that the majority of industries had an estimated EF < 0.5, although several industries, including Standardized Industry Classification codes 1623 (Water, Sewer, Pipeline, and Communication and Powerline Construction), 1711 (Plumbing, Heating, and Air-Conditioning), 2824 (Manmade Organic Fibres), 3496 (Misc. Fabricated Wire Products), and 5812 (Eating Places), and Major groups 13 (Oil and Gas Extraction) and 30 (Rubber and Miscellaneous Plastic Products), were estimated to have more than an 80% likelihood of exceeding the PEL. There was an inverse temporal trend of exceeding the PEL, with lower risk in the most recent years, albeit not statistically significant. Similar results were shown when incorporating occupation, but they varied by occupation: for most industries the predicted EF at the administrative level (e.g., managers) was < 0.5, while at the minimally skilled/laborer level the estimated EF increased substantially. These statistical models allow the prediction of PAH exposure risk through individual occupational histories and will be used to create a job-exposure matrix for use in a population-based case-control study exploring PAH exposure and breast cancer risk.
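A stripped-down sketch of the exceedance-fraction idea using a plain logit (the study's mixed-effects structure, with a random inspection effect, is omitted; all data and coefficients are simulated):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
year = rng.integers(1979, 2011, 500).astype(float)
industry_oil = rng.integers(0, 2, 500).astype(float)     # dummy for one industry
logit = -1.0 - 0.03 * (year - 1979) + 1.5 * industry_oil
exceeds_pel = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([year - 1979, industry_oil]))
fit = sm.Logit(exceeds_pel, X).fit(disp=0)

# fitted probability of exceeding the PEL = estimated EF for a stratum
ef_2005_oil = fit.predict(np.array([[1.0, 2005 - 1979, 1.0]]))
print(ef_2005_oil)
```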
A Bayesian predictive two-stage design for phase II clinical trials.
Sambucini, Valeria
2008-04-15
In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined by specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value and assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of getting a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The performance of the design is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied when all the design parameters vary.
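The posterior criterion has a closed form under a beta prior; a minimal sketch (prior and example numbers are illustrative):

```python
from scipy import stats

def post_prob_exceeds(x, n, p0, a=1.0, b=1.0):
    """P(p > p0 | data) with a Beta(a, b) prior on the response rate p and
    x responses in n patients; posterior is Beta(a + x, b + n - x)."""
    return stats.beta.sf(p0, a + x, b + n - x)

# e.g. 9 responses in 20 patients against a target rate of 0.30
print(post_prob_exceeds(9, 20, 0.30))    # roughly 0.93 with a uniform prior
```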
Exposure to cosmic radiation of British Airways flying crew on ultralonghaul routes.
Bagshaw, M; Irvine, D; Davies, D M
1996-07-01
British Airways has carried out radiation monitoring in Concorde for more than 20 years and has used a heuristic model based on data quoted by the National Aeronautics and Space Administration (NASA) to model radiation exposure in all longhaul fleets. From these data it has been calculated that no flight deck crew would exceed the control level of 6 mSv/y currently under consideration by regulatory authorities, which is three tenths of the occupational dose limit of 20 mSv/y recommended by the International Commission on Radiological Protection (ICRP). The model suggested that less than 4% of cabin crew based in Tokyo flying only between London and Japan could reach or exceed the 6 mSv/y level, based on a predicted effective dose rate of 7 microSv/h. To validate this calculation a sampling measurement programme was carried out on nine round trips flown by a Boeing 747-400 between London and Tokyo. The radiation field was measured with dosimeters used for routine personal monitoring (thermoluminescence dosimeters (TLDs) and polyallyldiglycol carbonate neutron dosimeters). The limitations of the methodology are acknowledged, but the results indicate that the effective dose rate was 6 microSv/h, which is consistent with the predicted effective dose rate of 7 microSv/h. This result, which is in accordance with other reported studies, indicates that it is unlikely that any of the cabin crew based in Tokyo exceeded the 6 mSv/y level. In accordance with "as low as reasonably achievable" principles British Airways will continue to monitor flying crew routes and hours flown to ensure compliance.
Predicting Marital and Career Success among Dual-worker Couples.
ERIC Educational Resources Information Center
Journal of Marriage and the Family, 1982
1982-01-01
Reviews research both supportive and skeptical of theories (based upon status competition processes, status incompatibility, complementary needs, and threat to gender identity) which posit that stress is created in marriages where the wife's occupational achievements exceed the husband's. Posits a theory explaining which couples will succeed in…
Insights into earthquake hazard map performance from shaking history simulations
NASA Astrophysics Data System (ADS)
Stein, S.; Vanneste, K.; Camelbeeck, T.; Vleminckx, B.
2017-12-01
Why recent large earthquakes caused shaking stronger than predicted by earthquake hazard maps is under debate. This issue has two parts. Verification involves how well maps implement probabilistic seismic hazard analysis (PSHA) ("have we built the map right?"). Validation asks how well maps forecast shaking ("have we built the right map?"). We explore how well a map can ideally perform by simulating an area's shaking history and comparing "observed" shaking to that predicted by a map generated for the same parameters. The simulations yield shaking distributions whose mean is consistent with the map, but individual shaking histories show large scatter. Infrequent large earthquakes cause shaking much stronger than mapped, as observed. Hence, PSHA seems internally consistent and can be regarded as verified. Validation is harder because an earthquake history can yield shaking higher or lower than that predicted while being consistent with the hazard map. The scatter decreases for longer observation times because the largest earthquakes and resulting shaking are increasingly likely to have occurred. For the same reason, scatter is much less for the more active plate boundary than for a continental interior. For a continental interior, where the mapped hazard is low, even an M4 event produces exceedances at some sites. Larger earthquakes produce exceedances at more sites. Thus many exceedances result from small earthquakes, but infrequent large ones may cause very large exceedances. However, for a plate boundary, an M6 event produces exceedance at only a few sites, and an M7 produces them in a larger, but still relatively small, portion of the study area. As reality gives only one history, and a real map involves assumptions about more complicated source geometries and occurrence rates, which are unlikely to be exactly correct and thus will contribute additional scatter, it is hard to assess whether misfit between actual shaking and a map — notably higher-than-mapped shaking — arises by chance or reflects biases in the map. Due to this problem, there are limits to how well we can expect hazard maps to predict future shaking, as well as to our ability to test the performance of a hazard map based on available observations.
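A toy version of the simulation experiment (Poisson occurrence, lognormal per-event shaking as a stand-in for magnitude plus attenuation; all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(5)
rate_per_yr, years, n_histories = 0.2, 50, 2000

def peak_shaking(n_events):
    if n_events == 0:
        return 0.0
    return rng.lognormal(mean=-1.0, sigma=1.0, size=n_events).max()

maxima = np.array([peak_shaking(rng.poisson(rate_per_yr * years))
                   for _ in range(n_histories)])
mapped = np.quantile(maxima, 0.90)        # the "10% in 50 years" map value
print((maxima > mapped).mean())           # ~0.10: the map verifies on average
print(maxima.max() / mapped)              # ...but single histories scatter widely
```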
Lai, Wesley; Buttineau, Mackenzie; Harvey, Jennifer K; Pucci, Rebecca A; Wong, Anna P M; Dell'Erario, Linda; Bosnyak, Stephanie; Reid, Shannon; Salbach, Nancy M
2017-10-01
In Ontario, Canada, patients admitted to inpatient rehabilitation hospitals post-stroke are classified into rehabilitation patient groups based on age and functional level. Clinical practice guidelines, called quality-based procedures, recommend a target length of stay (LOS) for each group. The study objective was to evaluate the extent to which patients post-stroke at an inpatient rehabilitation hospital are meeting LOS targets and to identify patient characteristics that predict exceeding target LOS. A quantitative, longitudinal study from an inpatient rehabilitation hospital was conducted. Participants included adult patients (≥18 years) with stroke, admitted to an inpatient rehabilitation hospital between 2014 and 2015. The percentage of patients exceeding the recommended target LOS was determined. Logistic regression was performed to identify clinical and psychosocial patient characteristics associated with exceeding target LOS after adjusting for stroke severity. Of 165 patients, 38.8% exceeded their target LOS. Presence of ataxia, recurrent stroke, living alone, absence of a caregiver at admission, and acquiring a caregiver during hospital LOS was each associated with significantly higher odds of exceeding target LOS in comparison to patients without these characteristics after adjusting for stroke severity (p < 0.05). Findings suggest that social and stroke-specific factors may be helpful to adjust LOS expectations and promote efficient resource allocation. This exploratory study was limited to findings from one inpatient rehabilitation hospital. Cross-validation of results using data-sets from multiple rehabilitation hospitals across Ontario is recommended.
System and Method for Providing Model-Based Alerting of Spatial Disorientation to a Pilot
NASA Technical Reports Server (NTRS)
Johnson, Steve (Inventor); Conner, Kevin J (Inventor); Mathan, Santosh (Inventor)
2015-01-01
A system and method monitor aircraft state parameters (for example, aircraft movement and flight parameters), apply those inputs to a spatial disorientation model, and predict when a pilot may become spatially disoriented. Once the system predicts a potentially disoriented pilot, the sensitivity for alerting the pilot to conditions exceeding a threshold can be increased, allowing an earlier alert to mitigate the possibility of an incorrect control input.
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created by using meteorological data based on hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and from the Rapid Update Cycle (RUC) as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
Probabilistic rainfall warning system with an interactive user interface
NASA Astrophysics Data System (ADS)
Koistinen, Jarmo; Hohti, Harri; Kauhanen, Janne; Kilpinen, Juha; Kurki, Vesa; Lauri, Tuomo; Nurmi, Pertti; Rossi, Pekka; Jokelainen, Miikka; Heinonen, Mari; Fred, Tommi; Moisseev, Dmitri; Mäkelä, Antti
2013-04-01
A real-time 24/7 automatic alert system is in operational use at the Finnish Meteorological Institute (FMI). It consists of gridded forecasts of the exceedance probabilities of rainfall class thresholds in the continuous lead time range of 1 hour to 5 days. Nowcasting up to six hours applies ensemble member extrapolations of weather radar measurements. With 2.8 GHz processors using 8 threads it takes about 20 seconds to generate 51 radar based ensemble members in a grid of 760 x 1226 points. Nowcasting also exploits lightning density and satellite based pseudo rainfall estimates. The latter ones utilize the convective rain rate (CRR) estimate from Meteosat Second Generation. The extrapolation technique applies atmospheric motion vectors (AMV) originally developed for upper wind estimation with satellite images. Exceedance probabilities of four rainfall accumulation categories are computed for the future 1 h and 6 h periods and they are updated every 15 minutes. For longer forecasts, exceedance probabilities are calculated for future 6 and 24 h periods during the next 4 days. From approximately 1 hour to 2 days, the Poor man's Ensemble Prediction System (PEPS) is used, applying e.g. the high-resolution short-range Numerical Weather Prediction models HIRLAM and AROME. The longest forecasts apply EPS data from the European Centre for Medium Range Weather Forecasts (ECMWF). The blending of the ensemble sets from the various forecast sources is performed by mixing accumulations with equal exceedance probabilities. The blending system contains a real-time adaptive estimator of the predictability of radar based extrapolations. The uncompressed output data are written to file for each member, with a total size of 10 GB. Ensemble data from other sources (satellite, lightning, NWP) are converted to the same geometry as the radar data and blended as explained above. A verification system utilizing telemetering rain gauges has been established. Alert dissemination, e.g. for citizens and professional end users, uses SMS messages and, in the near future, smartphone maps. The present interactive user interface facilitates free selection of alert sites and two warning thresholds (any rain, heavy rain) at any location in Finland. The pilot service was tested by 1000-3000 users during summers 2010 and 2012. As an example of dedicated end-user services, gridded exceedance scenarios (of probabilities 5%, 50% and 90%) of hourly rainfall accumulations for the next 3 hours have been utilized as online input data for the influent model at the Greater Helsinki Wastewater Treatment Plant.
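At its core the gridded product is an ensemble fraction; a minimal sketch (the member count matches the text, everything else is invented):

```python
import numpy as np

rng = np.random.default_rng(6)
# 51 ensemble members of forecast 6-h rainfall accumulation per grid cell
members = rng.gamma(shape=1.2, scale=3.0, size=(51, 5000))

thresholds_mm = [1.0, 5.0, 10.0, 20.0]      # rainfall classes (hypothetical)
p_exceed = {t: (members > t).mean(axis=0) for t in thresholds_mm}

# alert where P(accumulation > 20 mm) crosses a chosen warning level
alert_cells = np.where(p_exceed[20.0] > 0.5)[0]
```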
Risk Reduction and Resource Pooling on a Cooperation Task
ERIC Educational Resources Information Center
Pietras, Cynthia J.; Cherek, Don R.; Lane, Scott D.; Tcheremissine, Oleg
2006-01-01
Two experiments investigated choice in adult humans on a simulated cooperation task to evaluate a risk-reduction account of sharing based on the energy-budget rule. The energy-budget rule is an optimal foraging model that predicts risk-averse choices when net energy gains exceed energy requirements (positive energy budget) and risk-prone choices…
USDA-ARS?s Scientific Manuscript database
Accurate estimates of daily crop evapotranspiration (ET) are needed for efficient irrigation management, especially in arid and semi-arid irrigated regions where crop water demand exceeds rainfall. The impact of inaccurate ET estimates can be tremendous in both irrigation cost and the increased dema...
McDonnell, Todd C; Sullivan, Timothy J; Hessburg, Paul F; Reynolds, Keith M; Povak, Nicholas A; Cosby, Bernard J; Jackson, William; Salter, R Brion
2014-12-15
Atmospherically deposited sulfur (S) causes stream water acidification throughout the eastern U.S. Southern Appalachian Mountain (SAM) region. Acidification has been linked with reduced fitness and richness of aquatic species and changes to benthic communities. Maintaining acid-base chemistry that supports native biota depends largely on balancing acidic deposition with the natural resupply of base cations. Stream water acid neutralizing capacity (ANC) is maintained by base cations that mostly originate from weathering of surrounding lithologies. When ambient atmospheric S deposition exceeds the critical load (CL) an ecosystem can tolerate, stream water chemistry may become lethal to biota. This work links statistical predictions of ANC and base cation weathering for streams and watersheds of the SAM region with a steady-state model to estimate CLs and exceedances. Results showed that 20.1% of the total length of study region streams displayed ANC < 100 μeq·L⁻¹, a level at which effects to biota may be anticipated; most were 4th or lower order streams. Nearly one-third of the stream length within the study region exhibited CLs of S deposition < 50 meq·m⁻²·yr⁻¹, which is less than the regional average S deposition of 60 meq·m⁻²·yr⁻¹. Owing to their geologic substrates, relatively high elevation, and cool and moist forested conditions, the percentage of stream length in exceedance was highest for mountain wilderness areas and in national parks, and lowest for privately owned valley bottom land. Exceedance results were summarized by 12-digit hydrologic unit code (subwatershed) for use in developing management goals and policy objectives, and for long-term monitoring.
New theory of transport due to like-particle collisions
NASA Technical Reports Server (NTRS)
Oneil, T. M.
1985-01-01
Cross-magnetic-field transport due to like-particle collisions is discussed for the parameter regime λ_D ≫ r_L, where λ_D is the Debye length and r_L is the characteristic Larmor radius of the colliding particles. A new theory based on collisionally produced E x B drifts predicts a particle flux which exceeds the flux predicted previously by the factor (λ_D/r_L)² ≫ 1.
Filter replacement lifetime prediction
Hamann, Hendrik F.; Klein, Levente I.; Manzer, Dennis G.; Marianno, Fernando J.
2017-10-25
Methods and systems for predicting a filter lifetime include building a filter effectiveness history based on contaminant sensor information associated with a filter; determining a rate of filter consumption with a processor based on the filter effectiveness history; and determining a remaining filter lifetime based on the determined rate of filter consumption. Methods and systems for increasing filter economy include measuring contaminants in an internal and an external environment; determining a cost of a corrosion rate increase if unfiltered external air intake is increased for cooling; determining a cost of increased air pressure to filter external air; and if the cost of filtering external air exceeds the cost of the corrosion rate increase, increasing an intake of unfiltered external air.
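A minimal sketch of the stated lifetime logic (sensor values and end-of-life threshold invented): estimate the consumption rate from the effectiveness history and extrapolate to end of life.

```python
import numpy as np

days = np.array([0, 30, 60, 90, 120])
effectiveness = np.array([1.00, 0.93, 0.85, 0.79, 0.71])   # from sensor history

rate_per_day = -np.polyfit(days, effectiveness, 1)[0]      # consumption rate
eol_threshold = 0.40                                       # end-of-life level
remaining_days = (effectiveness[-1] - eol_threshold) / rate_per_day
print(f"predicted remaining lifetime: {remaining_days:.0f} days")
```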
Advancement of proprotor technology. Task 2: Wind-tunnel test results
NASA Technical Reports Server (NTRS)
1971-01-01
An advanced-design 25-foot-diameter flightworthy proprotor was tested in the NASA-Ames Large-Scale Wind Tunnel. These tests have verified and confirmed the theory and design solutions developed as part of the Army Composite Aircraft Program. This report presents the test results and compares them with theoretical predictions. During performance tests, the results met or exceeded predictions. Hover thrust 15 percent greater than the predicted maximum was measured. In airplane mode, propulsive efficiencies (some of which exceeded 90 percent) agreed with theory.
A theoretical framework for the episodic-urban air quality management plan ( e-UAQMP)
NASA Astrophysics Data System (ADS)
Gokhale, Sharad; Khare, Mukesh
The present research proposes a local urban air quality management plan that combines two different modelling approaches (a hybrid model) and possesses an improved predictive ability, including the 'probabilistic exceedances over norms' and their 'frequency of occurrences'; it is therefore termed herein the episodic-urban air quality management plan (e-UAQMP). The e-UAQMP deals with the consequences of 'extreme' concentrations of pollutants, mainly occurring at urban 'hotspots', e.g., traffic junctions, intersections and signalized roadways, which are also influenced by the complexities of traffic-generated 'wake' effects. The e-UAQMP (based on a probabilistic approach) also acts as an efficient preventive measure by predicting the 'probability of exceedances' so as to prepare successful policy responses in relation to the protection of the urban environment as well as disseminating information to its sensitive 'receptors'. The e-UAQMP may be tailored to the requirements of the local area for policy implementation programmes. The importance of such a policy-making framework in the context of current air pollution 'episodes' in urban environments is discussed. The hybrid model, which is based on both deterministic and stochastic approaches and predicts the 'average' as well as 'extreme' concentration distributions of air pollutants together in the form of probabilities, has been used at two air quality control regions (AQCRs) in Delhi, India, in formulating and executing the e-UAQMP: first, the income tax office (ITO), one of the busiest signalized traffic intersections, and second, the Sirifort, one of the busiest signalized roadways.
Synchrophasor-Assisted Prediction of Stability/Instability of a Power System
NASA Astrophysics Data System (ADS)
Saha Roy, Biman Kumar; Sinha, Avinash Kumar; Pradhan, Ashok Kumar
2013-05-01
This paper presents a technique for real-time prediction of stability/instability of a power system based on synchrophasor measurements obtained from phasor measurement units (PMUs) at generator buses. For stability assessment, the technique makes use of system severity indices developed from the bus voltage magnitudes obtained from PMUs and the generator electrical power. Generator power is computed from PMU information, such as voltage and current phasors, together with system information. System instability is predicted when the indices exceed a threshold value. A case study is carried out on the New England 10-generator, 39-bus system to validate the performance of the technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viner, Brian J.; Jannik, Tim; Stone, Daniel
Firefighters responding to wildland fires where surface litter and vegetation contain radiological contamination will receive a radiological dose by inhaling resuspended radioactive material in the smoke. This may increase their lifetime risk of contracting certain types of cancer. Using published data, we modelled hypothetical radionuclide emissions, dispersion and dose for 70th and 97th percentile environmental conditions and for average and high fuel loads at the Savannah River Site. We predicted downwind concentration and potential dose to firefighters for radionuclides of interest (137Cs, 238Pu, 90Sr and 210Po). Predicted concentrations exceeded dose guidelines in the base-case scenario emissions of 1.0 × 10⁷ Bq ha⁻¹ for 238Pu at 70th percentile environmental conditions and average fuel load levels for both 4- and 14-h shifts. Under 97th percentile environmental conditions and high fuel loads, dose guidelines were exceeded for several reported cases for 90Sr, 238Pu and 210Po. The potential for exceeding dose guidelines was mitigated by including plume rise (>2 m s⁻¹) or moving a small distance from the fire owing to large concentration gradients near the edge of the fire. As a result, our approach can quickly estimate potential dose from airborne radionuclides in wildland fire and assist decision-making to reduce firefighter exposure.
[Application of artificial neural networks on the prediction of surface ozone concentrations].
Shen, Lu-Lu; Wang, Yu-Xuan; Duan, Lei
2011-08-01
Ozone is an important secondary air pollutant in the lower atmosphere. In order to predict the hourly maximum ozone one day in advance based on the meteorological variables for the Wanqingsha site in Guangzhou, Guangdong province, a neural network model (Multi-Layer Perceptron) and a multiple linear regression model were used and compared. Model inputs are meteorological parameters (wind speed, wind direction, air temperature, relative humidity, barometric pressure and solar radiation) of the next day and the hourly maximum ozone concentration of the previous day. The OBS (optimal brain surgeon) method was adopted to prune the neural network, to reduce its complexity and to improve its generalization ability. We find that the pruned neural network has the capacity to predict the peak ozone, with an agreement index of 92.3%, a root mean square error of 0.0428 mg/m³, an R-square of 0.737, and a success index of threshold exceedance of 77.0% (threshold O3 mixing ratio of 0.20 mg/m³). When the neural classifier was added to the neural network model, the success index of threshold exceedance increased to 83.6%. Through comparison of the performance indices between the multiple linear regression model and the neural network model, we conclude that the neural network is a better choice for predicting peak ozone from meteorological forecasts, and it may be applied to practical prediction of ozone concentrations.
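A hedged scikit-learn sketch of the setup (synthetic data; sklearn's weight-decay regularization stands in for OBS pruning, which scikit-learn does not implement):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 7))      # wind, temperature, humidity, ..., prev-day O3
y = 0.10 + 0.04 * X[:, 1] - 0.02 * X[:, 0] + 0.03 * X[:, 6] \
    + rng.normal(scale=0.02, size=400)                      # peak O3, mg/m3

mlp = MLPRegressor(hidden_layer_sizes=(8,), alpha=1e-3, max_iter=3000,
                   random_state=0).fit(X[:300], y[:300])
y_hat = mlp.predict(X[300:])

obs_ex, pred_ex = y[300:] > 0.20, y_hat > 0.20              # 0.20 mg/m3 threshold
success = (pred_ex & obs_ex).sum() / max(obs_ex.sum(), 1)   # exceedances caught
```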
Near Real-Time Optimal Prediction of Adverse Events in Aviation Data
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander; Das, Santanu
2010-01-01
The prediction of anomalies or adverse events is a challenging task, and there are a variety of methods which can be used to address the problem. In this paper, we demonstrate how to recast the anomaly prediction problem into a form whose solution is accessible as a level-crossing prediction problem. The level-crossing prediction problem has an elegant, optimal, yet untested solution under certain technical constraints, and only when the appropriate modeling assumptions are made. As such, we will thoroughly investigate the resilience of these modeling assumptions, and show how they affect final performance. Finally, the predictive capability of this method will be assessed by quantitative means, using both validation and test data containing anomalies or adverse events from real aviation data sets that have previously been identified as operationally significant by domain experts. It will be shown that the formulation proposed yields a lower false alarm rate on average than competing methods based on similarly advanced concepts, and a higher correct detection rate than a standard method based upon exceedances that is commonly used for prediction.
Electronic clinical predictive thermometer using logarithm for temperature prediction
NASA Technical Reports Server (NTRS)
Cambridge, Vivien J. (Inventor); Koger, Thomas L. (Inventor); Nail, William L. (Inventor); Diaz, Patrick (Inventor)
1998-01-01
A thermometer that rapidly predicts body temperature based on the temperature signals received from a temperature sensing probe when it comes into contact with the body. The logarithms of the differences between the temperature signals in a selected time frame are determined. A line is fit through the logarithms, and the slope of the line is used as a system time constant in predicting the final temperature of the body. The time constant, in conjunction with predetermined additional constants, is used to compute the predicted temperature. Data quality in the time frame is monitored and, if unacceptable, a different time frame of temperature signals is selected for use in prediction. The processor switches to a monitor mode if data quality over a limited number of time frames is unacceptable. The start time on which the measurement time frame for prediction is based is determined by summing the second derivatives of the temperature signals over time frames. When the sum of second derivatives in a particular time frame exceeds a threshold, the start time is established.
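The logarithmic prediction idea can be sketched as follows, assuming a pure exponential approach to equilibrium; the function, sampling interval, and fallback behavior are illustrative rather than taken from the patent.

```python
import numpy as np

def predict_final_temperature(temps, dt=1.0):
    """Predict the asymptotic body temperature from early probe readings.

    Assumes the probe approaches equilibrium exponentially,
    T(t) = T_final - A*exp(-t/tau), so successive differences
    d_k = T[k+1] - T[k] decay geometrically with ratio r = exp(-dt/tau).
    A line fitted through log(d_k) recovers r, and the geometric tail
    d_last * r / (1 - r) gives the rise still to come.
    """
    temps = np.asarray(temps, dtype=float)
    diffs = np.diff(temps)
    if np.any(diffs <= 0):          # crude data-quality check, as in the patent
        return None                 # caller falls back to monitor mode
    k = np.arange(diffs.size)
    slope, _ = np.polyfit(k, np.log(diffs), 1)  # slope = log r = -dt/tau
    r = np.exp(slope)
    remaining = diffs[-1] * r / (1.0 - r)       # geometric-series tail
    return temps[-1] + remaining

# Synthetic readings: T_final = 37.0, tau = 8 s, sampled every 1 s
t = np.arange(0, 10, 1.0)
readings = 37.0 - 5.0 * np.exp(-t / 8.0)
print(predict_final_temperature(readings))  # ~37.0
```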
Group-regularized individual prediction: theory and application to pain.
Lindquist, Martin A; Krishnan, Anjali; López-Solà, Marina; Jepma, Marieke; Woo, Choong-Wan; Koban, Leonie; Roy, Mathieu; Atlas, Lauren Y; Schmidt, Liane; Chang, Luke J; Reynolds Losin, Elizabeth A; Eisenbarth, Hedwig; Ashar, Yoni K; Delk, Elizabeth; Wager, Tor D
2017-01-15
Multivariate pattern analysis (MVPA) has become an important tool for identifying brain representations of psychological processes and clinical outcomes using fMRI and related methods. Such methods can be used to predict or 'decode' psychological states in individual subjects. Single-subject MVPA approaches, however, are limited by the amount and quality of individual-subject data. In spite of higher spatial resolution, predictive accuracy from single-subject data often does not exceed what can be accomplished using coarser, group-level maps, because single-subject patterns are trained on limited amounts of often-noisy data. Here, we present a method that combines population-level priors, in the form of biomarker patterns developed on prior samples, with single-subject MVPA maps to improve single-subject prediction. Theoretical results and simulations motivate a weighting based on the relative variances of biomarker-based prediction (based on population-level predictive maps from prior groups) and individual-subject, cross-validated prediction. Empirical results predicting pain using brain activity on a trial-by-trial basis (single-trial prediction) across 6 studies (N=180 participants) confirm the theoretical predictions. Regularization based on a population-level biomarker (in this case, the Neurologic Pain Signature, NPS) improved single-subject prediction accuracy compared with idiographic maps based on the individuals' data alone. The regularization scheme that we propose, which we term group-regularized individual prediction (GRIP), can be applied broadly to within-person MVPA-based prediction. We also show how GRIP can be used to evaluate data quality and provide benchmarks for the appropriateness of population-level maps like the NPS for a given individual or study. Copyright © 2015 Elsevier Inc. All rights reserved.
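The variance-based weighting can be illustrated with a toy precision-weighted blend; the exact estimator in the paper may differ, and the function name and inputs here are assumptions of the sketch.

```python
def grip_combine(pred_pop, var_pop, pred_ind, var_ind):
    """Precision-weighted blend of population-level and individual predictions.

    Weights are inversely proportional to each source's estimated error
    variance, so the noisier source contributes less to the final estimate.
    """
    w_pop = (1.0 / var_pop) / (1.0 / var_pop + 1.0 / var_ind)
    return w_pop * pred_pop + (1.0 - w_pop) * pred_ind

# A subject with noisy idiographic data leans toward the population biomarker:
print(grip_combine(pred_pop=4.2, var_pop=0.5, pred_ind=5.1, var_ind=2.0))
```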
How well can we test probabilistic seismic hazard maps?
NASA Astrophysics Data System (ADS)
Vanneste, Kris; Stein, Seth; Camelbeeck, Thierry; Vleminckx, Bart
2017-04-01
Recent large earthquakes that gave rise to shaking much stronger than shown in probabilistic seismic hazard (PSH) maps have stimulated discussion about how well these maps forecast future shaking. These discussions have brought home the fact that although the maps are designed to achieve certain goals, we know little about how well they actually perform. As for any other forecast, this question involves verification and validation. Verification involves assessing how well the algorithm used to produce hazard maps implements the conceptual PSH model ("have we built the model right?"). Validation asks how well the model forecasts the shaking that actually occurs ("have we built the right model?"). We explore the verification issue by simulating shaking histories for an area with assumed uniform distribution of earthquakes, Gutenberg-Richter magnitude-frequency relation, Poisson temporal occurrence model, and ground-motion prediction equation (GMPE). We compare the maximum simulated shaking at many sites over time with that predicted by a hazard map generated for the same set of parameters. The Poisson model predicts that the fraction of sites at which shaking will exceed that of the hazard map is p = 1 - exp(-t/T), where t is the duration of observations and T is the map's return period. Exceedance is typically associated with infrequent large earthquakes, as observed in real cases. The ensemble of simulated earthquake histories yields distributions of fractional exceedance with mean equal to the predicted value. Hence, the PSH algorithm appears to be internally consistent and can be regarded as verified for this set of simulations. However, simulated fractional exceedances show a large scatter about the mean value that decreases with increasing t/T, increasing observation time and increasing Gutenberg-Richter a-value (combining intrinsic activity rate and surface area), but is independent of GMPE uncertainty. This scatter is due to the variability of earthquake recurrence, and so decreases as the largest earthquakes occur in more simulations. Our results are important for evaluating the performance of a hazard map based on misfits in fractional exceedance, and for assessing whether such misfit arises by chance or reflects a bias in the map. More specifically, we determined for a broad range of Gutenberg-Richter a-values theoretical confidence intervals on allowed misfits in fractional exceedance and on the percentage of hazard-map bias that can thus be detected by comparison with observed shaking histories. Given that in the real world we only have one shaking history for an area, these results indicate that even if a hazard map does not fit the observations, it is very difficult to assess its veracity, especially for low-to-moderate-seismicity regions. Because our model is a simplified version of reality, any additional uncertainty or complexity will tend to widen these confidence intervals.
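The Poisson prediction is easy to check with a stripped-down simulation; the sketch below treats sites as independent Bernoulli draws, whereas the paper's simulations generate full earthquake catalogs (whose shared large events produce the larger, correlated scatter described above), so it only illustrates the mean behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

def fractional_exceedance(n_sites=1000, t=50.0, T=475.0, n_runs=200):
    """Simulate the fraction of sites whose maximum shaking exceeds the map.

    Under a Poisson occurrence model, shaking at a site exceeds the
    T-year map level during t years of observation with probability
    p = 1 - exp(-t/T); each simulated history draws that outcome per site.
    """
    p = 1.0 - np.exp(-t / T)
    exceed = rng.random((n_runs, n_sites)) < p
    return exceed.mean(axis=1)  # one fractional exceedance per history

f = fractional_exceedance()
print(f.mean(), f.std())  # mean ~ 1 - exp(-50/475) = 0.10, plus scatter
```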
Are EMS call volume predictions based on demand pattern analysis accurate?
Brown, Lawrence H; Lerner, E Brooke; Larmon, Baxter; LeGassick, Todd; Taigman, Michael
2007-01-01
Most EMS systems determine the number of crews they will deploy in their communities, and when those crews will be scheduled, based on anticipated call volumes. Many systems use historical data to calculate their anticipated call volumes, a method of prediction known as demand pattern analysis. The objective was to evaluate the accuracy of call volume predictions calculated using demand pattern analysis. Seven EMS systems provided 73 consecutive weeks of hourly call volume data. The first 20 weeks of data were used to calculate three common demand pattern analysis constructs for call volume prediction: average peak demand (AP), smoothed average peak demand (SAP), and 90th percentile rank (90%R). The 21st week served as a buffer. Actual call volumes in the last 52 weeks were then compared to the predicted call volumes using descriptive statistics. There were 61,152 hourly observations in the test period. All three constructs accurately predicted peaks and troughs in call volume but not exact call volume. Predictions were accurate (+/-1 call) 13% of the time using AP, 10% using SAP, and 19% using 90%R. Call volumes were overestimated 83% of the time using AP, 86% using SAP, and 74% using 90%R. When call volumes were overestimated, predictions exceeded actual call volume by a median (interquartile range) of 4 (2-6) calls for AP, 4 (2-6) for SAP, and 3 (2-5) for 90%R. Call volumes were underestimated 4% of the time using AP, 4% using SAP, and 7% using 90%R predictions. When call volumes were underestimated, call volumes exceeded predictions by a median (interquartile range; maximum underestimation) of 1 (1-2; 18) call for AP, 1 (1-2; 18) for SAP, and 2 (1-3; 20) for 90%R. Results did not vary between systems. Generally, demand pattern analysis estimated or overestimated call volume, making it a reasonable predictor for ambulance staffing patterns. However, it did underestimate call volume between 4% and 7% of the time. Communities need to determine whether these rates of over- and underestimation are acceptable given their resources and local priorities.
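Of the three constructs, the 90th percentile rank is fully specified by its name; a hedged sketch of it on synthetic data follows (the AP and SAP definitions are not reproduced here, and the data are fabricated).

```python
import numpy as np

def percentile_rank_prediction(hourly, q=90):
    """90th-percentile-rank construct: for each hour of the week, predict
    the q-th percentile of the call volumes observed in the base period.

    `hourly` is a (n_weeks, 168) array: one row per week, one column per
    hour-of-week (the study used a 20-week base period).
    """
    return np.ceil(np.percentile(hourly, q, axis=0))

# 20 weeks of synthetic Poisson call volumes with an hour-of-week pattern
rng = np.random.default_rng(1)
pattern = 3 + 2 * np.sin(np.linspace(0, 2 * np.pi, 168))
base = rng.poisson(pattern, size=(20, 168))
predicted = percentile_rank_prediction(base)
actual = rng.poisson(pattern, size=(52, 168))     # 52-week test period
within_one = np.mean(np.abs(actual - predicted) <= 1)
print(f"accurate (+/-1 call): {within_one:.0%}")
```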
Forecasting impact injuries of unrestrained occupants in railway vehicle passenger compartments.
Xie, Suchao; Zhou, Hui
2014-01-01
In order to predict the injury parameters of occupants corresponding to different experimental parameters and to determine impact injury indices conveniently and efficiently, a model forecasting occupant impact injury was established in this work. The work was based on finite experimental observation values obtained by numerical simulation. First, the various factors influencing the impact injuries caused by the interaction between unrestrained occupants and the compartment's internal structures were collated, and the most vulnerable regions of the occupant's body were analyzed. Then, the forecast model was set up based on a genetic algorithm-back propagation (GA-BP) hybrid algorithm, which unified the individual characteristics of the back propagation-artificial neural network (BP-ANN) model and the genetic algorithm (GA). The model was well suited to studies of occupant impact injuries and allowed multiple-parameter forecasts of occupant impact injuries for assumed values of the various influencing factors. Finally, the forecast results for three types of secondary collision were analyzed using forecasting accuracy evaluation methods. All of the results showed the ideal accuracy of the forecast model. When an occupant faced a table, the relative errors between the predicted and experimental values of the respective injury parameters were kept within ±6.0 percent and the average relative error (ARE) values did not exceed 3.0 percent. When an occupant faced a seat, the relative errors were kept within ±5.2 percent and the ARE values did not exceed 3.1 percent. When an occupant faced another occupant, the relative errors were kept within ±6.3 percent and the ARE values did not exceed 3.8 percent. The injury forecast model established in this article reduces the need for repeated experiments, improves the efficiency of designing the compartment's internal structure parameters, and provides a new way to assess the safety performance of the interior structural parameters of existing, and newly designed, railway vehicle compartments.
Cella, Laura; Liuzzi, Raffaele; Conson, Manuel; D'Avino, Vittoria; Salvatore, Marco; Pacelli, Roberto
2012-12-27
Hypothyroidism is a frequent late side effect of radiation therapy of the cervical region. The purpose of this work is to develop multivariate normal tissue complication probability (NTCP) models for radiation-induced hypothyroidism (RHT) and to compare them with existing NTCP models for RHT. Fifty-three patients treated with sequential chemo-radiotherapy for Hodgkin's lymphoma (HL) were retrospectively reviewed for RHT events. Clinical information along with thyroid gland dose distribution parameters were collected, and their correlation to RHT was analyzed by Spearman's rank correlation coefficient (Rs). A multivariate logistic regression method using resampling (bootstrapping) was applied to select model order and parameters for NTCP modeling. Model performance was evaluated through the area under the receiver operating characteristic curve (AUC). Models were tested against external published data on RHT and compared with other published NTCP models. When the thyroid volume exceeding X Gy was expressed as a percentage (Vx(%)), a two-variable NTCP model including V30(%) and gender proved to be the optimal predictive model for RHT (Rs = 0.615, p < 0.001; AUC = 0.87). Conversely, when the absolute thyroid volume exceeding X Gy (Vx(cc)) was analyzed, an NTCP model based on three variables, including V30(cc), thyroid gland volume and gender, was selected as the most predictive model (Rs = 0.630, p < 0.001; AUC = 0.85). The three-variable model performs better when tested on an external cohort characterized by large inter-individual variation in thyroid volumes (AUC = 0.914, 95% CI 0.760-0.984). A comparable performance was found between our model and that proposed in the literature based on thyroid gland mean dose and volume (p = 0.264). The absolute volume of thyroid gland exceeding 30 Gy, in combination with thyroid gland volume and gender, provides an NTCP model for RHT with improved prediction capability not only within our patient population but also in an external cohort.
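The selected model has the standard multivariate logistic NTCP form; the sketch below shows that form with placeholder coefficients, since the paper's fitted values are not reproduced here.

```python
import numpy as np

def ntcp_logistic(v30_cc, thyroid_volume_cc, is_female, beta):
    """Three-variable logistic NTCP model of the form selected for RHT:
    NTCP = 1 / (1 + exp(-(b0 + b1*V30cc + b2*volume + b3*gender))).

    `beta` = (b0, b1, b2, b3); the published fit should be substituted
    for these illustrative coefficients.
    """
    b0, b1, b2, b3 = beta
    z = b0 + b1 * v30_cc + b2 * thyroid_volume_cc + b3 * is_female
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder coefficients chosen only to show the shape of the model
print(ntcp_logistic(v30_cc=12.0, thyroid_volume_cc=18.0, is_female=1,
                    beta=(-1.5, 0.12, -0.07, 0.9)))
```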
Frans, Lonna M.
2000-01-01
Logistic regression was used to relate anthropogenic (man-made) and natural factors to the occurrence of elevated concentrations of nitrite plus nitrate as nitrogen in ground water in the Columbia Basin Ground Water Management Area, eastern Washington. Variables that were analyzed included well depth, depth of well casing, ground-water recharge rates, presence of canals, fertilizer application amounts, soils, surficial geology, and land-use types. The variables that best explain the occurrence of nitrate concentrations above 3 milligrams per liter in wells were the amount of fertilizer applied annually within a 2-kilometer radius of a well and the depth of the well casing; the variables that best explain the occurrence of nitrate above 10 milligrams per liter included the amount of fertilizer applied annually within a 3-kilometer radius of a well, the depth of the well casing, and the mean soil hydrologic group, which is a measure of soil infiltration rate. Based on the relations between these variables and elevated nitrate concentrations, models were developed using logistic regression that predict the probability that ground water will exceed a nitrate concentration of either 3 milligrams per liter or 10 milligrams per liter. Maps were produced that illustrate the predicted probability that ground-water nitrate concentrations will exceed 3 milligrams per liter or 10 milligrams per liter for wells cased to 78 feet below land surface (median casing depth) and the predicted depth to which wells would need to be cased in order to have an 80-percent probability of drawing water with a nitrate concentration below either 3 milligrams per liter or 10 milligrams per liter. Maps showing the predicted probability for the occurrence of elevated nitrate concentrations indicate that the irrigated agricultural regions are most at risk. The predicted depths to which wells need to be cased in order to have an 80-percent chance of obtaining low nitrate ground water exceed 600 feet in the irrigated agricultural regions, whereas wells in dryland agricultural areas generally need a casing in excess of 400 feet. The predicted depth to which wells need to be cased to have at least an 80-percent chance to draw water with a nitrate concentration less than 10 milligrams per liter generally did not exceed 800 feet, with a 200-foot casing depth typical of the majority of the area.
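A hedged sketch of the modeling step follows: logistic regression of an exceedance indicator on the report's selected explanatory variables, fitted to synthetic data (the coefficients, units, and data below are illustrative, not the report's).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic wells: probability that nitrate exceeds 10 mg/L as a function
# of fertilizer applied within 3 km, casing depth, and soil hydrologic group.
rng = np.random.default_rng(2)
n = 500
fertilizer = rng.gamma(2.0, 50.0, n)      # annual application nearby (synthetic units)
casing_depth = rng.uniform(20, 800, n)    # feet below land surface
soil_group = rng.integers(1, 5, n)        # proxy for infiltration rate
z = 0.01 * fertilizer - 0.005 * casing_depth - 0.4 * soil_group + 0.5
exceeds_10 = (rng.random(n) < 1 / (1 + np.exp(-z))).astype(int)

model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([fertilizer, casing_depth, soil_group]), exceeds_10)
p = model.predict_proba([[150.0, 78.0, 2]])[0, 1]  # 78 ft = median casing depth
print(f"P(nitrate > 10 mg/L) = {p:.2f}")
```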
Fu, Lei; Wang, Feng; Wu, Bin; Wu, Nian; Huang, Wei; Wang, Hanlin; Jin, Chuanhong; Zhuang, Lin; He, Jun; Fu, Lei; Liu, Yunqi
2017-08-01
As a member of the group IVB transition metal dichalcogenide (TMD) family, hafnium disulfide (HfS2) has recently been predicted to exhibit higher carrier mobility and higher tunneling current density than group VIB (Mo and W) TMDs. However, the synthesis of high-quality HfS2 crystals, sparsely reported, has greatly hindered the development of this new field. Here, a facile strategy for controlled synthesis of high-quality atomic-layered HfS2 crystals by van der Waals epitaxy is reported. Density functional theory calculations are applied to elucidate the systematic epitaxial growth process of the S-edge and Hf-edge. Impressively, the HfS2 back-gate field-effect transistors display a competitive mobility of 7.6 cm^2 V^-1 s^-1 and an ultrahigh on/off ratio exceeding 10^8. Meanwhile, ultrasensitive near-infrared phototransistors based on the HfS2 crystals (indirect bandgap ≈1.45 eV) exhibit an ultrahigh responsivity exceeding 3.08 × 10^5 A W^-1, which is 10^9-fold higher than the 9 × 10^-5 A W^-1 obtained from multilayer MoS2 in near-infrared photodetection. Moreover, an ultrahigh photogain exceeding 4.72 × 10^5 and an ultrahigh detectivity exceeding 4.01 × 10^12 Jones, superior to the vast majority of reported 2D-materials-based phototransistors, imply great promise for TMD-based 2D electronic and optoelectronic applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
[Prediction of 137Cs accumulation in animal products in the territory of Semipalatinsk test site].
Spiridonov, S I; Gontarenko, I A; Mukusheva, M K; Fesenko, S V; Semioshkina, N A
2005-01-01
The paper describes mathematical models for 137Cs behavior in the organisms of horses and sheep pasturing on the area bordering the "Ground Zero" testing area of the Semipalatinsk Test Site. The models are parameterized on the basis of data from an experiment with the breeds of animals now commonly encountered within the Semipalatinsk Test Site. Predictive calculations with the models show that 137Cs concentrations in the milk of horses and sheep pasturing on the area adjacent to "Ground Zero" can exceed the adopted standards for a long period of time.
NASA Astrophysics Data System (ADS)
Williston, P.; Aherne, J.; Watmough, S.; Marmorek, D.; Hall, A.; de la Cueva Bueno, P.; Murray, C.; Henolson, A.; Laurence, J. A.
2016-12-01
Northwest British Columbia, Canada, a sparsely populated and largely pristine region, is targeted for rapid industrial growth owing to the modernization of an aluminum smelter and multiple proposed liquefied natural gas (LNG) facilities. Consequently, air quality in this region is expected to undergo considerable changes within the next decade. In concert, the increase in LNG capacity driven by gas production from shale resources across North America has prompted environmental concerns and highlighted the need for science-based management decisions regarding the permitting of air emissions. In this study, an effects-based approach widely-used to support transboundary emissions policy negotiations was used to assess industrial air emissions in the Kitimat and Prince Rupert airsheds under permitted and future potential industrial emissions. Critical levels for vegetation of SO2 and NO2 and critical loads of acidity and nutrient nitrogen for terrestrial and aquatic ecosystems were estimated for both regions and compared with modelled concentration and deposition estimates to identify the potential extent and magnitude of ecosystem impacts. The critical level for SO2 was predicted to be exceeded in an area ranging from 81 to 251 km2 in the Kitimat airshed owing to emissions from an existing smelter, compared with <1 km2 in Prince Rupert under the lowest to highest emissions scenarios. In contrast, the NO2 critical level was not exceeded in Kitimat, and ranged from 4.5 to 6 km2 in Prince Rupert owing to proposed LNG related emissions. Predicted areal exceedance of the critical load of acidity for soil ranged from 1 to 28 km2 in Kitimat and 4-10 km2 in Prince Rupert, while the areal exceedance of empirical critical load for nutrient N was predicted to be greater in the Prince Rupert airshed (20-94 km2) than in the Kitimat airshed (1-31 km2). The number of lakes that exceeded the critical load of acidity did not vary greatly across emissions scenarios in the Kitimat (21-23 out of 80 sampled lakes) and Prince Rupert (0 out of 35 sampled lakes) airsheds. While critical loads have been widely used to underpin international emissions reductions of transboundary pollutants, it is clear that they can also play an important role in managing regional air emissions. In the current study, exceedance of critical levels and loads suggests that industrial emissions from the nascent LNG export sector may require careful regulation to avoid environmental impacts. Emissions management from LNG export facilities in other regions should consider critical levels and loads analyses to ensure industrial development is synergistic with ecosystem protection. While recognizing uncertainties in dispersion modelling, critical load estimates, and subsequent effects, the critical levels and loads approach is being used to inform regulatory decisions in British Columbia to prevent impacts that have been well documented in other regions.
Aquatic risk assessment of a polycarboxylate dispersant polymer used in laundry detergents.
Hamilton, J D; Freeman, M B; Reinert, K H
1996-09-01
Polycarboxylates enhance detergent soil removal properties and prevent encrustation of calcium salts on fabrics during washing. Laundry wastewater typically reaches wastewater treatment plants, which then discharge into aquatic environments. The yearly average concentration of a 4500 molecular weight (MW) sodium acrylate homopolymer reaching U.S. wastewater treatment plants will be approximately 0.7 mg/L. Publications showing the low to moderate acute aquatic toxicity of polycarboxylates are readily available. However, there are no published evaluations that estimate wastewater removal and characterize the probability of exceedance of acceptable chronic aquatic exposure. WW-TREAT can be used to estimate removal during wastewater treatment and PG-GRIDS can be applied to characterize risk for exceedance in wastewater treatment plant outfalls. After adjustments for the MW distribution of the homopolymer, WW-TREAT predicted that 6.5% will be removed in primary treatment plants and 60% will be removed in combined primary and activated sludge treatment plants. These estimates are consistent with wastewater fate tests, but underestimate homopolymer removal when homopolymer precipitation is included. Acceptable levels of chronic outfall (receiving water) exposure were based on aquatic toxicity testing in algae, fish, and Daphnia magna. PG-GRIDS predicted that no unreasonable risk for exceedance of acceptable chronic exposure will occur in the outfalls of U.S. wastewater plants. Future development of wastewater treatment models should consider polymer MW distribution and precipitation as factors that may alter removal of materials from wastewater.
Testing the adaptive radiation hypothesis for the lemurs of Madagascar.
Herrera, James P
2017-01-01
Lemurs, the diverse, endemic primates of Madagascar, are thought to represent a classic example of adaptive radiation. Based on the most complete phylogeny of living and extinct lemurs yet assembled, I tested predictions of adaptive radiation theory by estimating rates of speciation, extinction and adaptive phenotypic evolution. As predicted, lemur speciation rate exceeded that of their sister clade by nearly twofold, indicating the diversification dynamics of lemurs and mainland relatives may have been decoupled. Lemur diversification rates did not decline over time, however, as predicted by adaptive radiation theory. Optimal body masses diverged among dietary and activity pattern niches as lineages diversified into unique multidimensional ecospace. Based on these results, lemurs only partially fulfil the predictions of adaptive radiation theory, with phenotypic evolution corresponding to an 'early burst' of adaptive differentiation. The results must be interpreted with caution, however, because over the long evolutionary history of lemurs (approx. 50 million years), the 'early burst' signal of adaptive radiation may have been eroded by extinction.
The role of natural analogs in the repository licensing process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, W.M.
1995-09-01
The concept of a permanent geologic repository for high-level nuclear waste (HLW) is implicitly based on analogy to natural systems that have been stable for millions or billions of years. The time of radioactive and chemical toxicity of HLW exceeds the duration of human civilization, and it is impossible to demonstrate the accuracy of predictions of the behavior of engineered or social systems over such long periods.
Ollson, Christopher A; Whitfield Aslund, Melissa L; Knopper, Loren D; Dan, Tereza
2014-01-01
The regions of Durham and York in Ontario, Canada have partnered to construct an energy-from-waste (EFW) thermal treatment facility as part of a long term strategy for the management of their municipal solid waste. In this paper we present the results of a comprehensive ecological risk assessment (ERA) for this planned facility, based on baseline sampling and site specific modeling to predict facility-related emissions, which was subsequently accepted by regulatory authorities. Emissions were estimated for both the approved initial operating design capacity of the facility (140,000 tonnes per year) and the maximum design capacity (400,000 tonnes per year). In general, calculated ecological hazard quotients (EHQs) and screening ratios (SRs) for receptors did not exceed the benchmark value (1.0). The only exceedances noted were generally due to existing baseline media concentrations, which did not differ from those expected for similar unimpacted sites in Ontario. This suggests that these exceedances reflect conservative assumptions applied in the risk assessment rather than actual potential risk. However, under predicted upset conditions at 400,000 tonnes per year (i.e., facility start-up, shutdown, and loss of air pollution control), a potential unacceptable risk was estimated for freshwater receptors with respect to benzo(g,h,i)perylene (SR=1.1), which could not be attributed to baseline conditions. Although this slight exceedance reflects a conservative worst-case scenario (upset conditions coinciding with worst-case meteorological conditions), further investigation of potential ecological risk should be performed if this facility is expanded to the maximum operating capacity in the future. © 2013.
NASA Astrophysics Data System (ADS)
Javernick, Luke; Redolfi, Marco; Bertoldi, Walter
2018-05-01
New data collection techniques offer numerical modelers the ability to gather and utilize high-quality data sets with high spatial and temporal resolution. Such data sets are currently needed for calibration, verification, and to fuel future model development, particularly morphological simulations. This study explores the use of high-quality spatial and temporal data sets of observed bed load transport in braided river flume experiments to evaluate the ability of a two-dimensional model, Delft3D, to predict bed load transport. This study uses a fixed-bed model configuration and examines the model's shear stress calculations, which are the foundation for predicting the sediment fluxes necessary for morphological simulations. The evaluation was conducted for three flow rates, and the model setup used highly accurate Structure-from-Motion (SfM) topography and discharge boundary conditions. The model was hydraulically calibrated using bed roughness, and performance was evaluated based on depth and inundation agreement. Model bed load performance was evaluated in terms of critical shear stress exceedance area compared to maps of observed bed mobility in the flume. Following the standard hydraulic calibration, bed load performance was tested for sensitivity to horizontal eddy viscosity parameterization and bed morphology updating. Simulations produced depth errors equal to the SfM inherent errors, inundation agreement of 77-85%, and critical shear stress exceedance in agreement with 49-68% of the observed active area. This study provides insight into the ability of physically based, two-dimensional simulations to accurately predict bed load, as well as the effects of horizontal eddy viscosity and bed updating. Further, this study highlights how using high spatial and temporal resolution data to capture the physical processes at work during flume experiments can help to improve morphological modeling.
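As a rough illustration of the evaluation step, the sketch below computes the fraction of wetted cells whose bed shear stress exceeds a critical value, using the depth-slope product as a simple stand-in for Delft3D's full shear-stress calculation; all values are synthetic.

```python
import numpy as np

def shear_stress_exceedance(depth, slope, tau_crit=0.5, rho=1000.0, g=9.81):
    """Fraction of wetted cells where bed shear stress exceeds critical.

    Uses the depth-slope product tau = rho * g * h * S as a simple
    shear-stress estimate (Delft3D derives it from the full flow field);
    tau_crit is the critical shear stress for the flume sediment, in Pa.
    """
    tau = rho * g * depth * slope
    wet = depth > 0
    return np.mean(tau[wet] > tau_crit)

# Synthetic depth field (m) on a 100x200 grid with water-surface slope 0.01
rng = np.random.default_rng(3)
depth = np.clip(rng.normal(0.005, 0.004, (100, 200)), 0, None)
print(f"predicted active fraction: {shear_stress_exceedance(depth, 0.01):.0%}")
```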
Vitolins, Mara Z; Anderson, Andrea M; Delahanty, Linda; Raynor, Hollie; Miller, Gary D; Mobley, Connie; Reeves, Rebecca; Yamamoto, Monica; Champagne, Catherine; Wing, Rena R; Mayer-Davis, Elizabeth
2009-08-01
Little has been reported regarding food and nutrient intake in individuals diagnosed with type 2 diabetes, and most reports have been based on findings in select groups or individuals who self-reported having diabetes. To describe the baseline food and nutrient intake of the Look AHEAD (Action for Health in Diabetes) trial participants, compare participant intake to national guidelines, and describe demographic and health characteristics associated with food group consumption. The Look AHEAD trial is evaluating the effects of a lifestyle intervention (calorie control and increased physical activity for weight loss) compared with diabetes support and education on long-term cardiovascular and other health outcomes. Participants are 45 to 75 years old, overweight or obese (body mass index [BMI] > or = 25), and have type 2 diabetes. In this cross-sectional analysis, baseline food consumption was assessed by food frequency questionnaire from 2,757 participants between September 2000 and December 2003. Descriptive statistics were used to summarize intake by demographic characteristics. Kruskal-Wallis tests assessed univariate effects of characteristics on consumption. Multiple linear regression models assessed factors predictive of intake. Least square estimates were based on final models, and logistic regression determined factors predictive of recommended intake. Ninety-three percent of the participants exceeded the recommended percentage of calories from fat, 85% exceeded the saturated fat recommendation, and 92% consumed too much sodium. Also, fewer than half met the minimum recommended servings of fruit, vegetables, dairy, and grains. These participants with pre-existing diabetes did not meet recommended food and nutrition guidelines. These overweight adults diagnosed with diabetes are exceeding recommended intake of fat, saturated fats, and sodium, which may contribute to increasing their risk of cardiovascular disease and other chronic diseases.
Sequencing of Vocational Development Stages: Further Studies
ERIC Educational Resources Information Center
Hershenson, David B.; Lavery, Gerard J.
1978-01-01
Reports two studies supporting the prediction derived from Hershenson's life-stage vocational development model that average scores on Self-differentiation (worker self-concept and motivation) would exceed those on Competence (work habits, skills, and interpersonal relations), which in turn would exceed those on Independence (appropriateness and…
NASA Technical Reports Server (NTRS)
Mark, W. D.
1977-01-01
Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low-frequency component plus a variance-modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time-fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities were also shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationarity assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scales of the fluctuations in the variance, the fluctuations in the turbulence itself, and the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationarity assumption were developed.
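The mixture structure behind such concave curves can be written in a standard form (notation assumed here, not quoted from the report): Rice's formula gives the expected exceedance rate of level y for a stationary Gaussian response of variance sigma^2, and modulating the variance yields a mixture:

```latex
% Rice exceedance rate for a stationary Gaussian response of variance \sigma^2,
% and its mixture over a slowly fluctuating variance (standard form;
% notation assumed, not quoted from the report):
N(y \mid \sigma^2) = N_0 \exp\!\left(-\frac{y^2}{2\sigma^2}\right),
\qquad
\bar{N}(y) = N_0 \int_0^{\infty} \exp\!\left(-\frac{y^2}{2\sigma^2}\right) p(\sigma^2)\, \mathrm{d}\sigma^2 .
```

The wider the spread of p(sigma^2) relative to its mean, i.e. the larger its coefficient of variation, the more the mixture bends away from the single-variance Gaussian curve, consistent with the report's attribution of the concave contribution to the fluctuating variance.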
Zimmerman, Tammy M.
2006-01-01
The Lake Erie shoreline in Pennsylvania spans nearly 40 miles and is a valuable recreational resource for Erie County. Nearly 7 miles of the Lake Erie shoreline lies within Presque Isle State Park in Erie, Pa. Concentrations of Escherichia coli (E. coli) bacteria at permitted Presque Isle beaches occasionally exceed the single-sample bathing-water standard, resulting in unsafe swimming conditions and closure of the beaches. E. coli concentrations and other water-quality and environmental data collected at Presque Isle Beach 2 during the 2004 and 2005 recreational seasons were used to develop models, using tobit regression analyses, to predict E. coli concentrations. All variables statistically related to E. coli concentrations were included in the initial regression analyses, and after several iterations, only those explanatory variables that made the models significantly better at predicting E. coli concentrations were included in the final models. Regression models were developed using data from 2004, 2005, and the combined 2-year dataset. Variables in the 2004 model and the combined 2004-2005 model were log10 turbidity, rain weight, wave height (calculated), and wind direction. Variables in the 2005 model were log10 turbidity and wind direction. Explanatory variables not included in the final models were water temperature, streamflow, wind speed, and current speed; model results indicated these variables did not meet significance criteria at the 95-percent confidence level (probabilities were greater than 0.05). The predicted E. coli concentrations produced by the models were used to develop probabilities that concentrations would exceed the single-sample bathing-water standard for E. coli of 235 colonies per 100 milliliters. Analysis of the exceedence probabilities helped determine a threshold probability for each model, chosen such that the number of correctly predicted exceedences and nonexceedences was maximized and the number of false positives and false negatives was minimized. Future samples with computed exceedence probabilities higher than the selected threshold probability, as determined by the model, will likely exceed the E. coli standard, and a beach advisory or closing may need to be issued; computed exceedence probabilities lower than the threshold probability indicate the standard will likely not be exceeded. Additional data collected each year can be used to test and possibly improve the model. This study will aid beach managers in more rapidly determining when waters are not safe for recreational use and, subsequently, when to issue beach advisories or closings.
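The threshold-selection step can be sketched as a simple scan over candidate cutoffs on synthetic data; the search grid and tie-breaking below are assumptions, and a real application would likely weight false negatives more heavily for a safety-first policy.

```python
import numpy as np

def choose_threshold(p_exceed, observed_exceed):
    """Pick the probability cutoff that maximizes correct calls.

    `p_exceed`: model-predicted probabilities that E. coli exceeds
    235 colonies/100 mL; `observed_exceed`: boolean outcomes. Scans
    candidate thresholds and keeps the one with the most correctly
    predicted exceedences plus nonexceedences.
    """
    best_t, best_correct = 0.5, -1
    for t in np.linspace(0.05, 0.95, 19):
        predicted = p_exceed >= t
        correct = np.sum(predicted == observed_exceed)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

rng = np.random.default_rng(4)
p = rng.beta(2, 5, 300)             # synthetic model-output probabilities
obs = rng.random(300) < p           # synthetic observed exceedences
print(choose_threshold(p, obs))
```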
40 CFR Table 17 to Subpart Xxxx of... - Applicability of General Provisions to This Subpart XXXX
Code of Federal Regulations, 2011 CFR
2011-07-01
[Table excerpt: applicability of the part 63 general provisions (§§ 63.6-63.8) to subpart XXXX, covering compliance during startup, shutdown, and malfunction (SSM), methods for determining compliance, conditions for conducting performance tests, and routine and predictable SSM provisions, as modified by § 63.5990(e) and (f).]
On the adaptive daily forecasting of seismic aftershock hazard
NASA Astrophysics Data System (ADS)
Ebrahimian, Hossein; Jalayer, Fatemeh; Asprone, Domenico; Lombardi, Anna Maria; Marzocchi, Warner; Prota, Andrea; Manfredi, Gaetano
2013-04-01
Post-earthquake ground motion hazard assessment is a fundamental initial step towards time-dependent seismic risk assessment for buildings in a post-main-shock environment. Therefore, operative forecasting of seismic aftershock hazard forms a viable support basis for decision-making regarding search and rescue, inspection, repair, and re-occupation in a post-main-shock environment. Arguably, an adaptive procedure for integrating the aftershock occurrence rate together with suitable ground motion prediction relations is key to Probabilistic Seismic Aftershock Hazard Assessment (PSAHA). In the short term, the seismic hazard may vary significantly (Jordan et al., 2011), particularly after the occurrence of a high-magnitude earthquake. Hence, PSAHA requires a reliable model that is able to track the time evolution of the earthquake occurrence rates together with suitable ground motion prediction relations. This work focuses on providing adaptive daily forecasts of the mean daily rate of exceeding various spectral acceleration values (the aftershock hazard). Two well-established earthquake occurrence models suitable for daily seismicity forecasts associated with the evolution of an aftershock sequence, namely the modified Omori aftershock model and the Epidemic Type Aftershock Sequence (ETAS) model, are adopted. The parameters of the modified Omori model are updated on a daily basis by Bayesian updating, using the data provided by the ongoing aftershock sequence, following the methodology originally proposed by Jalayer et al. (2011). Bayesian updating is also used to provide sequence-based parameter estimates for a given ground motion prediction model; that is, the aftershock events in an ongoing sequence are exploited to update, in an adaptive manner, the parameters of an existing ground motion prediction model. As a numerical example, the mean daily rates of exceeding specific spectral acceleration values are estimated adaptively for the L'Aquila 2009 aftershock catalog. The parameters of the modified Omori model are estimated adaptively by Bayesian updating, based on the aftershock events that had already taken place on each elapsed day and using the Italian generic sequence (Lolli and Gasperini, 2003) as prior information. For the ETAS model, the real-time daily forecast of the spatio-temporal evolution of the L'Aquila sequence provided to the Italian Civil Protection for managing the emergency (Marzocchi and Lombardi, 2009) is utilized. Moreover, the parameters of the ground motion prediction relation proposed by Sabetta and Pugliese (1996) are updated adaptively and on a daily basis using Bayesian updating based on the ongoing aftershock sequence. Finally, the forecasted daily rates of exceeding (first-mode) spectral acceleration values are compared with observed rates of exceedance calculated from the waveforms that actually occurred.
References: Jalayer, F., Asprone, D., Prota, A., and Manfredi, G. (2011). A decision support system for post-earthquake reliability assessment of structures subjected to aftershocks: an application to L'Aquila earthquake, 2009. Bull. Earthquake Eng. 9(4), 997-1014. Jordan, T.H., Chen, Y.-T., Gasparini, P., Madariaga, R., Main, I., Marzocchi, W., Papadopoulos, G., Sobolev, G., Yamaoka, K., and Zschau, J. (2011). Operational earthquake forecasting: state of knowledge and guidelines for implementation. Ann. Geophys. 54(4), 315-391, doi:10.4401/ag-5350. Lolli, B., and Gasperini, P. (2003). Aftershocks hazard in Italy part I: estimation of time-magnitude distribution model parameters and computation of probabilities of occurrence. J. Seismol. 7(2), 235-257. Marzocchi, W., and Lombardi, A.M. (2009). Real-time forecasting following a damaging earthquake. Geophys. Res. Lett. 36, L21302, doi:10.1029/2009GL040233. Sabetta, F., and Pugliese, A. (1996). Estimation of response spectra and simulation of nonstationary earthquake ground motions. Bull. Seismol. Soc. Am. 86(2), 337-352.
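A toy version of the PSAHA recipe described above is sketched below: a modified Omori daily rate multiplied by a GMPE-based probability of exceeding a spectral-acceleration level, assuming a lognormal ground-motion distribution. All parameter values (K, c, p, and the GMPE moments) are placeholders, not the sequence-based estimates of the paper.

```python
import numpy as np
from scipy.stats import norm

def daily_rate_of_exceedance(t, sa_level, K=80.0, c=0.1, p=1.1,
                             mean_log_sa=-1.0, sigma_log_sa=0.35):
    """Mean daily rate of exceeding `sa_level` (g) on day t of a sequence.

    Expected aftershocks on day t from the modified Omori law,
    n(t) = K / (t + c)^p (treated as a one-day rate here), times the
    GMPE exceedance probability for a lognormal spectral acceleration.
    All parameter values are illustrative placeholders.
    """
    n_t = K / (t + c) ** p                        # events per day
    p_exceed = norm.sf(np.log10(sa_level), loc=mean_log_sa,
                       scale=sigma_log_sa)        # P(Sa > level | event)
    return n_t * p_exceed

for day in (1, 7, 30):
    print(day, daily_rate_of_exceedance(day, sa_level=0.2))
```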
Pharmaceuticals in water, fish and osprey nestlings in Delaware River and Bay
Bean, Thomas G.; Rattner, Barnett A.; Lazarus, Rebecca S.; Day, Daniel D.; Burket, S. Rebekah; Brooks, Bryan W.; Haddad, Samuel P.; Bowerman, William W.
2018-01-01
Exposure of wildlife to Active Pharmaceutical Ingredients (APIs) is likely to occur but studies of risk are limited. One exposure pathway that has received attention is trophic transfer of APIs in a water-fish-osprey food chain. Samples of water, fish plasma and osprey plasma were collected from Delaware River and Bay, and analyzed for 21 APIs. Only 2 of 21 analytes exceeded method detection limits in osprey plasma (acetaminophen and diclofenac) with plasma levels typically 2–3 orders of magnitude below human therapeutic concentrations (HTC). We built upon a screening level model used to predict osprey exposure to APIs in Chesapeake Bay and evaluated whether exposure levels could have been predicted in Delaware Bay had we just measured concentrations in water or fish. Use of surface water and BCFs did not predict API concentrations in fish well, likely due to fish movement patterns, and partitioning and bioaccumulation uncertainties associated with these ionizable chemicals. Input of highest measured API concentration in fish plasma combined with pharmacokinetic data accurately predicted that diclofenac and acetaminophen would be the APIs most likely detected in osprey plasma. For the majority of APIs modeled, levels were not predicted to exceed 1 ng/mL or method detection limits in osprey plasma. Based on the target analytes examined, there is little evidence that APIs represent a significant risk to ospreys nesting in Delaware Bay. If an API is present in fish orders of magnitude below HTC, sampling of fish-eating birds is unlikely to be necessary. However, several human pharmaceuticals accumulated in fish plasma within a recommended safety factor for HTC. It is now important to expand the scope of diet-based API exposure modeling to include alternative exposure pathways (e.g., uptake from landfills, dumps and wastewater treatment plants) and geographic locations (developing countries) where API contamination of the environment may represent greater risk.
Using VAPEPS for noise control on Space Station Freedom
NASA Technical Reports Server (NTRS)
Badilla, Gloria; Bergen, Thomas; Scharton, Terry
1991-01-01
Noise environmental control is an important design consideration for Space Station Freedom (SSF), both for crew safety and productivity. Acoustic noise requirements are established to eliminate fatigue and potential hearing loss by crew members from long-term exposure and to facilitate speech communication. VAPEPS (VibroAcoustic Payload Environment Prediction System) is currently being applied to SSF for prediction of the on-orbit noise and vibration environments induced in the 50 to 10,000 Hz frequency range. Various sources such as fans, pumps, centrifuges, exercise equipment, and other mechanical devices are used in the analysis. The predictions will be used in design tradeoff studies and to provide confidence that requirements will be met. Preliminary predictions show that the required levels will be exceeded unless substantial noise control measures are incorporated in the SSF design. Predicted levels for an SSF design without acoustic control treatments exceed requirements by 25 dB in some one-third octave frequency bands.
Nahum-Shani, Inbal; Bamberger, Peter A
2011-01-01
Seeking to explain mixed empirical findings regarding the buffering effect of social support on work-based stress-strain relations, we posit that whether an increase in the level of support received buffers or exacerbates the harmful effects of workload on employee health and well-being is contingent upon the general pattern characterizing an employee's supportive exchanges across his/her close relationships. Specifically, we propose that the buffering effect of receiving social support depends on whether the employee perceives his/her social exchanges as reciprocal (support given equals support received), under-reciprocating (support given exceeds support received), or over-reciprocating (support received exceeds support given). Based on longitudinal data collected from a random sample of blue-collar workers, our findings support our predictions, indicating that the buffering effect of social support on the relationship between work hours (on the one hand) and employee health and well-being (on the other) varies as a function of the pattern of exchange relations between an employee and his/her close support providers.
Outlaw, George S.; Hoos, Anne B.; Pankey, John T.
1994-01-01
Rainfall, streamflow, and water-quality data were collected during storm conditions at five urban watersheds in Nashville, Tennessee. These data can be used to build a database for developing predictive models of the relations between stormwater quality and land use, storm characteristics, and seasonal variations. The primary land use and mix of land uses were different for each watershed. Stormwater samples were collected during three storms at each watershed and analyzed for selected volatile, acidic, and base/neutral organic compounds; organic pesticides; trace metals; conventional pollutants; and several physical properties. Storm loads were computed for all constituents and properties with event mean concentrations above the minimum reporting level. None of the samples contained acidic organic compounds at concentrations above the minimum reporting levels. Several constituents in each of the other categories, however, were present at concentrations above the minimum reporting level. For 21 of these constituents, water-quality criteria have been promulgated by the State of Tennessee. For only 8 of the 21 did the value exceed the most restrictive of the criteria: pyrene, dieldrin, and mercury concentrations and counts of fecal coliform exceeded the criteria for recreational use; copper and zinc concentrations and pH value exceeded the criteria for fish and aquatic life; and lead concentrations exceeded the criteria for domestic supply.
NASA Astrophysics Data System (ADS)
Biswas, Jhumoor; John, Kuruvilla; Farooqui, Zuber
The recent Intergovernmental Panel on Climate Change report predicts significant temperature increases over the century, which constitute the pulse of climate variability in a region. A modeling study was performed to identify the potential impact of temperature perturbations on tropospheric ozone concentrations in South Texas. A future-case modeling scenario that incorporates appropriate emission reduction strategies without accounting for climatic inconsistencies was used in this study. The photochemical modeling was undertaken for a high-ozone episode of 13-20 September 1999, and a future modeling scenario was projected for ozone episode days in 2007 utilizing the meteorological conditions prevalent in the base year. The temperatures were increased uniformly throughout the simulation domain and through the vertical layers by 2°C, 3°C, 4°C, 5°C, and 6°C, respectively, in the future-year modeling case. These temperature perturbations represented the outcome of extreme climate change within the study region. Significantly large changes in peak ozone concentrations were predicted by the photochemical model. For the 6°C temperature perturbation, the greatest amplification in the maximum 8-h ozone concentrations within urban areas of the modeling domain was approximately 12 ppb. In addition, transboundary flux from major industrialized urban areas played a major role in supplementing the high ozone concentrations during the perturbed temperature scenarios. The United States Environmental Protection Agency (USEPA) is currently proposing stricter 8-h ozone standards. The effect of temperature perturbations on ozone exceedances based on current and potentially stricter future National Ambient Air Quality Standards (NAAQS) was also studied. Temperature had an appreciable spatial impact on the 8-h ozone exceedances, with a considerable increase in the spatial area exceeding the NAAQS for 8-h ozone within the study region for each successive increase in temperature. The number of exceedances of the 8-h ozone standard increased significantly with each degree rise in temperature, and the problem becomes even more acute in light of the stricter ozone standards proposed for the future.
Rattner, B.; Hatfield, J.; Melancon, M.; Custer, T.; Tillitt, D.
1995-01-01
Pipping black-crowned night-heron (Nycticorax nycticorax) embryos were collected from an uncontaminated site (Chincoteague National Wildlife Refuge, VA) and three polluted sites (Cat Island, Green Bay, WI; Bair and West Marin Islands, San Francisco Bay, CA). Hepatic microsomal monooxygenases were induced up to 85-fold relative to the reference site, and induction was associated with concentrations of total PCBs and 11 PCB congeners that are presumed to express toxicity through the Ah receptor. TEQs [mathematically predicted; the summed products of PCB congener concentrations using 5 different sets of toxic equivalency factors (TEFs)] were compared to TCDD-EQs [derived by bioassay; ethoxyresorufin O-deethylase activity of treated H4IIE rat hepatoma cells]. Although TEQs were up to 15-fold greater than TCDD-EQs, the pattern among sites was consistent and TEQs were highly correlated with TCDD-EQs. TEFs based on single-congener mammalian studies yielded TEQs that greatly exceeded values from the H4IIE bioassay of field samples. TEFs generated from avian egg injection studies yielded TEQs that most closely approximated bioassay-derived TCDD-EQs. Cytochrome P450 parameters were related to TEQs and TCDD-EQs; adjusted r2 values often exceeded 0.5 for the relations among mathematically predicted TEQs and cytochrome P450 measurements. These data document the general predictive value of TEQs and TCDD-EQs for P450 induction in field-collected samples, but also indicate the need for development of TEFs for the species and biological end point of concern.
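The TEQ arithmetic itself is a simple sum of concentration x TEF products, as in this sketch (congener concentrations and TEF values below are illustrative, not a published set):

```python
def total_teq(congener_conc, tef):
    """Toxic equivalents: sum of congener concentration x TEF products.

    `congener_conc` and `tef` map PCB congener IDs to concentration
    (e.g. ng/g wet weight) and toxic equivalency factor, respectively.
    Congeners without a TEF contribute zero.
    """
    return sum(c * tef.get(k, 0.0) for k, c in congener_conc.items())

conc = {"PCB126": 0.8, "PCB77": 12.0, "PCB105": 40.0}      # ng/g (synthetic)
tefs = {"PCB126": 0.1, "PCB77": 0.0005, "PCB105": 0.0001}  # illustrative TEFs
print(f"TEQ = {total_teq(conc, tefs):.3f} ng TCDD-EQ/g")
```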
Regression trees modeling and forecasting of PM10 air pollution in urban areas
NASA Astrophysics Data System (ADS)
Stoimenova, M.; Voynikova, D.; Ivanov, A.; Gocheva-Ilieva, S.; Iliev, I.
2017-10-01
Fine particulate matter (PM10) air pollution is a serious problem affecting the health of the population in many Bulgarian cities. As an example, the object of this study is PM10 pollution in the town of Pleven, Northern Bulgaria. The measured concentrations of this air pollutant in this city consistently exceeded the permissible limits set by European and national legislation. Based on data for the last 6 years (2011-2016), the analysis shows that this applies both to the daily limit of 50 micrograms per cubic meter and to the allowable number of daily concentration exceedances, 35 per year. The average annual concentration of PM10 also exceeded the prescribed norm of no more than 40 micrograms per cubic meter. The aim of this work is to build high-performance mathematical models for effective prediction and forecasting of the level of PM10 pollution. The study was conducted with the powerful and flexible data mining technique Classification and Regression Trees (CART). The values of PM10 were fitted with respect to meteorological data such as maximum and minimum air temperature, relative humidity, wind speed and direction, and others, as well as with time and autoregressive variables. As a result, the obtained CART models demonstrate high predictive ability and fit the actual data with agreement of up to 80%. The best models were applied to forecast the pollution level 3 to 7 days ahead. An interpretation of the modeling results is presented.
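A minimal CART sketch in the same spirit follows, using scikit-learn's DecisionTreeRegressor on synthetic data; the predictor set mirrors the study, but the data, tree depth, and resulting score are fabricated for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Synthetic daily records: next-day PM10 from meteorology plus a lag term.
rng = np.random.default_rng(5)
n = 1000
tmax = rng.normal(15, 10, n)          # max air temperature, deg C
humidity = rng.uniform(30, 100, n)    # relative humidity, %
wind = rng.gamma(2.0, 1.5, n)         # wind speed, m/s
pm10_lag = rng.gamma(3.0, 15.0, n)    # previous day's PM10, ug/m3
pm10 = (0.6 * pm10_lag + 0.8 * humidity - 1.2 * tmax - 4 * wind
        + rng.normal(0, 10, n)).clip(1)

X = np.column_stack([tmax, humidity, wind, pm10_lag])
tree = DecisionTreeRegressor(max_depth=5, min_samples_leaf=20)
tree.fit(X[:800], pm10[:800])
print(f"holdout R^2 = {tree.score(X[800:], pm10[800:]):.2f}")
```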
Quantitative CT based radiomics as predictor of resectability of pancreatic adenocarcinoma
NASA Astrophysics Data System (ADS)
van der Putten, Joost; Zinger, Svitlana; van der Sommen, Fons; de With, Peter H. N.; Prokop, Mathias; Hermans, John
2018-02-01
In current clinical practice, the resectability of pancreatic ductal adenocarcinoma (PDA) is determined subjectively by a physician, which is an error-prone procedure. In this paper, we present a method for automated determination of resectability of PDA from a routine abdominal CT, to reduce such decision errors. The tumor features are extracted from a group of patients with both hypo- and iso-attenuating tumors, of which 29 were resectable and 21 were not. The tumor contours are supplied by a medical expert. We present an approach that uses intensity, shape, and texture features to determine tumor resectability. The best classification results are obtained with a fine Gaussian SVM and the L0 Feature Selection algorithm. Compared to expert predictions made on the same dataset, our method achieves better classification results. We obtain significantly better results on correctly predicting non-resectability (+17%) compared to an expert, which is essential for patient treatment (negative prediction value). Moreover, our predictions of resectability exceed expert predictions by approximately 3% (positive prediction value).
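"Fine Gaussian SVM" is MATLAB Classification Learner terminology for an RBF SVM with a small kernel scale; the sketch below approximates that preset in scikit-learn on placeholder radiomics features. The kernel-scale formula and all data here are assumptions, not the paper's setup.

```python
# RBF ("fine Gaussian") SVM sketch on synthetic radiomics features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))       # placeholder feature matrix (50 patients)
y = rng.integers(0, 2, size=50)     # placeholder labels: 1 = resectable

P = X.shape[1]
kernel_scale = np.sqrt(P) / 4       # MATLAB's "fine" preset (assumed here)
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", gamma=1.0 / kernel_scale**2))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```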
Prediction of hospital failure: a post-PPS analysis.
Gardiner, L R; Oswald, S L; Jahera, J S
1996-01-01
This study investigates the ability of discriminant analysis to provide accurate predictions of hospital failure. Using data from the period following the introduction of the Prospective Payment System, we developed discriminant functions for each of two hospital ownership categories: not-for-profit and proprietary. The resulting discriminant models contain six and seven variables, respectively. For each ownership category, the variables represent four major aspects of financial health (liquidity, leverage, profitability, and efficiency) plus county market share and length of stay. The proportion of closed hospitals misclassified as open one year before closure does not exceed 0.05 for either ownership type. Our results show that discriminant functions based on a small set of financial and nonfinancial variables provide the capability to predict hospital failure reliably for both not-for-profit and proprietary hospitals.
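A minimal sketch of the two-group discriminant analysis follows, using scikit-learn on simulated data; the six predictor slots and the closure labels are placeholders, not the study's variable definitions.

```python
# Linear discriminant analysis sketch for hospital failure prediction.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))   # stand-ins for liquidity, leverage,
                                # profitability, efficiency, market
                                # share, length of stay
y = rng.integers(0, 2, 200)     # 1 = hospital closed within one year

lda = LinearDiscriminantAnalysis().fit(X, y)

# Proportion of closed hospitals misclassified as open (the error the
# abstract reports as below 0.05 on the real data):
closed = y == 1
miss_rate = np.mean(lda.predict(X[closed]) == 0)
print(f"closed-misclassified-as-open rate: {miss_rate:.2f}")
```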
The evolution of dispersal conditioned on migration status
Asaduzzaman, Sarder Mohammed; Wild, Geoff
2012-01-01
We consider a model for the evolution of dispersal of offspring. Dispersal is treated as a parental trait that is expressed conditional upon a parent's own “migration status,” that is, whether a parent, itself, is native or nonnative to the area in which it breeds. We compare the evolution of this kind of conditional dispersal to the evolution of unconditional dispersal, in order to determine the extent to which the former changes predictions about population-wide levels of dispersal. We use numerical simulations of an inclusive-fitness model, and individual-based simulations, to predict population-average dispersal rates for the case in which dispersal based on migration status occurs. When our model predictions are compared to predictions that neglect conditional dispersal, observed differences between rates are only slight, and never exceed 0.06. While the effect of dispersal conditioned upon migration status could be detected in a carefully designed experiment, we argue that less-than-ideal experimental conditions, and factors such as dispersal conditioned on sex, are likely to play a larger role than the type of conditional dispersal studied here. PMID:22837829
Chen, Poyu; Lin, Keh-Chung; Liing, Rong-Jiuan; Wu, Ching-Yi; Chen, Chia-Ling; Chang, Ku-Chou
2016-06-01
To examine the criterion validity, responsiveness, and minimal clinically important difference (MCID) of the EuroQoL 5-Dimensions Questionnaire (EQ-5D-5L) and visual analog scale (EQ-VAS) in people receiving rehabilitation after stroke. The EQ-5D-5L, along with four criterion measures-the Medical Research Council scales for muscle strength, the Fugl-Meyer assessment, the functional independence measure, and the Stroke Impact Scale-was administered to 65 patients with stroke before and after 3 to 4 weeks of therapy. Criterion validity was estimated using the Spearman correlation coefficient. Responsiveness was analyzed by the effect size, standardized response mean (SRM), and criterion responsiveness. The MCID was determined by anchor-based and distribution-based approaches. The percentage of patients exceeding the MCID was also reported. Concurrent validity of the EQ-Index was better than that of the EQ-VAS. The EQ-Index had better power for predicting the rehabilitation outcome in activities of daily living than other motor-related outcome measures. The EQ-Index was moderately responsive to change (SRM = 0.63), whereas the EQ-VAS was only mildly responsive to change. The MCID estimates of the EQ-Index (with the percentage of patients exceeding the MCID) were 0.10 (33.8%) based on the anchor-based approach and 0.10 (33.8%) based on the distribution-based approach; the corresponding estimates for the EQ-VAS were 8.61 (41.5%) and 10.82 (32.3%). The EQ-Index, but not the EQ-VAS, has shown reasonable concurrent validity, limited predictive validity, and acceptable responsiveness for detecting change in health-related quality of life in stroke patients undergoing rehabilitation. Future research considering different recovery stages after stroke is warranted to validate these estimates.
Trowbridge, Philip R; Kahl, J Steve; Sassan, Dari A; Heath, Douglas L; Walsh, Edward M
2010-07-01
Six watersheds in New Hampshire were studied to determine the effects of road salt on stream water quality. Specific conductance in streams was monitored every 15 min for one year using dataloggers. Chloride concentrations were calculated from specific conductance using empirical relationships. Stream chloride concentrations were directly correlated with development in the watersheds and were inversely related to streamflow. Exceedances of the EPA water quality standard for chloride were detected in the four watersheds with the most development. The number of exceedances during a year was linearly related to the annual average concentration of chloride. Exceedances of the water quality standard were not predicted for streams with annual average concentrations less than 102 mg L(-1). Chloride was imported into three of the watersheds at rates ranging from 45 to 98 Mg Cl km(-2) yr(-1). Ninety-one percent of the chloride imported was road salt for deicing roadways and parking lots. A simple, mass balance equation was shown to predict annual average chloride concentrations from streamflow and chloride import rates to the watershed. This equation, combined with the apparent threshold for exceedances of the water quality standard, can be used for screening-level TMDLs for road salt in impaired watersheds.
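For intuition, a minimal numerical sketch of the mass-balance relation follows: the annual average concentration is the annual chloride import divided by the annual streamflow volume. All numbers below are hypothetical and chosen only to show the unit handling, not taken from the study.

```python
# Back-of-envelope chloride mass balance: C_avg = annual load / annual flow.
import_rate = 70.0      # Mg Cl per km^2 per year (within the 45-98 range)
area_km2 = 20.0         # hypothetical watershed area
runoff_m = 0.5          # hypothetical annual runoff depth, m/yr

load_mg = import_rate * area_km2 * 1e9          # Mg -> mg
flow_l = runoff_m * area_km2 * 1e6 * 1e3        # m^3 -> L
c_avg = load_mg / flow_l
print(f"annual average Cl: {c_avg:.0f} mg/L")   # ~140 mg/L here
# Values above ~102 mg/L would predict exceedances of the EPA standard.
```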
The unintended energy impacts of increased nitrate contamination from biofuels production.
Twomey, Kelly M; Stillwell, Ashlynn S; Webber, Michael E
2010-01-01
Increases in corn cultivation for biofuels production, due to the Energy Independence and Security Act of 2007, are likely to lead to increases in nitrate concentrations in both surface and groundwater resources in the United States. These increases might trigger the requirement for additional energy consumption for water treatment to remove the nitrates. While these increasing concentrations of nitrate might pose a human health concern, most water resources were found to be within current maximum contaminant level (MCL) limits of 10 mg L(-1) NO(3)-N. When water resources exceed this MCL, energy-intensive drinking water treatment is required to reduce nitrate levels below 10 mg L(-1). Based on prior estimates of water supplies currently exceeding the nitrate MCL, we calculate that advanced drinking water treatment might require an additional 2360 million kWh annually (for nitrate-affected areas only)--a 2100% increase in energy requirements for water treatment in those same areas--to mitigate nitrate contamination and meet the MCL requirement. We predict that projected increases in nitrate contamination in water may impact the energy consumed in the water treatment sector, because of the convergence of several related trends: (1) increasing cornstarch-based ethanol production, (2) increasing nutrient loading in surface water and groundwater resources as a consequence of increased corn-based ethanol production, (3) additional drinking water sources that exceed the MCL for nitrate, and (4) potentially more stringent drinking water standards for nitrate.
Er3+-doped BaY2F8 crystal waveguides for broadband optical amplification at 1.5 μm
NASA Astrophysics Data System (ADS)
Toccafondo, V.; Cerqueira S., A.; Faralli, S.; Sani, E.; Toncelli, A.; Tonelli, M.; Di Pasquale, F.
2007-01-01
Integrated waveguide amplifiers based on high-concentration Er3+ doped BaY2F8 crystals are numerically studied by combining a full-vectorial finite element based modal analysis and propagation-rate equations. Using realistic input data, such as the absorption/emission cross sections and Er level lifetimes measured on grown crystal samples, we investigate the amplifier performance by optimizing the total Er concentration. We predict an optimum gain coefficient of up to 5 dB/cm and a broad amplification bandwidth exceeding 80 nm with 1480 nm pumping.
Eash, David A.; Barnes, Kimberlee K.; Veilleux, Andrea G.
2013-01-01
A statewide study was performed to develop regional regression equations for estimating selected annual exceedance-probability statistics for ungaged stream sites in Iowa. The study area comprises streamgages located within Iowa and 50 miles beyond the State’s borders. Annual exceedance-probability estimates were computed for 518 streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data through 2010. The estimation of the selected statistics included a Bayesian weighted least-squares/generalized least-squares regression analysis to update regional skew coefficients for the 518 streamgages. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low flows. Also, geographic information system software was used to measure 59 selected basin characteristics for each streamgage. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for each flood region in Iowa for estimating discharges for ungaged stream sites with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. A total of 394 streamgages were included in the development of regional regression equations for three flood regions (regions 1, 2, and 3) that were defined for Iowa based on landform regions and soil regions. Average standard errors of prediction range from 31.8 to 45.2 percent for flood region 1, 19.4 to 46.8 percent for flood region 2, and 26.5 to 43.1 percent for flood region 3. The pseudo coefficients of determination for the generalized least-squares equations range from 90.8 to 96.2 percent for flood region 1, 91.5 to 97.9 percent for flood region 2, and 92.4 to 96.0 percent for flood region 3. The regression equations are applicable only to stream sites in Iowa with flows not significantly affected by regulation, diversion, channelization, backwater, or urbanization and with basin characteristics within the range of those used to develop the equations. These regression equations will be implemented within the U.S. Geological Survey StreamStats Web-based geographic information system tool. StreamStats allows users to click on any ungaged site on a river and compute estimates of the eight selected statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged sites also are provided by the Web-based tool. StreamStats also allows users to click on any streamgage in Iowa and estimates computed for these eight selected statistics are provided for the streamgage.
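As a simplified illustration of the at-site fitting step (without the expected moments algorithm, low-outlier screening, or regional skew weighting used in the study), the sketch below fits a log-Pearson Type III distribution to made-up annual peaks by method of moments and evaluates selected annual exceedance probabilities.

```python
# Log-Pearson Type III sketch: fit Pearson III to log10 annual peaks,
# then read off selected annual exceedance probability (AEP) quantiles.
import numpy as np
from scipy import stats

peaks = np.array([3200., 5400., 2100., 8800., 4600., 3900.,
                  12000., 2700., 6100., 5000.])   # hypothetical peaks, cfs
logq = np.log10(peaks)

skew = stats.skew(logq, bias=False)
mean, sd = logq.mean(), logq.std(ddof=1)

for aep in (0.50, 0.10, 0.01):                    # 50-, 10-, 1-percent AEP
    k = stats.pearson3.ppf(1 - aep, skew)         # standardized frequency factor
    print(f"{aep:.0%} AEP flood: {10**(mean + k*sd):,.0f} cfs")
```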
Nacher, Jose C; Ochiai, Tomoshiro
2012-05-01
Increasingly accessible financial data allow researchers to infer market-dynamics-based laws and to propose models that are able to reproduce them. In recent years, several stylized facts have been uncovered. Here we perform an extensive analysis of foreign exchange data that leads to the unveiling of a statistical financial law. First, our findings show that, on average, volatility increases more when the price exceeds the highest (or lowest) value, i.e., breaks the resistance line. We call this the breaking-acceleration effect. Second, our results show that the probability P(T) to break the resistance line in the past time T follows a power law in both real and theoretically simulated data. However, the probability calculated using real data is rather lower than the one obtained using a traditional Black-Scholes (BS) model. Taken together, the present analysis characterizes a different stylized fact of financial markets and shows that the market exceeds a past (historical) extreme price fewer times than expected by the BS model (the resistance effect). However, when the market does, we predict that the average volatility at that time point will be much higher. These findings indicate that any Markovian model does not faithfully capture the market dynamics.
Modeling the probability of arsenic in groundwater in New England as a tool for exposure assessment.
Ayotte, Joseph D; Nolan, Bernard T; Nuckols, John R; Cantor, Kenneth P; Robinson, Gilpin R; Baris, Dalsu; Hayes, Laura; Karagas, Margaret; Bress, William; Silverman, Debra T; Lubin, Jay H
2006-06-01
We developed a process-based model to predict the probability of arsenic exceeding 5 microg/L in drinking water wells in New England bedrock aquifers. The model is being used for exposure assessment in an epidemiologic study of bladder cancer. One important study hypothesis that may explain increased bladder cancer risk is elevated concentrations of inorganic arsenic in drinking water. In eastern New England, 20-30% of private wells exceed the arsenic drinking water standard of 10 micrograms per liter. Our predictive model significantly improves the understanding of factors associated with arsenic contamination in New England. Specific rock types, high arsenic concentrations in stream sediments, geochemical factors related to areas of Pleistocene marine inundation and proximity to intrusive granitic plutons, and hydrologic and landscape variables relating to groundwater residence time increase the probability of arsenic occurrence in groundwater. Previous studies suggest that arsenic in bedrock groundwater may be partly from past arsenical pesticide use. Variables representing historic agricultural inputs do not improve the model, indicating that this source does not significantly contribute to current arsenic concentrations. Due to the complexity of the fractured bedrock aquifers in the region, well depth and related variables also are not significant predictors.
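The abstract describes a model for the probability that arsenic exceeds a threshold given geologic and hydrologic covariates; a minimal logistic-regression sketch of that idea is shown below. The logistic form, the three covariates, and all coefficients are assumptions for illustration, not the study's actual model.

```python
# Logistic sketch of P(As > 5 ug/L) from placeholder covariates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([
    rng.integers(0, 2, n),     # indicator: high-As rock type (assumed)
    rng.normal(size=n),        # stream-sediment As, standardized (assumed)
    rng.normal(size=n),        # groundwater residence-time proxy (assumed)
])
logit_p = -1.5 + 1.2*X[:, 0] + 0.8*X[:, 1] + 0.5*X[:, 2]
y = (rng.random(n) < 1/(1 + np.exp(-logit_p))).astype(float)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.params)            # fitted coefficients recover the signal
```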
Effectiveness and cost of reducing particle-related mortality with particle filtration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisk, W. J.; Chan, W. R.
This study evaluates the mortality-related benefits and costs of improvements in particle filtration in U.S. homes and commercial buildings based on models with empirical inputs. The models account for time spent in various environments as well as activity levels and associated breathing rates. The scenarios evaluated include improvements in filter efficiencies in both forced-air heating and cooling systems of homes and heating, ventilating, and air conditioning systems of workplaces as well as use of portable air cleaners in homes. The predicted reductions in mortality range from approximately 0.25 to 2.4 per 10 000 population. The largest reductions in mortality were from interventions with continuously operating portable air cleaners in homes because, given our scenarios, these portable air cleaners with HEPA filters most reduced particle exposures. For some interventions, predicted annual mortality-related economic benefits exceed $1000 per person. Economic benefits always exceed costs with benefit-to-cost ratios ranging from approximately 3.9 to 133. In conclusion, restricting interventions to homes of the elderly further increases the mortality reductions per unit population and the benefit-to-cost ratios.
Auinger, Hans-Jürgen; Schönleben, Manfred; Lehermeier, Christina; Schmidt, Malthe; Korzun, Viktor; Geiger, Hartwig H; Piepho, Hans-Peter; Gordillo, Andres; Wilde, Peer; Bauer, Eva; Schön, Chris-Carolin
2016-11-01
Genomic prediction accuracy can be significantly increased by model calibration across multiple breeding cycles as long as selection cycles are connected by common ancestors. In hybrid rye breeding, application of genome-based prediction is expected to increase selection gain because of long selection cycles in population improvement and development of hybrid components. Essentially two prediction scenarios arise: (1) prediction of the genetic value of lines from the same breeding cycle in which model training is performed and (2) prediction of lines from subsequent cycles. It is the latter from which a reduction in cycle length and consequently the strongest impact on selection gain is expected. We empirically investigated genome-based prediction of grain yield, plant height and thousand kernel weight within and across four selection cycles of a hybrid rye breeding program. Prediction performance was assessed using genomic and pedigree-based best linear unbiased prediction (GBLUP and PBLUP). A total of 1040 S2 lines were genotyped with 16k SNPs and each year testcrosses of 260 S2 lines were phenotyped in seven or eight locations. The performance gap between GBLUP and PBLUP increased significantly for all traits when model calibration was performed on aggregated data from several cycles. Prediction accuracies obtained from cross-validation were in the order of 0.70 for all traits when data from all cycles (N_CS = 832) were used for model training and exceeded within-cycle accuracies in all cases. As long as selection cycles are connected by a sufficient number of common ancestors and prediction accuracy has not reached a plateau when increasing sample size, aggregating data from several preceding cycles is recommended for predicting genetic values in subsequent cycles despite decreasing relatedness over time.
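For readers unfamiliar with GBLUP, the sketch below builds a VanRaden genomic relationship matrix from simulated markers and computes BLUP breeding values from it; the marker data, heritability, and variance ratio are placeholders, not the study's estimates.

```python
# Minimal GBLUP sketch: VanRaden G matrix plus mixed-model shrinkage.
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 1000
M = rng.integers(0, 3, size=(n, p)).astype(float)   # SNP counts 0/1/2
freq = M.mean(axis=0) / 2
Z = M - 2*freq                                      # centered genotypes
G = Z @ Z.T / (2*np.sum(freq*(1 - freq)))           # VanRaden G matrix

u_true = Z @ rng.normal(0, 0.05, p)                 # simulated breeding values
y = u_true + rng.normal(0, 1.0, n)                  # phenotypes

lam = 1.0                                           # sigma_e^2/sigma_u^2 (assumed)
u_hat = G @ np.linalg.solve(G + lam*np.eye(n), y - y.mean())
print("prediction accuracy:", np.corrcoef(u_hat, u_true)[0, 1])
```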
Towards psychologically adaptive brain-computer interfaces
NASA Astrophysics Data System (ADS)
Myrden, A.; Chau, T.
2016-12-01
Objective. Brain-computer interface (BCI) performance is sensitive to short-term changes in psychological states such as fatigue, frustration, and attention. This paper explores the design of a BCI that can adapt to these short-term changes. Approach. Eleven able-bodied individuals participated in a study during which they used a mental task-based EEG-BCI to play a simple maze navigation game while self-reporting their perceived levels of fatigue, frustration, and attention. In an offline analysis, a regression algorithm was trained to predict changes in these states, yielding Pearson correlation coefficients in excess of 0.45 between the self-reported and predicted states. Two means of fusing the resultant mental state predictions with mental task classification were investigated. First, single-trial mental state predictions were used to predict correct classification by the BCI during each trial. Second, an adaptive BCI was designed that retrained a new classifier for each testing sample using only those training samples for which predicted mental state was similar to that predicted for the current testing sample. Main results. Mental state-based prediction of BCI reliability exceeded chance levels. The adaptive BCI exhibited significant, but practically modest, increases in classification accuracy for five of 11 participants and no significant difference for the remaining six despite a smaller average training set size. Significance. Collectively, these findings indicate that adaptation to psychological state may allow the design of more accurate BCIs.
Kitzmann, JP; O’Gorman, D; Kin, T; Gruessner, AC; Senior, P; Imes, S; Gruessner, RW; Shapiro, AMJ; Papas, KK
2014-01-01
Human islet allotransplant (ITx) for the treatment of type 1 diabetes is in phase III clinical registration trials in the US and is standard of care in several other countries. Current islet product release criteria include viability based on cell membrane integrity stains, glucose-stimulated insulin release (GSIR), and islet equivalent (IE) dose based on counts. However, only a fraction of patients transplanted with islets that meet or exceed these release criteria become insulin independent following one transplant. Measurements of islet oxygen consumption rate (OCR) have been reported as highly predictive of transplant outcome in many models. In this paper we report on the assessment of clinical islet allograft preparations using OCR dose (or viable IE dose) and current product release assays in a series of 13 first-transplant recipients. The predictive capability of each assay was examined; a successful clinical transplant outcome (CTO) was defined as 100% insulin independence within 45 days post-transplant. Results showed that OCR dose was most predictive of CTO. IE dose was also highly predictive, while GSIR and membrane integrity stains were not. In conclusion, OCR dose can predict CTO with high specificity and sensitivity and is a useful tool for evaluating islet preparations prior to clinical ITx. PMID:25131089
Can air temperature be used to project influences of climate change on stream temperature?
Arismendi, Ivan; Safeeq, Mohammad; Dunham, Jason B.; Johnson, Sherri L.
2014-01-01
Worldwide, lack of data on stream temperature has motivated the use of regression-based statistical models to predict stream temperatures based on more widely available data on air temperatures. Such models have been widely applied to project responses of stream temperatures under climate change, but the performance of these models has not been fully evaluated. To address this knowledge gap, we examined the performance of two widely used linear and nonlinear regression models that predict stream temperatures based on air temperatures. We evaluated model performance and temporal stability of model parameters in a suite of regulated and unregulated streams with 11–44 years of stream temperature data. Although such models may have validity when predicting stream temperatures within the span of time that corresponds to the data used to develop them, model predictions did not transfer well to other time periods. Validation of model predictions of most recent stream temperatures, based on air temperature–stream temperature relationships from previous time periods often showed poor performance when compared with observed stream temperatures. Overall, model predictions were less robust in regulated streams and they frequently failed in detecting the coldest and warmest temperatures within all sites. In many cases, the magnitude of errors in these predictions falls within a range that equals or exceeds the magnitude of future projections of climate-related changes in stream temperatures reported for the region we studied (between 0.5 and 3.0 °C by 2080). The limited ability of regression-based statistical models to accurately project stream temperatures over time likely stems from the fact that underlying processes at play, namely the heat budgets of air and water, are distinctive in each medium and vary among localities and through time.
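The nonlinear air-to-stream temperature model most widely used in this literature is the S-shaped logistic (Mohseni) form; whether that is the exact nonlinear model evaluated here is an assumption. The sketch below fits it by least squares on synthetic data.

```python
# Mohseni-type logistic fit of stream temperature to air temperature.
import numpy as np
from scipy.optimize import curve_fit

def mohseni(ta, alpha, mu, gamma, beta):
    """Stream temperature as an S-shaped function of air temperature."""
    return mu + (alpha - mu) / (1 + np.exp(gamma * (beta - ta)))

ta = np.linspace(-5, 30, 100)                       # air temps, deg C
rng = np.random.default_rng(4)
ts = mohseni(ta, 22.0, 1.0, 0.25, 14.0) + rng.normal(0, 0.7, ta.size)

popt, _ = curve_fit(mohseni, ta, ts, p0=[20, 0, 0.2, 13])
print("alpha, mu, gamma, beta =", np.round(popt, 2))
# Transferability, not in-sample fit, is the issue the study raises:
# parameters fitted on one period may predict poorly in another.
```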
Antibody-protein interactions: benchmark datasets and prediction tools evaluation
Ponomarenko, Julia V; Bourne, Philip E
2007-01-01
Background The ability to predict antibody binding sites (also known as antigenic determinants or B-cell epitopes) for a given protein is a precursor to new vaccine design and diagnostics. Among the various methods of B-cell epitope identification, X-ray crystallography is one of the most reliable. Using these experimental data, computational methods exist for B-cell epitope prediction. As the number of structures of antibody-protein complexes grows, further interest in prediction methods using 3D structure is anticipated. This work aims to establish a benchmark for 3D structure-based epitope prediction methods. Results Two B-cell epitope benchmark datasets inferred from the 3D structures of antibody-protein complexes were defined. The first is a dataset of 62 representative 3D structures of protein antigens with inferred structural epitopes. The second is a dataset of 82 structures of antibody-protein complexes containing different structural epitopes. Using these datasets, eight web servers developed for antibody and protein binding site prediction have been evaluated. No method exceeded 40% precision and 46% recall. The values of the area under the receiver operating characteristic curve for the evaluated methods were about 0.6 for ConSurf, DiscoTope, and PPI-PRED, and above 0.65 but not exceeding 0.70 for protein-protein docking methods when the best of the top ten models for the bound docking were considered; the remaining methods performed close to random. The benchmark datasets are included as a supplement to this paper. Conclusion It may be possible to improve epitope prediction methods through training on datasets which include only immune epitopes and through utilizing more features characterizing epitopes, for example, the evolutionary conservation score. Notwithstanding, the overall poor performance may reflect the generality of antigenicity and hence the inability to decipher B-cell epitopes as an intrinsic feature of the protein. It is an open question as to whether ultimately discriminatory features can be found. PMID:17910770
Gabbett, Tim J
2010-10-01
Limited information exists on the training dose-response relationship in elite collision sport athletes. In addition, no study has developed an injury prediction model for collision sport athletes. The purpose of this study was to develop an injury prediction model for noncontact, soft-tissue injuries in elite collision sport athletes. Ninety-one professional rugby league players participated in this 4-year prospective study. This study was conducted in 2 phases. First, training load and injury data were prospectively recorded over 2 competitive seasons in elite collision sport athletes. Training load and injury data were modeled using a logistic regression model with a binomial distribution (injury vs. no injury) and logit link function. Second, training load and injury data were prospectively recorded over a further 2 competitive seasons in the same cohort of elite collision sport athletes. An injury prediction model based on planned and actual training loads was developed and implemented to determine whether noncontact, soft-tissue injuries could be predicted and therefore prevented in elite collision sport athletes. Players were 50-80% likely to sustain a preseason injury within the training load range of 3,000-5,000 units. These training load 'thresholds' were considerably reduced (1,700-3,000 units) in the late-competition phase of the season. A total of 159 noncontact, soft-tissue injuries were sustained over the latter 2 seasons. The percentage of true positive predictions was 62.3% (n = 121), whereas the total numbers of false positive and false negative predictions were 20 and 18, respectively. Players who exceeded the training load threshold were 70 times more likely to test positive for noncontact, soft-tissue injury, whereas players who did not exceed the training load threshold were injured 1/10 as often. These findings provide information on the training dose-response relationship and a scientific method of monitoring and regulating training load in elite collision sport athletes.
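The abstract names the model family exactly (binomial GLM with logit link); the sketch below fits that family to simulated load-injury data with statsmodels. The load scale and effect size are invented for illustration.

```python
# Binomial logistic dose-response sketch: injury risk vs training load.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
load = rng.uniform(1000, 6000, 400)                  # arbitrary load units
p_true = 1/(1 + np.exp(-(load - 3500)/600))          # risk rises with load
injured = (rng.random(400) < p_true).astype(float)

fit = sm.GLM(injured, sm.add_constant(load),
             family=sm.families.Binomial()).fit()    # logit link by default
print(fit.params)

# Predicted injury probability at a candidate load threshold:
print(fit.predict(np.array([[1.0, 4000.0]])))
```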
Stochastic modeling of unsteady extinction in turbulent non-premixed combustion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lackmann, T.; Hewson, J. C.; Knaus, R. C.
2016-07-19
Turbulent fluctuations of the scalar dissipation rate have a major impact on extinction in non-premixed combustion. Recently, an unsteady extinction criterion has been developed (Hewson, 2013) that predicts extinction based on the duration and magnitude of dissipation rate fluctuations exceeding a critical quenching value; this quantity is referred to as the dissipation impulse. Furthermore, the magnitude of the dissipation impulse corresponding to unsteady extinction is related to the difficulty with which a flamelet is extinguished, based on the steady-state S-curve.
Falke, Jeffrey A.; Dunham, Jason B.; Hockman-Wert, David; Pahl, Randy
2016-01-01
We provide a simple framework for diagnosing the impairment of stream water temperature for coldwater fishes across broad spatial extents based on a weight-of-evidence approach that integrates biological criteria, species distribution models, and geostatistical models of stream temperature. As a test case, we applied our approach to identify stream reaches most likely to be thermally impaired for Lahontan Cutthroat Trout Oncorhynchus clarkii henshawi in the upper Reese River, located in the northern Great Basin, Nevada. We first evaluated the capability of stream thermal regime descriptors to explain variation across 170 sites, and we found that the 7-d moving average of daily maximum stream temperatures (7DADM) provided minimal among-descriptor redundancy and, based on an upper threshold of 20°C, was also a good indicator of acute and chronic thermal stress. Next, we quantified the range of Lahontan Cutthroat Trout within our study area using a geographic distribution model. Finally, we used a geostatistical model to assess spatial variation in 7DADM and predict potential thermal impairment at the stream reach scale. We found that whereas 38% of reaches in our study area exceeded a 7DADM of 20°C and 35% were significantly warmer than predicted, only 17% both exceeded the biological criterion and were significantly warmer than predicted. This filtering allowed us to identify locations where physical and biological impairment were most likely within the network and that would represent the highest management priorities. Although our approach lacks the precision of more comprehensive approaches, it provides a broader context for diagnosing impairment and is a useful means of identifying priorities for more detailed evaluations across broad and heterogeneous stream networks.
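A brief sketch of the 7DADM computation (7-day moving average of daily maximum stream temperature) and the 20°C screen follows, assuming a pandas Series of hourly temperatures; the synthetic series is for illustration only.

```python
# Compute 7DADM from an hourly temperature series and flag the 20 C criterion.
import numpy as np
import pandas as pd

idx = pd.date_range("2015-07-01", periods=30*24, freq="h")
temp = pd.Series(16 + 5*np.sin(np.arange(idx.size)/24*2*np.pi), index=idx)

daily_max = temp.resample("D").max()            # daily maxima
sevenadm = daily_max.rolling(window=7).mean()   # 7DADM
print("max 7DADM:", round(sevenadm.max(), 2))
print("exceeds 20 C criterion:", bool((sevenadm > 20).any()))
```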
NASA Astrophysics Data System (ADS)
Gariano, S. L.; Brunetti, M. T.; Iovine, G.; Melillo, M.; Peruccacci, S.; Terranova, O.; Vennari, C.; Guzzetti, F.
2015-01-01
Empirical rainfall thresholds are tools to forecast the possible occurrence of rainfall-induced shallow landslides. Accurate prediction of landslide occurrence requires reliable thresholds, which need to be properly validated before their use in operational warning systems. We exploited a catalogue of 200 rainfall conditions that resulted in at least 223 shallow landslides in Sicily, southern Italy, in the 11-year period 2002-2011, to determine regional event duration-cumulated event rainfall (ED) thresholds for shallow landslide occurrence. We computed ED thresholds for different exceedance probability levels and determined the uncertainty associated with the thresholds using a consolidated bootstrap nonparametric technique. We further determined subregional thresholds, and we studied the role of lithology and seasonal periods in the initiation of shallow landslides in Sicily. Next, we validated the regional rainfall thresholds using 29 rainfall conditions that resulted in 42 shallow landslides in Sicily in 2012. We based the validation on contingency tables, skill scores, and a receiver operating characteristic (ROC) analysis for thresholds at different exceedance probability levels, from 1% to 50%. Validation of rainfall thresholds is hampered by lack of information on landslide occurrence. Therefore, we considered the effects of variations in the contingencies and the skill scores caused by lack of information. Based on the results obtained, we propose a general methodology for the objective identification of a threshold that provides an optimal balance between maximization of correct predictions and minimization of incorrect predictions, including missed and false alarms. We expect that the methodology will increase the reliability of rainfall thresholds, fostering the use of validated thresholds in operational early warning systems for regional shallow landslide forecasting.
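For reference, the sketch below computes the standard contingency-table skill scores used in this kind of threshold validation; the counts are illustrative, not the Sicilian catalogue.

```python
# 2x2 contingency table and common skill scores for threshold validation.
tp, fn = 24, 5        # landslides correctly predicted / missed alarms
fp, tn = 40, 300      # false alarms / correct non-events

pod = tp / (tp + fn)                      # probability of detection
pofd = fp / (fp + tn)                     # probability of false detection
ts = tp / (tp + fp + fn)                  # threat score (critical success index)
tss = pod - pofd                          # true skill statistic (Hanssen-Kuipers)
print(f"POD={pod:.2f} POFD={pofd:.2f} TS={ts:.2f} TSS={tss:.2f}")
```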
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.; Shafer, Mary F.
1993-01-01
Aerodynamic and aerothermodynamic comparisons between flight and ground test for the Space Shuttle at hypersonic speeds are discussed. All of the comparisons are taken from papers published by researchers active in the Space Shuttle program. The aerodynamic comparisons include stability and control derivatives, center-of-pressure location, and reaction control jet interaction. Comparisons are also discussed for various forms of heating, including catalytic, boundary layer, top centerline, side fuselage, OMS pod, wing leading edge, and shock interaction. The jet interaction and center-of-pressure location flight values exceeded not only the predictions but also the uncertainties of the predictions. Predictions were significantly exceeded for the heating caused by the vortex impingement on the OMS pods and for heating caused by the wing leading-edge shock interaction.
Flood-frequency characteristics of Wisconsin streams
Walker, John F.; Peppler, Marie C.; Danz, Mari E.; Hubbard, Laura E.
2017-05-22
Flood-frequency characteristics for 360 gaged sites on unregulated rural streams in Wisconsin are presented for percent annual exceedance probabilities ranging from 0.2 to 50 using a statewide skewness map developed for this report. Equations of the relations between flood frequency and drainage-basin characteristics were developed by multiple-regression analyses. Flood-frequency characteristics for ungaged sites on unregulated, rural streams can be estimated by use of the equations presented in this report. The State was divided into eight areas of similar physiographic characteristics. The most significant basin characteristics are drainage area, soil saturated hydraulic conductivity, main-channel slope, and several land-use variables. The standard error of prediction for the equation for the 1-percent annual exceedance probability flood ranges from 56 to 70 percent for Wisconsin streams; these values are larger than results presented in previous reports. The increase in the standard error of prediction is likely due to increased variability of the annual-peak discharges, resulting in increased variability in the magnitude of flood peaks at higher frequencies. For each of the unregulated rural streamflow-gaging stations, a weighted estimate based on the at-site log Pearson Type III analysis and the multiple-regression results was determined. The weighted estimate generally has a lower uncertainty than either the log Pearson Type III or multiple-regression estimates. For regulated streams, a graphical method for estimating flood-frequency characteristics was developed from the relations of discharge and drainage area for selected annual exceedance probabilities. Graphs for the major regulated streams in Wisconsin are presented in the report.
Predicting Treatment Response in Social Anxiety Disorder From Functional Magnetic Resonance Imaging
Doehrmann, Oliver; Ghosh, Satrajit S.; Polli, Frida E.; Reynolds, Gretchen O.; Horn, Franziska; Keshavan, Anisha; Triantafyllou, Christina; Saygin, Zeynep M.; Whitfield-Gabrieli, Susan; Hofmann, Stefan G.; Pollack, Mark; Gabrieli, John D.
2013-01-01
Context Current behavioral measures poorly predict treatment outcome in social anxiety disorder (SAD). To our knowledge, this is the first study to examine neuroimaging-based treatment prediction in SAD. Objective To measure brain activation in patients with SAD as a biomarker to predict subsequent response to cognitive behavioral therapy (CBT). Design Functional magnetic resonance imaging (fMRI) data were collected prior to CBT intervention. Changes in clinical status were regressed on brain responses and tested for selectivity for social stimuli. Setting Patients were treated with protocol-based CBT at anxiety disorder programs at Boston University or Massachusetts General Hospital and underwent neuroimaging data collection at Massachusetts Institute of Technology. Patients Thirty-nine medication-free patients meeting DSM-IV criteria for the generalized subtype of SAD. Interventions Brain responses to angry vs neutral faces or emotional vs neutral scenes were examined with fMRI prior to initiation of CBT. Main Outcome Measures Whole-brain regression analyses with differential fMRI responses for angry vs neutral faces and changes in Liebowitz Social Anxiety Scale score as the treatment outcome measure. Results Pretreatment responses significantly predicted subsequent treatment outcome of patients selectively for social stimuli and particularly in regions of higher-order visual cortex. Combining the brain measures with information on clinical severity accounted for more than 40% of the variance in treatment response and substantially exceeded predictions based on clinical measures at baseline. Prediction success was unaffected by testing for potential confounding factors such as depression severity at baseline. Conclusions The results suggest that brain imaging can provide biomarkers that substantially improve predictions for the success of cognitive behavioral interventions and more generally suggest that such biomarkers may offer evidence-based, personalized medicine approaches for optimally selecting among treatment options for a patient. PMID:22945462
de Vries, W; McLaughlin, M J
2013-09-01
The historical build-up and future cadmium (Cd) concentrations in topsoils and in crops of four Australian agricultural systems are predicted with a mass balance model, focusing on the period 1900-2100. The systems include a rotation of dryland cereals, a rotation of sugarcane and peanuts/soybean, intensive dairy production, and intensive horticulture. The input of Cd to soil is calculated from fertilizer application and atmospheric deposition; the analysis also examines options including biosolid and animal manure application in the sugarcane rotation and dryland cereal production systems. Cadmium output from the soil is calculated from leaching to deeper horizons and removal with the harvested crop or with livestock products. Parameter values for all Cd fluxes were based on a number of measurements on Australian soil-plant systems. In the period 1900-2000, soil Cd concentrations were predicted to increase on average by 0.21 mg kg(-1) in dryland cereals, 0.42 mg kg(-1) in intensive agriculture, and 0.68 mg kg(-1) in dairy production, which are within the range of measured increases in soils in these systems. Predicted soil concentrations exceed critical soil Cd concentrations, based on food quality criteria for Cd in crops, during the simulation period in clay-rich soils under dairy production and intensive horticulture. Predicted dissolved Cd concentrations in soil pore water exceed a ground water quality criterion of 2 μg l(-1) in light-textured soils, except for the sugarcane rotation due to large water leaching fluxes. Results suggest that the present fertilizer Cd inputs in Australia are in excess of the long-term critical loads in heavy-textured soils for dryland cereals and that all other systems are at low risk. Calculated critical Cd/P ratios in P fertilizers vary from <50 to >1000 mg Cd kg P(-1) for the different soil, crop and environmental conditions applied.
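A toy version of the annual mass balance illustrates the bookkeeping; every rate below is an assumption chosen for illustration, not the model's Australian parameterization.

```python
# Toy annual Cd mass balance for a plough layer:
# d(concentration)/dt = (inputs - crop offtake - leaching) / soil mass.
soil_kg_ha = 1.3e6               # 0-10 cm layer at ~1.3 t/m3 (assumed)
cd = 0.05                        # initial topsoil Cd, mg/kg (assumed)

for year in range(1900, 2001):
    inputs = 4.0                 # fertilizer + deposition, g Cd/ha/yr (assumed)
    offtake = 0.6                # crop removal, g Cd/ha/yr (assumed)
    pool_g_ha = cd * soil_kg_ha / 1000.0
    leaching = 0.01 * pool_g_ha  # 1%/yr of the pool (assumed)
    cd += (inputs - offtake - leaching) * 1000.0 / soil_kg_ha

print(f"predicted topsoil Cd in 2000: {cd:.2f} mg/kg")  # ~0.26 here
```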
NASA Astrophysics Data System (ADS)
Delorit, Justin; Cristian Gonzalez Ortuya, Edmundo; Block, Paul
2017-09-01
In many semi-arid regions, multisectoral demands often stress available water supplies. Such is the case in the Elqui River valley of northern Chile, which draws on a limited-capacity reservoir to allocate 25 000 water rights. Delayed infrastructure investment forces water managers to address demand-based allocation strategies, particularly in dry years, which are realized through reductions in the volume associated with each water right. Skillful season-ahead streamflow forecasts have the potential to inform managers with an indication of future conditions to guide reservoir allocations. This work evaluates season-ahead statistical prediction models of October-January (growing season) streamflow at multiple lead times associated with manager and user decision points, and links predictions with a reservoir allocation tool. Skillful results (streamflow forecasts outperform climatology) are produced for short lead times (1 September: ranked probability skill score (RPSS) of 0.31, categorical hit skill score of 61 %). At longer lead times, climatological skill exceeds forecast skill due to fewer observations of precipitation. However, coupling the 1 September statistical forecast model with a sea surface temperature phase and strength statistical model allows for equally skillful categorical streamflow forecasts to be produced for a 1 May lead, triggered for 60 % of years (1950-2015), suggesting forecasts need not be strictly deterministic to be useful for water rights holders. An early (1 May) categorical indication of expected conditions is reinforced with a deterministic forecast (1 September) as more observations of local variables become available. The reservoir allocation model is skillful at the 1 September lead (categorical hit skill score of 53 %); skill improves to 79 % when categorical allocation prediction certainty exceeds 80 %. This result implies that allocation efficiency may improve when forecasts are integrated into reservoir decision frameworks. The methods applied here advance the understanding of the mechanisms and timing responsible for moisture transport to the Elqui Valley and provide a unique application of streamflow forecasting in the prediction of water right allocations.
Opioid Receptors Mediate Direct Predictive Fear Learning: Evidence from One-Trial Blocking
ERIC Educational Resources Information Center
Cole, Sindy; McNally, Gavan P.
2007-01-01
Pavlovian fear learning depends on predictive error, so that fear learning occurs when the actual outcome of a conditioning trial exceeds the expected outcome. Previous research has shown that opioid receptors, including μ-opioid receptors in the ventrolateral quadrant of the midbrain periaqueductal gray (vlPAG), mediate such predictive fear…
Kim, Ji-Hoon; Kang, Wee-Soo; Yun, Sung-Chul
2014-06-01
A population model of bacterial spot caused by Xanthomonas campestris pv. vesicatoria on hot pepper was developed to predict the primary disease infection date. The model estimated the pathogen population on the surface and within the leaf of the host based on the wetness period and temperature. For successful infection, at least 5,000 cells/ml of the bacterial population were required. Also, wind and rain were necessary according to regression analyses of the monitored data. In the model, bacterial spot is initiated when the pathogen population exceeds 10(15) cells/g within the leaf. The developed model was validated using 94 assessed samples from 2000 to 2007 obtained from monitored fields. Based on the validation study, the predicted initial infection dates varied with the year rather than the location. Differences in initial infection dates between the model predictions and the monitored data in the field were minimal. For example, predicted infection dates for 7 locations were within the same month as the actual infection dates, 11 locations were within 1 month of the actual infection, and only 3 locations were more than 2 months apart from the actual infection. The predicted infection dates were mapped from 2009 to 2012; 2011 was the most severe year. Although the model was not sensitive enough to predict disease severity of less than 0.1% in the field, our model predicted bacterial spot severity of 1% or more. Therefore, this model can be applied in the field to determine when bacterial spot control is required.
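A highly simplified sketch of this kind of threshold-based infection model follows: a surface population grows during favorable warm, wet days, and infection is declared once the inoculum threshold is crossed on a wet day. The growth rate, temperature window, and weather series are invented, so this mirrors only the spirit of the model, not its equations.

```python
# Toy threshold infection model driven by daily temperature and wetness.
import numpy as np

rng = np.random.default_rng(6)
days = 120
temp = 18 + 8*np.sin(np.arange(days)/days*np.pi) + rng.normal(0, 2, days)
wet = rng.random(days) < 0.3          # leaf-wetness days (rain/dew)

pop = 1e2                             # cells/ml on the leaf surface
infection_day = None
for d in range(days):
    if wet[d] and 15 < temp[d] < 30:  # favorable wetness and temperature
        pop = min(pop * 10, 1e16)     # one log of growth per favorable day
    if infection_day is None and pop >= 5e3 and wet[d]:
        infection_day = d             # inoculum threshold reached
print("predicted first infection on day:", infection_day)
```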
Predicting Energy Performance of a Net-Zero Energy Building: A Statistical Approach
Kneifel, Joshua; Webb, David
2016-01-01
Performance-based building requirements have become more prevalent because they give designers freedom while still maintaining or exceeding the energy performance required by prescriptive-based requirements. In order to determine if building designs reach target energy efficiency improvements, it is necessary to estimate the energy performance of a building using predictive models and different weather conditions. Physics-based whole building energy simulation modeling is the most common approach. However, these physics-based models include underlying assumptions and require significant amounts of information in order to specify the input parameter values. An alternative approach to test the performance of a building is to develop a statistically derived predictive regression model using post-occupancy data that can accurately predict energy consumption and production based on a few common weather-based factors, thus requiring less information than simulation models. A regression model based on measured data should be able to predict energy performance of a building for a given day as long as the weather conditions are similar to those during the data collection time frame. This article uses data from the National Institute of Standards and Technology (NIST) Net-Zero Energy Residential Test Facility (NZERTF) to develop and validate a regression model to predict the energy performance of the NZERTF using two weather variables aggregated to the daily level, applies the model to estimate the energy performance of hypothetical NZERTFs located in different cities in the Mixed-Humid climate zone, and compares these estimates to the results from already existing EnergyPlus whole building energy simulations. This regression model exhibits agreement with EnergyPlus predictive trends in energy production and net consumption, but differs greatly in energy consumption. The model can be used as a framework for alternative and more complex models based on the experimental data collected from the NZERTF. PMID:27956756
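A minimal sketch of such a daily regression model follows; the two weather predictors used here (heating degree-days and insolation) and all coefficients are assumptions standing in for the NZERTF's actual daily weather variables.

```python
# Daily energy regression sketch: net consumption vs two weather aggregates.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 365
hdd = np.clip(18 - rng.normal(12, 8, n), 0, None)    # heating degree-days
solar = np.clip(rng.uniform(1, 7, n), 0, None)       # daily insolation, kWh/m2
X = np.column_stack([hdd, solar])
net_kwh = 2.0*hdd - 3.5*solar + rng.normal(0, 2, n)  # synthetic daily net energy

reg = LinearRegression().fit(X, net_kwh)
print("coefficients:", reg.coef_, "R^2:", round(reg.score(X, net_kwh), 2))
```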
Climate-Based Models for Understanding and Forecasting Dengue Epidemics
Descloux, Elodie; Mangeas, Morgan; Menkes, Christophe Eugène; Lengaigne, Matthieu; Leroy, Anne; Tehei, Temaui; Guillaumot, Laurent; Teurlai, Magali; Gourinat, Ann-Claire; Benzler, Justus; Pfannstiel, Anne; Grangeon, Jean-Paul; Degallier, Nicolas; De Lamballerie, Xavier
2012-01-01
Background Dengue dynamics are driven by complex interactions between human-hosts, mosquito-vectors and viruses that are influenced by environmental and climatic factors. The objectives of this study were to analyze and model the relationships between climate, Aedes aegypti vectors and dengue outbreaks in Noumea (New Caledonia), and to provide an early warning system. Methodology/Principal Findings Epidemiological and meteorological data were analyzed from 1971 to 2010 in Noumea. Entomological surveillance indices were available from March 2000 to December 2009. During epidemic years, the distribution of dengue cases was highly seasonal. The epidemic peak (March–April) lagged the warmest temperature by 1–2 months and was in phase with maximum precipitations, relative humidity and entomological indices. Significant inter-annual correlations were observed between the risk of outbreak and summertime temperature, precipitations or relative humidity but not ENSO. Climate-based multivariate non-linear models were developed to estimate the yearly risk of dengue outbreak in Noumea. The best explicative meteorological variables were the number of days with maximal temperature exceeding 32°C during January–February–March and the number of days with maximal relative humidity exceeding 95% during January. The best predictive variables were the maximal temperature in December and maximal relative humidity during October–November–December of the previous year. For a probability of dengue outbreak above 65% in leave-one-out cross validation, the explicative model predicted 94% of the epidemic years and 79% of the non epidemic years, and the predictive model 79% and 65%, respectively. Conclusions/Significance The epidemic dynamics of dengue in Noumea were essentially driven by climate during the last forty years. Specific conditions based on maximal temperature and relative humidity thresholds were determinant in outbreaks occurrence. Their persistence was also crucial. An operational model that will enable health authorities to anticipate the outbreak risk was successfully developed. Similar models may be developed to improve dengue management in other countries. PMID:22348154
A new method for determining a sector alert
DOT National Transportation Integrated Search
2008-09-29
The Traffic Flow Management System (TFMS) currently declares an alert for any 15-minute interval in which the predicted demand exceeds the Monitor/Alert Parameter (MAP) for any airport, sector, or fix. For a sector, TFMS predicts the demand for each ...
A spatio-temporal model for probabilistic seismic hazard zonation of Tehran
NASA Astrophysics Data System (ADS)
Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza
2013-08-01
A precondition for all disaster management steps, building damage prediction, and construction code development is a hazard assessment that shows the exceedance probabilities of different ground motion levels at a site, considering different near- and far-field earthquake sources. The seismic sources are usually categorized as time-independent area sources and time-dependent fault sources. While the former incorporate small and medium events, the latter take into account only large characteristic earthquakes. In this article, a probabilistic approach is proposed to aggregate the effects of time-dependent and time-independent sources on seismic hazard. The methodology is then applied to generate three probabilistic seismic hazard maps of Tehran for 10%, 5%, and 2% exceedance probabilities in 50 years. The results indicate an increase in peak ground acceleration (PGA) values toward the southeastern part of the study area, and the PGA variations are mostly controlled by the shear wave velocities across the city. In addition, the implementation of the methodology takes advantage of GIS capabilities, especially raster-based analyses and representations. During the estimation of the PGA exceedance rates, the emphasis has been placed on incorporating the effects of different attenuation relationships and seismic source models by using a logic tree.
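Under the usual Poisson assumption, these map probabilities correspond to fixed return periods via P = 1 - exp(-t/T), so T = -t / ln(1 - P). A short computation makes the conversion explicit.

```python
# Convert "P% exceedance in t years" to return periods (Poisson assumption).
import math

t = 50.0                                   # exposure time, years
for p in (0.10, 0.05, 0.02):
    T = -t / math.log(1.0 - p)
    print(f"{p:.0%} in {t:.0f} yr -> return period ~ {T:,.0f} yr")
# Yields ~475, ~975, and ~2475 years, the familiar design hazard levels.
```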
A global probabilistic tsunami hazard assessment from earthquake sources
Davies, Gareth; Griffin, Jonathan; Lovholt, Finn; Glimsdal, Sylfest; Harbitz, Carl; Thio, Hong Kie; Lorito, Stefano; Basili, Roberto; Selva, Jacopo; Geist, Eric L.; Baptista, Maria Ana
2017-01-01
Large tsunamis occur infrequently but have the capacity to cause enormous numbers of casualties, damage to the built environment and critical infrastructure, and economic losses. A sound understanding of tsunami hazard is required to underpin management of these risks, and while tsunami hazard assessments are typically conducted at regional or local scales, globally consistent assessments are required to support international disaster risk reduction efforts, and can serve as a reference for local and regional studies. This study presents a global-scale probabilistic tsunami hazard assessment (PTHA), extending previous global-scale assessments based largely on scenario analysis. Only earthquake sources are considered, as they represent about 80% of the recorded damaging tsunami events. Globally extensive estimates of tsunami run-up height are derived at various exceedance rates, and the associated uncertainties are quantified. Epistemic uncertainties in the exceedance rates of large earthquakes often lead to large uncertainties in tsunami run-up. Deviations between modelled tsunami run-up and event observations are quantified, and found to be larger than suggested in previous studies. Accounting for these deviations in PTHA is important, as it leads to a pronounced increase in predicted tsunami run-up for a given exceedance rate.
Wu, Jinglan; Zhuang, Wei; Ying, Hanjie; Jiao, Pengfei; Li, Renjie; Wen, Qingshi; Wang, Lili; Zhou, Jingwei; Yang, Pengpeng
2015-01-01
Separation of butanol from acetone-butanol-ethanol (ABE) fermentation broth by sorption has advantages in terms of biocompatibility, stability, and economy, and has therefore gained much attention. In this work a chromatographic column model based on the solid film linear driving force approach and the competitive Langmuir isotherm equations was used to predict the competitive sorption behavior of ABE single, binary, and ternary mixtures. It was observed that the outlet concentration of the more weakly retained components exceeded the inlet concentration, which is evidence of competitive adsorption. Butanol, the most strongly retained component, could replace ethanol almost completely, as well as most of the acetone. Finally, the proposed model was validated by comparing experimental and predicted ABE ternary breakthrough curves using real ABE fermentation broth as the feed solution. © 2014 American Institute of Chemical Engineers.
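A minimal sketch of the competitive Langmuir isotherm underlying the column model, q_i = qmax_i K_i c_i / (1 + sum_j K_j c_j); the parameter values are placeholders, not the fitted ABE constants reported by the authors.

def competitive_langmuir(c, qmax, K):
    """Solid-phase loadings q_i for liquid concentrations c_i.

    The shared denominator couples the components: a high-affinity
    species (large K) suppresses the loading of the others.
    """
    denom = 1.0 + sum(Kj * cj for Kj, cj in zip(K, c))
    return [qm * Kj * cj / denom for qm, Kj, cj in zip(qmax, K, c)]

# acetone, butanol, ethanol: butanol is given the largest affinity K, so it
# displaces the weaker-retained components as the column loads
print(competitive_langmuir(c=[5.0, 10.0, 2.0],
                           qmax=[120.0, 150.0, 90.0],
                           K=[0.05, 0.30, 0.02]))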
Bao, Yi; Chen, Yizheng; Hoehler, Matthew S; Smith, Christopher M; Bundy, Matthew; Chen, Genda
2017-01-01
This paper presents high temperature measurements using a Brillouin scattering-based fiber optic sensor and the application of the measured temperatures and building code recommended material parameters in enhanced thermomechanical analysis of simply supported steel beams subjected to combined thermal and mechanical loading. The distributed temperature sensor captures detailed, nonuniform temperature distributions that agree locally with thermocouple measurements to within a 4.7% average difference at the 95% confidence level. The simulated strains and deflections are validated using measurements from a second distributed fiber optic (strain) sensor and two linear potentiometers, respectively. The results demonstrate that the temperature-dependent material properties specified in the four investigated building codes lead to strain predictions with less than 13% average error at the 95% confidence level and that the European building code provided the best predictions. However, the European code's implicit consideration of creep is insufficient when the beam temperature exceeds 800°C.
Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.
2016-08-04
In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized-least squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated based on multicollinearity with other basin characteristics (variance inflation factor < 2.5) and statistical significance at the 95-percent confidence level (p ≤ 0.05). Generalized least-squares regression was used to develop the final regression models for each flood region. Average standard errors of prediction of the generalized least-squares models ranged from 32.76 to 59.53 percent, with the largest range in flood region D. Pseudo coefficients of determination of the generalized least-squares models ranged from 90.29 to 97.28 percent, with the largest range also in flood region D. The regional regression equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization. The applicability and accuracy of the regional regression equations depend on the basin characteristics measured for an ungaged location on a stream being within range of those used to develop the equations.
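The variable-screening step described above can be sketched as follows (statsmodels assumed); the basin characteristics and data are synthetic stand-ins, while the VIF < 2.5 and p ≤ 0.05 cutoffs are those stated in the abstract.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
X = pd.DataFrame({"log_drainage_area": rng.normal(2, 0.5, 60),
                  "mean_elevation": rng.normal(300, 50, 60),
                  "pct_forest": rng.uniform(20, 90, 60)})
y = 1.5 * X["log_drainage_area"] + 0.01 * X["mean_elevation"] + rng.normal(0, 0.3, 60)

# Screen candidate explanatory variables for multicollinearity
vifs = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
keep = [c for c, v in zip(X.columns, vifs) if v < 2.5]

# Ordinary least squares on the retained variables; keep those with p <= 0.05
fit = sm.OLS(y, sm.add_constant(X[keep])).fit()
print(dict(zip(X.columns, np.round(vifs, 2))))
print(fit.pvalues.round(4))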
Can brook trout survive climate change in large rivers? If it rains.
Merriam, Eric R; Fernandez, Rodrigo; Petty, J Todd; Zegre, Nicolas
2017-12-31
We provide an assessment of thermal characteristics and climate change vulnerability for brook trout (Salvelinus fontinalis) habitats in the upper Shavers Fork sub-watershed, West Virginia. Spatial and temporal (2001-2015) variability in observed summer (6/1-8/31) stream temperatures was quantified in 23 (9 tributary, 14 main-stem) reaches. We developed a mixed effects model to predict site-specific mean daily stream temperature from air temperature and discharge and coupled this model with a hydrologic model to predict future (2016-2100) changes in stream temperature under low (RCP 4.5) and high (RCP 8.5) emissions scenarios. Observed mean daily stream temperature exceeded the 21°C brook trout physiological threshold in all but one main-stem site, and 3 sites exceeded proposed thermal limits for either the 63- or 7-day mean stream temperature. We modeled mean daily stream temperature with a high degree of certainty (R² = 0.93; RMSE = 0.76°C). Predicted increases in mean daily stream temperature in main-stem and tributary reaches ranged from 0.2°C (RCP 4.5) to 1.2°C (RCP 8.5). Between 2091 and 2100, the average number of days with mean daily stream temperature >21°C increased within main-stem sites under the RCP 4.5 (0-1.2 days) and 8.5 (0-13 days) scenarios; however, no site is expected to exceed the 63- or 7-day thermal limits. During the warmest 10 years, ≥5 main-stem sites exceeded the 63- or 7-day thermal tolerance limits under both climate emissions scenarios. Years with the greatest increases in stream temperature were characterized by low mean daily discharge. Main-stem reaches below major tributaries never exceeded thermal limits, despite neighboring reaches having among the highest observed and predicted stream temperatures. Persistence of thermal refugia within upper Shavers Fork would enable persistence of metapopulation structure and life history processes. However, this will only be possible if projected increases in discharge are realized and offset expected increases in air temperature. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Reeves, K. L.; Samson, C.; Summers, R. S.; Balaji, R.
2017-12-01
Drinking water treatment utilities (DWTUs) are tasked with the challenge of meeting disinfection and disinfection byproduct (DBP) regulations to provide safe, reliable drinking water under changing climate and land surface characteristics. DBPs form in drinking water when disinfectants, commonly chlorine, react with organic matter as measured by total organic carbon (TOC); physical removal of pathogenic microorganisms is achieved by filtration and monitored by turbidity removal. Turbidity and TOC in influent waters to DWTUs are expected to increase due to variable climate and more frequent fires and droughts. Traditional methods for forecasting turbidity and TOC require catchment-specific data (e.g., streamflow) and have difficulty predicting them under a non-stationary climate. A modelling framework was developed to assist DWTUs in assessing their risk of future compliance with disinfection and DBP regulations under changing climate. A local polynomial method was developed to predict surface water TOC using climate data collected from NOAA, Normalized Difference Vegetation Index (NDVI) data from the IRI Data Library, and historical TOC data from three DWTUs in diverse geographic locations. Characteristics from the DWTUs were used in the EPA Water Treatment Plant model to determine thresholds for influent TOC that resulted in DBP concentrations within compliance. Lastly, extreme value theory was used to predict probabilities of threshold exceedances under the current climate. Results from the utilities were used to produce a generalized TOC threshold approach that requires only water temperature and bromide concentration. The threshold exceedance model will be used to estimate probabilities of exceedances under projected climate scenarios. Initial results show that TOC can be forecast from widely available data via statistical methods: temperature, precipitation, the Palmer Drought Severity Index, and NDVI at various lags were shown to be important predictors of TOC, and TOC thresholds can be determined using water temperature and bromide concentration. Results include a model to predict influent turbidity and turbidity thresholds, similar to the TOC models, as well as probabilities of threshold exceedances for TOC and turbidity under changing climate.
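A minimal sketch of the extreme value theory step, assuming SciPy: fit a generalized Pareto distribution to TOC excesses over a utility-specific threshold and estimate an exceedance probability. The threshold and the synthetic data are illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
toc = rng.lognormal(mean=1.0, sigma=0.4, size=2000)  # mg/L, synthetic influent TOC
threshold = 4.0                                      # utility-specific TOC threshold

# Peaks-over-threshold: model excesses above the threshold with a GPD
excesses = toc[toc > threshold] - threshold
shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)

# P(TOC > x) = P(TOC > u) * P(excess > x - u | TOC > u)
p_over_u = (toc > threshold).mean()
x = 6.0
p_exceed = p_over_u * stats.genpareto.sf(x - threshold, shape, loc=0.0, scale=scale)
print(f"P(TOC > {x} mg/L) ~ {p_exceed:.4f}")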
Pohlmann, André; Hameyer, Kay
2012-01-01
Ventricular Assist Devices (VADs) are mechanical blood pumps that support the human heart in order to maintain a sufficient perfusion of the human body and its organs. During VAD operation blood damage caused by hemolysis, thrombogenecity and denaturation has to be avoided. One key parameter causing the blood's denaturation is its temperature which must not exceed 42 °C. As a temperature rise can be directly linked to the losses occuring in the drive system, this paper introduces an efficiency prediction chain for Brushless DC (BLDC) drives which are applied in various VAD systems. The presented chain is applied to various core materials and operation ranges, providing a general overview on the loss dependencies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seinfeld, John H.
Organic material constitutes about 50% of global atmospheric aerosol mass, and the dominant source of organic aerosol is the oxidation of volatile hydrocarbons to produce secondary organic aerosol (SOA). Understanding the formation of SOA is crucial to predicting present and future climate effects of atmospheric aerosols. The goal of this program is to significantly increase our understanding of secondary organic aerosol (SOA) formation in the atmosphere. Ambient measurements indicate that the amount of SOA in the atmosphere exceeds that predicted in current models based on existing laboratory chamber data. This would suggest that either the SOA yields measured in laboratory chambers are understated or that all major organic precursors have not been identified. In this research program we are systematically exploring these possibilities.
McNulty, Steven G; Cohen, Erika C; Moore Myers, Jennifer A; Sullivan, Timothy J; Li, Harbin
2007-10-01
Concern regarding the impacts of continued nitrogen and sulfur deposition on ecosystem health has prompted the development of critical acid load assessments for forest soils. A critical acid load is a quantitative estimate of exposure to one or more pollutants at or above which harmful acidification-related effects on sensitive elements of the environment occur. A pollutant load in excess of a critical acid load is termed exceedance. This study combined a simple mass balance equation with national-scale databases to estimate critical acid load and exceedance for forest soils at a 1-km² spatial resolution across the conterminous US. This study estimated that about 15% of US forest soils are in exceedance of their critical acid load by more than 250 eq ha⁻¹ yr⁻¹, including much of New England and West Virginia. Very few areas of exceedance were predicted in the western US.
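The exceedance computation implied by these definitions is a simple difference between the deposited acid load and the critical acid load in each grid cell, as in this sketch; the loads are placeholders in eq/ha/yr, while the 250 eq ha⁻¹ yr⁻¹ severity cutoff is taken from the abstract.

import numpy as np

deposition = np.array([800.0, 450.0, 1200.0, 300.0])    # N + S acid load per cell
critical_load = np.array([400.0, 500.0, 700.0, 600.0])  # simple mass balance estimate

exceedance = deposition - critical_load                 # > 0 means critical load exceeded
severe = exceedance > 250.0                             # severity cutoff from the study
print(exceedance, severe)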
Predicting Item Difficulty of Science National Curriculum Tests: The Case of Key Stage 2 Assessments
ERIC Educational Resources Information Center
El Masri, Yasmine H.; Ferrara, Steve; Foltz, Peter W.; Baird, Jo-Anne
2017-01-01
Predicting item difficulty is highly important in education for both teachers and item writers. Despite identifying a large number of explanatory variables, predicting item difficulty remains a challenge in educational assessment with empirical attempts rarely exceeding 25% of variance explained. This paper analyses 216 science items of key stage…
Predicting turns in proteins with a unified model.
Song, Qi; Li, Tonghua; Cong, Peisheng; Sun, Jiangming; Li, Dapeng; Tang, Shengnan
2012-01-01
Turns are a critical element of the structure of a protein; turns play a crucial role in loops, folds, and interactions. Current prediction methods are well developed for the prediction of individual turn types, including α-turn, β-turn, and γ-turn. However, for further protein structure and function prediction it is necessary to develop a uniform model that can accurately predict all types of turns simultaneously. In this study, we present a novel approach, TurnP, which offers the ability to investigate all the turns in a protein based on a unified model. The main characteristics of TurnP are: (i) using newly exploited features of structural evolution information (secondary structure and shape string of protein) based on structure homologies, (ii) considering all types of turns in a unified model, and (iii) practical capability of accurate prediction of all turns simultaneously for a query. TurnP utilizes predicted secondary structures and predicted shape strings, both of which have high accuracy, based on technologies developed by our group. Sequence and structural evolution features (the profile of the sequence, the profile of secondary structures, and the profile of shape strings) are then generated by sequence and structure alignment. When TurnP was validated on a non-redundant dataset (4,107 entries) by five-fold cross-validation, we achieved an accuracy of 88.8% and a sensitivity of 71.8%, exceeding state-of-the-art predictors of individual turn types. Newly determined sequences and the EVA and CASP9 datasets were used as independent tests, and the results confirmed the good performance of TurnP for practical applications.
Laidlaw, Mark A S; Mohmmad, Shaike M; Gulson, Brian L; Taylor, Mark P; Kristensen, Louise J; Birch, Gavin
2017-07-01
Surface soils in portions of the Sydney (New South Wales, Australia) urban area are contaminated with lead (Pb) primarily from past use of Pb in gasoline, the deterioration of exterior lead-based paints, and industrial activities. Surface soil samples (n = 341) were collected from a depth of 0-2.5 cm at a density of approximately one sample per square kilometre within the Sydney estuary catchment and analysed for lead. The bioaccessibility of soil Pb was analysed in 18 samples. The blood lead level (BLL) of a hypothetical 24-month-old child was predicted at soil sampling sites in residential and open land use using the United States Environmental Protection Agency (US EPA) Integrated Exposure Uptake and Biokinetic (IEUBK) model. Other environmental exposures used the Australian National Environmental Protection Measure (NEPM) default values. The IEUBK model predicted a geometric mean BLL of 2.0 ± 2.1 µg/dL using measured soil lead bioavailability (bioavailability = 34%) and 2.4 ± 2.8 µg/dL using the Australian NEPM default assumption (bioavailability = 50%). Assuming children were present and residing at the sampling locations, the IEUBK model incorporating soil Pb bioavailability predicted that 5.6% of the children at the sampling locations could potentially have BLLs exceeding 5 µg/dL and 2.1% could potentially have BLLs exceeding 10 µg/dL. These estimates are consistent with BLLs previously measured in children in Sydney. Copyright © 2017 Elsevier Inc. All rights reserved.
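Exceedance fractions of this kind come from lognormal blood lead distributions, as in the sketch below (SciPy assumed). The geometric mean is taken from the abstract, the geometric standard deviation of 1.6 is the IEUBK default for inter-individual variability, and the resulting tail fractions are illustrative rather than a reproduction of the study's site-by-site estimates.

import numpy as np
from scipy import stats

gm = 2.0   # ug/dL, predicted geometric mean BLL (from the abstract)
gsd = 1.6  # IEUBK default geometric SD for inter-individual variability

# Lognormal BLL distribution parameterized by geometric mean and geometric SD
dist = stats.lognorm(s=np.log(gsd), scale=gm)

for level in (5.0, 10.0):
    print(f"P(BLL > {level:g} ug/dL) = {dist.sf(level):.1%}")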
Secondary Structure Predictions for Long RNA Sequences Based on Inversion Excursions and MapReduce.
Yehdego, Daniel T; Zhang, Boyu; Kodimala, Vikram K R; Johnson, Kyle L; Taufer, Michela; Leung, Ming-Ying
2013-05-01
Secondary structures of ribonucleic acid (RNA) molecules play important roles in many biological processes including gene expression and regulation. Experimental observations and computing limitations suggest that we can approach the secondary structure prediction problem for long RNA sequences by segmenting them into shorter chunks, predicting the secondary structure of each chunk individually using existing prediction programs, and then assembling the results to give the structure of the original sequence. The selection of cutting points is a crucial component of the segmenting step. Noting that stem-loops and pseudoknots always contain an inversion, i.e., a stretch of nucleotides followed closely by its inverse complementary sequence, we developed two cutting methods for segmenting long RNA sequences based on inversion excursions: the centered and the optimized methods. Each step of searching for inversions, chunking, and prediction can be performed in parallel. In this paper we use a MapReduce framework, i.e., Hadoop, to extensively explore meaningful inversion stem lengths and gap sizes for the segmentation and to identify correlations between chunking methods and prediction accuracy. We show that for a set of long RNA sequences in the RFAM database, whose secondary structures are known to contain pseudoknots, our approach predicts secondary structures more accurately than methods that do not segment the sequence, when the latter predictions are computationally possible. We also show that, as sequences exceed certain lengths, some programs cannot computationally predict pseudoknots while our chunking methods can. Overall, our predicted structures still retain the accuracy level of the original prediction programs when compared with known experimental secondary structures.
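A minimal sketch of the inversion search that drives the cutting methods: scan for a stem followed within a bounded gap by its reverse complement. The stem length and gap size parameters are illustrative; the paper explores these choices systematically via Hadoop.

COMP = str.maketrans("ACGU", "UGCA")

def find_inversions(seq, stem_len=6, max_gap=40):
    """Return (stem_start, gap_start, gap_len) for each stem whose reverse
    complement occurs within max_gap nucleotides downstream."""
    hits = []
    for i in range(len(seq) - 2 * stem_len):
        stem = seq[i:i + stem_len]
        rc = stem.translate(COMP)[::-1]          # reverse complement of the stem
        window = seq[i + stem_len:i + stem_len + max_gap + stem_len]
        j = window.find(rc)
        if j != -1:
            hits.append((i, i + stem_len, j))    # candidate cutting region
    return hits

print(find_inversions("GGGAAACCCUUUCGGGAAAGCCC", stem_len=3, max_gap=10))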
Physiologically Based Pharmacokinetic (PBPK) Modeling of ...
Background: Quantitative estimation of toxicokinetic variability in the human population is a persistent challenge in risk assessment of environmental chemicals. Traditionally, inter-individual differences in the population are accounted for by default assumptions or, in rare cases, are based on human toxicokinetic data. Objectives: To evaluate the utility of genetically diverse mouse strains for estimating toxicokinetic population variability for risk assessment, using trichloroethylene (TCE) metabolism as a case study. Methods: We used data on oxidative and glutathione conjugation metabolism of TCE in 16 inbred and one hybrid mouse strains to calibrate and extend existing physiologically-based pharmacokinetic (PBPK) models. We added one-compartment models for glutathione metabolites and a two-compartment model for dichloroacetic acid (DCA). A Bayesian population analysis of inter-strain variability was used to quantify variability in TCE metabolism. Results: Concentration-time profiles for TCE metabolism to oxidative and glutathione conjugation metabolites varied across strains. Median predictions for the metabolic flux through oxidation were less variable (5-fold range) than those through glutathione conjugation (10-fold range). For oxidative metabolites, median predictions of trichloroacetic acid production were less variable (2-fold range) than those of DCA production (5-fold range), although uncertainty bounds for DCA exceeded the predicted variability. Conclusions:
Jørgensen, Søren; Dau, Torsten
2011-09-01
A model for predicting the intelligibility of processed noisy speech is proposed. The speech-based envelope power spectrum model has a structure similar to that of the model of Ewert and Dau [(2000). J. Acoust. Soc. Am. 108, 1181-1196], developed to account for modulation detection and masking data. The model estimates the speech-to-noise envelope power ratio, SNRenv, at the output of a modulation filterbank and relates this metric to speech intelligibility using the concept of an ideal observer. Predictions were compared to data on the intelligibility of speech presented in stationary speech-shaped noise. The model was further tested in conditions with noisy speech subjected to reverberation and spectral subtraction. Good agreement between predictions and data was found in all cases. For spectral subtraction, an analysis of the model's internal representation of the stimuli revealed that the predicted decrease of intelligibility was caused by the estimated noise envelope power exceeding that of the speech. The classical concept of the speech transmission index fails in this condition. The results strongly suggest that the signal-to-noise ratio at the output of a modulation frequency selective process provides a key measure of speech intelligibility. © 2011 Acoustical Society of America.
Zholdikova, Z I; Kharchevnikova, N V
2006-01-01
A version of a logical-combinatorial JSM-type intelligent system was used to predict the presence and degree of a carcinogenic effect. This version was based on a combined description of chemical substances including both structural and numeric parameters. The new version allows for the fact that the toxicity and danger caused by chemical substances often depend on their biological activation in the organism. The authors substantiate classifying chemicals according to their carcinogenic activity, and illustrate the use of the system to predict the carcinogenicity of polycyclic aromatic hydrocarbons using a model of bioactivation via the formation of diol epoxides, and the carcinogenicity of halogenated alkanes using a model of bioactivation via oxidative dehalogenation. The paper defines the boundary level of an energetic parameter, exceedance of which correlated with the inhibition of halogenated alkanes' metabolism and the absence of carcinogenic activity.
Weaver, J. Curtis; Feaster, Toby D.; Gotvald, Anthony J.
2009-01-01
Reliable estimates of the magnitude and frequency of floods are required for the economical and safe design of transportation and water-conveyance structures. A multistate approach was used to update methods for estimating the magnitude and frequency of floods in rural, ungaged basins in North Carolina, South Carolina, and Georgia that are not substantially affected by regulation, tidal fluctuations, or urban development. In North Carolina, annual peak-flow data through September 2006 were available for 584 sites; 402 of these sites had the 10 or more years of systematic record required for at-site flood-frequency analysis. Following data reviews and the computation of 20 physical and climatic basin characteristics for each station as well as at-site flood-frequency statistics, annual peak-flow data were identified for 363 sites in North Carolina suitable for use in this analysis. Among these 363 sites, 19 sites had records that could be divided into unregulated and regulated/channelized annual peak discharges, which means peak-flow records were identified for a total of 382 cases in North Carolina. Considering the 382 cases, at-site flood-frequency statistics are provided for 333 unregulated cases (also used for the regression database) and 49 regulated/channelized cases. The flood-frequency statistics for the 333 unregulated sites were combined with data for sites from South Carolina, Georgia, and adjacent parts of Alabama, Florida, Tennessee, and Virginia to create a database of 943 sites considered for use in the regional regression analysis. Flood-frequency statistics were computed by fitting logarithms (base 10) of the annual peak flows to a log-Pearson Type III distribution. As part of the computation process, a new generalized skew coefficient was developed by using a Bayesian generalized least-squares regression model. Exploratory regression analyses using ordinary least-squares regression completed on the initial database of 943 sites resulted in defining five hydrologic regions for North Carolina, South Carolina, and Georgia. Stations with drainage areas less than 1 square mile were removed from the database, and a procedure to examine for basin redundancy (based on drainage area and periods of record) also resulted in the removal of some stations from the regression database. Flood-frequency estimates and basin characteristics for 828 gaged stations were combined to form the final database that was used in the regional regression analysis. Regional regression analysis, using generalized least-squares regression, was used to develop a set of predictive equations that can be used for estimating the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent chance exceedance flows for rural, ungaged basins in North Carolina, South Carolina, and Georgia. The final predictive equations are all functions of drainage area and the percentage of drainage basin within each of the five hydrologic regions. Average errors of prediction for these regression equations range from 34.0 to 47.7 percent. Discharge estimates determined from the systematic records for the current study are, on average, larger in magnitude than those from a previous study for the highest percent chance exceedances (50 and 20 percent) and tend to be smaller than those from the previous study for the lower percent chance exceedances when all sites are considered as a group.
For example, mean differences for sites in the Piedmont hydrologic region range from positive 0.5 percent for the 50-percent chance exceedance flow to negative 4.6 percent for the 0.2-percent chance exceedance flow when stations are grouped by hydrologic region. Similarly for the same hydrologic region, median differences range from positive 0.9 percent for the 50-percent chance exceedance flow to negative 7.1 percent for the 0.2-percent chance exceedance flow. However, mean and median percentage differences between the estimates from the previous and curre
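The core at-site computation described in these reports, fitting a log-Pearson Type III distribution to annual peaks and reading off percent-chance-exceedance discharges, can be sketched as follows (SciPy assumed; station skew only, without the Bayesian regional-skew weighting, and with synthetic peaks):

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
peaks = rng.lognormal(mean=7.0, sigma=0.5, size=40)  # cfs, synthetic annual peaks

# Log-Pearson Type III: Pearson Type III fitted to base-10 logs of the peaks
logs = np.log10(peaks)
skew = stats.skew(logs, bias=False)                  # station skew only
dist = stats.pearson3(skew, loc=logs.mean(), scale=logs.std(ddof=1))

for p in (0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, 0.002):
    q = 10 ** dist.ppf(1.0 - p)                      # exceedance probability p -> discharge
    print(f"{p:.1%}-chance exceedance flow: {q:,.0f} cfs")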
Anning, David W.; Paul, Angela P.; McKinney, Tim S.; Huntington, Jena M.; Bexfield, Laura M.; Thiros, Susan A.
2012-01-01
The National Water-Quality Assessment (NAWQA) Program of the U.S. Geological Survey (USGS) is conducting a regional analysis of water quality in the principal aquifer systems across the United States. The Southwest Principal Aquifers (SWPA) study is building a better understanding of the susceptibility and vulnerability of basin-fill aquifers in the region to groundwater contamination by synthesizing baseline knowledge of groundwater-quality conditions in 16 basins previously studied by the NAWQA Program. The improved understanding of aquifer susceptibility and vulnerability to contamination is assisting in the development of tools that water managers can use to assess and protect the quality of groundwater resources. Human-health concerns and economic considerations associated with meeting drinking-water standards motivated a study of the vulnerability of basin-fill aquifers to nitrate contamination and arsenic enrichment in the southwestern United States. Statistical models were developed by using the random forest classifier algorithm to predict concentrations of nitrate and arsenic across a model grid that represents about 190,600 square miles of basin-fill aquifers in parts of Arizona, California, Colorado, Nevada, New Mexico, and Utah. The statistical models, referred to as classifiers, reflect natural and human-related factors that affect aquifer vulnerability to contamination and relate nitrate and arsenic concentrations to explanatory variables representing local- and basin-scale measures of source, aquifer susceptibility, and geochemical conditions. The classifiers were unbiased and fit the observed data well, and misclassifications were primarily due to statistical sampling error in the training datasets. The classifiers were designed to predict concentrations to be in one of six classes for nitrate, and one of seven classes for arsenic. Each classification scheme allowed for identification of areas with concentrations that were equal to or exceeding the U.S. Environmental Protection Agency drinking-water standard. Whereas 2.4 percent of the area underlain by basin-fill aquifers in the study area was predicted to equal or exceed this standard for nitrate (10 milligrams per liter as N), 42.7 percent was predicted to equal or exceed the standard for arsenic (10 micrograms per liter). Areas predicted to equal or exceed the drinking-water standard for nitrate include basins in central Arizona near Phoenix; the San Joaquin Valley, Santa Ana Inland, and San Jacinto basins of California; and the San Luis Valley of Colorado. Much of the area predicted to equal or exceed the drinking-water standard for arsenic is within a belt of basins along the western portion of the Basin and Range Physiographic Province in Nevada, California, and Arizona. Predicted nitrate and arsenic concentrations are substantially lower than the drinking-water standards in much of the study area: about 93.0 percent of the area underlain by basin-fill aquifers was less than one-half the standard for nitrate (5.0 mg/L), and 50.2 percent was less than one-half the standard for arsenic (5.0 μg/L).
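A minimal sketch of the classifier construction described above, assuming scikit-learn: a random forest mapping explanatory variables to ordinal concentration classes over a model grid. The features, class boundaries, and data are synthetic stand-ins for the NAWQA source and susceptibility variables.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))  # e.g. recharge, well depth, pH, aridity (stand-ins)
# Ordinal concentration classes 0..4; the top class would sit at or above
# the drinking-water standard in the real scheme
classes = np.digitize(X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 500),
                      bins=[-1.0, 0.0, 1.0, 2.0])

clf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
clf.fit(X, classes)
print("out-of-bag accuracy:", round(clf.oob_score_, 3))

grid = rng.normal(size=(10, 4))  # explanatory variables at model-grid cells
print("predicted classes:", clf.predict(grid))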
NASA Astrophysics Data System (ADS)
Statella, T.; Pina, P.; Silva, E. A.; Nervis Frigeri, Ary Vinicius; Neto, Frederico Gallon
2016-10-01
We calculated the prevailing dust devil track direction as a means of verifying the accuracy of wind directions predicted by the Mars Climate Database (MCD). For that purpose we applied an automatic method based on morphological openings to infer the prevailing track direction in a dataset comprising 200 Mars Orbiter Camera (MOC) Narrow Angle (NA) and High Resolution Imaging Science Experiment (HiRISE) images of the Martian surface, depicting regions in the Aeolis, Eridania, Noachis, Argyre and Hellas quadrangles. The prevailing local wind directions were calculated from the MCD predicted speeds for the WE and SN wind components. The results showed that the MCD may not be able to accurately predict the locally dominant wind direction near the surface. In addition, we confirm that the surface wind stress alone cannot produce dust lifting at the studied sites, since it never exceeds the threshold value of 0.0225 N m⁻² in the MCD.
NASA Astrophysics Data System (ADS)
Li, Yangfan; Hamada, Yukitaka; Otobe, Katsunori; Ando, Teiichi
2017-02-01
Multi-traverse cold spray (CS) provides a unique means for the production of thick coatings and bulk materials from powders. However, the material along spray and spray-layer boundaries is often poorly bonded, as it is laid down by the leading and trailing peripheries of the spray, which carry powder particles with insufficient kinetic energy. For the same reason, the splats in the very first layer deposited on the substrate may not be well bonded either. A mathematical spray model was developed, based on an axisymmetric Gaussian mass flow rate distribution and a stepped deposition yield, to predict the thickness of such poorly bonded layers in multi-traverse CS deposition. The predicted thickness of poorly bonded layers in a multi-traverse Cu coating falls in the range of experimental values. The model also predicts that the material containing poorly bonded splats could exceed 20% of the total volume of the coating.
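A sketch of the spray model's two ingredients: an axisymmetric Gaussian mass flow rate distribution, and a stepped deposition yield in which material landing beyond a critical radius (a proxy for sub-critical particle velocity) bonds poorly. All parameter values are illustrative, not the paper's fitted values.

import numpy as np

def layer_profile(r, m_dot=1.0, sigma=3.0, r_crit=4.0):
    """Deposited mass flux vs radius r (mm) for an axisymmetric Gaussian spray;
    flag the well-bonded core inside the critical radius."""
    flux = m_dot / (2 * np.pi * sigma**2) * np.exp(-r**2 / (2 * sigma**2))
    well_bonded = r <= r_crit  # stepped deposition-yield assumption
    return flux, well_bonded

r = np.linspace(0.0, 10.0, 101)
flux, ok = layer_profile(r)

# Annulus-weighted (r * flux) fraction of deposited mass in the poorly
# bonded periphery of a single pass
mass = r * flux
print("poorly bonded mass fraction:", round(mass[~ok].sum() / mass.sum(), 3))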
Hinkey, Lynne M; Zaidi, Baqar R
2007-02-01
Two US Virgin Islands marinas were examined for potential metal impacts by comparing sediment chemistry data with two sediment quality guideline (SQG) values: the ratio of simultaneously extractable metals to acid volatile sulfides (SEM-AVS), and effects range-low and effects range-median (ERL-ERM) values. ERL-ERMs predicted the marina/boatyard complex (IBY: 2118 µg/g dry weight total metals, two exceeded ERMs) would have greater impacts than the marina with no boatyard (CBM: 231 µg/g dry weight total metals, no ERMs exceeded). The SEM-AVS method predicted IBY would have fewer effects because high AVS forms metal sulfide complexes, reducing trace metal bioavailability. These contradictory predictions demonstrate the importance of validating the results of either method against other toxicity measures before making any management or regulatory decisions regarding boating and marina impacts. This is especially important in non-temperate areas where sediment quality guidelines have not been validated.
Developing a probability-based model of aquifer vulnerability in an agricultural region
NASA Astrophysics Data System (ADS)
Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei
2013-04-01
Hydrogeological settings of aquifers strongly influence regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is critical for planning a scheme of groundwater quality protection. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging to determine various risk categories of contamination potential based on estimated vulnerability indexes. Categories and ratings of the six parameters in the probability-based DRASTIC model were probabilistically characterized according to two parameter classification methods: selecting the maximum estimation probability and calculating an expected value. Moreover, the probability-based estimation and assessment provided insight into the propagation of parameter uncertainty due to limited observation data. To examine the model's capacity to predict pollution, the medium, high, and very high risk categories of contamination potential were compared with observed nitrate-N concentrations exceeding 0.5 mg/L, which indicate anthropogenic groundwater pollution. The results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and of characterizing parameter uncertainty via the probability estimation processes.
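For orientation, the deterministic index that the probability-based model generalizes is the standard DRASTIC weighted sum over rated hydrogeologic factors (the study characterizes six of these probabilistically); the site ratings below are placeholders.

# Standard DRASTIC factor weights (depth to water, net recharge, aquifer media,
# soil media, topography, impact of the vadose zone, hydraulic conductivity)
DRASTIC_WEIGHTS = {"Depth": 5, "Recharge": 4, "Aquifer": 3, "Soil": 2,
                   "Topography": 1, "Impact_vadose": 5, "Conductivity": 3}

def drastic_index(ratings):
    """Vulnerability index; higher values mean higher contamination potential."""
    return sum(DRASTIC_WEIGHTS[k] * r for k, r in ratings.items())

site = {"Depth": 9, "Recharge": 8, "Aquifer": 6, "Soil": 7,
        "Topography": 10, "Impact_vadose": 8, "Conductivity": 6}
print(drastic_index(site))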
Modeling the probability of arsenic in groundwater in New England as a tool for exposure assessment
Ayotte, J.D.; Nolan, B.T.; Nuckols, J.R.; Cantor, K.P.; Robinson, G.R.; Baris, D.; Hayes, L.; Karagas, M.; Bress, W.; Silverman, D.T.; Lubin, J.H.
2006-01-01
We developed a process-based model to predict the probability of arsenic exceeding 5 µg/L in drinking water wells in New England bedrock aquifers. The model is being used for exposure assessment in an epidemiologic study of bladder cancer. One important study hypothesis that may explain increased bladder cancer risk is elevated concentrations of inorganic arsenic in drinking water. In eastern New England, 20-30% of private wells exceed the arsenic drinking water standard of 10 micrograms per liter. Our predictive model significantly improves the understanding of factors associated with arsenic contamination in New England. Specific rock types, high arsenic concentrations in stream sediments, geochemical factors related to areas of Pleistocene marine inundation and proximity to intrusive granitic plutons, and hydrologic and landscape variables relating to groundwater residence time increase the probability of arsenic occurrence in groundwater. Previous studies suggest that arsenic in bedrock groundwater may be partly from past arsenical pesticide use. Variables representing historic agricultural inputs do not improve the model, indicating that this source does not significantly contribute to current arsenic concentrations. Due to the complexity of the fractured bedrock aquifers in the region, well depth and related variables also are not significant predictors. © 2006 American Chemical Society.
NASA Astrophysics Data System (ADS)
Li, Xishuang; Liu, Baohua; Liu, Lejun; Zheng, Jiewen; Zhou, Songwang; Zhou, Qingjie
2017-12-01
The Liwan (Lw) gas field, located on the northern slope of the South China Sea (SCS), has extremely complex sea-floor topography, which poses a huge challenge for the safety of subsea facilities. It is economically impractical to obtain parameters for risk assessment of slope stability through extensive sampling over the whole field. The linkage between soil shear strength and seabed peak amplitude derived from 2D/3D seismic data is helpful for understanding the regional slope-instability risk. In this paper, the relationships among seabed peak amplitude, acoustic impedance and shear strength of shallow soil in the study area were examined based on statistical analysis. We obtained a relationship similar to those observed in other deep-water areas: there is a positive correlation between seabed peak amplitude and acoustic impedance, and an exponential relationship between acoustic impedance and shear strength of sediment. The acoustic impedance is thus the key factor linking seismic amplitude and shear strength. Infinite slope stability analysis indicates a high potential for shallow landslides on slopes exceeding 15° where the thickness of loose sediments exceeds 8 m in the Lw gas field. Our prediction shows that such areas are mainly located in the heads and walls of submarine canyons.
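A minimal sketch of the infinite slope factor of safety used in such screening, for a submerged sediment layer; the strength parameters are placeholders, not the Liwan site data, chosen so that failure (FS < 1) appears on slopes steeper than about 15° at 8 m thickness.

import math

def factor_of_safety(slope_deg, phi_deg, c_kpa=2.0, gamma_b=8.0, z=8.0):
    """Infinite slope FS = (c' + gamma_b*z*cos^2(beta)*tan(phi')) /
    (gamma_b*z*sin(beta)*cos(beta)); gamma_b = buoyant unit weight (kN/m^3),
    z = layer thickness (m), beta = slope angle, phi' = friction angle."""
    b, p = math.radians(slope_deg), math.radians(phi_deg)
    resisting = c_kpa + gamma_b * z * math.cos(b) ** 2 * math.tan(p)
    driving = gamma_b * z * math.sin(b) * math.cos(b)
    return resisting / driving

for slope in (10, 15, 20):
    print(slope, "deg -> FS =", round(factor_of_safety(slope, phi_deg=12.0), 2))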
Nonlinear-regression flow model of the Gulf Coast aquifer systems in the south-central United States
Kuiper, L.K.
1994-01-01
A multiple-regression methodology was used to help answer questions concerning model reliability and to calibrate a time-dependent, variable-density ground-water flow model of the gulf coast aquifer systems in the south-central United States. More than 40 regression models with 2 to 31 regression parameters were used, and detailed results are presented for 12 of the models. More than 3,000 values of grid-element volume-averaged head and hydraulic conductivity were used as the regression model observations. Calculated prediction interval half widths, though perhaps inaccurate due to a lack of normality of the residuals, are smallest for models with only four regression parameters. In addition, the root-mean weighted residual decreases very little with an increase in the number of regression parameters. The various models showed considerable overlap between the prediction intervals for shallow head and hydraulic conductivity. Approximate 95-percent prediction interval half widths for volume-averaged freshwater head exceed 108 feet; for volume-averaged base-10 logarithm of hydraulic conductivity, they exceed 0.89. All of the models are unreliable for the prediction of head and ground-water flow in the deeper parts of the aquifer systems, including the amount of flow coming from the underlying geopressured zone. Truncating the domain of solution of one model to exclude the part of the system having a ground-water density greater than 1.005 grams per cubic centimeter, or the part of the system below a depth of 3,000 feet, and setting the density to that of freshwater does not appreciably change the results for head and ground-water flow, except at locations close to the truncation surface.
Li, Hai-Ling; Song, Wei-Wei; Zhang, Zi-Feng; Ma, Wan-Li; Gao, Chong-Jing; Li, Jia; Huo, Chun-Yan; Mohammed, Mohammed O A; Liu, Li-Yan; Kannan, Kurunthachalam; Li, Yi-Fan
2016-09-15
Phthalates are widely used chemicals in household products and can adversely affect human health. However, few studies have focused on young adults' exposure to phthalates in dormitories. In this study, seven phthalates were extracted from indoor dust collected in university dormitories in Harbin, Shenyang, and Baoding, in northern China. Dust samples were also collected in houses in Harbin for comparison. The total concentrations of phthalates in dormitory dust in Harbin and Shenyang samples were significantly higher than those in Baoding samples. The total geometric mean concentration of phthalates in dormitory dust in Harbin was lower than in house dust. Di-(2-ethylhexyl) phthalate (DEHP) was the most abundant phthalate in both dormitory and house dust. The daily intakes of the total phthalates, the carcinogenic risk (CR) of DEHP, and the hazard index (HI) of di-isobutyl phthalate (DiBP), dibutyl phthalate (DBP), and DEHP were estimated; the median values for students in dormitories were lower than those for adults living in houses. Monte Carlo simulation was applied to predict the human exposure risk of phthalates. The HI of DiBP, DBP, and DEHP was predicted according to the reference doses (RfD) provided by the United States Environmental Protection Agency (U.S. EPA) and the reference doses for anti-androgenicity (RfD AA) developed by Kortenkamp and Faust. The results indicated that the predicted risks for some students exceeded the limit, although the risks based on measured concentrations did not. Risk quotients (RQ) of DEHP were predicted based on the China-specific No Significant Risk Level (NSRL) and Maximum Allowable Dose Level (MADL). The predicted CR and RQ of DEHP suggest that DEHP could pose a health risk through intake of indoor dust. Copyright © 2016 Elsevier B.V. All rights reserved.
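A minimal Monte Carlo sketch of the hazard-index calculation applied above, assuming NumPy: dust DEHP concentration, dust intake, and body weight are drawn from assumed distributions and HI = dose / RfD. The distribution parameters are illustrative placeholders; the RfD of 0.02 mg/kg-day is the U.S. EPA value for DEHP.

import numpy as np

rng = np.random.default_rng(42)
n = 100_000
conc = rng.lognormal(mean=np.log(500), sigma=1.0, size=n)   # DEHP in dust, ug/g
intake = rng.lognormal(mean=np.log(30), sigma=0.5, size=n)  # dust ingestion, mg/day
bw = rng.normal(60.0, 8.0, size=n)                          # body weight, kg

dose = conc * intake * 1e-6 / bw  # mg/kg-day (ug/g * mg/day -> mg/day via 1e-6)
rfd = 0.02                        # mg/kg-day, U.S. EPA reference dose for DEHP
hi = dose / rfd

print(f"median HI = {np.median(hi):.4f}")
print(f"P(HI > 1) = {np.mean(hi > 1):.2%}")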
Salisbury, Margaret L; Xia, Meng; Murray, Susan; Bartholmai, Brian J; Kazerooni, Ella A; Meldrum, Catherine A; Martinez, Fernando J; Flaherty, Kevin R
2016-09-01
Idiopathic pulmonary fibrosis (IPF) can be diagnosed confidently and non-invasively when clinical and computed tomography (CT) criteria are met. Many patients do not meet these criteria due to the absence of CT honeycombing. We investigated predictors of IPF and combinations of predictors allowing accurate diagnosis in individuals without honeycombing. We utilized prospectively collected clinical and CT data from patients enrolled in the Lung Tissue Research Consortium. Included patients had no honeycombing and no connective tissue disease, underwent diagnostic lung biopsy, and had a CT pattern consistent with fibrosing ILD (n = 200). Logistic regression identified clinical and CT variables predictive of IPF. The probability of IPF was assessed at various cut-points of important clinical and CT variables. A multivariable model adjusted for age and gender found that increasingly extensive reticular densities (OR 2.93, 95% CI 1.55-5.56, p = 0.001) predicted IPF, while increasing ground glass densities predicted a diagnosis other than IPF (OR 0.55, 95% CI 0.34-0.89, p = 0.02). The model-based probability of IPF was 80% or greater in patients aged at least 60 years with reticular densities extending over one-third or more of total lung volume; for patients meeting or exceeding these thresholds the specificity for IPF is 96% (95% CI 91-100%), with 21 of 134 (16%) biopsies avoided. In patients with suspected fibrotic ILD and absence of CT honeycombing, the extent of reticular and ground glass densities predicts a diagnosis of IPF. The probability of IPF exceeds 80% in subjects over age 60 years with one-third of total lung having reticular densities. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick
2017-04-01
Motivated by climate change and its impact on the scarcity or excess of water in many parts of the world, several agencies and research institutions have taken initiatives in monitoring and predicting the hydrologic cycle at a global scale. Such a monitoring/prediction effort is important for understanding vulnerability to extreme hydrological events and for providing early warnings. It can be based on an optimal combination of hydro-meteorological models and remote sensing, in which satellite measurements are used as forcing or calibration data or for regularly updating the model states or parameters. Many advances have been made in these domains, and the near future will bring new opportunities with respect to remote sensing as a result of the increasing number of spaceborne sensors enabling large-scale monitoring of water resources. Alongside these advances, there is currently a tendency to refine and further complicate physically-based hydrologic models to better capture the hydrologic processes at hand. However, this may not necessarily be beneficial for large-scale hydrology, as computational efforts increase significantly. A novel thematic science question is therefore whether a flexible conceptual model can match the performance of a complex physically-based model for hydrologic simulations at large scale. In this context, the main objective of this study is to investigate how innovative techniques for estimating soil moisture from satellite data can help reduce errors and uncertainties in large-scale conceptual hydro-meteorological modelling. A spatially distributed conceptual hydrologic model was set up based on recent developments of the SUPERFLEX modelling framework. As it requires limited computational effort, this model enables early warnings for large areas. Using the ERA-Interim public dataset as forcing and coupled with the CMEM radiative transfer model, SUPERFLEX is capable of predicting runoff, soil moisture, and SMOS-like brightness temperature time series. Such a model is traditionally calibrated using only discharge measurements. In this study we designed a multi-objective calibration procedure based on both discharge measurements and SMOS-derived brightness temperature observations in order to evaluate the added value of remotely sensed soil moisture data in the calibration process. As a test case we set up the SUPERFLEX model for the large Murray-Darling catchment in Australia (about 1 million km²). When compared to in situ soil moisture time series, model predictions show good agreement, with correlation coefficients exceeding 70% and root mean squared errors below 1%. When benchmarked against the physically based land surface model CLM, SUPERFLEX exhibits similar performance levels. By adapting the runoff routing function within the SUPERFLEX model, the predicted discharge achieves a Nash-Sutcliffe efficiency exceeding 0.7 over both the calibration and the validation periods.
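A sketch of the multi-objective calibration idea: combine a discharge skill term (Nash-Sutcliffe efficiency) with a brightness temperature misfit term in one objective to be minimized. The equal weighting and the normalization are assumptions, not the study's exact formulation.

import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multi_objective(q_obs, q_sim, tb_obs, tb_sim, w=0.5):
    """Minimize: weighted sum of (1 - NSE on discharge) and normalized
    brightness temperature RMSE."""
    rmse_tb = np.sqrt(np.mean((tb_obs - tb_sim) ** 2))
    return w * (1.0 - nse(q_obs, q_sim)) + (1.0 - w) * rmse_tb / tb_obs.std()

rng = np.random.default_rng(5)
q = rng.gamma(2.0, 50.0, 365)           # synthetic observed discharge
tb = rng.normal(250.0, 5.0, 365)        # synthetic observed brightness temperature
print(multi_objective(q, q + rng.normal(0, 10, 365),
                      tb, tb + rng.normal(0, 2, 365)))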
Mass Gains of the Antarctic Ice Sheet Exceed Losses
NASA Technical Reports Server (NTRS)
Zwally, H. Jay; Li, Jun; Robbins, John; Saba, Jack L.; Yi, Donghui; Brenner, Anita; Bromwich, David
2012-01-01
During 2003 to 2008, the mass gain of the Antarctic ice sheet from snow accumulation exceeded the mass loss from ice discharge by 49 Gt/yr (2.5% of input), as derived from ICESat laser measurements of elevation change. The net gain (86 Gt/yr) over the West Antarctic (WA) and East Antarctic (EA) ice sheets is essentially unchanged from revised results for 1992 to 2001 from ERS radar altimetry. Imbalances in individual drainage systems (DS) are large (-68% to +103% of input), as are temporal changes (-39% to +44%). The recent 90 Gt/yr loss from three DS (Pine Island, Thwaites-Smith, and Marie Byrd Coast) of WA exceeds the earlier 61 Gt/yr loss, consistent with reports of accelerating ice flow and dynamic thinning. Similarly, the recent 24 Gt/yr loss from three DS in the Antarctic Peninsula (AP) is consistent with glacier accelerations following breakup of the Larsen B and other ice shelves. In contrast, net increases in the five other DS of WA and AP and three of the 16 DS in EA exceed the increased losses. Alternate interpretations of the mass changes driven by accumulation variations are given using results from atmospheric-model re-analysis and a parameterization based on a 5% change in accumulation per degree of observed surface temperature change. A slow increase in snowfall with climate warming, consistent with model predictions, may be offsetting increased dynamic losses.
Testing Einstein in Space: The Gravity Probe B Relativity Mission
NASA Astrophysics Data System (ADS)
Mester, John
The Gravity Probe B Relativity Mission was successfully launched on April 20, 2004 from Vandenberg Air Force Base in California, a culmination of 40 years of collaborative development at Stanford University and NASA. The goal of the GP-B experiment is to perform precision tests of two independent predictions of general relativity, the geodetic effect and frame dragging. On-orbit cryogenic operations lasted 17.3 months, exceeding requirements. Analysis of the science data is now in progress with a planned announcement of results scheduled for December 2007.
NASA Astrophysics Data System (ADS)
Kitamura, Akihisa
2016-12-01
Japanese historical documents reveal that Mw 8 class earthquakes have occurred every 100-150 years along the Suruga and Nankai troughs since the 684 Hakuho earthquake. These earthquakes have commonly caused large tsunamis with wave heights of up to 10 m in the Japanese coastal area along the Suruga and Nankai troughs. From the perspective of tsunami disaster management, these tsunamis are designated as Level 1 tsunamis and are the basis for the design of coastal protection facilities. A Mw 9.0 earthquake (the 2011 Tohoku-oki earthquake) and a mega-tsunami with wave heights of 10-40 m struck the Pacific coast of the northeastern Japanese mainland on 11 March 2011, far exceeding pre-disaster predictions of wave height. Based on the lessons learned from the 2011 Tohoku-oki earthquake, the Japanese Government predicted the heights of the largest possible tsunami (termed a Level 2 tsunami) that could be generated in the Suruga and Nankai troughs. The difference in wave heights between Level 1 and Level 2 tsunamis exceeds 20 m in some areas, including the southern Izu Peninsula. This study reviews the distribution of prehistoric tsunami deposits and tsunami boulders from the past 4000 years, based on previous studies in the coastal area of Shizuoka Prefecture, Japan. The results show that a tsunami deposit dated at 3400-3300 cal BP can be traced between the Shimizu, Shizuoka and Rokken-gawa lowlands, whereas no geologic evidence of the corresponding tsunami (the Rokken-gawa-Oya tsunami) was found on the southern Izu Peninsula. Thus, the Rokken-gawa-Oya tsunami is not classified as a Level 2 tsunami.
Beisner, Kimberly R.; Anning, David W.; Paul, Angela P.; McKinney, Tim S.; Huntington, Jena M.; Bexfield, Laura M.; Thiros, Susan A.
2012-01-01
Human-health concerns and economic considerations associated with meeting drinking-water standards motivated a study of the vulnerability of basin-fill aquifers to nitrate contamination and arsenic enrichment in the southwestern United States. Statistical models were developed by using the random forest classifier algorithm to predict concentrations of nitrate and arsenic across a model grid representing about 190,600 square miles of basin-fill aquifers in parts of Arizona, California, Colorado, Nevada, New Mexico, and Utah. The statistical models, referred to as classifiers, reflect natural and human-related factors that affect aquifer vulnerability to contamination and relate nitrate and arsenic concentrations to explanatory variables representing local- and basin-scale measures of source and aquifer susceptibility conditions. Geochemical variables were not used in concentration predictions because they were not available for the entire study area. The models were calibrated to assess model accuracy on the basis of measured values. Only 2 percent of the area underlain by basin-fill aquifers in the study area was predicted to equal or exceed the U.S. Environmental Protection Agency drinking-water standard for nitrate as N (10 milligrams per liter), whereas 43 percent of the area was predicted to equal or exceed the standard for arsenic (10 micrograms per liter). Areas predicted to equal or exceed the drinking-water standard for nitrate include basins in central Arizona near Phoenix; the San Joaquin Valley, the Santa Ana Inland, and San Jacinto Basins of California; and the San Luis Valley of Colorado. Much of the area predicted to equal or exceed the drinking-water standard for arsenic is within a belt of basins along the western portion of the Basin and Range Physiographic Province that includes almost all of Nevada and parts of California and Arizona. Predicted nitrate and arsenic concentrations are substantially lower than the drinking-water standards in much of the study area: about 93 percent of the area underlain by basin-fill aquifers was less than one-half the standard for nitrate as N (5.0 milligrams per liter), and 50 percent was less than one-half the standard for arsenic (5.0 micrograms per liter). The predicted concentrations and the improved understanding of the susceptibility and vulnerability of southwestern basin-fill aquifers to nitrate contamination and arsenic enrichment can be used by water managers as a qualitative tool to assess and protect the quality of groundwater resources in the Southwest.
Karlsson, Per Erik; Klingberg, Jenny; Engardt, Magnuz; Andersson, Camilla; Langner, Joakim; Karlsson, Gunilla Pihl; Pleijel, Håkan
2017-01-15
This review summarizes new information on the current status of ground-level ozone in Europe north of the Alps. There has been a re-distribution of hourly ozone concentrations in northern Europe during 1990-2015. The highest concentrations during summer daytime hours have decreased, while summer night-time and winter day- and night-time concentrations have increased. The yearly maximum 8-h mean concentrations ([O3]8h,max), a metric used to assess ozone impacts on human health, decreased significantly during 1990-2015 at four of eight studied sites in Fennoscandia and northern UK. The annual number of days when the yearly [O3]8h,max exceeded the EU Environmental Quality Standard (EQS) target value of 60 ppb has also decreased. In contrast, the number of days per year when the yearly [O3]8h,max exceeded 35 ppb has increased significantly at two sites, while it decreased at one far northern site. [O3]8h,max is predicted not to exceed 60 ppb in northern UK and Fennoscandia after 2020. However, the WHO EQS target value of 50 ppb will still be exceeded. The AOT40 May-July and AOT40 April-September metrics, used for the protection of vegetation, have decreased significantly at three and four sites, respectively. The EQS for the protection of forests, AOT40 April-September of 5,000 ppb·h, is projected to no longer be exceeded for most of northern Europe sometime before the period 2040-2059. However, if the EQS is instead based on the Phytotoxic Ozone Dose (POD) metric POD1, it may still be exceeded by 2050. The increasing trend for low and medium ozone concentrations, in combination with a decrease in high concentrations, indicates that a new control strategy, with a larger geographical scale than Europe and including methane, is needed for ozone abatement in northern Europe. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Nakatsugawa, M.; Kobayashi, Y.; Okazaki, R.; Taniguchi, Y.
2017-12-01
This research aims to improve the accuracy of water level prediction calculations for more effective river management. In August 2016, Hokkaido was visited by four typhoons, whose heavy rainfall caused severe flooding. In the Tokoro river basin of Eastern Hokkaido, the water level (WL) at the Kamikawazoe gauging station in the lower reaches exceeded the design high-water level, and the water rose to the highest level on record. To predict such flood conditions and mitigate disaster damage, it is necessary to improve the accuracy of prediction as well as to prolong the lead time (LT) available for disaster mitigation measures such as flood-fighting activities and evacuation by residents. There is thus a need to predict the river water level around the peak stage earlier and more accurately. Previous research on WL prediction proposed a method in which the WL at the lower reaches is estimated from its correlation with the WL at the upper reaches (hereinafter: "the water level correlation method"). Additionally, a runoff model-based method has been in general use, in which the discharge is estimated by feeding rainfall prediction data to a runoff model such as a storage function model, and the WL is then estimated from that discharge by using a WL-discharge rating curve (H-Q curve). In this research, an attempt was made to predict WL by applying the Random Forest (RF) method, a machine learning method that can estimate the contribution of explanatory variables. Furthermore, from a practical point of view, we investigated WL prediction based on a multiple correlation (MC) method using the explanatory variables with high contributions in the RF method, and we examined the proper selection of explanatory variables and the extension of LT. The following results were found: 1) Based on the RF method tuned by learning from previous floods, the WL for the abnormal flood of August 2016 was properly predicted with a lead time of 6 h. 2) Based on the contributions of explanatory variables, factors were selected for the MC method. In this way, plausible prediction results were obtained.
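A hedged sketch of the RF step: fitting a random forest to past flood records and reading off variable contributions to guide the MC method (the predictor names and response are hypothetical, not from the Tokoro basin data):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 5))  # hypothetical predictors, named below
y = 0.8 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(scale=0.2, size=n)  # WL six hours ahead

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
names = ["WL_upstream", "WL_upstream_lag3h", "basin_rainfall", "rainfall_lag3h", "tide"]
for name, imp in sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")  # high-contribution variables feed the MC method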
Timkova, Jana; Fojtikova, Ivana; Pacherova, Petra
2017-01-01
The purpose of the study is to determine radon-prone areas in the Czech Republic based on measurements of indoor radon concentration and independent predictors (rock type and permeability of the bedrock, gamma dose rate, GPS coordinates and the average age of family houses). The relationship between the mean observed indoor radon concentrations in monitored areas (∼22% of municipalities) and the independent predictors was modelled using a bagged neural network. Levels of mean indoor radon concentration in the unmonitored areas were predicted using the bagged neural network model fitted for the monitored areas. The propensity to increased indoor radon was determined by the estimated probability of exceeding the action level of 300 Bq/m³. Copyright © 2016 Elsevier Ltd. All rights reserved.
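A bagged neural network of this general kind can be sketched with scikit-learn; the predictors mirror those listed above, and the spread of ensemble members gives a crude exceedance probability for the 300 Bq/m³ action level (the study's actual estimator may differ):

import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(800, 5))  # bedrock class, permeability, gamma dose, coords, house age
y = 200.0 + 80.0 * X[:, 0] + 40.0 * X[:, 2] + rng.normal(scale=60.0, size=800)  # Bq/m3

bag = BaggingRegressor(MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000),
                       n_estimators=25, random_state=0).fit(X, y)
member = np.stack([m.predict(X[:5]) for m in bag.estimators_])
p_over_300 = (member > 300.0).mean(axis=0)  # crude P(exceeding the action level)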
2014-04-24
intermittent dosing regimens. CONCLUSION: Given its ability to predict antimicrobial clearance above population medians, which could compromise therapy, the... campaign dedicated to improve outcomes.1,2 In the era of multiply drug-resistant pathogens and rising antimicrobial minimum inhibitory concentrations (MICs)... urinary creatinine clearance significantly exceeds what is predicted by the serum creatinine concentration according to various mathematical
How to test for partially predictable chaos.
Wernecke, Hendrik; Sándor, Bulcsú; Gros, Claudius
2017-04-24
For a chaotic system, pairs of initially close-by trajectories eventually become fully uncorrelated on the attracting set. This process of decorrelation can split into an initial exponential decrease and a subsequent diffusive process on the chaotic attractor causing the final loss of predictability. The two processes can be of the same or of very different time scales. In the latter case the two trajectories linger within a finite but small distance (with respect to the overall extent of the attractor) for exceedingly long times and remain partially predictable. Standard tests for chaos widely use inter-orbital correlations as an indicator. However, for partially predictable chaos such tests yield mostly ambiguous results, as this type of chaos is characterized by attractors of fractally broadened braids. To resolve this, we introduce a novel 0-1 indicator for chaos based on the cross-distance scaling of pairs of initially close trajectories. This test robustly discriminates chaos, including partially predictable chaos, from laminar flow. Additionally, using the finite-time cross-correlation of pairs of initially close trajectories, we are able to identify laminar flow as well as strong and partially predictable chaos in a 0-1 manner solely from the properties of pairs of trajectories.
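The core quantity of the test, the cross-distance of a pair of initially close trajectories, can be illustrated on the Lorenz system (a sketch only; the published indicator additionally involves the scaling of this distance with the initial separation):

import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t = np.linspace(0.0, 50.0, 5001)
s0 = np.array([1.0, 1.0, 1.0])
a = solve_ivp(lorenz, (0.0, 50.0), s0, t_eval=t, rtol=1e-10, atol=1e-12)
b = solve_ivp(lorenz, (0.0, 50.0), s0 + np.array([1e-8, 0.0, 0.0]), t_eval=t,
              rtol=1e-10, atol=1e-12)
dist = np.linalg.norm(a.y - b.y, axis=0)
# strong chaos: dist saturates near the attractor extent; partially predictable
# chaos would instead plateau at a small fraction of that extent for long times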
Real-time assessments of water quality: expanding nowcasting throughout the Great Lakes
2013-01-01
Nowcasts are systems that inform the public of current bacterial water-quality conditions at beaches on the basis of predictive models. During 2010–12, the U.S. Geological Survey (USGS) worked with 23 local and State agencies to improve existing operational beach nowcast systems at 4 beaches and expand the use of predictive models in nowcasts at an additional 45 beaches throughout the Great Lakes. The predictive models were specific to each beach, and the best model for each beach was based on a unique combination of environmental and water-quality explanatory variables. The variables used most often in models to predict Escherichia coli (E. coli) concentrations or the probability of exceeding a State recreational water-quality standard included turbidity, day of the year, wave height, wind direction and speed, antecedent rainfall for various time periods, and change in lake level over 24 hours. During validation of 42 beach models in 2012, the models performed better than the current method of assessing recreational water quality (the previous day's E. coli concentration). The USGS will continue to work with local agencies to improve nowcast predictions, enable technology transfer of predictive model development procedures, and implement more operational systems during 2013 and beyond.
Seasonal prediction of typhoon genesis frequency and track patterns in the North West Pacific area
NASA Astrophysics Data System (ADS)
Hyoun, Yoosun; Kang, Kiryong; Shin, Do-Shick
2014-05-01
This study investigates the performance of seasonal typhoon predictability using a dynamical model. The items examined are the monthly statistics for the total number of typhoon geneses in the Western North Pacific (WNP) area, the subset posing a possible threat to the Korean peninsula, and the probability of each categorized track pattern. The Florida State University/Center for Ocean-Atmospheric Prediction Studies (FSU/COAPS) model was used as the dynamical model; five ensemble members, including a control run, are generated using time-lagged methods at a resolution of T126L27 (a Gaussian grid spacing of 0.94º). The model initial conditions are obtained from the National Centers for Environmental Prediction Global Forecast System (NCEP GFS), and the SST from the Climate Forecast System with bias correction was used as the ocean surface boundary condition. The summer (Jun-Jul-Aug) season prediction is made one month prior to the target season. The detection of tropical cyclones in this system is based on six criteria. First, the minimum sea level pressure of the isolated vortex should be below 1008 hPa. Second, the maximum wind speed is larger than 17 m s-1. Third, the magnitude of the maximum relative vorticity at 850 hPa exceeds 3.5×10-5 s-1. Fourth, the average temperature difference from the area mean of the surrounding region at 300 hPa, 500 hPa, and 700 hPa exceeds 2.5 K. Fifth, the maximum wind speed at 850 hPa is larger than that at 300 hPa. Sixth, the identified vortex should last more than two days. These criteria were chosen after close examination of model-observation comparisons. In this study, we focus on the performance of the system for typhoon frequency and track patterns in the WNP area during 2004-2013.
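The six criteria translate directly into a filter over candidate vortices; a sketch (field names are illustrative, not from the FSU/COAPS tracker):

def is_tropical_cyclone(v):
    # apply the six detection criteria to a candidate vortex record (dict)
    return (v["min_slp_hpa"] < 1008.0
            and v["max_wind_ms"] > 17.0
            and v["max_rel_vort_850_s"] > 3.5e-5
            and v["warm_core_anom_k"] > 2.5  # mean of 300/500/700 hPa anomalies
            and v["max_wind_850_ms"] > v["max_wind_300_ms"]
            and v["duration_days"] >= 2.0)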
Analysis and Tests of Reinforced Carbon-Epoxy/Foam-Core Sandwich Panels with Cutouts
NASA Technical Reports Server (NTRS)
Baker, Donald J.; Rogers, Charles
1996-01-01
The results of a study of a low-cost, structurally efficient, minimum-gage shear-panel design that can be used in light helicopters are presented. The shear-panel design is based on an integrally stiffened, syntactic-foam-stabilized skin concept with an all-bias-ply tape construction for the skins. This sandwich concept is an economical way to increase the panel bending stiffness with a minimal weight penalty. The panels considered in the study were designed to be buckling resistant up to 100 lbs/in. of shear load and to have an ultimate strength of 300 lbs/in. The panel concept uses unidirectional carbon-epoxy tape on a syntactic adhesive as a stiffener that is co-cured with the skin and is an effective concept for improving panel buckling strength. The panel concept also uses pultruded carbon-epoxy rods embedded in a syntactic adhesive and over-wrapped with a bias-ply carbon-epoxy tape to form a reinforcing beam, which is an effective method for redistributing load around a rectangular cutout. The buckling strength of the reinforced panels is 83 to 90 percent of the buckling strength predicted by a linear buckling analysis. The maximum experimental deflection exceeds the maximum deflection predicted by a nonlinear analysis by approximately one panel thickness. The failure strength of the reinforced panels was two and a half to seven times the buckling strength. This efficient shear-panel design concept exceeds the ultimate strength requirement of 300 lbs/in. by more than 100 percent.
Mars Science Laboratory Heatshield Aerothermodynamics: Design and Reconstruction
NASA Technical Reports Server (NTRS)
Edquist, Karl T.; Hollis, Brian R.; Johnston, Christopher O.; Bose, Deepak; White, Todd R.; Mahzari, Milad
2013-01-01
The Mars Science Laboratory heatshield was designed to withstand a fully turbulent heat pulse based on test results and computational analysis on a pre-flight design trajectory. Instrumentation on the flight heatshield measured in-depth temperatures in the thermal protection system. The data indicate that boundary layer transition occurred at 5 of 7 thermocouple locations prior to peak heating. Data oscillations at 3 pressure measurement locations may also indicate transition. This paper presents the heatshield temperature and pressure data, possible explanations for the timing of boundary layer transition, and a qualitative comparison of reconstructed and computational heating on the as-flown trajectory. Boundary layer Reynolds numbers that are typically used to predict transition are compared to observed transition at various heatshield locations. A uniform smooth-wall transition Reynolds number does not explain the timing of boundary layer transition observed during flight. A roughness-based Reynolds number supports the possibility of transition due to discrete or distributed roughness elements on the heatshield. However, the distributed roughness height would have needed to be larger than the pre-flight assumption. The instrumentation confirmed the predicted location of maximum turbulent heat flux near the leeside shoulder. The reconstructed heat flux at that location is bounded by smooth-wall turbulent calculations on the reconstructed trajectory, indicating that augmentation due to surface roughness probably did not occur. Turbulent heating on the downstream side of the heatshield nose exceeded smooth-wall computations, indicating that roughness may have augmented heating. The stagnation region also experienced heating that exceeded computational levels, but shock layer radiation does not fully explain the differences.
Holcomb, David A; Messier, Kyle P; Serre, Marc L; Rowny, Jakob G; Stewart, Jill R
2018-06-25
Predictive modeling is promising as an inexpensive tool to assess water quality. We developed geostatistical predictive models of microbial water quality that empirically modeled spatiotemporal autocorrelation in measured fecal coliform (FC) bacteria concentrations to improve prediction. We compared five geostatistical models featuring different autocorrelation structures, fit to 676 observations from 19 locations in North Carolina's Jordan Lake watershed using meteorological and land cover predictor variables. Though stream distance metrics (with and without flow-weighting) failed to improve prediction over the Euclidean distance metric, incorporating temporal autocorrelation substantially improved prediction over the space-only models. We predicted FC throughout the stream network daily for one year, designating locations "impaired", "unimpaired", or "unassessed" if the probability of exceeding the state standard was ≥90%, ≤10%, or between 10% and 90%, respectively. We could assign impairment status to more of the stream network on days when FC measurements were made, suggesting frequent sample-based monitoring remains necessary, though implementing spatiotemporal predictive models may reduce the number of concurrent sampling locations required to adequately assess water quality. Together, these results suggest that prioritizing sampling at different times and conditions using geographically sparse monitoring networks is adequate to build robust and informative geostatistical models of water quality impairment.
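The three-way designation rule can be written compactly; a sketch using the thresholds quoted above:

import numpy as np

def designate(p_exceed, hi=0.90, lo=0.10):
    # map predicted exceedance probabilities to impairment designations
    p = np.asarray(p_exceed, dtype=float)
    out = np.full(p.shape, "unassessed", dtype=object)
    out[p >= hi] = "impaired"
    out[p <= lo] = "unimpaired"
    return out

print(designate([0.95, 0.50, 0.03]))  # ['impaired' 'unassessed' 'unimpaired']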
Liu, Zhongyang; Guo, Feifei; Gu, Jiangyong; Wang, Yong; Li, Yang; Wang, Dan; Lu, Liang; Li, Dong; He, Fuchu
2015-06-01
The Anatomical Therapeutic Chemical (ATC) classification system, applied in almost all drug utilization studies, is currently the most widely recognized classification system for drugs. At present, new drug entries are added to the system only on users' requests, which leads to seriously incomplete drug coverage, and bioinformatics prediction is helpful in this process. Here we propose a novel model for predicting drug-ATC code associations, using logistic regression to integrate multiple heterogeneous data sources including chemical structures, target proteins, gene expression, side-effects and chemical-chemical associations. The model performs well in predicting not only ATC codes of unclassified drugs but also new ATC codes of classified drugs, as assessed by cross-validation and independent test sets, and its efficacy exceeds that of previous methods. Further, to facilitate its use, the model has been developed into a user-friendly web service, SPACE (Similarity-based Predictor of ATC CodE), which for each submitted compound gives candidate ATC codes (ranked according to decreasing probability score predicted by the model) together with corresponding supporting evidence. This work not only contributes to understanding drugs' therapeutic, pharmacological and chemical properties, but also provides clues for drug repositioning and side-effect discovery. In addition, the construction of the prediction model provides a general framework for similarity-based data integration suitable for other drug-related studies such as target and side-effect prediction. The web service SPACE is available at http://www.bprc.ac.cn/space. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
An assessment of the microgravity and acoustic environments in Space Station Freedom using VAPEPS
NASA Technical Reports Server (NTRS)
Bergen, Thomas F.; Scharton, Terry D.; Badilla, Gloria A.
1992-01-01
The Vibroacoustic Payload Environment Prediction System (VAPEPS) was used to predict the stationary on-orbit environments in one of the Space Station Freedom modules. The model of the module included the outer structure, equipment and payload racks, avionics, and cabin air and duct systems. Acoustic and vibratory outputs of various source classes were derived and input to the model. Initial results of analyses, performed in one-third octave frequency bands from 10 to 10,000 Hz, show that both the microgravity and acoustic environment requirements will be exceeded in some one-third octave bands with the current SSF design. Further analyses indicate that interior acoustic level requirements will be exceeded even if the microgravity requirements are met.
NASA Astrophysics Data System (ADS)
Bernardi, Michael P.; Milovich, Daniel; Francoeur, Mathieu
2016-09-01
Using Rytov's fluctuational electrodynamics framework, Polder and Van Hove predicted that radiative heat transfer between planar surfaces separated by a vacuum gap smaller than the thermal wavelength exceeds the blackbody limit due to tunnelling of evanescent modes. This finding has led to the conceptualization of systems capitalizing on evanescent modes such as thermophotovoltaic converters and thermal rectifiers. Their development is, however, limited by the lack of devices enabling radiative transfer between macroscale planar surfaces separated by a nanosize vacuum gap. Here we measure radiative heat transfer for large temperature differences (~120 K) using a custom-fabricated device in which the gap separating two 5 × 5 mm² intrinsic silicon planar surfaces is modulated from 3,500 to 150 nm. A substantial enhancement over the blackbody limit by a factor of 8.4 is reported for a 150-nm-thick gap. Our device paves the way for the establishment of novel evanescent wave-based systems.
The good, the bad and the ugly of marine reserves for fishery yields
De Leo, Giulio A.; Micheli, Fiorenza
2015-01-01
Marine reserves (MRs) are used worldwide as a means of conserving biodiversity and protecting depleted populations. Despite major investments in MRs, their environmental and social benefits have proven difficult to demonstrate and are still debated. Clear expectations of the possible outcomes of MR establishment are needed to guide and strengthen empirical assessments. Previous models show that reserve establishment in overcapitalized, quota-based fisheries can reduce both catch and population abundance, thereby negating fisheries and even conservation benefits. By using a stage-structured, spatially explicit stochastic model, we show that catches under quota-based fisheries that include a network of MRs can exceed maximum sustainable yield (MSY) under conventional quota management if reserves provide protection to old, large spawners that disproportionally contribute to recruitment outside the reserves. Modelling results predict that the net fishery benefit of MRs is lost when gains in fecundity of old, large individuals are small, is highest in the case of sedentary adults with high larval dispersal, and decreases with adult mobility. We also show that environmental variability may mask fishery benefits of reserve implementation and that MRs may buffer against collapse when sustainable catch quotas are exceeded owing to stock overestimation or systematic overfishing. PMID:26460129
Huang, Qiusen; Bu, Qingwei; Zhong, Wenjue; Shi, Kaichong; Cao, Zhiguo; Yu, Gang
2018-02-01
For pharmaceuticals, ecological risk assessment based on traditional toxicity endpoints may not be properly protective in the long run, since the mode of action can vary among compounds intended for different therapeutic uses. In this study, the predicted no-effect concentrations (PNECs) of two selected pharmaceuticals, ibuprofen (IBU) and sulfamethoxazole (SMX), were derived based on either traditional endpoints of survival and growth or nonlethal endpoints such as reproduction, biochemical and molecular data. The PNECs of IBU based on biochemical-cellular and reproduction data were 0.018 and 0.026 μg L-1, significantly lower than those derived from other endpoints, while the lowest PNEC for SMX, 0.89 μg L-1, was derived from growth data. Ecological risk assessment was performed for IBU and SMX in the aquatic environment by applying hazard quotient and probabilistic distribution-based quotient (DBQ) methods. The results showed that the probability of the DBQs of IBU exceeding 0.1 was 11.2%, while for SMX the probability was 0.9%, which could be neglected. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fang, Kaizheng; Mu, Daobin; Chen, Shi; Wu, Borong; Wu, Feng
2012-06-01
In this study, a prediction model based on an artificial neural network is constructed to simulate the surface temperature of a nickel-metal hydride battery. The model is a back-propagation network trained by the Levenberg-Marquardt algorithm. Under each ambient temperature of 10 °C, 20 °C, 30 °C and 40 °C, an 8 Ah cylindrical Ni-MH battery is charged at rates of 1 C, 3 C and 5 C to an SOC of 110% in order to provide data for model training. Linear regression is used to check the quality of the model training, along with mean square error and absolute error. The constructed model shows excellent training quality, ensuring prediction accuracy. The surface temperature of the battery during charging is predicted by the model under ambient temperatures of 50 °C, 60 °C and 70 °C. The results are validated in good agreement with experimental data. The battery surface temperature is calculated to exceed 90 °C under an ambient temperature of 60 °C if the battery is overcharged at 5 C, which might cause battery safety issues.
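A rough stand-in for such a network using scikit-learn, which offers L-BFGS rather than Levenberg-Marquardt, so this is an analogous rather than identical training setup; inputs and the toy response are invented for illustration:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.uniform([10.0, 1.0, 0.0], [40.0, 5.0, 110.0], size=(500, 3))  # T_amb, rate, SOC
y = 0.9 * X[:, 0] + 8.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=1.0, size=500)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                                   max_iter=5000, random_state=0)).fit(X, y)
print(model.predict([[60.0, 5.0, 100.0]]))  # predicted surface temperature, °C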
Methods, apparatus and system for notification of predictable memory failure
Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.
2017-01-03
A method for providing notification of a predictable memory failure includes the steps of: obtaining information regarding at least one condition associated with a memory; calculating a memory failure probability as a function of the obtained information; calculating a failure probability threshold; and generating a signal when the memory failure probability exceeds the failure probability threshold, the signal being indicative of a predicted future memory failure.
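In pseudocode-like Python the claimed flow reduces to the following (names and the form of the model and threshold functions are assumptions, not taken from the patent):

def check_memory(health_info, failure_model, threshold_fn):
    # sketch of the notification flow described in the claim
    p_fail = failure_model(health_info)    # e.g., from correctable-error counts
    threshold = threshold_fn(health_info)  # e.g., tied to checkpointing cost
    if p_fail > threshold:
        return {"event": "predicted_memory_failure", "p": p_fail}
    return None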
Bao, Yi; Chen, Yizheng; Hoehler, Matthew S.; Smith, Christopher M.; Bundy, Matthew; Chen, Genda
2016-01-01
This paper presents high temperature measurements using a Brillouin scattering-based fiber optic sensor and the application of the measured temperatures and building-code-recommended material parameters in enhanced thermomechanical analysis of simply supported steel beams subjected to combined thermal and mechanical loading. The distributed temperature sensor captures detailed, nonuniform temperature distributions that agree locally with thermocouple measurements to within a 4.7% average difference at the 95% confidence level. The simulated strains and deflections are validated using measurements from a second distributed fiber optic (strain) sensor and two linear potentiometers, respectively. The results demonstrate that the temperature-dependent material properties specified in the four investigated building codes lead to strain predictions with less than 13% average error at the 95% confidence level and that the European building code provided the best predictions. However, the implicit consideration of creep in the European code is insufficient when the beam temperature exceeds 800°C. PMID:28239230
Origins and Destinations: Tracking Planet Composition through Planet Formation Simulations
NASA Astrophysics Data System (ADS)
Chance, Quadry; Ballard, Sarah
2018-01-01
There are now several thousand confirmed exoplanets, a number which far exceeds our resources to study them all in detail. In particular, planets around M dwarfs provide the best opportunity for in-depth study of their atmospheres by telescopes in the near future. The question of which M dwarf planets most merit follow-up resources is a pressing one, given that NASA's TESS mission will soon find hundreds of such planets orbiting stars bright enough for both ground- and space-based follow-up. Our work aims to predict the approximate composition of planets around these stars through n-body simulations of the last stage of planet formation. With a variety of initial disk conditions, we investigate how the relative abundances of both refractory and volatile compounds in the primordial planetesimals are mapped to the final planet outcomes. These predictions can serve as a basis for making an educated guess about (a) which planets to observe with precious resources like JWST and (b) how to identify them based on dynamical clues.
Tertiary structural propensities reveal fundamental sequence/structure relationships.
Zheng, Fan; Zhang, Jian; Grigoryan, Gevorg
2015-05-05
Extracting useful generalizations from the continually growing Protein Data Bank (PDB) is of central importance. We hypothesize that the PDB contains valuable quantitative information on the level of local tertiary structural motifs (TERMs). We show that by breaking a protein structure into its constituent TERMs, and querying the PDB to characterize the natural ensemble matching each, we can estimate the compatibility of the structure with a given amino acid sequence through a metric we term "structure score." Considering submissions from recent Critical Assessment of Structure Prediction (CASP) experiments, we found a strong correlation (R = 0.69) between structure score and model accuracy, with poorly predicted regions readily identifiable. This performance exceeds that of leading atomistic statistical energy functions. Furthermore, TERM-based analysis of two prototypical multi-state proteins rapidly produced structural insights fully consistent with prior extensive experimental studies. We thus find that TERM-based analysis should have considerable utility for protein structural biology. Copyright © 2015 Elsevier Ltd. All rights reserved.
Rollover risk prediction of heavy vehicles by reliability index and empirical modelling
NASA Astrophysics Data System (ADS)
Sellami, Yamine; Imine, Hocine; Boubezoul, Abderrahmane; Cadiou, Jean-Charles
2018-03-01
This paper focuses on a combination of a reliability-based approach and an empirical modelling approach for rollover risk assessment of heavy vehicles. A reliability-based warning system is developed to alert the driver to a potential rollover before entering a bend. The idea behind the proposed methodology is to estimate the rollover risk by the probability that the vehicle load transfer ratio (LTR) exceeds a critical threshold. Accordingly, a so-called reliability index may be used as a measure to assess the vehicle's safe functioning. In the reliability method, computing the maximum of the LTR requires predicting the vehicle dynamics over the bend, which can in some cases be an intractable or time-consuming problem. With the aim of improving the reliability computation time, an empirical model is developed to substitute for the vehicle dynamics and rollover models, using the SVM (Support Vector Machines) algorithm. The preliminary results demonstrate the effectiveness of the proposed approach.
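The reliability computation amounts to estimating P(max LTR ≥ critical threshold); a Monte Carlo sketch with an invented LTR surrogate standing in for the vehicle dynamics model (the actual system uses an SVM-based empirical model):

import numpy as np

rng = np.random.default_rng(4)
speed = rng.normal(22.0, 1.5, size=100_000)  # hypothetical uncertain inputs, m/s
h_cg = rng.normal(1.9, 0.1, size=100_000)    # center-of-gravity height, m
ltr_max = 0.02 * speed * h_cg                # toy surrogate for max LTR over the bend

p_rollover = float(np.mean(ltr_max >= 1.0))            # probability LTR exceeds 1
beta = (1.0 - ltr_max.mean()) / ltr_max.std()          # crude reliability index
print(p_rollover, beta)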
Quintero, Ignacio; Wiens, John J
2013-08-01
A key question in predicting responses to anthropogenic climate change is: how quickly can species adapt to different climatic conditions? Here, we take a phylogenetic approach to this question. We use 17 time-calibrated phylogenies representing the major tetrapod clades (amphibians, birds, crocodilians, mammals, squamates, turtles) and climatic data from distributions of > 500 extant species. We estimate rates of change based on differences in climatic variables between sister species and estimated times of their splitting. We compare these rates to predicted rates of climate change from 2000 to 2100. Our results are striking: matching projected changes for 2100 would require rates of niche evolution that are > 10,000 times faster than rates typically observed among species, for most variables and clades. Despite many caveats, our results suggest that adaptation to projected changes in the next 100 years would require rates that are largely unprecedented based on observed rates among vertebrate species. © 2013 John Wiley & Sons Ltd/CNRS.
Predictive Analytics In Healthcare: Medications as a Predictor of Medical Complexity.
Higdon, Roger; Stewart, Elizabeth; Roach, Jared C; Dombrowski, Caroline; Stanberry, Larissa; Clifton, Holly; Kolker, Natali; van Belle, Gerald; Del Beccaro, Mark A; Kolker, Eugene
2013-12-01
Children with special healthcare needs (CSHCN) require health and related services that exceed those required by most hospitalized children. A small but growing and important subset of the CSHCN group includes medically complex children (MCCs). MCCs typically have comorbidities and disproportionately consume healthcare resources. To enable strategic planning for the needs of MCCs, simple screens to identify potential MCCs rapidly in a hospital setting are needed. We assessed whether the number of medications used and the class of those medications correlated with MCC status. Retrospective analysis of medication data from the inpatients at Seattle Children's Hospital found that the numbers of inpatient and outpatient medications significantly correlated with MCC status. Numerous variables based on counts of medications, use of individual medications, and use of combinations of medications were considered, resulting in a simple model based on three different counts of medications: outpatient and inpatient drug classes and individual inpatient drug names. The combined model was used to rank the patient population for medical complexity. As a result, simple, objective admission screens for predicting the complexity of patients based on the number and type of medications were implemented.
Hess, G.W.; Bohman, L.R.
1996-01-01
Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada were developed using streamflow records at six gaged sites and basin physical and climatic characteristics. Streamflow data at gaged sites were related by regression techniques to concurrent flows at nearby gaging stations so that monthly mean streamflows for periods of missing or no record can be estimated for gaged sites in central Nevada. The standard error of estimate for relations at these sites ranged from 12 to 196 percent. Also, monthly streamflow data for selected percent exceedence levels were used in regression analyses with basin and climatic variables to determine relations for ungaged basins for annual and monthly percent exceedence levels. Analyses indicate that the drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the annual percent exceedence, the standard error of estimate of the relations for ungaged sites ranged from 51 to 96 percent and standard error of prediction for ungaged sites ranged from 96 to 249 percent. For the monthly percent exceedence values, the standard error of estimate of the relations ranged from 31 to 168 percent, and the standard error of prediction ranged from 115 to 3,124 percent. Reliability and limitations of the estimating methods are described.
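The underlying regression step, relating a percent exceedence flow to basin characteristics, can be sketched with statsmodels; the data are synthetic and the two predictors follow the most significant variables named above:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 30
log_da = rng.normal(2.0, 0.5, size=n)    # log10 drainage area, synthetic values
pct_hi = rng.uniform(0.0, 30.0, size=n)  # percent of basin above 10,000 ft, synthetic
log_q = 0.9 * log_da + 0.02 * pct_hi + rng.normal(scale=0.2, size=n)

X = sm.add_constant(np.column_stack([log_da, pct_hi]))
fit = sm.OLS(log_q, X).fit()
print(fit.params, fit.rsquared)  # illustrative relation for one exceedence level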
von der Ohe, Peter Carsten; Dulio, Valeria; Slobodnik, Jaroslav; De Deckere, Eric; Kühne, Ralph; Ebert, Ralf-Uwe; Ginebreda, Antoni; De Cooman, Ward; Schüürmann, Gerrit; Brack, Werner
2011-05-01
Given the huge number of chemicals released into the environment and existing time and budget constraints, there is a need to prioritize chemicals for risk assessment and monitoring in the context of the European Union Water Framework Directive (EU WFD). This study is the first to assess the risk of 500 organic substances based on observations in the four European river basins of the Elbe, Scheldt, Danube and Llobregat. A decision tree is introduced that first classifies chemicals into six categories depending on the information available, which allows water managers to focus on the next steps (e.g. derivation of Environmental Quality Standards (EQS), improvement of analytical methods, etc.). The priority within each category is then evaluated based on two indicators, the Frequency of Exceedance and the Extent of Exceedance of Predicted No-Effect Concentrations (PNECs). These two indicators are based on maximum environmental concentrations (MEC), rather than the commonly used statistically based averages (Predicted Effect Concentration, PEC), and are compared to the lowest acute-based (PNEC(acute)) or chronic-based thresholds (PNEC(chronic)). For 56% of the compounds, PNECs were available from existing risk assessments, and the majority of these PNECs were derived from chronic toxicity data or simulated ecosystem studies (mesocosm) with rather low assessment factors. The limitations of this concept for risk assessment purposes are discussed. For the remainder, provisional PNECs (P-PNECs) were established from read-across models for acute toxicity to the standard test organisms Daphnia magna, Pimephales promelas and Selenastrum capricornutum. On the one hand, the prioritization revealed that about three-quarters of the 44 substances with MEC/PNEC ratios above ten were pesticides. On the other hand, based on the monitoring data used in this study, no risk with regard to the water phase could be found for eight of the 41 priority substances, indicating a first success of the implementation of the WFD in the investigated river basins. Copyright © 2011 Elsevier B.V. All rights reserved.
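A sketch of the two prioritization indicators as they might be computed per substance (the exact definitions in the study may differ, e.g. in how site maxima are aggregated):

import numpy as np

def prioritization_indicators(mec_by_site, pnec):
    # Frequency of Exceedance: fraction of sites where MEC exceeds the PNEC;
    # Extent of Exceedance: the largest MEC/PNEC ratio observed
    ratios = np.asarray(mec_by_site, dtype=float) / pnec
    return float(np.mean(ratios > 1.0)), float(ratios.max())

foe, eoe = prioritization_indicators([0.2, 1.5, 3.1, 0.05], pnec=1.0)
print(foe, eoe)  # 0.5, 3.1 for these illustrative concentrations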
Prediction of static friction coefficient in rough contacts based on the junction growth theory
NASA Astrophysics Data System (ADS)
Spinu, S.; Cerlinca, D.
2017-08-01
The classic approach to the slip-stick contact is based on the framework advanced by Mindlin, in which localized slip occurs on the contact area when the local shear traction exceeds the product between the local pressure and the static friction coefficient. This assumption may be too conservative in the case of high tractions arising at the asperities tips in the contact of rough surfaces, because the shear traction may be allowed to exceed the shear strength of the softer material. Consequently, the classic frictional contact model is modified in this paper so that gross sliding occurs when the junctions formed between all contacting asperities are independently sheared. In this framework, when the contact tractions, normal and shear, exceed the hardness of the softer material on the entire contact area, the material of the asperities yields and the junction growth process ends in all contact regions, leading to gross sliding inception. This friction mechanism is implemented in a previously proposed numerical model for the Cattaneo-Mindlin slip-stick contact problem, which is modified to accommodate the junction growth theory. The frictionless normal contact problem is solved first, then the tangential force is gradually increased, until gross sliding inception. The contact problems in the normal and in the tangential direction are successively solved, until one is stabilized in relation to the other. The maximum tangential force leading to a non-vanishing stick area is the static friction force that can be sustained by the rough contact. The static friction coefficient is eventually derived as the ratio between the latter friction force and the normal force.
Risk of nitrate in groundwaters of the United States - A national perspective
Nolan, B.T.; Ruddy, B.C.; Hitt, K.J.; Helsel, D.R.
1997-01-01
Nitrate contamination of groundwater occurs in predictable patterns, based on findings of the U.S. Geological Survey's (USGS) National Water Quality Assessment (NAWQA) Program. The NAWQA Program was begun in 1991 to describe the quality of the Nation's water resources, using nationally consistent methods. Variables affecting nitrate concentration in groundwater were grouped as 'input' factors (population density and the amount of nitrogen contributed by fertilizer, manure, and atmospheric sources) and 'aquifer vulnerability' factors (soil drainage characteristic and the ratio of woodland acres to cropland acres in agricultural areas) and compiled in a national map that shows patterns of risk for nitrate contamination of groundwater. Areas with high nitrogen input, well-drained soils, and low woodland to cropland ratio have the highest potential for contamination of shallow groundwater by nitrate. Groundwater nitrate data collected through 1992 from wells less than 100 ft deep generally verified the risk patterns shown on the national map. Median nitrate concentration was 0.2 mg/L in wells representing the low-risk group, and the maximum contaminant level (MCL) was exceeded in 3% of the wells. In contrast, median nitrate concentration was 4.8 mg/L in wells representing the high-risk group, and the MCL was exceeded in 25% of the wells.
Triantafyllidou, Simoni; Le, Trung; Gallagher, Daniel; Edwards, Marc
2014-01-01
The risk that students develop elevated blood lead levels from drinking-water consumption at school was assessed, a different approach from predicting geometric mean blood lead levels. Measured water lead levels (WLLs) from 63 elementary schools in Seattle and 601 elementary schools in Los Angeles were acquired before and after voluntary remediation of water lead contamination problems. Combined exposures to measured school WLLs (first-draw and flushed, 50% of water consumption) and home WLLs (50% of water consumption) were used as inputs to the Integrated Exposure Uptake Biokinetic (IEUBK) model for each school. In Seattle an average 11.2% of students were predicted to exceed a blood lead threshold of 5 μg/dL across 63 schools pre-remediation, but predicted risks at individual schools varied (7% risk of exceedance at a "low exposure school", 11% risk at a "typical exposure school", and 31% risk at a "high exposure school"). Addition of water filters and removal of lead plumbing lowered school WLL inputs to the model, and reduced the predicted risk output to 4.8% on average for Seattle elementary students across all 63 schools. The remnant post-remediation risk was attributable to other assumed background lead sources in the model (air, soil, dust, diet and home WLLs), with school WLLs practically eliminated as a health threat. Los Angeles schools instead instituted a flushing program, which was assumed to eliminate first-draw WLLs as inputs to the model. With the assumed benefits of remedial flushing, the predicted average risk of students exceeding a BLL threshold of 5 μg/dL dropped from 8.6% to 6.0% across 601 schools. In an era of increasingly stringent public health goals (e.g., reduction of the blood lead safety threshold from 10 to 5 μg/dL), quantifiable health benefits to students were predicted after water lead remediation at two large US school systems. © 2013.
Modelling of PM10 concentration for industrialized area in Malaysia: A case study in Shah Alam
NASA Astrophysics Data System (ADS)
N, Norazian Mohamed; Abdullah, M. M. A.; Tan, Cheng-yau; Ramli, N. A.; Yahaya, A. S.; Fitri, N. F. M. Y.
In Malaysia, the predominant air pollutants are suspended particulate matter (SPM) and nitrogen dioxide (NO2). This research focuses on PM10, as it may harm human health as well as the environment. Six distributions, namely Weibull, log-normal, gamma, Rayleigh, Gumbel and Frechet, were chosen to model the PM10 observations at the chosen industrial area, i.e. Shah Alam. Hourly average data for the one-year periods of 2006 and 2007 were used for this research. For parameter estimation, the method of maximum likelihood estimation (MLE) was selected. Four performance indicators, namely mean absolute error (MAE), root mean squared error (RMSE), coefficient of determination (R2) and prediction accuracy (PA), were applied to determine the goodness-of-fit of the distributions. The best distribution fitting the PM10 observations in Shah Alam was found to be the log-normal distribution. The probabilities of exceedance were calculated, and the return period for the coming year was predicted from the cumulative distribution function (cdf) of the best-fit distribution. For the 2006 data, Shah Alam was predicted to exceed 150 μg/m3 for 5.9 days in 2007, with a return period of one occurrence per 62 days. For the 2007 data, the studied area was predicted not to exceed the MAAQG of 150 μg/m3.
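The distribution-fitting and return-period step can be reproduced with SciPy; a sketch on synthetic daily PM10 data (the paper fitted six candidate distributions to hourly data; only the log-normal case is shown here):

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
daily_pm10 = rng.lognormal(mean=3.8, sigma=0.5, size=365)  # synthetic, µg/m3

shape, loc, scale = stats.lognorm.fit(daily_pm10, floc=0)  # MLE, location fixed at 0
p = stats.lognorm.sf(150.0, shape, loc=loc, scale=scale)   # P(concentration > 150)
print(365.0 * p, "expected exceedance days per year;", 1.0 / p, "day return period")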
Online Bayesian Learning with Natural Sequential Prior Distribution Used for Wind Speed Prediction
NASA Astrophysics Data System (ADS)
Cheggaga, Nawal
2017-11-01
Predicting wind speed is one of the most important and critical tasks in a wind farm. Approaches that directly describe the stochastic dynamics of the meteorological data face problems related to its non-Gaussian statistics and the presence of seasonal effects. In this paper, online Bayesian learning is successfully applied to three-layer perceptrons used for wind speed prediction. First, a conventional transition model based on the squared norm of the difference between the current parameter vector and the previous parameter vector is used. We noticed that this transition model does not adequately consider the difference between the current and the previous wind speed measurements. To adequately consider this difference, we use a natural sequential prior. The proposed transition model uses a Fisher information matrix to account for the difference between the observation models more naturally. The obtained results showed good agreement between the measured and predicted series. The mean relative error over the whole data set does not exceed 5%.
Materials separation by dielectrophoresis
NASA Technical Reports Server (NTRS)
Sagar, A. D.; Rose, R. M.
1988-01-01
The feasibility of vacuum dielectrophoresis as a method for particulate materials separation in a microgravity environment was investigated. Particle separations were performed in a specially constructed miniature drop-tower with a residence time of about 0.3 sec. Particle motion in such a system is independent of size and based only on density and dielectric constant, for a given electric field. The observed separations and deflections exceeded the theoretical predictions, probably due to multiparticle effects. In any case, this approach should work well in microgravity for many classes of materials, with relatively simple apparatus and low weight and power requirements.
Inversion of Qubit Energy Levels in Qubit-Oscillator Circuits in the Deep-Strong-Coupling Regime.
Yoshihara, F; Fuse, T; Ao, Z; Ashhab, S; Kakuyanagi, K; Saito, S; Aoki, T; Koshino, K; Semba, K
2018-05-04
We report on experimentally measured light shifts of superconducting flux qubits deep-strongly coupled to LC oscillators, where the coupling constants are comparable to the qubit and oscillator resonance frequencies. By using two-tone spectroscopy, the energies of the six lowest levels of each circuit are determined. We find huge Lamb shifts that exceed 90% of the bare qubit frequencies and inversions of the qubits' ground and excited states when there are a finite number of photons in the oscillator. Our experimental results agree with theoretical predictions based on the quantum Rabi model.
Estimation of metallic structure durability for a known law of stress variation
NASA Astrophysics Data System (ADS)
Mironov, V. I.; Lukashuk, O. A.; Ogorelkov, D. A.
2017-12-01
Overload of machines working in transient operational modes leads to stresses in their load-bearing metallic structures that considerably exceed the endurance limit. Estimation of fatigue damage based on linear summation offers a more accurate prediction of machine durability. The paper presents an alternative approach to estimating the factors of cyclic degradation of a material. Free damped vibrations of the bridge girder of an overhead crane, which follow a known logarithmic decrement, are studied. It is shown that taking cyclic degradation into account substantially decreases the estimated durability of a product.
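Linear damage summation (commonly known as the Palmgren-Miner rule) is a one-liner; a sketch with invented cycle counts and S-N curve lives:

def miner_damage(cycle_counts, cycles_to_failure):
    # linear damage summation: failure is expected when the sum reaches 1
    return sum(n / N for n, N in zip(cycle_counts, cycles_to_failure))

D = miner_damage([1e4, 5e3, 2e2], [1e6, 2e5, 1e4])  # invented service spectrum
print(D, "-> remaining life fraction:", 1.0 - D)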
Bahouth, George; Graygo, Jill; Digges, Kennerly; Schulman, Carl; Baur, Peter
2014-01-01
The objectives of this study are to (1) characterize the population of crashes meeting the Centers for Disease Control and Prevention (CDC)-recommended 20% risk of Injury Severity Score (ISS)>15 injury and (2) explore the positive and negative effects of an advanced automatic crash notification (AACN) system whose threshold for high-risk indications is 10% versus 20%. Binary logistic regression analysis was performed to predict the occurrence of motor vehicle crash injuries at both the ISS>15 and Maximum Abbreviated Injury Scale (MAIS) 3+ level. Models were trained using crash characteristics recommended by the CDC Committee on Advanced Automatic Collision Notification and Triage of the Injured Patient. Each model was used to assign the probability of severe injury (defined as MAIS 3+ or ISS>15 injury) to a subset of NASS-CDS cases based on crash attributes. Subsequently, actual AIS and ISS levels were compared with the predicted probability of injury to determine the extent to which the seriously injured had corresponding probabilities exceeding the 10% and 20% risk thresholds. Models were developed using an 80% sample of NASS-CDS data from 2002 to 2012 and evaluations were performed using the remaining 20% of cases from the same period. Within the population of seriously injured (i.e., those having one or more AIS 3 or higher injuries), the number of occupants whose injury risk did not exceed the 10% and 20% thresholds was estimated to be 11,700 and 18,600, respectively, each year using the MAIS 3+ injury model. For the ISS>15 model, 8,100 and 11,000 occupants sustained ISS>15 injuries yet their injury probability did not reach the 10% and 20% thresholds for severe injury, respectively. Conversely, model predictions suggested that, at the 10% and 20% thresholds, 207,700 and 55,400 drivers respectively would be incorrectly flagged as injured when their injuries had not reached the AIS 3 level. For the ISS>15 model, 87,300 and 41,900 drivers would be incorrectly flagged as injured when injury severity had not reached the ISS>15 level. This article provides important information comparing the expected positive and negative effects of an AACN system with thresholds at the 10% and 20% levels using 2 outcome metrics. Overall, results suggest that the 20% risk threshold would not provide a useful notification to improve the quality of care for a large number of seriously injured crash victims. Alternatively, a lower threshold may increase the overtriage rate. Based on the vehicle damage observed for crashes reaching and exceeding the 10% risk threshold, we anticipate that rescue services would have been deployed based on current Public Safety Answering Point (PSAP) practices.
Development and evaluation of sediment quality guidelines for Florida coastal waters
MacDonald, Donald D.; Carr, R. Scott; Calder, Fred D.; Long, Edward R.; Ingersoll, Christopher G.
1996-01-01
The weight-of-evidence approach to the development of sediment quality guidelines (SQGs) was modified to support the derivation of biological effects-based SQGs for Florida coastal waters. Numerical SQGs were derived for 34 substances, including nine trace metals, 13 individual polycyclic aromatic hydrocarbons (PAHs), three groups of PAHs, total polychlorinated biphenyls (PCBs), seven pesticides and one phthalate ester. For each substance, a threshold effects level (TEL) and a probable effects level (PEL) was calculated. These two values defined three ranges of chemical concentrations, including those that were (1) rarely, (2) occasionally or (3) frequently associated with adverse effects. The SQGs were then evaluated to determine their degree of agreement with other guidelines (an indicator of comparability) and the percent incidence of adverse effects within each concentration range (an indicator of reliability). The guidelines also were used to classify (using a dichotomous system: toxic, with one or more exceedances of the PELs or non-toxic, with no exceedances of the TELs) sediment samples collected from various locations in Florida and the Gulf of Mexico. The accuracy of these predictions was then evaluated using the results of the biological tests that were performed on the same sediment samples. The resultant SQGs were demonstrated to provide practical, reliable and predictive tools for assessing sediment quality in Florida and elsewhere in the southeastern portion of the United States.
Compound activity prediction using models of binding pockets or ligand properties in 3D
Kufareva, Irina; Chen, Yu-Chen; Ilatovskiy, Andrey V.; Abagyan, Ruben
2014-01-01
Transient interactions of endogenous and exogenous small molecules with flexible binding sites in proteins or macromolecular assemblies play a critical role in all biological processes. Current advances in high-resolution protein structure determination, database development, and docking methodology make it possible to design three-dimensional models for prediction of such interactions with increasing accuracy and specificity. Using the data collected in the Pocketome encyclopedia, we here provide an overview of two types of three-dimensional ligand activity models, pocket-based and ligand property-based, for two important classes of proteins, nuclear and G-protein coupled receptors. For half the targets, the pocket models discriminate actives from property-matched decoys with acceptable accuracy (the area under the ROC curve, AUC, exceeding 84%), and for about one fifth of the targets with high accuracy (AUC > 95%). The 3D ligand property field models performed with AUC better than 95% in half of the cases. The high-performance models can already become a basis for activity predictions for new chemicals. Family-wide benchmarking of the models highlights the strengths of both approaches and helps identify their inherent bottlenecks and challenges. PMID:23116466
Boore, D.M.
2001-01-01
This article has the modest goal of comparing the ground motions recorded during the 1999 Chi-Chi, Taiwan, mainshock with predictions from four empirically based equations commonly used for western North America; these empirical predictions are largely based on data from California. Comparisons are made for peak acceleration and 5%-damped response spectra at periods between 0.1 and 4 sec. The general finding is that the Chi-Chi ground motions are smaller than those predicted from the empirically based equations for periods less than about 1 sec by factors averaging about 0.4 but as small as 0.26 (depending on period, on which equation is used, and on whether the sites are assumed to be rock or soil). There is a trend for the observed motions to approach or even exceed the predicted motions for longer periods. Motions at similar distances (30-60 km) to the east and to the west of the fault differ dramatically at periods between about 2 and 20 sec: long-duration wave trains are present on the motions to the west, and when normalized to similar amplitudes at short periods, the response spectra of the motions at the western stations are as much as five times larger than those of motions from eastern stations. The explanation for the difference is probably related to site and propagation effects; the western stations are on the Coastal Plain, whereas the eastern stations are at the foot of young and steep mountains, either in the relatively narrow Longitudinal Valley or along the eastern coast; the sediments underlying the eastern stations are probably shallower and have higher velocity than those under the western stations.
A hybrid model for predicting carbon monoxide from vehicular exhausts in urban environments
NASA Astrophysics Data System (ADS)
Gokhale, Sharad; Khare, Mukesh
Several deterministic air quality models evaluate and predict the frequently occurring pollutant concentrations well but, in general, are incapable of predicting the 'extreme' concentrations. In contrast, statistical distribution models overcome this limitation and predict the 'extreme' concentrations. However, environmental damage is caused both by the extremes and by the sustained average concentration of pollutants. Hence, a model should predict not only the 'extreme' ranges but also the 'middle' ranges of pollutant concentrations, i.e. the entire range. Hybrid modelling is one of the techniques that estimates/predicts the 'entire range' of the distribution of pollutant concentrations by combining deterministic models with suitable statistical distribution models (Jakeman et al., 1988). In the present paper, a hybrid model has been developed to predict the carbon monoxide (CO) concentration distributions at one of the traffic intersections, Income Tax Office (ITO), in the Delhi city, where the meteorology is 'tropical' and the traffic is heterogeneous in nature, consisting of light vehicles, heavy vehicles, three-wheelers (auto rickshaws) and two-wheelers (scooters, motorcycles, etc.). The model combines the general finite line source model (GFLSM) as its deterministic component and the log-logistic distribution (LLD) model as its statistical component. The hybrid (GFLSM-LLD) model is then applied at the ITO intersection. The results show that the hybrid model predictions match the observed CO concentration data within the 5-99 percentile range. The model is further validated at a different street location, the Sirifort roadway. The validation results show that the model predicts CO concentrations fairly well (d=0.91) in the 10-95 percentile range. A regulatory compliance analysis is also developed to estimate the probability of exceedance of hourly CO concentrations beyond the National Ambient Air Quality Standards (NAAQS) of India.
NASA Astrophysics Data System (ADS)
Chang, Ya-Ju; Huang, Hui-Chun; Hsueh, Yuan-Yu; Wang, Shao-Wei; Su, Fong-Chin; Chang, Chih-Han; Tang, Ming-Jer; Li, Yi-Shuan; Wang, Shyh-Hau; Shung, Kirk K.; Chien, Shu; Wu, Chia-Ching
2016-02-01
Little is known regarding the interplay between the mechanical and molecular bases of vein graft restenosis. We elucidated stenosis initiation using a high-frequency ultrasonic (HFU) echogenicity platform and estimated the endothelium yield stress from von Mises stress computation to predict the damage locations in living rats over time. The venous-arterial transition induced the molecular cascades for autophagy and apoptosis in venous endothelial cells (ECs) that cause neointimal hyperplasia, which correlated with the high echogenicity in HFU images and the large mechanical stress exceeding the yield strength. The ex vivo perfusion of arterial laminar shear stress to isolated veins further confirmed the correlation. EC damage can be rescued by inhibiting autophagy formation using 3-methyladenine (3-MA). Pretreatment of veins with 3-MA prior to grafting reduced the pathological increases of echogenicity and neointima formation in rats. Therefore, this platform provides non-invasive temporal-spatial measurement and prediction of restenosis after the venous-arterial transition, as well as monitoring of treatment progression.
The velocity field of clusters of galaxies within 100 megaparsecs. II - Northern clusters
NASA Technical Reports Server (NTRS)
Mould, J. R.; Akeson, R. L.; Bothun, G. D.; Han, M.; Huchra, J. P.; Roth, J.; Schommer, R. A.
1993-01-01
Distances and peculiar velocities for galaxies in eight clusters and groups have been determined by means of the near-infrared Tully-Fisher relation. With the possible exception of a group halfway between us and the Hercules Cluster, we observe peculiar velocities of the same order as the measuring errors of about 400 km/s. The present sample is drawn from the northern Galactic hemisphere and delineates a quiet region in the Hubble flow. This contrasts with the large-scale flows seen in the Hydra-Centaurus and Perseus-Pisces regions. We compare the observed peculiar velocities with predictions based upon the gravity field inferred from the IRAS redshift survey. The differences between the observed and predicted peculiar motions are generally small, except near dense structures, where the observed motions exceed the predictions by significant amounts. Kinematic models of the velocity field are also compared with the data. We cannot distinguish between parameterized models with a great attractor or models with a bulk flow.
21 CFR 172.892 - Food starch-modified.
Code of Federal Regulations, 2010 CFR
2010-04-01
... phosphorus. 1-Octenyl succinic anhydride, not to exceed 3 percent 1-Octenyl succinic anhydride, not to exceed... beverage bases as defined in § 170.3(n)(3) of this chapter. Phosphorus oxychloride, not to exceed 0.1 percent Phosphorus oxychloride, not to exceed 0.1 percent, followed by either acetic anhydride, not to...
21 CFR 172.892 - Food starch-modified.
Code of Federal Regulations, 2011 CFR
2011-04-01
... phosphorus. 1-Octenyl succinic anhydride, not to exceed 3 percent 1-Octenyl succinic anhydride, not to exceed... beverage bases as defined in § 170.3(n)(3) of this chapter. Phosphorus oxychloride, not to exceed 0.1 percent Phosphorus oxychloride, not to exceed 0.1 percent, followed by either acetic anhydride, not to...
Gill, Katherine L.; Gertz, Michael; Houston, J. Brian
2013-01-01
A physiologically based pharmacokinetic (PBPK) modeling approach was used to assess the prediction accuracy of propofol hepatic and extrahepatic metabolic clearance and to address previously reported underprediction of in vivo clearance based on static in vitro–in vivo extrapolation methods. The predictive capacity of propofol intrinsic clearance data (CLint) obtained in human hepatocytes and liver and kidney microsomes was assessed using the PBPK model developed in MATLAB software. Microsomal data obtained by both substrate depletion and metabolite formation methods and in the presence of 2% bovine serum albumin were considered in the analysis. Incorporation of hepatic and renal in vitro metabolic clearance in the PBPK model resulted in underprediction of propofol clearance regardless of the source of in vitro data; the predicted value did not exceed 35% of the observed clearance. Subsequently, propofol clinical data from three dose levels in intact patients and anhepatic subjects were used for the optimization of hepatic and renal CLint in a simultaneous fitting routine. The optimization process highlighted that renal glucuronidation clearance was underpredicted to a greater extent than liver clearance, requiring empirical scaling factors of 17 and 9, respectively. The use of optimized clearance parameters predicted hepatic and renal extraction ratios within 20% of the observed values reported in an additional independent clinical study. This study highlights the complexity involved in assessing the contribution of extrahepatic clearance mechanisms and illustrates the application of PBPK modeling, in conjunction with clinical data, to assess prediction of clearance from in vitro data for each tissue individually. PMID:23303442
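The clearance arithmetic at the core of this abstract can be illustrated with the standard well-stirred organ model; all parameter values below are rough literature-style placeholders, and only the empirical scaling factors of 9 (liver) and 17 (kidney) come from the abstract.

    # Well-stirred organ model: CL_organ = Q * fu * CLint / (Q + fu * CLint)
    def well_stirred(q_blood, fu, cl_int):
        return q_blood * fu * cl_int / (q_blood + fu * cl_int)

    Q_H, Q_K = 90.0, 66.0         # hepatic and renal blood flows, L/h (typical adult values)
    fu = 0.02                     # fraction unbound (approximate for propofol)
    cl_int_liver = 4000.0         # scaled in vitro intrinsic clearance, L/h (illustrative)
    cl_int_kidney = 800.0         # illustrative

    unscaled = well_stirred(Q_H, fu, cl_int_liver) + well_stirred(Q_K, fu, cl_int_kidney)
    scaled = (well_stirred(Q_H, fu, cl_int_liver * 9) +
              well_stirred(Q_K, fu, cl_int_kidney * 17))
    print(f"unscaled CL = {unscaled:.1f} L/h; with empirical scaling = {scaled:.1f} L/h")

With these placeholder inputs the unscaled prediction is roughly 40% of the scaled one, mirroring the kind of underprediction the study reports.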
SCORE should be preferred to Framingham to predict cardiovascular death in French population.
Marchant, Ivanny; Boissel, Jean-Pierre; Kassaï, Behrouz; Bejan, Theodora; Massol, Jacques; Vidal, Chrystelle; Amsallem, Emmanuel; Naudin, Florence; Galan, Pilar; Czernichow, Sébastien; Nony, Patrice; Gueyffier, François
2009-10-01
Numerous studies have examined the validity of available scores to predict absolute cardiovascular risk. We developed a virtual population based on data representative of the French population and compared the performance of the two most popular risk equations for predicting cardiovascular death: Framingham and SCORE. A population was built from official French demographic statistics and summarized data from representative observational studies. The 10-year coronary and cardiovascular death risks and their ratio were computed for each individual with the SCORE and Framingham equations. The resulting rates were compared with those derived from national vital statistics. Framingham overestimated French coronary deaths by a factor of 2.8 in men and 1.9 in women, and cardiovascular deaths by 1.5 in men and 1.3 in women. SCORE overestimated coronary deaths by a factor of 1.6 in men and 1.7 in women, and underestimated cardiovascular deaths by factors of 0.94 in men and 0.85 in women. Our results revealed an exaggerated representation of coronary deaths among the cardiovascular deaths predicted by Framingham, with predicted coronary death exceeding cardiovascular death for some individual profiles. Sensitivity analyses gave some insight into this internal inconsistency of the Framingham equations. The evidence indicates that SCORE should be preferred to Framingham for predicting cardiovascular death risk in the French population. This discrepancy between prediction scores is likely to be observed in other populations. To improve the validation of risk equations, specific guidelines should be issued to harmonize outcome definitions across epidemiologic studies. Prediction models should be calibrated for risk differences across space and time.
Last, Mark; Rabinowitz, Nitzan; Leonard, Gideon
2016-01-01
This paper explores several data mining and time series analysis methods for predicting the magnitude of the largest seismic event in the next year based on the previously recorded seismic events in the same region. The methods are evaluated on a catalog of 9,042 earthquake events, which took place between 01/01/1983 and 31/12/2010 in the area of Israel and its neighboring countries. The data was obtained from the Geophysical Institute of Israel. Each earthquake record in the catalog is associated with one of 33 seismic regions. The data was cleaned by removing foreshocks and aftershocks. In our study, we have focused on the ten most active regions, which account for more than 80% of the total number of earthquakes in the area. The goal is to predict whether the maximum earthquake magnitude in the following year will exceed the median of maximum yearly magnitudes in the same region. Since the analyzed catalog includes only 28 years of complete data, the last five annual records of each region (referring to the years 2006-2010) are kept for testing while the previous annual records are used for training. The predictive features are based on the Gutenberg-Richter ratio as well as on some new seismic indicators based on the moving averages of the number of earthquakes in each area. The new predictive features prove to be much more useful than the indicators traditionally used in the earthquake prediction literature. The most accurate result (AUC = 0.698) is reached by the Multi-Objective Info-Fuzzy Network (M-IFN) algorithm, which takes into account the association between two target variables: the number of earthquakes and the maximum earthquake magnitude during the same year. PMID:26812351
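Since the M-IFN algorithm is not generally available, the sketch below reproduces the workflow on synthetic data with a logistic-regression stand-in: moving-average seismicity features, a binary exceeds-the-regional-median target, the last five annual records held out, and AUC scoring.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    years, regions = 28, 10
    counts = rng.poisson(20, size=(regions, years)).astype(float)      # synthetic yearly event counts
    max_mag = 3.0 + 0.03 * counts + rng.normal(0, 0.3, counts.shape)   # synthetic yearly max magnitudes

    X_tr, y_tr, X_te, y_te = [], [], [], []
    for r in range(regions):
        median_mag = np.median(max_mag[r])
        for t in range(3, years - 1):
            feats = [counts[r, t], counts[r, t - 3:t].mean(), max_mag[r, t]]
            label = int(max_mag[r, t + 1] > median_mag)   # next year's max above the regional median?
            if t >= years - 6:                            # hold out the last five predicted years
                X_te.append(feats); y_te.append(label)
            else:
                X_tr.append(feats); y_tr.append(label)

    clf = LogisticRegression().fit(X_tr, y_tr)
    probs = clf.predict_proba(X_te)[:, 1]
    print("AUC =", round(roc_auc_score(y_te, probs), 3))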
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delmau, L.H.; Haverlock, T.J.; Sloop, F.V., Jr.
This report presents the work that followed the CSSX model development completed in FY2002. The developed cesium and potassium extraction model was based on extraction data obtained from simple aqueous media. It was tested to ensure the validity of the prediction for the cesium extraction from actual waste. Compositions of the actual tank waste were obtained from the Savannah River Site personnel and were used to prepare defined simulants and to predict cesium distribution ratios using the model. It was therefore possible to compare the cesium distribution ratios obtained from the actual waste, the simulant, and the predicted values. It was determined that the predicted values agree with the measured values for the simulants. Predicted values also agreed, with three exceptions, with measured values for the tank wastes. Discrepancies were attributed in part to the uncertainty in the cation/anion balance in the actual waste composition, but likely more so to the uncertainty in the potassium concentration in the waste, given the demonstrated large competing effect of this metal on cesium extraction. It was demonstrated that the upper limit for the potassium concentration in the feed ought not to exceed 0.05 M in order to maintain suitable cesium distribution ratios.
Comparison of Prediction Models for Lynch Syndrome Among Individuals With Colorectal Cancer
Ojha, Rohit P.; Leenen, Celine; Alvero, Carmelita; Mercado, Rowena C.; Balmaña, Judith; Valenzuela, Irene; Balaguer, Francesc; Green, Roger; Lindor, Noralane M.; Thibodeau, Stephen N.; Newcomb, Polly; Win, Aung Ko; Jenkins, Mark; Buchanan, Daniel D.; Bertario, Lucio; Sala, Paola; Hampel, Heather; Syngal, Sapna; Steyerberg, Ewout W.
2016-01-01
Background: Recent guidelines recommend the Lynch Syndrome prediction models MMRPredict, MMRPro, and PREMM1,2,6 for the identification of MMR gene mutation carriers. We compared the predictive performance and clinical usefulness of these prediction models for identifying mutation carriers. Methods: Pedigree data from CRC patients in 11 North American, European, and Australian cohorts (6 clinic- and 5 population-based sites) were used to calculate predicted probabilities of pathogenic MLH1, MSH2, or MSH6 gene mutations by each model and gene-specific predictions by MMRPro and PREMM1,2,6. We examined discrimination with the area under the receiver operating characteristic curve (AUC), calibration with the observed to expected (O/E) ratio, and clinical usefulness using decision curve analysis to select patients for further evaluation. All statistical tests were two-sided. Results: Mutations were detected in 539 of 2304 (23%) individuals from the clinic-based cohorts (237 MLH1, 251 MSH2, 51 MSH6) and 150 of 3451 (4.4%) individuals from the population-based cohorts (47 MLH1, 71 MSH2, 32 MSH6). Discrimination was similar for clinic- and population-based cohorts: AUCs of 0.76 vs 0.77 for MMRPredict, 0.82 vs 0.85 for MMRPro, and 0.85 vs 0.88 for PREMM1,2,6. For clinic- and population-based cohorts, O/E deviated from 1 for MMRPredict (0.38 and 0.31, respectively) and MMRPro (0.62 and 0.36) but was more satisfactory for PREMM1,2,6 (1.0 and 0.70). MMRPro or PREMM1,2,6 predictions were clinically useful at thresholds of 5% or greater, and in particular at greater than 15%. Conclusions: MMRPro and PREMM1,2,6 can be used effectively to select CRC patients from genetics clinics or population-based settings for tumor and/or germline testing at a 5% or higher risk. If no MMR deficiency is detected and the risk exceeds 15%, we suggest considering additional genetic etiologies for the cause of cancer in the family. PMID:26582061
Robertson, Brian; Zhang, Zichen; Yang, Haining; Redmond, Maura M; Collings, Neil; Liu, Jinsong; Lin, Ruisheng; Jeziorska-Chapman, Anna M; Moore, John R; Crossland, William A; Chu, D P
2012-04-20
It is shown that reflective liquid crystal on silicon (LCOS) spatial light modulator (SLM) based interconnects or fiber switches that use defocus to reduce crosstalk can be evaluated and optimized using a fractional Fourier transform if certain optical symmetry conditions are met. Theoretically, the maximum allowable linear hologram phase error compared to a Fourier switch is increased by a factor of six before the target crosstalk for telecom applications of -40 dB is exceeded. A Gerchberg-Saxton algorithm incorporating a fractional Fourier transform, modified for use with a reflective LCOS SLM, is used to optimize multi-casting holograms in a prototype telecom switch. Experiments are in close agreement with predicted performance.
Multiple ionization of neon by soft x-rays at ultrahigh intensity
NASA Astrophysics Data System (ADS)
Guichard, R.; Richter, M.; Rost, J.-M.; Saalmann, U.; Sorokin, A. A.; Tiedtke, K.
2013-08-01
At the free-electron laser FLASH, multiple ionization of neon atoms was quantitatively investigated at photon energies of 93.0 and 90.5 eV. For ion charge states up to 6+, we compare the respective absolute photoionization yields with results from a minimal model and an elaborate description including standard sequential and direct photoionization channels. Both approaches are based on rate equations and take into account a Gaussian spatial intensity distribution of the laser beam. From the comparison we conclude that photoionization up to a charge of 5+ can be described by the minimal model which we interpret as sequential photoionization assisted by electron shake-up processes. For higher charges, the experimental ionization yields systematically exceed the elaborate rate-based prediction.
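A minimal rate-equation sketch of sequential photoionization, with illustrative cross sections and a flat pulse; the study's elaborate model additionally includes direct channels, shake-up, and the Gaussian spatial intensity distribution, all omitted here.

    import numpy as np
    from scipy.integrate import solve_ivp

    sigma = np.array([5.0, 4.0, 3.0, 2.0, 1.0, 0.5]) * 1e-18  # cm^2; illustrative q -> q+1 cross sections
    flux = 1e16                                               # photons cm^-2 fs^-1, illustrative

    def rates(t, n):
        dn = np.zeros_like(n)
        for q in range(len(sigma)):
            w = sigma[q] * flux * n[q]   # sequential one-photon ionization rate out of charge state q
            dn[q] -= w
            dn[q + 1] += w
        return dn

    n0 = np.zeros(len(sigma) + 1)
    n0[0] = 1.0                                               # all atoms neutral at t = 0
    sol = solve_ivp(rates, (0.0, 100.0), n0, rtol=1e-8)       # 100 fs flat pulse
    print("final charge-state populations:", np.round(sol.y[:, -1], 4))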
NASA Astrophysics Data System (ADS)
Stockert, Sven; Wehr, Matthias; Lohmar, Johannes; Abel, Dirk; Hirt, Gerhard
2017-10-01
In the electrical and medical industries, the trend towards further miniaturization of devices is accompanied by a demand for smaller manufacturing tolerances. Such industries use a multitude of small, narrow, cold-rolled metal strips with high thickness accuracy. Conventional rolling mills can hardly achieve further improvement of these tolerances. However, a model-based controller in combination with an additional piezoelectric actuator for highly dynamic roll adjustment is expected to enable the production of the required metal strips with a thickness tolerance of +/-1 µm. The model-based controller has to be based on a rolling theory that describes the rolling process very accurately. Additionally, the required computing time has to be low in order to predict the rolling process in real time. In this work, four rolling theories from the literature, with different levels of complexity, are tested for their suitability for the predictive controller. The rolling theories of von Kármán, Siebel, Bland & Ford, and Alexander are implemented in Matlab and afterwards transferred to the real-time computer used for the controller. The prediction accuracy of these theories is validated using rolling trials with different thickness reductions and a comparison with the calculated results. Furthermore, the required computing time on the real-time computer is measured. Adequate prediction accuracy can be achieved with the rolling theories developed by Bland & Ford and Alexander. A comparison of the computing times of these two theories reveals that Alexander's theory cannot keep up with the real-time computer's 1 kHz sample rate: its computing time exceeds the 1 ms cycle that the sample rate allows.
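To make the trade-off concrete, the sketch below evaluates Siebel's closed-form roll-force approximation, one of the computationally cheaper theories of the four; the pass geometry, flow stress, and friction values are placeholders, and the formula is the textbook mean-pressure form, so treat this as a sketch rather than the paper's implementation.

    import math

    def siebel_roll_force(R, h0, h1, width, k_f, mu):
        """Siebel's approximation for flat cold rolling (SI units)."""
        dh = h0 - h1
        l_d = math.sqrt(R * dh)                        # projected contact length
        h_m = 0.5 * (h0 + h1)                          # mean strip thickness
        p_m = k_f * (1.0 + mu * l_d / (2.0 * h_m))     # mean contact pressure
        return p_m * width * l_d                       # roll separating force

    # Illustrative pass: 100 mm roll radius, 2.0 -> 1.8 mm strip, 20 mm wide, 400 MPa flow stress
    F = siebel_roll_force(R=0.100, h0=2.0e-3, h1=1.8e-3, width=0.020, k_f=400e6, mu=0.08)
    print(f"predicted roll force ~ {F / 1e3:.1f} kN")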
NASA Astrophysics Data System (ADS)
Matetic, Rudy J.
Over-exposure to noise remains a widespread and serious health hazard in the U.S. mining industries despite 25 years of regulation. Every day, 80% of the nation's miners go to work in an environment where the time-weighted average (TWA) noise level exceeds 85 dBA, and more than 25% of miners are exposed to a TWA noise level that exceeds 90 dBA, the permissible exposure limit (PEL). Additionally, MSHA coal noise sample data collected from 2000 to 2002 show that 65% of the equipment whose operators exceeded 100% noise dosage comprises only seven types of machines: auger miners, bulldozers, continuous miners, front-end loaders, roof bolters, shuttle cars (electric), and trucks. The MSHA data also indicate that the roof bolter ranks third among all equipment, and second among underground coal equipment, whose operators exceed 100% dosage. A research program was implemented to: (1) determine, characterize, and measure the sound power levels radiated by a roof bolting machine under differing drilling configurations (thrust, rotational speed, penetration rate, etc.) and differing drilling methods in high-compressive-strength rock media (>20,000 psi), characterizing the laboratory sound power results and providing the mining industry with empirical data on the effectiveness of differing noise control technologies (drilling configurations and drilling methods) in reducing the sound power level emissions of a roof bolting machine; (2) distill and correlate the empirical data into one statistically valid equation, giving the mining industry a tool to predict the overall sound power level of a roof bolting machine for any drilling configuration and drilling method used in industry; (3) provide the mining industry with several approaches for predicting or determining sound pressure levels in an underground coal mine from the laboratory test results; and (4) describe a method for determining an operator's noise dosage on a roof bolting machine from the predicted or determined sound pressure levels.
Khashan, Raed; Zheng, Weifan; Tropsha, Alexander
2014-03-01
We present a novel approach to generating fragment-based molecular descriptors. Molecules are represented by labeled undirected chemical graphs. Fast Frequent Subgraph Mining (FFSM) is used to find chemical fragments (subgraphs) that occur in at least a subset of all molecules in a dataset. The collection of frequent subgraphs (FSG) forms a set of dataset-specific descriptors whose values for each molecule are defined by the number of times each frequent fragment occurs in that molecule. We have employed the FSG descriptors to develop variable-selection k-nearest neighbor (kNN) QSAR models of several datasets with binary target properties, including Maximum Recommended Therapeutic Dose (MRTD), Salmonella Mutagenicity (Ames Genotoxicity), and P-Glycoprotein (PGP) data. Each dataset was divided into training, test, and validation sets to establish the statistical figures of merit reflecting each model's validated predictive power. The classification accuracies of models for both training and test sets for all datasets exceeded 75%, and the accuracy for the external validation sets exceeded 72%. The model accuracies were comparable to or better than those reported earlier in the literature for the same datasets. Furthermore, the use of fragment-based descriptors affords mechanistic interpretation of validated QSAR models in terms of the essential chemical fragments responsible for the compounds' target property. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Brown, Colin D.; de Zwart, Dick; Diamond, Jerome; Dyer, Scott D.; Holmes, Christopher M.; Marshall, Stuart; Burton, G. Allen
2018-01-01
Ecological risk assessment increasingly focuses on risks from chemical mixtures and multiple stressors because ecosystems are commonly exposed to a plethora of contaminants and nonchemical stressors. To simplify the task of assessing potential mixture effects, we explored 3 land use–related chemical emission scenarios. We applied a tiered methodology to judge the implications of the emissions of chemicals from agricultural practices, domestic discharges, and urban runoff in a quantitative model. The results showed land use–dependent mixture exposures, clearly discriminating downstream effects of land uses, with unique chemical "signatures" regarding composition, concentration, and temporal patterns. Associated risks were characterized in relation to the land-use scenarios. Comparisons to measured environmental concentrations and predicted impacts showed relatively good similarity. The results suggest that the land uses imply exceedances of regulatory protective environmental quality standards, varying over time in relation to rain events and associated flow and dilution variation. Higher-tier analyses using ecotoxicological effect criteria confirmed that species assemblages may be affected by exposures exceeding no-effect levels and that mixture exposure could be associated with predicted species loss under certain situations. The model outcomes can inform various types of prioritization to support risk management, including a ranking across land uses as a whole, a ranking on characteristics of exposure times and frequencies, and various rankings of the relative role of individual chemicals. Though all results are based on in silico assessments, the prospective land use–based approach applied in the present study yields useful insights for simplifying and assessing potential ecological risks of chemical mixtures and can therefore be useful for catchment-management decisions. Environ Toxicol Chem 2018;37:715–728. © 2017 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. PMID:28845901
Linear regression models for solvent accessibility prediction in proteins.
Wagner, Michael; Adamczak, Rafał; Porollo, Aleksey; Meller, Jarosław
2005-04-01
The relative solvent accessibility (RSA) of an amino acid residue in a protein structure is a real number that represents the solvent exposed surface area of this residue in relative terms. The problem of predicting the RSA from the primary amino acid sequence can therefore be cast as a regression problem. Nevertheless, RSA prediction has so far typically been cast as a classification problem. Consequently, various machine learning techniques have been used within the classification framework to predict whether a given amino acid exceeds some (arbitrary) RSA threshold and would thus be predicted to be "exposed," as opposed to "buried." We have recently developed novel methods for RSA prediction using nonlinear regression techniques which provide accurate estimates of the real-valued RSA and outperform classification-based approaches with respect to commonly used two-class projections. However, while their performance seems to provide a significant improvement over previously published approaches, these Neural Network (NN) based methods are computationally expensive to train and involve several thousand parameters. In this work, we develop alternative regression models for RSA prediction which are computationally much less expensive, involve orders-of-magnitude fewer parameters, and are still competitive in terms of prediction quality. In particular, we investigate several regression models for RSA prediction using linear L1-support vector regression (SVR) approaches as well as standard linear least squares (LS) regression. Using rigorously derived validation sets of protein structures and extensive cross-validation analysis, we compare the performance of the SVR with that of LS regression and NN-based methods. In particular, we show that the flexibility of the SVR (as encoded by metaparameters such as the error insensitivity and the error penalization terms) can be very beneficial to optimize the prediction accuracy for buried residues. We conclude that the simple and computationally much more efficient linear SVR performs comparably to nonlinear models and thus can be used in order to facilitate further attempts to design more accurate RSA prediction methods, with applications to fold recognition and de novo protein structure prediction methods.
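A sketch of the linear epsilon-insensitive SVR idea on synthetic stand-in features (real inputs would be sequence-derived); scikit-learn's LinearSVR plays the role of the linear SVR, and the 25% RSA threshold for the two-class projection is just an example value.

    import numpy as np
    from sklearn.svm import LinearSVR
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    X = rng.normal(size=(2000, 40))                    # stand-in for windowed sequence features
    w_true = rng.normal(size=40)
    rsa = 1.0 / (1.0 + np.exp(-0.3 * (X @ w_true)))    # synthetic relative solvent accessibility in [0, 1]

    X_tr, X_te, y_tr, y_te = train_test_split(X, rsa, random_state=0)
    svr = LinearSVR(epsilon=0.1, C=1.0, max_iter=10000).fit(X_tr, y_tr)
    pred = np.clip(svr.predict(X_te), 0.0, 1.0)        # RSA is bounded, so clip the regression output

    print("mean absolute error:", round(float(np.abs(pred - y_te).mean()), 3))
    # Two-class projection: buried vs. exposed at a 25% RSA threshold
    print("two-class accuracy:", round(float(((pred >= 0.25) == (y_te >= 0.25)).mean()), 3))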
Environmental assessment proposed license renewal of Nuclear Metals, Inc. Concord, Massachusetts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, R.L.; Easterly, C.E.; Lombardi, C.E.
1997-02-01
The US Nuclear Regulatory Commission (NRC) has prepared this Environmental Assessment (EA) to evaluate environmental issues associated with the renewal of licenses issued by NRC for facilities operated by Nuclear Metals, Inc. (NMI) in Concord, Massachusetts. By renewing the licenses, NRC proposes to allow the continuation of ongoing operations involving radioactive materials at NMI's facilities. This EA focuses on the potential impacts related to air emissions at NMI during normal (incident-free) operations and accidental releases. Findings indicate that there are only two areas of potential concern. First, modeling results for sulfur dioxide (SO2) emissions from the boilers during normal operations indicate that the potential exists for exceeding the short-term National Ambient Air Quality Standards (NAAQS). NMI is prepared to undertake mitigative action to prevent potential exceedances of the short-term SO2 NAAQS, and the Massachusetts Department of Environmental Protection is prepared to resolve the issue via a permit/approval change or through a Consent Order. Second, in the unlikely event of a severe fire, predicted sulfuric acid (H2SO4) concentrations based on conservative (upper bound) modeling exceed the Emergency Response Planning Guideline (ERPG) levels. NMI has committed to NRC to give a briefing for local emergency response officials regarding the potential for an accidental H2SO4 release.
Predicting Sargassum blooms in the Caribbean Sea from MODIS observations
NASA Astrophysics Data System (ADS)
Wang, Mengqiu; Hu, Chuanmin
2017-04-01
Recurrent and significant Sargassum beaching events in the Caribbean Sea (CS) have caused serious environmental and economic problems, calling for a long-term prediction capacity for Sargassum blooms. Here we present predictions based on a hindcast of 2000-2016 observations from the Moderate Resolution Imaging Spectroradiometer (MODIS), which showed Sargassum abundance in the CS and the Central West Atlantic (CWA), as well as connectivity between the two regions with time lags. This information was used to derive bloom and nonbloom probability matrices for each 1° square in the CS for the months of May-August, predicted from bloom conditions in a hotspot region in the CWA in February. A suite of standard statistical measures was used to gauge the prediction accuracy, among which the user's accuracy and kappa statistics showed high fidelity of the probability maps in predicting both blooms and nonblooms in the eastern CS with several months of lead time, with overall accuracy often exceeding 80%. The bloom probability maps from this hindcast analysis will provide early warnings to better study Sargassum blooms and prepare for beaching events near the study region. This approach may also be extendable to many other regions around the world that face similar challenges and opportunities of macroalgal blooms and beaching events.
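The agreement statistics named above are straightforward to compute; this sketch derives user's accuracy and Cohen's kappa from a hypothetical bloom/nonbloom confusion matrix (the counts are invented).

    import numpy as np

    # Rows: predicted (nonbloom, bloom); columns: observed (nonbloom, bloom). Hypothetical counts.
    cm = np.array([[60, 5],
                   [8, 27]])
    n = cm.sum()
    overall = np.trace(cm) / n
    users_accuracy = cm.diagonal() / cm.sum(axis=1)    # fraction of each predicted class that was correct
    p_chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # expected agreement by chance
    kappa = (overall - p_chance) / (1.0 - p_chance)
    print(f"overall={overall:.2f}, user's accuracy={users_accuracy}, kappa={kappa:.2f}")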
1983-09-28
No comment. Inventory will exceed requirements for blank NATO round (see p. 10). Inventory will exceed requirements for match round (see p. 11). No comment. Premature procurement (see p. 17). Inventory will exceed requirements for three types of rounds (see p. 11). Inventory will exceed requirements for both rounds (see p. 13). No comment. Inventory will exceed requirements for TP-T round (see p. 13). No comment. No comment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harriman, D.A.; Sargent, B.P.
Groundwater quality was evaluated in seven confined aquifers and the water table aquifer in east-central New Jersey based on 237 analyses of samples collected in 1981-82 and 225 older analyses. Investigation of the effect of land use on water quality and several sampling network proposals for the region are reported. Iron (Fe) and manganese (Mn) concentrations exceed US EPA drinking water standards in some wells screened in the Potomac-Raritan-Magothy aquifer system. Sodium (Na) concentrations in samples from three wells more than 800 ft deep in the Englishtown aquifer exceed the standard. Iron and Mn concentrations in this aquifer may also exceed the standards. Iron concentrations in the Wenonah-Mount Laurel aquifer exceed the standard. Based on 15 analyses of water from the Vincentown aquifer, Mn is the only constituent that exceeds the drinking water standard. In the Manasquan aquifer, 4 of the 16 Na determinations exceed the standard, and 8 of 16 Fe determinations exceed the standard. Water quality in the Atlantic City 800-ft sand is generally satisfactory; however, 12 Fe and 1 of 12 Mn determinations exceed the standards. For the Rio Grande water-bearing zone, 1 of 3 Fe determinations exceeds the standard. The Kirkwood-Cohansey aquifer system was the most thoroughly sampled (249 chemical analyses from 209 wells). Dissolved solids, chloride, Fe, nitrate, and Mn concentrations exceed drinking water standards in some areas. 76 refs., 36 figs., 12 tabs.
Sinclair, Karen; Kinable, Els; Grosch, Kai; Wang, Jixian
2016-05-01
In current industry practice, it is difficult to assess QT effects at potential therapeutic doses based on Phase I dose-escalation trials in oncology due to data scarcity, particularly in combination trials. In this paper, we propose to use dose-concentration and concentration-QT models jointly to model the exposures and effects of multiple drugs in combination. The fitted models can then be used to make early predictions of QT prolongation to aid in choosing recommended dose combinations for further investigation. The models consider potential correlation between concentrations of test drugs and potential drug-drug interactions at the PK and QT levels. In addition, this approach allows for the assessment of the probability of QT prolongation exceeding given thresholds of clinical significance. The performance of this approach was examined via simulation under practical scenarios for dose-escalation trials of a two-drug combination. The simulation results show that invaluable information about QT effects at therapeutic dose combinations can be gained by the proposed approaches. Early detection of dose combinations with substantial QT prolongation is evaluated effectively through the CIs of the predicted peak QT prolongation at each dose combination. Furthermore, the probability of QT prolongation exceeding a certain threshold is also computed to support early detection of safety signals while accounting for uncertainty associated with data from Phase I studies. While the prediction of QT effects is sensitive to the dose escalation process, this sensitivity and the limited sample size should be considered when providing support to the decision-making process for further developing certain dose combinations. Copyright © 2016 John Wiley & Sons, Ltd.
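A Monte Carlo sketch of the exceedance-probability computation, assuming a hypothetical linear concentration-QT relationship with parameter uncertainty; the slope, intercept, exposure, and thresholds are illustrative, not estimates from the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    n_sim = 100_000

    # Uncertain concentration-QT parameters (illustrative posterior-style draws)
    slope = rng.normal(0.8, 0.2, n_sim)                 # ms per (ng/mL)
    intercept = rng.normal(1.0, 1.5, n_sim)             # ms

    # Peak concentration at a candidate dose combination, with PK variability (illustrative)
    cmax = rng.lognormal(np.log(12.0), 0.3, n_sim)      # ng/mL

    delta_qtc = intercept + slope * cmax                # predicted peak QT prolongation, ms
    for threshold in (10.0, 20.0):
        print(f"P(dQTc > {threshold:.0f} ms) = {np.mean(delta_qtc > threshold):.3f}")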
Sgueo, Carrie; Wells, Marion E; Russell, David E; Schaeffer, Paul J
2012-07-15
Northern cardinals (Cardinalis cardinalis) are faced with energetically expensive seasonal challenges that must be met to ensure survival, including thermoregulation in winter and reproductive activities in summer. Contrary to predictions of life history theory that suggest breeding metabolic rate should be the apex of energetic effort, winter metabolism exceeds that during breeding in several temperate resident bird species. By examining whole-animal, tissue and cellular function, we ask whether seasonal acclimatization is accomplished by coordinated phenotypic plasticity of metabolic systems. We measured summit metabolism (VO2,sum), daily energy expenditure (DEE) and muscle oxidative capacity under both winter (December to January) and breeding (May to June) conditions. We hypothesize that: (1) rates of energy utilization will be highest in the winter, contrary to predictions based on life history theory, and (2) acclimatization of metabolism will occur at multiple levels of organization such that birds operate with a similar metabolic ceiling during different seasons. We measured field metabolic rates using heart rate telemetry and report the first daily patterns in avian field metabolic rate. Patterns of daily energy use differed seasonally, primarily as birds maintain high metabolic rates throughout the winter daylight hours. We found that DEE and VO2,sum were significantly greater and DEE occurred at a higher fraction of maximum metabolic capacity during winter, indicating an elevation of the metabolic ceiling. Surprisingly, there were no significant differences in mass or oxidative capacity of skeletal muscle. These data, highlighting the importance of examining energetic responses to seasonal challenges at multiple levels, clearly reject life history predictions that breeding is the primary energetic challenge for temperate zone residents. Further, they indicate that metabolic ceilings are seasonally flexible, as metabolic effort during winter thermoregulation exceeds that of breeding.
Dynamic properties of porous B4C. Interim report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brar, N.S.; Rosenberg, Z.; Bless, S.J.
1990-01-25
The sound speed in porous B4C (boron carbide) was measured and predicted on the basis of a spherical void model and a penny-crack model. Neither model does well for porosity exceeding 10 percent. Measured values of the Hugoniot elastic limit for porous B4C agree well with those predicted by Steinberg's model. Measured transverse stress in the elastic range of B4C under 1-D strain conditions agrees with the predictions.
Mai, Qun; Aboagye-Sarfo, Patrick; Sanfilippo, Frank M; Preen, David B; Fatovich, Daniel M
2015-02-01
To predict the number of ED presentations in Western Australia (WA) in the next 5 years, stratified by place of treatment, age, triage and disposition. We conducted a population-based time series analysis of 7 year monthly WA statewide ED presentation data from the financial years 2006/07 to 2012/13 using univariate autoregressive integrated moving average (ARIMA) and multivariate vector-ARIMA techniques. ED presentations in WA were predicted to increase from 990,342 in 2012/13 to 1,250,991 (95% CI: 982,265-1,519,718) in 2017/18, an increase of 260,649 (or 26.3%). The majority of this increase would occur in metropolitan WA (84.2%). The compound annual growth rate (CAGR) in metropolitan WA in the next 5 years was predicted to be 6.5% compared with 2.0% in the non-metropolitan area. The greatest growth in metropolitan WA would be in ages 65 and over (CAGR, 6.9%), triage categories 2 and 3 (8.3% and 7.7%, respectively) and admitted (9.8%) cohorts. The only predicted decrease was triage category 5 (-5.3%). ED demand in WA will exceed population growth. The highest growth will be in patients with complex care needs. An integrated system-wide strategy is urgently required to ensure access, quality and sustainability of the health system. © 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
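A sketch of the univariate ARIMA half of the workflow on synthetic monthly counts; statsmodels supplies the model, and the (1,1,1)(1,0,1,12) order is illustrative rather than the order identified in the study.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(4)
    idx = pd.date_range("2006-07-01", periods=84, freq="MS")    # 7 years of monthly presentations
    trend = np.linspace(60000, 85000, 84)
    season = 4000 * np.sin(2 * np.pi * np.arange(84) / 12)
    y = pd.Series(trend + season + rng.normal(0, 1500, 84), index=idx)

    fit = ARIMA(y, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12)).fit()
    fc = fit.get_forecast(steps=60)                             # five years ahead
    print("predicted final-year total:", int(fc.predicted_mean[-12:].sum()))
    print(fc.conf_int(alpha=0.05).tail(3))                      # 95% prediction intervals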
Monolithic integrated circuit charge amplifier and comparator for MAMA readout
NASA Technical Reports Server (NTRS)
Cole, Edward H.; Smeins, Larry G.
1991-01-01
Prototype ICs for the Solar Heliospheric Observatory's Multi-Anode Microchannel Array (MAMA) have been developed; these ICs' charge-amplifier and comparator components were then tested for pulse response and noise performance. All model performance predictions have been exceeded. Electrostatic discharge protection has been included on all IC connections, and device operation over temperature has been consistent with model predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, S
2007-08-15
Over the course of fifty-three years, LLNL had six acute releases of tritiated hydrogen gas (HT) and one acute release of tritiated water vapor (HTO) that were too large relative to the annual releases to be included as part of the annual releases from normal operations detailed in Parts 3 and 4 of the Tritium Dose Reconstruction (TDR). Sandia National Laboratories/California (SNL/CA) had one such release of HT and one of HTO. Doses to the maximally exposed individual (MEI) for these accidents have been modeled using an equation derived from the time-dependent tritium model, UFOTRI, and parameter values based on expert judgment. All of these acute releases are described in this report. Doses that could not have been exceeded from the large HT releases of 1965 and 1970 were calculated to be 43 µSv (4.3 mrem) and 120 µSv (12 mrem) to an adult, respectively. Two published sets of dose predictions for the accidental HT release in 1970 are compared with the dose predictions of this TDR. The highest predicted dose was for an acute release of HTO in 1954. For this release, the dose that could not have been exceeded was estimated to have been 2 mSv (200 mrem), although, because of the high uncertainty about the predictions, the likely dose may have been as low as 360 µSv (36 mrem) or less. The estimated maximum exposures from the accidental releases were such that no adverse health effects would be expected. Appendix A lists all accidents and large routine puff releases that have occurred at LLNL and SNL/CA between 1953 and 2005. Appendix B describes the processes unique to tritium that must be modeled after an acute release, some of the time-dependent tritium models being used today, and the results of tests of these models.
Parallel performance optimizations on unstructured mesh-based simulations
Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; ...
2015-06-01
This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data from runs on thousands of cores of the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
NASA Technical Reports Server (NTRS)
Matic, Roy M.; Mosley, Judith I.
1994-01-01
Future space-based remote sensing systems will have data transmission requirements that exceed available downlink capacity, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques, including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
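A compact sketch of the two-stage scheme on a synthetic cube: spectral decorrelation via a PCA-style SVD across bands, then 2-D wavelet compression of each component with coefficient thresholding (PyWavelets). The wavelet choice, decomposition level, and 10% retention rate are arbitrary placeholders.

    import numpy as np
    import pywt

    rng = np.random.default_rng(5)
    bands, h, w = 8, 64, 64
    base = rng.normal(size=(h, w))
    cube = np.stack([(i + 1) * base + rng.normal(0, 0.1, (h, w)) for i in range(bands)])

    # Spectral decorrelation: PCA across bands via SVD of the band-by-pixel matrix
    flat = cube.reshape(bands, -1)
    centered = flat - flat.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    components = (U.T @ centered).reshape(bands, h, w)

    # Spatial compression: 2-D wavelet transform, keep only the largest ~10% of coefficients
    recon = []
    for comp in components:
        arr, slices = pywt.coeffs_to_array(pywt.wavedec2(comp, "db4", level=3))
        thr = np.quantile(np.abs(arr), 0.90)
        arr = pywt.threshold(arr, thr, mode="hard")
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        recon.append(pywt.waverec2(coeffs, "db4")[:h, :w])
    err = np.abs(np.stack(recon) - components).mean()
    print("mean reconstruction error after ~10:1 coefficient truncation:", round(float(err), 4))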
NASA Astrophysics Data System (ADS)
Hagan, Nicole; Robins, Nicholas; Hsu-Kim, Heileen; Halabi, Susan; Morris, Mark; Woodall, George; Zhang, Tong; Bacon, Allan; Richter, Daniel De B.; Vandenberg, John
2011-12-01
Detailed Spanish records of mercury use and silver production during the colonial period in Potosí, Bolivia were evaluated to estimate atmospheric emissions of mercury from silver smelting. Mercury was used in the silver production process in Potosí and nearly 32,000 metric tons of mercury were released to the environment. AERMOD was used in combination with the estimated emissions to approximate historical air concentrations of mercury from colonial mining operations during 1715, a year of relatively low silver production. Source characteristics were selected from archival documents, colonial maps and images of silver smelters in Potosí and a base case of input parameters was selected. Input parameters were varied to understand the sensitivity of the model to each parameter. Modeled maximum 1-h concentrations were most sensitive to stack height and diameter, whereas an index of community exposure was relatively insensitive to uncertainty in input parameters. Modeled 1-h and long-term concentrations were compared to inhalation reference values for elemental mercury vapor. Estimated 1-h maximum concentrations within 500 m of the silver smelters consistently exceeded present-day occupational inhalation reference values. Additionally, the entire community was estimated to have been exposed to levels of mercury vapor that exceed present-day acute inhalation reference values for the general public. Estimated long-term maximum concentrations of mercury were predicted to substantially exceed the EPA Reference Concentration for areas within 600 m of the silver smelters. A concentration gradient predicted by AERMOD was used to select soil sampling locations along transects in Potosí. Total mercury in soils ranged from 0.105 to 155 mg kg-1, among the highest levels reported for surface soils in the scientific literature. The correlation between estimated air concentrations and measured soil concentrations will guide future research to determine the extent to which the current community of Potosí and vicinity is at risk of adverse health effects from historical mercury contamination.
Fast or Slow Rescue Ventilations: A Predictive Model of Gastric Inflation.
Fitz-Clarke, John R
2018-05-01
Rescue ventilations are given during respiratory and cardiac arrest. Tidal volume must assure oxygen delivery; however, excessive pressure applied to an unprotected airway can cause gastric inflation, regurgitation, and pulmonary aspiration. The optimal technique provides mouth pressure and breath duration that minimize gastric inflation. It remains unclear if breath delivery should be fast or slow, and how inflation time affects the division of gas flow between the lungs and esophagus. A physiological model was used to predict and compare rates of gastric inflation and to determine ideal ventilation duration. Gas flow equations were based on standard pulmonary physiology. Gastric inflation was assumed to occur whenever mouth pressure exceeded lower esophageal sphincter pressure. Mouth pressure profiles that approximated mouth-to-mouth ventilation and bag-valve-mask ventilation were investigated. Target tidal volumes were set to 0.6 and 1.0 L. Compliance and airway resistance were varied. Rapid breaths shorter than 1 s required high mouth pressures, up to 25 cm H2O, to achieve the target lung volume, which thus promotes gastric inflation. Slow breaths longer than 1 s permitted lower mouth pressures but increased the time over which airway pressure exceeded lower esophageal sphincter pressure. The gastric volume increased with breath durations that exceeded 1 s for both mouth pressure profiles. A breath duration of ∼1.0 s caused the least gastric inflation in most scenarios. Very low esophageal sphincter pressure favored a shift toward 0.5 s. High resistance and low compliance each increased gastric inflation and altered ideal breath times. The model illustrated a general theory of optimal rescue ventilation. Breath duration with an unprotected airway should be 1 s to minimize gastric inflation. Short pressure-driven and long duration-driven gastric inflation regimens provide a unifying explanation for results in past studies. Copyright © 2018 by Daedalus Enterprises.
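A back-of-the-envelope sketch of the flow-division reasoning, assuming a single-compartment lung (linear resistance and compliance) and gastric inflow whenever a constant mouth pressure exceeds LES pressure; all parameter values are illustrative, not the model's.

    import numpy as np

    R_aw, C_rs = 10.0, 0.05      # airway resistance (cm H2O.s/L) and compliance (L/cm H2O)
    R_es, p_les = 50.0, 10.0     # esophageal flow resistance and LES opening pressure (illustrative)
    v_t = 0.6                    # target tidal volume (L)

    def gastric_volume(t_insp):
        # Constant mouth pressure needed to deliver v_t within t_insp (first-order lung model)
        tau = R_aw * C_rs
        p_mouth = v_t / (C_rs * (1.0 - np.exp(-t_insp / tau)))
        # Gastric inflow accumulates while mouth pressure exceeds LES pressure
        return p_mouth, max(p_mouth - p_les, 0.0) / R_es * t_insp

    for t_insp in (0.5, 1.0, 1.5, 2.0):
        p, vg = gastric_volume(t_insp)
        print(f"{t_insp:.1f} s breath: mouth pressure {p:.1f} cm H2O, gastric volume {1000 * vg:.0f} mL")

With these placeholder values the gastric volume is smallest near a 1 s breath: shorter breaths need higher pressure, while longer breaths hold pressure above the sphincter for longer, mirroring the trade-off described in the abstract.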
Herman, Christine; Karolak, Wojtek; Yip, Alexandra M; Buth, Karen J; Hassan, Ansar; Légaré, Jean-Francois
2009-10-01
We sought to develop a predictive model based exclusively on preoperative factors to identify patients at risk for prolonged intensive care unit length of stay (PrlICULOS) following coronary artery bypass grafting (CABG). Retrospective analysis was performed on patients undergoing isolated CABG at a single center between June 1998 and December 2002. PrlICULOS was defined as an initial admission to the ICU exceeding 72 h. A parsimonious risk-predictive model was constructed on the basis of preoperative factors, with subsequent internal validation. Of 3483 patients undergoing isolated CABG between June 1998 and December 2002, 411 (11.8%) experienced PrlICULOS. Overall in-hospital mortality was higher among these patients (14.4% vs. 1.2%, P
The good, the bad and the ugly of marine reserves for fishery yields.
De Leo, Giulio A; Micheli, Fiorenza
2015-11-05
Marine reserves (MRs) are used worldwide as a means of conserving biodiversity and protecting depleted populations. Despite major investments in MRs, their environmental and social benefits have proven difficult to demonstrate and are still debated. Clear expectations of the possible outcomes of MR establishment are needed to guide and strengthen empirical assessments. Previous models show that reserve establishment in overcapitalized, quota-based fisheries can reduce both catch and population abundance, thereby negating fisheries and even conservation benefits. By using a stage-structured, spatially explicit stochastic model, we show that catches under quota-based fisheries that include a network of MRs can exceed maximum sustainable yield (MSY) under conventional quota management if reserves provide protection to old, large spawners that disproportionally contribute to recruitment outside the reserves. Modelling results predict that the net fishery benefit of MRs is lost when gains in fecundity of old, large individuals are small, is highest in the case of sedentary adults with high larval dispersal, and decreases with adult mobility. We also show that environmental variability may mask fishery benefits of reserve implementation and that MRs may buffer against collapse when sustainable catch quotas are exceeded owing to stock overestimation or systematic overfishing. © 2015 The Author(s).
Painter, Colin C.; Heimann, David C.; Lanning-Rush, Jennifer L.
2017-08-14
A study was done by the U.S. Geological Survey in cooperation with the Kansas Department of Transportation and the Federal Emergency Management Agency to develop regression models to estimate peak streamflows of annual exceedance probabilities of 50, 20, 10, 4, 2, 1, 0.5, and 0.2 percent at ungaged locations in Kansas. Peak streamflow frequency statistics from selected streamgages were related to contributing drainage area and average precipitation using generalized least-squares regression analysis. The peak streamflow statistics were derived from 151 streamgages with at least 25 years of streamflow data through 2015. The developed equations can be used to predict peak streamflow magnitude and frequency within two hydrologic regions that were defined based on the effects of irrigation. The equations developed in this report are applicable to streams in Kansas that are not substantially affected by regulation, surface-water diversions, or urbanization. The equations are intended for use on streams with contributing drainage areas ranging from 0.17 to 14,901 square miles in the nonirrigation-effects region and from 1.02 to 3,555 square miles in the irrigation-affected region, corresponding to the ranges of drainage areas of the streamgages used in the development of the regional equations.
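A sketch of the weighted least-squares fitting step with synthetic streamgage data: log peak flow regressed on log drainage area and precipitation, with record length as the weight. statsmodels' WLS stands in for the study's generalized least-squares procedure, and all numbers are invented.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n = 151
    log_area = rng.uniform(-0.8, 4.2, n)          # log10 contributing drainage area (sq mi)
    precip = rng.uniform(15, 45, n)               # average precipitation (inches), illustrative
    record_len = rng.integers(25, 90, n)          # streamgage record length, used as weights
    log_q = 2.0 + 0.75 * log_area + 0.02 * precip + rng.normal(0, 0.15, n)  # synthetic log10 peak flow

    X = sm.add_constant(np.column_stack([log_area, precip]))
    fit = sm.WLS(log_q, X, weights=record_len).fit()
    b0, b1, b2 = fit.params
    # Back-transformed regional equation of the usual power-law form
    print(f"Q = {10 ** b0:.1f} * A^{b1:.2f} * 10^({b2:.3f} * P)")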
Boomer, Kathleen B; Weller, Donald E; Jordan, Thomas E
2008-01-01
The Universal Soil Loss Equation (USLE) and its derivatives are widely used for identifying watersheds with a high potential for degrading stream water quality. We compared sediment yields estimated from regional application of the USLE, the automated revised RUSLE2, and five sediment delivery ratio algorithms to measured annual average sediment delivery in 78 catchments of the Chesapeake Bay watershed. We did the same comparisons for another 23 catchments monitored by the USGS. Predictions exceeded observed sediment yields by more than 100% and were highly correlated with USLE erosion predictions (Pearson r range, 0.73-0.92; p < 0.001). RUSLE2 erosion estimates were highly correlated with USLE estimates (r = 0.87; p < 0.001), so the method of implementing the USLE model did not change the results. In ranked comparisons between observed and predicted sediment yields, the models failed to identify catchments with higher yields (r range, -0.28-0.00; p > 0.14). In a multiple regression analysis, soil erodibility, log (stream flow), basin shape (topographic relief ratio), the square-root transformed proportion of forest, and occurrence in the Appalachian Plateau province explained 55% of the observed variance in measured suspended sediment loads, but the model performed poorly (r2 = 0.06) at predicting loads in the 23 USGS watersheds not used in fitting the model. The use of USLE or multiple regression models to predict sediment yields is not advisable despite their present widespread application. Integrated watershed models based on the USLE may also be unsuitable for making management decisions.
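For reference, the USLE is a product of six factors, and yield estimates multiply it by a sediment delivery ratio (SDR); the sketch below uses illustrative factor values and a hypothetical power-law SDR, since published SDR curves vary.

    # USLE: A = R * K * LS * C * P (gross soil loss, tons/acre/yr)
    def usle(R, K, LS, C, P):
        return R * K * LS * C * P

    def sdr(area_sq_mi, a=0.42, b=-0.125):
        # Generic power-law decline of delivery with drainage area; coefficients are placeholders
        return a * area_sq_mi ** b

    gross = usle(R=175.0, K=0.28, LS=1.6, C=0.12, P=0.9)   # illustrative factor values
    area = 25.0                                            # square miles
    print(f"gross erosion {gross:.2f} tons/acre/yr, SDR {sdr(area):.2f}, "
          f"yield {gross * sdr(area):.2f} tons/acre/yr")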
Francy, Donna S.; Stelzer, Erin A.; Duris, Joseph W.; Brady, Amie M.G.; Harrison, John H.; Johnson, Heather E.; Ware, Michael W.
2013-01-01
Predictive models, based on environmental and water quality variables, have been used to improve the timeliness and accuracy of recreational water quality assessments, but their effectiveness has not been studied in inland waters. Sampling at eight inland recreational lakes in Ohio was done in order to investigate using predictive models for Escherichia coli and to understand the links between E. coli concentrations, predictive variables, and pathogens. Based upon results from 21 beach sites, models were developed for 13 sites, and the most predictive variables were rainfall, wind direction and speed, turbidity, and water temperature. Models were not developed at sites where the E. coli standard was seldom exceeded. Models were validated at nine sites during an independent year. At three sites, the model resulted in increased correct responses, sensitivities, and specificities compared to use of the previous day's E. coli concentration (the current method). Drought conditions during the validation year precluded being able to adequately assess model performance at most of the other sites. Cryptosporidium, adenovirus, eaeA (E. coli), ipaH (Shigella), and spvC (Salmonella) were found in at least 20% of samples collected for pathogens at five sites. The presence or absence of the three bacterial genes was related to some of the model variables but was not consistently related to E. coli concentrations. Predictive models were not effective at all inland lake sites; however, their use at two lakes with high swimmer densities will provide better estimates of public health risk than current methods and will be a valuable resource for beach managers and the public.
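A sketch of the model-building step on synthetic beach data: the variables named above predicting exceedance of an E. coli threshold, with sensitivity and specificity evaluated on held-out samples. The coefficients generating the synthetic data and the train/test split are arbitrary.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(7)
    n = 400
    rain = rng.exponential(0.3, n)                 # 24-h rainfall (inches)
    wind_dir = rng.uniform(0, 360, n)              # wind direction (degrees)
    turb = rng.lognormal(2.0, 0.6, n)              # turbidity (NTU)
    temp = rng.normal(22, 3, n)                    # water temperature (deg C)
    onshore = np.cos(np.deg2rad(wind_dir - 180))   # crude onshore-wind component

    logit = -3.0 + 4.0 * rain + 0.08 * turb + 0.6 * onshore + 0.05 * (temp - 22)
    exceed = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # exceedance of the E. coli standard

    X = np.column_stack([rain, turb, onshore, temp])
    clf = LogisticRegression().fit(X[:300], exceed[:300])
    tn, fp, fn, tp = confusion_matrix(exceed[300:], clf.predict(X[300:])).ravel()
    print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")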
Martin, Summer L; Stohs, Stephen M; Moore, Jeffrey E
2015-03-01
Fisheries bycatch is a global threat to marine megafauna. Environmental laws require bycatch assessment for protected species, but this is difficult when bycatch is rare. Low bycatch rates, combined with low observer coverage, may lead to biased, imprecise estimates when using standard ratio estimators. Bayesian model-based approaches incorporate uncertainty, produce less volatile estimates, and enable probabilistic evaluation of estimates relative to management thresholds. Here, we demonstrate a pragmatic decision-making process that uses Bayesian model-based inferences to estimate the probability of exceeding management thresholds for bycatch in fisheries with < 100% observer coverage. Using the California drift gillnet fishery as a case study, we (1) model rates of rare-event bycatch and mortality using Bayesian Markov chain Monte Carlo estimation methods and 20 years of observer data; (2) predict unobserved counts of bycatch and mortality; (3) infer expected annual mortality; (4) determine probabilities of mortality exceeding regulatory thresholds; and (5) classify the fishery as having low, medium, or high bycatch impact using those probabilities. We focused on leatherback sea turtles (Dermochelys coriacea) and humpback whales (Megaptera novaeangliae). Candidate models included Poisson or zero-inflated Poisson likelihood, fishing effort, and a bycatch rate that varied with area, time, or regulatory regime. Regulatory regime had the strongest effect on leatherback bycatch, with the highest levels occurring prior to a regulatory change. Area had the strongest effect on humpback bycatch. Cumulative bycatch estimates for the 20-year period were 104-242 leatherbacks (52-153 deaths) and 6-50 humpbacks (0-21 deaths). The probability of exceeding a regulatory threshold under the U.S. Marine Mammal Protection Act (Potential Biological Removal, PBR) of 0.113 humpback deaths was 0.58, warranting a "medium bycatch impact" classification of the fishery. No PBR thresholds exist for leatherbacks, but the probability of exceeding an anticipated level of two deaths per year, stated as part of a U.S. Endangered Species Act assessment process, was 0.0007. The approach demonstrated here would allow managers to objectively and probabilistically classify fisheries with respect to bycatch impacts on species that have population-relevant mortality reference points, and declare with a stipulated level of certainty that bycatch did or did not exceed estimated upper bounds.
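A conjugate Gamma-Poisson sketch of the probabilistic threshold evaluation (steps 1, 3, and 4 above), standing in for the paper's MCMC-fitted models; the counts, effort, and prior are illustrative, and only the PBR value of 0.113 comes from the abstract.

    import numpy as np

    rng = np.random.default_rng(8)
    observed_sets = 4000          # observer-covered fishing sets over the study period (illustrative)
    observed_deaths = 2           # humpback deaths seen by observers (illustrative)
    sets_per_year = 1500          # total annual fishing effort (illustrative)
    pbr = 0.113                   # Potential Biological Removal threshold (deaths/yr)

    # Conjugate Gamma posterior for the per-set mortality rate (Jeffreys-style Gamma(0.5, 0) prior)
    rate_draws = rng.gamma(0.5 + observed_deaths, 1.0 / observed_sets, size=200000)

    annual_mortality = rate_draws * sets_per_year
    print(f"posterior mean annual mortality: {annual_mortality.mean():.2f}")
    print(f"P(annual mortality > PBR): {np.mean(annual_mortality > pbr):.2f}")

The paper's models are richer (zero-inflated Poisson likelihoods, covariates for area, time, and regulatory regime, MCMC fitting), but the final step, turning posterior draws into a probability of exceeding a management threshold, has this shape.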
Johnson, L E; Bishop, T F A; Birch, G F
2017-11-15
The human population is increasing globally and land use is changing to accommodate this growth. Soils within urban areas require closer attention because higher population density increases the chance of human exposure to urban contaminants. One such urban area undergoing an increase in population density is Sydney, Australia, a city with a notable history of intense industrial activity. By integrating multiple soil surveys and covariates into a linear mixed model, it was possible to determine the main drivers and map the distribution of lead and zinc concentrations within the Sydney estuary catchment. The main drivers derived from the model included elevation, distance to main roads, main road type, soil landscape, population density (lead only) and land use (zinc only). Lead concentrations predicted using the model exceeded the established guideline value of 300 mg/kg over a large portion of the study area, with concentrations exceeding 1000 mg/kg in the south of the catchment. Predicted zinc did not exceed the established guideline value of 7400 mg/kg; however, concentrations were higher to the south and west of the study area. Unlike many other studies, we considered the prediction uncertainty when assessing the contamination risk. Although the predictions indicate contamination over a large area, the width of the prediction intervals suggests that in many of these areas we cannot be sure that a site is contaminated. More samples are required to determine the contaminant distribution with greater precision, especially in residential areas where contamination was highest. Managing sources and addressing areas of elevated lead and zinc concentrations in urban areas has the potential to reduce the impact of past human activities and improve the urban environment of the future.
Morganza to the Gulf of Mexico Floodgate Study
2011-10-01
[Figure listing from report ERDC/CHL TR-11-6: Figure B60, Bayou Terrebonne 50th percentile exceedance velocity (base and Plan 6); Figure B61, Humble Canal maximum velocity (base).]
Probability-based nitrate contamination map of groundwater in Kinmen.
Liu, Chen-Wuing; Wang, Yeuh-Bin; Jang, Cheng-Shin
2013-12-01
Groundwater supplies over 50% of drinking water in Kinmen. Approximately 16.8% of groundwater samples in Kinmen exceed the drinking water quality standard (DWQS) for nitrate-N (10 mg/L). Residents who drink groundwater with high nitrate levels face a potential health risk. To formulate an effective water quality management plan and ensure safe drinking water in Kinmen, a detailed spatial distribution of nitrate-N in groundwater is a prerequisite. The aim of this study is to develop an efficient scheme for evaluating the spatial distribution of nitrate-N in residential well water using a logistic regression (LR) model, and to construct a probability-based nitrate-N contamination map of Kinmen. The LR model predicts the binary probability that groundwater nitrate-N concentrations exceed the DWQS from simple measurement variables, including sampling season, soil type, water table depth, pH, EC, DO, and Eh. Three statistically significant explanatory variables (soil type, pH, and EC) were selected by forward stepwise LR analysis, and the overall rate of correct classification reaches 92.7%. The map shows the highest probability of nitrate-N contamination in the central zone, indicating that groundwater there should not be used for drinking. Furthermore, a handy EC-pH-probability curve for nitrate-N exceeding the DWQS threshold was developed; this curve can be used for preliminary screening of nitrate-N contamination in Kinmen groundwater. This study recommends that the local agency implement best management practice strategies to control nonpoint nitrogen sources and carry out systematic monitoring of groundwater quality in residential wells within the high nitrate-N contamination zones.
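A minimal sketch of this kind of exceedance model, using scikit-learn on synthetic wells: the predictors follow the abstract (pH, EC, and a soil-type indicator), but the data, coefficients, and the query well are invented for illustration.

```python
# A minimal sketch of a forward-selected logistic exceedance model, fitted to
# synthetic well data; all numbers are illustrative, not the Kinmen survey.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
pH = rng.normal(6.5, 0.8, n)
EC = rng.normal(500, 150, n)            # electrical conductivity, uS/cm
soil = rng.integers(0, 2, n)            # 0/1 indicator for a coarse soil type
logit = -8 + 0.6 * pH + 0.006 * EC + 1.2 * soil
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # 1 = nitrate-N exceeds 10 mg/L

X = np.column_stack([pH, EC, soil])
model = LogisticRegression().fit(X, y)

# Probability that a hypothetical new well exceeds the standard.
p = model.predict_proba([[6.8, 650, 1]])[0, 1]
print(f"P(nitrate-N > 10 mg/L) = {p:.2f}")
```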
Betavoltaic battery performance: Comparison of modeling and experiment.
Svintsov, A A; Krasnov, A A; Polikarpov, M A; Polyakov, A Y; Yakimov, E B
2018-07-01
A verification of the Monte Carlo simulation software for the prediction of short circuit current value is carried out using the Ni-63 source with the activity of 2.7 mCi/cm² and converters based on Si p-i-n diodes and SiC and GaN Schottky diodes. A comparison of experimentally measured and calculated short circuit current values confirms the validity of the proposed modeling method, with the difference in the measured and calculated short circuit current values not exceeding 25% and the error in the predicted output power values being below 30%. Effects of the protective layer formed on the Ni-63 radioactive film and of the passivating film on the semiconductor converters on the energy deposited inside the converters are estimated. The maximum attainable betavoltaic cell parameters are estimated.
On thermonuclear ignition criterion at the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Baolian; Kwan, Thomas J. T.; Wang, Yi-Ming
2014-10-15
Sustained thermonuclear fusion at the National Ignition Facility remains elusive. Although recent experiments approached or exceeded the anticipated ignition thresholds, the nuclear performance of the laser-driven capsules was well below predictions in terms of energy and neutron production. Such discrepancies between expectations and reality motivate a reassessment of the physics of ignition. We have developed a predictive analytical model from fundamental physics principles. Based on the model, we obtained a general thermonuclear ignition criterion in terms of the areal density and temperature of the hot fuel. This newly derived ignition threshold and its alternative forms explicitly show the minimum requirements of the hot fuel pressure, mass, areal density, and burn fraction for achieving ignition. Comparison of our criterion with existing theories, simulations, and the experimental data shows that our ignition threshold is more stringent than those in the existing literature and that our results are consistent with the experiments.
Role of molecular size in cloud droplet activation
NASA Astrophysics Data System (ADS)
Petters, M. D.; Kreidenweis, S. M.; Prenni, A. J.; Sullivan, R. C.; Carrico, C. M.; Koehler, K. A.; Ziemann, P. J.
2009-11-01
We examine the observed relationships between molar volume (the ratio of molar mass to density) and cloud condensation nuclei (CCN) activity for sufficiently soluble organic compounds found in atmospheric particulate matter. Our data compilation includes new CCN data for certain carbohydrates and oligoethylene glycols, as well as published data for organic compounds. We compare predictions of CCN activity using water activities based on Raoult's law and Flory-Huggins theory to observations. The Flory-Huggins water activity expression, with an assumed surface tension of pure water, generally predicts CCN activity within a factor of two over the full range of molar volumes considered. CCN activity is only weakly dependent on molar volume for values exceeding 600 cm³ mol⁻¹, and the diminishing sensitivity to molar volume, combined with the significant scatter in the data, limits the accuracy with which molar volume can be inferred from CCN measurements.
Three-Body Amplification of Photon Heat Tunneling
NASA Astrophysics Data System (ADS)
Messina, Riccardo; Antezza, Mauro; Ben-Abdallah, Philippe
2012-12-01
Resonant tunneling of surface polaritons across a subwavelength vacuum gap between two polar or metallic bodies at different temperatures leads to an almost monochromatic heat transfer which can exceed by several orders of magnitude the far-field upper limit predicted by Planck’s blackbody theory. However, despite its strong magnitude, this transfer is very far from the maximum theoretical limit predicted in the near field. Here we propose an amplifier for the photon heat tunneling based on a passive relay system intercalated between the two bodies, which is able to partially compensate the intrinsic exponential damping of energy transmission probability thanks to three-body interaction mechanisms. As an immediate corollary, we show that the exalted transfer observed in the near field between two media can be exported at larger separation distances using such a relay. Photon heat tunneling assisted by three-body interactions enables novel applications for thermal management at nanoscale, near-field energy conversion and infrared spectroscopy.
Bosse, Casey; Rosen, Gunther; Colvin, Marienne; Earley, Patrick; Santore, Robert; Rivera-Duarte, Ignacio
2014-08-15
The bioavailability and toxicity of copper (Cu) in Shelter Island Yacht Basin (SIYB), San Diego, CA, USA, was assessed with simultaneous toxicological, chemical, and modeling approaches. Toxicological measurements included laboratory toxicity testing with Mytilus galloprovincialis (Mediterranean mussel) embryos added to both site water (ambient) and site water spiked with multiple Cu concentrations. Chemical assessment of ambient samples included total and dissolved Cu concentrations, and Cu complexation capacity measurements. Modeling was based on chemical speciation and predictions of bioavailability and toxicity using a marine Biotic Ligand Model (BLM). Cumulatively, these methods assessed the natural buffering capacity of Cu in SIYB during singular wet and dry season sampling events. Overall, the three approaches suggested negligible bioavailability, and isolated observed or predicted toxicity, despite an observed gradient of increasing Cu concentration, both horizontally and vertically within the water body, exceeding current water quality criteria for saltwater.
Parallel photonic information processing at gigabyte per second data rates using transient states
NASA Astrophysics Data System (ADS)
Brunner, Daniel; Soriano, Miguel C.; Mirasso, Claudio R.; Fischer, Ingo
2013-01-01
The increasing demands on information processing require novel computational concepts and true parallelism. Nevertheless, hardware realizations of unconventional computing approaches never exceeded a marginal existence. While the application of optics in super-computing receives reawakened interest, new concepts, partly neuro-inspired, are being considered and developed. Here we experimentally demonstrate the potential of a simple photonic architecture to process information at unprecedented data rates, implementing a learning-based approach. A semiconductor laser subject to delayed self-feedback and optical data injection is employed to solve computationally hard tasks. We demonstrate simultaneous spoken digit and speaker recognition and chaotic time-series prediction at data rates beyond 1 GByte/s. We identify all digits with very low classification errors and perform chaotic time-series prediction with 10% error. Our approach bridges the areas of photonic information processing, cognitive and information science.
Performance Impact Associated with Ni-Based SOFCs Fueled with Higher Hydrocarbon-Doped Coal Syngas
NASA Astrophysics Data System (ADS)
Hackett, Gregory A.; Gerdes, Kirk; Chen, Yun; Song, Xueyan; Zondlo, John
2015-03-01
Energy generation strategies demonstrating high efficiency and fuel flexibility are desirable in the contemporary energy market. When integrated with a gasification process, a solid oxide fuel cell (SOFC) can produce electricity at efficiencies exceeding 50 percent by consuming fuels such as coal, biomass, municipal solid waste, or other opportunity wastes. The synthesis gas derived from such fuel may contain trace species (including arsenic, lead, cadmium, mercury, phosphorus, sulfur, and tars) and low concentration organic species that adversely affect the SOFC performance. This work demonstrates the impact of exposure to the hydrocarbons ethylene, benzene, and naphthalene at various concentrations. The cell performance degradation rate is determined for tests exceeding 500 hours at 1073 K (800 °C). Cell performance is evaluated during operation with electrochemical impedance spectroscopy, and exposed samples are post-operationally analyzed by scanning electron microscopy/energy dispersive spectroscopy, X-ray photoelectron spectroscopy, and transmission electron microscopy. The short-term performance is modeled to predict performance over the desired 40,000-hour operational lifetime for SOFCs. Possible hydrocarbon interactions with the nickel anode are postulated, and acceptable hydrocarbon exposure limits are discussed.
The double high tide at Port Ellen: Doodson's criterion revisited
NASA Astrophysics Data System (ADS)
Byrne, Hannah A. M.; Mattias Green, J. A.; Bowers, David G.
2017-07-01
Doodson proposed a minimum criterion to predict the occurrence of double high (or double low) waters when a higher-frequency tidal harmonic is added to the semi-diurnal tide. If the phasing of the harmonic is optimal, the condition for a double high water can be written bn²/a > 1, where b is the amplitude of the higher harmonic, a is the amplitude of the semi-diurnal tide, and n is the ratio of their frequencies. Here we expand this criterion to allow for (i) a phase difference ϕ between the semi-diurnal tide and the harmonic and (ii) the fact that the double high water will disappear in the event that b/a becomes large enough for the higher harmonic to be the dominant component of the tide. This can happen, for example, at places or times where the semi-diurnal tide is very small. The revised parameter is br²/a, where r is a number generally less than n, although equal to n when ϕ = 0. The theory predicts that a double high tide will form when this parameter exceeds 1 and then disappear when it exceeds a value of order n² and the higher harmonic becomes dominant. We test these predictions against observations at Port Ellen in the Inner Hebrides of Scotland. For most of the data set, the largest harmonic of the semi-diurnal tide is the sixth-diurnal component, for which n = 3. The principal lunar and solar semi-diurnal tides are about equal at Port Ellen, and so the semi-diurnal tide becomes very small twice a month at neap tides (here defined as the smallest fortnightly tidal range). A double high water forms when br²/a first exceeds a minimum value of about 1.5 as neap tides are approached and then disappears as br²/a exceeds a second limiting value of about 10 at neap tides, in agreement with the revised criterion.
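The criterion is easy to probe numerically. The sketch below synthesizes a tide from a semi-diurnal component plus a sixth-diurnal harmonic at the optimal phasing (phi = pi, so r = n in the notation above) and checks for a dip at the crest; the amplitudes are arbitrary test values.

```python
# A minimal numerical check of Doodson's criterion: the tide
# a*cos(t) + b*cos(n*t + phi) shows a double high water when a local
# minimum appears at the crest (t ~ 0). Amplitudes here are test values.
import numpy as np

def high_water_shape(a, b, n, phi):
    t = np.linspace(-np.pi / n, np.pi / n, 4001)   # window around high water
    eta = a * np.cos(t) + b * np.cos(n * t + phi)
    i = len(t) // 2
    # A dip at the crest signals a double high water.
    double = eta[i] < eta[i - 200] and eta[i] < eta[i + 200]
    return "double" if double else "single"

a, n = 1.0, 3                   # semi-diurnal tide plus sixth-diurnal harmonic
for b in (0.05, 0.2, 0.5):
    print(f"b={b}: bn^2/a={b * n**2 / a:.2f} ->",
          high_water_shape(a, b, n, phi=np.pi))
# Expect "single" below bn^2/a = 1 and "double" above it.
```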
NASA Astrophysics Data System (ADS)
Rey, Julien; Beauval, Céline; Douglas, John
2018-05-01
Probabilistic seismic hazard assessments are the basis of modern seismic design codes. To test fully a seismic hazard curve at the return periods of interest for engineering would require many thousands of years' worth of ground-motion recordings. Because strong-motion networks are often only a few decades old (e.g. in mainland France the first accelerometric network dates from the mid-1990s), data from such sensors can be used to test hazard estimates only at very short return periods. In this article, several hundred years of macroseismic intensity observations for mainland France are interpolated using a robust kriging-with-a-trend technique to establish the earthquake history of every French mainland municipality. At 24 selected cities representative of the French seismic context, the number of exceedances of intensities IV, V and VI is determined over time windows considered complete. After converting these intensities to peak ground accelerations using the global conversion equation of Caprio et al. (Ground motion to intensity conversion equations (GMICEs): a global relationship and evaluation of regional dependency, Bulletin of the Seismological Society of America 105:1476-1490, 2015), these exceedances are compared with those predicted by the European Seismic Hazard Model 2013 (ESHM13). In half of the cities, the number of observed exceedances for low intensities (IV and V) is within the range of predictions of ESHM13. In the other half of the cities, the number of observed exceedances is higher than the predictions of ESHM13. For intensity VI, the match is closer, but the comparison is less meaningful due to a scarcity of data. According to this study, the ESHM13 underestimates hazard in roughly half of France, even when taking into account the uncertainty in the conversion from intensity to acceleration. However, these results are valid only for the acceleration range tested in this study (0.01 to 0.09 g).
On the estimation of risk associated with an attenuation prediction
NASA Technical Reports Server (NTRS)
Crane, R. K.
1992-01-01
Viewgraphs from a presentation on the estimation of risk associated with an attenuation prediction are presented. Topics covered include: link failure, defined as attenuation exceeding a specified threshold for a specified time interval or intervals; risk, the probability of one or more failures during the lifetime of the link or during a specified accounting interval; the problem of modeling the probability of attenuation by rainfall to provide a prediction of the attenuation threshold for a specified risk; and accounting for the inadequacy of a model or models.
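For the risk definition above, the standard independence assumption gives a one-line calculation; the per-interval exceedance probability and link lifetime below are illustrative.

```python
# Risk as defined above: probability of one or more failures over the link
# lifetime, assuming independent accounting intervals. Values are illustrative.
p_exceed_per_year = 0.01      # attenuation threshold exceeded in a given year
lifetime_years = 15
risk = 1 - (1 - p_exceed_per_year) ** lifetime_years
print(f"lifetime risk = {risk:.3f}")   # ~0.14 for these values
```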
Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A
2016-03-15
The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle.
Ciffroy, Philippe; Charlatchka, Rayna; Ferreira, Daniel; Marang, Laura
2013-07-01
The biotic ligand model (BLM) theoretically enables the derivation of environmental quality standards that are based on true bioavailable fractions of metals. Several physicochemical variables (especially pH, major cations, dissolved organic carbon, and dissolved metal concentrations) must, however, be assigned to run the BLM, but they are highly variable in time and space in natural systems. This article describes probabilistic approaches for integrating such variability during the derivation of risk indexes. To describe each variable using a probability density function (PDF), several methods were combined to 1) treat censored data (i.e., data below the limit of detection), 2) incorporate the uncertainty of the solid-to-liquid partitioning of metals, and 3) detect outliers. From a probabilistic perspective, 2 alternative approaches that are based on log-normal and Γ distributions were tested to estimate the probability of the predicted environmental concentration (PEC) exceeding the predicted non-effect concentration (PNEC), i.e., p(PEC/PNEC>1). The probabilistic approach was tested on 4 real-case studies based on Cu-related data collected from stations on the Loire and Moselle rivers. The approach described in this article is based on BLM tools that are freely available for end-users (i.e., the Bio-Met software) and on accessible statistical data treatments. This approach could be used by stakeholders who are involved in risk assessments of metals for improving site-specific studies.
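The central quantity p(PEC/PNEC > 1) reduces to a simple Monte Carlo once the PDFs are chosen. The sketch below assumes log-normal forms with invented parameters; it is not the Loire or Moselle data.

```python
# A minimal Monte Carlo sketch of p(PEC/PNEC > 1), assuming log-normal PDFs
# for both quantities. Parameters are illustrative, not site data.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
pec = rng.lognormal(mean=np.log(1.5), sigma=0.6, size=n)    # dissolved Cu, ug/L
pnec = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=n)   # BLM-based PNEC, ug/L

risk = np.mean(pec / pnec > 1)
print(f"p(PEC/PNEC > 1) = {risk:.3f}")
```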
Mu, Ying; Valim, Niksa; Niedre, Mark
2013-06-15
We tested the performance of a fast single-photon avalanche photodiode (SPAD) in measurement of early transmitted photons through diffusive media. In combination with a femtosecond titanium:sapphire laser, the overall instrument temporal response time was 59 ps. Using two experimental models, we showed that the SPAD allowed measurement of photon-density sensitivity functions that were approximately 65% narrower than the ungated continuous wave case at very early times. This exceeds the performance that we have previously achieved with photomultiplier-tube-based systems and approaches the theoretical maximum predicted by time-resolved Monte Carlo simulations.
Mapping cumulative noise from shipping to inform marine spatial planning.
Erbe, Christine; MacGillivray, Alexander; Williams, Rob
2012-11-01
Including ocean noise in marine spatial planning requires predictions of noise levels on large spatiotemporal scales. Based on a simple sound transmission model and ship track data (Automatic Identification System, AIS), cumulative underwater acoustic energy from shipping was mapped throughout 2008 in the west Canadian Exclusive Economic Zone, showing high noise levels in critical habitats for endangered resident killer whales, exceeding limits of "good conservation status" under the EU Marine Strategy Framework Directive. Error analysis proved that rough calculations of noise occurrence and propagation can form a basis for management processes, because spending resources on unnecessary detail is wasteful and delays remedial action.
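A cumulative noise map of this kind can be sketched with spherical spreading and energy summation over AIS fixes; the source level, grid, and track below are placeholders rather than the study's calibrated transmission model.

```python
# Sketch of cumulative shipping noise on a grid: one ship track, a fixed
# broadband source level, and geometric spreading. All values are placeholders.
import numpy as np

SL = 180.0                                    # source level, dB re 1 uPa @ 1 m
x = y = np.linspace(0, 50_000, 101)           # 50 km x 50 km grid, metres
X, Y = np.meshgrid(x, y)
track = [(10_000 + 400 * k, 25_000) for k in range(100)]   # AIS fixes

energy = np.zeros_like(X)
for sx, sy in track:
    r = np.hypot(X - sx, Y - sy).clip(min=1.0)
    rl = SL - 20 * np.log10(r)                # received level per fix, dB
    energy += 10 ** (rl / 10)                 # accumulate on an energy basis

cumulative_db = 10 * np.log10(energy)
print(f"max {cumulative_db.max():.1f} dB, mean {cumulative_db.mean():.1f} dB")
```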
Gigantic Dzyaloshinskii-Moriya interaction in the MnBi ultrathin films
NASA Astrophysics Data System (ADS)
Yu, Jie-Xiang; Zang, Jiadong; Zang's Team
The magnetic skyrmion, a swirling-like spin texture with nontrivial topology, is driven by strong Dzyaloshinskii-Moriya (DM) interaction originating from the spin-orbit coupling in inversion-symmetry-breaking systems. Here, based on first-principles calculations, we predict a new material, the MnBi ultrathin film, with gigantic DM interactions. The ratio of the DM interaction to the Heisenberg exchange is about 0.3, exceeding any value reported so far. Its high Curie temperature, high coercivity, and large perpendicular magnetoanisotropy make MnBi a good candidate for future spintronics studies. Topologically nontrivial spin textures are emergent in this system. We expect further experimental efforts will be devoted to this system.
Zhao, Li; Chen, Chunxia; Li, Bei; Dong, Li; Guo, Yingqiang; Xiao, Xijun; Zhang, Eryong; Qin, Li
2014-01-01
Objective: To study the performance of pharmacogenetics-based warfarin dosing algorithms in the initial and the stable warfarin treatment phases in a cohort of Han-Chinese patients undergoing mechanical heart valve replacement. Methods: We searched PubMed, Chinese National Knowledge Infrastructure and Wanfang databases to select pharmacogenetics-based warfarin dosing models. Patients with mechanical heart valve replacement were consecutively recruited between March 2012 and July 2012. The predicted warfarin dose of each patient was calculated and compared with the observed initial and stable warfarin doses. The percentage of patients whose predicted dose fell within 20% of their actual therapeutic dose (percentage within 20%) and the mean absolute error (MAE) were utilized to evaluate the predictive accuracy of all the selected algorithms. Results: A total of 8 algorithms, including the Du, Huang, Miao, Wei, Zhang, Lou, Gage, and International Warfarin Pharmacogenetics Consortium (IWPC) models, were tested in 181 patients. The MAE of the Gage, IWPC and 6 Han-Chinese pharmacogenetics-based warfarin dosing algorithms was less than 0.6 mg/day, and the percentage within 20% exceeded 45% for all of the selected models in both the initial and the stable treatment stages. When patients were stratified according to warfarin dose range, all of the equations demonstrated better performance in the ideal-dose range (1.88–4.38 mg/day) than the low-dose range (<1.88 mg/day). Among the 8 algorithms compared, the algorithms of Wei, Huang, and Miao showed a lower MAE and higher percentage within 20% in both the initial and the stable warfarin dose prediction and in the low-dose and the ideal-dose ranges. Conclusions: All of the selected pharmacogenetics-based warfarin dosing regimens performed similarly in our cohort. However, the algorithms of Wei, Huang, and Miao showed a better potential for warfarin prediction in the initial and the stable treatment phases in Han-Chinese patients undergoing mechanical heart valve replacement. PMID:24728385
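The two headline metrics, MAE and percentage within 20%, are straightforward to compute; the sketch below uses hypothetical doses purely to show the bookkeeping.

```python
# The two evaluation metrics used above: mean absolute error and the
# percentage of patients whose predicted dose falls within 20% of the actual
# therapeutic dose. Doses are hypothetical (mg/day).
import numpy as np

def evaluate(predicted, actual):
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    mae = np.mean(np.abs(predicted - actual))
    within20 = np.mean(np.abs(predicted - actual) <= 0.2 * actual) * 100
    return mae, within20

pred = [2.8, 3.5, 1.9, 4.2, 2.2]
obs = [3.0, 3.1, 2.5, 4.0, 2.1]
mae, pct = evaluate(pred, obs)
print(f"MAE = {mae:.2f} mg/day, within 20% = {pct:.0f}%")
```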
1RM prediction: a novel methodology based on the force-velocity and load-velocity relationships.
Picerno, Pietro; Iannetta, Danilo; Comotto, Stefania; Donati, Marco; Pecoraro, Fabrizio; Zok, Mounir; Tollis, Giorgio; Figura, Marco; Varalda, Carlo; Di Muzio, Davide; Patrizio, Federica; Piacentini, Maria Francesca
2016-10-01
This study aimed to evaluate the accuracy of a novel approach for predicting the one-repetition maximum (1RM). The prediction is based on the force-velocity and load-velocity relationships determined from measured force and velocity data collected during resistance-training exercises with incremental submaximal loads. 1RM was determined as the load corresponding to the intersection of these two curves, where the gravitational force exceeds the force that the subject can exert. The proposed force-velocity-based method (FVM) was tested on 37 participants (23.9 ± 3.1 years; BMI 23.44 ± 2.45) with no specific resistance-training experience, and the predicted 1RM was compared to that achieved using a direct method (DM) in chest-press (CP) and leg-press (LP) exercises. The mean 1RM in CP was 99.5 kg (±27.0) for DM and 100.8 kg (±27.2) for FVM (SEE = 1.2 kg), whereas the mean 1RM in LP was 249.3 kg (±60.2) for DM and 251.1 kg (±60.3) for FVM (SEE = 2.1 kg). A high correlation was found between the two methods for both CP and LP exercises (0.999, p < 0.001). Good agreement between the two methods emerged from the Bland-Altman plot analysis. These findings suggest the use of the proposed methodology as a valid alternative to other indirect approaches for 1RM prediction. The mathematical construct is simply based on the definition of the 1RM, and it is fed with the subject's muscle strength capacities measured during a specific exercise. Its reliability is thus expected to be unaffected by those factors that typically jeopardize regression-based approaches.
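The construct reduces to intersecting two fitted lines. The sketch below fits the force-velocity and weight-velocity lines to invented incremental-load data and solves for the crossing; the numbers are not from the study.

```python
# A minimal sketch of the intersection construct, with invented data for one
# subject: mean lift velocity (m/s), measured force (N), and load mass (kg).
import numpy as np

v    = np.array([1.10, 0.90, 0.72, 0.55, 0.40])
F    = np.array([512., 606., 685., 750., 809.])   # force-velocity samples
load = np.array([40., 50., 60., 70., 80.])        # load-velocity samples

g = 9.81
fv = np.polyfit(v, F, 1)           # force-velocity line  F(v) = a1*v + b1
wv = np.polyfit(v, load * g, 1)    # weight-velocity line W(v) = a2*v + b2

# 1RM is the load whose weight equals the force the subject can still exert,
# i.e. the intersection of the two fitted lines.
v_star = (wv[1] - fv[1]) / (fv[0] - wv[0])
one_rm = np.polyval(fv, v_star) / g
print(f"predicted 1RM ~ {one_rm:.1f} kg")   # above the heaviest tested load
```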
42 CFR 435.219 - Individuals receiving State plan home and community-based services.
Code of Federal Regulations, 2014 CFR
2014-10-01
... that does not exceed 150 percent of the Federal poverty line (FPL); (3) Meet the needs-based criteria...) Have income that does not exceed 300 percent of the Supplemental Security Income Federal Benefit Rate...
Mikami, Akiko; Hori, Satoko; Ohtani, Hisakazu; Sawada, Yasufumi
2017-01-01
The purpose of the study was to quantitatively estimate and predict drug interactions between terbinafine and the tricyclic antidepressants (TCAs) amitriptyline and nortriptyline, based on in vitro studies. Inhibition of TCA-metabolizing activity by terbinafine was investigated using human liver microsomes. Based on the unbound Ki values obtained in vitro and reported pharmacokinetic parameters, a pharmacokinetic model of drug interaction was fitted to the reported plasma concentration profiles of TCAs administered concomitantly with terbinafine to obtain the drug-drug interaction parameters. The model was then used to predict nortriptyline plasma concentration with concomitant administration of terbinafine and changes in the area under the curve (AUC) of nortriptyline after cessation of terbinafine. The CYP2D6 inhibitory potency of terbinafine was unaffected by preincubation, so the inhibition appears to be reversible. Terbinafine competitively inhibited amitriptyline or nortriptyline E-10-hydroxylation, with unbound Ki values of 13.7 and 12.4 nM, respectively. Observed plasma concentrations of TCAs administered concomitantly with terbinafine were successfully simulated with the drug interaction model using the in vitro parameters. Model-predicted nortriptyline plasma concentration after concomitant nortriptyline/terbinafine administration for two weeks exceeded the toxic level, and the drug interaction was predicted to be prolonged; the AUC of nortriptyline was predicted to increase by 2.5-, 2.0-, and 1.5-fold at 0, 3, and 6 months after cessation of terbinafine, respectively. The developed model enables us to quantitatively predict the prolonged drug interaction between terbinafine and TCAs. The model should be helpful for clinical management of terbinafine-CYP2D6 substrate drug interactions, which are difficult to predict due to their time-dependency.
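For a rough sense of the magnitude, the static competitive-inhibition approximation AUC_ratio = 1 + [I]u/Ki,u (a simpler stand-in for the authors' dynamic model) can be evaluated with the reported unbound Ki; the unbound terbinafine concentration is assumed.

```python
# Back-of-envelope check using the standard static competitive-inhibition
# model, not the authors' fitted dynamic model. The unbound terbinafine
# concentration is a hypothetical value.
ki_u = 12.4e-9        # unbound Ki for nortriptyline E-10-hydroxylation (M), from above
i_u = 15e-9           # assumed unbound plasma terbinafine concentration (M)

auc_ratio = 1 + i_u / ki_u
print(f"predicted fold increase in nortriptyline AUC ~ {auc_ratio:.1f}")  # ~2.2
```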
Sensory generalization and learning about novel colours by poultry chicks.
Osorio, Daniel; Ham, Abigail D; Gonda, Zsusanna; Andrew, Richard J
2009-07-01
In nature, animals constantly encounter novel stimuli and need to generalize from known stimuli; the animal may then learn about the novel stimulus. Hull (1947) suggested that, as they learn, animals distinguish knowledge based on direct experience from inference by generalization; in support of this view he noted that if a subject is directly trained to a stimulus, subsequent extinction of responses is slower than when the response is based on generalization. Such an effect is also predicted by Bayesian models that relate the rate of learning to uncertainty in the estimate of stimulus value. We find support for this prediction when chicks learn about a novel colour (orange) and the initial evaluation is based on similarity to known colours (red, yellow). Specifically, if an expected food reward is absent, the rate of extinction of the response to the novel stimulus exceeds that for the familiar colours. Interestingly, the change in relative preference for novel and familiar stimuli occurs after a delay of an hour. This type of delay has not, to our knowledge, been reported in previous studies of single-trial learning, but given the importance of generalization in natural behaviour, this type of learning may have wide relevance.
Dinov, Ivo D; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W; Price, Nathan D; Van Horn, John D; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M; Dauer, William; Toga, Arthur W
2016-01-01
A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data-large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources-all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson's disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson's disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. 
The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer's, Huntington's, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications.
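The model-free classification step can be sketched with a boosted classifier under k-fold cross-validation; the features below are synthetic stand-ins for the clinical, genetic, and imaging elements named above.

```python
# Sketch of model-free classification with n-fold cross-validation; features
# are synthetic stand-ins, not PPMI data elements.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n, p = 600, 20
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 1, n)) > 0   # case/control label

clf = AdaBoostClassifier(n_estimators=200)
acc = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```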
NASA Astrophysics Data System (ADS)
Robinson, Donald Arthur
1984-06-01
A method is presented to predict airborne and barrier transmission loss of an audible signal as it travels from a corridor-based octave-band sound source to a room-based receiver location. Flanking pathways are not considered in the prediction model. Although the central focus of the research is on the propagation of the signal, a comprehensive review of the source, path and receiver is presented as related to emergency audible signal propagation. Linear attenuation of the signal and end-wall reflection is applied along the corridor path, incorporating research conducted by T. L. Redmore of Essex, England. Classical room acoustics are applied to establish the onset of linear attenuation beyond the near field. The "coincidence effect" is applied to the transmission loss through the room door barrier. A constant barrier door transmission loss from corridor to room is applied throughout the 250-8000 Hz octave bands. In situ measurements were conducted in two separate dormitories on the University of Massachusetts Amherst campus to verify the validity of the approach. All of the experimental data points follow the corresponding points predicted by the model, with all correlations exceeding 0.9. The 95 percent confidence intervals for the absolute difference between predicted and measured values ranged from 0.76 dB to 4.5 dB based on five Leq dB levels taken at each octave band along the length of the corridor. For the corridor-to-room attenuation in the six test rooms, with the door closed and edge sealed, the predicted minus measured levels ranged from 0.54 to 2.90 dB Leq at octave bands from 250 to 8000 Hz. Given the inherent difficulty of in situ tests compared to laboratory or modeling approaches, the confidence intervals obtained confirm the usefulness of the prediction model presented.
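The path calculation reduces to subtracting a distance-proportional corridor attenuation and a fixed per-band door transmission loss from the source level; the constants in this sketch are illustrative, not the calibrated values.

```python
# A minimal sketch of the corridor-to-room path calculation, assuming linear
# corridor attenuation (dB/m) plus a constant door transmission loss per
# octave band. All constants are illustrative.
def room_level(source_db, distance_m, atten_db_per_m, door_tl_db):
    """Predicted octave-band Leq in the room for a corridor source."""
    corridor_level = source_db - atten_db_per_m * distance_m
    return corridor_level - door_tl_db

# 2 kHz band: 95 dB source, 20 m down the corridor, 0.5 dB/m, 25 dB door TL.
print(room_level(95.0, 20.0, 0.5, 25.0))   # -> 60.0 dB
```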
Coronary Risk Factor Scoring as a Guide for Counseling
NASA Technical Reports Server (NTRS)
Fleck, R. L.
1971-01-01
A risk factor scoring system for early detection, possible prediction, and counseling of coronary heart disease patients is discussed. Scoring data include dynamic EKG, cholesterol levels, triglyceride content, total lipid level, total phospholipid level, and electrophoretic patterns. Results indicate such a system is effective in identifying high-risk subjects, but that the ability to predict exceeds the ability to prevent heart disease or its complications.
NASA Technical Reports Server (NTRS)
Jadaan, Osama M.; Powers, Lynn M.; Gyekenyesi, John P.
1997-01-01
The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as advanced turbine systems. Design lives for such systems can exceed 10,000 hours. Such long life requirements necessitate subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this work is to present a design methodology for predicting the lifetimes of structural components subjected to multiaxial creep loading. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep stress distributions (stress relaxation). In this methodology, the creep life of a component is divided into short time steps, during which the stress and strain distributions are assumed constant. The damage, D, is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. For components subjected to predominantly tensile loading, failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity.
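The damage bookkeeping in this methodology can be sketched in a few lines: sum dt/t_f over the relaxing stress history and find where the sum reaches unity. The power-law rupture life and stress history below are invented placeholders, not the modified Monkman-Grant fit.

```python
# A minimal sketch of time-stepped creep damage accumulation, assuming a
# power-law rupture life t_f = A * sigma**(-p) at fixed temperature and a
# relaxing stress history; constants are illustrative, not a material fit.
import numpy as np

A, p = 1.5e12, 4.0               # hypothetical rupture-life constants (h, MPa)
dt = 50.0                        # time step, h
t = np.arange(0.0, 50_000.0, dt)
sigma = 120.0 * np.exp(-t / 8000.0) + 40.0     # relaxing stress, MPa

damage = np.cumsum(dt / (A * sigma ** -p))     # D = sum(dt_i / t_f(sigma_i))
idx = np.searchsorted(damage, 1.0)             # first step where D >= 1
print("predicted life:", f"{t[idx]:.0f} h" if idx < len(t) else "> 50,000 h")
```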
Quantitative risk management in gas injection project: a case study from Oman oil and gas industry
NASA Astrophysics Data System (ADS)
Khadem, Mohammad Miftaur Rahman Khan; Piya, Sujan; Shamsuzzoha, Ahm
2017-09-01
The purpose of this research was to study the recognition, application and quantification of the risks associated with managing projects. In this research, the management of risks in an oil and gas project was studied and implemented within a case company in Oman. First, qualitative data related to risks in the project were identified through field visits and extensive interviews. These data were then translated into numerical values based on expert opinion. The numerical data were then used as input to a Monte Carlo simulation. RiskyProject Professional™ software was used to simulate the system based on the identified risks. The simulation result predicted a delay of about 2 years in the worst case, with no chance of meeting the project's on-stream date. It also predicted an 8% chance of exceeding the total estimated budget. The result of the numerical analysis from the proposed model is validated by comparing it with the result of the qualitative analysis, which was obtained through discussion with various project managers of the company.
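The simulation step is conceptually a Monte Carlo over uncertain activity durations and costs; the sketch below uses triangular distributions with invented parameters rather than RiskyProject's engine or the project's actual data.

```python
# A minimal Monte Carlo sketch of schedule and cost risk with triangular
# distributions; all parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(8)
n = 50_000

# Three sequential activities with (min, mode, max) durations in months.
total = (rng.triangular(10, 12, 20, n)
         + rng.triangular(6, 8, 14, n)
         + rng.triangular(4, 5, 9, n))

planned = 26.0                                  # planned duration, months
print("P(any delay)        =", np.mean(total > planned))
print("P(delay > 6 months) =", np.mean(total > planned + 6))

cost = rng.triangular(90, 100, 130, n)          # outturn cost, % of budget
print("P(cost overrun)     =", np.mean(cost > 100))
```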
High sensitivity gas sensor based on high-Q suspended polymer photonic crystal nanocavity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clevenson, Hannah, E-mail: hannahac@mit.edu; Desjardins, Pierre; Gan, Xuetao
2014-06-16
We present high-sensitivity, multi-use optical gas sensors based on a one-dimensional photonic crystal cavity. These devices are implemented in versatile, flexible polymer materials which swell when in contact with a target gas, causing a measurable cavity length change. This change causes a shift in the cavity resonance, allowing precision measurements of gas concentration. We demonstrate suspended polymer nanocavity sensors and the recovery of sensors after the removal of stimulant gas from the system. With a measured quality factor exceeding 10⁴, we show measurements of gas concentration as low as 600 parts per million (ppm) and an experimental sensitivity of 10 ppm; furthermore, we predict detection levels in the parts-per-billion range for a variety of gases.
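The transduction principle can be put in one line: a fractional cavity-length change from swelling produces a comparable fractional resonance shift. The swelling coefficient below is an assumed figure for illustration only.

```python
# Back-of-envelope for the transduction principle: Delta_lambda ~ lambda * (Delta_L / L).
# The swelling coefficient is an assumed figure, not a measured device property.
lambda_0 = 1550e-9          # resonance wavelength, m (assumed)
swelling_per_ppm = 1e-9     # fractional length change per ppm of analyte (assumed)

def shift_pm(concentration_ppm):
    """Predicted resonance shift in picometres for a given gas concentration."""
    return lambda_0 * swelling_per_ppm * concentration_ppm * 1e12

print(shift_pm(600))   # shift at the 600 ppm detection level reported above
```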
Ground-water quality in east-central New Jersey, and a plan for sampling networks
Harriman, D.A.; Sargent, B.P.
1985-01-01
Groundwater quality was evaluated in seven confined aquifers and the water table aquifer in east-central New Jersey based on 237 analyses of samples collected in 1981-82, and 225 older analyses. Investigation of the effect of land use on water quality and several sampling network proposals for the region are reported. Generally, water in the confined aquifers is of satisfactory quality for human consumption and most other uses. Iron (Fe) and manganese (Mn) concentrations exceed U.S. EPA drinking water standards in some wells screened in the Potomac-Raritan-Magothy aquifer system. Sodium (Na) concentrations in samples from three wells more than 800 ft deep in the Englishtown aquifer exceed the standard. Iron and Mn concentrations in this aquifer may also exceed the standards. Iron concentrations in the Wenonah-Mount Laurel aquifer exceed the standard. Based on 15 analyses of water from the Vincetown aquifer, Mn is the only constituent that exceeds the drinking water standard. In the Manasquan aquifer, 4 of the 16 Na determinations exceed the standard, and 8 of 16 Fe determinations exceed the standard. Water quality in the Atlantic City 800-ft sand is generally satisfactory. However, 12 Fe and 1 of 12 Mn determinations exceed the standards. For the Rio Grande water-bearing zone, 1 of 3 Fe determinations exceeds the standard. The Kirkwood-Cohansey aquifer system (the water table aquifer) was the most thoroughly sampled (249 chemical analyses from 209 wells). Dissolved solids, chloride, Fe, nitrate, and Mn concentrations exceed drinking water standards in some areas. The results of chi-square tests of constituent distributions based on analyses from 158 wells in the water table aquifer indicate that calcium is higher in industrial and commercial areas, and that Mg, chloride, and nitrate-plus-nitrite are higher in residential areas.
Development of a 3D numerical methodology for fast prediction of gun blast induced loading
NASA Astrophysics Data System (ADS)
Costa, E.; Lagasco, F.
2014-05-01
In this paper, the development of a methodology based on semi-empirical models from the literature to carry out 3D prediction of pressure loading on surfaces adjacent to a weapon system during firing is presented. This loading results from the impact of the blast wave generated by the projectile exiting the muzzle bore. When a pressure threshold level is exceeded, the loading is potentially capable of inducing unwanted damage to nearby hard structures as well as frangible panels or electronic equipment. The implemented model shows the ability to quickly predict the distribution of the blast wave parameters over three-dimensional complex geometry surfaces when the weapon design and emplacement data as well as propellant and projectile characteristics are available. Considering these capabilities, the proposed methodology is envisaged for use in the preliminary design phase of the combat system to predict adverse effects and to identify the most appropriate countermeasures. By providing a preliminary but sensitive estimate of the operative environmental loading, this numerical means represents a good alternative to more powerful but time-consuming advanced computational fluid dynamics tools, whose use can thus be limited to the final phase of the design.
Transport and stability analyses supporting disruption prediction in high beta KSTAR plasmas
NASA Astrophysics Data System (ADS)
Ahn, J.-H.; Sabbagh, S. A.; Park, Y. S.; Berkery, J. W.; Jiang, Y.; Riquezes, J.; Lee, H. H.; Terzolo, L.; Scott, S. D.; Wang, Z.; Glasser, A. H.
2017-10-01
KSTAR plasmas have reached high stability parameters in dedicated experiments, with normalized beta βN exceeding 4.3 at relatively low plasma internal inductance li (βN/li>6). Transport and stability analyses have begun on these plasmas to best understand a disruption-free path toward the design target of βN = 5 while aiming to maximize the non-inductive fraction of these plasmas. Initial analysis using the TRANSP code indicates that the non-inductive current fraction in these plasmas has exceeded 50 percent. The advent of KSTAR kinetic equilibrium reconstructions now allows more accurate computation of the MHD stability of these plasmas. Attention is placed on code validation of mode stability using the PEST-3 and resistive DCON codes. Initial evaluation of these analyses for disruption prediction is made using the disruption event characterization and forecasting (DECAF) code. The present global mode kinetic stability model in DECAF developed for low aspect ratio plasmas is evaluated to determine modifications required for successful disruption prediction of KSTAR plasmas. Work supported by U.S. DoE under contract DE-SC0016614.
The formation of flare loops by magnetic reconnection and chromospheric ablation
NASA Technical Reports Server (NTRS)
Forbes, T. G.; Malherbe, J. M.; Priest, E. R.
1989-01-01
Noncoplanar compressible reconnection theory is combined here with simple scaling arguments for ablation and radiative cooling to predict average properties of hot and cool flare loops as a function of the coronal vector magnetic field. For a coronal field strength of 100 G, the temperature of the hot flare loops decreases from 1.2 × 10⁷ K to 4.0 × 10⁶ K as the component of the coronal magnetic field perpendicular to the plane of the loops increases from 0 percent to 86 percent of the total field. When the perpendicular component exceeds 86 percent of the total field or when the altitude of the reconnection site exceeds 10⁶ km, flare loops no longer occur. Shock-enhanced radiative cooling triggers the formation of cool H-alpha flare loops with predicted densities of roughly 10¹³ cm⁻³, and a small gap of roughly 1000 km is predicted to exist between the footpoints of the cool flare loops and the inner edges of the flare ribbons.
Decadal predictability of winter windstorm frequency in Eastern Europe
NASA Astrophysics Data System (ADS)
Höschel, Ines; Grieger, Jens; Ulbrich, Uwe
2017-04-01
Winter windstorms are among the most impact-relevant extreme weather events in Europe. This study focuses on windstorm frequency in Eastern Europe on multi-year time scales. Individual storms are identified using 6-hourly 10 m wind fields. The impact-oriented tracking algorithm is based on exceedance of the local 98th percentile of wind speed and a minimum duration of 18 hours. Here, storm frequency is the number of 1000 km footprints of identified windstorms touching a location during the extended boreal winter from October to March. The temporal development of annual storm frequencies in Eastern Europe shows variations with periods of six to fifteen years. Higher-than-normal windstorm frequency occurred at the end of the 1950s and in the beginning of the 1970s, for example, while lower-than-normal frequencies occurred around 1960 and in the 1940s. The correlation between bandpass-filtered storm frequency and North Atlantic sea surface temperature shows a significant pattern, with a positive correlation in the subtropical East Atlantic and significant negative correlations in the Gulf Stream region. The relationship between these multi-year variations and predictability on decadal time scales is discussed. The resulting skill for winter windstorms in the German decadal prediction system MiKlip, based on the numerical earth system model MPI-ESM, will be presented.
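At a single grid point, the identification rule is a run-length condition on threshold exceedances; the sketch below applies it to a synthetic 6-hourly wind series.

```python
# Sketch of the identification rule at one grid point: wind must exceed the
# local 98th percentile for at least 18 h (three consecutive 6-hourly fields).
# The wind series here is synthetic.
import numpy as np

rng = np.random.default_rng(3)
wind = rng.weibull(2.0, size=4 * 182 * 30) * 6.0   # 6-hourly 10 m wind, m/s
thresh = np.percentile(wind, 98)

count, storms = 0, 0
for exceeds in wind > thresh:
    count = count + 1 if exceeds else 0
    if count == 3:            # run just reached the 18 h minimum duration
        storms += 1
print(f"threshold {thresh:.1f} m/s, {storms} storm events")
```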
Quantitative model of the growth of floodplains by vertical accretion
Moody, J.A.; Troutman, B.M.
2000-01-01
A simple one-dimensional model is developed to quantitatively predict the change in elevation, over a period of decades, for vertically accreting floodplains. This unsteady model approximates the monotonic growth of a floodplain as an incremental but constant increase of net sediment deposition per flood for those floods of a partial duration series that exceed a threshold discharge corresponding to the elevation of the floodplain. Sediment deposition from each flood increases the elevation of the floodplain and consequently the magnitude of the threshold discharge, resulting in a decrease in the number of floods and the growth rate of the floodplain. Floodplain growth curves predicted by this model are compared to empirical growth curves based on dendrochronology and to direct field measurements at five floodplain sites. The model was used to predict the value of net sediment deposition per flood that best fits (in a least squares sense) the empirical and field measurements; these values fall within the range of independent estimates of net sediment deposition per flood based on empirical equations. These empirical equations permit the application of the model to the estimation of floodplain growth for other floodplains throughout the world that do not have detailed data on sediment deposition during individual floods.
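The model's feedback loop is compact enough to simulate directly: each flood peak above the current floodplain level deposits a fixed increment, which raises the threshold for the next flood. Parameter values below are illustrative.

```python
# A minimal sketch of the model's feedback: each flood exceeding the threshold
# stage (which tracks floodplain elevation) adds a fixed deposition increment D.
# Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(4)

D = 0.02                       # net deposition per flood, m
years = 100
base_stage = 2.0               # initial threshold stage, m
z = 0.0                        # floodplain growth above initial surface, m
elevation = np.zeros(years)

for yr in range(years):
    peaks = rng.gumbel(2.0, 0.8, size=5)     # partial-duration series of stages, m
    z += D * np.sum(peaks > base_stage + z)  # only overtopping floods deposit
    elevation[yr] = z

print(f"growth after {years} years: {elevation[-1]:.2f} m (rate decelerates)")
```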
Liabeuf, Debora; Sim, Sung-Chur; Francis, David M
2018-03-01
Bacterial spot affects tomato crops (Solanum lycopersicum) grown under humid conditions. Major genes and quantitative trait loci (QTL) for resistance have been described, and multiple loci from diverse sources need to be combined to improve disease control. We investigated genomic selection (GS) prediction models for resistance to Xanthomonas euvesicatoria and experimentally evaluated the accuracy of these models. The training population consisted of 109 families combining resistance from four sources and directionally selected from a population of 1,100 individuals. The families were evaluated on a plot basis in replicated inoculated trials and genotyped with single nucleotide polymorphisms (SNP). We compared the prediction ability of models developed with 14 to 387 SNP. Genomic estimated breeding values (GEBV) were derived using Bayesian least absolute shrinkage and selection operator regression (BL) and ridge regression (RR). Evaluations were based on leave-one-out cross validation and on empirical observations in replicated field trials using the next generation of inbred progeny and a hybrid population resulting from selections in the training population. Prediction ability was evaluated based on correlations between GEBV and phenotypes (rg), percentage of coselection between genomic and phenotypic selection, and relative efficiency of selection (rg/rp). Results were similar with BL and RR models. Models using only markers previously identified as significantly associated with resistance but weighted based on GEBV and mixed models with markers associated with resistance treated as fixed effects and markers distributed in the genome treated as random effects offered greater accuracy and a high percentage of coselection. The accuracy of these models to predict the performance of progeny and hybrids exceeded the accuracy of phenotypic selection.
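The ridge-regression flavour of these models can be sketched with simulated SNP data; marker effects, family sizes, and the cross-validation layout below are invented, not the tomato panel.

```python
# Sketch of RR-based genomic prediction: SNPs coded 0/1/2 predict family-mean
# resistance scores, evaluated by leave-one-out cross validation. Data are
# simulated, not the tomato training population.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(5)
n_fam, n_snp = 109, 100
X = rng.integers(0, 3, size=(n_fam, n_snp)).astype(float)
beta = rng.normal(0, 0.2, n_snp) * (rng.random(n_snp) < 0.1)   # few causal loci
y = X @ beta + rng.normal(0, 0.5, n_fam)

gebv = cross_val_predict(Ridge(alpha=10.0), X, y, cv=LeaveOneOut())
r_g = np.corrcoef(gebv, y)[0, 1]
print(f"leave-one-out prediction ability rg = {r_g:.2f}")
```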
NASA Astrophysics Data System (ADS)
Li, Chengcheng; Ren, Hong-Li; Zhou, Fang; Li, Shuanglin; Fu, Joshua-Xiouhua; Li, Guoping
2018-06-01
Precipitation is highly variable in space and discontinuous in time, which makes it challenging for models to predict on subseasonal scales (10-30 days). We analyze multi-pentad predictions from the Beijing Climate Center Climate System Model version 1.2 (BCC_CSM1.2), based on hindcasts from 1997 to 2014. The analysis focuses on the skill of the model in predicting precipitation variability over Southeast Asia from May to September, as well as its connections with the intraseasonal oscillation (ISO). The effective precipitation prediction length is about two pentads (10 days), during which the skill measured by anomaly correlation is greater than 0.1. Diagnosis of two related circulation fields shows that their prediction skills exceed that of precipitation. Moreover, the prediction skills tend to be higher when the amplitude of the ISO is large, especially for the boreal summer intraseasonal oscillation. The skills associated with phases 2 and 5 are higher, but that of phase 3 is relatively lower. Even so, different initial phases reflect the same spatial characteristics, with higher skill of precipitation prediction in the northwest Pacific Ocean. Finally, filter analysis is applied to the prediction skills of total and subseasonal anomalies. The results of the two anomaly sets are comparable during the first two lead pentads, but thereafter the skill of the total anomalies is significantly higher than that of the subseasonal anomalies. This paper should help advance research in subseasonal precipitation prediction.
Spahr, Norman E.; Mueller, David K.; Wolock, David M.; Hitt, Kerie J.; Gronberg, JoAnn M.
2010-01-01
Data collected for the U.S. Geological Survey National Water-Quality Assessment program from 1992-2001 were used to investigate the relations between nutrient concentrations and nutrient sources, hydrology, and basin characteristics. Regression models were developed to estimate annual flow-weighted concentrations of total nitrogen and total phosphorus using explanatory variables derived from currently available national ancillary data. Different total-nitrogen regression models were used for agricultural (25 percent or more of basin area classified as agricultural land use) and nonagricultural basins. Atmospheric, fertilizer, and manure inputs of nitrogen, percent sand in soil, subsurface drainage, overland flow, mean annual precipitation, and percent undeveloped area were significant variables in the agricultural basin total nitrogen model. Significant explanatory variables in the nonagricultural total nitrogen model were total nonpoint-source nitrogen input (sum of nitrogen from manure, fertilizer, and atmospheric deposition), population density, mean annual runoff, and percent base flow. The concentrations of nutrients derived from regression (CONDOR) models were applied to drainage basins associated with the U.S. Environmental Protection Agency (USEPA) River Reach File (RF1) to predict flow-weighted mean annual total nitrogen concentrations for the conterminous United States. The majority of stream miles in the Nation have predicted concentrations less than 5 milligrams per liter. Concentrations greater than 5 milligrams per liter were predicted for a broad area extending from Ohio to eastern Nebraska, areas spatially associated with greater application of fertilizer and manure. Probabilities that mean annual total-nitrogen concentrations exceed the USEPA regional nutrient criteria were determined by incorporating model prediction uncertainty. In all nutrient regions where criteria have been established, there is at least a 50 percent probability of exceeding the criteria in more than half of the stream miles. Dividing calibration sites into agricultural and nonagricultural groups did not improve the explanatory capability for total phosphorus models. The group of explanatory variables that yielded the lowest model error for mean annual total phosphorus concentrations includes phosphorus input from manure, population density, amounts of range land and forest land, percent sand in soil, and percent base flow. However, the large unexplained variability and associated model error precluded the use of the total phosphorus model for nationwide extrapolations.
Nolan, B.T.; Hitt, K.J.; Ruddy, B.C.
2002-01-01
A new logistic regression (LR) model was used to predict the probability of nitrate contamination exceeding 4 mg/L in predominantly shallow, recently recharged ground waters of the United States. The model contains variables representing N fertilizer loading along with other nitrogen-source and aquifer-vulnerability factors, and observed and predicted probabilities agree closely (r2 = 0.875), indicating that the LR model fits the data well. The likelihood of nitrate contamination is greater in areas with high N loading and well-drained surficial soils over unconsolidated sand and gravels. The LR model correctly predicted the status of nitrate contamination in 75% of wells in a validation data set. Considering all wells used in both calibration and validation, observed median nitrate concentration increased from 0.24 to 8.30 mg/L as the mapped probability of nitrate exceeding 4 mg/L increased from less than or equal to 0.17 to greater than 0.83.
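A minimal sketch of this kind of exceedance model follows; the two predictors and all coefficients are invented for illustration and are not the published model's variables or fit.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([
    rng.gamma(2.0, 20.0, n),      # hypothetical N fertilizer loading (kg/ha)
    rng.uniform(0.0, 100.0, n),   # hypothetical percent well-drained soils
])
logit = -3.0 + 0.02 * X[:, 0] + 0.02 * X[:, 1]     # invented true relationship
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # 1 = nitrate > 4 mg/L

model = LogisticRegression().fit(X, y)
p_exceed = model.predict_proba(X)[:, 1]            # mapped probability of exceedance
print(f"fraction correctly classified: {model.score(X, y):.2f}")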
Kennedy, Jeffrey R.; Paretti, Nicholas V.; Veilleux, Andrea G.
2014-01-01
Regression equations, which allow predictions of n-day flood-duration flows for selected annual exceedance probabilities at ungaged sites, were developed using generalized least-squares regression and flood-duration flow frequency estimates at 56 streamgaging stations within a single, relatively uniform physiographic region in the central part of Arizona, between the Colorado Plateau and Basin and Range Province, called the Transition Zone. Drainage area explained most of the variation in the n-day flood-duration annual exceedance probabilities, but mean annual precipitation and mean elevation were also significant variables in the regression models. Standard error of prediction for the regression equations varies from 28 to 53 percent and generally decreases with increasing n-day duration. Outside the Transition Zone there are insufficient streamgaging stations to develop regression equations, but flood-duration flow frequency estimates are presented at select streamgaging stations.
Recknagel, Friedrich; Orr, Philip T; Bartkow, Michael; Swanepoel, Annelie; Cao, Hongqing
2017-11-01
An early warning scheme is proposed that runs ensembles of inferential models for predicting cyanobacterial population dynamics and cyanotoxin concentrations in drinking water reservoirs on a diel basis, driven by in situ sonde water quality data. When the 10- to 30-day-ahead predicted concentrations of cyanobacteria cells or cyanotoxins exceed pre-defined limit values, an early warning automatically activates an action plan considering in-lake control (e.g. intermittent mixing) and ad hoc water treatment in water works, respectively. Case studies of the sub-tropical Lake Wivenhoe (Australia) and the Mediterranean Vaal Reservoir (South Africa) demonstrate that ensembles of inferential models developed by the hybrid evolutionary algorithm HEA are capable of forecasting cyanobacteria and cyanotoxins up to 30 days ahead using data collected in situ. The resulting models for Dolichospermum circinale were valid for up to 10 days ahead, whilst concentrations of Cylindrospermopsis raciborskii and microcystins were successfully predicted up to 30 days ahead. Implementing the proposed scheme for drinking water reservoirs enhances current water quality monitoring practices by utilising solely in situ monitoring data, in addition to cyanobacteria and cyanotoxin measurements. Access to routinely measured cyanotoxin data allows development of models that explicitly predict cyanotoxin concentrations, avoiding the inadvertent modelling and prediction of non-toxic cyanobacterial strains.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurtz, Sarah; Repins, Ingrid L; Hacke, Peter L
Continued growth of PV system deployment would be enhanced by quantitative, low-uncertainty predictions of the degradation and failure rates of PV modules and systems. The intended product lifetime (decades) far exceeds the product development cycle (months), limiting our ability to reduce the uncertainty of the predictions for this rapidly changing technology. Yet, business decisions (setting insurance rates, analyzing return on investment, etc.) require quantitative risk assessment. Moving toward more quantitative assessments requires consideration of many factors, including the intended application, consequence of a possible failure, variability in the manufacturing, installation, and operation, as well as uncertainty in the measured acceleration factors, which provide the basis for predictions based on accelerated tests. As the industry matures, it is useful to periodically assess the overall strategy for standards development and prioritization of research to provide a technical basis both for the standards and the analysis related to the application of those standards. To this end, this paper suggests a tiered approach to creating risk assessments. Recent and planned potential improvements in international standards are also summarized.
NASA Astrophysics Data System (ADS)
Ji, Zhaojie; Guan, Zhidong; Li, Zengshan
2017-10-01
In this paper, a progressive damage model was established on the basis of ABAQUS software for predicting permanent indentation and impact damage in composite laminates. Intralaminar and interlaminar damage was modelled based on continuum damage mechanics (CDM) in the finite element model. For verification of the model, low-velocity impact tests of quasi-isotropic laminates with the T300/5228A material system were conducted. Permanent indentation and impact damage of the laminates were simulated, and the numerical results agree well with the experiments. An obvious knee point can be identified on the curve of indentation depth versus impact energy. Matrix cracking and delamination develop rapidly with increasing impact energy, while a considerable amount of fiber breakage occurs only when the impact energy exceeds the energy corresponding to the knee point. The predicted indentation depth after the knee point is very sensitive to the parameter μ proposed in this paper, and the acceptable value of this parameter is in the range 0.9 to 1.0.
Cryptic biodiversity loss linked to global climate change
NASA Astrophysics Data System (ADS)
Bálint, M.; Domisch, S.; Engelhardt, C. H. M.; Haase, P.; Lehrian, S.; Sauer, J.; Theissinger, K.; Pauls, S. U.; Nowak, C.
2011-09-01
Global climate change (GCC) significantly affects distributional patterns of organisms, and considerable impacts on biodiversity are predicted for the next decades. Inferred effects include large-scale range shifts towards higher altitudes and latitudes, facilitation of biological invasions and species extinctions. Alterations of biotic patterns caused by GCC have usually been predicted on the scale of taxonomically recognized morphospecies. However, the effects of climate change at the most fundamental level of biodiversity--intraspecific genetic diversity--remain elusive. Here we show that the use of morphospecies-based assessments of GCC effects will result in underestimations of the true scale of biodiversity loss. Species distribution modelling and assessments of mitochondrial DNA variability in nine montane aquatic insect species in Europe indicate that future range contractions will be accompanied by severe losses of cryptic evolutionary lineages and genetic diversity within these lineages. These losses greatly exceed those at the scale of morphospecies. We also document that the extent of range reduction may be a useful proxy when predicting losses of genetic diversity. Our results demonstrate that intraspecific patterns of genetic diversity should be considered when estimating the effects of climate change on biodiversity.
Analysis and verification of a prediction model of solar energetic proton events
NASA Astrophysics Data System (ADS)
Wang, J.; Zhong, Q.
2017-12-01
The solar energetic particle event can cause severe radiation damage near Earth. Alerts and summary products for solar energetic proton events are provided by the Space Environment Prediction Center (SEPC) according to the flux of greater-than-10 MeV protons measured by the GOES satellite in geosynchronous orbit. The start of a solar energetic proton event is defined as the time when the flux of greater-than-10 MeV protons equals or exceeds 10 proton flux units (pfu). In this study, a model was developed to predict solar energetic proton events and provide warning at least minutes in advance, based on both the soft X-ray flux and the integral proton flux measured by GOES. The quality of the forecast model was measured against verifications of accuracy, reliability, discrimination capability, and forecast skill. The peak flux and rise time of solar energetic proton events in six channels (>1 MeV, >5 MeV, >10 MeV, >30 MeV, >50 MeV, >100 MeV) were also simulated and analyzed.
Predicting Viral Infection From High-Dimensional Biomarker Trajectories
Chen, Minhua; Zaas, Aimee; Woods, Christopher; Ginsburg, Geoffrey S.; Lucas, Joseph; Dunson, David; Carin, Lawrence
2013-01-01
There is often interest in predicting an individual’s latent health status based on high-dimensional biomarkers that vary over time. Motivated by time-course gene expression array data that we have collected in two influenza challenge studies performed with healthy human volunteers, we develop a novel time-aligned Bayesian dynamic factor analysis methodology. The time course trajectories in the gene expressions are related to a relatively low-dimensional vector of latent factors, which vary dynamically starting at the latent initiation time of infection. Using a nonparametric cure rate model for the latent initiation times, we allow selection of the genes in the viral response pathway, variability among individuals in infection times, and a subset of individuals who are not infected. As we demonstrate using held-out data, this statistical framework allows accurate predictions of infected individuals in advance of the development of clinical symptoms, without labeled data and even when the number of biomarkers vastly exceeds the number of individuals under study. Biological interpretation of several of the inferred pathways (factors) is provided. PMID:23704802
A mathematical model of aortic aneurysm formation
Hao, Wenrui; Gong, Shihua; Wu, Shuonan; Xu, Jinchao; Go, Michael R.; Friedman, Avner; Zhu, Dai
2017-01-01
Abdominal aortic aneurysm (AAA) is a localized enlargement of the abdominal aorta such that the diameter exceeds 3 cm. The natural history of AAA is progressive growth leading to rupture, an event that carries up to 90% risk of mortality. Hence there is a need to predict the growth of the diameter of the aorta based on the diameter of a patient's aneurysm at initial screening, aided by non-invasive biomarkers. IL-6 is overexpressed in AAA and has been suggested as a prognostic marker of risk in AAA. The present paper develops a mathematical model which relates the growth of the abdominal aorta to the serum concentration of IL-6. Given the initial diameter of the aorta and the serum concentration of IL-6, the model predicts the growth of the diameter at subsequent times. Such a prediction can provide guidance on how closely a patient's abdominal aorta should be monitored. The mathematical model is represented by a system of partial differential equations posed in the aortic wall, where the media is assumed to behave as a hyperelastic material. PMID:28212412
Prottengeier, Johannes; Albermann, Matthias; Heinrich, Sebastian; Birkholz, Torsten; Gall, Christine; Schmidt, Joachim
2016-12-01
Intravenous access in prehospital emergency care allows for early administration of medication and extended measures such as anaesthesia. Cannulation may, however, be difficult, and failure and the resulting delay in treatment and transport may have negative effects on the patient. Our study therefore aims to provide a concise assessment of the difficulties of prehospital venous cannulation. We analysed 23 candidate predictor variables on peripheral venous cannulations in terms of cannulation failure and exceedance of a 2 min time threshold. Multivariate logistic regression models were fitted for variables of predictive value (P<0.25) and evaluated by the area under the curve (AUC>0.6) of their respective receiver operating characteristic curves. A total of 762 intravenous cannulations were included. In all, 22% of punctures failed on the first attempt and 13% of punctures exceeded 2 min. Model selection yielded a three-factor model (vein visibility without tourniquet, vein palpability with tourniquet and insufficient ambient lighting) of fair accuracy for the prediction of puncture failure (AUC=0.76) and a structurally congruent four-factor model (the failure-model factors plus vein visibility with tourniquet) for exceedance of the 2 min threshold (AUC=0.80). Our study offers a simple assessment to identify cases of difficult intravenous access in prehospital emergency care. Of the numerous factors subjectively perceived as possibly influencing cannulation, only the universal factors of lighting, vein visibility and palpability, none exclusive to emergency care, proved to be valid predictors of cannulation failure and exceedance of a 2 min threshold.
Lyons, John D.; Stewart, Jana S.
2015-01-01
The Lake Sturgeon (Acipenser fulvescens, Rafinesque, 1817) may be threatened by future climate warming. The purpose of this study was to identify river reaches in Wisconsin, USA, where they might be vulnerable to warming water temperatures. In Wisconsin, A. fulvescens is known from 2291 km of large-river habitat that has been fragmented into 48 discrete river-lake networks isolated by impassable dams. Although the exact temperature tolerances are uncertain, water temperatures above 28–30°C are potentially less suitable for this coolwater species. Predictions from 13 downscaled global climate models were input to a lotic water temperature model to estimate amounts of potential thermally less-suitable habitat at present and for 2046–2065. Currently, 341 km (14.9%) of the known habitat are estimated to regularly exceed 28°C for an entire day, but only 6 km (0.3%) to exceed 30°C. In 2046–2065, 685–2164 km (29.9–94.5%) are projected to exceed 28°C and 33–1056 km (1.4–46.1%) to exceed 30°C. Most river-lake networks have cooler segments, large tributaries, or lakes that might provide temporary escape from potentially less suitable temperatures, but 12 short networks in the Lower Fox and Middle Wisconsin rivers totaling 93.6 km are projected to have no potential thermal refugia. One possible adaptation to climate change could be to provide fish passage or translocation so that riverine Lake Sturgeon might have access to more thermally suitable habitats.
Logue, Jennifer M; Sleiman, Mohamad; Montesinos, V Nahuel; Russell, Marion L; Litter, Marta I; Benowitz, Neal L; Gundel, Lara A; Destaillats, Hugo
2017-08-15
E-cigarettes likely represent a lower risk to health than traditional combustion cigarettes, but they are not innocuous. Recently reported emission rates of potentially harmful compounds were used to assess intake and predict health impacts for vapers and bystanders exposed passively. Vapers' toxicant intake was calculated for scenarios in which different e-liquids were used with various vaporizers, battery power settings and vaping regimes. For a high rate of 250 puffs day⁻¹ using a typical vaping regime and popular tank devices with battery voltages from 3.8 to 4.8 V, users were predicted to inhale formaldehyde (up to 49 mg day⁻¹), acrolein (up to 10 mg day⁻¹) and diacetyl (up to 0.5 mg day⁻¹), at levels that exceeded U.S. occupational limits. Formaldehyde intake from 100 daily puffs was higher than the amount inhaled by a smoker consuming 10 conventional cigarettes per day. Secondhand exposures were predicted for two typical indoor scenarios: a home and a bar. Contributions from vaping to air pollutant concentrations in the home did not exceed the California OEHHA 8-h reference exposure levels (RELs), except when a high emitting device was used at 4.8 V. In that extreme scenario, the contributions from vaping amounted to as much as 12 μg m⁻³ formaldehyde and 2.6 μg m⁻³ acrolein. Pollutant concentrations in bars were modeled using indoor volumes, air exchange rates and the number of hourly users reported in the literature for U.S. bars in which smoking was allowed. Predicted contributions to indoor air levels were higher than those in the residential scenario. Formaldehyde (on average 135 μg m⁻³) and acrolein (28 μg m⁻³) exceeded the acute 1-h exposure REL for the highest emitting vaporizer/voltage combination. Predictions for these compounds also exceeded the 8-h REL in several bars when less intense vaping conditions were considered. Benzene concentrations in a few bars approached the 8-h REL, and diacetyl levels were close to the lower limit for occupational exposures. The integrated health damage from passive vaping was derived by computing disability-adjusted life years (DALYs) lost due to exposure to secondhand vapor. Acrolein was the dominant contributor to the aggregate harm. DALYs for the various device/voltage combinations were lower than, or comparable to, those estimated for exposures to secondhand and thirdhand tobacco smoke.
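Two of the calculations behind these numbers are simple enough to sketch: a vaper's daily intake from a per-puff emission rate, and a steady-state indoor concentration from a well-mixed single-zone box model. All rates below are placeholders; the per-puff value is merely chosen so the arithmetic reproduces the 49 mg day⁻¹ formaldehyde figure quoted above.

def daily_intake_mg(emission_per_puff_ug, puffs_per_day):
    # Intake (mg/day) = per-puff emission (ug) x puffs per day, converted to mg.
    return emission_per_puff_ug * puffs_per_day / 1000.0

def steady_state_ug_m3(emission_ug_h, volume_m3, ach_per_h):
    # Well-mixed box model at steady state: C = E / (V * ACH).
    return emission_ug_h / (volume_m3 * ach_per_h)

print(daily_intake_mg(emission_per_puff_ug=196.0, puffs_per_day=250))   # -> 49.0 mg/day
print(steady_state_ug_m3(emission_ug_h=3000.0, volume_m3=50.0, ach_per_h=0.5))  # ug/m3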
A new coupling mechanism between two graphene electron waveguides for ultrafast switching
NASA Astrophysics Data System (ADS)
Huang, Wei; Liang, Shi-Jun; Kyoseva, Elica; Ang, Lay Kee
2018-03-01
In this paper, we report a novel coupling between two graphene electron waveguides, in analogy to optical waveguides. The design is based on coherent quantum mechanical tunneling, i.e. Rabi oscillation, between the two graphene electron waveguides. Based on this coupling mechanism, we propose that it can be used as an ultrafast electronic switching device. Using a modified coupled mode theory, we construct a theoretical model to analyze the device characteristics, and predict that the switching speed is faster than 1 ps and the on-off ratio exceeds 10^6. Due to the long mean free path of electrons in graphene at room temperature, the proposed design avoids the low-temperature operation required in traditional designs based on semiconductor quantum-well structures. The layout of our design is similar to that of a standard complementary metal-oxide-semiconductor transistor and should be readily fabricated with current state-of-the-art nanotechnology.
SYNTHESIS of MOLECULE/POLYMER-BASED MAGNETIC MATERIALS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Joel S.
2016-02-01
We have synthesized and characterized several families of organic-based magnets, a new area showing that organic species can exhibit the technologically important property of magnetic ordering. Thin-film magnets with ordering temperatures exceeding room temperature have been achieved. Hence, organic-based magnets represent a new class of materials that exhibit magnetic ordering, do not require energy-intensive metallurgical processing, and are based upon Earth-abundant elements.
Gaussian and Lognormal Models of Hurricane Gust Factors
NASA Technical Reports Server (NTRS)
Merceret, Frank
2009-01-01
A document describes a tool that predicts the likelihood of land-falling tropical storms and hurricanes exceeding specified peak speeds, given the mean wind speed at heights of up to 500 feet (150 meters) above ground level. Empirical models that calculate the mean and standard deviation of the gust factor as a function of height and mean wind speed were developed in Excel based on data from previous hurricanes. Separate models were developed for Gaussian and offset-lognormal distributions of the gust factor. Rather than forecasting a single, specific peak wind speed, the tool provides the probability of exceeding a specified value. This probability is provided as a function of height, allowing it to be applied at a height appropriate for tall structures. The user inputs the mean wind speed, height, and operational threshold, and the tool produces the probability from each model that the given threshold will be exceeded. The application does have limits. The models were tested only in tropical storm conditions associated with the periphery of hurricanes; winds of similar speed produced by non-tropical systems may have different turbulence dynamics and stability, which may change their statistical characteristics. The models were also developed along the Central Florida seacoast, and their results may not extrapolate accurately to inland areas, or even to coastal sites that differ from those used to build the models. Although the tool cannot be generalized for use in different environments, its methodology could be applied to those locations to develop a similar tool tuned to local conditions.
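The tool's core calculation can be sketched as follows: the probability that the peak gust exceeds an operational threshold, given the mean wind speed and a fitted gust-factor distribution. The mean, sigma and offset values below are placeholders; the actual tool fits them empirically as functions of height and mean wind speed.

import numpy as np
from scipy import stats

def p_exceed_gaussian(mean_wind, threshold, gf_mean, gf_std):
    # P(peak > threshold) under a Gaussian gust-factor model.
    g_needed = threshold / mean_wind               # gust factor required to exceed
    return 1.0 - stats.norm.cdf(g_needed, loc=gf_mean, scale=gf_std)

def p_exceed_lognormal(mean_wind, threshold, mu, sigma, offset):
    # P(peak > threshold) under an offset-lognormal gust-factor model.
    g_needed = threshold / mean_wind - offset
    if g_needed <= 0:
        return 1.0
    return 1.0 - stats.lognorm.cdf(g_needed, s=sigma, scale=np.exp(mu))

# Example: 40-kt mean wind, 58-kt threshold, illustrative distribution fits.
print(p_exceed_gaussian(40.0, 58.0, gf_mean=1.3, gf_std=0.12))
print(p_exceed_lognormal(40.0, 58.0, mu=np.log(0.3), sigma=0.35, offset=1.0))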
Gimou, M-M; Charrondière, U R; Leblanc, J-C; Noël, L; Guérin, T; Pouillot, R
2013-01-01
Dietary exposure to 11 elements was assessed by the Total Diet Study (TDS) method. Sixty-four pooled samples representing 96.5% of the diet in Yaoundé, Cameroon, were prepared as consumed before analysis. Consumption data were sourced from a household budget survey. Dietary exposures were compared with nutritional or health-based guidance values (HBGV) and to worldwide TDS results. Elevated prevalence of inadequate intake was estimated for calcium (71.6%), iron (89.7%), magnesium (31.8%), zinc (46.9%) and selenium (87.3%). The percentage of the study population exceeding the tolerable upper intake levels was estimated as <3.2% for calcium, iron, magnesium, zinc and cobalt; 19.1% of the population exceeded the HBGV for sodium. No exceedance of the HBGV for inorganic mercury was predicted in the population. The margin of exposure ranged from 0.91 to 25.0 for inorganic arsenic depending on the reference point. The "Fish" food group was the highest contributor to intake for calcium (65%), cobalt (32%) and selenium (96%). This group was the highest contributor to the exposure to total arsenic (71%) and organic mercury (96%). The "Cereals and cereal products" highly contributed to iron (26%), zinc (26%) and chromium (25%) intakes. The "Tubers and starches" highly contributed to magnesium (39%) and potassium (52%) intakes. This study highlights the dietary deficiency of some essential elements and a low dietary exposure to toxic elements in Yaoundé.
The precision of peri-oestrous predictors of the date of onset of parturition in the bitch.
De Cramer, K G M; Nöthling, J O
2017-07-01
Precise prediction of the date of onset of parturition in the bitch is clinically important. This study compared the precision with which four peri-oestrous predictors predict the date of onset of parturition. The predictors evaluated in 24 bitches were: the date of the first or only day of the LH surge, the date on which the plasma progesterone concentration first exceeded 6 nmol/L, the date on which the plasma progesterone concentration first exceeded 16 nmol/L, and the date of onset of cytological dioestrus. Among the 24 bitches, the date of onset of cytological dioestrus predicted the date of onset of parturition with greater precision than the other three predictors. Following evaluation of another 218 intervals between the onset of cytological dioestrus and the date of onset of parturition, the onset of cytological dioestrus was shown to predict the date of onset of parturition with a precision of ±1 d, ±2 d and ±3 d in 88%, 99% and 100% of the 242 pregnancies, respectively. This study concludes that the first day of cytological dioestrus is a useful predictor of the date of onset of parturition.
NASA Astrophysics Data System (ADS)
Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.
2014-11-01
Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth through empirical crop coefficients that adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop-coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than parametric uncertainty for estimating irrigation water requirements. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a given threshold, e.g. an irrigation water limit of 400 mm set by water rights, would be exceeded less frequently under the REA ensemble average (45%) than under the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
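The two quantities compared above, a weighted ensemble average and its exceedance probability curve, are easy to sketch; the member outputs and REA-style weights below are invented, whereas real REA weights are derived from model performance and convergence criteria.

import numpy as np

rng = np.random.default_rng(3)
members = rng.normal(loc=[380, 420, 400, 450, 360, 410], scale=40.0,
                     size=(500, 6))                # 500 seasons x 6 model variants (mm)
weights = np.array([0.3, 0.1, 0.2, 0.05, 0.15, 0.2])  # hypothetical REA weights
rea_avg = members @ weights / weights.sum()        # weighted average per season

def exceedance_curve(x):
    # Empirical probability of exceeding each sorted value (Weibull plotting position).
    xs = np.sort(x)
    p = 1.0 - np.arange(1, len(xs) + 1) / (len(xs) + 1.0)
    return xs, p

xs, p = exceedance_curve(rea_avg)                  # (xs, p) traces the full curve
limit = 400.0                                      # e.g. a 400 mm water-right limit
print(f"P(requirement > {limit} mm) = {np.mean(rea_avg > limit):.2f}")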
A First Look at Electric Motor Noise For Future Propulsion Systems
NASA Technical Reports Server (NTRS)
Huff, Dennis L.; Henderson, Brenda S.; Envia, Edmane
2016-01-01
Motor tone predictions, using a vibration analysis with input from design parameters for high-power-density motors, show that the noise can be significantly higher or lower than empirical correlations suggest and can exceed the stated uncertainty.
Hydrological Predictability for the Peruvian Amazon
NASA Astrophysics Data System (ADS)
Towner, Jamie; Stephens, Elizabeth; Cloke, Hannah; Bazo, Juan; Coughlan, Erin; Zsoter, Ervin
2017-04-01
Population growth in the Peruvian Amazon has prompted the expansion of livelihoods further into the floodplain, increasing vulnerability to the annual rise and fall of the river. This growth has coincided with a period of increasing hydrological extremes and more frequent severe flood events. The anticipation and forecasting of these events is crucial for mitigating vulnerability. Forecast-based Financing (FbF), an initiative of the German Red Cross, implements risk-reducing actions based on threshold exceedance within hydrometeorological forecasts from the Global Flood Awareness System (GloFAS). However, the lead times required to complete certain actions can be long (e.g. several weeks to months ahead to purchase materials and reinforce houses) and are beyond the current capabilities of GloFAS. Therefore, further calibration of the model is required, in addition to understanding the climatic drivers and associated hydrological response for specific flood events, such as those observed in 2009, 2012 and 2015. This review sets out to determine the current capabilities of the GloFAS model while exploring the limits of predictability for the Amazon basin; more specifically, how the temporal patterns of flow within the main coinciding tributaries correspond to the overall Amazonian flood wave under various climatic and meteorological influences. Linking the source areas of flow to predictability within the seasonal forecasting system will develop the ability to extend the limit of predictability of the flood wave. This presentation focuses on the Iquitos region of Peru, while providing an overview of new techniques and current challenges in seasonal flood prediction.
On-Line, Self-Learning, Predictive Tool for Determining Payload Thermal Response
NASA Technical Reports Server (NTRS)
Jen, Chian-Li; Tilwick, Leon
2000-01-01
This paper will present the results of a joint ManTech/Goddard R&D effort, currently under way, to develop and test a computer-based, on-line, predictive simulation model for use by facility operators to predict the thermal response of a payload during thermal vacuum testing. Thermal response was identified as an area that could benefit from the algorithms developed by Dr. Jen for complex computer simulations. Most thermal vacuum test setups are unique, since no two payloads have the same thermal properties. Operators therefore depend on past experience to conduct the test, which requires time for them to learn how the payload responds while limiting any risk of exceeding hot or cold temperature limits. The predictive tool being developed is intended to be used with the new Thermal Vacuum Data System (TVDS) developed at Goddard for the Thermal Vacuum Test Operations group. The model can learn the thermal response of the payload by reading a few data points from the TVDS, accepting the payload's current temperature as the initial condition for prediction. The model can then be used as a predictive tool to estimate future payload temperatures according to a predetermined shroud temperature profile. If the prediction error is too large, the model can be asked to re-learn the new situation on-line in real time and give a new prediction. Based on preliminary tests, we expect this predictive model to forecast the payload temperature over an entire test cycle to within 5 degrees Celsius after it has learned three times during the beginning of the test. The tool will allow the operator to run "what-if" experiments to decide on the best shroud-temperature set-point control strategy. This tool will save money by minimizing guesswork and optimizing transitions, as well as making the testing process safer and easier to conduct.
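The idea behind such a self-learning predictor can be sketched with a single-time-constant thermal lag: fit one coefficient from a few observed points, then integrate forward along a planned shroud profile. The model form and every number below are illustrative assumptions, not the Goddard tool's actual algorithm.

import numpy as np

def learn_coeff(t_payload, t_shroud):
    # Least-squares fit of a in: T[k+1] - T[k] = a * (Ts[k] - T[k]).
    drive = t_shroud[:-1] - t_payload[:-1]
    delta = np.diff(t_payload)
    return float(delta @ drive / (drive @ drive))

def predict(t0, shroud_profile, a):
    # Integrate the learned first-order model along a planned shroud profile.
    temps = [t0]
    for ts in shroud_profile:
        temps.append(temps[-1] + a * (ts - temps[-1]))
    return np.array(temps)

# "Learn" from a short observed segment, then forecast the rest of a cold soak.
observed_shroud = np.full(20, -80.0)
observed_payload = 20.0 + (-80.0 - 20.0) * (1 - np.exp(-0.05 * np.arange(20)))
a = learn_coeff(observed_payload, observed_shroud)
forecast = predict(observed_payload[-1], np.full(100, -80.0), a)
print(f"learned a = {a:.3f}, predicted T after 100 more steps = {forecast[-1]:.1f} C")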
Stelzer, Erin A.; Duris, Joseph W.; Brady, Amie M. G.; Harrison, John H.; Johnson, Heather E.; Ware, Michael W.
2013-01-01
Predictive models, based on environmental and water quality variables, have been used to improve the timeliness and accuracy of recreational water quality assessments, but their effectiveness has not been studied in inland waters. Sampling at eight inland recreational lakes in Ohio was done in order to investigate using predictive models for Escherichia coli and to understand the links between E. coli concentrations, predictive variables, and pathogens. Based upon results from 21 beach sites, models were developed for 13 sites, and the most predictive variables were rainfall, wind direction and speed, turbidity, and water temperature. Models were not developed at sites where the E. coli standard was seldom exceeded. Models were validated at nine sites during an independent year. At three sites, the model resulted in increased correct responses, sensitivities, and specificities compared to use of the previous day's E. coli concentration (the current method). Drought conditions during the validation year precluded being able to adequately assess model performance at most of the other sites. Cryptosporidium, adenovirus, eaeA (E. coli), ipaH (Shigella), and spvC (Salmonella) were found in at least 20% of samples collected for pathogens at five sites. The presence or absence of the three bacterial genes was related to some of the model variables but was not consistently related to E. coli concentrations. Predictive models were not effective at all inland lake sites; however, their use at two lakes with high swimmer densities will provide better estimates of public health risk than current methods and will be a valuable resource for beach managers and the public. PMID:23291550
Fleck, David E; Ernest, Nicholas; Adler, Caleb M; Cohen, Kelly; Eliassen, James C; Norris, Matthew; Komoroski, Richard A; Chu, Wen-Jang; Welge, Jeffrey A; Blom, Thomas J; DelBello, Melissa P; Strakowski, Stephen M
2017-06-01
Individualized treatment for bipolar disorder based on neuroimaging treatment targets remains elusive. To address this shortcoming, we developed a linguistic machine learning system based on a cascading genetic fuzzy tree (GFT) design called the LITHium Intelligent Agent (LITHIA). Using multiple objectively defined functional magnetic resonance imaging (fMRI) and proton magnetic resonance spectroscopy (1H-MRS) inputs, we tested whether LITHIA could accurately predict the lithium response in participants with first-episode bipolar mania. We identified 20 subjects with first-episode bipolar mania who received an adequate trial of lithium over 8 weeks and both fMRI and 1H-MRS scans at baseline pre-treatment. We trained LITHIA using 18 1H-MRS and 90 fMRI inputs over four training runs to classify treatment response and predict symptom reductions. Each training run contained a randomly selected 80% of the total sample and was followed by a 20% validation run. Over a different randomly selected distribution of the sample, we then compared LITHIA to eight common classification methods. LITHIA demonstrated nearly perfect classification accuracy and was able to predict post-treatment symptom reductions at 8 weeks with at least 88% accuracy in training and 80% accuracy in validation. Moreover, LITHIA exceeded the predictive capacity of the eight comparator methods and showed little tendency towards overfitting. The results provided proof-of-concept that a novel GFT is capable of providing control to a multidimensional bioinformatics problem, namely prediction of the lithium response, in a pilot data set. Future work on this and similar machine learning systems could help assign psychiatric treatments more efficiently, thereby optimizing outcomes and limiting unnecessary treatment.
HZETRN radiation transport validation using balloon-based experimental data
NASA Astrophysics Data System (ADS)
Warner, James E.; Norman, Ryan B.; Blattnig, Steve R.
2018-05-01
The deterministic radiation transport code HZETRN (High charge (Z) and Energy TRaNsport) was developed by NASA to study the effects of cosmic radiation on astronauts and instrumentation shielded by various materials. This work presents an analysis of computed differential flux from HZETRN compared with measurement data from three balloon-based experiments over a range of atmospheric depths, particle types, and energies. Model uncertainties were quantified using an interval-based validation metric that takes into account measurement uncertainty both in the flux and the energy at which it was measured. Average uncertainty metrics were computed for the entire dataset as well as subsets of the measurements (by experiment, particle type, energy, etc.) to reveal any specific trends of systematic over- or under-prediction by HZETRN. The distribution of individual model uncertainties was also investigated to study the range and dispersion of errors beyond just single scalar and interval metrics. The differential fluxes from HZETRN were generally well-correlated with balloon-based measurements; the median relative model difference across the entire dataset was determined to be 30%. The distribution of model uncertainties, however, revealed that the range of errors was relatively broad, with approximately 30% of the uncertainties exceeding ± 40%. The distribution also indicated that HZETRN systematically under-predicts the measurement dataset as a whole, with approximately 80% of the relative uncertainties having negative values. Instances of systematic bias for subsets of the data were also observed, including a significant underestimation of alpha particles and protons for energies below 2.5 GeV/u. Muons were found to be systematically over-predicted at atmospheric depths deeper than 50 g/cm2 but under-predicted for shallower depths. Furthermore, a systematic under-prediction of alpha particles and protons was observed below the geomagnetic cutoff, suggesting that improvements to the light ion production cross sections in HZETRN should be investigated.
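The distributional summaries reported above (median relative difference, fraction of uncertainties beyond ±40%, fraction of under-predictions) reduce to a few lines of Python; the flux arrays here are placeholders rather than the balloon dataset.

import numpy as np

def uncertainty_summary(model_flux, measured_flux):
    rel = (model_flux - measured_flux) / measured_flux   # relative model difference
    return {
        "median_abs_relative_difference": float(np.median(np.abs(rel))),
        "fraction_beyond_40pct": float(np.mean(np.abs(rel) > 0.40)),
        "fraction_underpredicted": float(np.mean(rel < 0)),  # negative => model low
    }

rng = np.random.default_rng(4)
measured = rng.lognormal(mean=0.0, sigma=1.0, size=300)
model = measured * np.exp(rng.normal(-0.2, 0.4, size=300))   # a biased-low "model"
print(uncertainty_summary(model, measured))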
NASA Technical Reports Server (NTRS)
Mckenzie, R. L.
1972-01-01
Predictions from a numerical model of the vibrational relaxation of anharmonic diatomic oscillators in supersonic expansions are used to show the extent to which the small anharmonicity of gases like CO can cause significant overpopulations of upper vibrational states. When mixtures of CO and N2 are considered, radiative gain on many of the vibration-rotation transitions of CO is predicted. Experiments are described that qualitatively verify the predictions by demonstrating laser oscillation in CO-N2 expansions. The resulting CO-N2 gasdynamic laser displays performance characteristics that equal or exceed those of similar CO2 lasers.
Rajabi, Mohamadreza; Mansourian, Ali; Bazmani, Ahad
2012-11-01
Visceral leishmaniasis (VL) is a vector-borne disease, highly influenced by environmental factors, and an increasing public health problem in Iran, especially in the north-western part of the country. A geographical information system was used to extract data and map environmental variables for all villages in the districts of Kalaybar and Ahar in the province of East Azerbaijan. An attempt to predict VL prevalence based on an analytical hierarchy process (AHP) module combined with ordered weighted averaging (OWA) with fuzzy quantifiers indicated that the south-eastern part of Ahar is particularly prone to high VL prevalence. With the main objective of locating the villages most at risk, the opinions of experts and specialists were generalised into a group decision-making process by means of fuzzy weighting methods and induced OWA. The prediction model was applied throughout the entire study area (even where the disease is prevalent and where data already exist). The predicted data were compared with registered VL incidence records in each area. The results suggest that linguistic fuzzy quantifiers, guided by an AHP-OWA model, are capable of predicting susceptible locations for VL prevalence with an accuracy exceeding 80%. The group decision-making process demonstrated that people in 15 villages live under particularly high risk of VL contagion, i.e. villages where the disease is highly prevalent. The findings of this study are relevant for the planning of effective control strategies for VL in northwest Iran.
Mor, Vincent; Intrator, Orna; Unruh, Mark Aaron; Cai, Shubing
2011-04-15
The Minimum Data Set (MDS) for nursing home resident assessment has been required in all U.S. nursing homes since 1990 and has been universally computerized since 1998. Initially intended to structure clinical care planning, uses of the MDS expanded to include policy applications such as case-mix reimbursement, quality monitoring and research. The purpose of this paper is to summarize a series of analyses examining the internal consistency and predictive validity of the MDS data as used in the "real world" in all U.S. nursing homes between 1999 and 2007. We used person level linked MDS and Medicare denominator and all institutional claim files including inpatient (hospital and skilled nursing facilities) for all Medicare fee-for-service beneficiaries entering U.S. nursing homes during the period 1999 to 2007. We calculated the sensitivity and positive predictive value (PPV) of diagnoses taken from Medicare hospital claims and from the MDS among all new admissions from hospitals to nursing homes and the internal consistency (alpha reliability) of pairs of items within the MDS that logically should be related. We also tested the internal consistency of commonly used MDS based multi-item scales and examined the predictive validity of an MDS based severity measure viz. one year survival. Finally, we examined the correspondence of the MDS discharge record to hospitalizations and deaths seen in Medicare claims, and the completeness of MDS assessments upon skilled nursing facility (SNF) admission. Each year there were some 800,000 new admissions directly from hospital to US nursing homes and some 900,000 uninterrupted SNF stays. Comparing Medicare enrollment records and claims with MDS records revealed reasonably good correspondence that improved over time (by 2006 only 3% of deaths had no MDS discharge record, only 5% of SNF stays had no MDS, but over 20% of MDS discharges indicating hospitalization had no associated Medicare claim). The PPV and sensitivity levels of Medicare hospital diagnoses and MDS based diagnoses were between .6 and .7 for major diagnoses like CHF, hypertension, diabetes. Internal consistency, as measured by PPV, of the MDS ADL items with other MDS items measuring impairments and symptoms exceeded .9. The Activities of Daily Living (ADL) long form summary scale achieved an alpha inter-consistency level exceeding .85 and multi-item scale alpha levels of .65 were achieved for well being and mood, and .55 for behavior, levels that were sustained even after stratification by ADL and cognition. The Changes in Health, End-stage disease and Symptoms and Signs (CHESS) index, a summary measure of frailty was highly predictive of one year survival. The MDS demonstrates a reasonable level of consistency both in terms of how well MDS diagnoses correspond to hospital discharge diagnoses and in terms of the internal consistency of functioning and behavioral items. The level of alpha reliability and validity demonstrated by the scales suggest that the data can be useful for research and policy analysis. However, while improving, the MDS discharge tracking record should still not be used to indicate Medicare hospitalizations or mortality. It will be important to monitor the performance of the MDS 3.0 with respect to consistency, reliability and validity now that it has replaced version 2.0, using these results as a baseline that should be exceeded.
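The agreement statistics used throughout, sensitivity and positive predictive value of an MDS diagnosis against the Medicare claim taken as reference, follow directly from a 2x2 cross-tabulation; the counts below are hypothetical.

def sensitivity_ppv(tp, fp, fn):
    # tp: flagged in both sources; fp: MDS only; fn: claims only.
    sensitivity = tp / (tp + fn)     # share of claims-positive cases the MDS catches
    ppv = tp / (tp + fp)             # share of MDS flags confirmed by the claim
    return sensitivity, ppv

# e.g. CHF flagged in both sources 600 times, MDS-only 250, claims-only 300
sens, ppv = sensitivity_ppv(tp=600, fp=250, fn=300)
print(f"sensitivity = {sens:.2f}, PPV = {ppv:.2f}")   # ~0.67 and ~0.71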
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-22
... a credit available to all members, regardless of their trading volumes, that exceeds the base credit... million, shares of liquidity during the month, which is a higher rate than the base rate of $0.0007 per... credit exceeds the base rate of $0.0007, the difference is not unfairly discriminatory because the credit...
Application of model abstraction techniques to simulate transport in soils
USDA-ARS?s Scientific Manuscript database
Successful understanding and modeling of contaminant transport in soils is the precondition of risk-informed predictions of the subsurface contaminant transport. Exceedingly complex models of subsurface contaminant transport are often inefficient. Model abstraction is the methodology for reducing th...
A new approach to monitoring and alerting congestion in airspace sectors
DOT National Transportation Integrated Search
2014-09-28
The Federal Aviation Administration (FAA) Traffic Flow Management System (TFMS) currently declares an alert for any 15 minute interval in which the predicted demand exceeds the Monitor/Alert Parameter (MAP) for any airport, sector, or fix. For airports...
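The alerting rule as described reduces to a threshold comparison per interval; demand counts and the MAP value below are hypothetical.

def map_alerts(predicted_demand, map_value):
    # Return indices of 15-minute intervals whose predicted demand exceeds the MAP.
    return [i for i, d in enumerate(predicted_demand) if d > map_value]

demand = [12, 15, 19, 22, 18, 14, 25, 21]    # aircraft per 15-minute interval
print(map_alerts(demand, map_value=18))      # -> [2, 3, 6, 7]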
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Tourassi, Georgia
2012-01-01
The majority of clinical content-based image retrieval (CBIR) studies disregard human perception subjectivity, aiming to duplicate the consensus expert assessment of visual similarity on example cases. The purpose of our study is twofold: (i) to discern better the extent of human perception subjectivity when assessing the visual similarity of two images with similar semantic content, and (ii) to explore the feasibility of personalized predictive modeling of visual similarity. We conducted a human observer study in which five observers of various expertise were shown ninety-nine triplets of mammographic masses with similar BI-RADS descriptors and were asked to select the two masses with the highest visual relevance. Pairwise agreement ranged between poor and fair among the five observers, as assessed by the kappa statistic. The observers' self-consistency rate was remarkably low, based on repeated questions where either the orientation or the presentation order of a mass was changed. Various machine learning algorithms were explored to determine whether they could predict each observer's personalized selection using textural features. Many algorithms performed with accuracy that exceeded each observer's self-consistency rate, as determined using a cross-validation scheme. This accuracy was statistically significantly higher than would be expected by chance alone (two-tailed p-value ranged between 0.001 and 0.01 for all five personalized models). The study confirmed that human perception subjectivity should be taken into account when developing CBIR-based medical applications.
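The pairwise agreement statistic named above, the kappa statistic, can be computed as follows; the label lists are hypothetical triplet choices, not the observer study's data.

from sklearn.metrics import cohen_kappa_score

observer_a = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1]   # which mass each observer picked
observer_b = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
kappa = cohen_kappa_score(observer_a, observer_b)
print(f"kappa = {kappa:.2f}")   # roughly 0.2-0.4 is "fair" by common benchmarks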
Trait-based diversification shifts reflect differential extinction among fossil taxa.
Wagner, Peter J; Estabrook, George F
2014-11-18
Evolution provides many cases of apparent shifts in diversification associated with particular anatomical traits. Three general models connect these patterns to anatomical evolution: (i) elevated net extinction of taxa bearing particular traits, (ii) elevated net speciation of taxa bearing particular traits, and (iii) elevated evolvability expanding the range of anatomies available to some species. Trait-based diversification shifts predict elevated hierarchical stratigraphic compatibility (i.e., primitive→derived→highly derived sequences) among pairs of anatomical characters. The three specific models further predict (i) early loss of diversity for taxa retaining primitive conditions (elevated net extinction), (ii) increased diversification among later members of a clade (elevated net speciation), and (iii) increased disparity among later members in a clade (elevated evolvability). Analyses of 319 anatomical and stratigraphic datasets for fossil species and genera show that hierarchical stratigraphic compatibility exceeds the expectations of trait-independent diversification in the vast majority of cases, which was expected if trait-dependent diversification shifts are common. Excess hierarchical stratigraphic compatibility correlates with early loss of diversity for groups retaining primitive conditions rather than delayed bursts of diversity or disparity across entire clades. Cambrian clades (predominantly trilobites) alone fit null expectations well. However, it is not clear whether evolution was unusual among Cambrian taxa or only early trilobites. At least among post-Cambrian taxa, these results implicate models, such as competition and extinction selectivity/resistance, as major drivers of trait-based diversification shifts at the species and genus levels while contradicting the predictions of elevated net speciation and elevated evolvability models.
NASA Astrophysics Data System (ADS)
Janneck, Robby; Vercesi, Federico; Heremans, Paul; Genoe, Jan; Rolin, Cedric
2016-09-01
Organic thin film transistors (OTFTs) based on single-crystalline thin films of organic semiconductors have seen considerable development in recent years. The most successful methods for the fabrication of single-crystalline films are solution-based meniscus-guided coating techniques such as dip-coating, solution shearing and zone casting. These upscalable methods enable rapid and efficient film formation without additional processing steps. The single-crystalline film quality depends strongly on solvent choice, substrate temperature and coating speed. So far, however, process optimization has been conducted by trial and error, involving, for example, the variation of coating speeds over several orders of magnitude. Through a systematic study of solvent phase-change dynamics in the meniscus region, we develop a theoretical framework that links the optimal coating speed to the solvent choice and the substrate temperature. In this way, we can accurately predict an optimal processing window, enabling fast process optimization. Our approach is verified through systematic OTFT fabrication based on films grown with different semiconductors, solvents and substrate temperatures. The use of the best predicted coating speeds delivers state-of-the-art devices. In the case of C8BTBT, OTFTs show well-behaved characteristics with mobilities up to 7 cm2/Vs and onset voltages close to 0 V. Our approach also explains optimal recipes published in the literature. This route considerably accelerates parameter screening for all meniscus-guided coating techniques and unveils the physics of single-crystalline film formation.
Lee, Eun Gyung; Harper, Martin; Bowen, Russell B; Slaven, James
2009-07-01
The current study evaluated the Control of Substances Hazardous to Health (COSHH) Essentials model for short-term task-based exposures and full-shift exposures using measured concentrations of three volatile organic chemicals at a small printing plant. A total of 188 exposure measurements of isopropanol and 187 measurements of acetone were collected, each lasting approximately 60 min. Historically collected time-weighted average concentrations (seven results) were evaluated for methylene chloride. The COSHH Essentials model recommended general ventilation control for both isopropanol and acetone. There was good agreement between the task-based exposure measurements and the COSHH Essentials predicted exposure range (PER) for cleaning and print preparation with isopropanol and for cleaning with acetone. For the other tasks and for full-shift exposures, agreement between the exposure measurements and the PER was either moderate or poor. However, for both isopropanol and acetone, our findings suggest that the COSHH Essentials model worked reasonably well, because the probabilities of short-term exposure measurements exceeding short-term occupational exposure limits (OELs), or of full-shift exposures exceeding the corresponding full-shift OELs, were <0.05 under the recommended control strategy. For methylene chloride, COSHH Essentials recommended containment control, but a follow-up study could not be performed because the substance had already been replaced with a less hazardous one (acetone). This was considered a more acceptable alternative to increasing the level of control.
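The exceedance check underlying statements like "probabilities ... were <0.05" can be sketched by assuming lognormally distributed exposures; the GM, GSD and OEL values below are placeholders, not the plant's measurements.

import numpy as np
from scipy import stats

def p_exceed_oel(gm, gsd, oel):
    # P(exposure > OEL) for a lognormal with geometric mean gm and geometric SD gsd.
    z = (np.log(oel) - np.log(gm)) / np.log(gsd)
    return 1.0 - stats.norm.cdf(z)

# e.g. a task with GM = 40 ppm, GSD = 2.0, against a 200 ppm OEL (all illustrative)
p = p_exceed_oel(gm=40.0, gsd=2.0, oel=200.0)
print(f"P(exceed OEL) = {p:.3f} -> {'acceptable' if p < 0.05 else 'too high'}")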
New 21 cm Power Spectrum Upper Limits From PAPER II: Constraints on IGM Properties at z = 7.7
NASA Astrophysics Data System (ADS)
Pober, Jonathan; Ali, Zaki; Parsons, Aaron; Paper Team
2015-01-01
Using a simulation-based framework, we interpret the power spectrum measurements from PAPER of Ali et al. in the context of IGM physics at z = 7.7. A cold IGM will result in strong 21 cm absorption relative to the CMB and leads to a 21 cm fluctuation power spectrum that can exceed 3000 mK^2. The new PAPER measurements allow us to rule out extreme cold IGM models, placing a lower limit on the physical temperature of the IGM. We also compare this limit with a calculation for the predicted heating from the currently observed galaxy population at z = 8.
NASA Astrophysics Data System (ADS)
von Korff Schmising, Clemens; Weder, David; Noll, Tino; Pfau, Bastian; Hennecke, Martin; Strüber, Christian; Radu, Ilie; Schneider, Michael; Staeck, Steffen; Günther, Christian M.; Lüning, Jan; Merhe, Alaa el dine; Buck, Jens; Hartmann, Gregor; Viefhaus, Jens; Treusch, Rolf; Eisebitt, Stefan
2017-05-01
A new device for polarization control at the free electron laser facility FLASH1 at DESY has been commissioned for user operation. The polarizer is based on phase retardation upon reflection off metallic mirrors. Its performance is characterized in three independent measurements and confirms the theoretical predictions of efficient and broadband generation of circularly polarized radiation in the extreme ultraviolet spectral range from 35 eV to 90 eV. The degree of circular polarization reaches up to 90% while maintaining high total transmission values exceeding 30%. The simple design of the device allows straightforward alignment for user operation and rapid switching between left and right circularly polarized radiation.
Overestimation of marsh vulnerability to sea level rise
Kirwan, Matthew L.; Temmerman, Stijn; Skeehan, Emily E.; Guntenspergen, Glenn R.; Fagherazzi, Sergio
2016-01-01
Coastal marshes are considered to be among the most valuable and vulnerable ecosystems on Earth, where the imminent loss of ecosystem services is a feared consequence of sea level rise. However, we show with a meta-analysis that global measurements of marsh elevation change indicate that marshes are generally building at rates similar to or exceeding historical sea level rise, and that process-based models predict survival under a wide range of future sea level scenarios. We argue that marsh vulnerability tends to be overstated because assessment methods often fail to consider biophysical feedback processes known to accelerate soil building with sea level rise, and the potential for marshes to migrate inland.
Dinov, Ivo D.; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W.; Price, Nathan D.; Van Horn, John D.; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M.; Dauer, William; Toga, Arthur W.
2016-01-01
Background A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data (large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources) all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Methods and Findings Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson's disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Conclusions Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson's disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%.
The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer’s, Huntington’s, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications. PMID:27494614
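As a rough illustration of the model-free workflow this abstract describes (cohort rebalancing, adaptive boosting, and n-fold cross-validated accuracy, sensitivity, and specificity), the following is a minimal sketch, not the authors' pipeline; the feature matrix X, binary labels y (0/1), and all hyperparameters are placeholders.

```python
# Minimal sketch: rebalance an imbalanced cohort by resampling, train
# adaptive boosting, and report n-fold cross-validated metrics.
# X: (n_subjects, n_features) numpy array; y: 0/1 diagnosis labels (assumed).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.utils import resample

def rebalance(X, y, seed=0):
    """Upsample the minority class until cohort sizes match."""
    minority = min(set(y), key=lambda c: np.sum(y == c))
    X_min, X_maj = X[y == minority], X[y != minority]
    X_up, y_up = resample(X_min, y[y == minority],
                          n_samples=len(X_maj), random_state=seed)
    return np.vstack([X_maj, X_up]), np.concatenate([y[y != minority], y_up])

def cv_report(X, y, n_folds=5):
    accs, sens, spec = [], [], []
    for train, test in StratifiedKFold(n_splits=n_folds).split(X, y):
        Xb, yb = rebalance(X[train], y[train])   # rebalance the training fold only
        clf = AdaBoostClassifier(n_estimators=200).fit(Xb, yb)
        pred = clf.predict(X[test])
        tp = np.sum((pred == 1) & (y[test] == 1))
        tn = np.sum((pred == 0) & (y[test] == 0))
        fp = np.sum((pred == 1) & (y[test] == 0))
        fn = np.sum((pred == 0) & (y[test] == 1))
        accs.append((tp + tn) / len(test))
        sens.append(tp / (tp + fn))
        spec.append(tn / (tn + fp))
    return np.mean(accs), np.mean(sens), np.mean(spec)
```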
Birch, Gavin F; Taylor, Stuart E
2002-06-01
Sediments in the Port Jackson estuary are polluted by a wide range of toxicants, and concentrations are among the highest reported for any major harbor in the world. Sediment quality guidelines (SQGs), developed by the National Oceanic and Atmospheric Administration (NOAA) in the United States, are used to estimate possible adverse biological effects of sedimentary contaminants in Port Jackson on benthic animals. The NOAA guidelines indicate that Pb, Zn, DDD, and DDE are the contaminants most likely to cause adverse biological effects in Port Jackson. On an individual chemical basis, the detrimental effects due to these toxicants may occur over extensive areas of the harbor, i.e., about 40%, 30%, 15% and 50%, respectively. The NOAA SQGs can also be used to estimate the probability of sediment toxicity for contaminant mixtures by determining the number of contaminants exceeding an upper guideline value (effects range-median, or ERM), which predicts probable adverse biological effects. The exceedance approach is used in the current study to estimate the probability of sediment toxicity and to prioritize the harbor in terms of possible adverse effects on sediment-dwelling animals. Approximately 1% of the harbor is mantled with sediment containing more than ten contaminants exceeding their respective ERM concentrations and, based on NOAA data, these sediments have an 80% probability of being toxic. Sediments with six to ten contaminants exceeding their respective ERM guidelines extend over approximately 4% of the harbor and have a 57% probability of toxicity. These areas are located in the landward reaches of embayments in the upper and central harbor, in proximity to the most industrialized and urbanized part of the catchment. Sediment in a further 17% of the harbor has between one and five exceedances and has a 32% probability of being toxic. The application of SQGs developed by NOAA has not been tested outside North America, and the validity of using them in Port Jackson has yet to be demonstrated. The screening approach adopted here is to use SQGs to identify contaminants of concern and to determine areas of environmental risk. The practical application and management implications of the results of this investigation are discussed.
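The exceedance-counting screen described above lends itself to a small sketch. The ERM guideline values below are illustrative examples only, and the probability classes are the ones quoted in the abstract; this is not the authors' code.

```python
# Toy ERM screen: count contaminants exceeding their ERM guideline and map
# the count to the probability-of-toxicity class reported in the abstract.
ERM = {"Pb": 218.0, "Zn": 410.0, "DDD": 0.02, "DDE": 0.027}  # illustrative values (ug/g dry wt)

def toxicity_class(concentrations, erm=ERM):
    """concentrations: dict of measured sediment values, same units as erm."""
    n_exceed = sum(1 for k, c in concentrations.items() if k in erm and c > erm[k])
    if n_exceed > 10:
        return n_exceed, 0.80   # >10 exceedances: ~80% probability of toxicity
    if n_exceed >= 6:
        return n_exceed, 0.57   # 6-10 exceedances
    if n_exceed >= 1:
        return n_exceed, 0.32   # 1-5 exceedances
    return n_exceed, None       # no exceedances: no class given in the abstract
```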
Life-history tactics: a review of the ideas.
Stearns, S C
1976-03-01
This review organizes ideas on the evolution of life histories. The key life-history traits are brood size, size of young, the age distribution of reproductive effort, the interaction of reproductive effort with adult mortality, and the variation in these traits among an individual's progeny. The general theoretical problem is to predict which combinations of traits will evolve in organisms living in specified circumstances. First consider single traits. Theorists have made the following predictions: (1) Where adult exceeds juvenile mortality, the organism should reproduce only once in its lifetime. Where juvenile exceeds adult mortality, the organism should reproduce several times. (2) Brood size should maximize the number of young surviving to maturity, summed over the lifetime of the parent. But when the optimum brood size varies unpredictably in time, smaller broods should be favored because they decrease the chances of total failure on a given attempt. (3) In expanding populations, selection should minimize age at maturity. In stable populations, when reproductive success depends on size, age, or social status, or when adult exceeds juvenile mortality, then maturation should be delayed, as it should be in declining populations. (4) Young should increase in size at birth with increased predation risk, and decrease in size with increased resource availability. Theorists have also predicted that only particular combinations of traits should occur in specified circumstances. (5) In growing populations, age at maturity should be minimized, reproductive effort concentrated early in life, and brood size increased. (6) One view holds that in stable environments, late maturity, small broods of a few large young, parental care, and small reproductive efforts should be favored (K-selection). In fluctuating environments, early maturity, many small young, reduced parental care, and large reproductive efforts should be favored (r-selection). (7) But another view holds that when juvenile mortality fluctuates more than adult mortality, the traits associated with stable and fluctuating environments should be reversed. We need experiments that test the assumptions and predictions reviewed here, more comprehensive theory that makes more readily falsifiable predictions, and examination of different definitions of fitness.
Parallel Performance Optimizations on Unstructured Mesh-based Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas
2015-01-01
This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
Estimating the exceedance probability of rain rate by logistic regression
NASA Technical Reports Server (NTRS)
Chiu, Long S.; Kedem, Benjamin
1990-01-01
Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
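A minimal sketch of this formulation follows, with synthetic data standing in for the radiometer covariates; the covariates, coefficients, and the 1 mm/h threshold are placeholders, not values from the study.

```python
# Sketch: model P(rain rate > threshold | covariates) with logistic
# regression, then average pixel-level exceedance probabilities to
# estimate the fractional rainy area. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # stand-in pixel covariates
rain_rate = np.exp(X @ [0.8, -0.5, 0.3] + rng.normal(scale=0.5, size=500))
y = (rain_rate > 1.0).astype(int)                # exceedance of a 1 mm/h threshold

model = LogisticRegression().fit(X, y)
p_exceed = model.predict_proba(X)[:, 1]          # conditional exceedance probability per pixel
frac_rainy = p_exceed.mean()                     # estimated fractional rainy area
```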
Lindblad, M; Börjesson, T; Hietaniemi, V; Elen, O
2012-01-01
The relationship between weather data and agronomical factors and deoxynivalenol (DON) levels in oats was examined with the aim of developing a predictive model. Data were collected from a total of 674 fields during periods of up to 10 years in Finland, Norway and Sweden, and included DON levels in the harvested oats crop, agronomical factors and weather data. The results show that there was a large regional variation in DON levels, with higher levels in one region in Norway compared with other regions in Norway, Finland and Sweden. In this region the median DON level was 1000 ng g⁻¹ and the regulatory limit for human consumption (1750 ng g⁻¹) was exceeded in 28% of the samples. In other regions the median DON levels ranged from 75 to 270 ng g⁻¹, and DON levels exceeded 1750 ng g⁻¹ in 3-8% of the samples. Including more variables than region in a multiple regression model only increased the adjusted coefficient of determination from 0.17 to 0.24, indicating that very little of the variation in DON levels could be explained by weather data or agronomical factors. Thus, it was not possible to predict DON levels based on the variables included in this study. Further studies are needed to solve this problem. Apparently the infection and/or growth of DON producing Fusarium species are promoted in certain regions. One possibility may be to study the species distribution of fungal communities and their changes during the oats cultivation period in more detail.
Influence of rice field agrochemicals on the ecological status of a tropical stream.
Rasmussen, Jes Jessen; Reiler, Emilie Marie; Carazo, Elizabeth; Matarrita, Jessie; Muñoz, Alejandro; Cedergreen, Nina
2016-01-15
Many tropical countries contain a high density of protected ecosystems, and these may often be bordered by intensive agricultural systems. We investigated the chemical and ecological status of a stream connecting an area with conventional rice production and a downstream protected nature reserve, Mata Redonda. Three sites were sampled: (1) an upstream control, (2) a site in the rice production area, and (3) a downstream site in Mata Redonda. We sampled benthic macroinvertebrates and pesticides in water and sediments along with supporting physical and chemical data. Pesticide concentrations in water exceeded current safety thresholds at sites 2 and 3, especially during the rainy season, and sediment-associated pesticide concentrations exceeded current safety thresholds in three of six samples. Importantly, the highest predicted pesticide toxicity in sediments was observed at site 3 in the Mata Redonda, confirming that the nature reserve received critical levels of pesticide pollution from upstream sections. The macroinvertebrate index currently used in Costa Rica (BMWP-CR) and an adjusted version of the SPecies At Risk index (SPEAR) were not significantly correlated to any measure of anthropogenic stress, but the Average Score Per Taxon (ASPT) index was significantly correlated with the predicted pesticide toxicity (sumTU for D. magna), oxygen concentrations and substrate composition. Our results suggest that pesticide pollution was likely involved in the impairment of the ecological status of the sampling sites, including site 3 in Mata Redonda. Based on our results, we give guidance for biomonitoring in Costa Rica and call for increased focus on pesticide transport from agricultural regions to protected areas.
NASA Technical Reports Server (NTRS)
Talay, Theodore A.; Poole, Lamont R.
1971-01-01
Analytical calculations have considered the effects of (1) varying parachute system mass, (2) suspension-line damping, and (3) alternate suspension-line force-elongation data on the canopy force history. Results indicate the canopy force on the LADT #3 parachute did not substantially exceed the recorded vehicle force reading and that the above factors can have significant effects on the canopy force history. Based on the results of this study, the following conclusions are drawn. Specifically: 1. At the LADT #3 failure time of 1.70 seconds, the canopy force ranged anywhere from 15.7% below to 2.4% above the vehicle force, depending upon the model and data used. Therefore, the canopy force did not substantially exceed the recorded vehicle force reading. 2. At a predicted full inflation time of 1.80 seconds, the canopy force would be greater than the vehicle force by from 1.1% to 10.6%, again depending upon the model and data used. Generally: 3. At low altitudes, enclosed and apparent air mass can significantly affect the calculated canopy force and should, therefore, not be neglected. 4. The canopy force calculations are sensitive to decelerator physical properties. In this case, changes in the damping and/or force-elongation characteristics produced significant changes in the canopy force histories. Accurate prediction of canopy force histories requires accurate inputs in these areas.
NASA Astrophysics Data System (ADS)
Attal, Mikaël; Lavé, Jérôme
2009-12-01
In actively eroding landscapes, fluvial abrasion modifies the characteristics of the sediment carried by rivers and consequently has a direct impact on the ability of mountain rivers to erode their bedrock and on the characteristics and volume of the sediment exported from upland catchments. In this experimental study, we use a novel flume replicating hydrodynamic conditions prevailing in mountain rivers to investigate the role played by different controlling variables on pebble abrasion during fluvial transport. Lithology controls abrasion rates and processes, with differences in abrasion rates exceeding two orders of magnitude. Attrition as well as breaking and splitting are efficient processes in reducing particle size. Mass loss by attrition increases with particle velocity but is weakly dependent on particle size. Fragment production is enhanced by the use of large particles, high impact velocities and the presence of joints. Based on our experimental results, we extrapolate a preliminary generic relationship between pebble attrition rate and transport stage (τ*/τ*c), where τ* = fluvial Shields stress and τ*c = critical Shields stress for incipient pebble motion. This relationship predicts that attrition rates are independent of transport stage for (τ*/τ*c) ≤ 3 and increase linearly with transport stage beyond this value. We evaluate the extent to which abrasion rates control downstream fining in several different natural settings. A simplified model predicts that the most resistant lithologies control bed load flux and fining ratio and that the concavity of transport-limited river profiles should rarely exceed 0.25 in the absence of deposition and sorting.
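The extrapolated attrition relationship reduces to a one-line piecewise function; the base rate and slope below are hypothetical lithology-dependent constants, not values from the study.

```python
# Sketch of the stated piecewise relationship: attrition rate is constant
# for transport stages (tau*/tau*_c) <= 3 and rises linearly beyond.
def attrition_rate(transport_stage, k0=1e-3, m=5e-4):
    """Mass loss rate as a function of transport stage tau*/tau*_c.

    k0 and m are placeholder lithology-dependent constants."""
    if transport_stage <= 3.0:
        return k0
    return k0 + m * (transport_stage - 3.0)
```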
Artificial neural networks as a useful tool to predict the risk level of Betula pollen in the air
NASA Astrophysics Data System (ADS)
Castellano-Méndez, M.; Aira, M. J.; Iglesias, I.; Jato, V.; González-Manteiga, W.
2005-05-01
An increasing percentage of the European population suffers from allergies to pollen. The study of the evolution of air pollen concentration supplies prior knowledge of the levels of pollen in the air, which can be useful for the prevention and treatment of allergic symptoms, and the management of medical resources. The symptoms of Betula pollinosis can be associated with certain levels of pollen in the air. The aim of this study was to predict the risk of the concentration of pollen exceeding a given level, using previous pollen and meteorological information, by applying neural network techniques. Neural networks are a widespread statistical tool useful for the study of problems associated with complex or poorly understood phenomena. The binary response variable associated with each level requires a careful selection of the neural network and the error function associated with the learning algorithm used during the training phase. The performance of the neural network with the validation set showed that the risk of the pollen level exceeding a certain threshold can be successfully forecasted using artificial neural networks. This prediction tool may be implemented to create an automatic system that forecasts the risk of suffering allergic symptoms.
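A hedged sketch of this kind of risk forecaster follows, using lagged pollen and weather predictors with a log-loss (cross-entropy) objective appropriate to the binary response; the feature construction and the 30 grains/m3 threshold are assumptions, not the authors' configuration.

```python
# Sketch: forecast the risk that tomorrow's Betula pollen concentration
# exceeds a threshold, from lagged pollen and meteorological predictors.
import numpy as np
from sklearn.neural_network import MLPClassifier

def make_features(pollen, temperature, rainfall, lag=3):
    """Stack the previous `lag` days of pollen and weather as predictors."""
    X, y = [], []
    for t in range(lag, len(pollen)):
        X.append(np.r_[pollen[t-lag:t], temperature[t-lag:t], rainfall[t-lag:t]])
        y.append(pollen[t] > 30.0)      # hypothetical risk threshold (grains/m3)
    return np.array(X), np.array(y, dtype=int)

# MLPClassifier minimizes log-loss, matching the binary response above.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=1)
# net.fit(X_train, y_train); net.predict_proba(X_new)[:, 1] gives the risk forecast
```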
Dynamic energy-balance model predicting gestational weight gain
USDA-ARS?s Scientific Manuscript database
Gestational weight gains (GWGs) that exceed the 2009 Institute of Medicine recommended ranges increase risk of long-term postpartum weight retention; conversely, GWGs within the recommended ranges are more likely to result in positive maternal and fetal outcomes. Despite this evidence, recent epide...
NASA Astrophysics Data System (ADS)
Stigter, T. Y.; Ribeiro, L.; Dill, A. M. M. Carvalho
2008-07-01
Factorial regression models, based on correspondence analysis, are built to explain the high nitrate concentrations in groundwater beneath an agricultural area in the south of Portugal, exceeding 300 mg/l, as a function of chemical variables, electrical conductivity (EC), land use and hydrogeological setting. Two important advantages of the proposed methodology are that qualitative parameters can be involved in the regression analysis and that multicollinearity is avoided. Regression is performed on eigenvectors extracted from the data similarity matrix, the first of which clearly reveals the impact of agricultural practices and hydrogeological setting on the groundwater chemistry of the study area. Significant correlation exists between the response variable NO3- and the explanatory variables Ca2+, Cl-, SO42-, depth to water, aquifer media and land use. Substituting Cl- with EC results in the most accurate regression model for nitrate, when disregarding the four largest outliers (model A). When built solely on land use and hydrogeological setting, the regression model (model B) is less accurate but more interesting from a practical viewpoint, as it is based on easily obtainable data and can be used to predict nitrate concentrations in groundwater in other areas with similar conditions. This is particularly useful for conservative contaminants, where risk and vulnerability assessment methods, based on assumed rather than established correlations, generally produce erroneous results. Another purpose of the models can be to predict the future evolution of nitrate concentrations under the influence of changes in land use or fertilization practices, which occur in compliance with policies such as the Nitrates Directive. Model B predicts a 40% decrease in nitrate concentrations in groundwater of the study area when horticulture is replaced by other land use with much lower fertilization and irrigation rates.
Aquatic concentrations of chemical analytes compared to ecotoxicity estimates
Kostich, Mitchell S.; Flick, Robert W.; Angela L. Batt,; Mash, Heath E.; Boone, J. Scott; Furlong, Edward T.; Kolpin, Dana W.; Glassmeyer, Susan T.
2017-01-01
We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of 1/10th of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting more detailed characterization of these analytes.
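The screening comparison described above can be sketched as a simple flagging rule: mark analytes whose measured concentration exceeds the effect concentration (EC) estimate, or one-tenth of it; the function and labels are illustrative only.

```python
# Screening sketch: compare measured concentrations to effect concentration
# (EC) estimates and flag exceedances of EC and of EC/10. Inputs are
# placeholder dicts in matching units (e.g., ug/L).
def screen(measured, ec):
    flags = {}
    for analyte, conc in measured.items():
        if analyte not in ec:
            flags[analyte] = "no EC estimate"        # sparse-effect-data case
        elif conc > ec[analyte]:
            flags[analyte] = "exceeds EC"
        elif conc > 0.1 * ec[analyte]:
            flags[analyte] = "exceeds EC/10"         # candidate for closer study
        else:
            flags[analyte] = "below screening levels"
    return flags
```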
NASA Technical Reports Server (NTRS)
Litchford, R. J.
2005-01-01
A computational method for the analysis of longitudinal-mode liquid rocket combustion instability has been developed based on the unsteady, quasi-one-dimensional Euler equations where the combustion process source terms were introduced through the incorporation of a two-zone, linearized representation: (1) A two-parameter collapsed combustion zone at the injector face, and (2) a two-parameter distributed combustion zone based on a Lagrangian treatment of the propellant spray. The unsteady Euler equations in inhomogeneous form retain full hyperbolicity and are integrated implicitly in time using second-order, high-resolution, characteristic-based, flux-differencing spatial discretization with Roe-averaging of the Jacobian matrix. This method was initially validated against an analytical solution for nonreacting, isentropic duct acoustics with specified admittances at the inflow and outflow boundaries. For small amplitude perturbations, numerical predictions for the amplification coefficient and oscillation period were found to compare favorably with predictions from linearized small-disturbance theory as long as the grid exceeded a critical density (100 nodes/wavelength). The numerical methodology was then exercised on a generic combustor configuration using both collapsed and distributed combustion zone models with a short nozzle admittance approximation for the outflow boundary. In these cases, the response parameters were varied to determine stability limits defining resonant coupling onset.
Highly selective condensation of biomass-derived methyl ketones as a source of aviation fuel.
Sacia, Eric R; Balakrishnan, Madhesan; Deaner, Matthew H; Goulas, Konstantinos A; Toste, F Dean; Bell, Alexis T
2015-05-22
Aviation fuel (i.e., jet fuel) requires a mixture of C9-C16 hydrocarbons having both a high energy density and a low freezing point. While jet fuel is currently produced from petroleum, increasing concern with the release of CO2 into the atmosphere from the combustion of petroleum-based fuels has led to policy changes mandating the inclusion of biomass-based fuels into the fuel pool. Here we report a novel way to produce a mixture of branched cyclohexane derivatives in very high yield (>94%) that match or exceed many required properties of jet fuel. As starting materials, we use a mixture of n-alkyl methyl ketones and their derivatives obtained from biomass. These synthons are condensed into trimers via base-catalyzed aldol condensation and Michael addition. Hydrodeoxygenation of these products yields mixtures of C12-C21 branched, cyclic alkanes. Using models for predicting the carbon number distribution obtained from a mixture of n-alkyl methyl ketones and for predicting the boiling point distribution of the final mixture of cyclic alkanes, we show that it is possible to define the mixture of synthons that will closely reproduce the distillation curve of traditional jet fuel.
Unified Deep Learning Architecture for Modeling Biology Sequence.
Wu, Hongjie; Cao, Chengyuan; Xia, Xiaoyan; Lu, Qiang
2017-10-09
Prediction of the spatial structure or function of biological macromolecules based on their sequence remains an important challenge in bioinformatics. When modeling biological sequences using traditional sequencing models, characteristics, such as long-range interactions between basic units, the complicated and variable output of labeled structures, and the variable length of biological sequences, usually lead to different solutions on a case-by-case basis. This study proposed the use of bidirectional recurrent neural networks based on long short-term memory or a gated recurrent unit to capture long-range interactions by designing the optional reshape operator to adapt to the diversity of the output labels and implementing a training algorithm to support the training of sequence models capable of processing variable-length sequences. Additionally, the merge and pooling operators enhanced the ability to capture short-range interactions between basic units of biological sequences. The proposed deep-learning model and its training algorithm might be capable of solving currently known biological sequence-modeling problems through the use of a unified framework. We validated our model on one of the most difficult biological sequence-modeling problems currently known, with our results indicating the ability of the model to obtain predictions of protein residue interactions that exceeded the accuracy of current popular approaches by 10% based on multiple benchmarks.
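A minimal PyTorch sketch of the architecture family described above follows: a bidirectional LSTM over variable-length sequences, with packing to handle ragged batches and a per-position output head. Dimensions, the symbol alphabet, and the label space are placeholders, not the authors' configuration.

```python
# Sketch: bidirectional LSTM tagger for variable-length biological sequences.
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class BiLSTMTagger(nn.Module):
    def __init__(self, n_symbols=20, embed=32, hidden=64, n_labels=3):
        super().__init__()
        self.embed = nn.Embedding(n_symbols, embed)
        self.rnn = nn.LSTM(embed, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_labels)   # concat of both directions

    def forward(self, seqs, lengths):
        x = self.embed(seqs)                          # (batch, max_len, embed)
        packed = pack_padded_sequence(x, lengths, batch_first=True,
                                      enforce_sorted=False)
        out, _ = self.rnn(packed)                     # long-range context both ways
        out, _ = pad_packed_sequence(out, batch_first=True)
        return self.head(out)                         # per-position label logits
```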
12 CFR 327.52 - Annual dividend determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... the DIF reserve ratio as of December 31st of 2008 or any later year equals or exceeds 1.35 percent... dividend based upon the reserve ratio of the DIF as of December 31st of the preceding year, and the amount... ratio of the DIF equals or exceeds 1.35 percent of estimated insured deposits and does not exceed 1.50...
NASA Astrophysics Data System (ADS)
Stepanek, Adam J.
The prospect for skillful long-term predictions of atmospheric conditions known to directly contribute to the onset and maintenance of severe convective storms remains unclear. A thorough assessment of the capability of a global climate model such as the Climate Forecast System Version 2 (CFSv2) to skillfully represent parameters related to severe weather has the potential to significantly improve medium- to long-range outlooks vital to risk managers. Environmental convective available potential energy (CAPE) and deep-layer vertical wind shear (DLS) can be used to distinguish an atmosphere conducive to severe storms from one supportive of primarily non-severe 'ordinary' convection. As such, this research concentrates on the predictability of CAPE, DLS, and a product of the two parameters (CAPEDLS) by the CFSv2, with a specific focus on the subseasonal timescale. Individual month-long verification periods from the Climate Forecast System reanalysis (CFSR) dataset are measured against a climatological standard using cumulative distribution function (CDF) and area-under-the-CDF (AUCDF) techniques designed to mitigate inherent model biases while concurrently assessing the entire distribution of a given parameter in lieu of a threshold-based approach. Similar methods imposed upon the CFS reforecast (CFSRef) and operational CFSv2 allow for comparisons elucidating both spatial and temporal trends in skill using correlation coefficients, proportion-correct metrics, Heidke skill score (HSS), and root-mean-square-error (RMSE) statistics. Key results show the CFSv2-based output often demonstrates skill beyond a climatologically-based threshold when the forecast is notably anomalous from the 29-year (1982-2010) mean CFSRef prediction (exceeding one standard deviation at grid point level). CFSRef analysis indicates enhanced skill during the months of April and June (relative to May) and for predictions of DLS. Furthermore, years exhibiting skill in terms of RMSE are shown to possess certain correlations with El Niño-Southern Oscillation conditions from the preceding winter and concurrent Madden-Julian Oscillation activity. Applying results gleaned from the CFSRef analysis to the operational CFSv2 (2011-16) indicates predictive skill can be increased by isolating forecasts meeting multiple parameter-based relationships.
Post audit of a numerical prediction of wellfield drawdown in a semiconfined aquifer system
Stewart, M.; Langevin, C.
1999-01-01
A numerical ground water flow model was created in 1978 and revised in 1981 to predict the drawdown effects of a proposed municipal wellfield permitted to withdraw 30 million gallons per day (mgd; 1.1 × 10^5 m3/day) of water from the semiconfined Floridan Aquifer system. The predictions are based on the assumption that water levels in the semiconfined Floridan Aquifer reach a long-term, steady-state condition within a few days of initiation of pumping. Using this assumption, a 75 day simulation without water table recharge, pumping at the maximum permitted rates, was considered to represent a worst-case condition and the greatest drawdowns that could be experienced during wellfield operation. This method of predicting wellfield effects was accepted by the permitting agency. For this post audit, observed drawdowns were derived by taking the difference between pre-pumping and post-pumping potentiometric surface levels. Comparison of predicted and observed drawdowns suggests that actual drawdown over a 12 year period exceeds predicted drawdown by a factor of two or more. Analysis of the source of error in the 1981 predictions suggests that the values used for transmissivity, storativity, specific yield, and leakance are reasonable at the wellfield scale. Simulation using actual 1980-1992 pumping rates improves the agreement between predicted and observed drawdowns. The principal source of error is the assumption that water levels in a semiconfined aquifer achieve a steady-state condition after a few days or weeks of pumping. Simulations using a version of the 1981 model modified to include recharge and evapotranspiration suggest that it can take hundreds of days or several years for water levels in the linked Surficial and Floridan Aquifers to reach an apparent steady-state condition, and that slow declines in levels continue for years after the initiation of pumping. While the 1981 'impact' model can be used for reasonably predicting short-term, wellfield-scale effects of pumping, using a 75 day long simulation without recharge to predict the long-term behavior of the wellfield was an inappropriate application, resulting in significant underprediction of wellfield effects.
Impact of direct substitution of arm span length for current standing height in elderly COPD.
Pothirat, Chaicharn; Chaiwong, Warawut; Phetsuk, Nittaya
2015-01-01
Arm span length is related to standing height and has been studied as a substitute for current standing height for predicting lung function parameters. However, it has never been studied in elderly COPD patients. To evaluate the accuracy of substituting arm span length for current standing height in the evaluation of pulmonary function parameters and severity classification in elderly Thai COPD patients. Current standing height and arm span length were measured in COPD patients aged >60 years. Postbronchodilator spirometric parameters, forced vital capacity (FVC), forced expiratory volume in first second (FEV1), and ratio of FEV1/FVC (FEV1%), were used to classify disease severity according to Global Initiative for Chronic Obstructive Lung Disease criteria. Predicted values for each parameter were also calculated separately utilizing current standing height or arm span length measurements. Student's t-tests and chi-squared tests were used to compare differences between the groups. Statistical significance was set at P<0.05. A total of 106 COPD patients with a mean age of 72.1±7.8 years, mean body mass index of 20.6±3.8 kg/m2, and mean standing height of 156.4±8.3 cm were enrolled. The mean arm span length exceeded mean standing height by 7.7±4.6 cm (164.0±9.0 vs 156.4±8.3 cm, P<0.001), at a ratio of 1.05±0.03. Percentages of both predicted FVC and FEV1 values based on arm span length were significantly lower than those using current standing height (76.6±25.4 vs 61.6±16.8, P<0.001 and 50.8±25.4 vs 41.1±15.3, P<0.001). Disease severity increased in 39.6% (42/106) of subjects using arm span length over current standing height for predicted lung function. Direct substitution of arm span length for current standing height in elderly Thai COPD patients should not be recommended in cases where arm span length exceeds standing height by more than 4 cm.
NASA Astrophysics Data System (ADS)
Jiang, X.; Lu, W. X.; Yang, Q. C.; Yang, Z. P.
2014-03-01
The aim of the present study is to evaluate the potential ecological risk and predict the trend of soil heavy metal pollution around a coal gangue dump in Jilin Province (Northeast China). The concentrations of Cd, Pb, Cu, Cr and Zn were monitored by inductively coupled plasma mass spectrometry (ICP-MS). The potential ecological risk index method developed by Hakanson (1980) was employed to assess the potential risk of heavy metal pollution. The potential ecological risk indices followed the order E(Cd) > E(Pb) > E(Cu) > E(Cr) > E(Zn), showing that Cd was the most important risk factor. Based on the Cd pollution history, the cumulative acceleration and cumulative rate of Cd were estimated, and a prediction model for the number of years until the standard is exceeded was established, which was used to predict the pollution trend of Cd under both an accelerated accumulation mode and a uniform mode. Pearson correlation analysis and correspondence analysis were employed to identify the sources of the heavy metals and the relationships between sampling points and variables. These findings provide useful insights for making appropriate management strategies to prevent and decrease heavy metal pollution around the coal gangue dump in the Yangcaogou coal mine and other similar areas elsewhere.
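The Hakanson index calculation referenced above can be sketched as follows; the toxic-response factors are the commonly used Hakanson values, while the measured and reference concentrations are assumed inputs.

```python
# Sketch of Hakanson's potential ecological risk index:
# E_r = T_r * (C_measured / C_reference), summed into an aggregate RI.
TOXIC_RESPONSE = {"Cd": 30, "Pb": 5, "Cu": 5, "Cr": 2, "Zn": 1}  # commonly cited T_r values

def risk_indices(measured, reference):
    """Return E_r per metal and the aggregate risk index RI = sum of E_r."""
    E = {m: TOXIC_RESPONSE[m] * measured[m] / reference[m]
         for m in TOXIC_RESPONSE if m in measured and m in reference}
    return E, sum(E.values())
```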
Karasov, W.H.; Kenow, K.P.; Meyer, M.W.; Fournier, F.
2007-01-01
A bioenergetics model was used to predict food intake of common loon (Gavia immer) chicks as a function of body mass during development, and a pharmacokinetics model, based on first-order kinetics in a single compartment, was used to predict blood Hg level as a function of food intake rate, food Hg content, body mass, and Hg absorption and elimination. Predictions were tested in captive growing chicks fed trout (Salmo gairdneri) with average MeHg concentrations of 0.02 (control), 0.4, and 1.2 µg/g wet mass (delivered as CH3HgCl). Predicted food intake matched observed intake through 50 d of age but then exceeded observed intake by an amount that grew progressively larger with age, reaching a significant overestimate of 28% by the end of the trial. Respiration in older, nongrowing birds probably was overestimated by using rates measured in younger, growing birds. Close agreement was found between simulations and measured blood Hg, which varied significantly with dietary Hg and age. Although chicks may hatch with different blood Hg levels, their blood level is determined mainly by dietary Hg level beyond approximately two weeks of age. The model also may be useful for predicting Hg levels in adults and in the eggs that they lay, but its accuracy in both chicks and adults needs to be tested in free-living birds.
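A minimal sketch of a first-order, single-compartment model of this kind follows; the absorbed fraction, elimination rate, and blood:body partition coefficient are placeholders, not the calibrated values from the study.

```python
# Sketch: daily blood Hg from first-order single-compartment kinetics.
# Uptake from food minus first-order elimination, with growth dilution
# entering through the increasing body mass.
def blood_hg_series(food_intake_g_d, diet_hg_ug_g, body_mass_g,
                    absorb=0.8, k_elim=0.01, partition=1.5):
    """Return daily blood Hg (ug/g wet mass); all rate constants are placeholders."""
    burden, series = 0.0, []
    for intake, mass in zip(food_intake_g_d, body_mass_g):
        burden += absorb * intake * diet_hg_ug_g   # absorbed uptake from food (ug/day)
        burden -= k_elim * burden                  # first-order elimination
        series.append(partition * burden / mass)   # growth dilution via body mass
    return series
```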
Modeling integrated photovoltaic–electrochemical devices using steady-state equivalent circuits
Winkler, Mark T.; Cox, Casandra R.; Nocera, Daniel G.; Buonassisi, Tonio
2013-01-01
We describe a framework for efficiently coupling the power output of a series-connected string of single-band-gap solar cells to an electrochemical process that produces storable fuels. We identify the fundamental efficiency limitations that arise from using solar cells with a single band gap, an arrangement that describes the use of currently economic solar cell technologies such as Si or CdTe. Steady-state equivalent circuit analysis permits modeling of practical systems. For the water-splitting reaction, modeling defines parameters that enable a solar-to-fuels efficiency exceeding 18% using laboratory GaAs cells and 16% using all earth-abundant components, including commercial Si solar cells and Co- or Ni-based oxygen evolving catalysts. Circuit analysis also provides a predictive tool: given the performance of the separate photovoltaic and electrochemical systems, the behavior of the coupled photovoltaic–electrochemical system can be anticipated. This predictive utility is demonstrated in the case of water oxidation at the surface of a Si solar cell, using a Co–borate catalyst.
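A sketch of the equivalent-circuit coupling idea follows: find the operating point where the current supplied by a series PV string equals the current drawn by the electrolysis load. All device parameters below are illustrative, not the paper's fitted values.

```python
# Sketch: steady-state operating point of a coupled PV-electrochemical system.
import math

def pv_current(v, n_cells=4, i_ph=0.030, i_0=1e-12, v_t=0.0257):
    """Ideal-diode model for n series cells: I = Iph - I0*(exp(Vcell/Vt) - 1)."""
    v_cell = v / n_cells
    return i_ph - i_0 * (math.exp(v_cell / v_t) - 1.0)

def electrolysis_current(v, v_rev=1.23, tafel_a=0.05, i_ex=1e-6, r_s=5.0):
    """Load curve V = Vrev + a*ln(I/Iex) + I*Rs, inverted for I by bisection."""
    lo, hi = 1e-12, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        v_need = v_rev + tafel_a * math.log(mid / i_ex) + mid * r_s
        lo, hi = (mid, hi) if v_need < v else (lo, mid)
    return 0.5 * (lo + hi)

def operating_point():
    """Scan voltage until the PV supply curve crosses the electrolysis load curve."""
    v = 1.3
    while v < 4.0 and pv_current(v) > electrolysis_current(v):
        v += 0.001
    return v, pv_current(v)   # coupled operating voltage and fuel-forming current
```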
Frieling, Joost; Gebhardt, Holger; Huber, Matthew; Adekeye, Olabisi A; Akande, Samuel O; Reichart, Gert-Jan; Middelburg, Jack J; Schouten, Stefan; Sluijs, Appy
2017-03-01
Global ocean temperatures rapidly warmed by ~5°C during the Paleocene-Eocene Thermal Maximum (PETM; ~56 million years ago). Extratropical sea surface temperatures (SSTs) met or exceeded modern subtropical values. With these warm extratropical temperatures, climate models predict tropical SSTs >35°C, near upper physiological temperature limits for many organisms. However, few data are available to test these projected extreme tropical temperatures or their potential lethality. We identify the PETM in a shallow marine sedimentary section deposited in Nigeria. On the basis of planktonic foraminiferal Mg/Ca and oxygen isotope ratios and the molecular proxy TEX86, latest Paleocene equatorial SSTs were ~33°C, and TEX86 indicates that SSTs rose to >36°C during the PETM. This confirms model predictions on the magnitude of polar amplification and refutes the tropical thermostat theory. We attribute a massive drop in dinoflagellate abundance and diversity at peak warmth to thermal stress, showing that the base of tropical food webs is vulnerable to rapid warming.
Prediction of Burst Pressure in Multistage Tube Hydroforming of Aerospace Alloys.
Saboori, M; Gholipour, J; Champliaud, H; Wanjara, P; Gakwaya, A; Savoie, J
2016-08-01
Bursting, an irreversible failure in tube hydroforming (THF), results mainly from the local plastic instabilities that occur when the biaxial stresses imparted during the process exceed the forming limit strains of the material. To predict the burst pressure, Oyane's and Brozzo's decoupled ductile fracture criteria (DFC) were implemented as user material models in a dynamic nonlinear commercial 3D finite-element (FE) software, LS-DYNA. THF of a round to V-shape was selected as a generic representative of an aerospace component for the FE simulations and experimental trials. To validate the simulation results, THF experiments up to bursting were carried out using Inconel 718 (IN 718) tubes with a thickness of 0.9 mm to measure the internal pressures during the process. When comparing the experimental and simulation results, the burst pressure predicted based on Oyane's decoupled damage criterion was found to agree better with the measured data for IN 718 than Brozzo's fracture criterion.
Modal Survey of ETM-3, A 5-Segment Derivative of the Space Shuttle Solid Rocket Booster
NASA Technical Reports Server (NTRS)
Nielsen, D.; Townsend, J.; Kappus, K.; Driskill, T.; Torres, I.; Parks, R.
2005-01-01
The complex interactions between internal motor generated pressure oscillations and motor structural vibration modes associated with the static test configuration of a Reusable Solid Rocket Motor have potential to generate significant dynamic thrust loads in the 5-segment configuration (Engineering Test Motor 3). Finite element model load predictions for worst-case conditions were generated based on extrapolation of a previously correlated 4-segment motor model. A modal survey was performed on the largest rocket motor to date, Engineering Test Motor #3 (ETM-3), to provide data for finite element model correlation and validation of model generated design loads. The modal survey preparation included pretest analyses to determine an efficient analysis set selection using the Effective Independence Method and test simulations to assure critical test stand component loads did not exceed design limits. Historical Reusable Solid Rocket Motor modal testing, ETM-3 test analysis model development and pre-test loads analyses, as well as test execution, and a comparison of results to pre-test predictions are discussed.
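The pretest analysis above mentions the Effective Independence Method for choosing an efficient analysis (sensor) set; the following is a compact sketch of the standard iterative algorithm, with the target mode shapes as an assumed input rather than ETM-3 data.

```python
# Effective Independence (EfI) sketch: iteratively discard the candidate DOF
# contributing least to the linear independence of the target mode shapes,
# measured by the diagonal of E = Phi * pinv(Phi^T Phi) * Phi^T.
import numpy as np

def effective_independence(phi, n_sensors):
    """phi: (n_dofs, n_modes) target mode shapes; returns retained DOF indices."""
    idx = np.arange(phi.shape[0])
    while len(idx) > n_sensors:
        p = phi[idx]
        e_d = np.einsum("ij,jk,ik->i", p, np.linalg.pinv(p.T @ p), p)
        idx = np.delete(idx, np.argmin(e_d))   # drop the least-contributing DOF
    return idx
```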
NASA Astrophysics Data System (ADS)
Robinson, C.; Barry, D. A.
2008-12-01
Enhanced anaerobic dechlorination is a promising technology for in situ remediation of chlorinated ethene DNAPL source areas. However, the build-up of organic acids and HCl in the source zone can lead to significant groundwater acidification. The resulting pH drop inhibits the activity of the dechlorinating microorganisms and thus may stall the remediation process. Source zone remediation requires extensive dechlorination, such that it may be common for soil's natural buffering capacity to be exceeded, and for acidic conditions to develop. In these cases bicarbonate addition (e.g., NaHCO3, KHCO3) is required for pH control. As a design tool for treatment strategies, we have developed BUCHLORAC, a Windows Graphical User Interface based on an abiotic geochemical model that allows the user to predict the acidity generated during dechlorination and associated buffer requirements for their specific operating conditions. BUCHLORAC was motivated by the SABRE (Source Area BioREmediation) project, which aims to evaluate the effectiveness of enhanced reductive dechlorination in the treatment of chlorinated solvent source zones.
Bernstein, Diana N.; Neelin, J. David
2016-04-28
A branch-run perturbed-physics ensemble in the Community Earth System Model estimates impacts of parameters in the deep convection scheme on current hydroclimate and on end-of-century precipitation change projections under global warming. Regional precipitation change patterns prove highly sensitive to these parameters, especially in the tropics, with local changes exceeding 3 mm/d, comparable to the magnitude of the predicted change and to differences in global warming predictions among the Coupled Model Intercomparison Project phase 5 models. This sensitivity is distributed nonlinearly across the feasible parameter range, notably in the low-entrainment range of the parameter for turbulent entrainment in the deep convection scheme. This suggests that a useful target for parameter sensitivity studies is to identify such disproportionately sensitive dangerous ranges. Here, the low-entrainment range is used to illustrate the reduction in global warming regional precipitation sensitivity that could occur if this dangerous range can be excluded based on evidence from current climate.
The shaping of genetic variation in edge-of-range populations under past and future climate change
Razgour, Orly; Juste, Javier; Ibáñez, Carlos; Kiefer, Andreas; Rebelo, Hugo; Puechmaille, Sébastien J; Arlettaz, Raphael; Burke, Terry; Dawson, Deborah A; Beaumont, Mark; Jones, Gareth; Wiens, John
2013-01-01
With rates of climate change exceeding the rate at which many species are able to shift their range or adapt, it is important to understand how future changes are likely to affect biodiversity at all levels of organisation. Understanding past responses and extent of niche conservatism in climatic tolerance can help predict future consequences. We use an integrated approach to determine the genetic consequences of past and future climate changes on a bat species, Plecotus austriacus. Glacial refugia predicted by palaeo-modelling match those identified from analyses of extant genetic diversity and model-based inference of demographic history. Former refugial populations currently contain disproportionately high genetic diversity, but niche conservatism, shifts in suitable areas and barriers to migration mean that these hotspots of genetic diversity are under threat from future climate change. Evidence of population decline despite recent northward migration highlights the need to conserve leading-edge populations for spearheading future range shifts. PMID:23890483
An operational real-time flood forecasting system in Southern Italy
NASA Astrophysics Data System (ADS)
Ortiz, Enrique; Coccia, Gabriele; Todini, Ezio
2015-04-01
A real-time flood forecasting system has been operating since 2012 as a non-structural measure for mitigating flood risk in the Campania Region (Southern Italy), within the Sele river basin (3,240 km2). The Sele Flood Forecasting System (SFFS) has been built within the FEWS (Flood Early Warning System) platform developed by Deltares, and it assimilates the numerical weather predictions of the COSMO LAM family: the deterministic COSMO-LAMI I2, the deterministic COSMO-LAMI I7 and the ensemble numerical weather predictions of COSMO-LEPS (16 members). The Sele FFS is composed of a cascade of three main models. The first is a fully continuous, physically based distributed hydrological model, named TOPKAPI-eXtended (Idrologia&Ambiente s.r.l., Naples, Italy), simulating the dominant processes controlling soil water dynamics, runoff generation and discharge with a spatial resolution of 250 m. The second is a set of artificial neural networks (ANNs) built for forecasting the river stages at a set of monitored cross-sections. The third is a Model Conditional Processor (MCP), which provides the predictive uncertainty (i.e., the probability of occurrence of a future flood event) within the framework of a multi-temporal forecast, according to the most recent advancements on this topic (Coccia and Todini, HESS, 2011). The MCP provides information about the probability of exceedance of a maximum river stage within the forecast lead time, by means of a discrete time function representing the variation of the cumulative probability of exceeding a river stage during the forecast lead time and the distribution of the time of occurrence of the flood peak, starting from one or more model forecasts. This work shows the Sele FFS performance after two years of operation, evidencing the added value it can provide to a flood early warning and emergency management system.
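The multi-temporal exceedance products described above can be sketched from an ensemble of forecast stage series; this is an illustration of the idea, not the operational MCP code.

```python
# Sketch: from an ensemble of forecast river stages, estimate (i) the
# cumulative probability that an alert stage is exceeded within each
# forecast horizon and (ii) the distribution of flood-peak arrival time.
import numpy as np

def exceedance_products(ensemble, alert_stage):
    """ensemble: (n_members, n_steps) array of forecast river stages."""
    running_max = np.maximum.accumulate(ensemble, axis=1)
    p_exceed = (running_max > alert_stage).mean(axis=0)   # cumulative P(exceed by step t)
    peak_step = ensemble.argmax(axis=1)                   # flood-peak timing per member
    timing_pdf = np.bincount(peak_step, minlength=ensemble.shape[1]) / len(ensemble)
    return p_exceed, timing_pdf
```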
Predictions and Observations of Munitions Burial Under Intense Storm Waves at Duck, NC
NASA Astrophysics Data System (ADS)
Calantoni, J.; Klammer, H.; Sheremet, A.
2017-12-01
The fate of munitions or unexploded ordnance (UXO) resting on a submarine sediment bed is a critical safety concern. Munitions may remain in place or completely disappear for significant but unknown periods, after becoming buried in the sediment bed. Clearly, burial of munitions drastically complicates the detection and removal of potential threats. Here, we present field data of wave height and surrogate munitions burial depths near the 8-m isobath at the U.S. Army Corps of Engineers, Field Research Facility, Duck, North Carolina, observed between January and March 2015. The experiment captured a remarkable sequence of storms that included at least 10 events, of which 6 were characterized by wave fields of significant heights exceeding 2 m and with peak periods of approximately 10 s. During the strongest storm, waves of 14 s period and heights exceeding 2 m were recorded for more than 3 days; significant wave height reached 5 m at the peak of activity. At the end of the experiment, divers measured munition burial depths of up to 60 cm below the seabed level. However, the local bathymetry showed less than 5 cm variation between the before and after-storm states, suggesting the local net sediment accumulation / loss was negligible. The lack of bathymetric variability strongly suggests that the munitions sank into the bed, which would suggest an extreme state of sand agitation during the storm. We explore existing analytical solutions for the dynamic interaction between waves and sediment to predict munitions burial depths. Measured time series of wave pressure near the sediment bed were converted into wave-induced changes in pore pressures and the effective stress states of the sediment. Different sediment failure criteria based on minimum normal and maximum shear stresses were then applied to evaluate the appropriateness of individual failure criteria to predict observed burial depths. Results are subjected to a sensitivity analysis with respect to uncertain sediment parameters and summarized by representing cumulative failure times as a function of depth.
A New Insight into Probabilistic Seismic Hazard Analysis for Central India
NASA Astrophysics Data System (ADS)
Mandal, H. S.; Shukla, A. K.; Khan, P. K.; Mishra, O. P.
2013-12-01
The Son-Narmada-Tapti lineament and its surroundings in Central India (CI) form the second most important tectonic regime, after the converging margin along the Himalayas-Myanmar-Andaman belt of the Indian sub-continent, and have attracted several geoscientists to assess the region's seismic hazard potential. Our study area, a part of CI, is bounded between latitudes 18°-26°N and longitudes 73°-83°E, representing a stable part of Peninsular India. Past damaging moderate-magnitude earthquakes as well as continuing microseismicity in the area provide enough data for seismological study. Our estimates based on the regional Gutenberg-Richter relationship showed b values (between 0.68 and 0.76) below the average for the study area. The Probabilistic Seismic Hazard Analysis carried out over the area within a radius of ~300 km encircling Bhopal yielded a conspicuous relationship between earthquake return period (T) and peak ground acceleration (PGA). Analyses of T and PGA show that the PGA value at bedrock varies from 0.08 to 0.15 g for 10% (T = 475 years) and 2% (T = 2,475 years) probabilities of exceedance in 50 years, respectively. We establish empirical relationships between zero period acceleration (ZPA) and shear wave velocity averaged over the upper 30 m [Vs(30)] for the two return periods. These demonstrate that ZPA values decrease with increasing shear wave velocity, suggesting a diagnostic indicator for designing structures at a specific site of interest. The predictive design response spectra generated at a site for periods up to 4.0 s at 10 and 2% probability of exceedance in 50 years can be used for designing duration-dependent structures of variable vertical dimension. We infer that this concept of assimilating uniform hazard response spectra and predictive design at 10 and 2% probability of exceedance in 50 years at 5% damping, for bedrocks of different categories, may offer useful inputs for designing earthquake-resistant structures of variable dimensions in the CI region under the National Earthquake Hazard Reduction Program for India.
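The return periods quoted above follow from the standard Poisson relation between exceedance probability and return period, T = -t / ln(1 - P); a two-line check:

```python
# Sketch: return period T implied by a P% chance of exceedance in t years,
# under the usual Poisson-occurrence assumption.
import math

def return_period(p_exceed, t_years=50):
    return -t_years / math.log(1.0 - p_exceed)

print(round(return_period(0.10)))   # ~475 years for 10% in 50 years
print(round(return_period(0.02)))   # ~2475 years for 2% in 50 years
```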
Andrews, Karen W; Roseland, Janet M; Gusev, Pavel A; Palachuvattil, Joel; Dang, Phuong T; Savarala, Sushma; Han, Fei; Pehrsson, Pamela R; Douglass, Larry W; Dwyer, Johanna T; Betz, Joseph M; Saldanha, Leila G; Bailey, Regan L
2017-01-01
Background: Multivitamin/mineral products (MVMs) are the dietary supplements most commonly used by US adults. During manufacturing, some ingredients are added in amounts exceeding the label claims to compensate for expected losses during the shelf life. Establishing the health benefits and harms of MVMs requires accurate estimates of nutrient intake from MVMs based on measures of actual rather than labeled ingredient amounts. Objectives: Our goals were to determine relations between analytically measured and labeled ingredient content and to compare adult MVM composition with Recommended Dietary Allowances (RDAs) and Tolerable Upper Intake Levels. Design: Adult MVMs were purchased while following a national sampling plan and chemically analyzed for vitamin and mineral content with certified reference materials in qualified laboratories. For each ingredient, predicted mean percentage differences between analytically obtained and labeled amounts were calculated with the use of regression equations. Results: For 12 of 18 nutrients, most products had labeled amounts at or above RDAs. The mean measured content of all ingredients (except thiamin) exceeded labeled amounts (overages). Predicted mean percentage differences exceeded labeled amounts by 1.5–13% for copper, manganese, magnesium, niacin, phosphorus, potassium, folic acid, riboflavin, and vitamins B-12, C, and E, and by ∼25% for selenium and iodine, regardless of labeled amount. In contrast, thiamin, vitamin B-6, calcium, iron, and zinc had linear or quadratic relations between the labeled and percentage differences, with ranges from −6.5% to 8.6%, −3.5% to 21%, 7.1% to 29.3%, −0.5% to 16.4%, and −1.9% to 8.1%, respectively. Analytically adjusted ingredient amounts are linked to adult MVMs reported in the NHANES 2003–2008 via the Dietary Supplement Ingredient Database (http://dsid.usda.nih.gov) to facilitate more accurate intake quantification. Conclusions: Vitamin and mineral overages were measured in adult MVMs, most of which already meet RDAs. Therefore, nutrient overexposures from supplements combined with typical food intake may have unintended health consequences, although this would require further examination. PMID:27974309
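The percentage-difference regressions described above can be sketched simply: compute each product's percent deviation from its label and fit it against the labeled amount (quadratic, as reported for some nutrients). The data below are invented placeholders, not study values.

```python
# Placeholder illustration of regressing percent overage on labeled amount.
import numpy as np

labeled = np.array([10., 25., 50., 100., 200.])      # labeled amounts (units vary by nutrient)
measured = np.array([10.9, 26.8, 53.1, 104., 207.])  # analytically measured amounts
pct_diff = 100.0 * (measured - labeled) / labeled    # percent above label

coeffs = np.polyfit(labeled, pct_diff, deg=2)        # quadratic relation, as for some nutrients
predicted_mean_diff = np.polyval(coeffs, 75.0)       # predicted overage at a 75-unit label claim
```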
NASA Astrophysics Data System (ADS)
Waltham, Nathan J.; Sheaves, Marcus
2017-09-01
Understanding the risk of acute hyperthermic exposure to animals, including fish in tropical estuaries, is increasingly necessary under future climate change. To examine this risk, fish (upper water column species: glassfish, Ambassis vachellii; river mullet, Chelon subviridis; diamond scale mullet, Ellochelon vaigiensis; and ponyfish, Leiognathus equulus; and a bottom-dwelling species: whiting, Sillago analis) were caught in an artificial tidal lake in tropical north Queensland (Australia) and transported to a laboratory tank to acclimate (3 weeks). After acclimation, fish (between 10 and 17 individuals each time) were transferred to a temperature-ramping experimental tank, where a thermoline raised the tank water temperature at 2.5 °C/hr (the average summer warming rate measured in the urban lakes) to establish the threshold point at which each fish species lost equilibrium (defined here as the Acute Effect Temperature, AET). The coolest AET among all species was 33.1 °C (S. analis), while the highest was 39.9 °C (A. vachellii). High-frequency loggers were deployed (in November and March, representing the Austral summer) in the same urban lake where fish were sourced, to measure continuous (20 min) surface (0.15 m) and bottom (0.1 m) temperatures and derive thermal frequency curves showing how often lake temperatures exceed the AET thresholds. For most fish species examined, water temperatures that could be lethal were exceeded at the surface but rarely, if ever, in the bottom waters, suggesting that deeper, cooler water provides thermal refugia for fish. An energy-balance model was used to estimate daily mean lake water temperature with good accuracy (±1 °C; R2 = 0.91, modelled versus measured lake temperature). The model was used to predict climate change effects on lake water temperature and the resulting change in exceedance of thermal thresholds. A 2.3 °C climate warming (based on a 2100 local climate prediction) raised lake water temperature by 1.3 °C. Small as this increase might seem, it led to a doubling of the time that water temperatures exceeded AET thresholds, not only at the surface but also in the bottom waters that presently provide thermal refugia for fish.
Stone, Wesley W.; Gilliom, Robert J.
2011-01-01
The 95-percent prediction intervals are well within a factor of 10 above and below the predicted concentration statistic. WARP-CB model predictions were within a factor of 5 of the observed concentration statistic for over 90 percent of the model-development sites. The WARP-CB residuals and uncertainty are lower than those of the National WARP model for the same sites. The WARP-CB models provide improved predictions of the probability of exceeding a specified criterion or benchmark for Corn Belt streams draining watersheds with high atrazine use intensities; however, National WARP models should be used for Corn Belt streams where atrazine use intensities are less than 17 kg/km2 of watershed area.
The Swift Gamma-ray Burst Explorer: Early views into Black-hole Creation
NASA Technical Reports Server (NTRS)
Hill, Joe
2006-01-01
Swift has exceeded every pre-launch predicted advance in GRB science. It has discovered the farthest GRB ever seen and identified new GRBs at a rate of 100 per year. It has also explored a brand new time interval in GRB light curves by revealing the unpredicted phenomena of GRB flares and rapid X-ray afterglow declines. Swift has conducted 20,000 successful slews to sources and is predicted to stay in orbit until 2022.
Outbrief - Long Life Rocket Engine Panel
NASA Technical Reports Server (NTRS)
Quinn, Jason Eugene
2004-01-01
This white paper is an overview of the JANNAF Long Life Rocket Engine (LLRE) Panel results from the last several years of activity. The LLRE Panel has met over the last several years to develop an approach for the development of long life rocket engines. Membership for this panel was drawn from a diverse set of the groups currently working on rocket engines (i.e., government labs, both large and small companies, and university members). The LLRE Panel was formed to determine the best way to enable the design of rocket engine systems that have a life capability greater than 500 cycles while meeting or exceeding current performance levels (specific impulse and thrust/weight) with a 1/1,000,000 likelihood of vehicle loss due to rocket system failure. After several meetings and much independent work, the panel reached a consensus opinion that the primary issues preventing LLRE are a lack of: physics-based life prediction, combined loads prediction, understanding of material microphysics, cost-effective system-level testing, and the inclusion of fabrication process effects into physics-based models. Given the expected level of funding devoted to LLRE development, the panel recommended that fundamental research efforts focused on these five areas be emphasized.
Predicting financial trouble using call data—On social capital, phone logs, and financial trouble
Lin, Chia-Ching; Chen, Kuan-Ta; Singh, Vivek Kumar
2018-01-01
An ability to understand and predict the financial wellbeing of individuals is of interest to economists, policy designers, financial institutions, and the individuals themselves. According to the Nilson reports, there were more than 3 billion credit cards in use in 2013, accounting for purchases exceeding US$ 2.2 trillion, and according to the Federal Reserve report, 39% of American households were carrying credit card debt from month to month. Prior literature has connected individual financial wellbeing with social capital. However, as yet, there is limited empirical evidence connecting social interaction behavior with financial outcomes. This work reports results from one of the largest known studies connecting financial outcomes and phone-based social behavior (180,000 individuals; a 2-year time frame; 82.2 million monthly bills; and 350 million call logs). Our methodology tackles a highly imbalanced dataset, a pertinent problem in modelling credit risk behavior, and offers a novel hybrid method that yields improvements over both a traditional transaction-data-only approach and an approach that uses only call data. The results pave the way for better financial modelling of billions of unbanked and underbanked customers using non-traditional metrics such as phone-based credit scoring. PMID:29474411
Effect of Rolling Bearing Refurbishment and Restoration on Bearing Life and Reliability
NASA Technical Reports Server (NTRS)
Zaretsky, Erwin V.; Branzai, Emanuel V.
2005-01-01
For nearly four decades it has been the practice in commercial and military aircraft applications that rolling-element bearings removed at maintenance or overhaul be reworked and returned to service. The work presented extends previously reported bearing life analysis to consider the depth (Z(45)) to the maximum shear stress (τ(45)) in relation to stressed volume removal, and the effect of replacing the rolling elements with a new set. A simple algebraic relationship was established to determine the L(10) life of bearing races subject to rework. Depending on the extent of rework and based upon theoretical analysis, representative life factors (LF) for bearings subject to rework ranged from 0.87 to 0.99 times the lives of new bearings. Based on bearing endurance data, 92 percent of the bearing sets subject to rework would achieve L(10) lives equaling or exceeding that predicted for new bearings, with the remaining 8 percent having the potential to achieve the analytically predicted life of new bearings when one of the rings is replaced at rework. The potential savings from bearing rework varies from 53 to 82 percent of the cost of new bearings, depending on the cost, size, and complexity of the bearing.
Predicting financial trouble using call data-On social capital, phone logs, and financial trouble.
Agarwal, Rishav Raj; Lin, Chia-Ching; Chen, Kuan-Ta; Singh, Vivek Kumar
2018-01-01
An ability to understand and predict the financial wellbeing of individuals is of interest to economists, policy designers, financial institutions, and the individuals themselves. According to the Nilson reports, there were more than 3 billion credit cards in use in 2013, accounting for purchases exceeding US$ 2.2 trillion, and according to the Federal Reserve report, 39% of American households were carrying credit card debt from month to month. Prior literature has connected individual financial wellbeing with social capital. However, as yet, there is limited empirical evidence connecting social interaction behavior with financial outcomes. This work reports results from one of the largest known studies connecting financial outcomes and phone-based social behavior (180,000 individuals; a 2-year time frame; 82.2 million monthly bills; and 350 million call logs). Our methodology tackles a highly imbalanced dataset, a pertinent problem in modelling credit risk behavior, and offers a novel hybrid method that yields improvements over both a traditional transaction-data-only approach and an approach that uses only call data. The results pave the way for better financial modelling of billions of unbanked and underbanked customers using non-traditional metrics such as phone-based credit scoring.
Shear Rheology of Suspensions of Porous Zeolite Particles in Concentrated Polymer Solutions
NASA Astrophysics Data System (ADS)
Olanrewaju, Kayode O.; Breedveld, Victor
2008-07-01
We present experimental data on the shear rheology of Ultem (polyetherimide)/NMP (1-methyl-2-pyrrolidinone) solutions with and without suspended surface-modified porous and nonporous zeolite (ZSM-5) particles. We found that the porous zeolite suspensions have relative viscosities that significantly exceed the Krieger-Dougherty predictions for hard-sphere suspensions. The major origin of this discrepancy is the selective absorption of NMP solvent into the zeolite pores, which raises both the polymer concentration and the particle volume fraction, thus enhancing both the viscosity of the continuous-phase Ultem/NMP polymer solution and the particle contribution to the suspension viscosity. Other factors, such as zeolite non-sphericity and specific interactions with the Ultem polymer, contribute to the suspension viscosity to a lesser extent. We propose a predictive model for the viscosity of porous zeolite suspensions by incorporating an absorption parameter, α, into the Krieger-Dougherty model. We also propose independent approaches to determine α: the first is indirect, based on zeolite density/porosity data and assuming that all pores are filled with solvent; the second compares the viscosity data of porous versus nonporous zeolite suspensions. The different approaches are compared.
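The modelling idea can be illustrated with a short sketch: a Krieger-Dougherty relative viscosity in which an absorption parameter α inflates the effective particle volume fraction to account for solvent drawn into the pores. The way α enters and the numerical values below are our assumptions for demonstration, not the authors' fitted model:

    def krieger_dougherty(phi, phi_m=0.64, intrinsic_visc=2.5):
        # Hard-sphere Krieger-Dougherty relative viscosity
        return (1.0 - phi / phi_m) ** (-intrinsic_visc * phi_m)

    def eta_rel_porous(phi, alpha, phi_m=0.64):
        # Assumed form: absorbed solvent counted as extra particle volume
        phi_eff = phi * (1.0 + alpha)
        return krieger_dougherty(phi_eff, phi_m)

    print(krieger_dougherty(0.20))          # nonporous (hard-sphere) baseline
    print(eta_rel_porous(0.20, alpha=0.3))  # porous particles, 30% apparent swelling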
Simulation Assisted Risk Assessment: Blast Overpressure Modeling
NASA Technical Reports Server (NTRS)
Lawrence, Scott L.; Gee, Ken; Mathias, Donovan; Olsen, Michael
2006-01-01
A probabilistic risk assessment (PRA) approach has been developed and applied to the risk analysis of capsule abort during ascent. The PRA is used to assist in the identification of modeling and simulation applications that can significantly impact the understanding of crew risk during this potentially dangerous maneuver. The PRA approach is also being used to identify the appropriate level of fidelity for the modeling of those critical failure modes. The Apollo launch escape system (LES) was chosen as a test problem for application of this approach. Failure modes that have been modeled and/or simulated to date include explosive overpressure-based failure, explosive fragment-based failure, land landing failures (range limits exceeded either near launch or on Mode III trajectories ending on the African continent), capsule-booster re-contact during separation, and failure due to plume-induced instability. These failure modes have been investigated using analysis tools in a variety of technical disciplines at various levels of fidelity. The current paper focuses on the development and application of a blast overpressure model for the prediction of structural failure due to overpressure, including the application of high-fidelity analysis to predict near-field and headwind effects.
NASA Astrophysics Data System (ADS)
Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.
2015-04-01
Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth through empirical crop coefficients that adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that the structural uncertainty among the reference evapotranspiration models is far more important than the parametric uncertainty introduced by the crop coefficients, which are used to estimate irrigation water requirements following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a given threshold, e.g. an irrigation water limit of 400 mm imposed by water rights, would be exceeded less frequently under the REA ensemble average (45%) than under the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
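The exceedance-probability comparison can be sketched in a few lines: an empirical exceedance probability for a threshold under equal ensemble weights versus REA-style reliability weights. The irrigation water requirement values and weights below are invented placeholders:

    import numpy as np

    iwr = np.array([380., 410., 430., 390., 450.])     # mm, one value per ensemble member
    rea_w = np.array([0.30, 0.10, 0.15, 0.35, 0.10])   # REA-style reliability weights

    def p_exceed(threshold, values, w=None):
        # Weighted fraction of ensemble members exceeding the threshold
        w = np.full(len(values), 1.0 / len(values)) if w is None else w / w.sum()
        return float(w[values > threshold].sum())

    print(p_exceed(400., iwr))          # equally weighted ensemble
    print(p_exceed(400., iwr, rea_w))   # reliability-weighted ensemble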
NASA Astrophysics Data System (ADS)
Dehghan, A.; Mariani, Z.; Gascon, G.; Bélair, S.; Milbrandt, J.; Joe, P. I.; Crawford, R.; Melo, S.
2017-12-01
Environment and Climate Change Canada (ECCC) is implementing a 2.5-km resolution version of the Global Environmental Multiscale (GEM) model over the Canadian Arctic. Radiosonde observations were used to evaluate the numerical representation of surface-based temperature inversions, a major feature of the Arctic region. Arctic surface-based inversions are often created by an imbalance between radiative cooling processes at the surface and warm air advection above. This can have a significant effect on the vertical mixing of pollutants and moisture and, ultimately, on cloud formation. It is therefore important to correctly predict the existence of surface inversions along with their characteristics (i.e., intensity and depth). Previous climatological studies showed that the frequency and intensity of surface-based inversions are larger during the colder months in the Arctic. Therefore, surface-based inversions were characterized using radiosonde measurements during winter (December 2015 to February 2016) at Iqaluit (Nunavut, Canada). Results show that the inversion intensity can exceed 10 K, with depths as large as 1 km. Preliminary evaluation of GEM outputs reveals that the model tends to underestimate the intensity of near-surface inversions and, in some cases, fails to predict an inversion at all. This study presents the factors contributing to this bias, including surface temperature and snow cover.
Examining INM Accuracy Using Empirical Sound Monitoring and Radar Data
NASA Technical Reports Server (NTRS)
Miller, Nicholas P.; Anderson, Grant S.; Horonjeff, Richard D.; Kimura, Sebastian; Miller, Jonathan S.; Senzig, David A.; Thompson, Richard H.; Shepherd, Kevin P. (Technical Monitor)
2000-01-01
Aircraft noise measurements were made using noise monitoring systems at Denver International and Minneapolis St. Paul Airports. Measured sound exposure levels for a large number of operations of a wide range of aircraft types were compared with predictions using the FAA's Integrated Noise Model. In general it was observed that measured levels exceeded the predicted levels by a significant margin. These differences varied according to the type of aircraft and also depended on the distance from the aircraft. Many of the assumptions which affect the predicted sound levels were examined but none were able to fully explain the observed differences.
NEW CATEGORICAL METRICS FOR AIR QUALITY MODEL EVALUATION
Traditional categorical metrics used in model evaluations are "clear-cut" measures in that the model's ability to predict an exceedance is defined by a fixed threshold concentration and the metrics are defined by observation-forecast sets that are paired both in space and time. T...
Space radiation dose analysis for solar flare of August 1989
NASA Technical Reports Server (NTRS)
Nealy, John E.; Simonsen, Lisa C.; Sauer, Herbert H.; Wilson, John W.; Townsend, Lawrence W.
1990-01-01
Potential dose and dose rate levels to astronauts in deep space are predicted for the solar flare event which occurred during the week of August 13, 1989. The Geostationary Operational Environmental Satellite (GOES-7) monitored the temporal development and energy characteristics of the protons emitted during this event. From these data, differential fluence as a function of energy was obtained in order to analyze the flare using the Langley baryon transport code, BRYNTRN, which describes the interactions of incident protons in matter. Dose equivalent estimates for the skin, ocular lens, and vital organs for 0.5 to 20 g/sq cm of aluminum shielding were predicted. For relatively light shielding (less than 2 g/sq cm), the skin and ocular lens 30-day exposure limits are exceeded within several hours of flare onset. The vital organ (5 cm depth) dose equivalent is exceeded only for the thinnest shield (0.5 g/sq cm). Dose rates (rem/hr) for the skin, ocular lens, and vital organs are also computed.
Space radiation dose analysis for solar flare of August 1989
NASA Astrophysics Data System (ADS)
Nealy, John E.; Simonsen, Lisa C.; Sauer, Herbert H.; Wilson, John W.; Townsend, Lawrence W.
1990-12-01
Potential dose and dose rate levels to astronauts in deep space are predicted for the solar flare event which occurred during the week of August 13, 1989. The Geostationary Operational Environmental Satellite (GOES-7) monitored the temporal development and energy characteristics of the protons emitted during this event. From these data, differential fluence as a function of energy was obtained in order to analyze the flare using the Langley baryon transport code, BRYNTRN, which describes the interactions of incident protons in matter. Dose equivalent estimates for the skin, ocular lens, and vital organs for 0.5 to 20 g/sq cm of aluminum shielding were predicted. For relatively light shielding (less than 2 g/sq cm), the skin and ocular lens 30-day exposure limits are exceeded within several hours of flare onset. The vital organ (5 cm depth) dose equivalent is exceeded only for the thinnest shield (0.5 g/sq cm). Dose rates (rem/hr) for the skin, ocular lens, and vital organs are also computed.
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1978-01-01
Yearly, monthly, and time-of-day fade statistics are presented and characterized. A 19.04 GHz yearly fade distribution, corresponding to a second COMSTAR beacon frequency, is predicted using the concept of effective path length together with disdrometer and rain rate results. The yearly attenuation and rain rate distributions follow, to good approximation, log-normal variations for most fade and rain rate levels. Attenuations were exceeded for the longest and shortest periods of time for all fades in August and February, respectively. The eight-hour periods showing the maximum and minimum number of minutes over the year for which fades exceeded 12 dB were approximately 1600 to 2400 and 0400 to 1200 hours, respectively. In employing the predictive method for obtaining the 19.04 GHz fade distribution, it is demonstrated theoretically that the ratio of attenuations at two frequencies is minimally dependent on the raindrop size distribution, provided the frequencies are not widely separated.
NASA Technical Reports Server (NTRS)
Kierein, J. W.
1977-01-01
The baseline configuration defined has the SERGE antenna panel array mounted on the OFT-2 pallet sufficiently high in the bay that negligible amounts of radiation from the beam are reflected from orbiter surfaces into the shuttle payload bay. The array is mounted symmetrically to the pallet along the array's long dimension, with the pallet at the center, and utilizes a graphite epoxy trusswork support structure. The antenna panels are of SEASAT engineering model design and construction. The antenna array has 7 panels and a 7-way naturally tapered coax corporate feed system. The performance of the system is predicted to exceed 33 dB gain, have -15 dB sidelobes in the E-plane and even lower in the H-plane, and have an E-plane beamwidth less than 2.2 deg, all within performance specification. The primary support structure is predicted to exceed the specified greater-than-25-hertz fundamental frequency, although individual panels will have a lower fundamental frequency.
Wiegner, T N; Edens, C J; Abaya, L M; Carlson, K M; Lyon-Colbert, A; Molloy, S L
2017-01-30
Spatial and temporal patterns of coastal microbial pollution are not well documented. Our study examined these patterns through measurements of fecal indicator bacteria (FIB), nutrients, and physiochemical parameters in Hilo Bay, Hawai'i, during high and low river flow. More than 40% of samples tested positive for the human-associated Bacteroides marker, with the highest percentages near rivers. Other FIB were also higher near rivers, but only Clostridium perfringens concentrations were related to discharge. During storms, FIB concentrations were three times to an order of magnitude higher, and increased with decreasing salinity and water temperature and increasing turbidity. These relationships and high-spatial-resolution data for these parameters were used to create Enterococcus spp. and C. perfringens maps that predicted exceedances with 64% and 95% accuracy, respectively. Mapping microbial pollution patterns and predicting exceedances is a valuable tool that can improve water quality monitoring and aid in visualizing FIB hotspots for management actions.
NASA Technical Reports Server (NTRS)
Hague, M. J.; Ferrari, M. R.; Miller, J. R.; Patterson, D. A.; Russell, G. L.; Farrell, A.P.; Hinch, S. G.
2010-01-01
Short episodic high temperature events can be lethal for migrating adult Pacific salmon (Oncorhynchus spp.). We downscaled temperatures for the Fraser River, British Columbia, to evaluate the impact of climate warming on the frequency of exceeding thermal thresholds associated with salmon migratory success. Alarmingly, a modest 1.0 °C increase in average summer water temperature over 100 years (1981-2000 to 2081-2100) tripled the number of days per year exceeding critical salmonid thermal thresholds (i.e., 19.0 °C). Refined thresholds for two populations (Gates Creek and Weaver Creek) of sockeye salmon (Oncorhynchus nerka) were defined using physiological constraint models based on aerobic scope. While extreme temperatures leading to complete aerobic collapse remained unlikely under our warming scenario, both populations were increasingly forced to migrate upriver at reduced levels of aerobic performance (e.g., in 80% of future simulations, ≥90% of salmon encountered temperatures exceeding the population-specific thermal optima for maximum aerobic scope: Topt = 16.3 °C for Gates Creek and Topt = 14.5 °C for Weaver Creek). Assuming recent changes to river entry timing persist, we also predicted dramatic increases in the probability of freshwater mortality for Weaver Creek salmon due to reductions in aerobic, and general physiological, performance (e.g., in 42% of future simulations, ≥50% of Weaver Creek fish exceeded temperature thresholds associated with 0-60% of maximum aerobic scope). The potential for adaptation via directional selection on run-timing was more evident for the Weaver Creek population: early entry Weaver Creek fish experienced 25% (range: 15-31%) more suboptimal temperatures than late entrants, compared with an 8% difference (range: 0-17%) between early and late Gates Creek fish. Our results emphasize the need to consider daily temperature variability in association with population-specific differences in behaviour and physiological constraints when forecasting impacts of climate change on the migratory survival of aquatic species.
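The core exceedance-frequency calculation is simple to sketch: count days per year above a thermal threshold for a baseline daily temperature series and for a uniformly warmed one. The synthetic series below stands in for the downscaled Fraser River temperatures:

    import numpy as np

    rng = np.random.default_rng(0)
    days = np.arange(365)
    baseline = 12.0 + 7.0 * np.sin(2 * np.pi * (days - 100) / 365) + rng.normal(0, 1, 365)

    def days_above(series, threshold=19.0):
        # Number of days exceeding the salmonid thermal threshold
        return int((series > threshold).sum())

    print(days_above(baseline))        # baseline exceedance days
    print(days_above(baseline + 1.0))  # after a 1.0 deg C mean warming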
13 CFR 124.513 - Under what circumstances can a joint venture be awarded an 8(a) contract?
Code of Federal Regulations, 2010 CFR
2010-01-01
...; and (ii)(A) For a procurement having a revenue-based size standard, the procurement exceeds half the... an employee-based size standard, the procurement exceeds $10 million; (2) For sole source and... the purpose of performing one or more specific 8(a) contracts. (2) A joint venture agreement is...
Fiscal Year 2001 Medicaid Home and Community-Based Services Expenditures Exceed Those of ICFs/MR.
ERIC Educational Resources Information Center
Lakin, K. Charlie; Prouty, Robert; Smith, Jerra; Polister, Barb; Smith, Gary
2002-01-01
This article reports that in 2001, for the first time since its creation 20 years earlier, Medicaid Home and Community-Based Services (HCBS) Waiver programs for persons with intellectual and developmental disabilities had Federal and state expenditures that exceeded those for Medicaid Intermediate Care Facilities for Persons with Mental…
Burnside, Elizabeth S.; Liu, Jie; Wu, Yirong; Onitilo, Adedayo A.; McCarty, Catherine; Page, C. David; Peissig, Peggy; Trentham-Dietz, Amy; Kitchner, Terrie; Fan, Jun; Yuan, Ming
2015-01-01
Rationale and Objectives: The discovery of germline genetic variants associated with breast cancer has engendered interest in risk stratification for improved, targeted detection and diagnosis. However, there has yet to be a comparison of the predictive ability of these genetic variants with mammography abnormality descriptors. Materials and Methods: Our IRB-approved, HIPAA-compliant study utilized a personalized medicine registry in which participants consented to provide a DNA sample and participate in longitudinal follow-up. In our retrospective, age-matched, case-control study of 373 cases and 395 controls who underwent breast biopsy, we collected risk factors selected a priori based on the literature: demographic variables based on the Gail model, common germline genetic variants, and diagnostic mammography findings according to BI-RADS. We developed predictive models using logistic regression to determine the predictive ability of (1) demographic variables, (2) 10 selected genetic variants, and (3) mammography BI-RADS features. We evaluated each model in turn by calculating a risk score for each patient using 10-fold cross-validation, used this risk estimate to construct ROC curves, and compared the AUCs using the DeLong method. Results: The performance of the regression model using demographic risk factors was not statistically different from that of the model using genetic variants (p = 0.9). The model using mammography features (AUC = 0.689) was superior to both the demographic model (AUC = 0.598; p < 0.001) and the genetic model (AUC = 0.601; p < 0.001). Conclusion: BI-RADS features exceeded the ability of demographic variables and 10 selected germline genetic variants to predict breast cancer in women recommended for biopsy. PMID:26514439
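The validation design described above can be sketched schematically: 10-fold cross-validated risk scores from a logistic regression, summarized as an ROC AUC. The features below are random placeholders for the BI-RADS descriptors, not the study's data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)
    X_birads = rng.normal(size=(768, 8))   # placeholder feature matrix
    y = rng.integers(0, 2, size=768)       # biopsy outcome (373 cases/395 controls in the study)

    # Out-of-fold predicted probabilities serve as each patient's risk score
    risk = cross_val_predict(LogisticRegression(max_iter=1000),
                             X_birads, y, cv=10, method="predict_proba")[:, 1]
    print(roc_auc_score(y, risk))  # repeat per feature set, then compare AUCs (DeLong test)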
A Bayesian antedependence model for whole genome prediction.
Yang, Wenzhao; Tempelman, Robert J
2012-04-01
Hierarchical mixed effects models have been demonstrated to be powerful for predicting the genomic merit of livestock and plants on the basis of high-density single-nucleotide polymorphism (SNP) marker panels, and their use is being increasingly advocated for genomic predictions in human health. Two particularly popular approaches, labeled BayesA and BayesB, are based on specifying all SNP-associated effects to be independent of each other. BayesB extends BayesA by allowing a large proportion of SNP markers to be associated with null effects. We further extend these two models to specify SNP effects as being spatially correlated due to the chromosomally proximal effects of causal variants. These two models, which we dub ante-BayesA and ante-BayesB, respectively, are based on a first-order nonstationary antedependence specification between SNP effects. In a simulation study involving 20 replicate data sets, each analyzed at six different SNP marker densities with average LD levels ranging from r² = 0.15 to 0.31, the antedependence methods had significantly (P < 0.01) higher accuracies than their corresponding classical counterparts at higher LD levels (r² > 0.24), with differences exceeding 3%. A cross-validation study was also conducted on the heterogeneous stock mice data resource (http://mus.well.ox.ac.uk/mouse/HS/) using 6-week body weights as the phenotype. The antedependence methods increased cross-validation prediction accuracies by up to 3.6% compared to their classical counterparts (P < 0.001). Finally, we applied our method to other benchmark data sets and demonstrated that the antedependence methods were more accurate than their classical counterparts for genomic predictions, even for individuals several generations beyond the training data.
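The antedependence idea itself is easy to illustrate: under a first-order nonstationary specification, each SNP effect borrows from its chromosomal neighbor through a site-specific coefficient. The generative sketch below (not the authors' full Bayesian sampler) shows how such correlated effects arise:

    import numpy as np

    rng = np.random.default_rng(2)
    m = 1000                                     # number of SNPs
    t = rng.uniform(0.0, 0.9, size=m)            # nonstationary antedependence parameters
    innov_sd = rng.uniform(0.01, 0.05, size=m)   # site-specific innovation scales

    g = np.zeros(m)
    g[0] = rng.normal(0.0, innov_sd[0])
    for j in range(1, m):
        # g_j = t_j * g_{j-1} + delta_j: spatially correlated SNP effects
        g[j] = t[j] * g[j - 1] + rng.normal(0.0, innov_sd[j])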
Predicting the particle size distribution of eroded sediment using artificial neural networks.
Lagos-Avid, María Paz; Bonilla, Carlos A
2017-03-01
Water erosion causes soil degradation and nonpoint pollution. Pollutants are primarily transported on the surfaces of fine soil and sediment particles. Several soil loss models and empirical equations have been developed to estimate the size distribution of the sediment leaving the field. Physically based models usually require a large amount of data, sometimes exceeding what is available in the modeled area; conversely, empirical equations do not always predict the sediment composition associated with individual events and may require data that are not always available. Therefore, the objective of this study was to develop a model to predict the particle size distribution (PSD) of eroded soil. A total of 41 erosion events from 21 soils, compiled from previous studies, were used. Correlation and multiple regression analyses were used to identify the main variables controlling sediment PSD: the particle size distribution of the soil matrix, the antecedent soil moisture condition, soil erodibility, and hillslope geometry. With these variables, an artificial neural network was calibrated using data from 29 events (r² = 0.98, 0.97, and 0.86 for sand, silt, and clay in the sediment, respectively) and then validated and tested on 12 events (r² = 0.74, 0.85, and 0.75 for sand, silt, and clay in the sediment, respectively). The artificial neural network was compared with three empirical models and showed better performance in predicting sediment PSD and in differentiating rain-runoff events in the same soil. In addition to the quality of the particle distribution estimates, this model requires a small number of easily obtained variables, providing a convenient routine for predicting the PSD of eroded sediment in other pollutant transport models.
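A minimal sketch of the network setup, assuming the predictors named above and random placeholder data (the study's 41 events are not reproduced here):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(3)
    X = rng.normal(size=(41, 6))              # soil-matrix PSD, moisture, erodibility, geometry
    Y = rng.dirichlet(np.ones(3), size=41)    # sand, silt, clay fractions (sum to 1)

    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=12, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    net.fit(X_tr, Y_tr)                       # calibrate on 29 events
    print(net.score(X_te, Y_te))              # R^2 on the 12 held-out events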
Bah, Mamadou T; Nair, Prasanth B; Browne, Martin
2009-12-01
Finite element (FE) analysis of the effect of implant positioning on the performance of cementless total hip replacements (THRs) requires the generation of multiple meshes to account for positioning variability. This process can be labour intensive and time consuming as CAD operations are needed each time a specific orientation is to be analysed. In the present work, a mesh morphing technique is developed to automate the model generation process. The volume mesh of a baseline femur with the implant in a nominal position is deformed as the prosthesis location is varied. A virtual deformation field, obtained by solving a linear elasticity problem with appropriate boundary conditions, is applied. The effectiveness of the technique is evaluated using two metrics: the percentages of morphed elements exceeding an aspect ratio of 20 and an angle of 165 degrees between the adjacent edges of each tetrahedron. Results show that for 100 different implant positions, the first and second metrics never exceed 3% and 3.5%, respectively. To further validate the proposed technique, FE contact analyses are conducted using three selected morphed models to predict the strain distribution in the bone and the implant micromotion under joint and muscle loading. The entire bone strain distribution is well captured and both percentages of bone volume with strain exceeding 0.7% and bone average strains are accurately computed. The results generated from the morphed mesh models correlate well with those for models generated from scratch, increasing confidence in the methodology. This morphing technique forms an accurate and efficient basis for FE based implant orientation and stability analysis of cementless hip replacements.
The game of making decisions under uncertainty: How sure must one be?
NASA Astrophysics Data System (ADS)
Werner, Micha; Verkade, Jan; Wetterhall, Fredrik; van Andel, Schalk-Jan; Ramos, Maria-Helena
2016-04-01
Probabilistic hydrometeorological forecasting is now widely accepted to be more skillful than deterministic forecasting and is increasingly being integrated into operational practice. Provided they are reliable and unbiased, probabilistic forecasts have the advantage that they give the decision maker not only the forecast value but also the uncertainty associated with that prediction. Though that information provides more insight, it leaves the forecaster/decision maker with the challenge of deciding at what probability of a threshold being exceeded the decision to act should be taken. According to cost-loss theory, that probability should be related to the impact of the threshold being exceeded. However, it is not entirely clear how easy it is for decision makers to follow that rule, even when the impact of a threshold being exceeded and the actions to choose from are known. To continue the tradition of the "Ensemble Hydrometeorological Forecast" session, we will address the challenge of making decisions based on probabilistic forecasts through a game to be played with the audience. We will explore how the decisions made differ depending on the known impacts of the forecasted events. Participants will be divided into a number of groups with differing levels of impact and will be faced with a number of forecast situations. They will be asked to make decisions and record the consequences of those decisions. A discussion of the differences in the decisions made will be presented at the end of the game, with a fuller analysis later posted on the HEPEX web site blog (www.hepex.org).
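The cost-loss rule that frames the game reduces to one comparison: act whenever the forecast probability of exceedance is greater than the ratio of the cost of acting to the loss avoided. A toy sketch with invented numbers:

    def should_act(p_exceed, cost, loss):
        # Acting always costs `cost`; not acting risks `loss` with probability p_exceed
        return p_exceed > cost / loss

    print(should_act(0.30, cost=10_000, loss=100_000))  # True: 0.30 > 0.10
    print(should_act(0.05, cost=10_000, loss=100_000))  # False: 0.05 < 0.10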
Unbound bilirubin measurements by a novel probe in preterm infants.
Hegyi, Thomas; Kleinfeld, Alan; Huber, Andrew; Weinberger, Barry; Memon, Naureen; Shih, Weichung; Carayannopoulos, Mary; Oh, William
2018-03-12
Hyperbilirubinemia occurs in over 80% of newborns, and severe bilirubin toxicity can lead to neurological dysfunction and death, especially in preterm infants. Currently, the risk of bilirubin toxicity is assessed by measuring the level of total serum bilirubin (TSB), which is used to direct treatments including immunoglobulin administration, phototherapy, and exchange transfusion. However, free, unbound bilirubin levels (Bf) predict the risk of bilirubin neurotoxicity more accurately than TSB. Our aim was to examine Bf levels in preterm infants and determine the frequency with which they exceed reported neurotoxic thresholds. One hundred thirty preterm infants (BW 500-2000 g; GA 23-34 weeks) were enrolled, and Bf levels were measured during the first week of life by the fluorescent Bf sensor BL22P1B11-Rh. TSB and plasma albumin were measured by standard techniques. Bilirubin-albumin dissociation constants (Kd) were calculated based on Bf and plasma albumin. Five hundred eighty samples were measured during the first week of life, with an overall mean Bf of 13.6 ± 9.0 nM. A substantial number of measurements exceeded potentially toxic threshold levels reported in the literature. The correlation between Bf and TSB was statistically significant (r² = 0.17), but this weak relationship was lost at high Bf levels. Infants of <28 weeks' gestation had more hearing screening failures than infants of ≥28 weeks' gestation. Unbound (free) bilirubin values are extremely variable during the first week of life in preterm infants, and a significant proportion of these values exceeded reported neurotoxic thresholds.
Environmental Assessment: West Coast Basing of C-17 Aircraft
2003-06-01
Emissions for criteria pollutants will not be regionally significant by United States Environmental Protection Agency standards and will not exceed de minimis thresholds, and a Conformity Determination would not be required. MTRs. Emissions from C-17 operations on the
Hahn, Tim; Kircher, Tilo; Straube, Benjamin; Wittchen, Hans-Ulrich; Konrad, Carsten; Ströhle, Andreas; Wittmann, André; Pfleiderer, Bettina; Reif, Andreas; Arolt, Volker; Lueken, Ulrike
2015-01-01
Although neuroimaging research has made substantial progress in identifying the large-scale neural substrate of anxiety disorders, its value for clinical application lags behind expectations. Machine-learning approaches have predictive potential for individual-patient prognostic purposes and might thus aid translational efforts in psychiatric research. Our objective was to predict treatment response to cognitive behavioral therapy (CBT) on an individual-patient level, based on functional magnetic resonance imaging data, in patients with panic disorder with agoraphobia (PD/AG). We included 49 patients free of medication for at least 4 weeks and with a primary diagnosis of PD/AG in a longitudinal study performed at 8 clinical research institutes and outpatient centers across Germany. The functional magnetic resonance imaging study was conducted between July 2007 and March 2010. Twelve CBT sessions were conducted twice a week, focusing on behavioral exposure. Treatment response was defined as exceeding a 50% reduction in Hamilton Anxiety Rating Scale scores. The blood oxygenation level-dependent signal was measured during a differential fear-conditioning task. Regional and whole-brain Gaussian process classifiers with a nested leave-one-out cross-validation were used to predict treatment response from data acquired before CBT. Although no single brain region was predictive of treatment response, integrating regional classifiers based on data from the acquisition and extinction phases of the fear-conditioning task for the whole brain yielded good predictive performance (accuracy, 82%; sensitivity, 92%; specificity, 72%; P < .001). Data from the acquisition phase enabled 73% correct individual-patient classifications (sensitivity, 80%; specificity, 67%; P < .001), whereas data from the extinction phase led to an accuracy of 74% (sensitivity, 64%; specificity, 83%; P < .001). Conservative reanalyses accounting for potential confounders yielded nominally lower but comparable accuracy rates (acquisition phase, 70%; extinction phase, 71%; combined, 79%). Predicting treatment response to CBT based on functional neuroimaging data in PD/AG is thus possible with high accuracy on an individual-patient level. This novel machine-learning approach brings personalized medicine within reach, directly supporting clinical decisions on the selection of treatment options and thus helping to improve response rates.
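The classifier setup can be sketched schematically; random placeholder features stand in for the fMRI contrasts, and the simple leave-one-out loop below omits the nesting used in the study:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(4)
    X = rng.normal(size=(49, 20))      # 49 patients x placeholder imaging features
    y = rng.integers(0, 2, size=49)    # responder (1) vs non-responder (0)

    acc = cross_val_score(GaussianProcessClassifier(), X, y, cv=LeaveOneOut())
    print(acc.mean())                  # leave-one-out classification accuracy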
Risk assessment for pesticides' MRL non-compliances in Poland in the years 2011-2015.
Struciński, Paweł; Ludwicki, Jan K; Góralczyk, Katarzyna; Czaja, Katarzyna; Hernik, Agnieszka; Liszewska, Monika
2015-01-01
Human exposure to trace levels of pesticide residues present in food of plant origin is inevitable as long as pesticides continue to be applied in agriculture. Since Maximum Residue Levels (MRLs) are not toxicological endpoint values, their violation is not by default equivalent to a health risk for consumers. However, it is essential to provide a health-based risk assessment for each case of MRL non-compliance reported during monitoring and official control of foodstuffs. Our aim was to assess the potential short-term risk associated with consumption of food products of plant origin containing pesticide residues above MRL values, based on notifications forwarded by the National Contact Point for RASFF in Poland during 2011-2015. In total, 115 notifications covering 127 analytical results non-compliant with the respective MRL values were forwarded for risk assessment. An internationally accepted deterministic approach based on conservative model assumptions for short-term exposure assessment was applied. The risk was characterized by comparing the estimated dietary intake with the respective acute reference dose (ARfD). Black currant, tea, lettuce, Chinese cabbage, and carrot were among the most frequently notified products in the years 2011-2015. Among the pesticides exceeding their respective MRL values, over 90% were fungicides and insecticides/acaricides such as acetamiprid, chlorpyrifos, dimethoate, imidacloprid, dithiocarbamates, and procymidone. For 15 and 6 non-compliant results, the predicted short-term intake exceeded the ARfD for children and adults, respectively. Residue levels that could potentially pose a health threat are found only incidentally. A science-based and transparent risk assessment process with regard to the data, methods, and assumptions applied is essential to risk management authorities. Keywords: risk assessment, pesticide residues, MRL, dietary intake, RASFF, food safety.
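The deterministic check reduces to comparing an estimated short-term intake with the ARfD. A simplified sketch (the full IESTI methodology distinguishes several cases; all numbers below are invented):

    def esti_mg_per_kg_bw(large_portion_kg, residue_mg_per_kg, body_weight_kg):
        # Estimated short-term intake: large portion x residue level / body weight
        return large_portion_kg * residue_mg_per_kg / body_weight_kg

    arfd = 0.025                                   # mg/kg bw, illustrative value
    intake = esti_mg_per_kg_bw(0.4, 1.2, 16.15)    # child portion, residue, body weight
    print(intake, intake > arfd)                   # flag exceedance of the ARfD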
NASA Astrophysics Data System (ADS)
Ren, Jingye; Zhang, Fang; Wang, Yuying; Collins, Don; Fan, Xinxin; Jin, Xiaoai; Xu, Weiqi; Sun, Yele; Cribb, Maureen; Li, Zhanqing
2018-05-01
Understanding the impacts of aerosol chemical composition and mixing state on cloud condensation nuclei (CCN) activity in polluted areas is crucial for accurately predicting CCN number concentrations (NCCN). In this study, we predict NCCN under five assumed schemes of aerosol chemical composition and mixing state based on field measurements in Beijing during the winter of 2016. Our results show that the best closure is achieved with the assumption of size-dependent chemical composition, for which sulfate, nitrate, secondary organic aerosols, and aged black carbon are internally mixed with each other but externally mixed with primary organic aerosol and fresh black carbon (external-internal size-resolved, abbreviated as the EI-SR scheme). The resulting ratios of predicted-to-measured NCCN (RCCN_p/m) were 0.90-0.98 under both clean and polluted conditions. The assumption of an internal mixture with bulk chemical composition (INT-BK scheme) shows good closure, with RCCN_p/m of 1.0-1.16 under clean conditions, implying that it is adequate for CCN prediction in continental clean regions. On polluted days, assuming the aerosol is internally mixed with size-dependent chemical composition (INT-SR scheme) achieves better closure than the INT-BK scheme, due to the heterogeneity and variation in particle composition at different sizes. The improved closure achieved using the EI-SR and INT-SR assumptions highlights the importance of measuring size-resolved chemical composition for CCN predictions in polluted regions. NCCN is significantly underestimated (with RCCN_p/m of 0.66-0.75) when using schemes of external mixtures with bulk (EXT-BK scheme) or size-resolved composition (EXT-SR scheme), implying that primary particles experience rapid aging and physical mixing processes in urban Beijing. However, our results show that the aerosol mixing state plays a minor role in CCN prediction when κorg exceeds 0.1.
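CCN closure of this kind rests on kappa-Koehler theory (Petters and Kreidenweis, 2007), in which each dry diameter and hygroscopicity kappa maps to a critical supersaturation. A hedged sketch using standard constants for water at 298 K:

    import numpy as np

    def critical_supersaturation(d_dry_m, kappa, T=298.0):
        sigma, Mw, rho_w, R = 0.072, 0.018, 1000.0, 8.314   # SI units
        A = 4.0 * sigma * Mw / (R * T * rho_w)              # Kelvin parameter (m)
        s_c = np.exp(np.sqrt(4.0 * A**3 / (27.0 * kappa * d_dry_m**3))) - 1.0
        return 100.0 * s_c                                  # percent supersaturation

    print(critical_supersaturation(100e-9, kappa=0.3))      # ~0.2% for a 100 nm particle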
Kuntsche, Sandra; Gmel, Gerhard
2005-01-01
Cultural and sex differences in smoking rates among countries indicate different phases of the smoking epidemic. Their background is summarized in a four-stage model based on the Rogers Theory of Diffusion of Innovations. Our aims were, first, to test predictions of the Rogers theory and, second, to test whether, according to the theory, today's innovative process is smoking cessation, as predicted by higher rates of cessation among the more highly educated and among men of all educational levels. Data covered respondents older than 24 years from two Swiss Health Surveys (1997 and 2002). Logistic regression models were fitted for lifetime smoking versus never smoking, and for former smoking versus current smoking. Declining smoking rates in both sexes over time, measured by birth cohorts, indicate that the epidemic has peaked, but women of all educational levels and men of lower education still show high prevalence rates. The gap between higher-educated and lower-educated individuals is widening. Smoking prevalence is expected to decline further, particularly among women and less educated men. The incidence of tobacco-related diseases in women is predicted to exceed that of men, owing to their lower cessation rates.
SPEER-SERVER: a web server for prediction of protein specificity determining sites
Chakraborty, Abhijit; Mandloi, Sapan; Lanczycki, Christopher J.; Panchenko, Anna R.; Chakrabarti, Saikat
2012-01-01
Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids’ Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/. PMID:22689646
Mechanical model of orthopaedic drilling for augmented-haptics-based training.
Pourkand, Ashkan; Zamani, Naghmeh; Grow, David
2017-10-01
In this study, augmented-haptic feedback is used to combine a physical object with virtual elements in order to simulate anatomic variability in bone. This requires generating levels of force/torque consistent with clinical bone drilling, which exceed the capabilities of commercially available haptic devices. Accurate total force generation is facilitated by a predictive model of axial force during simulated orthopaedic drilling. This model is informed by kinematic data collected while drilling into synthetic bone samples using an instrumented linkage attached to the orthopaedic drill. Axial force is measured using a force sensor incorporated into the bone fixture. A nonlinear function, relating force to axial position and velocity, was used to fit the data. The normalized root-mean-square error (RMSE) of forces predicted by the model compared to those measured experimentally was 0.11 N across various bones with significant differences in geometry and density. This suggests that a predictive model can be used to capture relevant variations in the thickness and hardness of cortical and cancellous bone. The practical performance of this approach is measured using the Phantom Premium haptic device, with some required customizations.
The predictive value of skin prick testing for challenge-proven food allergy: a systematic review.
Peters, Rachel L; Gurrin, Lyle C; Allen, Katrina J
2012-06-01
Immunoglobulin E-mediated (IgE) food allergy affects 6-8% of children, and the prevalence is believed to be increasing. The gold standard of food allergy diagnosis is oral food challenges (OFCs); however, they are resource-consuming and potentially dangerous. Skin prick tests (SPTs) are able to detect the presence of allergen-specific IgE antibodies (sensitization), but they have low specificity for clinically significant food allergy. To reduce the need for OFCs, it has been suggested that children forgo an OFC if their SPT wheal size exceeds a cutoff that has a high predictability for food allergy. Although data for these studies are almost always gathered from high-risk populations, the 95% positive predictive values (PPVs) vary substantially between studies. SPT thresholds with a high probability of food allergy generated from these studies may not be generalizable to other populations, because of highly selective samples and variability in participant's age, test allergens, and food challenge protocol. Standardization of SPT devices and allergens, OFC protocols including standardized cessation criteria, and population-based samples would all help to improve generalizability of PPVs of SPTs.
SPEER-SERVER: a web server for prediction of protein specificity determining sites.
Chakraborty, Abhijit; Mandloi, Sapan; Lanczycki, Christopher J; Panchenko, Anna R; Chakrabarti, Saikat
2012-07-01
Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids' Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/.
Prediction of Baseflow Index of Catchments using Machine Learning Algorithms
NASA Astrophysics Data System (ADS)
Yadav, B.; Hatfield, K.
2017-12-01
We present the results of eight machine learning techniques for predicting the baseflow index (BFI) of ungauged basins using surrogates of catchment-scale climate and physiographic data. The tested algorithms include ordinary least squares, ridge regression, least absolute shrinkage and selection operator (lasso), elastic net, support vector machine, gradient boosted regression trees, random forests, and extremely randomized trees. Our work seeks to identify dominant controls of BFI that can be readily obtained from ancillary geospatial databases and remote sensing measurements, such that the developed techniques can be extended to ungauged catchments. More than 800 gauged catchments spanning the continental United States were selected to develop the general methodology. The BFI calculation was based on baseflow separated from the daily streamflow hydrograph using the HYSEP filter. The surrogate catchment attributes were compiled from multiple sources, including digital elevation models and soil, land use, and climate data, as well as other publicly available ancillary and geospatial data. Eighty percent of the catchments were used to train the ML algorithms, and the remaining 20% were used as an independent test set to measure the generalization performance of the fitted models. A k-fold cross-validation with exhaustive grid search was used to tune the hyperparameters of each model. Initial model development was based on 19 independent variables, but after variable selection and feature ranking we generated revised sparse models of BFI prediction based on only six catchment attributes. These key predictive variables, selected after careful evaluation of the bias-variance tradeoff, are average catchment elevation, slope, fraction of sand, permeability, temperature, and precipitation. The most promising algorithms, exceeding an accuracy score (r-square) of 0.7 on the test data, include support vector machine, gradient boosted regression trees, random forests, and extremely randomized trees. Considering both the accuracy and the computational complexity of these algorithms, we identify extremely randomized trees as the best performing algorithm for BFI prediction in ungauged basins.
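A condensed sketch of the best-performing setup, with random placeholders standing in for the six retained catchment attributes and the observed BFI values:

    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor
    from sklearn.model_selection import GridSearchCV, train_test_split

    rng = np.random.default_rng(5)
    X = rng.normal(size=(800, 6))     # elevation, slope, sand, permeability, temp, precip
    y = rng.uniform(0, 1, size=800)   # baseflow index in [0, 1]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    grid = GridSearchCV(ExtraTreesRegressor(random_state=0),
                        {"n_estimators": [200, 500], "max_depth": [None, 10]}, cv=5)
    grid.fit(X_tr, y_tr)              # exhaustive grid search with 5-fold CV
    print(grid.score(X_te, y_te))     # r-square on the 20% hold-out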
NASA Astrophysics Data System (ADS)
Strahm, Ivo; Munz, Nicole; Braun, Christian; Gälli, René; Leu, Christian; Stamm, Christian
2014-05-01
Water quality in the Swiss river network is affected by many micropollutants from a variety of diffuse sources. This study compares, for the first time in a comprehensive manner, the diffuse sources and substance groups that contribute the most to water contamination in Swiss streams, and highlights the major regions of water pollution. To this end, a simple but comprehensive model was developed to estimate emissions from diffuse sources for the entire Swiss river network of 65 000 km. Based on emission factors, the model calculates catchment-specific losses to streams for more than 15 diffuse sources (such as crop land, grassland, vineyards, fruit orchards, roads, railways, facades, roofs, green space in urban areas, and landfills) and more than 130 different substances from 5 substance groups (pesticides, biocides, heavy metals, human drugs, and animal drugs). For more than 180 000 stream sections, mean annual pollutant loads and mean annual concentration levels were modeled. These data were validated against a set of monitoring data and evaluated against annual average environmental quality standards (AA-EQS). Model validation showed that the estimated mean annual concentration levels are within the range of measured data, so the simulations were considered sufficiently robust for identifying the major sources of diffuse pollution. The analysis showed that widespread pollution of streams can be expected in Switzerland: along more than 18 000 km of the river network, one or more of the simulated substances has a concentration exceeding its AA-EQS, and in single stream sections more than 50 different substances may be involved. Moreover, the simulations showed that in two-thirds of small streams (Strahler order 1 and 2) at least one AA-EQS is always exceeded. The highest numbers of substances exceeding the AA-EQS occur in areas with large fractions of arable cropping, vineyards, and fruit orchards. Urban areas are also of concern, even without considering wastewater treatment plants. Only a small number of problematic substances are expected from grassland. Landfills and roadways are insignificant within the entire Swiss river network but may locally lead to considerable water pollution. Considering all substance groups, pesticides and some heavy metals are the main contributors; many pesticides are expected to exceed their AA-EQS in a substantial percentage of the river network. Modeling a large number of substances from many sources and a huge quantity of stream sections is only possible with a simple model. Nevertheless, the conclusions are robust and indicate where, and for which substance groups, additional efforts for water quality improvement should be undertaken.
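The emission-factor bookkeeping behind such a model can be sketched as follows; the structure (area times emission factor, summed over sources, divided by discharge) is the general idea, while all source areas, factors, and thresholds below are invented:

    sources = {                     # area (ha), emission factor (g/ha/yr) for one substance
        "crop land": (120.0, 1.5),
        "vineyards": (15.0, 20.0),
        "urban":     (30.0, 5.0),
    }
    discharge_l_yr = 2.0e9          # roughly 2 million m3/yr
    aa_eqs_ug_l = 0.1               # annual average EQS

    load_ug = sum(area * ef for area, ef in sources.values()) * 1e6   # g -> ug
    conc_ug_l = load_ug / discharge_l_yr
    print(conc_ug_l, conc_ug_l > aa_eqs_ug_l)    # 0.315 ug/L -> exceeds the AA-EQS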
Predictability and prediction of the total number of winter extremely cold days over China
NASA Astrophysics Data System (ADS)
Luo, Xiao; Wang, Bin
2018-03-01
The current dynamical climate models have limited skill in predicting winter temperature in China. The present study uses physics-based empirical models (PEMs) to explore the sources and limits of the seasonal predictability of the total number of extremely cold days (NECD) over China. A combined cluster-rotated EOF analysis reveals two sub-regions of homogeneous variability among hundreds of stations, namely Northeast China (NE) and Main China (MC). This reduces the large number of predictands to only two indices, NECD-NE and NECD-MC, which facilitates detection of the common sources of predictability for all stations. The circulation anomalies associated with NECD-NE exhibit a zonally symmetric Arctic Oscillation-like pattern, whereas those associated with NECD-MC feature a north-south dipolar pattern over Asia. The predictability of the NECD originates from SST and snow cover anomalies in the preceding September and October. However, the two regions have different SST predictors: the NE predictor is in the western Eurasian Arctic, while the MC predictor is over the tropical-North Pacific. The October snow cover predictors also differ: the NE predictor primarily resides in central Eurasia, while the MC predictor is over western and eastern Eurasia. The PEM prediction results suggest that about 60% (55%) of the total variance of winter NECD over NE (Main) China is likely predictable 1 month in advance. The NECD at each station can also be predicted by using the four predictors that were detected for the two indices. The cross-validated temporal correlation skills exceed 0.70 at most stations. The physical mechanisms by which the autumn Arctic sea ice, snow cover, and tropical-North Pacific SST anomalies affect winter NECD over NE and Main China are discussed.
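A minimal sketch of how such a cross-validated correlation skill can be computed for an empirical prediction model; the linear regression form and the synthetic predictor/predictand data below are our assumptions for illustration:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    # Hypothetical data: 35 winters x 4 detected September-October predictors
    # (SST and snow cover indices), with a synthetic NECD index as target.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((35, 4))
    y = X @ np.array([0.6, -0.4, 0.3, 0.2]) + 0.5 * rng.standard_normal(35)

    # Leave-one-out cross-validation mimics hindcasting each winter in turn.
    y_hat = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    print(np.corrcoef(y, y_hat)[0, 1])  # cross-validated correlation skill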
A multi-model framework for simulating wildlife population response to land-use and climate change
McRae, B.H.; Schumaker, N.H.; McKane, R.B.; Busing, R.T.; Solomon, A.M.; Burdick, C.A.
2008-01-01
Reliable assessments of how human activities will affect wildlife populations are essential for making scientifically defensible resource management decisions. A principal challenge of predicting effects of proposed management, development, or conservation actions is the need to incorporate multiple biotic and abiotic factors, including land-use and climate change, that interact to affect wildlife habitat and populations through time. Here we demonstrate how models of land-use, climate change, and other dynamic factors can be integrated into a coherent framework for predicting wildlife population trends. Our framework starts with land-use and climate change models developed for a region of interest. Vegetation changes through time under alternative future scenarios are predicted using an individual-based plant community model. These predictions are combined with spatially explicit animal habitat models to map changes in the distribution and quality of wildlife habitat expected under the various scenarios. Animal population responses to habitat changes and other factors are then projected using a flexible, individual-based animal population model. As an example application, we simulated animal population trends under three future land-use scenarios and four climate change scenarios in the Cascade Range of western Oregon. We chose two birds with contrasting habitat preferences for our simulations: winter wrens (Troglodytes troglodytes), which are most abundant in mature conifer forests, and song sparrows (Melospiza melodia), which prefer more open, shrubby habitats. We used climate and land-use predictions from previously published studies, as well as previously published predictions of vegetation responses using FORCLIM, an individual-based forest dynamics simulator. Vegetation predictions were integrated with other factors in PATCH, a spatially explicit, individual-based animal population simulator. By incorporating effects of landscape history and limited dispersal, our framework predicted population changes that typically exceeded those expected based on changes in mean habitat suitability alone. Although land-use had greater impacts on habitat quality than did climate change in our simulations, we found that small changes in vital rates resulting from climate change or other stressors can have large consequences for population trajectories. The ability to integrate bottom-up demographic processes like these with top-down constraints imposed by climate and land-use in a dynamic modeling environment is a key advantage of our approach. The resulting framework should allow researchers to synthesize existing empirical evidence, and to explore complex interactions that are difficult or impossible to capture through piecemeal modeling approaches. © 2008 Elsevier B.V.
NASA Astrophysics Data System (ADS)
Hartmann, Jens; Jansen, Nils; Dürr, Hans H.; Harashima, Akira; Okubo, Kenji; Kempe, Stephan
2010-05-01
Silicate weathering and the resulting transport of dissolved matter influence the global carbon cycle in two ways: first, through the uptake of atmospheric/soil CO2, and second, by supplying oceanic ecosystems, via fluvial systems, with the nutrient dissolved silica (DSi). Previous work suggests that regions dominated by volcanics are hyperactive or even 'hot spots' of DSi mobilization from the critical zone. Here, we present a new approach for predicting riverine DSi fluxes from chemical weathering, emphasizing 'first-order' controlling factors (lithology, runoff, relief, land cover, and temperature). This approach is applied to the Japanese Archipelago, a region characterized by a high percentage of volcanics (29.1% of surface area). The presented DSi-flux model is based on data from 516 catchments, covering approximately 56.7% of the area of the Japanese Archipelago. The spatial distribution of lithology - one of the most important first-order controls - is taken from a new, high-resolution map of Japan. Results show that the Japanese Archipelago is a hyperactive region with a specific DSi yield 6.6 times higher than the world average of 3.3 t SiO2 km-2 a-1, but with large regional variations. Approximately 10% of its area exceeds 10 times the world average specific DSi yield. Slope constitutes another important controlling factor on the mobilization of DSi from the critical zone, besides lithology and runoff, and can exceed the influence of runoff on specific DSi yields. Even though the monitored area of the Japanese Archipelago stretches from about 31° to 46° N, temperature is not identified as a significant first-order model variable. This may be because slope, runoff, and lithology are correlated with temperature owing to the regional settings of the Archipelago, so that temperature information is substituted to a certain extent by these factors. Land cover data also do not improve the prediction model, which may partly be attributed to misinterpreted land cover information from satellite images. Implications of the results for chemical weathering rates based on the applied lithological information are discussed. Reference: Hartmann, J., Jansen, N., Dürr, H.H., Harashima, A., Okubo, K., Kempe, S. (2010) Predicting riverine dissolved silica fluxes into coastal zones from a hyperactive region and analysis of their first order controls. International Journal of Earth Sciences, 99(1), 207-230. doi:10.1007/s00531-008-0381-5.
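A minimal sketch of a flux model built from such 'first-order' controls; the functional form, exponents, and lithology factors below are hypothetical illustrations, not the calibrated model from the study:

    # Specific DSi yield modeled from runoff, slope, and areal fractions of
    # lithological classes in a catchment (all coefficients hypothetical).
    def dsi_yield(runoff_mm, slope_deg, frac_volcanic, frac_other):
        b_runoff, b_slope = 0.8, 0.3                 # hypothetical exponents
        k = 2.0 * frac_volcanic + 0.5 * frac_other   # lithology-dependent factor
        return k * runoff_mm ** b_runoff * (1.0 + slope_deg) ** b_slope

    # Illustrative catchment: 1500 mm/yr runoff, 12 degree mean slope,
    # 30% volcanic cover; output nominally in t SiO2 km-2 a-1.
    print(dsi_yield(1500.0, 12.0, 0.3, 0.7))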
Observations on saliva osmolality during progressive dehydration and partial rehydration.
Taylor, Nigel A S; van den Heuvel, Anne M J; Kerry, Pete; McGhee, Sheena; Peoples, Gregory E; Brown, Marc A; Patterson, Mark J
2012-09-01
A need exists to identify dehydrated individuals in stressful settings beyond the laboratory. A predictive index based on changes in saliva osmolality has been proposed, and its efficacy and sensitivity were appraised across mass (water) losses from 1 to 7%. Twelve euhydrated males [serum osmolality: 286.1 mOsm kg(-1) H(2)O (SD 4.3)] completed three exercise- and heat-induced dehydration trials (35.6°C, 56% relative humidity): 7% dehydration (6.15 h), 3% dehydration (with 60% fluid replacement: 2.37 h), and a repeat 7% dehydration (5.27 h). Expectorated saliva osmolality, measured at baseline and at each 1% mass change, was used to predict instantaneous hydration state relative to mass losses of 3 and 6%. Saliva osmolality increased linearly with dehydration, although its basal value and its rate of change varied among and within subjects across trials. Receiver operating characteristic curves indicated good predictive power for saliva osmolality when used with two single-threshold cutoffs to differentiate between hydrated and dehydrated individuals (area under curve: 3% cutoff = 0.868, 6% cutoff = 0.831). However, when analysed using a double-threshold detection technique (3 and 6%), as might be used in a field-based monitor, <50% of the osmolality data could correctly identify individuals who exceeded 3% dehydration. Indeed, within the 3-6% dehydration range its sensitivity was 64%, while beyond 6% dehydration this fell to 42%. Therefore, while expectorated saliva osmolality tracked mass losses within individuals, its large intra- and inter-individual variability limited its predictive power and sensitivity, rendering its utility questionable within a universal dehydration monitor.
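A minimal sketch of the single-threshold ROC evaluation described above, on synthetic data (the osmolality-to-mass-loss relationship and the noise level are our assumptions, not the study's measurements):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Synthetic subjects: % body-mass loss and a noisy linear saliva
    # osmolality response (mOsm/kg), mimicking inter-individual variability.
    rng = np.random.default_rng(2)
    mass_loss = rng.uniform(0.0, 7.0, 500)
    osmolality = 60.0 + 10.0 * mass_loss + rng.normal(0.0, 15.0, 500)

    # Classify 'dehydrated beyond 3% mass loss' from osmolality alone and
    # report the area under the ROC curve.
    dehydrated = mass_loss > 3.0
    print(roc_auc_score(dehydrated, osmolality))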
Tomao, Federica; D'Incalci, Maurizio; Biagioli, Elena; Peccatori, Fedro A; Colombo, Nicoletta
2017-09-15
The platinum-free interval is the most important predictive factor for response to subsequent lines of chemotherapy and the most important prognostic factor for progression-free and overall survival in patients with recurrent epithelial ovarian cancer. A nonplatinum regimen is generally considered the most appropriate approach when the disease recurs very early after the end of chemotherapy, whereas platinum-based chemotherapy is usually adopted when the platinum-free interval exceeds 12 months. However, the therapeutic management of patients with intermediate sensitivity (ie, when the relapse occurs between 6 and 12 months) remains debatable. Preclinical and clinical data suggest that the extension of the platinum-free interval (using a nonplatinum-based regimen) might restore platinum sensitivity, thus allowing survival improvement. The objective of this review was to critically analyze the preclinical and clinical evidence supporting this hypothesis. Cancer 2017;123:3450-9. © 2017 American Cancer Society.
Simulating stick-slip failure in a sheared granular layer using a physics-based constitutive model
Lieou, Charles K. C.; Daub, Eric G.; Guyer, Robert A.; ...
2017-01-14
In this paper, we model laboratory earthquakes in a biaxial shear apparatus using the Shear-Transformation-Zone (STZ) theory of dense granular flow. The theory is based on the observation that slip events in a granular layer are attributed to grain rearrangement at soft spots called STZs, which can be characterized according to principles of statistical physics. We model lab data on granular shear using STZ theory and document direct connections between the STZ approach and rate-and-state friction. We discuss the stability transition from stable shear to stick-slip failure and show that stick slip is predicted by STZ theory when the applied shear load exceeds a threshold value that is modulated by elastic stiffness and frictional rheology. Finally, we also show that STZ theory mimics fault zone dilation during the stick phase, consistent with lab observations.
NASA Astrophysics Data System (ADS)
Murray, Cathryn Clarke; Wong, Janson; Singh, Gerald G.; Mach, Megan; Lerner, Jackie; Ranieri, Bernardo; Peterson St-Laurent, Guillaume; Guimaraes, Alice; Chan, Kai M. A.
2018-06-01
Environmental assessment is the process that decision-makers rely on to predict, evaluate, and prevent biophysical, social, and economic impacts of potential project developments. The determination of significance in environmental assessment is central to environmental management in many nations. We reviewed ten recent environmental impact assessments from British Columbia, Canada and systematically reviewed and scored significance determination and the approaches used by assessors, the use of thresholds in significance determination, threshold exceedances, and the outcomes. Findings of significant impacts were exceedingly rare and practitioners used a combination of significance determination approaches, most commonly relying upon reasoned argumentation. Quantitative thresholds were rarely employed, with less than 10% of the valued components evaluated using thresholds. Even where quantitative thresholds for significance were exceeded, in every case practitioners used a variety of rationales to demote negative impacts to non-significance. These reasons include combinations of scale (temporal and spatial) of impacts, an already exceeded baseline, model uncertainty and/or substituting less stringent thresholds. Governments and agencies can better protect resources by requiring clear and defensible significance determinations, by making government-defined thresholds legally enforceable and accountable, and by requiring or encouraging significance determination through inclusive and collaborative approaches.
Ash deposits - Initiating the change from empiricism to generic engineering. Part 2: Initial results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wessel, R.A.; Wagoner, C.L.
1986-01-01
The goal is to develop and use calculations and measurements from several engineering disciplines that exceed the demonstrated limitations of present empirical techniques for predicting slagging/fouling behavior. In Part 1 of this paper, general relationships were presented for assessing effects of deposits and sootblowing on the real-time performance of heat transfer surfaces in pilot- and commercial-scale steam generators. In Part 2, these concepts are applied to the gas-side fouling of heat exchanger tubes. Deposition and heat transfer are calculated for superheater tubes in laboratory and utility furnaces. Numerical results for deposit thickness and heat flux are presented. Comparisons with data show agreement, demonstrating that the broad-based engineering approach is promising.
Thermal ablation therapeutics based on CNx multi-walled nanotubes
Torti, Suzy V; Byrne, Fiona; Whelan, Orla; Levi, Nicole; Ucer, Burak; Schmid, Michael; Torti, Frank M; Akman, Steven; Liu, Jiwen; Ajayan, Pulickel M; Nalamasu, Omkaram; Carroll, David L
2007-01-01
We demonstrate that nitrogen doped, multi-walled carbon nanotubes (CNx-MWNT) result in photo-ablative destruction of kidney cancer cells when excited by near infrared (NIR) irradiation. Further, we show that effective heat transduction and cellular cytotoxicity depends on nanotube length: effective NIR coupling occurs at nanotube lengths that exceed half the wavelength of the stimulating radiation, as predicted in classical antenna theory. We also demonstrate that this radiation heats the nanotubes through induction processes, resulting in significant heat transfer to surrounding media and cell killing at extraordinarily small radiation doses. This cell death was attributed directly to photothermal effect generated within the culture, since neither the infrared irradiation itself nor the CNx-MWNT were toxic to the cells. PMID:18203437
Application of Markov chain model to daily maximum temperature for thermal comfort in Malaysia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordin, Muhamad Asyraf bin Che; Hassan, Husna
2015-10-22
The first-order Markov chain principle has been widely used to model various meteorological fields for prediction purposes. In this study, 14 years (2000-2013) of daily maximum temperature data from Bayan Lepas were used. Earlier studies showed that the outdoor thermal comfort range (TCR), based on the physiologically equivalent temperature (PET) index, is less than 34°C in Malaysia; the data were therefore classified into two states: a normal state (within the thermal comfort range) and a hot state (above the thermal comfort range). The long-run results show that the probability of the daily maximum temperature exceeding the TCR is only 2.2%, while the probability of it remaining within the TCR is 97.8%.
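A minimal sketch of such a two-state chain and its long-run (stationary) distribution; the transition probabilities below are hypothetical, chosen only so that the stationary hot-state probability comes out near the reported 2.2%:

    import numpy as np

    # P[i, j] = probability of moving from state i to state j
    # (state 0 = normal/within TCR, state 1 = hot/above TCR).
    P = np.array([[0.99, 0.01],
                  [0.45, 0.55]])

    # Stationary distribution: left eigenvector of P for eigenvalue 1,
    # normalized to sum to one.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi /= pi.sum()
    print(pi)  # long-run fractions of normal vs hot days (~0.978, ~0.022)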
Theoretical and Experimental Investigation of Particle Trapping via Acoustic Bubbles
NASA Astrophysics Data System (ADS)
Chen, Yun; Fang, Zecong; Merritt, Brett; Saadat-Moghaddam, Darius; Strack, Dillon; Xu, Jie; Lee, Sungyon
2014-11-01
One important application of lab-on-a-chip devices is the trapping and sorting of micro-objects, with acoustic bubbles emerging as an effective, non-contact method. Acoustically actuated bubbles are known to exert a secondary radiation force on micro-particles and trap them, when this radiation force exceeds the drag force that acts to keep the particles in motion. In this study, we theoretically evaluate the magnitudes of these two forces for varying actuation frequencies and voltages. In particular, the secondary radiation force is calculated directly from bubble oscillation shapes that have been experimentally measured for varying acoustic parameters. Finally, based on the force estimates, we predict the threshold voltage and frequency for trapping and compare them to the experimental results.
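The trapping criterion described above can be written compactly; the Stokes drag form below is our assumption for small particles at low Reynolds number, not necessarily the exact drag law used in the study:

    F_rad > F_drag = 6 \pi \mu R_p u

where F_rad is the secondary radiation force computed from the measured bubble oscillation shapes, \mu is the dynamic viscosity of the fluid, R_p is the particle radius, and u is the particle velocity relative to the fluid. The threshold actuation voltage and frequency then follow from where the two force estimates cross.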
NASA Astrophysics Data System (ADS)
Kujala, J.; Segercrantz, N.; Tuomisto, F.; Slotte, J.
2014-10-01
We have applied positron annihilation spectroscopy to study native point defects in Te-doped n-type and nominally undoped p-type GaSb single crystals. The results show that the dominant vacancy defect trapping positrons in bulk GaSb is the gallium monovacancy. The temperature dependence of the average positron lifetime in both p- and n-type GaSb indicates that negative-ion-type defects with no associated open volume compete with the Ga vacancies. Based on comparison with theoretical predictions, these negative ions are identified as Ga antisites. The concentrations of these negatively charged defects exceed the Ga vacancy concentrations by nearly an order of magnitude. We conclude that the Ga antisite is the native defect responsible for p-type conductivity in GaSb single crystals.
Multivariate Statistical Models for Predicting Sediment Yields from Southern California Watersheds
Gartner, Joseph E.; Cannon, Susan H.; Helsel, Dennis R.; Bandurraga, Mark
2009-01-01
Debris-retention basins in Southern California are frequently used to protect communities and infrastructure from the hazards of flooding and debris flow. Empirical models that predict sediment yields are used to determine the size of the basins. Such models have been developed using analyses of records of the amount of material removed from debris retention basins, associated rainfall amounts, measures of watershed characteristics, and wildfire extent and history. In this study we used multiple linear regression methods to develop two updated empirical models to predict sediment yields for watersheds located in Southern California. The models are based on both new and existing measures of volume of sediment removed from debris retention basins, measures of watershed morphology, and characterization of burn severity distributions for watersheds located in Ventura, Los Angeles, and San Bernardino Counties. The first model presented reflects conditions in watersheds located throughout the Transverse Ranges of Southern California and is based on volumes of sediment measured following single storm events with known rainfall conditions. The second model presented is specific to conditions in Ventura County watersheds and was developed using volumes of sediment measured following multiple storm events. To relate sediment volumes to triggering storm rainfall, a rainfall threshold was developed to identify storms likely to have caused sediment deposition. A measured volume of sediment deposited by numerous storms was parsed among the threshold-exceeding storms based on relative storm rainfall totals. The predictive strength of the two models developed here, and of previously-published models, was evaluated using a test dataset consisting of 65 volumes of sediment yields measured in Southern California. The evaluation indicated that the model developed using information from single storm events in the Transverse Ranges best predicted sediment yields for watersheds in San Bernardino, Los Angeles, and Ventura Counties. This model predicts sediment yield as a function of the peak 1-hour rainfall, the watershed area burned by the most recent fire (at all severities), the time since the most recent fire, watershed area, average gradient, and relief ratio. The model that reflects conditions specific to Ventura County watersheds consistently under-predicted sediment yields and is not recommended for application. Some previously-published models performed reasonably well, while others either under-predicted sediment yields or had a larger range of errors in the predicted sediment yields.
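A minimal sketch of the rainfall-threshold parsing step described above; the storm totals, threshold, and measured volume are hypothetical values chosen only to show the arithmetic:

    # Parse a sediment volume measured after multiple storms among the
    # threshold-exceeding storms, in proportion to their rainfall totals.
    storm_rain_mm = [4.0, 22.0, 35.0, 9.0]   # storms between basin cleanouts
    threshold_mm = 10.0                      # rainfall threshold for deposition
    measured_volume_m3 = 5000.0              # total volume removed from basin

    exceeding = [r for r in storm_rain_mm if r > threshold_mm]
    parsed = [measured_volume_m3 * r / sum(exceeding) for r in exceeding]
    print(parsed)  # per-storm volumes used to fit the regression model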
Uyttendaele, M; Busschaert, P; Valero, A; Geeraerd, A H; Vermeulen, A; Jacxsens, L; Goh, K K; De Loy, A; Van Impe, J F; Devlieghere, F
2009-07-31
Processed ready-to-eat (RTE) foods with a prolonged shelf-life under refrigeration are at-risk products for listeriosis. This manuscript provides an overview of prevalence data (n=1974) and challenge tests (n=299) related to Listeria monocytogenes for three categories of RTE food: i) mayonnaise-based deli-salads (1187 presence/absence tests and 182 challenge tests), ii) cooked meat products (639 presence/absence tests and 92 challenge tests), and iii) smoked fish (90 presence/absence tests and 25 challenge tests), based on data records obtained from various food business operators in Belgium as part of the validation and verification of their HACCP plans over the period 2005-2007. Overall, the prevalence of L. monocytogenes in these RTE foods in the present study was lower compared to former studies in Belgium. For mayonnaise-based deli-salads, the pathogen was detected in 25 g in 80 out of 1187 samples (6.7%). L. monocytogenes-positive samples were often associated with smoked fish deli-salads. Cooked meat products showed a 1.1% (n=639) prevalence of the pathogen. For both food categories, numbers per gram never exceeded 100 CFU. L. monocytogenes was detected in 27.8% (25/90) of smoked fish samples, while 4/25 positive samples failed to comply with the 100 CFU/g limit set out in EU Regulation 2073/2005. Challenge testing showed growth potential in 18/182 (9.9%) deli-salads and 61/92 (66%) cooked meat products. Nevertheless, both for deli-salads and cooked meat products, appropriate product formulation and storage conditions based upon hurdle technology could guarantee no growth of L. monocytogenes throughout the shelf-life as specified by the food business operator. Challenge testing of smoked fish showed growth of L. monocytogenes in 12/25 samples stored for 3-4 weeks at 4 °C. Of 45 (non-inoculated) smoked fish samples (13 of which were initially positive in 25 g) subjected to shelf-life testing, numbers exceeded 100 CFU/g in only one sample after storage until the end of shelf-life. Predictive models, dedicated to and validated for a particular food category and taking into account the inhibitory effect of various factors in hurdle technology, provided predictions of the growth potential of L. monocytogenes corresponding to the growth observed in challenge testing. Based on the combined prevalence data and growth potential, mayonnaise-based deli-salads and cooked meat products can be classified as intermediate-risk foods, and smoked fish as a high-risk food.
Results of the basewide monitoring program at Wright-Patterson Air Force Base, Ohio, 1993-1994
Schalk, C.W.; Cunningham, W.L.
1996-01-01
Geologic and hydrologic data were collected at Wright-Patterson Air Force Base (WPAFB), Ohio, as part of the Basewide Monitoring Program (BMP) that began in 1992. The BMP was designed as a long-term project to characterize ground-water and surface-water quality (including streambed sediments), describe water-quality changes as water enters, flows across, and exits the Base, and investigate the effects of activities at WPAFB on regional water quality. Ground water, surface water, and streambed sediment were sampled in four rounds between August 1993 and September 1994 to provide the analytical data needed to address the objectives of the BMP. Surface-water-sampling rounds were designed to include most of the seasonal hydrologic conditions encountered in southwestern Ohio, including baseflow conditions and spring runoff. Ground-water-sampling rounds were scheduled for times of recession and recharge. Ground-water data were used to construct water-table, potentiometric, and vertical-gradient maps of the WPAFB area. Water levels have not changed significantly since 1987, but pumping on and near the Base can have a marked effect on water levels in localized areas. Ground-water gradients generally were downward throughout Area B (the southwestern third of the Base) and in the eastern third of Areas A and C (the northeastern two-thirds of the Base), and were upward in the vicinity of Mad River. Stream-discharge measurements verified these gradients. Many of the U.S. Environmental Protection Agency maximum contaminant level (MCL) exceedances of inorganic constituents in ground water were associated with water from the bedrock. Exceedances of concentrations of chromium and nickel were found consistently in five wells completed in the glacial aquifer beneath the Base. Five organic compounds [trichloroethylene (TCE), tetrachloroethylene (PCE), vinyl chloride, benzene, and bis(2-ethylhexyl) phthalate] were detected at concentrations that exceeded MCLs; all of the TCE, PCE, and vinyl chloride exceedances were in water from the glacial aquifer, whereas the benzene exceedance and most of the bis(2-ethylhexyl) phthalate exceedances were in water from the bedrock. TCE (16 exceedances) and PCE (11 exceedances) most frequently exceeded the MCLs and were detected in the most samples. A decrease in concentrations of inorganic and organic compounds with depth suggests that many constituents detected in ground-water samples are associated partly with human activities, in addition to their natural occurrence. Included in the list of these constituents are nickel, chromium, copper, lead, vanadium, zinc, bromide, and nitrate. Many constituents are not found at depths greater than 60 to 80 feet, possibly indicating that human effects on ground-water quality are limited to shallow flow systems. Organic compounds detected in shallow or intermediate-depth wells were aligned mostly with flowpaths that pass through or near identified hazardous-waste sites. Few organic contaminants were detected in surface water. The only organic compound to exceed MCLs for drinking water was bis(2-ethylhexyl) phthalate, and it was detected at concentrations just above the MCL. Inorganic constituents detected at concentrations exceeding MCLs include beryllium (twice), lead (once), thallium (once), and gross alpha radiation (once). No polycyclic aromatic hydrocarbons (PAHs) were detected in surface-water samples.
The highest concentrations of contaminants detected during a storm event were in samples from upgradient locations, indicating that off-Base sources may contribute to surface-water contamination. Inorganic and organic contaminants were found in streambed sediments at WPAFB, primarily in Areas A and C. Trace metals such as lead, mercury, arsenic, and cadmium were detected at 16 locations at concentrations considered 'elevated' according to a ranking scheme for sediments. PAHs were the organic compounds detected most frequently and in the highest concentrations.
Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A
2017-04-01
In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
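A minimal sketch of an analytical acceleration model of this kind; the functional form and all constants below are our assumptions for illustration, not the paper's actual model:

    # Total time = compute time / n_gpus + data-transfer cost over the
    # 1 Gbps server interconnect (n_servers - 1 links traversed).
    def predicted_time_s(t_compute_s, n_gpus, data_gb, link_gbps=1.0, n_servers=4):
        t_parallel = t_compute_s / n_gpus
        t_transfer = data_gb * 8.0 / link_gbps * (n_servers - 1)
        return t_parallel + t_transfer

    t1 = predicted_time_s(280.0, 1, 0.5, n_servers=1)   # single-GPU baseline
    t14 = predicted_time_s(280.0, 14, 0.5)              # 14 GPUs on 4 servers
    print(t1 / t14)  # acceleration approaches n_gpus as transfer cost shrinks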
Mendenhall, Tai J; Harper, Peter G; Henn, Lisa; Rudser, Kyle D; Schoeller, Bill P
2014-03-01
Students Against Nicotine and Tobacco Addiction is a community-based participatory research project that engages local medical and mental health providers in partnership with students, teachers, and administrators at the Minnesota-based Job Corps. This intervention contains multiple and synchronous elements designed to allay the stress that students attribute to smoking, including physical activities, nonphysical activities, purposeful modifications to the campus's environment and rules/policies, and on-site smoking cessation education and peer support. The intent of the present investigation was to evaluate (a) the types of stress most predictive of smoking behavior and/or nicotine dependence, (b) which activities students participated in, and (c) which activities were most predictive of behavior change (or readiness to change). Quantitative data were collected through 5 campus-wide surveys; response rates for each survey exceeded 85%. Stressors most commonly cited included struggles to find a job, financial problems, family conflict, lack of privacy or freedom, missing family or being homesick, dealing with Job Corps rules, and other/unspecified. The most popular activities in which students took part were physically active ones; however, the activities most predictive of beneficial change were nonphysical. Approximately one third of respondents were nicotine dependent at baseline. Nearly half intended to quit within 1 month and 74% intended to quit within 6 months. The interventions perceived as most helpful toward reducing smoking were nonphysical in nature. Future efforts with this and comparable populations should engage youth in advancing such activities within a broader range of activity choices, alongside conventional education and support.
Trait-based diversification shifts reflect differential extinction among fossil taxa
Wagner, Peter J.; Estabrook, George F.
2014-01-01
Evolution provides many cases of apparent shifts in diversification associated with particular anatomical traits. Three general models connect these patterns to anatomical evolution: (i) elevated net extinction of taxa bearing particular traits, (ii) elevated net speciation of taxa bearing particular traits, and (iii) elevated evolvability expanding the range of anatomies available to some species. Trait-based diversification shifts predict elevated hierarchical stratigraphic compatibility (i.e., primitive→derived→highly derived sequences) among pairs of anatomical characters. The three specific models further predict (i) early loss of diversity for taxa retaining primitive conditions (elevated net extinction), (ii) increased diversification among later members of a clade (elevated net speciation), and (iii) increased disparity among later members in a clade (elevated evolvability). Analyses of 319 anatomical and stratigraphic datasets for fossil species and genera show that hierarchical stratigraphic compatibility exceeds the expectations of trait-independent diversification in the vast majority of cases, which was expected if trait-dependent diversification shifts are common. Excess hierarchical stratigraphic compatibility correlates with early loss of diversity for groups retaining primitive conditions rather than delayed bursts of diversity or disparity across entire clades. Cambrian clades (predominantly trilobites) alone fit null expectations well. However, it is not clear whether evolution was unusual among Cambrian taxa or only early trilobites. At least among post-Cambrian taxa, these results implicate models, such as competition and extinction selectivity/resistance, as major drivers of trait-based diversification shifts at the species and genus levels while contradicting the predictions of elevated net speciation and elevated evolvability models. PMID:25331898
Larson, Emily S; Conder, Jason M; Arblaster, Jennifer A
2018-06-01
Releases of Perfluoroalkyl and Polyfluoroalkyl Substances (PFASs) associated with Aqueous Film Forming Foams (AFFFs) have the potential to impact on-site and downgradient aquatic habitats. Dietary exposures of aquatic-dependent birds were modeled for seven PFASs (PFHxA, PFOA, PFNA, PFDA, PFHxS, PFOS, and PFDS) using five different scenarios based on measurements of PFASs obtained from five investigations of sites historically-impacted by AFFF. Exposure modeling was conducted for four avian receptors representing various avian feeding guilds: lesser scaup (Aythya affinis), spotted sandpiper (Actitis macularia), great blue heron (Ardea herodias), and osprey (Pandion haliaetus). For the receptor predicted to receive the highest PFAS exposure (spotted sandpiper), model-predicted exposure to PFOS exceeded a laboratory-based, No Observed Adverse Effect Level exposure benchmark in three of the five model scenarios, confirming that risks to aquatic-dependent avian wildlife should be considered for investigations of historic AFFF releases. Perfluoroalkyl sulfonic acids (PFHxS, PFOS, and PFDS) represented 94% (on average) of total PFAS exposures due to their prevalence in historical AFFF formulations, and increased bioaccumulation in aquatic prey items and partitioning to aquatic sediment relative to perfluoroalkyl carboxylic acids. Sediment-associated PFASs (rather than water-associated PFASs) were the source of the highest predicted PFAS exposures, and are likely to be very important for understanding and managing AFFF site-specific ecological risks. Additional considerations for research needs and site-specific ecological risk assessments are discussed with the goal of optimizing ecological risk-based decision making at AFFF sites and prioritizing research needs. Copyright © 2018 Elsevier Ltd. All rights reserved.
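A minimal sketch of the dietary-exposure screen described above; every number below (prey concentrations, intake rates, body weight, benchmark) is hypothetical, chosen only to show the arithmetic:

    # Daily dose = sum over prey items of (concentration x intake rate),
    # divided by body weight, then compared to a NOAEL-based benchmark.
    prey_conc_ug_per_kg = {"invertebrates": 120.0, "fish": 40.0}  # PFOS in diet
    intake_kg_per_day = {"invertebrates": 0.015, "fish": 0.005}
    body_weight_kg = 0.04  # e.g., a small shorebird receptor

    dose_ug_per_kg_day = sum(prey_conc_ug_per_kg[p] * intake_kg_per_day[p]
                             for p in prey_conc_ug_per_kg) / body_weight_kg
    noael_ug_per_kg_day = 21.0  # hypothetical avian exposure benchmark
    print(dose_ug_per_kg_day, dose_ug_per_kg_day > noael_ug_per_kg_day)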
Stone, Wesley W.; Crawford, Charles G.; Gilliom, Robert J.
2013-01-01
Watershed Regressions for Pesticides for multiple pesticides (WARP-MP) are statistical models developed to predict concentration statistics for a wide range of pesticides in unmonitored streams. The WARP-MP models use the national atrazine WARP models in conjunction with an adjustment factor for each additional pesticide. The WARP-MP models perform best for pesticides with application timing and methods similar to those used with atrazine. For other pesticides, WARP-MP models tend to overpredict concentration statistics for the model development sites. For WARP and WARP-MP, the less-than-ideal sampling frequency for the model development sites leads to underestimation of the shorter-duration concentration statistics; hence, the WARP models tend to underpredict 4- and 21-d maximum moving-average concentrations, with median errors ranging from 9 to 38%. As a result of this sampling bias, pesticides that performed well with the model development sites are expected to have predictions that are biased low for these shorter-duration concentration statistics. The overprediction by WARP-MP apparent for some of the pesticides is variably offset by underestimation of the model development concentration statistics. Of the 112 pesticides used in the WARP-MP application to stream segments nationwide, 25 were predicted to have concentration statistics with a 50% or greater probability of exceeding one or more aquatic life benchmarks in one or more stream segments. Geographically, many of the modeled streams in the Corn Belt Region were predicted to have one or more pesticides that exceeded an aquatic life benchmark during 2009, indicating the potential vulnerability of streams in this region.
Climate change effects on livestock in the Northeast U.S. and strategies for adaptation
USDA-ARS?s Scientific Manuscript database
The livestock industries are a major contributor to the economy of the northeastern United States. Climate models predict increased average maximum temperatures, days with temperatures exceeding 25°C, and higher annual precipitation in the Northeast. These environmental changes combined with increas...
Friend or Foe: Subjective Expected Relative Similarity as a Determinant of Cooperation
ERIC Educational Resources Information Center
Fischer, Ilan
2009-01-01
Subjective expected relative similarity (SERS) is a descriptive theory that explains cooperation levels in single-step prisoner's dilemma (PD) games. SERS predicts that individuals cooperate whenever their "subjectively perceived similarity" with their opponent exceeds a situational index, namely the game's "similarity threshold." A thought…
Mark J. Twery; Aaron R. Weiskittel
2013-01-01
Forests are complex and dynamic ecosystems comprising individual trees that can vary in both size and species. In comparison to other organisms, trees are relatively long lived (40-2000 years), quite plastic in terms of their morphology and ecological niche, and adapted to a wide variety of habitats, which can make predicting their behaviour exceedingly difficult....
USDA-ARS?s Scientific Manuscript database
Atmospheric CO2 concentration will likely exceed 500 uL L-1 by 2050, often increasing plant community productivity in part by increasing abundance of species favored by increased CA. Whether increased abundance translates to increased inflorescence production is poorly understood, and is important ...
Identification of conditions for successful aphid control by ladybirds in greenhouses
USDA-ARS?s Scientific Manuscript database
As part of my research on the mass production and augmentative release of ladybirds, I reviewed the primary research literature to test the prediction that ladybirds are effective aphid predators in greenhouses. Aphid population reduction exceeded 50% in most studies and ladybird release rates usual...
Marine beaches are occasionally contaminated by unacceptably high levels of fecal indicator bacteria (FIB) that exceed EPA water quality criteria. Here we describe application of a recent version of the software package Virtual Beach tool (VB 3.0.6) to build and evaluate multiple...
ToxCast and Virtual Embryo: in vitro data and in silico models for predictive toxicology
Human populations may be exposed to thousands of chemicals only a fraction of which have detailed toxicity data. Traditional in vivo animal testing is costly, lengthy and normally conducted with dosages that exceed relatively insensitive to concentrations of chemicals at realisti...
Universality of quantum gravity corrections.
Das, Saurya; Vagenas, Elias C
2008-11-28
We show that the existence of a minimum measurable length and the related generalized uncertainty principle (GUP), predicted by theories of quantum gravity, influence all quantum Hamiltonians. Thus, they predict quantum gravity corrections to various quantum phenomena. We compute such corrections to the Lamb shift, the Landau levels, and the tunneling current in a scanning tunneling microscope. We show that these corrections can be interpreted in two ways: (a) either that they are exceedingly small, beyond the reach of current experiments, or (b) that they predict upper bounds on the quantum gravity parameter in the GUP, compatible with experiments at the electroweak scale. Thus, more accurate measurements in the future should either be able to test these predictions, or further tighten the above bounds and predict an intermediate length scale between the electroweak and the Planck scale.
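For reference, one commonly used one-dimensional form of the GUP is sketched below; the paper's exact formulation (and its tensorial generalization) may differ in detail:

    [x, p] = i\hbar\,(1 + \beta p^2),
    \qquad \Delta x\,\Delta p \ge \frac{\hbar}{2}\,\bigl(1 + \beta(\Delta p)^2\bigr),
    \qquad \Delta x_{\min} = \hbar\sqrt{\beta},

where \beta = \beta_0/(M_{Pl} c)^2 and \beta_0 is the dimensionless quantum gravity parameter on which the measurements place upper bounds.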
Mathematical modeling of the in-mold coating process for injection-molded thermoplastic parts
NASA Astrophysics Data System (ADS)
Chen, Xu
In-Mold Coating (IMC) has been used successfully for many years for exterior body panels made from compression-molded Sheet Molding Compound (SMC). The coating material is a single-component reactive fluid designed to improve the surface quality of SMC moldings in terms of functional and cosmetic properties. When injected onto a cured SMC part, IMC cures and bonds to provide a paint-like surface. Because of its distinct advantages, IMC is being considered for application to injection-molded thermoplastic parts. For a successful in-mold coating operation, there are two key issues related to the flow of the coating. First, the injection nozzle should be located such that the thermoplastic substrate is totally covered and the potential for air trapping is minimized. The selected location should be cosmetically acceptable, since it will most likely leave a mark on the coated surface, and it also needs to be accessible for ease of maintenance. Second, the hydraulic force generated by the coating injection pressure should not exceed the available clamping tonnage; if the clamping force is exceeded, coating leakage will occur. In this study, mathematical models for IMC flow on the compressible thermoplastic substrate have been developed. The finite difference method (FDM) is first used to solve the one-dimensional (1D) IMC flow problem. In order to investigate the application of the control volume based finite element method (CV/FEM) to more complicated two-dimensional IMC flow, that method is first evaluated by solving the 1D IMC flow problem. An analytical solution, which can be obtained when a linear relationship between coating thickness and coating injection pressure is assumed, is used to verify the numerical results. The mathematical models for the two-dimensional (2D) IMC flow are based on the generalized Hele-Shaw approximation. It has been found experimentally that the power law viscosity model adequately predicts the rheological behavior of the coating. The compressibility of the substrate is modeled by the 2-domain Tait PVT equation. CV/FEM is used to solve the discretized governing equations. A computer code has been developed to predict the fill pattern of the coating and the injection pressure. A number of experiments have been conducted to verify the numerical predictions of the computer code. It has been found, both numerically and experimentally, that the substrate thickness plays a significant role in the IMC fill pattern.
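The two constitutive relations named above can be sketched in standard notation; the symbols follow common usage, and the dissertation's fitted parameter values are not reproduced here:

    Power-law viscosity of the coating (m: consistency, n: power-law index):
        \eta(\dot{\gamma}) = m\,\dot{\gamma}^{\,n-1}

    2-domain Tait PVT equation for the compressible substrate, with separate
    fits of v_0(T), B(T), and v_t(T,P) in the melt and solid domains:
        v(T,P) = v_0(T)\left[1 - C\,\ln\!\left(1 + \frac{P}{B(T)}\right)\right] + v_t(T,P),
        \qquad C = 0.0894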
Dong, Yun-Wei; Li, Xiao-Xu; Choi, Francis M P; Williams, Gray A; Somero, George N; Helmuth, Brian
2017-05-17
Biogeographic distributions are driven by cumulative effects of smaller scale processes. Thus, vulnerability of animals to thermal stress is the result of physiological sensitivities to body temperature (Tb), microclimatic conditions, and behavioural thermoregulation. To understand interactions among these variables, we analysed the thermal tolerances of three species of intertidal snails from different latitudes along the Chinese coast, and estimated potential Tb in different microhabitats at each site. We then empirically determined the temperatures at which heart rate decreased sharply with rising temperature (Arrhenius breakpoint temperature, ABT) and at which it fell to zero (flat line temperature, FLT) to calculate thermal safety margins (TSM). Regular exceedance of FLT in sun-exposed microhabitats, a lethal effect, was predicted for only one mid-latitude site. However, ABTs of some individuals were exceeded at sun-exposed microhabitats in most sites, suggesting physiological impairment for snails with poor behavioural thermoregulation and revealing inter-individual variations (physiological polymorphism) of thermal limits. An autocorrelation analysis of Tb showed that predictability of extreme temperatures was lowest at the hottest sites, indicating that the effectiveness of behavioural thermoregulation is potentially lowest at these sites. These results illustrate the critical roles of mechanistic studies at small spatial scales when predicting effects of climate change. © 2017 The Author(s).
Cost analysis of whole genome sequencing in German clinical practice.
Plöthner, Marika; Frank, Martin; von der Schulenburg, J-Matthias Graf
2017-06-01
Whole genome sequencing (WGS) is an emerging tool in clinical diagnostics. However, little has been said about its procedure costs, owing to a dearth of related cost studies. This study helps fill this research gap by analyzing the execution costs of WGS within the setting of German clinical practice. First, to estimate costs, a sequencing process reflecting clinical practice was charted. Once relevant resources were identified, a quantification and monetary evaluation was conducted using data and information from expert interviews with clinical geneticists and with personnel at private enterprises and hospitals. This study focuses on identifying the costs associated with the standard sequencing process, and the procedure costs for a single WGS were analyzed on the basis of two sequencing platforms, namely the HiSeq 2500 and the HiSeq X Ten, both by Illumina, Inc. In addition, sensitivity analyses were performed to assess the influence of different levels of platform utilization and different coverage values on fixed-cost degression. In the base-case scenario, which features 80% utilization and 30-fold coverage, the cost of a single WGS analysis with the HiSeq 2500 was estimated at €3858.06. The cost of sequencing materials was estimated at €2848.08; related personnel costs (€396.94) and acquisition/maintenance costs (€607.39) were also found. In comparison, sequencing using the latest technology (the HiSeq X Ten) was approximately 63% cheaper, at €1411.20. The estimated costs of WGS currently exceed the predicted 'US$1000 per genome' by more than a factor of 3.8; the material costs by themselves exceed this predicted cost.
Nansen, Christian; Vaughn, Kathy; Xue, Yingen; Rush, Charlie; Workneh, Fekede; Goolsby, John; Troxclair, Noel; Anciso, Juan; Gregory, Ashley; Holman, Daniel; Hammond, Abby; Mirkov, Erik; Tantravahi, Pratyusha; Martini, Xavier
2011-08-01
Approximately US $1.3 billion is spent each year on insecticide applications in major row crops. Despite this significant economic importance, there are currently no widely established decision-support tools available to assess the suitability of spray application conditions or the predicted quality or performance of a given commercial insecticide application. We conducted a field study involving 14 commercial spray applications with either a fixed-wing airplane (N=8) or a ground rig (N=6), and we regressed the observed spray deposition (percentage coverage) on environmental variables. We showed that (1) ground rig applications provided higher spray deposition than aerial applications, (2) spray deposition was lowest in the bottom portion of the canopy, (3) increase in plant height reduced spray deposition, (4) wind speed increased spray deposition, and (5) higher ambient temperatures and dew points increased spray deposition. Potato psyllid, Bactericera cockerelli (Sulc) (Hemiptera: Triozidae), mortality increased asymptotically to approximately 60% in response to abamectin spray depositions exceeding around 20%, whereas mortality of psyllid adults reached an asymptotic response of approximately 40% when lambda-cyhalothrin/thiamethoxam spray deposition exceeded 30%. A spray deposition support tool was developed (http://pilcc.tamu.edu/) that may be used to make decisions regarding (1) when is the best time of day to conduct spray applications and (2) which insecticide to spray based on expected spray deposition. The main conclusion from this analysis is that optimization of insecticide spray deposition should be considered a fundamental pillar of successful integrated pest management programs, both to increase the efficiency of sprays (and therefore reduce production costs) and to reduce the risk of resistance development in target pest populations.
Topological events in single molecules of E. coli DNA confined in nanochannels
Reifenberger, Jeffrey G.; Dorfman, Kevin D.; Cao, Han
2015-01-01
We present experimental data concerning potential topological events such as folds, internal backfolds, and/or knots within long molecules of double-stranded DNA when they are stretched by confinement in a nanochannel. Genomic DNA from E. coli was labeled near the ‘GCTCTTC’ sequence with a fluorescently labeled dUTP analog and stained with the DNA intercalator YOYO. Individual long molecules of DNA were then linearized and imaged using methods based on the NanoChannel Array technology (Irys® System) available from BioNano Genomics. Data were collected on 189,153 molecules of length greater than 50 kilobases. A custom code was developed to search for abnormal intensity spikes in the YOYO backbone profile along the length of individual molecules. By correlating the YOYO intensity spikes with the aligned barcode pattern to the reference, we were able to correlate the bright intensity regions of YOYO with abnormal stretching in the molecule, which suggests these events were either a knot or a region of internal backfolding within the DNA. We interpret the results of our experiments involving molecules exceeding 50 kilobases in the context of existing simulation data for relatively short DNA, typically several kilobases. The frequency of these events is lower than the predictions from simulations, while the size of the events is larger than simulation predictions and often exceeds the molecular weight of the simulated molecules. We also identified DNA molecules that exhibit large, single folds as they enter the nanochannels. Overall, topological events occur at a low frequency (~7% of all molecules) and pose an easily surmountable obstacle for the practice of genome mapping in nanochannels. PMID:25991508
NASA Astrophysics Data System (ADS)
Singh, Saurabh; Subrahmanyan, Ravi; Shankar, N. Udaya; Rao, Mayuri Sathyanarayana; Girish, B. S.; Raghunathan, A.; Somashekar, R.; Srivani, K. S.
2018-04-01
The global 21-cm signal from Cosmic Dawn (CD) and the Epoch of Reionization (EoR), at redshifts z ˜ 6-30, probes the nature of first sources of radiation as well as physics of the Inter-Galactic Medium (IGM). Given that the signal is predicted to be extremely weak, of wide fractional bandwidth, and lies in a frequency range that is dominated by Galactic and Extragalactic foregrounds as well as Radio Frequency Interference, detection of the signal is a daunting task. Critical to the experiment is the manner in which the sky signal is represented through the instrument. It is of utmost importance to design a system whose spectral bandpass and additive spurious signals can be well calibrated and any calibration residual does not mimic the signal. Shaped Antenna measurement of the background RAdio Spectrum (SARAS) is an ongoing experiment that aims to detect the global 21-cm signal. Here we present the design philosophy of the SARAS 2 system and discuss its performance and limitations based on laboratory and field measurements. Laboratory tests with the antenna replaced with a variety of terminations, including a network model for the antenna impedance, show that the gain calibration and modeling of internal additive signals leave no residuals with Fourier amplitudes exceeding 2 mK, or residual Gaussians of 25 MHz width with amplitudes exceeding 2 mK. Thus, even accounting for reflection and radiation efficiency losses in the antenna, the SARAS 2 system is capable of detection of complex 21-cm profiles at the level predicted by currently favoured models for thermal baryon evolution.
Archaeological Feedback as a Research Methodology in Near-Surface Geophysics
NASA Astrophysics Data System (ADS)
Maillol, J.; Ortega-Ramírez, J.; Berard, B.
2005-05-01
A unique characteristic of archaeological geophysics is that it presents researchers in applied geophysics with the opportunity to verify their interpretation of geophysical data through direct observation of often extremely detailed excavations. This is usually known as archaeological feedback. Archaeological materials have been slowly buried over periods ranging from several hundreds to several thousands of years, undergoing natural sedimentary and soil-forming processes. Once excavated, archaeological features therefore constitute more realistic test subjects than the targets artificially buried in common geophysical test sites. We present the outcome of several such verification tests aimed at clarifying issues of geometry and spatial resolution in ground penetrating radar (GPR) images. On the site of a Roman villa in SE Portugal, 500 MHz GPR images are shown to depict very accurately the position and geometry of partially excavated remains. In the Maya city of Palenque, Mexico, 900 MHz data allow the depth of tombs and natural cavities to be determined with cm accuracy. The predicted lateral extent of the cavities is more difficult to match with reality because of the clutter caused by the high frequency. In the rainforest of Western Africa, 500 MHz GPR was used to prospect for stone tool sites. When very careful positioning and high-density data sampling are achieved, stones can be accurately located and retrieved at depths exceeding 1 m, with maximum positioning errors of 12 cm horizontally and 2 cm vertically. In more difficult data collection conditions, however, positioning errors are shown to largely exceed the predictions based on quantitative theoretical resolution considerations. Geophysics has long been recognized as a powerful tool for prospecting and characterizing archaeological sites. Reciprocally, these results show that archaeology is an unparalleled test environment for the assessment and development of high resolution geophysical methods.
NASA Astrophysics Data System (ADS)
Casado-Pascual, Jesús; Denk, Claus; Gómez-Ordóñez, José; Morillo, Manuel; Hänggi, Peter
2003-03-01
In the context of the phenomenon of stochastic resonance (SR), we study the correlation function, the signal-to-noise ratio (SNR), and the ratio of output over input SNR, i.e., the gain, associated with the nonlinear response of a bistable system driven by time-periodic forces and white Gaussian noise. These quantifiers for SR are evaluated using the techniques of linear response theory (LRT) beyond the usually employed two-mode approximation scheme. We analytically demonstrate within such an extended LRT description that the gain cannot exceed unity. We implement an efficient algorithm, based on work by Greenside and Helfand (detailed in the Appendix), to integrate the driven Langevin equation. The predictions of LRT are carefully tested against the results obtained from numerical solutions of the corresponding Langevin equation over a wide range of parameter values. We further present an accurate procedure to evaluate the distinct contributions of the coherent and incoherent parts of the correlation function to the SNR and the gain. As a main result we show for subthreshold driving that both the correlation function and the SNR can deviate substantially from the predictions of LRT and yet the gain can be either larger or smaller than unity. In particular, we find that the gain can exceed unity in the strongly nonlinear regime, which is characterized by weak noise and very slow multifrequency subthreshold input signals with a small duty cycle. This latter result is in agreement with recent analog simulation results by Gingl et al. [ICNF 2001, edited by G. Bosman (World Scientific, Singapore, 2002), pp. 545-548; Fluct. Noise Lett. 1, L181 (2001)].
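A minimal numerical sketch of the driven bistable Langevin dynamics using a simple Euler-Maruyama scheme (a stand-in for the Greenside-Helfand integrator used in the paper; all parameter values are illustrative):

```python
import numpy as np

def simulate_bistable(A=0.1, Omega=0.01, D=0.05, dt=0.01, n_steps=200_000, seed=1):
    """Euler-Maruyama integration of dx = (x - x**3 + A*cos(Omega*t)) dt + sqrt(2D) dW,
    i.e., overdamped motion in the double well U(x) = -x**2/2 + x**4/4 with drive."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = 1.0
    kicks = rng.normal(0.0, np.sqrt(2.0 * D * dt), n_steps - 1)
    for i in range(n_steps - 1):
        t = i * dt
        x[i + 1] = x[i] + (x[i] - x[i] ** 3 + A * np.cos(Omega * t)) * dt + kicks[i]
    return x

x = simulate_bistable()
t = np.arange(x.size) * 0.01
# Amplitude of the output at the driving frequency; comparing its power with
# the input power and the broadband noise floor yields the SNR and the gain.
amp = np.abs(np.mean(x * np.exp(-1j * 0.01 * t)))
print(f"output amplitude at the drive frequency: {amp:.3f}")
```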
Heart strings and purse strings: Carryover effects of emotions on economic decisions.
Lerner, Jennifer S; Small, Deborah A; Loewenstein, George
2004-05-01
We examined the impact of specific emotions on the endowment effect, the tendency for selling prices to exceed buying or "choice" prices for the same object. As predicted by appraisal-tendency theory, disgust induced by a prior, irrelevant situation carried over to normatively unrelated economic decisions, reducing selling and choice prices and eliminating the endowment effect. Sadness also carried over, reducing selling prices but increasing choice prices--producing a "reverse endowment effect" in which choice prices exceeded selling prices. The results demonstrate that incidental emotions can influence decisions even when real money is at stake, and that emotions of the same valence can have opposing effects on such decisions.
NASA Astrophysics Data System (ADS)
Ramsey, Michael S.; Harris, Andrew J. L.
2013-01-01
Volcanological remote sensing spans numerous techniques, wavelength regions, data collection strategies, targets, and applications. Attempting to foresee and predict the growth vectors in this broad and rapidly developing field is therefore exceedingly difficult. However, we attempted to make such predictions at the American Geophysical Union (AGU) meeting session entitled "Volcanology 2010: How will the science and practice of volcanology change in the coming decade?", held in December 2000, and at the follow-up session 10 years later, "Looking backward and forward: Volcanology in 2010 and 2020." In this summary paper, we assess how well we did with our predictions for specific facets of volcano remote sensing made in 2000, review the advances made over the most recent decade, and attempt a new look ahead to the next decade. In completing this review, we consider only the subset of the field focused on thermal infrared remote sensing of surface activity using ground-based and space-based technology and the subsequent research results. This review keeps to the original scope of both AGU presentations and therefore does not address the entire field of volcanological remote sensing, which uses technologies in other wavelength regions (e.g., ultraviolet, radar, etc.) or the study of volcanic processes other than those associated with surface (mostly effusive) activity. We therefore do not consider remote sensing of ash/gas plumes, for example. In 2000, we had looked forward to a "golden age" in volcanological remote sensing, with a variety of new orbital missions both planned and recently launched. In addition, exciting field-based sensors such as hand-held thermal cameras were becoming available and being quickly adopted by volcanologists for both monitoring and research applications. All of our predictions in 2000 came true, but at a pace far quicker than we predicted. Relative to the 2000-2010 timeframe, the coming decade will see far fewer new orbital instruments with direct applications to volcanology. However, ground-based technologies and applications will continue to proliferate, and unforeseen technology promises many exciting possibilities that will advance volcano thermal monitoring and science far beyond what we can currently envision.
Individual laboratory-measured discount rates predict field behavior
Chabris, Christopher F.; Laibson, David; Morris, Carrie L.; Schuldt, Jonathon P.; Taubinsky, Dmitry
2009-01-01
We estimate discount rates of 555 subjects using a laboratory task and find that these individual discount rates predict inter-individual variation in field behaviors (e.g., exercise, BMI, smoking). The correlation between the discount rate and each field behavior is small: none exceeds 0.28 and many are near 0. However, the discount rate has at least as much predictive power as any variable in our dataset (e.g., sex, age, education). The correlation between the discount rate and field behavior rises when field behaviors are aggregated: these correlations range from 0.09-0.38. We present a model that explains why specific intertemporal choice behaviors are only weakly correlated with discount rates, even though discount rates robustly predict aggregates of intertemporal decisions. PMID:19412359
Faulisi, Sonia; Reschini, Marco; Borroni, Raffaella; Paffoni, Alessio; Busnelli, Andrea; Somigliana, Edgardo
2017-01-01
The routine assessment of day 3 serum progesterone prior to initiation of ovarian hyperstimulation with GnRH antagonists is under debate. In this study, we evaluated the clinical utility of this policy. This retrospective cohort study of women undergoing in vitro fertilization (IVF) with GnRH antagonists aimed to determine the frequency of cases with progesterone levels exceeding the recommended threshold of 1,660 pg/ml and to evaluate whether this assessment is predictive of pregnancy. Serum progesterone exceeded the recommended threshold in one case (0.3%, 95% CI 0.01-1.5). The median (interquartile range) basal progesterone in women who did (n = 95) and did not (n = 217) become pregnant was 351 (234-476) and 380 (237-531) pg/ml, respectively (p = 0.28). The 90th percentile of the basal progesterone distribution in women who became pregnant was 660 pg/ml. Cases with serum progesterone exceeding this threshold in successful and unsuccessful cycles numbered 10 (10%) and 30 (14%), respectively (p = 0.47). The capacity of basal progesterone to predict pregnancy was evaluated using a receiver operating characteristic curve (area under the curve = 0.54, 95% CI 0.47-0.61, p = 0.28). No graphically evident threshold emerged. Routine day 3 serum progesterone assessment in IVF cycles with GnRH antagonists is not justified. Further evidence is warranted prior to claiming its systematic use. © 2016 S. Karger AG, Basel.
Siddique, Tariq; Gupta, Rajender; Fedorak, Phillip M; MacKinnon, Michael D; Foght, Julia M
2008-08-01
A small fraction of the naphtha diluent used for oil sands processing escapes with tailings and supports methane (CH₄) biogenesis in large anaerobic settling basins such as Mildred Lake Settling Basin (MLSB) in northern Alberta, Canada. Based on the rate of naphtha metabolism in tailings incubated in laboratory microcosms, a kinetic model comprising a lag phase, the rate of hydrocarbon metabolism, and conversion to CH₄ was developed to predict CH₄ biogenesis and flux from MLSB. Zero- and first-order kinetic models predicted generation of 5.4 and 5.1 mmol CH₄, respectively, in naphtha-amended microcosms, compared with 5.3 (±0.2) mmol CH₄ measured in the microcosms during 46 weeks of incubation. These kinetic models also accurately predicted the CH₄ produced by tailings amended with either naphtha-range n-alkanes or BTEX compounds at concentrations similar to those expected in MLSB. Considering 25% of MLSB's 200 million m³ tailings volume to be methanogenic, the zero- and first-order kinetic models applied over a wide range of naphtha concentrations (0.01-1.0 wt%) predicted production of 8.9-400 million L CH₄ day⁻¹ from MLSB, which exceeds the estimated production of 3-43 million L CH₄ day⁻¹. This discrepancy may result from heterogeneity and density of the tailings, presence of nutrients in the microcosms, and/or overestimation of the readily biodegradable fraction of the naphtha in MLSB tailings.
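A minimal sketch of such lag-phase kinetic models; the parameter values below are illustrative, not the study's fitted constants:

```python
import numpy as np

def ch4_produced(t, s0, t_lag, k, order, y):
    """CH4 (mmol) from substrate degradation with a lag phase.

    s0: initial degradable substrate (mmol), t_lag: lag phase (weeks),
    k: rate constant, order: 0 or 1, y: mol CH4 per mol substrate degraded.
    All values passed in here are illustrative only.
    """
    t_eff = np.maximum(np.asarray(t, dtype=float) - t_lag, 0.0)
    if order == 0:
        degraded = np.minimum(k * t_eff, s0)           # constant rate until exhausted
    else:
        degraded = s0 * (1.0 - np.exp(-k * t_eff))     # exponential first-order decay
    return y * degraded

weeks = np.arange(0, 47)
print("zero-order, week 46: %.1f mmol" %
      ch4_produced(weeks, s0=10.0, t_lag=6.0, k=0.30, order=0, y=0.75)[-1])
print("first-order, week 46: %.1f mmol" %
      ch4_produced(weeks, s0=10.0, t_lag=6.0, k=0.08, order=1, y=0.75)[-1])
```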
Statistical and dynamical forecast of regional precipitation after mature phase of ENSO
NASA Astrophysics Data System (ADS)
Sohn, S.; Min, Y.; Lee, J.; Tam, C.; Ahn, J.
2010-12-01
While the seasonal predictability of general circulation models (GCMs) has improved, the current model atmosphere in the mid-latitudes does not respond correctly to external forcing such as tropical sea surface temperature (SST), particularly over the East Asian and western North Pacific summer monsoon regions. In addition, the time-scale of the prediction scope is considerably limited, and model forecast skill is still very poor beyond two weeks. Although recent studies indicate that coupled-model-based multi-model ensemble (MME) forecasts perform better, long-lead forecasts exceeding 9 months still show a dramatic decrease in seasonal predictability. This study aims at diagnosing dynamical MME forecasts comprising state-of-the-art one-tier models, as well as comparing them with statistical model forecasts, focusing on East Asian summer precipitation predictions after the mature phase of ENSO. The lagged impact of El Nino, as a major climate contributor, on the summer monsoon in model environments is also evaluated in the sense of conditional probabilities. To evaluate the probability forecast skills, the reliability (attributes) diagram and the relative operating characteristics, following the recommendations of the World Meteorological Organization (WMO) Standardized Verification System for Long-Range Forecasts, are used in this study. The results should shed light on the prediction skill of the dynamical models, and also of the statistical model, in forecasting East Asian summer monsoon rainfall at long lead times.
Dixon, Jennifer; Smith, Peter; Gravelle, Hugh; Martin, Steve; Bardsley, Martin; Rice, Nigel; Georghiou, Theo; Dusheiko, Mark; Billings, John; Lorenzo, Michael De; Sanderson, Colin
2011-11-22
To develop a formula for allocating resources for commissioning hospital care to all general practices in England based on the health needs of the people registered in each practice, multivariate prospective statistical models were developed in which routinely collected electronic information from 2005-6 and 2006-7 on individuals, and on the areas in which they lived, was used to predict their costs of hospital care in the next year, 2007-8. Data on individuals included all diagnoses recorded at any inpatient admission. Models were developed on a random sample of 5 million people and validated on a second random sample of 5 million people and a third sample of 5 million people drawn from a random sample of practices. The setting was all general practices in England as of 1 April 2007; the data covered all NHS inpatient admissions and outpatient attendances for individuals registered with a general practice on that date, and all individuals so registered. The power of the statistical models to predict the costs of the individual patient or of each practice's registered population for 2007-8 was tested with a range of metrics (R2 reported here), and predicted costs in 2007-8 were compared with actual costs incurred in the same year, by individual and by practice. Models including person-level information (age, sex, and recorded ICD-10 diagnostic codes) and a range of area-level information (such as socioeconomic deprivation and supply of health facilities) were most predictive of costs. After accounting for person-level variables, area-level variables added little explanatory power. The best models for resource allocation could predict upwards of 77% of the variation in costs at practice level, and about 12% at the person level. With these models, the predicted costs of about a third of practices would exceed or undershoot the actual costs by 10% or more; smaller practices were more likely to be in these groups. A model was thus developed that performed well by international standards and could be used for allocations to practices for commissioning. The best formulas, however, could predict only about 12% of the variation in the next year's costs of most inpatient and outpatient NHS care for each individual. Person-based diagnostic data significantly added to the predictive power of the models.
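A toy illustration, on synthetic data, of why practice-level R2 can far exceed person-level R2: unpredictable person-level cost variation averages out across a practice's registered list (all distributions and sizes below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_practices = 200_000, 500
practice = rng.integers(0, n_practices, n_people)
# Predictable "need" varies by person and systematically across practices...
need = rng.gamma(2.0, 1.0, n_people) * (1 + 0.6 * practice / n_practices)
actual = need + rng.gamma(2.0, 3.0, n_people)   # ...plus large unpredictable costs

def r2(pred, obs):
    """Squared correlation, a simple proxy for variation explained."""
    return np.corrcoef(pred, obs)[0, 1] ** 2

counts = np.bincount(practice, minlength=n_practices)
pred_practice = np.bincount(practice, weights=need, minlength=n_practices) / counts
act_practice = np.bincount(practice, weights=actual, minlength=n_practices) / counts

print("person-level R2:   %.2f" % r2(need, actual))          # small
print("practice-level R2: %.2f" % r2(pred_practice, act_practice))  # much larger
```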
Are Plant Species Able to Keep Pace with the Rapidly Changing Climate?
Cunze, Sarah; Heydel, Felix; Tackenberg, Oliver
2013-01-01
Future climate change is predicted to advance faster than the postglacial warming. Migration may therefore become a key driver of the future development of biodiversity and ecosystem functioning. For 140 European plant species we computed past range shifts since the last glacial maximum and future range shifts for a variety of Intergovernmental Panel on Climate Change (IPCC) scenarios and global circulation models (GCMs). Range shift rates were estimated by means of species distribution modelling (SDM). With process-based seed dispersal models we estimated species-specific migration rates for 27 dispersal modes addressing dispersal by wind (anemochory) under different wind conditions, as well as dispersal by mammals (epizoochory, dispersal on an animal's coat, and endozoochory, dispersal by animals after feeding and digestion) considering different animal species. Our process-based modelled migration rates generally exceeded the postglacial range shift rates, indicating that the process-based models we used are capable of predicting migration rates in accordance with realized past migration. For most of the considered species, the modelled migration rates were considerably lower than the expected future climate-change-induced range shift rates. This implies that most plant species will not be able to fully track future climate-change-induced range shifts because of dispersal limitation. Animals with large day- and home-ranges are highly important for achieving high migration rates for many plant species, whereas anemochory is relevant for only a few species. PMID:23894290
Evaluation of light penetration on Navigation Pools 8 and 13 of the Upper Mississippi River
Giblin, Shawn; Hoff, Kraig; Fischer, Jim; Dukerschein, Terry
2010-01-01
The availability of light can have a dramatic effect on macrophyte and phytoplankton abundance in virtually all aquatic ecosystems. The Long Term Resource Monitoring Program and other monitoring programs often measure factors that affect light extinction (nonvolatile suspended solids, volatile suspended solids, and chlorophyll) and correlates of light extinction (turbidity and Secchi depth), but they rarely measure light extinction directly. Data on light extinction, Secchi depth, transparency tube, turbidity, total suspended solids, and volatile suspended solids were collected during summer 2003 on Pools 8 and 13 of the Upper Mississippi River. Regressions were developed to predict light extinction from Secchi depth, transparency tube, turbidity, and total suspended solids. Transparency tube, Secchi depth, and turbidity all showed strong relations with light extinction and can effectively predict it. Total suspended solids did not show as strong a relation to light extinction. Volatile suspended solids had a greater effect on light extinction than nonvolatile suspended solids. The data were compared with recommended criteria established for light extinction, Secchi depth, total suspended solids, and turbidity by the Upper Mississippi River Conservation Committee to sustain submersed aquatic vegetation in the Upper Mississippi River. During the study period, the average condition in Pool 8 met or exceeded all of the criteria, whereas the average condition in Pool 13 failed to meet any of them. This report provides river managers with an effective tool to predict light extinction based upon readily available data.
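A sketch of one such regression, using invented paired measurements and the common reciprocal-Secchi model form (not the report's fitted coefficients):

```python
import numpy as np

# Illustrative paired measurements: Secchi depth (m) and light extinction
# coefficient Kd (1/m) estimated from irradiance profiles.
secchi = np.array([0.3, 0.5, 0.8, 1.0, 1.3, 1.6, 2.0])
kd     = np.array([5.2, 3.3, 2.1, 1.8, 1.3, 1.1, 0.9])

# A common model form is Kd = b / Secchi; fit b by least squares through the origin.
x = 1.0 / secchi
b = np.sum(x * kd) / np.sum(x * x)
pred = b * x
r2 = 1.0 - np.sum((kd - pred) ** 2) / np.sum((kd - kd.mean()) ** 2)
print(f"Kd ~ {b:.2f}/Secchi, R^2 = {r2:.3f}")
```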
Self-assessment in schizophrenia: Accuracy of evaluation of cognition and everyday functioning.
Gould, Felicia; McGuire, Laura Stone; Durand, Dante; Sabbag, Samir; Larrauri, Carlos; Patterson, Thomas L; Twamley, Elizabeth W; Harvey, Philip D
2015-09-01
Self-assessment deficits, often referred to as impaired insight or unawareness of illness, are well established in people with schizophrenia. There are multiple levels of awareness, including awareness of symptoms, functional deficits, cognitive impairments, and the ability to monitor cognitive and functional performance in an ongoing manner. The present study aimed to evaluate the comparative predictive value of each aspect of awareness on the levels of everyday functioning in people with schizophrenia. We examined multiple aspects of self-assessment of functioning in 214 people with schizophrenia. We also collected information on everyday functioning rated by high-contact clinicians and examined the importance of self-assessment for the prediction of real-world functional outcomes. The relative impact of performance-based measures of cognition, functional capacity, and metacognitive performance on everyday functioning was also examined. Misestimation of ability emerged as the strongest predictor of real-world functioning and exceeded the influences of cognitive performance, functional capacity performance, and performance-based assessment of metacognitive monitoring. The relative contribution of the factors other than self-assessment varied according to which domain of everyday functioning was being examined but, in all cases, accounted for less predictive variance. These results underscore the functional impact of misestimating one's current functioning and relative level of ability. These findings are consistent with the use of insight-focused treatments and compensatory strategies designed to increase self-awareness in multiple functional domains. (c) 2015 APA, all rights reserved.
Carvalhais, Carlos; Santos, Joana; Vieira da Silva, Manuela
2016-01-01
Hospital facilities are normally very complex, which, combined with patient requirements, promotes conditions for the development of uncomfortable working conditions. Thermal discomfort is one such example. This study aimed to determine levels of thermal comfort, sensations, and preferences through a field investigation conducted in the sterilization services (SS) of two hospitals in Porto and Aveiro, Portugal. The analytical determination and interpretation of thermal comfort were based upon the assumptions of ISO 7726:1998 and ISO 7730:2005. The predicted mean vote (PMV) and predicted percentage of dissatisfied (PPD) indices were obtained by measurement and estimation of environmental and personal variables, respectively, and calculated according to the ISO 7730 equations. The subjective variables were obtained from thermal sensation (subjective PMV) and affective assessment (subjective PPD) reported by a questionnaire based upon ISO 10551:1995. Both approaches confirmed thermal discomfort in both SS (codified as SS1 and SS2). For all areas and all periods of the day, PMV and PPD exceeded the recommended limits of -0.5 to +0.5 and <10%, respectively. No significant differences were found between periods of the day. The questionnaire results showed that SS2 workers reported a higher level of thermal discomfort. There were no significant differences between PMV and thermal sensations, or between PPD and affective assessment. The PMV/PPD model was found suitable to predict the thermal sensations of occupants in hospital SS located in areas with a mild climate in Portugal.
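For reference, the ISO 7730 PPD index is a deterministic function of PMV; a minimal sketch of that relation:

```python
import numpy as np

def ppd_from_pmv(pmv):
    """ISO 7730 relation between predicted mean vote (PMV) and the
    predicted percentage of dissatisfied occupants (PPD, in %)."""
    pmv = np.asarray(pmv, dtype=float)
    return 100.0 - 95.0 * np.exp(-(0.03353 * pmv**4 + 0.2179 * pmv**2))

# PPD is ~5% at neutral (PMV = 0) and ~10% at the |PMV| = 0.5 comfort limit.
print(ppd_from_pmv([0.0, 0.5, 1.0, 1.5]))
```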
Pissadaki, Eleftheria K; Bolam, J Paul
2013-01-01
Dopamine neurons of the substantia nigra pars compacta (SNc) are uniquely sensitive to degeneration in Parkinson's disease (PD) and its models. Although a variety of molecular characteristics have been proposed to underlie this sensitivity, one possible contributory factor is their massive, unmyelinated axonal arbor that is orders of magnitude larger than that of other neuronal types. We suggest that this puts them under such a high energy demand that any stressor that perturbs energy production leads to energy demand exceeding supply and subsequent cell death. One prediction of this hypothesis is that those dopamine neurons that are selectively vulnerable in PD will have a higher energy cost than those that are less vulnerable. We show here, through the use of a biology-based computational model of the axons of individual dopamine neurons, that the energy cost of action potential propagation and recovery of the membrane potential increases with the size and complexity of the axonal arbor according to a power law. Thus SNc dopamine neurons, particularly in humans, whose axons we estimate to give rise to more than 1 million synapses and to have a total length exceeding 4 m, are at a distinct disadvantage with respect to energy balance, which may be a factor in their selective vulnerability in PD.
Two Decades of WRF/CMAQ simulations over the continental ...
Confidence in the application of models for forecasting and regulatory assessments is furthered by conducting four types of model evaluation: operational, dynamic, diagnostic, and probabilistic. Operational model evaluation alone does not reveal the confidence limits that can be associated with modeled air quality concentrations. This paper presents novel approaches for performing dynamic model evaluation and for evaluating the confidence limits of ozone exceedances using the WRF/CMAQ model simulations over the continental United States for the period from 1990 to 2010. The methodology presented here entails spectral decomposition of ozone time series using the Kolmogorov-Zurbenko (KZ) filter to assess the variations in the strengths of the synoptic (i.e., weather-induced variation) and baseline (i.e., long-term variation attributable to emissions, policy, and trends) forcings embedded in the modeled and observed concentrations. A method is presented in which future-year observations are estimated based on the changes in concentrations predicted by the model, applied to the current-year observations. The proposed method can provide confidence limits for ozone exceedances for a given emission reduction scenario. We present and discuss these new approaches to identify the strengths of the model in representing the changes in simulated O3 air quality over the 21-year period.
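A minimal implementation of the KZ filter, with an illustrative decomposition of a synthetic daily ozone series (the window lengths are common choices, not necessarily those used in the study):

```python
import numpy as np

def kz_filter(x, window, iterations):
    """Kolmogorov-Zurbenko filter: a centered moving average applied repeatedly."""
    x = np.asarray(x, dtype=float)
    kernel = np.ones(window) / window
    for _ in range(iterations):
        x = np.convolve(x, kernel, mode="same")
    return x

# Synthetic daily-maximum ozone: seasonal cycle plus weather-like noise.
rng = np.random.default_rng(3)
days = np.arange(365 * 3)
ozone = 50 + 15 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 8, days.size)

baseline = kz_filter(ozone, window=365, iterations=3)            # long-term forcing
synoptic = kz_filter(ozone, window=15, iterations=5) - baseline  # weather-induced
```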
Parallel task processing of very large datasets
NASA Astrophysics Data System (ADS)
Romig, Phillip Richardson, III
This research concerns the use of distributed computer technologies for the analysis and management of very large datasets. Improvements in sensor technology, an emphasis on global change research, and greater access to data warehouses are all increasing the number of non-traditional users of remotely sensed data. We present a framework for distributed solutions to the challenges of datasets which exceed the online storage capacity of individual workstations. This framework, called parallel task processing (PTP), incorporates both the task- and data-level parallelism exemplified by many image processing operations. An implementation based on the principles of PTP, called Tricky, is also presented. Additionally, we describe the challenges and practical issues in modeling the performance of parallel task processing with large datasets. We present a mechanism for estimating the running time of each unit of work within a system and an algorithm that uses these estimates to simulate the execution environment and produce estimated runtimes. Finally, we describe and discuss experimental results which validate the design. Specifically, the system (a) is able to perform computation on datasets which exceed the capacity of any one disk, (b) provides reduction of overall computation time as a result of the task distribution even with the additional cost of data transfer and management, and (c) in the simulation mode accurately predicts the performance of the real execution environment.
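A rough illustration of the runtime-estimation idea, not Tricky's actual algorithm: per-task compute and transfer estimates drive a greedy simulation of the execution environment to predict the overall makespan:

```python
import heapq

def simulate_schedule(compute_costs, transfer_costs, n_workers):
    """Estimate makespan of a greedy distribution: each task goes to the
    earliest-free worker, paying its data transfer cost before computing."""
    workers = [0.0] * n_workers           # time at which each worker becomes free
    heapq.heapify(workers)
    for compute, transfer in zip(compute_costs, transfer_costs):
        free_at = heapq.heappop(workers)
        heapq.heappush(workers, free_at + transfer + compute)
    return max(workers)

# Illustrative per-task runtime estimates (seconds).
compute = [4.0, 6.5, 3.2, 8.1, 5.5, 7.3, 2.9, 6.0]
transfer = [1.0] * len(compute)
print("estimated makespan: %.1f s" % simulate_schedule(compute, transfer, 3))
```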
GESPA: classifying nsSNPs to predict disease association.
Khurana, Jay K; Reeder, Jay E; Shrimpton, Antony E; Thakar, Juilee
2015-07-25
Non-synonymous single nucleotide polymorphisms (nsSNPs) are the most common DNA sequence variation associated with disease in humans. Determining the clinical significance of each nsSNP is therefore of great importance. Potentially detrimental nsSNPs may be identified by genetic association studies or by functional analysis in the laboratory, both of which are expensive and time consuming. Existing computational methods lack accuracy and features to facilitate nsSNP classification for clinical use. We developed the GESPA (GEnomic Single nucleotide Polymorphism Analyzer) program to predict the pathogenicity and disease phenotype of nsSNPs. GESPA is a user-friendly software package for classifying the disease association of nsSNPs. It allows flexibility in acceptable input formats and predicts the pathogenicity of a given nsSNP by assessing the conservation of amino acids in orthologs and paralogs and supplementing this information with data from the medical literature. The development and testing of GESPA were performed using the humsavar, ClinVar and humvar datasets. Additionally, GESPA predicts the disease phenotype associated with an nsSNP with high accuracy, a feature unavailable in existing software. GESPA's overall accuracy exceeds that of existing computational methods for predicting nsSNP pathogenicity. The usability of GESPA is enhanced by fast SQL-based cloud storage and retrieval of data. GESPA is a novel bioinformatics tool to determine the pathogenicity and phenotypes of nsSNPs. We anticipate that GESPA will become a useful clinical framework for predicting the disease association of nsSNPs. The program, executable jar file, source code, GPL 3.0 license, user guide, and test data with instructions are available at http://sourceforge.net/projects/gespa.
Entry, James A
2013-09-01
Water quality was monitored in the Loxahatchee National Wildlife Refuge based on the Consent Decree (CDN), Enhanced Refuge (ERN), four-part test impacted (FPTIN), and four-part test unimpacted (FPTUN) networks. Alkalinity, dissolved organic carbon, total organic carbon, dissolved oxygen, total dissolved solids, total suspended solids, turbidity, pH, specific conductivity, calcium, chloride, silicon, sulfate, and total phosphorus (TP) were measured from 2005 through 2009. Based on the ERN, the number of months exceeding the 10 µg TP L⁻¹ Consent Decree limit would have ranged from a low of 2 months in 2009 to a high of 9 months in 2005; based on the FPTIN, it would have ranged from a low of 1 month in 2007 to a high of 7 months in 2005 and 2008. Based on the CDN, the limit was exceeded for only 1 month in each year from 2006 through 2008. Since TP is rapidly removed from canal water intruded into the Refuge marsh, one cannot expect a water quality sampling station located 2 km from the source to reliably detect violations. This may be the primary reason why there have been very few months in which TP concentration has exceeded the Consent Decree limit since 1992 or the four-part test annual 15 µg L⁻¹ limit since 2006.
Skillful regional prediction of Arctic sea ice on seasonal timescales
NASA Astrophysics Data System (ADS)
Bushuk, Mitchell; Msadek, Rym; Winton, Michael; Vecchi, Gabriel A.; Gudgel, Rich; Rosati, Anthony; Yang, Xiaosong
2017-05-01
Recent Arctic sea ice seasonal prediction efforts and forecast skill assessments have primarily focused on pan-Arctic sea ice extent (SIE). In this work, we move toward stakeholder-relevant spatial scales, investigating the regional forecast skill of Arctic sea ice in a Geophysical Fluid Dynamics Laboratory (GFDL) seasonal prediction system. Using a suite of retrospective initialized forecasts spanning 1981-2015, made with a coupled atmosphere-ocean-sea ice-land model, we show that predictions of detrended regional SIE are skillful at lead times up to 11 months. Regional prediction skill is highly region- and target-month dependent and generally exceeds the skill of an anomaly persistence forecast. We show for the first time that initializing the ocean subsurface in a seasonal prediction system can yield significant regional skill for winter SIE. Similarly, as suggested by previous work, we find that sea ice thickness initial conditions provide a crucial source of skill for regional summer SIE.
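A minimal sketch of the skill comparison on synthetic series: detrended anomaly correlation for a model forecast versus an anomaly persistence baseline (all data below are invented for illustration):

```python
import numpy as np

def detrended_corr(forecast, observed):
    """Correlation after removing each series' linear trend."""
    t = np.arange(observed.size)
    f = forecast - np.polyval(np.polyfit(t, forecast, 1), t)
    o = observed - np.polyval(np.polyfit(t, observed, 1), t)
    return np.corrcoef(f, o)[0, 1]

rng = np.random.default_rng(7)
obs_target = rng.normal(size=35)                          # target-month regional SIE
obs_init = 0.6 * obs_target + 0.8 * rng.normal(size=35)   # state at initialization
model_fc = 0.8 * obs_target + 0.6 * rng.normal(size=35)   # a skillful forecast

# Persistence forecasts the target anomaly with the initialization anomaly.
print("model skill:       %.2f" % detrended_corr(model_fc, obs_target))
print("persistence skill: %.2f" % detrended_corr(obs_init, obs_target))
```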
Current and Future Deposition of Reactive Nitrogen to United States National Parks
NASA Astrophysics Data System (ADS)
Ellis, R.; Jacob, D. J.; Zhang, L.; Payer, M.; Holmes, C. D.; Schichtel, B. A.
2012-12-01
The concentrations of reactive nitrogen species in the atmosphere have been altered by anthropogenic activities such as fossil fuel combustion and agriculture. The United States National Parks are protected areas wherein the natural habitat is to be conserved for future generations. However, deposition of reactive nitrogen (N) to terrestrial and aquatic ecosystems can lead to changes, some of which may not be reversible. We investigate the deposition of N to U.S. National Parks using the GEOS-Chem chemical transport model with 0.5 x 0.667 degree resolution over North America. We compare the annual nitrogen deposition for each park to a critical load, above which significant harmful effects on specific ecosystem elements are likely to occur. For our base year 2006, we find 9 parks to be in exceedance of their critical load, mainly located in the east where N deposition can reach up to 25 kg N per hectare per year. Future changes in N deposition are also investigated using the IPCC Representative Concentration Pathway (RCP) emission scenarios for 2050. We use RCP8.5 as a "business-as-usual" scenario for N deposition and find that under this emission scenario, 18 parks are predicted to be in exceedance of their critical loads by the year 2050. Most of this increase in N deposition is due to increases in the emissions of ammonia. RCP4.5 was used as a more optimistic scenario but we still find 12 parks in exceedance of their critical loads. This work suggests that in order to meet N deposition critical load goals in U.S. National Parks, policy-makers should consider regulations on ammonia.
Probability density function of non-reactive solute concentration in heterogeneous porous formations
Alberto Bellin; Daniele Tonina
2007-01-01
Available models of solute transport in heterogeneous formations do not provide a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to filling this gap of knowledge is a probability distribution model for...
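For instance, under an assumed parametric concentration model the exceedance probability needed for risk analysis follows directly; a minimal sketch with an illustrative lognormal:

```python
import numpy as np
from scipy import stats

# If the predicted concentration C is modeled as lognormal (the parameters of
# ln C below are assumed for illustration), the probability of exceeding a
# threshold is the survival function evaluated at that threshold.
mu, sigma = 0.5, 0.9        # mean and standard deviation of ln(C)
threshold = 5.0             # regulatory threshold, same units as C
p_exceed = stats.lognorm.sf(threshold, s=sigma, scale=np.exp(mu))
print(f"P(C > {threshold}) = {p_exceed:.3f}")
```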
47 CFR 90.771 - Field strength limits.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Policies Governing the Licensing and Use of Phase II Ea, Regional and Nationwide Systems § 90.771 Field... transmit frequencies, of EA and Regional licensees may not exceed a predicted 38 dBu field strength at... required in paragraph (a) of this section if all affected, co-channel EA and Regional licensees agree to...
47 CFR 90.771 - Field strength limits.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Policies Governing the Licensing and Use of Phase II Ea, Regional and Nationwide Systems § 90.771 Field... transmit frequencies, of EA and Regional licensees may not exceed a predicted 38 dBu field strength at... required in paragraph (a) of this section if all affected, co-channel EA and Regional licensees agree to...
Bark beetle-induced forest mortality in the North American Rocky Mountains
Kevin Hyde; Scott Peckham; Tom Holmes; Brent Ewers
2016-01-01
The epidemic of mortality caused by insects and disease throughout the North American Rocky Mountains exceeds previous records in both severity and spatial extent. Beetle attacks weaken trees and introduce blue-stain fungi that induce hydraulic failure leading to mortality. The magnitude of this outbreak spurs predictions of major changes to...
Buoyancy of gas-filled bladders at great depth
NASA Astrophysics Data System (ADS)
Priede, Imants G.
2018-02-01
At high hydrostatic pressures exceeding 20 MPa or 200 bar, equivalent to depths exceeding ca. 2000 m, the behaviour of gases deviates significantly from the predictions of standard equations such as Boyle's Law, the Ideal Gas Law and the Van der Waals equation. The predictions of these equations are compared with experimental data for nitrogen, oxygen and air at 0 °C and 15 °C, at pressures up to 1100 bar (110 MPa), equivalent to the full ocean depth of ca. 11000 m. Owing to the reduced compressibility of gases at high pressures, gas-filled bladders at full ocean depth have a density of 847 kg m⁻³ for oxygen, 622 kg m⁻³ for nitrogen and 660 kg m⁻³ for air, providing potentially useful buoyancy comparable with that available from man-made materials. This helps explain why some of the deepest-living fishes at ca. 7000 m depth (700 bar or 70 MPa) have gas-filled swim bladders. A table is provided of the density and buoyancy of oxygen, nitrogen and air at 0 °C and 15 °C from 100 to 1100 bar.
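A minimal sketch of the comparison for nitrogen at 1100 bar and 0 °C, solving the Van der Waals equation for molar volume with textbook constants; as the abstract notes, even this real-gas correction deviates from the measured value of about 622 kg m⁻³:

```python
from scipy.optimize import brentq

R = 8.314                 # J/(mol K)
a, b = 0.1370, 3.87e-5    # Van der Waals constants for N2 (SI units)
M = 0.028                 # kg/mol, molar mass of N2

def vdw_pressure(vm, T):
    """Van der Waals pressure for molar volume vm (m^3/mol) at temperature T (K)."""
    return R * T / (vm - b) - a / vm**2

P, T = 1.1e8, 273.15      # 1100 bar at 0 degC: full ocean depth
vm = brentq(lambda v: vdw_pressure(v, T) - P, b * 1.05, 1e-2)
print(f"Van der Waals N2 density at 1100 bar: {M / vm:.0f} kg/m^3")
# The measured density is ~622 kg/m^3, so Van der Waals underestimates it here,
# illustrating the deviation discussed in the abstract.
```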
NASA Astrophysics Data System (ADS)
Peeters, L. J.; Mallants, D.; Turnadge, C.
2017-12-01
Groundwater impact assessments are increasingly being undertaken in a probabilistic framework whereby various sources of uncertainty (model parameters, model structure, boundary conditions, and calibration data) are taken into account. This has resulted in groundwater impact metrics being presented as probability density functions and/or cumulative distribution functions, spatial maps displaying isolines of percentile values for specific metrics, etc. Groundwater management, on the other hand, typically uses single values (i.e., a deterministic framework) to evaluate what decisions are required to protect groundwater resources. For instance, in New South Wales, Australia, a nominal drawdown value of two metres is specified by the NSW Aquifer Interference Policy as a trigger-level threshold. In many cases, when drawdowns induced by groundwater extraction exceed two metres, "make-good" provisions are enacted (such as the surrendering of extraction licenses). The information obtained from a quantitative uncertainty analysis can be used to guide decision making in several ways. Two examples are discussed here: the first would not require modification of existing "deterministic" trigger or guideline values, whereas the second assumes that the regulatory criteria are also expressed in probabilistic terms. The first example is a straightforward interpretation of calculated percentile values for specific impact metrics. The second example goes a step further, as the existing deterministic thresholds do not currently allow for a probabilistic interpretation; e.g., there is no statement that "the probability of exceeding the threshold shall not be larger than 50%". It would indeed be sensible to have a set of thresholds with an associated acceptable probability of exceedance (or of not exceeding a threshold) that decreases as the impact increases. We illustrate here how both the prediction uncertainty and the management rules can be expressed in a probabilistic framework, using groundwater metrics derived for a highly stressed groundwater system.
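A minimal sketch of how an ensemble of modeled drawdowns maps onto such probabilistic rules; the 2 m threshold is the NSW trigger level quoted above, while the ensemble itself is synthetic:

```python
import numpy as np

# Synthetic posterior ensemble of drawdown (m) at a receptor; in practice this
# would come from the quantitative uncertainty analysis described above.
rng = np.random.default_rng(11)
drawdown = rng.lognormal(mean=0.2, sigma=0.6, size=5000)

threshold = 2.0   # NSW Aquifer Interference Policy trigger level (m)
p_exceed = np.mean(drawdown > threshold)
p95 = np.percentile(drawdown, 95)
print(f"P(drawdown > {threshold:.0f} m) = {p_exceed:.2f}; "
      f"95th percentile = {p95:.2f} m")
# A probabilistic rule could then require, e.g., P(exceedance) <= 0.5, with
# the acceptable probability decreasing as the severity of impact increases.
```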
Beaulieu, Jean; Doerksen, Trevor K; MacKay, John; Rainville, André; Bousquet, Jean
2014-12-02
Genomic selection (GS) may improve selection response over conventional pedigree-based selection if markers capture more detailed information than pedigrees in recently domesticated tree species and/or make it more cost effective. Genomic prediction accuracies using 1748 trees and 6932 SNPs representative of as many distinct gene loci were determined for growth and wood traits in white spruce, within and between environments and breeding groups (BG), each with an effective size of Ne ≈ 20. Marker subsets were also tested. Model fits and/or cross-validation (CV) prediction accuracies for ridge regression (RR) and least absolute shrinkage and selection operator models approached those of pedigree-based models. With strong relatedness between CV sets, prediction accuracies for RR within environment and BG were high for wood (r = 0.71-0.79) and moderately high for growth (r = 0.52-0.69) traits, in line with trends in heritabilities. For both classes of traits, these accuracies achieved between 83% and 92% of those obtained with phenotypes and pedigree information. Prediction into untested environments remained moderately high for wood (r ≥ 0.61) but dropped significantly for growth (r ≥ 0.24) traits, emphasizing the need to phenotype in all test environments and to model genotype-by-environment interactions for growth traits. Removing relatedness between CV sets sharply decreased prediction accuracies for all traits and subpopulations, falling near zero between BGs with no known shared ancestry. For marker subsets, similar patterns were observed but with lower prediction accuracies. Given the need for high relatedness between CV sets to obtain good prediction accuracies, we recommend building GS models for prediction only within the same breeding population. Breeding groups could be merged to build genomic prediction models as long as the total effective population size does not exceed 50 individuals, in order to obtain prediction accuracy as high as that obtained in the present study. A number of markers limited to a few hundred would not negatively impact prediction accuracies, but these could decrease more rapidly over generations. The most promising short-term approach for genomic selection would likely be the selection of superior individuals within large full-sib families vegetatively propagated to implement multiclonal forestry.
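A minimal sketch of ridge-regression genomic prediction with cross-validation, using simulated SNP dosages whose dimensions mirror the study; sklearn's Ridge stands in for the RR model, and all marker effects are synthetic:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(42)
n_trees, n_snps = 1748, 6932
X = rng.binomial(2, 0.3, size=(n_trees, n_snps)).astype(float)    # SNP dosages 0/1/2
beta = rng.normal(0, 0.05, n_snps) * (rng.random(n_snps) < 0.05)  # sparse true effects
y = X @ beta + rng.normal(0, 1.0, n_trees)                        # simulated phenotype

# Prediction accuracy r: correlation between predicted and observed phenotypes
# in each held-out fold (relatedness is high here because all data share one pool).
acc = []
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=float(n_snps)).fit(X[train], y[train])
    acc.append(np.corrcoef(model.predict(X[test]), y[test])[0, 1])
print("mean CV prediction accuracy r = %.2f" % np.mean(acc))
```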
Communicating Storm Surge Forecast Uncertainty
NASA Astrophysics Data System (ADS)
Troutman, J. A.; Rhome, J.
2015-12-01
When it comes to tropical cyclones, storm surge is often the greatest threat to life and property along the coastal United States. The coastal population density has dramatically increased over the past 20 years, putting more people at risk. Informing emergency managers, decision-makers and the public about the potential for wind-driven storm surge, however, has been extremely difficult. Recently, the Storm Surge Unit at the National Hurricane Center in Miami, Florida has developed a prototype experimental storm surge watch/warning graphic to help communicate this threat more effectively by identifying areas most at risk for life-threatening storm surge. This prototype is the initial step in the transition toward a NWS storm surge watch/warning system and highlights the inundation levels that have a 10% chance of being exceeded. The guidance for this product is the Probabilistic Hurricane Storm Surge (P-Surge) model, which predicts the probability of various storm surge heights by statistically evaluating numerous SLOSH model simulations. Questions remain, however, as to whether exceedance values other than the 10% level may be of equal importance to forecasters. P-Surge data from Hurricane Arthur (2014) are used to ascertain the practicality of incorporating other exceedance data into storm surge forecasts. Extracting forecast uncertainty information by analyzing P-Surge exceedances overlaid with track and wind intensity forecasts proves to be beneficial for forecasters and decision support.
Kang, Jian; Li, Xin; Jin, Rui; Ge, Yong; Wang, Jinfeng; Wang, Jianghao
2014-01-01
The eco-hydrological wireless sensor network (EHWSN) in the middle reaches of the Heihe River Basin in China is designed to capture spatial and temporal variability and to estimate the ground truth for validating remote sensing products. However, no prior information about the target variable is available. To meet both requirements, a hybrid model-based sampling method without any spatial autocorrelation assumptions was developed to optimize the distribution of EHWSN nodes based on geostatistics. This hybrid model incorporates two sub-criteria: one for variogram modeling, to represent the variability, and another for improving spatial prediction, to evaluate remote sensing products. The reasonableness of the optimized EHWSN is validated in terms of representativeness, variogram modeling and spatial accuracy using 15 types of simulation fields generated by unconditional geostatistical stochastic simulation. The sampling design shows good representativeness; variograms estimated from the samples have less than 3% mean error relative to the true variograms. Fields at multiple scales are then predicted. As the scale increases, estimated fields show higher similarity to the simulation fields at block sizes exceeding 240 m. The validations prove that this hybrid sampling method is effective for both objectives when the characteristics of the optimized variable are unknown. PMID:25317762
Tunable Resonance Coupling in Single Si Nanoparticle-Monolayer WS2 Structures.
Lepeshov, Sergey; Wang, Mingsong; Krasnok, Alex; Kotov, Oleg; Zhang, Tianyi; Liu, He; Jiang, Taizhi; Korgel, Brian; Terrones, Mauricio; Zheng, Yuebing; Alú, Andrea
2018-05-16
Two-dimensional semiconducting transition metal dichalcogenides (TMDCs) are extremely attractive materials for optoelectronic applications in the visible and near-infrared range. Coupling these materials to optical nanocavities enables advanced quantum optics and nanophotonic devices. Here, we address the issue of resonance coupling in hybrid exciton-polariton structures based on single Si nanoparticles (NPs) coupled to monolayer (1L) WS2. We predict a strong coupling regime, with a Rabi splitting energy exceeding 110 meV, for a Si NP covered by 1L-WS2 at the magnetic optical Mie resonance because of the symmetry of the mode. Further, we achieve a large enhancement in the Rabi splitting energy, up to 208 meV, by changing the surrounding dielectric material from air to water. The prediction is based on the experimental estimation of the TMDC dipole moment variation obtained from the measured photoluminescence spectra of 1L-WS2 in different solvents. The ability of such a system to tune the resonance coupling is realized experimentally for optically resonant spherical Si NPs placed on 1L-WS2. The Rabi splitting energy obtained for this scenario increases from 49.6 to 86.6 meV after replacing air with water. Our findings pave the way to high-efficiency optoelectronic, nanophotonic, and quantum optical devices.
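The anticrossing behind such Rabi splittings can be sketched with the standard two-coupled-oscillator model; the energies and coupling g below are illustrative, with g chosen to match the ~110 meV scale quoted above:

```python
import numpy as np

def polariton_energies(e_cavity, e_exciton, g):
    """Eigenvalues of the 2x2 coupled-oscillator Hamiltonian (all in eV).

    e_cavity: Mie resonance energy, e_exciton: exciton energy, g: coupling.
    The Rabi splitting at zero detuning is 2g.
    """
    delta = e_cavity - e_exciton
    mean = 0.5 * (e_cavity + e_exciton)
    split = np.sqrt(g**2 + 0.25 * delta**2)
    return mean - split, mean + split

# Illustrative numbers: WS2 A-exciton near 2.0 eV, resonant Si nanoparticle.
lower, upper = polariton_energies(2.0, 2.0, 0.055)
print(f"Rabi splitting = {1e3 * (upper - lower):.0f} meV")
```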
Brinkmann, Markus; Schlechtriem, Christian; Reininghaus, Mathias; Eichbaum, Kathrin; Buchinger, Sebastian; Reifferscheid, Georg; Hollert, Henner; Preuss, Thomas G
2016-02-16
The potential to bioconcentrate is generally considered an unwanted property of a substance. Consequently, chemical legislation, including the European REACH regulations, requires the chemical industry to provide bioconcentration data for chemicals that are produced or imported at volumes exceeding 100 tons per annum, or if there is a concern that a substance is persistent, bioaccumulative, and toxic. To fill the existing data gap for chemicals produced or imported below this stipulated volume without the need for additional animal experiments, physiologically based toxicokinetic (PBTK) models can be used to predict whole-body and tissue concentrations of neutral organic chemicals in fish. PBTK models have been developed for many different fish species with promising results. In this study, we developed PBTK models for zebrafish (Danio rerio) and roach (Rutilus rutilus) and combined them with existing models for rainbow trout (Oncorhynchus mykiss), lake trout (Salvelinus namaycush), and fathead minnow (Pimephales promelas). The resulting multispecies model framework allows for cross-species extrapolation of the bioaccumulative potential of neutral organic compounds. Predictions were compared with experimental data and were accurate for most substances. Our model can be used for probabilistic risk assessment of chemical bioaccumulation, with particular emphasis on cross-species evaluations.
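As a drastically reduced stand-in for a full PBTK model, a one-compartment toxicokinetic sketch shows the quantities involved; the rate constants and exposure concentration are illustrative, not the study's parameters:

```python
from scipy.integrate import solve_ivp

# One-compartment model: uptake from water (k1) and first-order elimination (k2).
k1, k2 = 500.0, 0.05   # L/(kg d) and 1/d, illustrative for a neutral organic
c_water = 1e-3         # mg/L exposure concentration

def dcdt(t, c_fish):
    """Rate of change of the whole-body concentration (mg/kg)."""
    return k1 * c_water - k2 * c_fish

sol = solve_ivp(dcdt, [0.0, 120.0], [0.0])
print(f"kinetic BCF = k1/k2 = {k1 / k2:.0f} L/kg; "
      f"C_fish after 120 d = {sol.y[0, -1]:.2f} mg/kg")
```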
Discrete square root filtering - A survey of current techniques.
NASA Technical Reports Server (NTRS)
Kaminskii, P. G.; Bryson, A. E., Jr.; Schmidt, S. F.
1971-01-01
Current techniques in square root filtering are surveyed and related by applying a duality association. Four efficient square root implementations are suggested and compared with three common conventional implementations in terms of computational complexity and precision. It is shown that the square root computational burden should not exceed that of the conventional filter by more than 50% in most practical problems. An examination of numerical conditioning predicts that the square root approach can yield twice the effective precision of the conventional filter in ill-conditioned problems. This prediction is verified in two examples.
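A sketch of one square-root technique in a modern language (not the paper's notation): the covariance Cholesky factor is propagated through the time update by a QR factorization, so the covariance stays symmetric positive semi-definite by construction:

```python
import numpy as np

def sqrt_time_update(S, F, Q_sqrt):
    """Propagate the Cholesky factor S (P = S @ S.T) through x' = F x + w.

    Stacking [(F S).T; Q_sqrt.T] and taking its QR factorization gives an
    upper-triangular R with R.T @ R = F P F.T + Q, so the new factor is R.T.
    """
    n = S.shape[0]
    stacked = np.vstack([(F @ S).T, Q_sqrt.T])
    _, R = np.linalg.qr(stacked)
    return R[:n, :n].T

# Numerical check against the conventional covariance update.
rng = np.random.default_rng(5)
A = rng.normal(size=(3, 3))
P = A @ A.T
S = np.linalg.cholesky(P)
F = rng.normal(size=(3, 3))
Qs = np.eye(3) * 0.1
S_new = sqrt_time_update(S, F, Qs)
print(np.allclose(S_new @ S_new.T, F @ P @ F.T + Qs @ Qs.T))  # True
```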
NASA Technical Reports Server (NTRS)
Glasser, M. E.; Rundel, R. D.
1978-01-01
A method for formulating these changes into the model input parameters, using a preprocessor program run on a programmed data processor, was implemented. The results indicate that any changes in the input parameters are small enough to be negligible in comparison with the meteorological inputs and the limitations of the model, and that such changes will not substantially increase the number of meteorological cases for which the model will predict surface hydrogen chloride concentrations exceeding public safety levels.
Big bang nucleosynthesis: An update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olive, Keith A.
An update on the standard model of big bang nucleosynthesis (BBN) is presented. With the value of the baryon-to-photon ratio determined to high precision by WMAP, standard BBN is a parameter-free theory. In this context, the theoretical predictions for the abundances of D, ⁴He, and ⁷Li are discussed and compared to their observational determinations. While concordance for D and ⁴He is satisfactory, the prediction for ⁷Li exceeds the observational determination by a factor of about four. Possible solutions to this problem are discussed.
Denny, M W; Dowd, W W
2012-03-15
As the air temperature of the Earth rises, ecological relationships within a community might shift, in part due to differences in the thermal physiology of species. Prediction of these shifts - an urgent task for ecologists - will be complicated if thermal tolerance itself can rapidly evolve. Here, we employ a mechanistic approach to predict the potential for rapid evolution of thermal tolerance in the intertidal limpet Lottia gigantea. Using biophysical principles to predict body temperature as a function of the state of the environment, and an environmental bootstrap procedure to predict how the environment fluctuates through time, we create hypothetical time-series of limpet body temperatures, which are in turn used as a test platform for a mechanistic evolutionary model of thermal tolerance. Our simulations suggest that environmentally driven stochastic variation of L. gigantea body temperature results in rapid evolution of a substantial 'safety margin': the average lethal limit is 5-7°C above the average annual maximum temperature. This predicted safety margin approximately matches that found in nature, and once established is sufficient, in our simulations, to allow some limpet populations to survive a drastic, century-long increase in air temperature. By contrast, in the absence of environmental stochasticity, the safety margin is dramatically reduced. We suggest that the risk of exceeding the safety margin, rather than the absolute value of the safety margin, plays an underappreciated role in the evolution of thermal tolerance. Our predictions are based on a simple, hypothetical, allelic model that connects genetics to thermal physiology. To move beyond this simple model - and thereby potentially to predict differential evolution among populations and among species - will require significant advances in our ability to translate the details of thermal histories into physiological and population-genetic consequences.
40 CFR 51.164 - Stack height procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 51.164 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... source's stack height that exceeds good engineering practice or by any other dispersion technique, except... source based on a good engineering practice stack height that exceeds the height allowed by § 51.100(ii...
How Much Global Burned Area Can Be Forecast on Seasonal Time Scales Using Sea Surface Temperatures?
NASA Technical Reports Server (NTRS)
Chen, Yang; Morton, Douglas C.; Andela, Niels; Giglio, Louis; Randerson, James T.
2016-01-01
Large-scale sea surface temperature (SST) patterns influence the interannual variability of burned area in many regions by means of climate controls on fuel continuity, amount, and moisture content. Some of the variability in burned area is predictable on seasonal timescales because fuel characteristics respond to the cumulative effects of climate prior to the onset of the fire season. Here we systematically evaluated the degree to which annual burned area from the Global Fire Emissions Database version 4 with small fires (GFED4s) can be predicted using SSTs from 14 different ocean regions. We found that about 48% of global burned area can be forecast with a correlation coefficient that is significant at a p < 0.01 level using a single ocean climate index (OCI) 3 or more months prior to the month of peak burning. Continental regions where burned area had a higher degree of predictability included equatorial Asia, where 92% of the burned area exceeded the correlation threshold, and Central America, where 86% of the burned area exceeded this threshold. Pacific Ocean indices describing the El Nino-Southern Oscillation were more important than indices from other ocean basins, accounting for about one third of the total predictable global burned area. A model that combined two indices from different oceans considerably improved model performance, suggesting that fires in many regions respond to forcing from more than one ocean basin. Using OCI-burned area relationships and a clustering algorithm, we identified 12 hotspot regions in which fires had a consistent response to SST patterns. Annual burned area in these regions can be predicted with moderate confidence levels, suggesting operational forecasts may be possible with the aim of improving ecosystem management.
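A toy version of the lead-time analysis on synthetic data: an annual burned-area series is correlated with a monthly ocean climate index sampled at increasing leads before the month of peak burning:

```python
import numpy as np

rng = np.random.default_rng(9)
nyears = 18
# Monthly ocean climate index, with one padding year before year 0.
oci = rng.normal(size=(nyears + 1) * 12)

def month_index(year, month):
    """Index into the monthly series; month may be negative (previous year)."""
    return (year + 1) * 12 + month   # the +1 skips the padding year

peak, true_lead = 8, 6               # assumed peak-burning month and true lead
driver = oci[[month_index(y, peak - true_lead) for y in range(nyears)]]
burned = 0.7 * driver + rng.normal(0, 0.7, nyears)   # synthetic annual burned area

for lead in range(3, 13):
    pred = oci[[month_index(y, peak - lead) for y in range(nyears)]]
    r = np.corrcoef(pred, burned)[0, 1]
    print(f"lead {lead:2d} months: r = {r:+.2f}")
```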
NASA Astrophysics Data System (ADS)
Kibler, K. M.; Alipour, M.
2016-12-01
Achieving the universal energy access Sustainable Development Goal will require great investment in renewable energy infrastructure in the developing world. Much growth in the renewable sector will come from new hydropower projects, including small and diversion hydropower in remote and mountainous regions. Yet, human impacts to hydrological systems from diversion hydropower are poorly described. Diversion hydropower is often implemented in ungauged rivers, thus detection of impact requires flow analysis tools suited to prediction in poorly-gauged and human-altered catchments. We conduct a comprehensive analysis of hydrologic alteration in 32 rivers developed with diversion hydropower in southwestern China. As flow data are sparse, we devise an approach for estimating streamflow during pre- and post-development periods, drawing upon a decade of research into prediction in ungauged basins. We apply a rainfall-runoff model, parameterized and forced exclusively with global-scale data, in hydrologically-similar gauged and ungauged catchments. Uncertain "soft" data are incorporated through fuzzy numbers and confidence-based weighting, and a multi-criteria objective function is applied to evaluate model performance. Testing indicates that the proposed framework returns superior performance (NSE = 0.77) as compared to models parameterized by rote calibration (NSE = 0.62). Confident that the models are providing "the right answer for the right reasons", our analysis of hydrologic alteration based on simulated flows indicates statistically significant hydrologic effects of diversion hydropower across many rivers. Mean annual flows, 7-day minimum and 7-day maximum flows decreased. Frequency and duration of flow exceeding Q25 decreased while duration of flows sustained below the Q75 increased substantially. Hydrograph rise and fall rates and flow constancy increased. The proposed methodology may be applied to improve diversion hydropower design in data-limited regions.
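The performance scores quoted above are Nash-Sutcliffe efficiencies; for reference, a minimal implementation of that metric:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is
    no better than predicting the observed mean."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - (np.sum((observed - simulated) ** 2)
                  / np.sum((observed - np.mean(observed)) ** 2))
```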
Grewal, Jagteshwar; Zhang, Jun; Mikolajczyk, Rafael T; Ford, Jessie
2010-08-01
Estimates of gestational age based on early second-trimester ultrasound often differ from those based on the last menstrual period (LMP) even when a woman is certain about her LMP. Discrepancies in these gestational age estimates may be associated with an increased risk of cesarean section and low birth weight. We analyzed 7228 singleton, low-risk, white women from The Routine Antenatal Diagnostic Imaging with Ultrasound trial. The women were recruited at less than 14 weeks of gestation and received ultrasound exams between 15 and 22 weeks. Our results indicate that among nulliparous women, the risk of cesarean section increased from 10% when the ultrasound-based gestational age exceeded the LMP-based estimate by 4 days to 60% when the discrepancy increased to 21 days. Moreover, for each additional day the ultrasound-based estimate exceeded the LMP-based estimate, birth weight was higher by 9.6 g. Our findings indicate that a positive discrepancy (i.e., ultrasound-based estimate exceeds LMP-based estimate) in gestational age is associated with an increased risk of cesarean section. A negative discrepancy, by contrast, may reflect early intrauterine growth restriction and an increased risk of low birth weight.
Nanotechnology: From Science Fiction to Reality
NASA Technical Reports Server (NTRS)
Siochi, Mia
2016-01-01
Nanotechnology promises unconventional solutions to challenging problems because of expectations that matter can be manipulated at the atomic scale to yield properties that exceed those predicted for bulk materials. The excitement at this possibility has been fueled by significant investments in this technology area. This talk will focus on three examples where advances are being made to exploit unique properties made possible by nanoscale features for aerospace applications. The first two topics will involve the development of carbon nanotubes for (a) lightweight structural applications and (b) net shape fabricated multifunctional components. The third topic will highlight lessons learned from the demonstration of the effect of nanoengineered surfaces on insect residue adhesion. In all three cases, the approaches used to mature these emerging technologies are based on the acceleration of technology development through multidisciplinary collaborations.
A discrete gust model for use in the design of wind energy conversion systems
NASA Technical Reports Server (NTRS)
Frost, W.; Turner, R. E.
1982-01-01
A discrete gust model has been designed which includes an expression for the number of times per unit time the wind exceeds a specific value. This expression, based on Rice's (1944, 1945) number-of-crossings model, assumes that the yearly mean wind speed is averaged over a period of 10 minutes to one hour. Vertical and lateral coherence functions are the basis for a mathematical filter which isolates atmospheric disturbances of a characteristic size (e.g., those which would completely engulf a rotor). Predictions calculated using the given definition of cut-off frequency are compared with actual data, showing that the model is reliable. The expression is provided in a format such that it may be used for engineering design calculations.
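For orientation, the sketch below gives the textbook Rice level-crossing rate for a stationary Gaussian process, plus an empirical counter for checking it against a sampled wind record; this is the standard form, not necessarily the report's exact parameterization.

```python
import numpy as np

def upcrossing_rate_gaussian(u, mean, sigma, sigma_dot):
    """Rice's formula: mean rate of upcrossings of level u by a stationary
    Gaussian process with standard deviation `sigma` whose time derivative
    has standard deviation `sigma_dot`."""
    nu0 = sigma_dot / (2.0 * np.pi * sigma)
    return nu0 * np.exp(-((u - mean) ** 2) / (2.0 * sigma ** 2))

def empirical_upcrossing_rate(x, u, dt):
    """Count upcrossings of level u in a sampled series x with time step dt."""
    crossings = np.sum((x[:-1] < u) & (x[1:] >= u))
    return crossings / (dt * (len(x) - 1))
```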
Reservoir computing on the hypersphere
NASA Astrophysics Data System (ADS)
Andrecut, M.
Reservoir Computing (RC) refers to a Recurrent Neural Network (RNN) framework, frequently used for sequence learning and time series prediction. The RC system consists of a random fixed-weight RNN (the input-hidden reservoir layer) and a classifier (the hidden-output readout layer). Here, we focus on the sequence learning problem, and we explore a different approach to RC. More specifically, we remove the nonlinear neural activation function, and we consider an orthogonal reservoir acting on normalized states on the unit hypersphere. Surprisingly, our numerical results show that the system's memory capacity exceeds the dimensionality of the reservoir, which is the upper bound for the typical RC approach based on Echo State Networks (ESNs). We also show how the proposed system can be applied to symmetric cryptography problems, and we include a numerical implementation.
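A compact sketch of the idea as described in the abstract: a fixed orthogonal reservoir, states renormalized onto the unit hypersphere in place of a nonlinear activation, and a ridge-regression readout. The QR construction of the orthogonal matrix and the toy recall task are assumptions for illustration, not necessarily the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                    # reservoir dimension

W, _ = np.linalg.qr(rng.standard_normal((N, N)))   # one way to get an orthogonal W
w_in = rng.standard_normal(N)

def run_reservoir(u):
    """Drive the linear reservoir and renormalize each state onto the
    unit hypersphere (replacing the usual tanh nonlinearity)."""
    x, states = np.zeros(N), []
    for u_t in u:
        x = W @ x + w_in * u_t
        x /= np.linalg.norm(x)             # project back onto the unit sphere
        states.append(x.copy())
    return np.array(states)

u = rng.standard_normal(1000)
X = run_reservoir(u)
target = np.roll(u, 5)                     # toy task: recall the input 5 steps back
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ target)  # ridge readout
```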
Sparse interferometric millimeter-wave array for centimeter-level 100-m standoff imaging
NASA Astrophysics Data System (ADS)
Suen, Jonathan Y.; Lubin, Philip M.; Solomon, Steven L.; Ginn, Robert P.
2013-05-01
We present work on the development of a long range standoff concealed weapons detection system capable of imaging under very heavy clothing at distances exceeding 100 m with centimeter resolution. The system is based on a combination of phased array technologies used in radio astronomy and SAR radar, using a coherent, multi-frequency reconstruction algorithm which can run at up to 1000 Hz frame rates and high SNR with a multi-tone transceiver. We show the flexible design space of our system as well as algorithm development, predicted system performance and impairments, and simulated reconstructed images. The system can be used for a variety of purposes including portal applications, crowd scanning and tactical situations. Additional uses include seeing through dust and fog.
Research on mechanical and sensoric set-up for high strain rate testing of high performance fibers
NASA Astrophysics Data System (ADS)
Unger, R.; Schegner, P.; Nocke, A.; Cherif, C.
2017-10-01
Within this research project, the tensile behavior of high performance fibers, such as carbon fibers, is investigated under high velocity loads. This paper focuses on the clamp set-up of two testing machines. Based on a kinematic model, weight-optimized clamps are designed and evaluated. Analysis of the complex dynamic behavior of conventional high velocity testing machines has shown that the impact typically exhibits an elastic characteristic. This leads to barely predictable breaking speeds and fails at higher speeds, when the acceleration force exceeds material specifications. Therefore, a plastic impact behavior has to be achieved, even at lower testing speeds. This type of impact behavior at lower speeds can be realized by means of some minor test set-up adaptations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kujala, J.; Segercrantz, N.; Tuomisto, F.
2014-10-14
We have applied positron annihilation spectroscopy to study native point defects in Te-doped n-type and nominally undoped p-type GaSb single crystals. The results show that the dominant vacancy defect trapping positrons in bulk GaSb is the gallium monovacancy. The temperature dependence of the average positron lifetime in both p- and n-type GaSb indicates that negative ion type defects with no associated open volume compete with the Ga vacancies. Based on comparison with theoretical predictions, these negative ions are identified as Ga antisites. The concentrations of these negatively charged defects exceed the Ga vacancy concentrations by nearly an order of magnitude. We conclude that the Ga antisite is the native defect responsible for p-type conductivity in GaSb single crystals.
Attosecond Thomson-scattering x-ray source driven by laser-based electron acceleration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, W.; College of Science, National University of Defense Technology, Changsha 410073; Zhuo, H. B.
2013-10-21
The possibility of producing attosecond x-rays through Thomson scattering of laser light off laser-driven relativistic electron beams is investigated. For a ≤200-as, tens-MeV electron bunch produced with laser ponderomotive-force acceleration in a plasma wire, more than 10^6 photons/s in the form of ∼160-as pulses in the range of 3–300 keV are predicted, with a peak brightness of ≥5 × 10^20 photons/(s mm^2 mrad^2 0.1% bandwidth). Our study suggests that the physical scheme discussed in this work can be used for an ultrafast (attosecond) x-ray source, which is most beneficial for time-resolved atomic physics, dubbed “attosecond physics”.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lieou, Charles K. C.; Daub, Eric G.; Guyer, Robert A.
In this paper, we model laboratory earthquakes in a biaxial shear apparatus using the Shear-Transformation-Zone (STZ) theory of dense granular flow. The theory is based on the observation that slip events in a granular layer are attributed to grain rearrangement at soft spots called STZs, which can be characterized according to principles of statistical physics. We model lab data on granular shear using STZ theory and document direct connections between the STZ approach and rate-and-state friction. We discuss the stability transition from stable shear to stick-slip failure and show that stick slip is predicted by STZ when the applied shear load exceeds a threshold value that is modulated by elastic stiffness and frictional rheology. Finally, we also show that STZ theory mimics fault zone dilation during the stick phase, consistent with lab observations.
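The stick-slip threshold described above has a well-known analogue in rate-and-state friction, where steady sliding of a spring-slider system becomes unstable once the loading stiffness falls below a critical value; the sketch below shows that classic criterion (not the STZ calculation itself), with illustrative lab-scale numbers.

```python
def critical_stiffness(sigma_n, a, b, d_c):
    """Rate-and-state critical stiffness k_c = (b - a) * sigma_n / d_c for the
    velocity-weakening case (b > a); stiffness below k_c implies stick-slip."""
    return (b - a) * sigma_n / d_c

# Illustrative values (assumed): 5 MPa normal stress, a = 0.005, b = 0.015,
# critical slip distance of 10 micrometres.
k_c = critical_stiffness(sigma_n=5e6, a=0.005, b=0.015, d_c=1e-5)
print(f"stick-slip expected for apparatus stiffness below {k_c:.2e} Pa/m")
```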
Flynn, Robert H.
2011-01-01
During May 13-16, 2006, rainfall in excess of 8.8 inches flooded central and southern New Hampshire. On May 15, 2006, a breach in a bank of the Suncook River in Epsom, New Hampshire, caused the river to follow a new path. In order to assess and predict the effect of the sediment in, and the subsequent flooding on, the river and flood plain, a study by the U.S. Geological Survey (USGS) characterizing sediment transport in the Suncook River was undertaken in cooperation with the Federal Emergency Management Agency (FEMA) and the New Hampshire Department of Environmental Services (NHDES). The U.S. Army Corps of Engineers (USACE) Hydrologic Engineering Center-River Analysis System (HEC-RAS) model was used to simulate flow and the transport of noncohesive sediments in the Suncook River from the upstream corporate limit of Epsom to the river's confluence with the Merrimack River in the Village of Suncook (Allenstown and Pembroke, N.H.), a distance of approximately 16 miles. In addition to determining total sediment loads, analyses in this study reflect flooding potentials for selected recurrence intervals that are based on the Suncook River streamgage flow data (streamgage 01089500) and on streambed elevations predicted by HEC-RAS for the end of water year 2010 (September 30, 2010) in the communities of Epsom, Pembroke, and Allenstown. This report presents changes in streambed and water-surface elevations predicted by the HEC-RAS model using data through the end of water year 2010 for the 50-, 10-, 2-, 1-, 0.2-percent annual exceedence probabilities (2-, 10-, 50-, 100-, and 500-year recurrence-interval floods, respectively), calculated daily and annual total sediment loads, and a determination of aggrading and degrading stream reaches. The model was calibrated and evaluated for a 400-day span from May 8, 2008 through June 11, 2009; these two dates coincided with field collection of stream cross-sectional elevation data. Seven sediment-transport functions were evaluated in the model with the Laursen (Copeland) sediment-transport function best describing the sediment load, transport behavior, and changes in streambed elevation for the specified spatial and temporal conditions of the 400-day calibration period. Simulation results from the model and field-collected sediment data indicate that, downstream of the avulsion channel, for the average daily mean flow during the study period, approximately 100 to 400 tons per day of sediment (varying with daily mean flow) was moving past the Short Falls Road Bridge over the Suncook River in Epsom, while approximately 0.05 to 0.5 tons per day of sediment was moving past the Route 28 bridge in Pembroke and Allenstown, and approximately 1 to 10 tons per day was moving past the Route 3 bridge in Pembroke and Allenstown. Changes in water-surface elevation that the model predicted for the end of water year 2010 to be a result of changes in streambed elevation ranged from a mean increase of 0.20 feet (ft) for the 50-percent annual exceedence-probability flood (2-year recurrence-interval flood) due to an average thalweg increase of 0.88 ft between the Short Falls Road Bridge and the Buck Street Dams in Pembroke and Allenstown to a mean decrease of 0.41 ft for the 50-percent annual exceedence-probability flood due to an average thalweg decrease of 0.49 ft above the avulsion in Epsom. An analysis of shear stress (force created by a fluid acting on sediment particles) was undertaken to determine potential areas of erosion and deposition. 
Based on the median grain size (d50) and shear stress analysis, the study found that in general, for floods greater than the 50-percent annual exceedence probability flood, the shear stress in the streambed is greater than the critical shear stress in much of the river study reach. The result is an expectation of streambed-sediment movement and erosion even at high exceedence-probability events, until the stream ultimately attains equilibrium through stream-stabilization measures or adjustment of the river over time. The potential for aggradation in the Suncook River is greatest in the reach downstream of the avulsion. Specifically, these reaches are (1) downstream of the former sand pit from adjacent to Round Pond to downstream of the flood chute at the large meander bends, and (2) downstream of the Short Falls Road Bridge to approximately 3,800 ft upstream of the Route 28 bridge. The potential for degradation (net lowering of the streambed) is greatest for the reach upstream of the avulsion to the Route 4 bridge.
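A generic way to turn d50 into a critical shear stress is the Shields criterion, sketched below for orientation; the HEC-RAS transport functions used in the study (e.g. Laursen-Copeland) embed their own formulations, so this is illustrative only.

```python
def critical_shear_stress(d50, shields=0.047, rho_s=2650.0, rho=1000.0, g=9.81):
    """Critical shear stress (Pa) for incipient motion of sediment of median
    grain size d50 (m), using a commonly quoted Shields parameter of 0.047
    for quartz sand in water."""
    return shields * (rho_s - rho) * g * d50

print(critical_shear_stress(0.5e-3))   # medium sand, roughly 0.38 Pa
```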
Impact of cruise ship emissions in Victoria, BC, Canada
NASA Astrophysics Data System (ADS)
Poplawski, Karla; Setton, Eleanor; McEwen, Bryan; Hrebenyk, Dan; Graham, Mark; Keller, Peter
2011-02-01
Characterization of the effects of cruise ship emissions on local air quality is scarce. Our objective was to investigate community level concentrations of fine particulate matter (PM2.5), nitrogen dioxide (NO2) and sulphur dioxide (SO2) associated with cruise ships in James Bay, Victoria, British Columbia (BC), Canada. Data obtained over four years (2005-2008) at the nearest air quality network site located 3.5 km from the study area, a CALPUFF modeling exercise (2007), and continuous measurements taken in the James Bay community over a three-month period during the 2009 cruise ship season were examined. Concentrations of PM2.5 and nitrogen oxide (NO) were elevated on weekends with ships present with winds from the direction of the terminal to the monitoring station. SO2 displayed the greatest impact from the presence of cruise ships in the area. Network data showed peaks in hourly SO2 when ships were in port during all years. The CALPUFF modeling analysis found predicted 24-hour SO2 levels to exceed the World Health Organization (WHO) guideline of 20 μg m^-3 for approximately 3% of 24-hour periods, with a maximum 24-hour concentration in the community of 41 μg m^-3; however, the CALPUFF model underestimated concentrations when predicted and measured concentrations were compared at the network site. Continuous monitoring at the location in the community predicted to experience the highest SO2 concentrations measured a maximum 24-hour concentration of 122 μg m^-3, and 16% of 24-hour periods were above the WHO standard. The 10-minute concentrations of SO2 reached up to 599 μg m^-3 and exceeded the WHO 10-minute SO2 guideline (500 μg m^-3) for 0.03% of 10-minute periods. No exceedances of BC Provincial or Canadian guidelines or standards were observed.
Wolf, P F J; Verreet, A
2008-01-01
Powdery mildew, caused by Erysiphe betae (Vanha) Weltzien, is an important leaf disease in the sugar beet growing areas of central Europe. Although the causal agent is mainly adapted to arid climatic zones, the disease appears every year, with the extent of infection depending mainly on weather conditions and cultivar susceptibility. Losses caused by powdery mildew seldom exceed 10% of sugar yield; moreover, losses are likely only when the epidemic onset occurs before end-August. Nevertheless, the epidemic onset varies over a wide range, as years with high incidence are followed by growing periods without severe infection. Therefore, to allow flexible control of the disease in which fungicide use can be minimized to the essential amount, a four-element IPM (Integrated Pest Management) concept was developed. The development is based on epidemiological field studies (Germany, 1993-2004, n = 76) of sugar beet leaf diseases under variation of year, site and cultivar. Efficacy of fungicide treatment timing was assessed in relation to the epidemic development. The comparison comprised fungicide sprays carried out from disease initiation until later stages of the epidemic. Additionally, the assessments were performed relative to an untreated control and a healthy control; the latter was treated three times according to a regime with three- to four-week intervals. The effect of different application timings was measured by the potential for disease and yield loss control. The four-element concept combines the advantages of four tools in order to compensate for the constraints of each single tool. The period without disease risk is determined by a so-called negative prognosis (i). First symptoms appear in the period from mid-July until the beginning of September. If disease initiation cannot be excluded, field observations on a sample of 100 leaves are advised. The disease scores enable the application of action thresholds (ii), defined as early stages of the epidemic in order to optimize the efficiency of fungicide treatments. For an initial treatment, a threshold of 5% infected leaves is defined. However, incidence at the level of the action threshold does not cause immediate damage; the stage at which sugar beet is effectively damaged is instead defined by the economic damage threshold (iii). As a consequence, because exceeding the action threshold does not imply immediate yield risk, loss prediction (iv) is required. The loss prediction assesses the likelihood that disease progress will exceed the economic damage threshold by harvest time. Loss risk exists when the action threshold is exceeded by mid-August for cultivars of low susceptibility, or by end-August if susceptibility is high.
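A toy sketch of the decision logic just described: the 5% action threshold comes from the abstract, while the calendar cut-offs and the binary susceptibility classes are illustrative placeholders for the concept's loss-prediction step.

```python
from datetime import date

def treatment_advised(incidence_pct, today, susceptibility):
    """Advise a fungicide treatment only if the 5% action threshold (ii) is
    exceeded AND the loss prediction (iv) still sees a risk of reaching the
    economic damage threshold by harvest."""
    if incidence_pct < 5.0:
        return False                        # below the action threshold
    if susceptibility == "low":
        deadline = date(today.year, 8, 15)  # mid-August (assumed cut-off)
    else:
        deadline = date(today.year, 8, 31)  # end-August (assumed cut-off)
    return today <= deadline                # later onsets rarely cause loss

print(treatment_advised(7.0, date(2008, 8, 1), "low"))   # True
```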
Back muscle strength, lifting, and stooped working postures.
Poulsen, E; Jørgensen, K
1971-09-01
When lifting loads and working in a forward stooped position, the muscles of the back rather than the ligaments and bony structures of the spine should overcome the gravitational forces. Formulae, based on measurements of back muscle strength, for prediction of maximal loads to be lifted, and for the ability to sustain work in a stooped position, have been worked out and tested in practical situations. From tests with 50 male and female subjects the simplest prediction formulae for maximum loads were: max. load = 1.10 x isometric back muscle strength for men; and max. load = 0.95 x isometric back muscle strength - 8 kg for women. Some standard values for maximum lifts and permissible single and repeated lifts have been calculated for men and women separately and are given in Table 1. From tests with 65 rehabilitees it was found that the maximum isometric strength of the back muscles measured at shoulder height should exceed 2/3 of the body weight, if fatigue and/or pain in the back muscles is to be avoided during work in a standing stooped position.
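The two prediction formulae translate directly into code; a trivial sketch:

```python
def predicted_max_load_kg(back_strength_kg, sex):
    """Simplest prediction formulae from the paper: maximum load to be
    lifted, from isometric back muscle strength (both in kg)."""
    if sex == "male":
        return 1.10 * back_strength_kg
    return 0.95 * back_strength_kg - 8.0    # female
```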
Skull counting in late stages after internal contamination by actinides.
Tani, Kotaro; Shutt, Arron; Kurihara, Osamu; Kosako, Toshiso
2015-02-01
Monitoring preparation for internal contamination with actinides (e.g. Pu and Am) is required to assess internal doses at nuclear fuel cycle-related facilities. In this paper, the authors focus on skull counting in the case of single-incident inhalation of 241Am and propose an effective procedure for skull counting with an existing system, taking into account the biokinetic behaviour of 241Am in the human body. The predicted response of the system to skull counting under a certain counting geometry was found to be only ∼1.0 × 10^-5 cps Bq^-1 one year after intake. However, this disadvantage could be remedied by repeated measurements of the skull during the late stage of the intake, because the predicted response reaches a plateau at about the 1000th day after exposure and exceeds that of lung counting. Further studies are needed for the development of a new detection system with higher sensitivity to perform reliable internal dose estimations based on direct measurements.
NASA Technical Reports Server (NTRS)
Gladden, Herbert J.; Melis, Matthew E.; Mockler, Theodore T.; Tong, Mike
1990-01-01
The aerodynamic heating at high flight Mach numbers, when shock interference heating is included, can be extremely high and can exceed the capability of most conventional metallic and potential ceramic materials available. Numerical analyses of the heat transfer and thermal stresses are performed on three actively cooled leading-edge geometries (models) made of three different materials to address the issue of survivability in a hostile environment. These analyses show a mixture of results from one configuration to the next. Results for each configuration are presented and discussed. Combinations of enhanced internal film coefficients and high material thermal conductivity of copper and tungsten are predicted to maintain the maximum wall temperature for each concept within acceptable operating limits. The exception is the TD nickel material which is predicted to melt for most cases. The wide range of internal impingement film coefficients (based on correlations) for these conditions can lead to a significant uncertainty in expected leading-edge wall temperatures. The equivalent plastic strain, inherent in each configuration which results from the high thermal gradients, indicates a need for further cyclic analysis to determine component life.
Semi-empirical studies of atomic structure. Progress report, 1 July 1982-1 February 1983
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, L.J.
1983-01-01
A program of studies of the properties of the heavy and highly ionized atomic systems which often occur as contaminants in controlled fusion devices is continuing. The project combines experimental measurements by fast-ion-beam excitation with semi-empirical data parametrizations to identify and exploit regularities in the properties of these very heavy and very highly ionized systems. The increasing use of spectroscopic line intensities as diagnostics for determining thermonuclear plasma temperatures and densities requires laboratory observation and analysis of such spectra, often to accuracies that exceed the capabilities of ab initio theoretical methods for these highly relativistic many electron systems. Through the acquisition and systematization of empirical data, remarkably precise methods for predicting excitation energies, transition wavelengths, transition probabilities, level lifetimes, ionization potentials, core polarizabilities, and core penetrabilities are being developed and applied. Although the data base for heavy, highly ionized atoms is still sparse, parametrized extrapolations and interpolations along isoelectronic, homologous, and Rydberg sequences are providing predictions for large classes of quantities, with a precision that is sharpened by subsequent measurements.
Semiempirical studies of atomic structure. Progress report, 1 July 1983-1 June 1984
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, L.J.
1984-01-01
A program of studies of the properties of the heavy and highly ionized atomic systems which often occur as contaminants in controlled fusion devices is continuing. The project combines experimental measurements by fast ion beam excitation with semiempirical data parametrizations to identify and exploit regularities in the properties of these very heavy and very highly ionized systems. The increasing use of spectroscopic line intensities as diagnostics for determining thermonuclear plasma temperatures and densities requires laboratory observation and analysis of such spectra, often to accuracies that exceed the capabilities of ab initio theoretical methods for these highly relativistic many electron systems. Through the acquisition and systematization of empirical data, remarkably precise methods for predicting excitation energies, transition wavelengths, transition probabilities, level lifetimes, ionization potentials, core polarizabilities, and core penetrabilities are being developed and applied. Although the data base for heavy, highly ionized atoms is still sparse, parametrized extrapolations and interpolations along isoelectronic, homologous, and Rydberg sequences are providing predictions for large classes of quantities, with a precision that is sharpened by subsequent measurements.
Superconducting critical fields of alkali and alkaline-earth intercalates of MoS2
NASA Technical Reports Server (NTRS)
Woollam, J. A.; Somoano, R. B.
1976-01-01
Results are reported for measurements of the critical-field anisotropy and temperature dependence of group-VIB semiconductor MoS2 intercalated with the alkali and alkaline-earth metals Na, K, Rb, Cs, and Sr. The temperature dependences are compared with present theories on the relation between critical field and transition temperature in the clean and dirty limits over the reduced-temperature range from 1 to 0.1. The critical-field anisotropy data are compared with predictions based on coupled-layers and thin-film ('independent-layers') models. It is found that the critical-field boundaries are steep in all cases, that the fields are greater than theoretical predictions at low temperatures, and that an unusual positive curvature in the temperature dependence appears which may be related to the high anisotropy of the layer structure. The results show that materials with the largest ionic intercalate atom diameters and hexagonal structures (K, Rb, and Cs compounds) have the highest critical temperatures, critical fields, and critical-boundary slopes; the critical fields of these materials are observed to exceed the paramagnetic limiting fields.
Estimating soil moisture exceedance probability from antecedent rainfall
NASA Astrophysics Data System (ADS)
Cronkite-Ratcliff, C.; Kalansky, J.; Stock, J. D.; Collins, B. D.
2016-12-01
The first storms of the rainy season in coastal California, USA, add moisture to soils but rarely trigger landslides. Previous workers proposed that antecedent rainfall, the cumulative seasonal rain from October 1 onwards, had to exceed specific amounts in order to trigger landsliding. Recent monitoring of soil moisture upslope of historic landslides in the San Francisco Bay Area shows that storms can cause positive pressure heads once soil moisture values exceed a threshold of volumetric water content (VWC). We propose that antecedent rainfall could be used to estimate the probability that VWC exceeds this threshold. A major challenge to estimating the probability of exceedance is that rain gauge records are frequently incomplete. We developed a stochastic model to impute (infill) missing hourly precipitation data. This model uses nearest neighbor-based conditional resampling of the gauge record using data from nearby rain gauges. Using co-located VWC measurements, imputed data can be used to estimate the probability that VWC exceeds a specific threshold for a given antecedent rainfall. The stochastic imputation model can also provide an estimate of uncertainty in the exceedance probability curve. Here we demonstrate the method using soil moisture and precipitation data from several sites located throughout Northern California. Results show a significant variability between sites in the sensitivity of VWC exceedance probability to antecedent rainfall.
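A compact sketch of the two steps described above, under simplifying assumptions: a single, gap-free neighboring gauge aligned to the same hours, nearest-neighbor conditional resampling to infill the target gauge, and an empirical exceedance curve from co-located soil moisture data.

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_gaps(target, neighbor, k=5):
    """Fill NaN hours in `target` by resampling among the k observed hours
    whose neighbor-gauge values are closest to the neighbor's value at the
    missing hour (a simplified nearest-neighbor conditional resampler)."""
    filled, obs = target.copy(), ~np.isnan(target)
    for t in np.flatnonzero(np.isnan(target)):
        distance = np.abs(neighbor[obs] - neighbor[t])
        pool = target[obs][np.argsort(distance)[:k]]
        filled[t] = rng.choice(pool)
    return filled

def exceedance_curve(antecedent_rain, vwc, threshold, bins):
    """P(VWC > threshold) per antecedent-rainfall bin (nan for empty bins)."""
    idx = np.digitize(antecedent_rain, bins)
    return [np.mean(vwc[idx == i] > threshold) for i in range(1, len(bins))]
```

Repeating the imputation many times yields an ensemble of antecedent-rainfall series, and hence the uncertainty band on the exceedance curve that the abstract mentions.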
Alcohol use among university students: Considering a positive deviance approach.
Tucker, Maryanne; Harris, Gregory E
2016-09-01
Harmful alcohol consumption among university students continues to be a significant issue. This study examined whether variables identified in the positive deviance literature would predict responsible alcohol consumption among university students. Surveyed students were categorized into three groups: abstainers, responsible drinkers and binge drinkers. Multinomial logistic regression modelling was significant (χ² = 274.49, degrees of freedom = 24, p < .001), with several variables predicting group membership. While the model classification accuracy rate (i.e. 71.2%) exceeded the proportional by chance accuracy rate (i.e. 38.4%), providing further support for the model, the model itself best predicted binge drinker membership over the other two groups.
Stall flutter analysis of propfans
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.
1988-01-01
Three semi-empirical aerodynamic stall models are compared with respect to their lift and moment hysteresis loop prediction, limit cycle behavior, easy implementation, and feasibility in developing the parameters required for stall flutter prediction of advanced turbines. For the comparison of aeroelastic response prediction including stall, a typical section model and a plate structural model are considered. The response analysis includes both plunging and pitching motions of the blades. In model A, a correction of the angle of attack is applied when the angle of attack exceeds the static stall angle. In model B, a synthesis procedure is used for angles of attack above static stall angles, and the time history effects are accounted for through the Wagner function.
Developing Hydrogeological Site Characterization Strategies based on Human Health Risk
NASA Astrophysics Data System (ADS)
de Barros, F.; Rubin, Y.; Maxwell, R. M.
2013-12-01
In order to provide more sustainable groundwater quality management and minimize the impact of contamination on humans, improved understanding and quantification of the interaction between hydrogeological models, geological site information and human health are needed. Considering the joint influence of these components on the overall human health risk assessment, and the corresponding sources of uncertainty, aids decision makers in allocating resources for data acquisition campaigns. This is important to (1) achieve remediation goals in a cost-effective manner, (2) protect human health and (3) keep water supplies clean in order to comply with quality standards. Such a task is challenging since a full characterization of the subsurface is unfeasible due to financial and technological constraints. In addition, human exposure and physiological response to contamination are subject to uncertainty and variability. Normally, sampling strategies are developed with the goal of reducing uncertainty, but less often are they developed in the context of their impacts on the overall system uncertainty. Therefore, quantifying the impact of each of these components (hydrogeological, behavioral and physiological) on the final human health risk prediction can provide guidance for decision makers to best allocate resources towards minimal prediction uncertainty. In this presentation, a multi-component human health risk-based framework is presented which allows decision makers to set priorities through an information entropy-based visualization tool. Results highlight the role of characteristic length-scales of flow and transport in determining data needs within an integrated hydrogeological-health framework. Conditions where uncertainty reduction in human health risk predictions may benefit from better understanding of the health component, as opposed to a more detailed hydrogeological characterization, are also discussed. Finally, results illustrate how different dose-response models can impact the probability of human health risk exceeding a regulatory threshold.
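To make the last point concrete, here is a toy Monte Carlo estimate of the probability that risk exceeds a regulatory threshold when the hydrogeological (concentration), behavioral (intake) and physiological (potency) components each carry uncertainty; every distribution and number below is an illustrative assumption, not from the presentation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

conc = rng.lognormal(mean=0.0, sigma=1.0, size=n)     # mg/L at the receptor
intake = rng.lognormal(mean=-3.5, sigma=0.3, size=n)  # L/kg/day
slope = rng.lognormal(mean=-4.0, sigma=0.5, size=n)   # risk per mg/kg/day

risk = conc * intake * slope                          # lifetime excess risk
threshold = 1e-4                                      # regulatory level (assumed)
print("P(risk > threshold) =", np.mean(risk > threshold))
```

Re-running the estimate with the uncertainty of one component shrunk (e.g. a smaller sigma for `conc`) indicates which characterization effort buys the largest reduction in exceedance probability.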
NASA Astrophysics Data System (ADS)
Wong, C. K.; Poon, Y. M.; Shin, F. G.
2003-01-01
Explicit formulas were derived for the effective piezoelectric stress coefficients of a 0-3 composite of ferroelectric spherical particles in a ferroelectric matrix which were then combined to give the more commonly used strain coefficients. Assuming that the elastic stiffness of the inclusion phase is sufficiently larger than that of the matrix phase, the previously derived explicit expressions for the case of a low volume concentration of inclusion particles [C. K. Wong, Y. M. Poon, and F. G. Shin, Ferroelectrics 264, 39 (2001); J. Appl. Phys. 90, 4690 (2001)] were "transformed" analytically by an effective medium theory (EMT) with appropriate approximations, to suit the case of a more concentrated suspension. Predictions of the EMT expressions were compared with the experimental values of composites of lead zirconate titanate ceramic particles dispersed in polyvinylidene fluoride and polyvinylidene fluoride-trifluoroethylene copolymer, reported by Furukawa [IEEE Trans. Electr. Insul. 24, 375 (1989)] and by Ng et al. [IEEE Trans. Ultrason. Ferroelectr. Freq. Control 47, 1308 (2000)] respectively. Fairly good agreement was obtained. Comparisons with other predictions, including the predictions given by numerically solving the EMT scheme, were also made. It was found that the analytic and numeric EMT schemes agreed with each other very well for an inclusion of volume fraction not exceeding 60%.
Buckley, Thomas N; Vice, Heather; Adams, Mark A
2017-12-01
The Kok effect - an abrupt decline in quantum yield (QY) of net CO2 assimilation at low photosynthetic photon flux density (PPFD) - is widely used to estimate respiration in the light (R), which assumes the effect is caused by light suppression of R. A recent report suggested much of the Kok effect can be explained by declining chloroplastic CO2 concentration (cc) at low PPFD. Several predictions arise from the hypothesis that the Kok effect is caused by declining cc, and we tested these predictions in Vicia faba. We measured CO2 exchange at low PPFD, in 2% and 21% oxygen, in developing and mature leaves, which differed greatly in R in darkness. Our results contradicted each of the predictions based on the cc effect: QY exceeded the theoretical maximum value for photosynthetic CO2 uptake; QY was larger in 21% than 2% oxygen; and the change in QY at the Kok effect breakpoint was unaffected by oxygen. Our results strongly suggest the Kok effect arises largely from a progressive decline in R with PPFD that includes both oxygen-sensitive and -insensitive components. We suggest an improved Kok method that accounts for high cc at low PPFD.
Tree mortality predicted from drought-induced vascular damage
Anderegg, William R.L.; Flint, Alan L.; Huang, Cho-ying; Flint, Lorraine E.; Berry, Joseph A.; Davis, Frank W.; Sperry, John S.; Field, Christopher B.
2015-01-01
The projected responses of forest ecosystems to warming and drying associated with twenty-first-century climate change vary widely, from resiliency to widespread tree mortality [1-3]. Current vegetation models lack the ability to account for mortality of overstorey trees during extreme drought owing to uncertainties in mechanisms and thresholds causing mortality [4,5]. Here we assess the causes of tree mortality, using field measurements of branch hydraulic conductivity during ongoing mortality in Populus tremuloides in the southwestern United States and a detailed plant hydraulics model. We identify a lethal plant water stress threshold that corresponds with a loss of vascular transport capacity from air entry into the xylem. We then use this hydraulic-based threshold to simulate forest dieback during historical drought, and compare predictions against three independent mortality data sets. The hydraulic threshold predicted with 75% accuracy regional patterns of tree mortality as found in field plots and mortality maps derived from Landsat imagery. In a high-emissions scenario, climate models project that drought stress will exceed the observed mortality threshold in the southwestern United States by the 2050s. Our approach provides a powerful and tractable way of incorporating tree mortality into vegetation models to resolve uncertainty over the fate of forest ecosystems in a changing climate.
NASA Astrophysics Data System (ADS)
Jiang, X.; Lu, W. X.; Zhao, H. Q.; Yang, Q. C.; Yang, Z. P.
2014-06-01
The aim of the present study is to evaluate the potential ecological risk and trend of soil heavy-metal pollution around a coal gangue dump in Jilin Province (Northeast China). The concentrations of Cd, Pb, Cu, Cr and Zn were monitored by inductively coupled plasma mass spectrometry (ICP-MS). The potential ecological risk index method developed by Hakanson (1980) was employed to assess the potential risk of heavy-metal pollution. Potential ecological risks in the order ER(Cd) > ER(Pb) > ER(Cu) > ER(Cr) > ER(Zn) were obtained, showing that Cd was the most important factor leading to risk. Based on the Cd pollution history, the cumulative acceleration and cumulative rate of Cd were estimated, and a model predicting the number of years until the standard is exceeded was established; it was used to predict the Cd pollution trend under both an accelerated accumulation mode and a uniform mode. Pearson correlation analysis and correspondence analysis were employed to identify the sources of heavy metals and the relationship between sampling points and variables. These findings provide useful insights for making appropriate management strategies to prevent or decrease heavy-metal pollution around the coal gangue dump in the Yangcaogou coal mine and other similar areas elsewhere.
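For reference, the Hakanson (1980) single-metal risk factor is simple to compute; the toxic-response factors below are the commonly quoted Hakanson values, while the concentrations in the usage comment are placeholders.

```python
# Potential ecological risk factor for metal i: ER_i = Tr_i * C_i / C0_i,
# with Tr_i the toxic-response factor, C_i the measured concentration and
# C0_i the background (reference) concentration.
TR = {"Cd": 30, "Pb": 5, "Cu": 5, "Cr": 2, "Zn": 1}

def ecological_risk(measured, background):
    return {m: TR[m] * measured[m] / background[m] for m in measured}

# e.g. ecological_risk({"Cd": 0.6}, {"Cd": 0.1}) -> {"Cd": 180.0}
```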
An overview of self-consistent methods for fiber-reinforced composites
NASA Technical Reports Server (NTRS)
Gramoll, Kurt C.; Freed, Alan D.; Walker, Kevin P.
1991-01-01
The Walker et al. (1989) self-consistent method to predict both the elastic and the inelastic effective material properties of composites is examined and compared with the results of other self-consistent and elastically based solutions. The elastic part of their method is shown to be identical to other self-consistent methods for non-dilute reinforced composite materials, namely the Hill (1965), Budiansky (1965), and Nemat-Nasser et al. (1982) derivations. A simplified form of the non-dilute self-consistent method is also derived. The predicted elastic effective material properties for fiber-reinforced material using the Walker method were found to deviate from the elasticity solution for the ν31, K12, and μ31 material properties (the fiber is in the 3 direction), especially at the larger volume fractions. Also, the prediction for the transverse shear modulus, μ12, exceeds one of the accepted Hashin bounds. Only the longitudinal elastic modulus E33 agrees with the elasticity solution. The differences between the Walker and the elasticity solutions are primarily due to the assumption used in the derivation of the self-consistent method, i.e., the strain fields in the inclusions and the matrix are assumed to remain constant, which is not a correct assumption for a high concentration of inclusions.
Weltman, R; Brands, C M J; Corral, E; Desmares-Koopmans, M J E; Migchielsen, M H J; Oudhoff, K A; de Roode, D F
2012-06-01
In this paper the results of a thorough evaluation of the environmental fate and effects of azilsartan are presented. Azilsartan medoxomil is administered as a pro-drug for the treatment of patients with essential hypertension. The pro-drug is converted by hydrolysis to the active pharmaceutical ingredient azilsartan. Laboratory tests to evaluate the environmental fate and effects of azilsartan medoxomil were conducted with azilsartan and performed in accordance with OECD test guidelines. The predicted environmental concentration (PEC) in surface water was estimated at 0.32 μg L^-1 (above the action limit of 0.01 μg L^-1), triggering a Phase II assessment. Azilsartan is not readily biodegradable. Results of the water sediment study demonstrated significant shifting of azilsartan metabolites to sediment. Based on the equilibrium partitioning method, metabolites are unlikely to pose a risk to sediment-dwelling organisms. Ratios of the predicted environmental concentrations (PECs) to the predicted-no-effect concentrations (PNECs) did not exceed the relevant triggers, and the risk to aquatic, sewage treatment plant (STP), groundwater and sediment compartments was concluded to be acceptable. A terrestrial assessment was not triggered. Azilsartan poses an acceptable risk to the environment.
Virtual cathode formations in nested-well configurations
NASA Astrophysics Data System (ADS)
Stephens, K. F.; Ordonez, C. A.; Peterkin, R. E.
1999-12-01
Complete transmission of an electron beam through a cavity is not possible if the current exceeds the space-charge limited current. The formation of a virtual cathode reflects some of the beam electrons and reduces the current transmitted through the cavity. Transients in the injected current have been shown to lower the transmitted current below the value predicted by the electrostatic Child-Langmuir law. The present work considers the propagation of an electron beam through a nested-well configuration. Electrostatic particle-in-cell simulations are used to demonstrate that ions can be trapped in the electric potential depression of an electron beam. Furthermore, the trapped ions can prevent the formation of a virtual cathode for beam currents exceeding the space-charge limit.
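For reference, the space-charge limit invoked above is set by the Child-Langmuir law; a minimal sketch of the planar-diode form (the textbook expression, not this paper's nested-well geometry):

```python
import numpy as np

def child_langmuir_current_density(V, d):
    """Space-charge-limited current density (A/m^2) of a planar vacuum diode:
    J = (4*eps0/9) * sqrt(2*e/m_e) * V**1.5 / d**2."""
    eps0, e, m_e = 8.854e-12, 1.602e-19, 9.109e-31
    return (4.0 * eps0 / 9.0) * np.sqrt(2.0 * e / m_e) * V ** 1.5 / d ** 2

print(child_langmuir_current_density(V=1e4, d=0.01))   # 10 kV across 1 cm
```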
Thermal and solutal conditions at the tips of a directional dendritic growth front
NASA Technical Reports Server (NTRS)
Mccay, T. D.; Mccay, Mary H.; Hopkins, John A.
1991-01-01
The line-of-sight averaged, time-dependent dendrite tip concentrations for the diffusion dominated vertical directional solidification of a metal model (ammonium chloride and water) were obtained by extrapolating exponentially fit diffusion layer profiles measured using a laser interferometer. The tip concentrations were shown to increase linearly with time throughout the diffusion dominated growth process for an initially stagnant dendritic array. For the cases examined, the process was terminated by convective breakdown when the conditionally stable diffusion layer exceeded the critical Rayleigh criterion. The transient tip concentrations were determined to significantly exceed the values predicted for steady state, thus producing much larger constitutional undercoolings. This has ramifications for growth speeds, arm spacings and the dendritic structure itself.
Novel application of species richness estimators to predict the host range of parasites.
Watson, David M; Milner, Kirsty V; Leigh, Andrea
2017-01-01
Host range is a critical life history trait of parasites, influencing prevalence, virulence and ultimately determining their distributional extent. Current approaches to measuring host range are sensitive to sampling effort, the number of known hosts increasing with more records. Here, we develop a novel application of results-based stopping rules to determine how many hosts should be sampled to yield stable estimates of the number of primary hosts within regions, then use species richness estimation to predict host ranges of parasites across their distributional ranges. We selected three mistletoe species (hemiparasitic plants in the Loranthaceae) to evaluate our approach: a strict host specialist (Amyema lucasii, dependent on a single host species), an intermediate species (Amyema quandang, dependent on hosts in one genus) and a generalist (Lysiana exocarpi, dependent on many genera across multiple families), comparing results from geographically-stratified surveys against known host lists derived from herbarium specimens. The results-based stopping rule (stop sampling a bioregion once observed host richness exceeds 80% of the host richness predicted using the Abundance-based Coverage Estimator) worked well for most bioregions studied, being satisfied after three to six sampling plots (each representing 25 host trees), but was unreliable in those bioregions with high host richness or high proportions of rare hosts. Although generating stable predictions of host range with minimal variation among the six estimators trialled, distribution-wide estimates fell well short of the number of hosts known from herbarium records. This mismatch, coupled with the discovery of nine previously unrecorded mistletoe-host combinations, further demonstrates the limited ecological relevance of simple host-parasite lists. By collecting estimates of host range of constrained completeness, our approach maximises sampling efficiency while generating comparable estimates of the number of primary hosts, with broad applicability to many host-parasite systems.
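The stopping rule above hinges on the Abundance-based Coverage Estimator; a compact sketch of the standard ACE formula (Chao and Lee) and the 80% rule follows. Edge cases (no rare species, or all rare species being singletons) are deliberately left unhandled.

```python
import numpy as np

def ace_richness(abundances, rare_cutoff=10):
    """Abundance-based Coverage Estimator of species (here: host) richness."""
    a = np.asarray(abundances)
    rare = a[(a > 0) & (a <= rare_cutoff)]        # rare species counts
    s_abund = np.sum(a > rare_cutoff)             # abundant species
    n_rare, f1 = rare.sum(), np.sum(rare == 1)
    c_ace = 1.0 - f1 / n_rare                     # sample coverage estimate
    ks = np.arange(1, rare_cutoff + 1)
    fk = np.array([np.sum(rare == k) for k in ks])
    gamma2 = max((len(rare) / c_ace) * np.sum(ks * (ks - 1) * fk)
                 / (n_rare * (n_rare - 1)) - 1.0, 0.0)
    return s_abund + len(rare) / c_ace + (f1 / c_ace) * gamma2

def keep_sampling(observed_richness, abundances):
    """Results-based stopping rule: keep sampling while observed richness
    is below 80% of the ACE prediction."""
    return observed_richness < 0.8 * ace_richness(abundances)
```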
Variable Selection for Regression Models of Percentile Flows
NASA Astrophysics Data System (ADS)
Fouad, G.
2017-12-01
Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high degree of multicollinearity, possibly illustrating the co-evolution of climatic and physiographic conditions. Given the ineffectiveness of many variables used here, future work should develop new variables that target specific processes associated with percentile flows.
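As one concrete instance of the predictive-power selectors compared above (random forests), the sketch below ranks candidate basin characteristics by importance for a percentile flow; the file name, column names, and the use of scikit-learn are assumptions for illustration, not the study's actual variable list.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("basins.csv")             # hypothetical table of 918 basins
X, y = df.drop(columns="Q50"), df["Q50"]   # Q50: 50-percent exceedance flow
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
ranking = sorted(zip(X.columns, rf.feature_importances_),
                 key=lambda t: t[1], reverse=True)
print(ranking[:10])                        # strongest candidate variables
```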
Pezzulo, Giovanni; Rigoli, Francesco; Chersi, Fabian
2013-01-01
Instrumental behavior depends on both goal-directed and habitual mechanisms of choice. Normative views cast these mechanisms in terms of model-free and model-based methods of reinforcement learning, respectively. An influential proposal hypothesizes that model-free and model-based mechanisms coexist and compete in the brain according to their relative uncertainty. In this paper we propose a novel view in which a single Mixed Instrumental Controller produces both goal-directed and habitual behavior by flexibly balancing and combining model-based and model-free computations. The Mixed Instrumental Controller performs a cost-benefits analysis to decide whether to chose an action immediately based on the available “cached” value of actions (linked to model-free mechanisms) or to improve value estimation by mentally simulating the expected outcome values (linked to model-based mechanisms). Since mental simulation entails cognitive effort and increases the reward delay, it is activated only when the associated “Value of Information” exceeds its costs. The model proposes a method to compute the Value of Information, based on the uncertainty of action values and on the distance of alternative cached action values. Overall, the model by default chooses on the basis of lighter model-free estimates, and integrates them with costly model-based predictions only when useful. Mental simulation uses a sampling method to produce reward expectancies, which are used to update the cached value of one or more actions; in turn, this updated value is used for the choice. The key predictions of the model are tested in different settings of a double T-maze scenario. Results are discussed in relation with neurobiological evidence on the hippocampus – ventral striatum circuit in rodents, which has been linked to goal-directed spatial navigation. PMID:23459512
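One plausible reading of the Value of Information computation described above, sketched as code: simulation is worth more when cached action values are uncertain and when the best and runner-up values are close together. The functional form and constants are guesses for illustration, not the paper's equations.

```python
import numpy as np

def value_of_information(q_mean, q_std, scale=1.0):
    """High when cached values are uncertain (large q_std) and hard to
    discriminate (small gap between the two best actions)."""
    order = np.argsort(q_mean)[::-1]
    gap = q_mean[order[0]] - q_mean[order[1]]
    return scale * np.mean(q_std) / (gap + 1e-6)

def choose_action(q_mean, q_std, simulate, cost=0.5):
    """Mixed controller: pay for mental simulation only when VoI > cost."""
    q_mean, q_std = np.asarray(q_mean, float), np.asarray(q_std, float)
    if value_of_information(q_mean, q_std) > cost:
        q_mean = simulate(q_mean)      # model-based refinement of cached values
    return int(np.argmax(q_mean))
```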
Khoury, Ghassan A; Diamond, Gary L
2003-01-01
Superfund sites that are contaminated with lead and undergoing remedial action generate lead-enriched dust that can be released into the air. Activities that can emit lead-enriched dust include demolition of lead smelter buildings, stacks, and baghouses; on-site traffic of heavy construction vehicles; and excavation of soil. Typically, air monitoring stations are placed around the perimeter of a site of an ongoing remediation to monitor air lead concentrations that might result from site emissions. The National Ambient Air Quality (NAAQ) standard, established in 1978 to be a quarterly average of 1.5 μg/m^3, is often used as a trigger level for corrective action to reduce emissions. This study explored modeling approaches for assessing potential risks to children from air lead emissions from the RSR Superfund site in West Dallas, TX, during demolition and removal of a smelter facility. The EPA Integrated Exposure Uptake Biokinetic (IEUBK) model and the International Commission on Radiological Protection (ICRP) lead model were used to simulate blood lead concentrations in children, based on monitored air lead concentrations. Although air lead concentrations at monitoring stations located in the downwind community intermittently exceeded the NAAQ standard, both models indicated that exposures to children in the community areas did not pose a significant long-term or acute risk. Long-term risk was defined as greater than 5% probability of a child having a long-term blood lead concentration that exceeded 10 μg/dL, which is the CDC and the EPA blood lead concern level. Short-term or acute risk was defined as greater than 5% probability of a child having a blood lead concentration on any given day that exceeded 20 μg/dL, which is the CDC trigger level for medical evaluation (this is not intended to imply that 20 μg/dL is a threshold for health effects in children exposed acutely to airborne lead). The estimated potential long-term and short-term exposures at the downwind West Dallas community did not result in more than 5% of children exceeding the target blood lead levels. The models were also used to estimate air lead levels for short-term and long-term exposures that would not exceed specified levels of risk (risk-based concentrations, RBCs). RBCs were derived for various daily exposure durations (3 or 8 h/day) and frequencies (1-7 days/week). RBCs based on the ICRP model ranged from 0.3 (7 days/week, 8 h/day) to 4.4 μg/m^3 (1 day/week, 3 h/day) for long-term exposures and were lower than those based on the IEUBK model. For short-term exposures, the RBCs ranged from 3.5 to 29.0 μg/m^3. Recontamination of remediated residential yards from deposition of air lead emitted during remedial activities at the RSR Superfund site was also examined. The predicted increase in soil concentration due to lead deposition at the monitoring station, which represented the community at large, was 3.0 mg/kg. This potential increase in soil lead concentration was insignificant, less than a 1% increase, when compared to the clean-up level of 500 mg/kg developed for residential yards at the site.
Forecasting the timing of peak mandibular growth in males by using skeletal age.
Hunter, W Stuart; Baumrind, Sheldon; Popovich, Frank; Jorgensen, Gertrud
2007-03-01
It is generally believed that orthodontic treatment of a patient with a Class II malocclusion and a small mandible is enhanced by good growth at puberty, making the timing of peak mandibular growth a point of clinical interest. To test the belief that skeletal age (early, average, or late) can be used to predict the timing of maximum mandibular growth, the predictive relationship between skeletal age and peak mandibular growth velocity (PMdV) at puberty was evaluated in 94 boys by using their longitudinal records from 4 to 18 years of age. Skeletal age was determined for each subject at ages 9 through 14 by using the method of Greulich and Pyle. At age 9, the Greulich and Pyle measurements predicted that 30 of the 94 subjects would have delayed PMdV equal to or exceeding 1 SD (of the mean age for PMdV), and 10 would have advanced PMdV equal to or exceeding 1 SD. When the actual age of PMdV was determined retrospectively from plots of annual mandibular growth increments, only 4 of the 30 in the delayed group (13%) had actually experienced delays in PMdV, and only 2 of the 10 in the advanced group (20%) had experienced accelerated PMdV. Skeletal age is therefore not a reliable predictor of the timing of PMdV.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bova, Valentina; Miraglia, Roberto, E-mail: rmiraglia@ismett.edu; Maruzzelli, Luigi
2013-04-15
This study was designed to analyze the clinical results in patients with hepatocellular carcinoma (HCC) exceeding the Milan criteria who were suitable for liver transplantation and underwent intra-arterial therapies (IAT), in order to determine predictive factors of successful downstaging. A total of 277 consecutive patients with cirrhosis and HCC were treated by IAT (transarterial oily chemoembolization, transarterial chemoembolization, transarterial embolization) in a single center. Eighty patients exceeded the Milan criteria. Patients with infiltrative HCC, hypovascular HCC, and portal vein thrombosis were excluded, for a final study population of 48 patients. Tumor response to IAT was evaluated with CT and/or MRI according to modified RECIST criteria. Successful downstaging was defined as a reduction in the number and size of viable tumors to within the Milan criteria, and serum alpha-fetoprotein (AFP) <100 ng/mL, for at least 6 months. Nineteen patients (39%) had their tumors successfully downstaged; 29 patients (61%) did not. Multivariate analysis showed that AFP level <100 ng/mL and 3-year calculated survival probability using the Metroticket calculator were the only independent predictors of successful downstaging (p < 0.023 and p < 0.049, respectively). Biological characteristics of HCC, such as AFP levels <100 ng/mL and a high 3-year calculated survival probability, may predict a good response to downstaging after IAT.
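The downstaging endpoint combines a tumor-burden rule, a biomarker threshold, and a durability requirement. A minimal sketch of that definition, simplified to tumor number and size (the full Milan criteria also exclude vascular invasion and extrahepatic spread), with hypothetical function names:

```python
def within_milan(tumor_sizes_cm):
    """Milan criteria, size/number component only:
    one tumor <= 5 cm, or 2-3 tumors each <= 3 cm."""
    n = len(tumor_sizes_cm)
    if n == 1:
        return tumor_sizes_cm[0] <= 5.0
    return 2 <= n <= 3 and max(tumor_sizes_cm) <= 3.0

def successfully_downstaged(tumor_sizes_cm, afp_ng_ml, months_sustained):
    """Study definition: viable tumors reduced to within Milan criteria,
    AFP < 100 ng/mL, sustained for at least 6 months."""
    return within_milan(tumor_sizes_cm) and afp_ng_ml < 100 and months_sustained >= 6

# Hypothetical patient: two viable lesions of 2.8 and 2.1 cm, AFP 40 ng/mL
print(successfully_downstaged([2.8, 2.1], 40.0, 7))  # True
```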
NASA Astrophysics Data System (ADS)
Waal, H. D.; Muntendam-Bos, A.; Breunese, J.; Roest, H.; Fokker, P. A.
2012-12-01
Reliable management of subsidence caused by hydrocarbon production and salt solution mining is important for a country like the Netherlands, where most of the land surface is below or near sea level. However, a factor-of-two difference between prediction and observation is not uncommon. To nevertheless ensure a high probability that subsidence is kept within the limits an area can robustly sustain, a tightly integrated prediction/monitoring/updating loop is applied. Prior to production, scenarios spanning the range of parameter and model uncertainties are generated to calculate possible subsidence outcomes. The probability of each scenario is updated over time through confrontation with measurements (e.g., using Bayesian statistics) as they become available. Production can thus be halted or adjusted in a timely manner if probabilities start to indicate an unacceptable risk of exceeding set limits now or in the future. A number of projects with well-documented, high-quality prediction and monitoring were started in the Netherlands in the second half of the previous century. They provide quality case histories covering multi-decade production periods from which important lessons have been extracted. Firstly, the data make clear that sandstone reservoir compaction is not a linear function of pressure depletion. Initially the rock in the field compacts much less than expected based on standard lab measurements. As pressure drops further, compaction gradually increases, reaching and exceeding lab values. Various mechanisms could be responsible: delayed compaction in lower-permeability or poorly connected parts of the reservoir or aquifers; intrinsic non-linear, time-dependent, rate-type, or diffusive behavior of the reservoir rock; previous deeper burial or increasing overpressure over geological time. The observed field behavior is described reasonably well by a single exponential time-decay model. The non-linear and/or time-dependent field behavior has to be accounted for when updating predictions based on early field data; otherwise it leads to under-prediction of subsidence, followed by multiple upward adjustments as new data become available. Secondly, the large difference between lab and field loading rates results in late-time field compressibilities that can be 20 to 30% higher than the lab data. For chalk reservoirs the difference in loading rate causes much earlier pore collapse in the field. These effects can and need to be accounted for. Thirdly, the case histories show that the shape of the subsidence bowl changes over time: it becomes steeper for hydrocarbon extraction and flatter for salt extraction. This is believed to be related to the changing elasticity contrast between the compacting volume and its surroundings as the reservoir compressibility increases and surrounding salt layers start to creep. The observed shape changes can be modeled numerically or by a varying rigid basement depth in the analytical van Opstal model. Not accounting for them can result in large subsidence allocation errors where salt-mining and hydrocarbon-production bowls overlap.
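The scenario-updating loop described here maps naturally onto a Bayesian weight update. A minimal sketch under simple assumptions (Gaussian measurement error, a single survey epoch; all names and values hypothetical):

```python
import numpy as np

def update_scenario_probs(priors, predictions, observed, sigma=0.01):
    """Bayes update of scenario probabilities from one subsidence measurement.
    priors: prior probability per scenario; predictions: each scenario's
    predicted subsidence (m) at the survey date; observed: measured value (m);
    sigma: measurement standard deviation (m). All values illustrative."""
    likelihood = np.exp(-0.5 * ((observed - np.asarray(predictions)) / sigma) ** 2)
    posterior = np.asarray(priors) * likelihood
    return posterior / posterior.sum()

# Hypothetical example: three scenarios predicting 5, 8, and 12 cm of subsidence
preds = np.array([0.05, 0.08, 0.12])
probs = update_scenario_probs([1/3, 1/3, 1/3], preds, observed=0.075)
print(probs)                          # weight shifts toward the 8 cm scenario
print(probs[preds > 0.10].sum())      # P(exceeding a hypothetical 10 cm limit)
```

In practice each scenario would be confronted with many leveling or InSAR epochs, and the exceedance probability would be tracked against the set limit over the full forecast horizon.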
Ethnic differences in risk from mercury among Savannah River fishermen.
Burger, J; Gaines, K F; Gochfeld, M
2001-06-01
Fishing plays an important role in people's lives, and contaminant levels in fish are a public health concern. Many states have issued consumption advisories; South Carolina and Georgia have issued them for the Savannah River based on mercury and radionuclide levels. This study examined ethnic differences in risk from mercury exposure among people consuming fish from the Savannah River, based on site-specific consumption patterns and analysis of mercury in fish. Among fish, there were significant interspecies differences in mercury levels, and there were ethnic differences in consumption patterns. Two methods of examining risk are presented: (1) the Hazard Index (HI), and (2) estimates of how much and how often people of different body mass can consume different species of fish. Blacks consumed more fish and had higher HIs than Whites. Even at the median consumption, the HI for Blacks exceeded 1.0 for bass and bowfin, and, at the 75th percentile of consumption, the HI exceeded 1.0 for almost all species. At the White male median consumption, no HI exceeded 1.0, but for the 95th percentile consumer, the HI exceeded 1.0 almost regardless of which species were eaten. Although females consumed about two-thirds the quantity of males, HIs exceeded 1.0 for most Black females and for White females at or above the 75th percentile of consumption. Thus, close to half of the Black fishermen were eating enough Savannah River fish to exceed HI = 1. Caution must be used in evaluating an HI because the RfDs were developed to protect the most vulnerable individuals. The percentage of each fish species tested that exceeded the maximum permitted limits of mercury in fish was also examined: over 80% of bowfin, 38% of bass, and 21% of pickerel sampled exceeded 0.5 ppm. The risk methodology is applicable anywhere that comparable data can be obtained. The risk estimates are representative of fishermen along the Savannah River, but not necessarily of the general population.
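A Hazard Index of this kind is the ratio of the estimated average daily dose to the reference dose (RfD). A minimal sketch, assuming EPA's methylmercury RfD of 0.0001 mg/kg/day and hypothetical consumption figures:

```python
def hazard_index(conc_ppm, meal_g, meals_per_month, body_kg, rfd=0.0001):
    """HI = average daily dose / reference dose.
    conc_ppm: fish mercury concentration (mg/kg = ppm);
    rfd: EPA methylmercury RfD, 0.0001 mg/kg/day (check current guidance)."""
    daily_intake_kg = meal_g / 1000.0 * meals_per_month / 30.0   # kg fish/day
    dose = conc_ppm * daily_intake_kg / body_kg                  # mg Hg/kg/day
    return dose / rfd

# Hypothetical example: bowfin at 0.8 ppm, eight 150 g meals/month, 70 kg adult
print(round(hazard_index(0.8, 150, 8, 70), 2))  # HI > 1 signals potential concern
```

This illustrative consumer lands well above HI = 1, mirroring the pattern reported for high-percentile consumers of bowfin and bass.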
24 CFR 235.1218 - Additional eligibility requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Additional eligibility requirements... interest due and delinquent interest not to exceed two months; and (2) The original principal amount of the... based on the closing price for three-month forward delivery contracts closest to par but not exceeding...
47 CFR 22.659 - Effective radiated power limits.
Code of Federal Regulations, 2011 CFR
2011-10-01
... radiated power limits. The purpose of the rules in this section, which limit effective radiated power (ERP... subsequently relocated. (a) Maximum ERP. The ERP of base transmitters must not exceed 100 Watts under any circumstances. The ERP of mobile transmitters must not exceed 60 Watts under any circumstances. (b) Co-channel...
50 CFR 648.24 - Fishery closures and accountability measures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... § 648.26 will be prohibited. (2) Mackerel commercial landings overage repayment. If the mackerel ACL is... recreational landings overage repayment. If the mackerel ACL is exceeded, and the recreational fishery landings... ACL is exceeded, and that the overage has not been accommodated through other landing-based AMs, but...
50 CFR 648.103 - Minimum fish sizes.
Code of Federal Regulations, 2011 CFR
2011-10-01
... determines that the inaction of one or more states will cause the commercial sector ACL to be exceeded, or if... more states have been reopened without causing the sector ACL to be exceeded. (b) State commercial...) Commercial ACL overage evaluation. The commercial sector ACL will be evaluated based on a single-year...
Code of Federal Regulations, 2014 CFR
2014-10-01
... exceed $15 million for one year after April 2, 2013. Twelve (12) months after April 2, 2013, the amount... exceed $30 million. Every two years, on the anniversary after the cap on required financial responsibility reaches $30 million, the cap shall automatically adjust to the nearest $1 million based on changes...
DOT National Transportation Integrated Search
2015-01-01
Traditionally, the Iowa DOT has used the Iowa Runoff Chart and single-variable regional regression equations (RREs) from a USGS report (published in 1987) as the primary methods to estimate annual exceedance-probability discharge (AEPD) for small...
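A single-variable RRE of the kind referenced typically takes the power-law form Q = aA^b in drainage area A, fit by least squares in log space. A minimal sketch with hypothetical gage data, not the 1987 USGS equations:

```python
import numpy as np

def fit_rre(drainage_areas, discharges):
    """Fit a single-variable regional regression equation Q = a * A^b
    by ordinary least squares in log space (a minimal sketch)."""
    b, log_a = np.polyfit(np.log(drainage_areas), np.log(discharges), 1)
    return np.exp(log_a), b

# Hypothetical gage data: drainage area (mi^2) vs. 1%-AEP discharge (ft^3/s)
a, b = fit_rre([12.0, 45.0, 110.0, 300.0], [850.0, 2300.0, 4600.0, 9800.0])
print(f"Q = {a:.1f} * A^{b:.2f}")
```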
Survivor aspen: Can we predict who will get voted off the island?
F. A. Baker; J. D. Shaw
2008-01-01
During the past few years, aspen have been dying at rates that appear to exceed normal background mortality. We believe that this mortality should not be unexpected, given the severe drought of the past 10 years. We examine the literature and FIA data and identify several factors indicating that such mortality should be expected.
40 CFR 799.9130 - TSCA acute inhalation toxicity.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., final guideline). These sources are available at the address in paragraph (g) of this section. (c... settling velocity as the particle in question, whatever its size, shape, and density. It is used to predict... between groups used in a test should not exceed ±20% of the mean weight of each sex. (C) Number of animals...
40 CFR 799.9130 - TSCA acute inhalation toxicity.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., final guideline). These sources are available at the address in paragraph (g) of this section. (c... settling velocity as the particle in question, whatever its size, shape, and density. It is used to predict... between groups used in a test should not exceed ±20% of the mean weight of each sex. (C) Number of animals...
Relative Weights of the Backpacks of Elementary-Aged Children
ERIC Educational Resources Information Center
Bryant, Benjamin P.; Bryant, Judith B.
2014-01-01
The purpose of the study was to describe the range of relative backpack weights of one group of elementary-aged children and the extent to which they exceeded recommended levels. A second purpose was to explore whether gender and age help predict the relative weight of children's backpacks. Ninety-five 8- to 12-year-old elementary school students…
1980-11-01
Program termination and warning diagnostics: TRJTRY: STOP, normal termination (test: TIME > TIMEF-0.00001 or NDIFEQ = 1). BDCOEF: warning, NAFLD exceeds blank common (test: NAFLD > NADIM). CRFWBD: STOP 001 (test: NER > 1; equation solution singular). INVER2: STOP, matrix is singular.
Rural areas with close proximity to oil and natural gas operations in Utah have experienced winter ozone levels that exceed EPA’s National Ambient Air Quality Standards (NAAQS). Through a collaborative effort, EPA Region 8 – Air Program, ORD, and OAQPS used the Commun...
Cost Discrepancy, Signaling, and Risk Taking
ERIC Educational Resources Information Center
Lemon, Jim
2005-01-01
If risk taking is in some measure a signal to others by the person taking risks, the model of "costly signaling" predicts that the more the apparent cost of the risk to others exceeds the perceived cost of the risk to the risk taker, the more attractive that risk will be as a signal. One hundred and twelve visitors to youth…
ERIC Educational Resources Information Center
Rucklidge, Julia J.; McLean, Anthony P.; Bateup, Paula
2013-01-01
Sixty youth (16-19 years) from two youth prison sites participated in a prospective study examining criminal offending and learning disabilities (LD), completing measures of estimated IQ, attention, reading, and mathematical and oral language abilities. Prevalence rates of LDs exceeded those reported in international studies, with 91.67% of the offenders…
Prediction and observation of munitions burial in energetic storms
NASA Astrophysics Data System (ADS)
Klammler, Harald; Sheremet, Alexandru; Calantoni, Joseph
2017-04-01
The fate of munitions or unexploded ordnance (UXO) resting on a submarine sediment bed is a critical safety concern. Munitions may be transported in uncontrolled ways to create potentially dangerous situations at places like beaches or ports. Alternatively, they may remain in place or completely disappear for significant but unknown periods after becoming buried in the sediment bed. Clearly, burial of munitions drastically complicates the detection and removal of potential threats. Here, we present field data of wave height and (surrogate) munitions burial depths near the 8-m isobath at the U.S. Army Corps of Engineers Field Research Facility, Duck, North Carolina, observed between January and March 2015. The experiment captured a remarkable sequence of storms that included at least 10 events, of which 6 were characterized by wave fields with significant heights exceeding 2 m and peak periods of approximately 10 s. During the strongest storm, waves of 14 s period and heights exceeding 2 m were recorded for more than 3 days; significant wave height reached 5 m at the peak of activity. At the end of the experiment, divers measured munition burial depths of up to 60 cm below the seabed level. However, the local bathymetry showed less than 5 cm of variation between the before- and after-storm states, suggesting that local net sediment accumulation or loss was negligible. The lack of bathymetric variability excludes the possibility of burial by a migrating bed form or by sediment deposition, and strongly indicates that the munitions sank into the bed. The depth of burial also suggests an extreme state of sand agitation during the storm. For predicting munitions burial depths, we explore existing analytical solutions for the dynamic interaction between waves and sediment. Measured time series of wave pressure near the sediment bed were converted into wave-induced changes in pore pressures and the effective stress states of the sediment. Failure criteria based on minimum normal and maximum shear stresses are then applied to evaluate how well each criterion predicts the observed burial depths. Results are subjected to a sensitivity analysis with respect to uncertain sediment parameters and summarized by representing cumulative failure times as a function of depth.
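As one example of a minimum-normal-stress criterion, the wave-induced excess pore pressure can be compared with the effective overburden at each depth, with failure wherever the former exceeds the latter. The sketch below is schematic: the exponential pressure-attenuation model, the deliberately large 120 kPa bottom-pressure amplitude, and the sediment unit weight are illustrative assumptions, not values from the Duck experiment.

```python
import numpy as np

def failure_depth(p_bed, k=2 * np.pi / 80, gamma_sub=9000.0, z_max=2.0, dz=0.01):
    """Deepest depth (m) at which wave-induced excess pore pressure exceeds
    the effective overburden stress (a minimum-normal-stress criterion).
    p_bed: wave pressure amplitude at the bed (Pa); k: wavenumber (1/m),
    here for an assumed 80 m wavelength; gamma_sub: submerged unit weight
    of sand (N/m^3). Returns 0.0 if the criterion predicts no failure."""
    z = np.arange(dz, z_max, dz)
    excess = p_bed * (1.0 - np.exp(-k * z))   # crude attenuation model
    overburden = gamma_sub * z                # effective vertical stress
    failed = z[excess > overburden]
    return failed.max() if failed.size else 0.0

# Illustrative extreme storm forcing: 120 kPa bottom-pressure amplitude
print(round(failure_depth(1.2e5), 2))  # predicted failure depth in meters
```

The sensitivity analysis described in the abstract would then repeat such a calculation over plausible ranges of the sediment parameters and forcing amplitudes.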
Farmer, William H.; Over, Thomas M.; Vogel, Richard M.
2015-01-01
Understanding the spatial structure of daily streamflow is essential for managing freshwater resources, especially in poorly gaged regions. Spatial scaling assumptions are common in flood frequency prediction (e.g., index-flood method) and the prediction of continuous streamflow at ungaged sites (e.g., drainage-area ratio), with simple scaling by drainage area being the most common assumption. In this study, scaling analyses of daily streamflow from 173 streamgages in the southeastern US resulted in three important findings. First, the use of only positive integer moment orders, as has been done in most previous studies, captures only the probabilistic and spatial scaling behavior of flows above an exceedance probability near the median; negative moment orders (inverse moments) are needed for lower streamflows. Second, assessing scaling by using drainage area alone is shown to result in a high degree of omitted-variable bias, masking the true spatial scaling behavior. Multiple regression is shown to mitigate this bias, controlling for regional heterogeneity of basin attributes, especially those correlated with drainage area. Previous univariate scaling analyses have neglected the scaling of low-flow events and may have produced biased estimates of the spatial scaling exponent. Third, the multiple regression results show that mean flows scale with an exponent of one, low flows scale with spatial scaling exponents greater than one, and high flows scale with exponents less than one. The relationship between scaling exponents and exceedance probabilities may be a fundamental signature of regional streamflow. This signature may improve our understanding of the physical processes generating streamflow at different exceedance probabilities.
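The moment-scaling analysis can be illustrated with a short script: sample moments of order q (including negative q, which probe low flows) are computed at each gage and regressed against drainage area in log space. This sketch uses synthetic lognormal flows and omits the paper's multiple-regression control for other basin attributes:

```python
import numpy as np

def scaling_exponents(flows_by_gage, areas, orders=(-1.0, -0.5, 0.5, 1.0, 2.0)):
    """Estimate spatial scaling exponents: slope of log E[Q^q] versus
    log drainage area across gages, normalized by q so that simple
    scaling yields 1.0 at every order. Flows must be strictly positive."""
    log_area = np.log(np.asarray(areas))
    exponents = {}
    for q in orders:
        log_moment = np.log([np.mean(f ** q) for f in flows_by_gage])
        slope, _ = np.polyfit(log_area, log_moment, 1)   # log-log regression
        exponents[q] = slope / q
    return exponents

# Hypothetical example: synthetic lognormal "streamflow" at three gages
rng = np.random.default_rng(1)
areas = [50.0, 500.0, 5000.0]   # km^2
flows = [np.exp(rng.normal(np.log(a) * 0.9, 1.0, 3650)) for a in areas]
print(scaling_exponents(flows, areas))
```

With these synthetic data the normalized exponent is near 0.9 at every order, i.e., simple scaling; the paper's finding is precisely that real streamflow departs from this, with exponents above one for low flows and below one for high flows.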
Selenium in irrigated agricultural areas of the western United States
Nolan, B.T.; Clark, M.L.
1997-01-01
A logistic regression model was developed to predict the likelihood that Se exceeds the USEPA chronic criterion for aquatic life (5 µg/L) in irrigated agricultural areas of the western USA. Preliminary analysis of explanatory variables used in the model indicated that surface-water Se concentration increased with increasing dissolved solids (DS) concentration and with the presence of Upper Cretaceous, mainly marine sediment. The presence or absence of Cretaceous sediment was the major variable affecting Se concentration in surface-water samples from the National Irrigation Water Quality Program. Median Se concentration was 14 µg/L in samples from areas underlain by Cretaceous sediments and < 1 µg/L in samples from areas underlain by non-Cretaceous sediments. Wilcoxon rank sum tests indicated that elevated Se concentrations in samples from areas with Cretaceous sediments, irrigated areas, and from closed lakes and ponds were statistically significant. Spearman correlations indicated that Se was positively correlated with a binary geology variable (0.64) and DS (0.45). Logistic regression models indicated that the concentration of Se in surface water was almost certain to exceed the Environmental Protection Agency aquatic-life chronic criterion of 5 µg/L when DS was greater than 3000 mg/L in areas with Cretaceous sediments. The 'best' logistic regression model correctly predicted Se exceedances and nonexceedances 84.4% of the time, and model sensitivity was 80.7%. A regional map of Cretaceous sediment showed the location of potential problem areas. The map and logistic regression model are tools that can be used to determine the potential for Se contamination of irrigated agricultural areas in the western USA.
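A logistic regression of this form can be sketched as follows, using the binary Cretaceous-geology indicator and dissolved solids as predictors. The synthetic data, coefficient values, and scikit-learn usage are illustrative assumptions, not the study's fitted model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: Cretaceous indicator (0/1) and DS (mg/L)
rng = np.random.default_rng(7)
n = 200
geology = rng.integers(0, 2, n)
ds = rng.lognormal(7.0, 1.0, n)
logit = -8.0 + 3.0 * geology + 0.9 * np.log(ds)     # assumed "true" model
exceed = rng.random(n) < 1 / (1 + np.exp(-logit))   # Se > 5 ug/L indicator

X = np.column_stack([geology, np.log(ds)])
model = LogisticRegression().fit(X, exceed)

pred = model.predict(X)
sensitivity = (pred & exceed).sum() / exceed.sum()  # true-positive rate
accuracy = (pred == exceed).mean()                  # percent correct
print(f"sensitivity={sensitivity:.1%}, accuracy={accuracy:.1%}")
# Predicted exceedance probability for a Cretaceous site with DS = 3000 mg/L
print(model.predict_proba([[1, np.log(3000)]])[0, 1])
```

Reporting sensitivity alongside overall percent correct, as the study does, guards against a model that scores well simply by predicting the majority (nonexceedance) class.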