These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.
Robust small area prediction for counts.
Tzavidis, Nikos; Ranalli, M Giovanna; Salvati, Nicola; Dreassi, Emanuela; Chambers, Ray
2015-06-01
A new semiparametric approach to model-based small area prediction for counts is proposed and used for estimating the average number of visits to physicians for Health Districts in Central Italy. The proposed small area predictor can be viewed as an outlier robust alternative to the more commonly used empirical plug-in predictor that is based on a Poisson generalized linear mixed model with Gaussian random effects. Results from the real data application and from a simulation experiment confirm that the proposed small area predictor has good robustness properties and in some cases can be more efficient than alternative small area approaches. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
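For orientation, the conventional empirical plug-in predictor (EPP) that this robust approach is compared against takes fitted fixed effects and predicted area random effects from a Poisson generalized linear mixed model and plugs them into the mean function. A minimal numpy sketch with hypothetical fitted values (not the authors' code):

```python
import numpy as np

# Hypothetical fitted quantities from a Poisson GLMM with log link:
# beta_hat : fixed-effect coefficients, u_hat[area] : predicted random effect for that area.
beta_hat = np.array([0.2, 0.8])                      # intercept and one covariate (illustrative)
u_hat = {"area_A": 0.10, "area_B": -0.05}

# Area-level covariate values (unit-level X would be used if available).
X_pop = {"area_A": np.array([1.0, 1.3]), "area_B": np.array([1.0, 0.7])}

def epp_count(area):
    """Empirical plug-in prediction of the expected count for one area."""
    eta = X_pop[area] @ beta_hat + u_hat[area]       # linear predictor with plugged-in effects
    return np.exp(eta)                               # inverse log link

for a in X_pop:
    print(a, round(float(epp_count(a)), 3))
```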
This model-based approach uses data from both the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS) to produce estimates of the prevalence rates of cancer risk factors and screening behaviors at the state, health service area, and county levels.
The model-based estimates of important cancer risk factors and screening behaviors are obtained by combining the responses to the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS).
Hirve, Siddhivinayak; Vounatsou, Penelope; Juvekar, Sanjay; Blomstedt, Yulia; Wall, Stig; Chatterji, Somnath; Ng, Nawi
2014-03-01
We compared prevalence estimates of self-rated health (SRH) derived indirectly using four different small area estimation methods for the Vadu (small) area from the national Study on Global AGEing (SAGE) survey with estimates derived directly from the Vadu SAGE survey. The indirect synthetic estimate for Vadu was 24%, whereas the model-based estimates were 45.6% and 45.7%, with smaller prediction errors and comparable to the direct survey estimate of 50%. The model-based techniques were better suited to estimating the prevalence of SRH than the indirect synthetic method. We conclude that a simplified mixed effects regression model can produce valid small area estimates of SRH. © 2013 Published by Elsevier Ltd.
Small area estimation for estimating the number of infant mortality in West Java, Indonesia
NASA Astrophysics Data System (ADS)
Anggreyani, Arie; Indahwati; Kurnia, Anang
2016-02-01
Demographic and Health Survey Indonesia (DHSI) is a nationally designed survey that provides information regarding birth rates, mortality rates, family planning and health. DHSI was conducted by BPS in cooperation with the National Population and Family Planning Institution (BKKBN), the Indonesian Ministry of Health (KEMENKES) and USAID. Based on the publication of DHSI 2012, the infant mortality rate for the five-year period before the survey was 32 per 1,000 live births. In this paper, Small Area Estimation (SAE) is used to estimate the number of infant deaths in the districts of West Java. SAE is a special case of Generalized Linear Mixed Models (GLMM). Here, the incidence of infant mortality is assumed to follow a Poisson distribution, which carries an equidispersion assumption. The methods used to handle overdispersion are the negative binomial and quasi-likelihood models. Based on the results of the analysis, the quasi-likelihood model is the best model for overcoming the overdispersion problem. The small area estimation used a basic area-level model. Mean squared error (MSE) based on a resampling method is used to measure the accuracy of the small area estimates.
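As a rough illustration of the dispersion diagnostic described above (not the authors' code; data are synthetic), one can fit Poisson and negative binomial GLMs and inspect the Pearson chi-square statistic divided by the residual degrees of freedom, which is approximately 1 under equidispersion and is also the scale factor a quasi-likelihood (quasi-Poisson) fit would use to inflate standard errors:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = sm.add_constant(rng.normal(size=n))
y = rng.negative_binomial(n=2, p=0.4, size=n)        # synthetic overdispersed counts

pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
negb = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()

# Dispersion statistic: Pearson chi-square / residual df (close to 1 if equidispersed).
print("Poisson dispersion:", pois.pearson_chi2 / pois.df_resid)
print("NegBin  dispersion:", negb.pearson_chi2 / negb.df_resid)
```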
Small area estimation of proportions with different levels of auxiliary data.
Chandra, Hukum; Kumar, Sushil; Aditya, Kaustav
2018-03-01
Binary data are often of interest in many small areas of applications. The use of standard small area estimation methods based on linear mixed models becomes problematic for such data. An empirical plug-in predictor (EPP) under a unit-level generalized linear mixed model with logit link function is often used for the estimation of a small area proportion. However, this EPP requires the availability of unit-level population information for auxiliary data that may not be always accessible. As a consequence, in many practical situations, this EPP approach cannot be applied. Based on the level of auxiliary information available, different small area predictors for estimation of proportions are proposed. Analytic and bootstrap approaches to estimating the mean squared error of the proposed small area predictors are also developed. Monte Carlo simulations based on both simulated and real data show that the proposed small area predictors work well for generating the small area estimates of proportions and represent a practical alternative to the above approach. The developed predictor is applied to generate estimates of the proportions of indebted farm households at district-level using debt investment survey data from India. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
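To make the EPP requirement concrete: under a unit-level logit GLMM, the plug-in prediction of an area proportion averages the predicted probabilities over every population unit in the area, which is exactly why unit-level auxiliary data are needed. A small numpy sketch with hypothetical fitted values:

```python
import numpy as np

def expit(eta):
    return 1.0 / (1.0 + np.exp(-eta))

# Hypothetical fitted logit GLMM: fixed effects and one area's predicted random effect.
beta_hat = np.array([-0.5, 1.2])
u_hat_area = 0.3

# Unit-level auxiliary data for ALL population units in the area (the demanding requirement
# discussed above); columns: intercept, covariate. Values are simulated for illustration.
X_pop_area = np.column_stack([np.ones(500), np.random.default_rng(1).normal(size=500)])

# EPP of the area proportion: average predicted probability over the population units.
p_units = expit(X_pop_area @ beta_hat + u_hat_area)
print("EPP of area proportion:", round(float(p_units.mean()), 3))
```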
Explanatory models concerning the effects of small-area characteristics on individual health.
Voigtländer, Sven; Vogt, Verena; Mielck, Andreas; Razum, Oliver
2014-06-01
Material and social living conditions at the small-area level are assumed to have an effect on individual health. We review existing explanatory models concerning the effects of small-area characteristics on health and describe the gaps future research should try to fill. Systematic literature search for, and analysis of, studies that propose an explanatory model of the relationship between small-area characteristics and health. Fourteen studies met our inclusion criteria. Using various theoretical approaches, almost all of the models are based on a three-tier structure linking social inequalities (posited at the macro-level), small-area characteristics (posited at the meso-level) and individual health (micro-level). No study explicitly defines the geographical borders of the small-area context. The health impact of the small-area characteristics is explained by specific pathways involving mediating factors (psychological, behavioural, biological). These pathways tend to be seen as uni-directional; often, causality is implied. They may be modified by individual factors. A number of issues need more attention in research on explanatory models concerning small-area effects on health. Among them are the (geographical) definition of the small-area context; the systematic description of pathways comprising small-area contextual as well as compositional factors; questions of direction of association and causality; and the integration of a time dimension.
Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William
2014-01-01
Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies.
Small area estimation (SAE) model: Case study of poverty in West Java Province
NASA Astrophysics Data System (ADS)
Suhartini, Titin; Sadik, Kusman; Indahwati
2016-02-01
This paper compares direct estimation with an indirect/Small Area Estimation (SAE) model. Model selection included resolving the multicollinearity problem among the auxiliary variables, either by retaining only non-collinear variables or by applying principal components (PC). The parameters of concern were the proportions of agricultural venture poor households and agricultural poor households at the area level in West Java Province. These parameters can be estimated by direct estimation or by SAE. Direct estimation was problematic: three areas even had values of zero and could not be estimated directly because of small sample sizes. The estimated proportion of agricultural venture poor households was 19.22% and that of agricultural poor households was 46.79%. The best model for agricultural venture poor households retained only non-collinear variables, while the best model for agricultural poor households used principal components. SAE performed better than direct estimation for both the proportion of agricultural venture poor households and that of agricultural poor households at the area level in West Java Province. Small area estimation thus overcomes the small-sample-size problem and yields small area estimates with higher accuracy and better precision than the direct estimator.
NASA Astrophysics Data System (ADS)
Girinoto; Sadik, Kusman; Indahwati
2017-03-01
The National Socio-Economic Survey samples are designed to produce estimates of parameters for planned domains (provinces and districts). Estimation for unplanned domains (sub-districts and villages) is limited because reliable direct estimates cannot be obtained. One possible solution to this problem is to employ small area estimation techniques. The popular choice for small area estimation is based on linear mixed models. However, such models require strong distributional assumptions and do not easily allow for outlier-robust estimation. An alternative is the M-quantile regression approach to small area estimation, which models area-specific M-quantile coefficients of the conditional distribution of the study variable given auxiliary covariates. It yields outlier-robust estimation through an M-estimator-type influence function and requires no strong distributional assumptions. In this paper, the aim is to estimate poverty indicators at the sub-district level in Bogor District, West Java, using M-quantile models for small area estimation. Using data taken from the National Socioeconomic Survey and Village Potential Statistics, the results provide a detailed description of the pattern of incidence and intensity of poverty within Bogor District. We also compare the results with direct estimates. The results show that the framework may be preferable when the direct estimate indicates no incidence of poverty at all in the small area.
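A rough numerical sketch of the Huber-type M-quantile regression fit that underlies this approach (following the general idea of M-quantile small area estimation; the tuning constant, iteration scheme, and data below are illustrative, and this is not the authors' implementation):

```python
import numpy as np

def mquantile_fit(X, y, q=0.5, c=1.345, n_iter=50):
    """Rough IRLS sketch of a Huber-type M-quantile regression fit; convergence handling
    and constants are simplified for illustration."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]             # ordinary least squares start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745    # robust scale (normalised MAD)
        u = r / max(s, 1e-8)
        psi = np.clip(u, -c, c)                             # Huber influence function
        psi_q = 2.0 * np.where(u > 0, q, 1.0 - q) * psi     # asymmetric, quantile-like weighting
        w = psi_q / np.where(np.abs(u) < 1e-8, 1.0, u)      # IRLS weights (guard tiny residuals)
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)          # weighted least-squares update
    return beta

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y = 1.0 + 2.0 * X[:, 1] + rng.standard_t(df=3, size=300)   # heavy-tailed (outlier-prone) errors
print("q=0.25 fit:", np.round(mquantile_fit(X, y, q=0.25), 3))
print("q=0.75 fit:", np.round(mquantile_fit(X, y, q=0.75), 3))
```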
Mauro, Francisco; Monleon, Vicente J; Temesgen, Hailemariam; Ford, Kevin R
2017-01-01
Forest inventories require estimates and measures of uncertainty for subpopulations such as management units. These units often have small sample sizes, so they should be regarded as small areas. When auxiliary information is available, different small area estimation methods have been proposed to obtain reliable estimates for small areas. Unit-level empirical best linear unbiased predictors (EBLUPs) based on plot or grid unit-level models have been studied more thoroughly than area-level EBLUPs, where the modelling occurs at the management unit scale. Area-level EBLUPs do not require precise plot positioning and allow the use of variable radius plots, thus reducing fieldwork costs. However, their performance has not been examined thoroughly. We compared unit-level and area-level EBLUPs using LiDAR auxiliary information collected for inventorying a 98,104 ha coastal coniferous forest. Unit-level models were consistently more accurate than area-level EBLUPs, and area-level EBLUPs were consistently more accurate than field estimates except for large management units that held a large sample. For stand density, volume, basal area, quadratic mean diameter, mean height and Lorey's height, root mean squared errors (RMSEs) of estimates obtained using area-level EBLUPs were, on average, 1.43, 2.83, 2.09, 1.40, 1.32 and 1.64 times larger than those based on unit-level estimates, respectively. Similarly, direct field estimates had RMSEs that were, on average, 1.37, 1.45, 1.17, 1.17, 1.26, and 1.38 times larger than the RMSEs of area-level EBLUPs. Therefore, area-level models can lead to substantial gains in accuracy compared to direct estimates, and unit-level models lead to very important gains in accuracy compared to area-level models, potentially justifying the additional costs of obtaining accurate field plot coordinates.
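In its simplest (Fay-Herriot type) form, an area-level EBLUP like the one evaluated here is a shrinkage estimator that combines the direct field estimate with a regression-synthetic estimate built from auxiliary variables such as LiDAR metrics; the weight depends on the estimated model variance and the sampling variance of the direct estimate. An illustrative sketch with made-up numbers (not the authors' fitted model):

```python
import numpy as np

# Hypothetical inputs for three management units (areas):
y_direct = np.array([310.0, 275.0, 402.0])       # direct field estimates (e.g., volume, m3/ha)
psi      = np.array([900.0, 2500.0, 400.0])      # sampling variances of the direct estimates
X        = np.array([[1, 21.5], [1, 18.0], [1, 27.3]])  # intercept + LiDAR mean height

beta_hat = np.array([40.0, 12.5])                # assumed fitted regression coefficients
sigma2_v = 600.0                                 # assumed estimated model (random-effect) variance

synthetic = X @ beta_hat                         # regression-synthetic component
gamma = sigma2_v / (sigma2_v + psi)              # shrinkage weights: near 1 when the direct
                                                 # estimate is precise, near 0 when it is noisy
eblup = gamma * y_direct + (1.0 - gamma) * synthetic
print(np.round(eblup, 1))
```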
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Scott A.
This research has two areas of focus. The first area is to investigate offshore wind turbine (OWT) designs, for use in the Maryland offshore wind area (MOWA), using intensive modeling techniques. The second focus area is to investigate a way to detect damage in wind turbine towers and small electrical components.
Law, Jane
2016-01-01
Intrinsic conditional autoregressive modeling in a Bayesian hierarchical framework has been increasingly applied in small-area ecological studies. This study explores the specification of spatial structure in this Bayesian framework in two respects: adjacency, i.e., the set of neighbour(s) for each area; and the (spatial) weight for each pair of neighbours. Our analysis was based on a small-area study of falling injuries among people aged 65 and older in Ontario, Canada, that aimed to estimate risks and identify risk factors for such falls. In the case study, we observed incorrect adjacency information caused by deficiencies in the digital map itself. Further, when equal weights were replaced by weights based on a variable of expected counts, the range of estimated risks increased, the number of areas whose probability of an estimated risk greater than one exceeded different probability thresholds increased, and model fit improved. More importantly, the significance of a risk factor diminished. Further research is recommended to thoroughly investigate different methods of assigning variable weights, to quantify the influence of spatial weight specifications, and to develop strategies for better defining the spatial structure of a map in small-area analysis within Bayesian hierarchical spatial modeling. PMID:29546147
Cancer-Related Knowledge - Small Area Estimates
These model-based estimates are produced using statistical models that combine data from the Health Information National Trends Survey with auxiliary variables obtained from relevant sources, and that borrow strength from other areas with similar characteristics.
The HINTS is designed to produce reliable estimates at the national and regional levels. GIS maps using HINTS data have been used to provide a visual representation of possible geographic relationships in HINTS cancer-related variables.
Low, slow, small target recognition based on spatial vision network
NASA Astrophysics Data System (ADS)
Cheng, Zhao; Guo, Pei; Qi, Xin
2018-03-01
Traditional photoelectric monitoring uses a large number of identical cameras. To ensure full coverage of the monitoring area, this method deploys many cameras, which leads to overlapping coverage, higher costs, and more waste. In order to reduce the monitoring cost and to solve the difficult problem of finding, identifying and tracking a low-altitude, slow-speed and small target, this paper presents a spatial vision network for low-slow-small target recognition. Based on the camera imaging principle and a monitoring model, the spatial vision network is modeled and optimized. Simulation experiment results demonstrate that the proposed method has good performance.
Shared and Distinct Rupture Discriminants of Small and Large Intracranial Aneurysms.
Varble, Nicole; Tutino, Vincent M; Yu, Jihnhee; Sonig, Ashish; Siddiqui, Adnan H; Davies, Jason M; Meng, Hui
2018-04-01
Many ruptured intracranial aneurysms (IAs) are small. Clinical presentations suggest that small and large IAs could have different phenotypes. It is unknown if small and large IAs have different characteristics that discriminate rupture. We analyzed morphological, hemodynamic, and clinical parameters of 413 retrospectively collected IAs (training cohort; 102 ruptured IAs). Hierarchical cluster analysis was performed to determine a size cutoff to dichotomize the IA population into small and large IAs. We applied multivariate logistic regression to build rupture discrimination models for small IAs, large IAs, and an aggregation of all IAs. We validated the ability of these 3 models to predict rupture status in a second, independently collected cohort of 129 IAs (testing cohort; 14 ruptured IAs). Hierarchical cluster analysis in the training cohort confirmed that small and large IAs are best separated at 5 mm based on morphological and hemodynamic features (area under the curve=0.81). For small IAs (<5 mm), the resulting rupture discrimination model included undulation index, oscillatory shear index, previous subarachnoid hemorrhage, and absence of multiple IAs (area under the curve=0.84; 95% confidence interval, 0.78-0.88), whereas for large IAs (≥5 mm), the model included undulation index, low wall shear stress, previous subarachnoid hemorrhage, and IA location (area under the curve=0.87; 95% confidence interval, 0.82-0.93). The model for the aggregated training cohort retained all the parameters in the size-dichotomized models. Results in the testing cohort showed that the size-dichotomized rupture discrimination model had higher sensitivity (64% versus 29%) and accuracy (77% versus 74%), marginally higher area under the curve (0.75; 95% confidence interval, 0.61-0.88 versus 0.67; 95% confidence interval, 0.52-0.82), and similar specificity (78% versus 80%) compared with the aggregate-based model. Small (<5 mm) and large (≥5 mm) IAs have different hemodynamic and clinical, but not morphological, rupture discriminants. Size-dichotomized rupture discrimination models performed better than the aggregate model. © 2018 American Heart Association, Inc.
Luan, Hui; Law, Jane; Quick, Matthew
2015-12-30
Obesity and other adverse health outcomes are influenced by individual- and neighbourhood-scale risk factors, including the food environment. At the small-area scale, past research has analysed spatial patterns of food environments for one time period, overlooking how food environments change over time. Further, past research has infrequently analysed relative healthy food access (RHFA), a measure that is more representative of food purchasing and consumption behaviours than absolute outlet density. This research applies a Bayesian hierarchical model to analyse the spatio-temporal patterns of RHFA in the Region of Waterloo, Canada, from 2011 to 2014 at the small-area level. RHFA is calculated as the proportion of healthy food outlets (healthy outlets / (healthy + unhealthy outlets)) within 4 km of each small-area. This model measures spatial autocorrelation of RHFA, temporal trend of RHFA for the study region, and spatio-temporal trends of RHFA for small-areas. For the study region, a significant decreasing trend in RHFA is observed (-0.024), suggesting that food swamps have become more prevalent during the study period. For small-areas, significant decreasing temporal trends in RHFA were observed for all small-areas. Specific small-areas located in south Waterloo, north Kitchener, and southeast Cambridge exhibited the steepest decreasing spatio-temporal trends and are classified as spatio-temporal food swamps. This research demonstrates a Bayesian spatio-temporal modelling approach to analyse RHFA at the small-area scale. Results suggest that food swamps are more prevalent than food deserts in the Region of Waterloo. Analysing spatio-temporal trends of RHFA improves understanding of local food environment, highlighting specific small-areas where policies should be targeted to increase RHFA and reduce risk factors of adverse health outcomes such as obesity.
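The RHFA measure itself is a simple ratio computed within a fixed buffer. A minimal sketch of that calculation with hypothetical coordinates (the study's geocoded outlets and small-area boundaries are not reproduced here):

```python
import numpy as np

# Hypothetical small-area reference points and food outlets (x, y in km); is_healthy flags outlets.
centroids = np.array([[0.0, 0.0], [5.0, 2.0]])
outlets   = np.array([[1.0, 0.5], [2.5, 1.0], [3.0, 3.5], [7.0, 2.0]])
is_healthy = np.array([True, False, True, False])

def rhfa(centroid, radius_km=4.0):
    """Relative healthy food access: healthy / (healthy + unhealthy) outlets within radius_km."""
    d = np.linalg.norm(outlets - centroid, axis=1)
    within = d <= radius_km
    total = within.sum()
    return np.nan if total == 0 else is_healthy[within].sum() / total

for i, c in enumerate(centroids):
    print(f"area {i}: RHFA = {rhfa(c)}")
```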
Small area estimation for semicontinuous data.
Chandra, Hukum; Chambers, Ray
2016-03-01
Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
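A stripped-down sketch of the two-part idea (fixed effects only; the paper's random area effects and MSE bootstrap are omitted, and all data below are simulated): model the probability of a strictly positive value with a logit fit, model log(y) among the positives with a linear fit, and combine the two on the back-transformed scale.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
X = sm.add_constant(x)

# Synthetic semicontinuous response: many exact zeros, skewed positive values otherwise.
p_pos = 1 / (1 + np.exp(-(0.3 + 0.8 * x)))
positive = rng.random(n) < p_pos
y = np.where(positive, np.exp(1.0 + 0.5 * x + rng.normal(scale=0.6, size=n)), 0.0)

# Part 1: probability of a strictly positive response (logit model).
part1 = sm.Logit((y > 0).astype(int), X).fit(disp=False)

# Part 2: linear model for log(y) among the positive responses.
pos = y > 0
part2 = sm.OLS(np.log(y[pos]), X[pos]).fit()

# Back-transformed prediction of E[y | x] with a lognormal bias correction (sketch only).
p_hat = part1.predict(X)
mu_hat = part2.predict(X)
y_hat = p_hat * np.exp(mu_hat + 0.5 * part2.scale)
print("mean observed:", y.mean(), "mean predicted:", y_hat.mean())
```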
Finding big shots: small-area mapping and spatial modelling of obesity among Swiss male conscripts.
Panczak, Radoslaw; Held, Leonhard; Moser, André; Jones, Philip A; Rühli, Frank J; Staub, Kaspar
2016-01-01
In Switzerland, as in other developed countries, the prevalence of overweight and obesity has increased substantially since the early 1990s. Most of the analyses so far have been based on sporadic surveys or self-reported data and did not offer potential for small-area analyses. The goal of this study was to investigate spatial variation and determinants of obesity among young Swiss men using recent conscription data. A complete, anonymized dataset of conscription records for the 2010-2012 period was provided by the Swiss Armed Forces. We used a series of Bayesian hierarchical logistic regression models to investigate the spatial pattern of obesity across 3,187 postcodes, varying them by type of random effects (spatially unstructured and structured), level of adjustment by individual (age and professional status) and area-based [urbanicity and index of socio-economic position (SEP)] characteristics. The analysed dataset consisted of 100,919 conscripts, out of which 5,892 (5.8 %) were obese. Crude obesity prevalence increased with age among conscripts of lower individual and area-based SEP and varied greatly over postcodes. The best model's estimates of adjusted odds ratios of obesity at the postcode level ranged from 0.61 to 1.93 and showed a strong spatial pattern of obesity risk across the country. Odds ratios above 1 were concentrated in central and northern Switzerland. Smaller pockets of elevated obesity risk also emerged around the cities of Geneva, Fribourg and Lausanne. Lower estimates were observed in the north-east and east, as well as south of the Alps. Importantly, small regional outliers were observed and patterning did not follow administrative boundaries. As with crude obesity prevalence, the best fitting model confirmed increasing risk of obesity with age and among conscripts of lower professional status. The risk decreased with higher area-based SEP and, to a lesser degree, in rural areas. In Switzerland, there is a substantial spatial variation in obesity risk among young Swiss men. Small-area estimates of obesity risk derived from conscript records contribute to its understanding and could be used to design further studies and interventions.
Why Be a Shrub? A Basic Model and Hypotheses for the Adaptive Values of a Common Growth Form
Götmark, Frank; Götmark, Elin; Jensen, Anna M.
2016-01-01
Shrubs are multi-stemmed short woody plants, more widespread than trees, important in many ecosystems, neglected in ecology compared to herbs and trees, but currently in focus due to their global expansion. We present a novel model based on scaling relationships and four hypotheses to explain the adaptive significance of shrubs, including a review of the literature with a test of one hypothesis. Our model describes advantages for a small shrub compared to a small tree with the same above-ground woody volume, based on larger cross-sectional stem area, larger area of photosynthetic tissue in bark and stem, larger vascular cambium area, larger epidermis (bark) area, and larger area for sprouting, and faster production of twigs and canopy. These components form our Hypothesis 1 that predicts higher growth rate for a small shrub than a small tree. This prediction was supported by available relevant empirical studies (14 publications). Further, a shrub will produce seeds faster than a tree (Hypothesis 2), multiple stems in shrubs insure future survival and growth if one or more stems die (Hypothesis 3), and three structural traits of short shrub stems improve survival compared to tall tree stems (Hypothesis 4)—all hypotheses have some empirical support. Multi-stemmed trees may be distinguished from shrubs by more upright stems, reducing bending moment. Improved understanding of shrubs can clarify their recent expansion on savannas, grasslands, and alpine heaths. More experiments and other empirical studies, followed by more elaborate models, are needed to understand why the shrub growth form is successful in many habitats. PMID:27507981
NASA Astrophysics Data System (ADS)
Gong, L.
2013-12-01
Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, as well as model uncertainties. A new, purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well. Those multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based; therefore it does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied to both un-gauged basins and un-gauged periods with uncertainty estimation.
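A minimal sketch of the scaling arithmetic implied by the description (the actual method selects sub-basin sets that resemble the basin's climate and hydrology; the areas and discharges below are invented):

```python
import numpy as np

# Hypothetical sub-basin set judged representative of the large basin.
sub_area_km2 = np.array([1200.0, 800.0, 1500.0])     # areas of selected sub-basins
sub_q_m3s    = np.array([18.0, 11.0, 24.0])          # observed discharge of each sub-basin

basin_area_km2 = 120_000.0                            # gauged area of the large basin

# Scale-extrapolation in its simplest form: apply the sub-basins' pooled specific discharge
# (discharge per unit area) to the whole basin.
specific_q = sub_q_m3s.sum() / sub_area_km2.sum()     # m3/s per km2
basin_q_est = specific_q * basin_area_km2
print(f"estimated basin discharge: {basin_q_est:.0f} m3/s")
```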
Economic Impacts of Infrastructure Damages on Industrial Sector
NASA Astrophysics Data System (ADS)
Kajitani, Yoshio
This paper proposes a basic model for evaluating economic impacts on industrial sectors under conditions in which multiple infrastructures are simultaneously damaged during earthquake disasters. In particular, focusing on the economic data available at the smallest spatial scale in Japan (small area statistics), an economic loss estimation model based on these statistics and its applicability are investigated. In detail, a loss estimation framework utilizing survey results on firms' activities under electricity, water and gas disruptions, together with route choice models from transportation engineering, is applied to the case of the 2004 Mid-Niigata Earthquake.
A method for managing re-identification risk from small geographic areas in Canada
2010-01-01
Background A common disclosure control practice for health datasets is to identify small geographic areas and either suppress records from these small areas or aggregate them into larger ones. A recent study provided a method for deciding when an area is too small based on the uniqueness criterion. The uniqueness criterion stipulates that an area is no longer too small when the proportion of unique individuals on the relevant variables (the quasi-identifiers) approaches zero. However, using a uniqueness value of zero is quite a stringent threshold, and is only suitable when the risks from data disclosure are quite high. Other uniqueness thresholds that have been proposed for health data are 5% and 20%. Methods We estimated uniqueness for urban Forward Sortation Areas (FSAs) by using the 2001 long form Canadian census data representing 20% of the population. We then constructed two logistic regression models to predict when the uniqueness is greater than the 5% and 20% thresholds, and validated their predictive accuracy using 10-fold cross-validation. Predictor variables included the population size of the FSA and the maximum number of possible values on the quasi-identifiers (the number of equivalence classes). Results All model parameters were significant and the models had very high prediction accuracy, with specificity above 0.9, and sensitivity at 0.87 and 0.74 for the 5% and 20% threshold models respectively. The application of the models was illustrated with an analysis of the Ontario newborn registry and an emergency department dataset. At the higher thresholds considerably fewer records compared to the 0% threshold would be considered to be in small areas and therefore undergo disclosure control actions. We have also included concrete guidance for data custodians in deciding which one of the three uniqueness thresholds to use (0%, 5%, 20%), depending on the mitigating controls that the data recipients have in place, the potential invasion of privacy if the data is disclosed, and the motives and capacity of the data recipient to re-identify the data. Conclusion The models we developed can be used to manage the re-identification risk from small geographic areas. Being able to choose among three possible thresholds, a data custodian can adjust the definition of "small geographic area" to the nature of the data and recipient. PMID:20361870
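The modelling step described here is an ordinary classification problem. The sketch below mimics its shape with entirely synthetic data (population sizes, equivalence-class counts, and labels are made up, and the transformations are illustrative, not the paper's specification):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 1000
pop_size = rng.integers(1_000, 60_000, size=n)       # area population size (synthetic)
n_classes = rng.integers(10, 5_000, size=n)          # possible quasi-identifier combinations

# Synthetic label: uniqueness exceeds the 5% threshold, made more likely for small,
# finely partitioned areas. This is a shape-compatible stand-in, not the paper's data.
logit = 2.0 - 0.0001 * pop_size + 0.0008 * n_classes
over_5pct = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([np.log(pop_size), np.log(n_classes)])
model = LogisticRegression()
acc = cross_val_score(model, X, over_5pct, cv=10)     # 10-fold cross-validation
print("mean CV accuracy:", acc.mean().round(3))
```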
NASA Astrophysics Data System (ADS)
Kukkonen, S.; Kostama, V.-P.
2018-05-01
The availability of very high-resolution images has made it possible to extend crater size-frequency distribution studies to small, deca/hectometer-scale craters. This has enabled the dating of small and young surface units, as well as of recent, short-duration and small-scale geologic processes that have occurred on those units. Usually, however, the higher the spatial resolution of space images, the smaller the area covered by the images. Thus the use of single, very high-resolution images in crater count age determination may be debatable if the images do not cover the studied region entirely. Here we compare the crater count results for the floor of the Harmakhis Vallis outflow channel obtained from images of the Context Camera (CTX) and the High Resolution Imaging Science Experiment (HiRISE) aboard the Mars Reconnaissance Orbiter (MRO). The CTX images enable crater counts for entire units on the Harmakhis Vallis main valley, whereas the coverage of the higher-resolution HiRISE images is limited, so these images can only be used to date small parts of the units. Our case study shows that crater count data based on small impact craters and small surface areas mainly correspond with crater count data based on larger craters and more extensive counting areas on the same unit. Where differences between the results were found, they could usually be explained by the regional geology. Usually, these differences appeared when at least one cratering model age was missing from either of the crater datasets. On the other hand, we found only a few cases in which the cratering model ages were completely different. We conclude that crater counts using small impact craters on small counting areas provide useful information about the geological processes that have modified the surface. However, it is important to remember that crater count results obtained from a specific counting area always primarily represent that counting area, not the whole unit. On the other hand, together with crater counts from extensive counting areas and lower-resolution images, crater counting on small areas using very high-resolution images is a very valuable tool for obtaining unique additional information about local processes on surface units.
NASA Astrophysics Data System (ADS)
Hanike, Yusrianti; Sadik, Kusman; Kurnia, Anang
2016-02-01
This research estimated the unemployment rate in Indonesia based on a Poisson distribution, using a modified post-stratification and Small Area Estimation (SAE) model. Post-stratification is a sampling technique in which stratification is applied after the survey data have been collected; it is used when the survey was not designed to estimate the domain of interest. The domain of interest here was the education level of the unemployed, separated into seven categories. The data were obtained from the National Labour Force Survey (Sakernas), collected by Statistics Indonesia (BPS). This survey provides samples that are too small at the district level, and SAE models are one alternative for addressing this. Accordingly, we combined post-stratification sampling with an SAE model. This research considered two main post-stratification models: model I treated the education category as a dummy variable, while model II treated it as an area random effect. Neither model complied with the Poisson assumption. Using a Poisson-Gamma model, the overdispersion in model I (chi-square/df of 1.23) was reduced to 0.91, and the underdispersion in model II (0.35) was corrected to 0.94. Empirical Bayes was applied to estimate the proportion of unemployment in each education category. Using the Bayesian Information Criterion (BIC), model I had a smaller mean squared error (MSE) than model II.
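For orientation, the empirical Bayes step under a Poisson-Gamma model reduces to a simple shrinkage formula once the Gamma prior parameters are estimated. The sketch below uses a crude method-of-moments fit and invented counts; it is not the authors' model, which also involves the post-stratification structure:

```python
import numpy as np

# Hypothetical observed unemployment counts y and exposures e (e.g., labour-force size)
# for several small areas within one education category.
y = np.array([4, 0, 7, 2, 12, 3])
e = np.array([150, 60, 210, 90, 320, 110], dtype=float)

rates = y / e

# Crude method-of-moments fit of a Gamma(alpha, beta) prior for the underlying rates.
m, v = rates.mean(), rates.var(ddof=1)
v_excess = max(v - np.mean(rates / e), 1e-12)   # subtract an average Poisson-noise contribution
alpha = m**2 / v_excess
beta = m / v_excess

# Empirical Bayes (posterior mean) rate for each area: Gamma(alpha + y, beta + e) posterior.
eb_rate = (y + alpha) / (e + beta)
print(np.round(eb_rate, 4))
```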
Kroll, Lars Eric; Schumann, Maria; Müters, Stephan; Lampert, Thomas
2017-12-01
Nationwide health surveys can be used to estimate regional differences in health. With traditional estimation techniques, the spatial depth of these estimates is limited by the constrained sample size. So far, without special refreshment samples, results have only been available for the more populous federal states of Germany. An alternative is regression-based small-area estimation techniques. These models can generate smaller-scale estimates, but they are also subject to greater statistical uncertainty because of their model assumptions. In the present article, exemplary regionalized results for the self-rated health status of respondents, based on the studies "Gesundheit in Deutschland aktuell" (GEDA studies) 2009, 2010 and 2012, are compared. The aim of the article is to analyze the range of regional estimates in order to assess the usefulness of these techniques for health reporting more adequately. The results show that the estimated prevalence is relatively stable when different samples are used. Important determinants of the variation in the estimates are the sample size achieved at the district level and the type of district (cities vs. rural regions). Overall, the present study shows that small-area modeling of prevalence is associated with additional uncertainty compared to conventional estimates, which should be taken into account when interpreting the corresponding findings.
Nahuelhual, Laura; Benra, Felipe; Laterra, Pedro; Marin, Sandra; Arriagada, Rodrigo; Jullian, Cristobal
2018-09-01
In developing countries, the protection of biodiversity and ecosystem services (ES) rests in the hands of millions of small landowners that coexist with large properties, in a reality of highly unequal land distribution. Guiding the effective allocation of ES-based incentives in such contexts requires researchers and practitioners to tackle a largely overlooked question: for a given targeted area, will single large farms or several small ones provide the most ES supply? The answer to this question has important implications for conservation planning and rural development alike, which transcend efficiency to involve equity issues. We address this question by proposing and testing ES supply-area relations (ESSARs) around three basic hypothesized models, characterized by constant (model 1), increasing (model 2), and decreasing increments (model 3) of ES supply per unit of area or ES "productivity". Data to explore ESSARs came from 3384 private landholdings located in southern Chile ranging from 0.5 ha to over 30,000 ha and indicators of four ES (forage, timber, recreation opportunities, and water supply). Forage provision best fit model 3, which suggests that targeting several small farms to provide this ES should be a preferred choice, as compared to a single large farm. Timber provision best fit model 2, suggesting that in this case targeting a single large farm would be a more effective choice. Recreation opportunities best fit model 1, which indicates that several small or a single large farm of a comparable size would be equally effective in delivering this ES. Water provision fit model 1 or model 2 depending on the study site. The results corroborate that ES provision is not independent from property area and therefore understanding ESSARs is a necessary condition for setting conservation incentives that are both efficient (deliver the highest conservation outcome at the least cost) and fair for landowners. Copyright © 2018 Elsevier B.V. All rights reserved.
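One simple way to operationalize the three hypothesized ESSAR shapes is to fit a power-law supply-area curve and read off the exponent; the sketch below does this on synthetic data and is purely illustrative, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic ES-supply vs. property-area data (areas in ha; supply values are made up and
# generated with decreasing increments per unit area, i.e. exponent < 1).
area = np.logspace(0, 4, 200)
supply = 3.0 * area**0.7 * np.exp(rng.normal(scale=0.2, size=area.size))

# Fit supply = c * area**b on the log scale; the exponent b distinguishes the three shapes:
# b ~ 1 -> constant increments (model 1), b > 1 -> increasing (model 2),
# b < 1 -> decreasing increments (model 3), favouring several small farms over one large one.
b_hat, log_c_hat = np.polyfit(np.log(area), np.log(supply), deg=1)
print(f"fitted exponent b = {b_hat:.2f}, c = {np.exp(log_c_hat):.2f}")
```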
[Impact of small-area context on health: proposing a conceptual model].
Voigtländer, S; Mielck, A; Razum, O
2012-11-01
Recent empirical studies stress the impact of features related to the small-area context on individual health. However, so far there exists no standard explanatory model that integrates the different kinds of such features and that conceptualises their relation to individual characteristics of social inequality. A review of theoretical publications on the relationship between social position and health as well as existing conceptual models for the impact of features related to the small-area context on health was undertaken. In the present article we propose a conceptual model for the health impact of the small-area context. This model conceptualises the location of residence as one dimension of social inequality that affects health through the resources as well as stressors which are inherent in the small-area context. The proposed conceptual model offers an orientation for future empirical studies and can serve as a basis for further discussions concerning the health relevance of the small-area context. © Georg Thieme Verlag KG Stuttgart · New York.
Shin, Sangmi; Park, Seongha; Kim, Yongho; Matson, Eric T
2016-04-22
Recently, commercial unmanned aerial systems (UAS) have gained popularity. However, these UAS are potential threats to people in terms of safety in public places, such as public parks or stadiums. To reduce such threats, we consider a design, modeling, and evaluation of a cost-efficient sensor system that detects and tracks small UAS. In this research, we focus on discovering the best sensor deployments by simulating different types and numbers of sensors in a designated area, which provide reasonable detection rates at low costs. Also, the system should cover the crowded areas more thoroughly than vacant areas to reduce direct threats to people underneath. This research study utilized the Agent-Based Modeling (ABM) technique to model a system consisting of independent and heterogeneous agents that interact with each other. Our previous work presented the ability to apply ABM to analyze the sensor configurations with two types of radars in terms of cost-efficiency. The results from the ABM simulation provide a list of candidate configurations and deployments that can be referred to for applications in the real world environment.
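A highly simplified Monte Carlo stand-in for the kind of comparison described (a full agent-based model would represent sensors and UAS as interacting agents; the deployments, ranges, costs, and crowd distribution below are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

def detection_rate(sensors, ranges, n_targets=5000):
    """Fraction of sampled UAS positions detected by at least one sensor.
    Positions are drawn more densely near the 'crowded' centre of a 1 km x 1 km area."""
    targets = np.clip(rng.normal(loc=500, scale=200, size=(n_targets, 2)), 0, 1000)
    d = np.linalg.norm(targets[:, None, :] - sensors[None, :, :], axis=2)
    return float((d <= ranges[None, :]).any(axis=1).mean())

# Two hypothetical deployments: one long-range radar vs. four short-range sensors.
deploy_a = (np.array([[500.0, 500.0]]), np.array([400.0]), 10.0)            # sensors, ranges (m), cost
deploy_b = (np.array([[250.0, 250.0], [250.0, 750.0],
                      [750.0, 250.0], [750.0, 750.0]]), np.full(4, 250.0), 8.0)

for name, (s, r, cost) in zip("AB", [deploy_a, deploy_b]):
    rate = detection_rate(s, r)
    print(f"deployment {name}: detection rate {rate:.2f}, rate per unit cost {rate / cost:.3f}")
```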
Alegría, Margarita; Kessler, Ronald C.; McLaughlin, Katie A.; Gruber, Michael J.; Sampson, Nancy A.; Zaslavsky, Alan M.
2014-01-01
We evaluate the precision of a model estimating school prevalence of SED using a small area estimation method based on readily-available predictors from area-level census block data and school principal questionnaires. Adolescents at 314 schools participated in the National Comorbidity Supplement, a national survey of DSM-IV disorders among adolescents. A multilevel model indicated that predictors accounted for under half of the variance in school-level SED and even less when considering block-group predictors or principal report alone. While Census measures and principal questionnaires are significant predictors of individual-level SED, associations are too weak to generate precise school-level predictions of SED prevalence. PMID:24740174
NASA Astrophysics Data System (ADS)
Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan
2016-02-01
Labor force surveys conducted over time under a rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik, BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using only cross-sectional methods, despite the fact that the data are collected under a rotating panel design. The purpose of this study was to estimate quarterly unemployment rates at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application of, and comparison between, the Rao-Yu model and a dynamic model for estimating the unemployment rate from a rotating panel survey. The goodness of fit of the two models was similar. Both models produced similar estimates that were better than direct estimation, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this advantage was reduced over time.
Lindner-Lunsford, J. B.; Ellis, S.R.
1987-01-01
Multievent, conceptually based models and a single-event, multiple linear-regression model for estimating storm-runoff quantity and quality from urban areas were calibrated and verified for four small (57 to 167 acres) basins in the Denver metropolitan area, Colorado. The basins represented different land-use types - light commercial, single-family housing, and multi-family housing. Both types of models were calibrated using the same data set for each basin. A comparison was made between the storm-runoff volume, peak flow, and storm-runoff loads of seven water quality constituents simulated by each of the models by use of identical verification data sets. The models studied were the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model-Version II (DR3M-II) (a runoff-quantity model designed for urban areas), and a multievent urban runoff quality model (DR3M-QUAL). Water quality constituents modeled were chemical oxygen demand, total suspended solids, total nitrogen, total phosphorus, total lead, total manganese, and total zinc. (USGS)
NASA Astrophysics Data System (ADS)
Chen, Shuo-Bin; Liu, Guo-Cai; Gu, Lian-Quan; Huang, Zhi-Shu; Tan, Jia-Heng
2018-02-01
Design of small molecules targeted at human telomeric G-quadruplex DNA is an extremely active research area. Interestingly, the telomeric G-quadruplex is a highly polymorphic structure. Changes in its conformation upon small molecule binding may be a powerful way to achieve a desired biological effect. However, the rational development of small molecules capable of regulating the conformational change of telomeric G-quadruplex structures is still challenging. In this study, we developed a reliable ligand-based pharmacophore model based on isaindigotone derivatives with conformational change activity toward telomeric G-quadruplex DNA. Furthermore, virtual screening of a database was conducted using this pharmacophore model, and benzopyranopyrimidine derivatives in the database were identified as strong inducers of the telomeric G-quadruplex DNA conformation, transforming it from a hybrid-type structure to a parallel structure.
Modelling small-area inequality in premature mortality using years of life lost rates
NASA Astrophysics Data System (ADS)
Congdon, Peter
2013-04-01
Analysis of premature mortality variations via standardized expected years of life lost (SEYLL) measures raises questions about suitable modelling for mortality data, especially when developing SEYLL profiles for areas with small populations. Existing fixed effects estimation methods take no account of correlations in mortality levels over ages, causes, socio-ethnic groups or areas. They also do not specify an underlying data generating process, or a likelihood model that can include trends or correlations, and are likely to produce unstable estimates for small areas. An alternative strategy involves a fully specified data generation process, and a random effects model which "borrows strength" to produce stable SEYLL estimates, allowing for correlations between ages, areas and socio-ethnic groups. The resulting modelling strategy is applied to gender-specific differences in SEYLL rates in small areas in NE London, and to cause-specific mortality for leading causes of premature mortality in these areas.
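The SEYLL quantity being modelled is, at its core, deaths at each age weighted by a standard expectation of remaining life. A small illustrative calculation (all figures hypothetical, and the standard life expectancies are placeholders for an external standard table):

```python
import numpy as np

# Hypothetical inputs for one small area: deaths and population by age band, plus a
# standard remaining life expectancy at the midpoint of each band.
age_bands    = ["0-14", "15-44", "45-64", "65-74", "75+"]
deaths       = np.array([1, 4, 12, 18, 40])
population   = np.array([3200, 7800, 5400, 1900, 1300])
std_life_exp = np.array([75.0, 52.0, 27.0, 14.0, 7.0])

seyll = float((deaths * std_life_exp).sum())            # expected years of life lost
seyll_rate = 1000.0 * seyll / population.sum()          # per 1,000 population
print(f"SEYLL = {seyll:.0f} years; SEYLL rate = {seyll_rate:.1f} per 1,000")
```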
Attempt Quit Smoking 24+ Hours is defined for a person 18 years of age or older who reported smoking at least 100 cigarettes in his/her life and who either (a) does not smoke at all now but completely stopped smoking cigarettes less than 365 days ago, or (b) now smokes every day or some days but reported having made an attempt to quit for more than 24 hours in the past 12 months.
NASA Astrophysics Data System (ADS)
Haigang, Sui; Zhina, Song
2016-06-01
Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, the problem is very difficult against complex backgrounds such as waves, clouds, and small islands. Addressing these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically inspired visual features. The model first selects salient candidate regions across large-scale images using a mechanism based on biologically inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is fast and helps focus on suspected ship areas, avoiding a separate land-sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using the visual attention model and detail signatures using LBP features, in accordance with the sparseness of ship distribution in the images. These features are then used to classify each chip as containing ship targets or not, using a support vector machine (SVM). After the suspicious areas are obtained, some false alarms such as waves and small ribbon clouds remain, so simple shape and texture analyses are adopted to distinguish between ships and non-ships in the suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination and ship size.
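A minimal sketch of the "detail signature" stage alone, classifying image chips by LBP histograms with an SVM; the chips are synthetic and this is not the authors' CVLBP pipeline:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

rng = np.random.default_rng(6)

def lbp_histogram(chip, P=8, R=1.0):
    """Uniform LBP code histogram for one image chip (the texture 'detail signature')."""
    codes = local_binary_pattern(chip, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def make_chip(has_ship):
    """Synthetic 64x64 stand-in: 'ship' chips get a bright elongated blob on sea clutter."""
    chip = rng.normal(0.3, 0.05, size=(64, 64))
    if has_ship:
        chip[28:36, 16:48] += 0.5
    return chip

labels = np.array([0] * 100 + [1] * 100)
chips = [make_chip(bool(v)) for v in labels]
features = np.array([lbp_histogram(c) for c in chips])

clf = SVC(kernel="rbf").fit(features[::2], labels[::2])     # train on half the chips
print("held-out accuracy:", clf.score(features[1::2], labels[1::2]))
```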
NASA Astrophysics Data System (ADS)
Koksbang, S. M.
2017-03-01
Light propagation in two Swiss-cheese models based on anisotropic Szekeres structures is studied and compared with light propagation in Swiss-cheese models based on the Szekeres models' underlying Lemaitre-Tolman-Bondi models. The study shows that the anisotropy of the Szekeres models has only a small effect on quantities such as redshift-distance relations, projected shear and expansion rate along individual light rays. The average angular diameter distance to the last scattering surface is computed for each model. Contrary to earlier studies, the results obtained here are (mostly) in agreement with perturbative results. In particular, a small negative shift, δD_A ≔ (D_A - D_A,bg)/D_A,bg, in the angular diameter distance is obtained upon line-of-sight averaging in three of the four models. The results are, however, not statistically significant. In the fourth model, there is a small positive shift which has an especially small statistical significance. The line-of-sight averaged inverse magnification at z = 1100 is consistent with 1 to a high level of confidence for all models, indicating that the area of the surface corresponding to z = 1100 is close to that of the background.
Use of inequality constrained least squares estimation in small area estimation
NASA Astrophysics Data System (ADS)
Abeygunawardana, R. A. B.; Wickremasinghe, W. N.
2017-05-01
Traditional surveys provide estimates that are based only on the sample observations collected for the population characteristic of interest. However, these estimates may have unacceptably large variance for certain domains. Small Area Estimation (SAE) deals with determining precise and accurate estimates of population characteristics of interest for such domains. SAE usually uses least squares or maximum likelihood procedures incorporating prior information and current survey data. Many available methods in SAE use constraints in equality form. However, there are practical situations where inequality restrictions on model parameters are more realistic. When the estimation method is least squares, this leads to Inequality Constrained Least Squares (ICLS) estimates. In this study, the ICLS estimation procedure is applied to several proposed small area estimates.
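For illustration, bound-type inequality restrictions can be imposed with standard least-squares tooling; the sketch below contrasts an ordinary least-squares fit with a fit whose slopes are constrained to be non-negative (data and restriction are made up):

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(7)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
y = X @ np.array([2.0, 0.5, 0.0]) + rng.normal(scale=0.3, size=50)

# Ordinary (unconstrained) least squares for comparison.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Inequality-constrained least squares: here the simple restriction that both slopes are
# non-negative (bounds are one common form of inequality constraint on model parameters).
res = lsq_linear(X, y, bounds=([-np.inf, 0.0, 0.0], [np.inf, np.inf, np.inf]))
print("OLS :", np.round(beta_ols, 3))
print("ICLS:", np.round(res.x, 3))
```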
NASA Astrophysics Data System (ADS)
Rackow, Thomas; Wesche, Christine; Timmermann, Ralph; Hellmer, Hartmut H.; Juricke, Stephan; Jung, Thomas
2017-04-01
We present a simulation of Antarctic iceberg drift and melting that includes small (<2.2 km), medium-sized, and giant tabular icebergs with lengths of more than 10km. The model is initialized with a realistic size distribution obtained from satellite observations. Our study highlights the necessity to account for larger and giant icebergs in order to obtain accurate melt climatologies. Taking iceberg modeling a step further, we simulate drift and melting using iceberg-draft averaged ocean currents, temperature, and salinity. A new basal melting scheme, originally applied in ice shelf melting studies, uses in situ temperature, salinity, and relative velocities at an iceberg's keel. The climatology estimates of Antarctic iceberg melting based on simulations of small, 'small-to-medium'-sized, and small-to-giant icebergs (including icebergs > 10km) exhibit differential characteristics: successive inclusion of larger icebergs leads to a reduced seasonality of the iceberg meltwater flux and a shift of the mass input to the area north of 58°S, while less meltwater is released into the coastal areas. This suggests that estimates of meltwater input solely based on the simulation of small icebergs introduce a systematic meridional bias; they underestimate the northward mass transport and are, thus, closer to the rather crude treatment of iceberg melting as coastal runoff in models without an interactive iceberg model. Future ocean simulations will benefit from the improved meridional distribution of iceberg melt, especially in climate change scenarios where the impact of iceberg melt is likely to increase due to increased calving from the Antarctic ice sheet.
A stochastic model for eye movements during fixation on a stationary target.
NASA Technical Reports Server (NTRS)
Vasudevan, R.; Phatak, A. V.; Smith, J. D.
1971-01-01
A stochastic model describing small eye movements occurring during steady fixation on a stationary target is presented. Based on eye movement data for steady gaze, the model has a hierarchical structure; the principal level represents the random motion of the image point within a local area of fixation, while the higher level mimics the jump processes involved in transitions from one local area to another. Target image motion within a local area is described by a Langevin-like stochastic differential equation that takes into account the microsaccadic jumps, pictured as arising from point processes, and the high-frequency muscle tremor, represented as white noise. The transform of the probability density function for local area motion is obtained, leading to explicit expressions for its means and moments. The moments evaluated from the model compare well with experimental results.
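The local-area dynamics described here (a restoring drift plus white-noise tremor plus Poisson-timed microsaccadic jumps) can be simulated with a simple Euler-Maruyama scheme. The sketch below is only an illustration of that process class; all parameter values are invented, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 1e-3          # time step (s)
T = 2.0            # total duration (s)
n = int(T / dt)
k = 20.0           # restoring-force coefficient toward the fixation centre
sigma = 0.05       # white-noise (tremor) intensity
jump_rate = 2.0    # microsaccade rate (jumps per second)
jump_scale = 0.2   # typical microsaccade amplitude

x = np.zeros(n)    # image-point position within the local fixation area
for i in range(1, n):
    # Euler-Maruyama step of a Langevin-like equation with a jump term.
    drift = -k * x[i - 1] * dt
    tremor = sigma * np.sqrt(dt) * rng.normal()
    jump = rng.normal(scale=jump_scale) if rng.random() < jump_rate * dt else 0.0
    x[i] = x[i - 1] + drift + tremor + jump

print("mean:", x.mean(), "variance:", x.var())
```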
POSTERIOR PREDICTIVE MODEL CHECKS FOR DISEASE MAPPING MODELS. (R827257)
Disease incidence or disease mortality rates for small areas are often displayed on maps. Maps of raw rates, disease counts divided by the total population at risk, have been criticized as unreliable due to non-constant variance associated with heterogeneity in base population si...
A Bayesian approach to multisource forest area estimation
Andrew O. Finley
2007-01-01
In efforts such as land use change monitoring, carbon budgeting, and forecasting ecological conditions and timber supply, demand is increasing for regional and national data layers depicting forest cover. These data layers must permit small area estimates of forest and, most importantly, provide associated error estimates. This paper presents a model-based approach for...
Comparing models for growth and management of forest tracts
J.J. Colbert; Michael Schuckers; Desta Fekedulegn
2003-01-01
The Stand Damage Model (SDM) is a PC-based model that is easily installed, calibrated and initialized for use in exploring the future growth and management of forest stands or small wood lots. We compare the basic individual tree growth model incorporated in this model with alternative models that predict the basal area growth of trees. The SDM is a gap-type simulator...
Congdon, Peter
2006-12-01
This paper considers the development of estimates of mental illness prevalence for small areas and applications in explaining psychiatric outcomes and in assessing service provision. Estimates of prevalence are based on a logistic regression analysis of two national studies that provides model based estimates of relative morbidity risk by demographic, socio-economic and ethnic group for major psychiatric conditions; household/marital and area status also figure in the regression. Relative risk estimates are used, along with suitably disaggregated census populations, to make prevalence estimates for 354 English local authorities (LAs). Two applications are considered: the first involves analysis of variations in schizophrenia referrals and suicide mortality over English LAs that takes account of prevalence differences, and the second involves assessing hospital referral and bed use in relation to prevalence (for ages 16-74) for a case study area, Waltham Forest in NE London.
Area variations in multiple morbidity using a life table methodology.
Congdon, Peter
Analysis of healthy life expectancy is typically based on a binary distinction between health and ill-health. By contrast, this paper considers spatial modelling of disease free life expectancy taking account of the number of chronic conditions. Thus the analysis is based on population sub-groups with no disease, those with one disease only, and those with two or more diseases (multiple morbidity). Data on health status is accordingly modelled using a multinomial likelihood. The analysis uses data for 258 small areas in north London, and shows wide differences in the disease burden related to multiple morbidity. Strong associations between area socioeconomic deprivation and multiple morbidity are demonstrated, as well as strong spatial clustering.
NASA Astrophysics Data System (ADS)
Nasri, S.; Cudennec, C.; Albergel, J.; Berndtsson, R.
2004-02-01
At the beginning of the 1990s, the Tunisian Ministry of Agriculture launched an ambitious program for constructing small hillside reservoirs in the northern and central regions of the country. At present, more than 720 reservoirs have been created. They consist of small compacted earth dams supplied with a horizontal overflow weir. Due to the lack of hydrological data and the area's extreme floods, however, it is very difficult to design the overflow weirs. Also, catchments are very sensitive to erosion and the reservoirs are rapidly silted up. Consequently, prediction of flood volumes for important rainfall events becomes crucial. Few hydrological observations, however, exist for the catchment areas. For this purpose a geomorphological model methodology is presented to predict the shape and volume of hydrographs for important floods. This model is built around a production function that defines the net storm rainfall (the portion of rainfall during a storm which reaches a stream channel as direct runoff) from the total rainfall (observed rainfall in the catchment), and a transfer function based on the most complete possible definition of the surface drainage system. Observed rainfall during 5-min time steps was used in the model. The model runoff generation is based on surface drainage characteristics which can be easily extracted from maps. The model was applied to two representative experimental catchments in central Tunisia. The conceptual rainfall-runoff model based on surface topography and drainage network was seen to reproduce observed runoff satisfactorily. The calibrated model was used to estimate runoff for rainfall events with 5-, 10-, 20-, and 50-year return periods, in terms of runoff volume, peak runoff, and the general shape of the runoff hydrograph. Practical conclusions are drawn on designing hillside reservoirs and on extrapolating results with this model methodology to ungauged small catchments in semiarid Tunisia.
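The production/transfer split described above amounts to deriving a net-rainfall series and convolving it with a drainage-network response. The sketch below illustrates that structure only; the runoff coefficient and the triangular unit hydrograph are placeholders, not the calibrated functions used for the Tunisian catchments.

```python
import numpy as np

# 5-min rainfall hyetograph (mm per time step); illustrative values.
rain = np.array([0.0, 2.0, 6.0, 10.0, 4.0, 1.0, 0.0])

# Production function: here a simple runoff coefficient turning total rainfall
# into net storm rainfall (the real model uses a more elaborate function).
runoff_coeff = 0.35
net_rain = runoff_coeff * rain

# Transfer function: a geomorphology-based unit hydrograph, here a simple
# triangular kernel standing in for the drainage-network response.
unit_hydrograph = np.array([0.1, 0.3, 0.3, 0.2, 0.1])

# Outlet hydrograph = convolution of net rainfall with the transfer function.
hydrograph = np.convolve(net_rain, unit_hydrograph)
print(hydrograph)
```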
NASA Astrophysics Data System (ADS)
Chen, W. A.; Woods, C. P.; Li, J. F.; Waliser, D. E.; Chern, J.; Tao, W.; Jiang, J. H.; Tompkins, A. M.
2010-12-01
CloudSat provides important estimates of vertically resolved ice water content (IWC) on a global scale based on radar reflectivity. These estimates of IWC have proven beneficial in evaluating the representations of ice clouds in global models. An issue particularly germane to this investigation when performing model-data comparisons of IWC is the question of which component(s) of the frozen water mass are represented by retrieval estimates and how they relate to what is represented in models. The present study developed and applied a new technique to partition CloudSat total IWC into small and large ice hydrometeors, based on the CloudSat-retrieved ice particle size distribution (PSD) parameters. The new method allows one to make relevant model-data comparisons and provides new insights into the model’s representation of atmospheric IWC. The partitioned CloudSat IWC suggests that the small ice particles contribute 20-30% of the total IWC in the upper troposphere when a threshold size of 100 μm is used. Sensitivity measures with respect to the threshold size, the PSD parameters, and the retrieval algorithms are presented. The new dataset is compared to model estimates, pointing to areas for model improvement. Cloud ice analyses from the European Centre for Medium-Range Weather Forecasts model agree well with the small IWC from CloudSat. The finite-volume multi-scale modeling framework model underestimates total IWC at 147 and 215 hPa, while overestimating the fractional contribution from the small ice species. These results are discussed in terms of their applications to, and implications for, the evaluation of global atmospheric models, providing constraints on the representations of cloud feedback and precipitation in global models, which in turn can help reduce uncertainties associated with climate change projections. [Figure 1 caption: a sample lognormal ice number distribution and the corresponding mass distribution; a dotted line marks the cutoff size for IWC partitioning (Dc = 100 µm as an example), and the partial integrals of the mass distribution for particles smaller and larger than Dc correspond to IWC<100 and IWC>100, respectively.]
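The partitioning step amounts to integrating the particle mass distribution below and above the cutoff size Dc. The snippet below is a minimal numerical sketch of that idea; the lognormal PSD parameters, number concentration, and spherical-particle mass-size law are placeholders, not CloudSat-retrieved quantities.

```python
import numpy as np
from scipy.integrate import quad

rho_ice = 917.0                   # kg m^-3, placeholder bulk ice density
N0 = 1e5                          # m^-3, total number concentration (illustrative)
mu, sigma = np.log(80e-6), 0.7    # lognormal PSD parameters (illustrative)
Dc = 100e-6                       # cutoff diameter, 100 micrometres

def number_dist(D):
    """Lognormal number distribution n(D) [m^-3 m^-1]."""
    return N0 / (D * sigma * np.sqrt(2 * np.pi)) * np.exp(
        -(np.log(D) - mu) ** 2 / (2 * sigma ** 2))

def mass_dist(D):
    """Mass distribution m(D) n(D), assuming spherical ice particles."""
    return rho_ice * np.pi / 6 * D ** 3 * number_dist(D)

iwc_small, _ = quad(mass_dist, 1e-7, Dc)   # IWC from particles smaller than Dc
iwc_large, _ = quad(mass_dist, Dc, 1e-2)   # IWC from particles larger than Dc
total = iwc_small + iwc_large              # total IWC (kg m^-3)
print("small-particle fraction of IWC:", iwc_small / total)
```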
Drewnowski, Adam; Rehm, Colin D; Moudon, Anne V; Arterburn, David
2014-07-24
Identifying areas of high diabetes prevalence can have an impact on public health prevention and intervention programs. Local health practitioners and public health agencies lack small-area data on obesity and diabetes. Clinical data from the Group Health Cooperative health care system were used to estimate diabetes prevalence among 59,767 adults by census tract. Area-based measures of socioeconomic status and the Modified Retail Food Environment Index were obtained at the census-tract level in King County, Washington. Spatial analyses and regression models were used to assess the relationship between census tract-level diabetes and area-based socioeconomic status and food environment variables. The mediating effect of obesity on the geographic distribution of diabetes was also examined. In this population of insured adults, diabetes was concentrated in south and southeast King County, with smoothed diabetes prevalence ranging from 6.9% to 21.2%. In spatial regression models, home value and college education were more strongly associated with diabetes than was household income. For each 50% increase in median home value, diabetes prevalence was 1.2 percentage points lower. The Modified Retail Food Environment Index was not related to diabetes at the census-tract level. The observed associations between area-based socioeconomic status and diabetes were largely mediated by obesity (home value, 58%; education, 47%). The observed geographic disparities in diabetes among insured adults by census tract point to the importance of area socioeconomic status. Small-area studies can help health professionals design community-based programs for diabetes prevention and control.
Test Platforms for Model-Based Flight Research
NASA Astrophysics Data System (ADS)
Dorobantu, Andrei
Demonstrating the reliability of flight control algorithms is critical to integrating unmanned aircraft systems into the civilian airspace. For many potential applications, design and certification of these algorithms will rely heavily on mathematical models of the aircraft dynamics. Therefore, the aerospace community must develop flight test platforms to support the advancement of model-based techniques. The University of Minnesota has developed a test platform dedicated to model-based flight research for unmanned aircraft systems. This thesis provides an overview of the test platform and its research activities in the areas of system identification, model validation, and closed-loop control for small unmanned aircraft.
Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model
NASA Astrophysics Data System (ADS)
Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley
2017-05-01
Disease mapping contains a set of statistical techniques that detail maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area; it carries the greatest uncertainty when the disease is rare or the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which may address the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to data using WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method, and that it can overcome the SMR problem when there are no observed bladder cancer cases in an area.
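The classical SMR and the idea of smoothing it on the log scale can be sketched in a few lines. The example below is an illustration of the general approach only (shrinking log-SMRs toward their overall mean with a fixed weight), not the WinBUGS model fitted in the study; all counts and the shrinkage weight are invented.

```python
import numpy as np

observed = np.array([0, 3, 12, 7, 1])            # observed cases per area (illustrative)
expected = np.array([1.2, 2.5, 9.8, 6.1, 0.9])   # expected cases per area (illustrative)

# Classical SMR: observed / expected; unstable (or exactly zero) in small areas.
smr = observed / expected

# Simple log-normal-style smoothing: shrink log-SMRs toward their overall mean.
# A small continuity correction keeps areas with zero observed cases usable.
log_smr = np.log((observed + 0.5) / expected)
shrinkage = 0.5                                  # illustrative shrinkage weight
smoothed = np.exp(shrinkage * log_smr + (1 - shrinkage) * log_smr.mean())

print("SMR:", np.round(smr, 2))
print("smoothed relative risk:", np.round(smoothed, 2))
```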
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostova, T; Carlsen, T
2003-11-21
We present a spatially-explicit individual-based computational model of rodent dynamics, customized for the prairie vole species, M. ochrogaster. The model is based on trophic relationships and represents important features such as territorial competition, mating behavior, density-dependent predation and dispersal out of the modeled spatial region. Vegetation growth and vole fecundity are dependent on climatic components. The results of simulations show that the model correctly predicts the overall temporal dynamics of the population density. Time-series analysis shows a very good match between the periods corresponding to the peak population density frequencies predicted by the model and the ones reported in the literature. The model is used to study the relation between persistence, landscape area and predation. We introduce the notions of average time to extinction (ATE) and persistence frequency to quantify persistence. While the ATE decreases with decrease of area, it is a bell-shaped function of the predation level: increasing for 'small' and decreasing for 'large' predation levels.
Assessing the Local Need for Family and Child Care Services: A Small Area Utilization Analysis.
ERIC Educational Resources Information Center
Percy, Andrew; Carr-Hill, Roy; Dixon, Paul; Jamison, James Q.
2000-01-01
Describes study of administrative data from Northern Ireland on the costs of family and child care services, using small area utilization modeling, to derive a new set of needs indicators that could be used within the family and child care capitation funding formula. Argues that small area utilization modeling produces a fairer and more equitable…
NASA Astrophysics Data System (ADS)
Rackow, Thomas; Wesche, Christine; Timmermann, Ralph; Hellmer, Hartmut H.; Juricke, Stephan; Jung, Thomas
2017-04-01
We present a simulation of Antarctic iceberg drift and melting that includes small, medium-sized, and giant tabular icebergs with a realistic size distribution. For the first time, an iceberg model is initialized with a set of nearly 7000 observed iceberg positions and sizes around Antarctica. The study highlights the necessity to account for larger and giant icebergs in order to obtain accurate melt climatologies. We simulate drift and lateral melt using iceberg-draft averaged ocean currents, temperature, and salinity. A new basal melting scheme, originally applied in ice shelf melting studies, uses in situ temperature, salinity, and relative velocities at an iceberg's bottom. Climatology estimates of Antarctic iceberg melting based on simulations of small (≤2.2 km), "small-to-medium-sized" (≤10 km), and small-to-giant icebergs (including icebergs >10 km) exhibit differential characteristics: successive inclusion of larger icebergs leads to a reduced seasonality of the iceberg meltwater flux and a shift of the mass input to the area north of 58°S, while less meltwater is released into the coastal areas. This suggests that estimates of meltwater input solely based on the simulation of small icebergs introduce a systematic meridional bias; they underestimate the northward mass transport and are, thus, closer to the rather crude treatment of iceberg melting as coastal runoff in models without an interactive iceberg model. Future ocean simulations will benefit from the improved meridional distribution of iceberg melt, especially in climate change scenarios where the impact of iceberg melt is likely to increase due to increased calving from the Antarctic ice sheet.
García-Alonso, Carlos; Pérez-Naranjo, Leonor
2009-01-01
Introduction Knowledge management, based on information transfer between experts and analysts, is crucial for the validity and usability of data envelopment analysis (DEA). Aim To design and develop a methodology: i) to assess the technical efficiency of small health areas (SHA) in an uncertainty environment, and ii) to transfer information between experts and operational models, in both directions, for improving the experts' knowledge. Method A procedure derived from knowledge discovery from data (KDD) is used to select, interpret and weigh DEA inputs and outputs. Based on KDD results, an expert-driven Monte-Carlo DEA model has been designed to assess the technical efficiency of SHA in Andalusia. Results In terms of probability, SHA 29 is the most efficient, whereas SHA 22 is very inefficient; 73% of the analysed SHAs have a probability of being efficient (Pe) > 0.9 and 18% have Pe < 0.5. Conclusions Expert knowledge is necessary to design and validate any operational model. KDD techniques make the transfer of information from experts to any operational model easy, and results obtained from the latter improve the experts' knowledge.
Congdon, Peter
2014-12-20
Existing analyses of trends in disability free life expectancy (DFLE) are mainly at aggregate level (national or broad regional). However, major differences in DFLE, and trends in these expectancies, exist between different neighbourhoods within regions, thereby supporting a small area perspective. Such a perspective, in turn, raises issues regarding the stability of conventional life table estimation methods at small area scales. This paper advocates a Bayesian borrowing strength technique to model trends in mortality and disability differences across 625 small areas in London, using illness data from the 2001 and 2011 population Censuses, and deaths data for two periods centred on the Census years. From this analysis, estimates of total life expectancy and DFLE are obtained. The spatio-temporal modelling perspective allows assessment of whether significant compression or expansion of morbidity has occurred in each small area. Appropriate models involve random effects that recognise correlation and interaction effects over relevant dimensions of the observed deaths and illness data (areas, ages), as well as major spatial trends (e.g. gradients in health and mortality according to area deprivation category). Whilst borrowing strength is a primary consideration (and demonstrated by raised precision for estimated life expectancies), so also is model parsimony. Therefore, pure borrowing strength models are compared with models allowing selection of random age-area interaction effects using a spike-slab prior, and in fact borrowing strength combined with random effects selection provides better fit. Copyright © 2014 John Wiley & Sons, Ltd.
Eiseman, Julie L; Sciullo, Michael; Wang, Hong; Beumer, Jan H; Horn, Charles C
2017-10-01
Several cancer chemotherapies cause nausea and vomiting, which can be dose-limiting. Musk shrews are used as preclinical models for chemotherapy-induced emesis and for antiemetic effectiveness. Unlike rats and mice, shrews possess a vomiting reflex and demonstrate an emetic profile similar to humans, including acute and delayed phases. As with most animals, dosing of shrews is based on body weight, while translation of such doses to clinically equivalent exposure requires doses based on body surface area. In the current study body surface area in musk shrews was directly assessed to determine the Meeh constant (Km) conversion factor (female = 9.97, male = 9.10), allowing estimation of body surface area based on body weight. These parameters can be used to determine dosing strategies for shrew studies that model human drug exposures, particularly for investigating the emetic liability of cancer chemotherapeutic agents.
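The Meeh constants reported above plug into the usual Meeh formula, BSA = Km · W^(2/3). The sketch below applies that formula with the abstract's Km values; the unit convention (body weight in grams, surface area in cm²) is the customary one for Meeh constants and is assumed here rather than stated in the abstract.

```python
def shrew_bsa_cm2(body_weight_g: float, sex: str) -> float:
    """Estimate body surface area from body weight via the Meeh formula
    BSA = Km * W**(2/3). Km values are those reported in the abstract;
    grams-in / cm^2-out is the usual convention and is assumed here."""
    km = {"female": 9.97, "male": 9.10}[sex]
    return km * body_weight_g ** (2.0 / 3.0)

# Example: a 70 g female musk shrew (illustrative weight).
print(round(shrew_bsa_cm2(70.0, "female"), 1), "cm^2")
```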
NASA Astrophysics Data System (ADS)
Liu, Y. B.; Gebremeskel, S.; de Smedt, F.; Hoffmann, L.; Pfister, L.
2006-02-01
A method is presented to evaluate the storm runoff contributions from different land-use class areas within a river basin using the geographical information system-based hydrological model WetSpa. The modelling is based on division of the catchment into a grid mesh. Each cell has a unique response function independent of the functioning of other cells. Summation of the flow responses from the cells with the same land-use type results in the storm runoff contribution from these areas. The model was applied on the Steinsel catchment in the Alzette river basin, Grand Duchy of Luxembourg, with 52 months of meteo-hydrological measurements. The simulation results show that the direct runoff from urban areas is dominant for a flood event compared with runoff from other land-use areas in this catchment, and this tends to increase for small floods and for the dry-season floods, whereas the interflow from forested, pasture and agricultural field areas contributes to recession flow. It is demonstrated that the relative contribution from urban areas decreases with flow coefficient, that cropland relative contribution is nearly constant, and that the relative contribution from grassland and woodland increases with flow coefficient with regard to their percentage of land-use class areas within the study catchment.
Model-based inference for small area estimation with sampling weights
Vandendijck, Y.; Faes, C.; Kirby, R.S.; Lawson, A.; Hens, N.
2017-01-01
Obtaining reliable estimates about health outcomes for areas or domains where only few to no samples are available is the goal of small area estimation (SAE). Often, we rely on health surveys to obtain information about health outcomes. Such surveys are often characterised by a complex design, stratification, and unequal sampling weights as common features. Hierarchical Bayesian models are well recognised in SAE as a spatial smoothing method, but often ignore the sampling weights that reflect the complex sampling design. In this paper, we focus on data obtained from a health survey where the sampling weights of the sampled individuals are the only information available about the design. We develop a predictive model-based approach to estimate the prevalence of a binary outcome for both the sampled and non-sampled individuals, using hierarchical Bayesian models that take into account the sampling weights. A simulation study is carried out to compare the performance of our proposed method with other established methods. The results indicate that our proposed method achieves great reductions in mean squared error when compared with standard approaches. It performs equally well or better when compared with more elaborate methods when there is a relationship between the responses and the sampling weights. The proposed method is applied to estimate asthma prevalence across districts. PMID:28989860
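A useful point of reference for such models is the direct, design-weighted (Hájek-type) prevalence estimate per area, which uses only the sampling weights. The sketch below computes it from invented microdata; it is a baseline illustration, not the authors' hierarchical Bayesian model.

```python
import numpy as np

# Illustrative survey microdata: area id, binary health outcome, sampling weight.
area = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2])
y = np.array([1, 0, 0, 1, 1, 0, 0, 1, 0])
w = np.array([1.5, 2.0, 1.0, 3.0, 0.5, 2.5, 2.5, 1.0, 1.0])

# Design-weighted (Hajek-type) prevalence per area: sum(w * y) / sum(w).
for a in np.unique(area):
    mask = area == a
    prevalence = np.sum(w[mask] * y[mask]) / np.sum(w[mask])
    print(f"area {a}: weighted prevalence = {prevalence:.2f}")
```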
Beyond theories of plant invasions: Lessons from natural landscapes
Stohlgren, Thomas J.
2002-01-01
There are a growing number of contrasting theories about plant invasions, but most are only weakly supported by small-scale field experiments, observational studies, and mathematical models. Among the most contentious theories is that species-rich habitats should be less vulnerable to plant invasion than species-poor sites, stemming from earlier theories that competition is a major force in structuring plant communities. Early ecologists such as Charles Darwin (1859) and Charles Elton (1958) suggested that a lack of intense interspecific competition on islands made these low-diversity habitats vulnerable to invasion. Small-scale field experiments have supported and contradicted this theory, as have various mathematical models. In contrast, many large-scale observational studies and detailed vegetation surveys in continental areas often report that species-rich areas are more heavily invaded than species-poor areas, but there are exceptions here as well. In this article, I show how these seemingly contrasting patterns converge once appropriate spatial and temporal scales are considered in complex natural environments. I suggest ways in which small-scale experiments, mathematical models, and large-scale observational studies can be improved and better integrated to advance a theoretically based understanding of plant invasions.
Ross, Michelle; Wakefield, Jon
2015-10-01
Two-phase study designs are appealing since they allow for the oversampling of rare sub-populations which improves efficiency. In this paper we describe a Bayesian hierarchical model for the analysis of two-phase data. Such a model is particularly appealing in a spatial setting in which random effects are introduced to model between-area variability. In such a situation, one may be interested in estimating regression coefficients or, in the context of small area estimation, in reconstructing the population totals by strata. The efficiency gains of the two-phase sampling scheme are compared to standard approaches using 2011 birth data from the research triangle area of North Carolina. We show that the proposed method can overcome small sample difficulties and improve on existing techniques. We conclude that the two-phase design is an attractive approach for small area estimation.
The forward modelling and analysis of magnetic field on the East Asia area using tesseroids
NASA Astrophysics Data System (ADS)
Chen, Z.; Meng, X.; Xu, G.
2017-12-01
As airborne and satellite magnetic surveys have progressed, high-resolution magnetic data can now be measured at different scales. In order to test and improve the accuracy of existing crustal models, forward modelling is usually used to simulate the magnetic field of the lithosphere. Traditional forward models of the magnetic field are based on the Cartesian coordinate system and are typically used to calculate the magnetic field of local, small areas. However, the Cartesian coordinate system is not an ideal choice for calculating the magnetic field of a global or continental area at satellite altitude, where the Earth's curvature cannot be ignored. A spherical prism element (called a tesseroid) can be used as the model element in the spherical coordinate system to solve this problem. On the basis of studying the principle of this forward method, we focus on the selection of the data source and the mechanism of adaptive integration. We then calculate the magnetic anomaly data of the East Asia area based on the model Crust1.0. The results present the crustal susceptibility distribution, which is consistent with the basic tectonic features of the study area.
Micro enterprise initiative in water sector and poverty reduction .
Jose, T K
2003-01-01
The author reports on the Kerala model for water sector development, broadly adopted as a role model for poverty reduction and the build-up of social capital. It is a community-based organisation with its focus on facilitating a stable income for the poor, and is composed of a People's Plan Campaign and the Kudumbashree (women-based poverty eradication programme), with grassroots-level neighbourhood groups federated into an area development society. It promotes savings and credit channelling, capacity building and entrepreneurship development. Activities include awareness raising on water conservation and hygiene, utilization of student power, promotion of small, cheap and low-technology projects that people can understand and undertake (small reservoirs, tanks, rainwater harvesting structures, water meters), as well as microenterprises, and training of women-based repair groups.
Benach, Joan; Yasui, Yutaka; Borrell, Carme; Rosa, Elisabeth; Pasarín, M Isabel; Benach, Núria; Español, Esther; Martínez, José Miguel; Daponte, Antonio
2003-06-01
Small-area mortality atlases have been demonstrated to be a useful tool for both showing general geographical patterns in mortality data and identifying specific high-risk locations. In Spain no study has so far systematically examined geographic patterns of small-area mortality for the main causes of death. This paper presents the main features, contents and potential uses of the Spanish Atlas of Mortality in small areas (1987-1995). Population data for 2,218 small areas were drawn from the 1991 Census. Aggregated mortality data for 14 specific causes of death for the period 1987-1995 were obtained for each small area. Empirical Bayes-model-based estimates of age-adjusted relative risk were displayed in small-area maps for each cause/gender/age group (0-64 or 65 and over) combination using the same range of values (i.e. septiles) and colour schemes. The 'Spanish Atlas of Mortality' includes multiple choropleth (area-shaded) small-area maps and graphs to answer different questions about the data. The atlas is divided into three main sections. Section 1 includes the methods and comments on the main maps. Section 2 presents a two-page layout for each leading cause of death by gender including 1) a large map with relative risk estimates, 2) a map that indicates high- and low-risk small areas, 3) a graph with median and interquartile range of relative risk estimates for 17 large regions of Spain, and 4) relative-risk maps for two age groups. Section 3 provides specific information on the geographical units of analysis, statistical methods and other supplemental maps. The 'Spanish Atlas of Mortality' is a useful tool for examining geographical patterns of mortality risk and identifying specific high-risk areas. Mortality patterns displayed in the atlas may have important implications for research and social/health policy planning purposes.
Design Issues in Small-Area Studies of Environment and Health
Elliott, Paul; Savitz, David A.
2008-01-01
Background Small-area studies are part of the tradition of spatial epidemiology, which is concerned with the analysis of geographic patterns of disease with respect to environmental, demographic, socioeconomic, and other factors. We focus on etiologic research, where the aim is to make inferences about spatially varying environmental factors influencing the risk of disease. Methods and results We illustrate the approach through three exemplars: a) magnetic fields from overhead electric power lines and the occurrence of childhood leukemia, which illustrates the use of geographic information systems to focus on areas with high exposure prevalence; b) drinking-water disinfection by-products and reproductive outcomes, taking advantage of large between- to within-area variability in exposures from the water supply; and c) chronic exposure to air pollutants and cardiorespiratory health, where issues of socioeconomic confounding are particularly important. Discussion The small-area epidemiologic approach assigns exposure estimates to individuals based on location of residence or other geographic variables such as workplace or school. In this way, large populations can be studied, increasing the ability to investigate rare exposures or rare diseases. The approach is most effective when there is well-defined exposure variation across geographic units, limited within-area variation, and good control for potential confounding across areas. Conclusions In conjunction with traditional individual-based approaches, small-area studies offer a valuable addition to the armamentarium of the environmental epidemiologist. Modeling of exposure patterns coupled with collection of individual-level data on subsamples of the population should lead to improved risk estimates (i.e., less potential for bias) and help strengthen etiologic inference. PMID:18709174
Chin, Calvin W L; Khaw, Hwan J; Luo, Elton; Tan, Shuwei; White, Audrey C; Newby, David E; Dweck, Marc R
2014-09-01
Discordance between small aortic valve area (AVA; < 1.0 cm²) and low mean pressure gradient (MPG; < 40 mm Hg) affects a third of patients with moderate or severe aortic stenosis (AS). We hypothesized that this is largely due to inaccurate echocardiographic measurements of the left ventricular outflow tract area (LVOTarea) and stroke volume alongside inconsistencies in recommended thresholds. One hundred thirty-three patients with mild to severe AS and 33 control individuals underwent comprehensive echocardiography and cardiovascular magnetic resonance imaging (MRI). Stroke volume and LVOTarea were calculated using echocardiography and MRI, and the effects on AVA estimation were assessed. The relationship between AVA and MPG measurements was then modelled with nonlinear regression and consistent thresholds for these parameters calculated. Finally the effect of these modified AVA measurements and novel thresholds on the number of patients with small-area low-gradient AS was investigated. Compared with MRI, echocardiography underestimated LVOTarea (n = 40; -0.7 cm²; 95% confidence interval [CI], -2.6 to 1.3), stroke volumes (-6.5 mL/m²; 95% CI, -28.9 to 16.0) and consequently, AVA (-0.23 cm²; 95% CI, -1.01 to 0.59). Moreover, an AVA of 1.0 cm² corresponded to MPG of 24 mm Hg based on echocardiographic measurements and 37 mm Hg after correction with MRI-derived stroke volumes. Based on conventional measures, 56 patients had discordant small-area low-gradient AS. Using MRI-derived stroke volumes and the revised thresholds, a 48% reduction in discordance was observed (n = 29). Echocardiography underestimated LVOTarea, stroke volume, and therefore AVA, compared with MRI. The thresholds based on current guidelines were also inconsistent. In combination, these factors explain > 40% of patients with discordant small-area low-gradient AS. Copyright © 2014 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
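Echocardiographic AVA conventionally rests on the continuity equation, so an underestimated LVOT area (and hence stroke volume) propagates directly into an underestimated AVA. The sketch below illustrates only that arithmetic with invented measurements; it is not the imaging pipeline used in the study.

```python
import math

def continuity_ava(lvot_diameter_cm: float, vti_lvot_cm: float, vti_av_cm: float) -> float:
    """Continuity-equation AVA: (LVOT area x LVOT VTI) / aortic-valve VTI.
    Echocardiography assumes a circular LVOT, so the area scales with the
    square of the measured diameter; any diameter error is amplified."""
    lvot_area = math.pi * (lvot_diameter_cm / 2.0) ** 2   # cm^2
    stroke_volume = lvot_area * vti_lvot_cm               # mL per beat
    return stroke_volume / vti_av_cm                      # cm^2

# Illustrative values only: a 5% underestimate of the LVOT diameter
# lowers the computed AVA by roughly 10%.
print(round(continuity_ava(2.0, 20.0, 60.0), 2), "cm^2")
print(round(continuity_ava(1.9, 20.0, 60.0), 2), "cm^2")
```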
NASA Astrophysics Data System (ADS)
McCleary, R. J.; Hassan, M. A.
2006-12-01
An automated procedure was developed to model spatial fish distributions within small streams in the Foothills of Alberta. Native fish populations and their habitats are susceptible to impacts arising from both industrial forestry and rapid development of petroleum resources in the region. Knowledge of fish distributions and the effects of industrial activities on their habitats is required to help conserve native fish populations. Resource selection function (RSF) models were used to explain presence/absence of fish in small streams. Target species were bull trout, rainbow trout and non-native brook trout. Using GIS, the drainage network was divided into reaches with uniform slope and drainage area and then polygons for each reach were created. Predictor variables described stream size, stream energy, climate and land-use. We identified a set of candidate models and selected the best model using a standard Akaike Information Criterion (AIC) approach. The best models were validated with two external data sets. Drainage area and basin slope parameters were included in all best models. This finding emphasizes the importance of controlling for the energy dimension at the basin scale in investigations into the effects of land-use on aquatic resources in this transitional landscape between the mountains and plains. The best model for bull trout indicated a relation between the presence of artificial migration barriers in downstream areas and the extirpation of the species from headwater reaches. We produced reach-scale maps by species and summarized this information within all small catchments across the 12,000 km² study area. These maps included three categories based on predicted probability of capture for individual reaches. The high-probability category had a 78 percent accuracy for correctly predicting both fish-present and fish-not-present reaches. Basin-scale maps highlight specific watersheds likely to support both native bull trout and invasive brook trout, while reach-scale maps indicate specific reaches where interactions between these two species are likely to occur. With regional calibration, this automated modeling and mapping procedure could apply in headwater catchments throughout the Rocky Mountain Foothills and other areas where sporadic waterfalls or other natural migration barriers are not an important feature limiting fish distribution.
Pelletier, J.D.; Mayer, L.; Pearthree, P.A.; House, P.K.; Demsey, K.A.; Klawon, J.K.; Vincent, K.R.
2005-01-01
Millions of people in the western United States live near the dynamic, distributary channel networks of alluvial fans where flood behavior is complex and poorly constrained. Here we test a new comprehensive approach to alluvial-fan flood hazard assessment that uses four complementary methods: two-dimensional raster-based hydraulic modeling, satellite-image change detection, field-based mapping of recent flood inundation, and surficial geologic mapping. Each of these methods provides spatial detail lacking in the standard method and each provides critical information for a comprehensive assessment. Our numerical model simultaneously solves the continuity equation and Manning's equation (Chow, 1959) using an implicit numerical method. It provides a robust numerical tool for predicting flood flows using the large, high-resolution Digital Elevation Models (DEMs) necessary to resolve the numerous small channels on the typical alluvial fan. Inundation extents and flow depths of historic floods can be reconstructed with the numerical model and validated against field- and satellite-based flood maps. A probabilistic flood hazard map can also be constructed by modeling multiple flood events with a range of specified discharges. This map can be used in conjunction with a surficial geologic map to further refine floodplain delineation on fans. To test the accuracy of the numerical model, we compared model predictions of flood inundation and flow depths against field- and satellite-based flood maps for two recent extreme events on the southern Tortolita and Harquahala piedmonts in Arizona. Model predictions match the field- and satellite-based maps closely. Probabilistic flood hazard maps based on the 10 yr, 100 yr, and maximum floods were also constructed for the study areas using stream gage records and paleoflood deposits. The resulting maps predict spatially complex flood hazards that strongly reflect small-scale topography and are consistent with surficial geology. In contrast, FEMA Flood Insurance Rate Maps (FIRMs) based on the FAN model predict uniformly high flood risk across the study areas without regard for small-scale topography and surficial geology. © 2005 Geological Society of America.
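Manning's equation, one of the two relations the hydraulic model solves, is easy to state in isolation. The sketch below evaluates it for a single channel cross-section with invented values; it is only an illustration of the formula, not the raster-based solver described in the abstract.

```python
def manning_discharge(n: float, area_m2: float, hydraulic_radius_m: float,
                      slope: float) -> float:
    """Manning's equation in SI units: Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * area_m2 * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Illustrative small distributary channel on an alluvial fan:
# roughness n = 0.035, flow area 1.2 m^2, hydraulic radius 0.3 m, slope 0.01.
print(round(manning_discharge(0.035, 1.2, 0.3, 0.01), 2), "m^3/s")
```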
NASA Astrophysics Data System (ADS)
Misztal, A.; Belu, N.
2016-08-01
Operation of every company is associated with the risk of interfering with the proper performance of its fundamental processes. This risk is associated with various internal areas of the company, as well as with the environment in which it operates. From the point of view of ensuring compliance of the course of specific technological processes and, consequently, product conformity with requirements, it is important to identify these threats and to eliminate or reduce the risk of their occurrence. The purpose of this article is to present a model of the areas in which risks affecting process and product conformity are identified, based on multiregional targeted monitoring of typical places of interference and on risk management methods. The model is based on the verification of risk analyses carried out in small and medium-sized manufacturing companies in various industries.
F. Mauro; Vicente Monleon; H. Temesgen
2015-01-01
Small area estimation (SAE) techniques have been successfully applied in forest inventories to provide reliable estimates for domains where the sample size is small (i.e. small areas). Previous studies have explored the use of either Area Level or Unit Level Empirical Best Linear Unbiased Predictors (EBLUPs) in a univariate framework, modeling each variable of interest...
Meliyo, Joel L; Kimaro, Didas N; Msanya, Balthazar M; Mulungu, Loth S; Hieronimo, Proches; Kihupi, Nganga I; Gulinck, Hubert; Deckers, Jozef A
2014-07-01
Small mammals, particularly rodents, are considered the primary natural hosts of plague. Literature suggests that plague persistence in natural foci has a root cause in soils. The objective of this study was to investigate the relationship between, on the one hand, landforms and associated soil properties and, on the other hand, small mammals and fleas in the West Usambara Mountains in Tanzania, a plague endemic area. Standard field survey methods coupled with Geographical Information System (GIS) techniques were used to examine landform and soil characteristics. Soil samples were analysed in the laboratory for physico-chemical properties. Small mammals were trapped on pre-established landform positions and identified to genus/species level. Fleas were removed from the trapped small mammals and counted. Exploration of landform and soil data was done using ArcGIS Toolbox functions and descriptive statistical analysis. The relationships between landforms, soils, small mammals and fleas were established using generalised linear models (GLM) fitted in the R statistical software. Results show that landforms and soils influence the abundance of small mammals and fleas and their spatial distribution. The abundance of small mammals and fleas increased with elevation. Small mammal species richness also increased with elevation. A landform-soil model shows that available phosphorus, slope aspect and elevation were statistically significant predictors explaining the richness and abundance of small mammals. Flea abundance and spatial distribution were influenced by hill-shade, available phosphorus and base saturation. The study suggests that landforms and soils have a strong influence on the richness and evenness of small mammals and on their fleas' abundance, and hence could be used to explain plague dynamics in the area.
DYNAMIC MODELING STRATEGY FOR FLOW REGIME TRANSITION IN GAS-LIQUID TWO-PHASE FLOWS
DOE Office of Scientific and Technical Information (OSTI.GOV)
X. Wang; X. Sun; H. Zhao
In modeling gas-liquid two-phase flows, the concept of flow regime has been used to characterize the global interfacial structure of the flows. Nearly all constitutive relations that provide closures to the interfacial transfers in two-phase flow models, such as the two-fluid model, are often flow regime dependent. Currently, the determination of the flow regimes is primarily based on flow regime maps or transition criteria, which are developed for steady-state, fully-developed flows and widely applied in nuclear reactor system safety analysis codes, such as RELAP5. As two-phase flows are observed to be dynamic in nature (fully-developed two-phase flows generally do not exist in real applications), it is of importance to model the flow regime transition dynamically for more accurate predictions of two-phase flows. The present work aims to develop a dynamic modeling strategy for determining flow regimes in gas-liquid two-phase flows through the introduction of interfacial area transport equations (IATEs) within the framework of a two-fluid model. The IATE is a transport equation that models the interfacial area concentration by considering the creation and destruction of the interfacial area, such as the fluid particle (bubble or liquid droplet) disintegration, boiling and evaporation; and fluid particle coalescence and condensation, respectively. For the flow regimes beyond bubbly flows, a two-group IATE has been proposed, in which bubbles are divided into two groups based on their size and shape (which are correlated), namely small bubbles and large bubbles. A preliminary approach to dynamically identifying the flow regimes is provided, in which discriminators are based on the predicted information, such as the void fraction and interfacial area concentration of small bubble and large bubble groups. This method is expected to be applied to computer codes to improve their predictive capabilities of gas-liquid two-phase flows, in particular for the applications in which flow regime transition occurs.
Comparing different methods to model scenarios of future glacier change for the entire Swiss Alps
NASA Astrophysics Data System (ADS)
Linsbauer, A.; Paul, F.; Haeberli, W.
2012-04-01
There is general agreement that observed climate change already has strong impacts on the cryosphere. The rapid shrinkage of glaciers during the past two decades, observed in many mountain ranges globally and in particular in the Alps, is an impressive confirmation of a changed climate. With the expected future temperature increase, glacier shrinkage will likely accelerate further and the glaciers' role as an important water resource will progressively diminish. To determine the future contribution of glaciers to run-off with hydrological models, the change in glacier area and/or volume must be considered. As these models operate at regional scales, simplified approaches to model the future development of all glaciers in a mountain range need to be applied. In this study we have compared different simplified approaches to model the area and volume evolution of all glaciers in the Swiss Alps over the 21st century according to given climate change scenarios. One approach is based on an upward shift of the ELA (by 150 m per degree of temperature increase) and the assumption that the glacier extent will shrink until the smaller accumulation area again covers 60% of the total glacier area. A second approach is based on observed elevation changes between 1985 and 2000 as derived from DEM differencing for all glaciers in Switzerland. With a related elevation-dependent parameterization of glacier thickness change and a modelled glacier thickness distribution, the 15-year trends in observed thickness loss are extrapolated into the future, with glacier area loss taking place where thickness becomes zero. The models show an overall glacier area reduction of 60-80% until 2100, with some ice remaining at the highest elevations. However, compared to the ongoing temperature increase, and considering that several reinforcing feedbacks (albedo lowering, lake formation) are not accounted for, the real area loss might be even stronger. Uncertainties in the modelled glacier thickness have only a small influence on the final area loss, but influence the temporal evolution of the loss. In particular, the largest valley glaciers will suffer a strong volume loss, as large parts of their beds have a small inclination and are thus located at low elevations.
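The first approach described (ELA rise of 150 m per degree, then shrinkage until the accumulation area is again 60% of the total) can be illustrated directly on a glacier hypsometry. The sketch below uses an invented elevation-band area distribution, present-day ELA, and warming value purely for illustration; it is the parameterization idea, not the authors' implementation.

```python
import numpy as np

# Illustrative glacier hypsometry: area (km^2) per 100 m elevation band.
band_lower = np.arange(2400, 3600, 100)   # lower bound of each band (m a.s.l.)
band_area = np.array([0.2, 0.4, 0.7, 1.0, 1.2, 1.1, 0.9, 0.7, 0.5, 0.3, 0.2, 0.1])

ela_today = 2900.0                        # present-day ELA (m), illustrative
warming = 3.0                             # assumed temperature increase (deg C)
ela_future = ela_today + 150.0 * warming  # 150 m ELA rise per degree

# Future accumulation area: glacier area above the shifted ELA.
acc_area = band_area[band_lower >= ela_future].sum()

# The glacier is assumed to shrink until the accumulation area is again
# 60% of the total area (accumulation-area ratio of 0.6).
future_total_area = acc_area / 0.6
print(f"future ELA: {ela_future:.0f} m, committed glacier area: {future_total_area:.2f} km^2")
```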
Local bipolar-transistor gain measurement for VLSI devices
NASA Astrophysics Data System (ADS)
Bonnaud, O.; Chante, J. P.
1981-08-01
A method is proposed for measuring the gain of a bipolar transistor over as small a region as possible. The measurement then allows evaluation, in particular, of the effect of the emitter-base junction edge and of the influence of the technology process in VLSI devices. The technique consists in generating charge carriers in the transistor base layer with a focused laser beam in order to bias the device in as small a region as possible. To reduce the size of the conducting area, a transversal reverse base current is forced through the base layer resistance in order to pinch in the emitter current in the illuminated region. Transistor gain is deduced from small-signal measurements. A model associated with this technique is developed, and it is in agreement with the first experimental results.
Horowitz, A.J.; Elrick, K.A.; Demas, C.R.; Demcheck, D.K.
1991-01-01
Studies have demonstrated the utility of fluvial bed sediment chemical data in assessing local water-quality conditions. However, establishing local background trace element levels can be difficult. Reference to published average concentrations or the use of dated cores is often of little use in small areas of diverse local petrology, geology, land use, or hydrology. An alternative approach entails the construction of a series of sediment-trace element predictive models based on data from environmentally diverse but unaffected areas. Predicted values could provide a measure of local background concentrations, and comparison with actual measured concentrations could identify elevated trace elements and affected sites. Such a model set was developed from surface bed sediments collected nationwide in the United States. Tests of the models in a small Louisiana basin indicated that they could be used to establish local trace element background levels, but required recalibration to account for local geochemical conditions outside the range of samples used to generate the nationwide models.
Scheuhammer, A M; Lord, S I; Wayland, M; Burgess, N M; Champoux, L; Elliott, J E
2016-03-01
We investigated mercury (Hg) concentrations in small fish (mainly yellow perch, Perca flavescens; ∼60% of fish collected) and in blood of common loons (Gavia immer) that prey upon them during the breeding season on lakes in 4 large, widely separated study areas in Canada (>13 lakes per study area; total number of lakes = 93). Although surface sediments from lakes near a base metal smelter in Flin Flon, Manitoba had the highest Hg concentrations, perch and other small fish and blood of common loon chicks sampled from these same lakes had low Hg concentrations similar to those from uncontaminated reference lakes. Multiple regression modeling with AIC analysis indicated that lake pH was by far the most important single factor influencing perch Hg concentrations in lakes across the four study areas (R² = 0.29). The best model was a three-variable model (pH + alkalinity + sediment Se; Wi = 0.61, R² = 0.85). A single-variable model (fish Hg) best explained among-lake variability in loon chick blood Hg (Wi = 0.17; R² = 0.53). From a toxicological risk perspective, all lakes posing a potential Hg health risk for perch and possibly other small pelagic fish species (where mean fish muscle Hg concentrations exceeded 2.4 μg/g dry wt.), and for breeding common loons (where mean fish muscle Hg concentrations exceeded 0.8 μg/g dry wt., and loon chick blood Hg exceeded 1.4 μg/g dry wt.) had pH < 6.7 and were located in eastern Canada. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
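The Wi values quoted above are, presumably, Akaike weights from the AIC comparison of candidate models. Their computation from a set of AIC values is shown below with invented numbers, purely to make the quantity concrete; these are not the study's models or values.

```python
import numpy as np

# Illustrative AIC values for a set of candidate regression models.
aic = np.array([210.3, 208.1, 206.0, 212.7])

delta = aic - aic.min()          # AIC differences from the best model
weights = np.exp(-0.5 * delta)
weights /= weights.sum()         # Akaike weights (Wi), summing to 1

for i, (a, w) in enumerate(zip(aic, weights)):
    print(f"model {i}: AIC = {a:.1f}, Wi = {w:.2f}")
```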
NASA Astrophysics Data System (ADS)
Bass, Jeremy Hugh
Available from UMI in association with The British Library. Requires signed TDF. An evaluation is made of the potential fuel and financial savings possible when a small, autonomous diesel system sized to meet the demands of an individual, domestic consumer is adapted to include: (1) combined heat and power (CHP) generation, (2) wind turbine generation, (3) direct load control. The potential of these three areas is investigated by means of time-step simulation modelling on a microcomputer. Models are used to evaluate performance, and a Net Present Value analysis is used to assess costs. A cost/benefit analysis then enables those areas, or combinations of areas, that facilitate the greatest savings to be identified. The modelling work is supported by experience gained from the following: (1) a field study of the Lundy Island wind/diesel system, (2) laboratory testing of a small diesel generator set, (3) a study of a diesel-based CHP unit, (4) a study of a diesel-based direct load control system, (5) statistical analysis of data obtained from the long-term monitoring of a large number of individual households' electricity consumption. Rather than consider the consumer's electrical demand in isolation, a more flexible approach is adopted, with consumer demand being regarded as the sum of primarily two components: a small electricity demand for essential services and a large, reschedulable demand for heating/cooling. The results of the study indicate that: (1) operating a diesel set in a CHP mode is the best strategy for both financial and fuel savings. A simple retrofit enables overall conversion efficiencies to be increased from 25% to 60%, or greater, at little cost. (2) wind turbine generation in association with direct load control is a most effective combination. (3) a combination of both the above areas enables the greatest overall financial savings, in favourable winds resulting in unit energy costs around 20% of those of diesel-only operation.
DOT National Transportation Integrated Search
2012-09-01
A peer exchange on Modeling and Analysis Needs and Resources for Small Metropolitan Area Transportation Planning was convened on August 28 and 29, 2011, to explore the state of transportation modeling and analysis practice in communities with populat...
Capture mechanism in Palaeotropical pitcher plants (Nepenthaceae) is constrained by climate
Moran, Jonathan A.; Gray, Laura K.; Clarke, Charles; Chin, Lijin
2013-01-01
Background and Aims Nepenthes (Nepenthaceae, approx. 120 species) are carnivorous pitcher plants with a centre of diversity comprising the Philippines, Borneo, Sumatra and Sulawesi. Nepenthes pitchers use three main mechanisms for capturing prey: epicuticular waxes inside the pitcher; a wettable peristome (a collar-shaped structure around the opening); and viscoelastic fluid. Previous studies have provided evidence suggesting that the first mechanism may be more suited to seasonal climates, whereas the latter two might be more suited to perhumid environments. In this study, this idea was tested using climate envelope modelling. Methods A total of 94 species, comprising 1978 populations, were grouped by prey capture mechanism (large peristome, small peristome, waxy, waxless, viscoelastic, non-viscoelastic, ‘wet’ syndrome and ‘dry’ syndrome). Nineteen bioclimatic variables were used to model habitat suitability at approx. 1 km resolution for each group, using Maxent, a presence-only species distribution modelling program. Key Results Prey capture groups putatively associated with perhumid conditions (large peristome, waxless, viscoelastic and ‘wet’ syndrome) had more restricted areas of probable habitat suitability than those associated putatively with less humid conditions (small peristome, waxy, non-viscoelastic and ‘dry’ syndrome). Overall, the viscoelastic group showed the most restricted area of modelled suitable habitat. Conclusions The current study is the first to demonstrate that the prey capture mechanism in a carnivorous plant is constrained by climate. Nepenthes species employing peristome-based and viscoelastic fluid-based capture are largely restricted to perhumid regions; in contrast, the wax-based mechanism allows successful capture in both perhumid and more seasonal areas. Possible reasons for the maintenance of peristome-based and viscoelastic fluid-based capture mechanisms in Nepenthes are discussed in relation to the costs and benefits associated with a given prey capture strategy. PMID:23975653
Capture mechanism in Palaeotropical pitcher plants (Nepenthaceae) is constrained by climate.
Moran, Jonathan A; Gray, Laura K; Clarke, Charles; Chin, Lijin
2013-11-01
Nepenthes (Nepenthaceae, approx. 120 species) are carnivorous pitcher plants with a centre of diversity comprising the Philippines, Borneo, Sumatra and Sulawesi. Nepenthes pitchers use three main mechanisms for capturing prey: epicuticular waxes inside the pitcher; a wettable peristome (a collar-shaped structure around the opening); and viscoelastic fluid. Previous studies have provided evidence suggesting that the first mechanism may be more suited to seasonal climates, whereas the latter two might be more suited to perhumid environments. In this study, this idea was tested using climate envelope modelling. A total of 94 species, comprising 1978 populations, were grouped by prey capture mechanism (large peristome, small peristome, waxy, waxless, viscoelastic, non-viscoelastic, 'wet' syndrome and 'dry' syndrome). Nineteen bioclimatic variables were used to model habitat suitability at approx. 1 km resolution for each group, using Maxent, a presence-only species distribution modelling program. Prey capture groups putatively associated with perhumid conditions (large peristome, waxless, viscoelastic and 'wet' syndrome) had more restricted areas of probable habitat suitability than those associated putatively with less humid conditions (small peristome, waxy, non-viscoelastic and 'dry' syndrome). Overall, the viscoelastic group showed the most restricted area of modelled suitable habitat. The current study is the first to demonstrate that the prey capture mechanism in a carnivorous plant is constrained by climate. Nepenthes species employing peristome-based and viscoelastic fluid-based capture are largely restricted to perhumid regions; in contrast, the wax-based mechanism allows successful capture in both perhumid and more seasonal areas. Possible reasons for the maintenance of peristome-based and viscoelastic fluid-based capture mechanisms in Nepenthes are discussed in relation to the costs and benefits associated with a given prey capture strategy.
Athens, Jessica K.; Remington, Patrick L.; Gangnon, Ronald E.
2015-01-01
Objectives The University of Wisconsin Population Health Institute has published the County Health Rankings since 2010. These rankings use population-based data to highlight health outcomes and the multiple determinants of these outcomes and to encourage in-depth health assessment for all United States counties. A significant methodological limitation, however, is the uncertainty of rank estimates, particularly for small counties. To address this challenge, we explore the use of longitudinal and pooled outcome data in hierarchical Bayesian models to generate county ranks with greater precision. Methods In our models we used pooled outcome data for three measure groups: (1) Poor physical and poor mental health days; (2) percent of births with low birth weight and fair or poor health prevalence; and (3) age-specific mortality rates for nine age groups. We used the fixed and random effects components of these models to generate posterior samples of rates for each measure. We also used time-series data in longitudinal random effects models for age-specific mortality. Based on the posterior samples from these models, we estimate ranks and rank quartiles for each measure, as well as the probability of a county ranking in its assigned quartile. Rank quartile probabilities for univariate, joint outcome, and/or longitudinal models were compared to assess improvements in rank precision. Results The joint outcome model for poor physical and poor mental health days resulted in improved rank precision, as did the longitudinal model for age-specific mortality rates. Rank precision for low birth weight births and fair/poor health prevalence based on the univariate and joint outcome models were equivalent. Conclusion Incorporating longitudinal or pooled outcome data may improve rank certainty, depending on characteristics of the measures selected. For measures with different determinants, joint modeling neither improved nor degraded rank precision. This approach suggests a simple way to use existing information to improve the precision of small-area measures of population health. PMID:26098858
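As an illustration of the rank-uncertainty idea described above, the following is a minimal sketch (not the authors' code) of deriving ranks, rank quartiles, and the probability of a county falling in its assigned quartile from posterior samples of county rates; the draws here are synthetic stand-ins for the output of a fitted hierarchical model.

import numpy as np

# samples[s, c]: posterior draw s of the outcome rate for county c
# (hypothetical draws; in practice these come from the fitted hierarchical model)
rng = np.random.default_rng(0)
samples = rng.gamma(shape=5.0, scale=2.0, size=(4000, 72))

n_draws, n_counties = samples.shape

# Rank counties within each posterior draw (rank 1 = lowest, i.e. best, rate)
ranks = samples.argsort(axis=1).argsort(axis=1) + 1

# Point estimate of each county's rank and its assigned quartile (1-4)
median_rank = np.median(ranks, axis=0)
assigned_quartile = np.ceil(median_rank / (n_counties / 4)).astype(int)

# Probability that a county's draw-level rank falls in its assigned quartile
draw_quartile = np.ceil(ranks / (n_counties / 4)).astype(int)
prob_in_quartile = (draw_quartile == assigned_quartile).mean(axis=0)
print(prob_in_quartile.round(2))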
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability to handle high-dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
SimAlba: A Spatial Microsimulation Approach to the Analysis of Health Inequalities
Campbell, Malcolm; Ballas, Dimitris
2016-01-01
This paper presents applied geographical research based on a spatial microsimulation model, SimAlba, aimed at estimating geographically sensitive health variables in Scotland. SimAlba has been developed in order to answer a variety of “what-if” policy questions pertaining to health policy in Scotland. Using the SimAlba model, it is possible to simulate the distributions of previously unknown variables at the small area level such as smoking, alcohol consumption, mental well-being, and obesity. The SimAlba microdataset has been created by combining Scottish Health Survey and Census data using a deterministic reweighting spatial microsimulation algorithm developed for this purpose. The paper presents SimAlba outputs for Scotland’s largest city, Glasgow, and examines the spatial distribution of the simulated variables for small geographical areas in Glasgow as well as the effects on individuals of different policy scenario outcomes. In simulating previously unknown spatial data, a wealth of new perspectives can be examined and explored. This paper explores a small set of those potential avenues of research and shows the power of spatial microsimulation modeling in an urban context. PMID:27818989
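The deterministic reweighting step can be illustrated with a small sketch of iterative proportional fitting of survey weights to two small-area constraint tables; the respondent records and census constraints below are invented, and SimAlba's actual algorithm and data differ.

import numpy as np

# Hypothetical survey microdata: sex (0/1) and age band (0/1/2) per respondent
sex = np.array([0, 0, 1, 1, 1, 0, 1, 0])
age = np.array([0, 1, 2, 0, 1, 2, 2, 1])
w = np.ones(len(sex))                     # start with uniform weights

# Hypothetical small-area census constraints: counts by sex and by age band
target_sex = np.array([120.0, 80.0])
target_age = np.array([60.0, 90.0, 50.0])

for _ in range(50):                       # iterate until the weights stabilise
    for cat in (0, 1):                    # fit the sex margin
        mask = sex == cat
        w[mask] *= target_sex[cat] / w[mask].sum()
    for cat in (0, 1, 2):                 # fit the age margin
        mask = age == cat
        w[mask] *= target_age[cat] / w[mask].sum()

print(w.round(1), w.sum())                # weights now reproduce both margins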
Space-time latent component modeling of geo-referenced health data.
Lawson, Andrew B; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun
2010-08-30
Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find the underlying trends in time, which are supported by subsets of small areas. Latent structure modeling is one such approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made. Copyright (c) 2010 John Wiley & Sons, Ltd.
Leyk, Stefan; Binder, Claudia R; Nuckols, John R
2009-03-30
Pesticide poisoning is a global health issue with the largest impacts in the developing countries where residential and small-scale agricultural areas are often integrated and pesticides sprayed manually. To reduce health risks from pesticide exposure approaches for personalized exposure assessment (PEA) are needed. We present a conceptual framework to develop a spatial individual-based model (IBM) prototype for assessing potential exposure of farm-workers conducting small-scale agricultural production, which accounts for a considerable portion of global food crop production. Our approach accounts for dynamics in the contaminant distributions in the environment, as well as patterns of movement and activities performed on an individual level under different safety scenarios. We demonstrate a first prototype using data from a study area in a rural part of Colombia, South America. Different safety scenarios of PEA were run by including weighting schemes for activities performed under different safety conditions. We examined the sensitivity of individual exposure estimates to varying patterns of pesticide application and varying individual patterns of movement. This resulted in a considerable variation in estimates of magnitude, frequency and duration of exposure over the model runs for each individual as well as between individuals. These findings indicate the influence of patterns of pesticide application, individual spatial patterns of movement as well as safety conditions on personalized exposure in the agricultural production landscape that is the focus of our research. This approach represents a conceptual framework for developing individual based models to carry out PEA in small-scale agricultural settings in the developing world based on individual patterns of movement, safety conditions, and dynamic contaminant distributions. The results of our analysis indicate our prototype model is sufficiently sensitive to differentiate and quantify the influence of individual patterns of movement and decision-based pesticide management activities on potential exposure. This approach represents a framework for further understanding the contribution of agricultural pesticide use to exposure in the small-scale agricultural production landscape of many developing countries, and could be useful to evaluate public health intervention strategies to reduce risks to farm-workers and their families. Further research is needed to fully develop an operational version of the model.
Stream Flow Prediction by Remote Sensing and Genetic Programming
NASA Technical Reports Server (NTRS)
Chang, Ni-Bin
2009-01-01
A genetic programming (GP)-based, nonlinear modeling structure relates soil moisture with synthetic-aperture-radar (SAR) images to present representative soil moisture estimates at the watershed scale. Surface soil moisture measurement is difficult to obtain over a large area due to a variety of soil permeability values and soil textures. Point measurements can be used on a small-scale area, but it is impossible to acquire such information effectively in large-scale watersheds. This model exhibits the capacity to assimilate SAR images and relevant geoenvironmental parameters to measure soil moisture.
NASA Astrophysics Data System (ADS)
Jacobs, Jessica Lynn
Grounded in the Theory of Self-Efficacy and the Theory of Reasoned Action, this quantitative, correlational study examined if participation in literacy-based instructional coaching (one-on-one, small group) predicted both high school teachers' self-efficacy as measured by the Teachers' Sense of Efficacy Scale and teachers' attitudes toward teaching reading in the content areas measured by the Scale to Measure Attitudes Toward Teaching Reading in Content Classrooms. This study utilized a convenience sample of content teachers from three high schools in Northeastern Pennsylvania participating in a literacy coaching initiative. The volunteer sample of teachers completed the Likert-type questionnaires. The study used hierarchical regression analysis to determine values for each block of the regression models. The study correlated instances of literacy-based instructional coaching (one-on-one, small group) with the scores on the SMATTRCC and the TSES to examine predictive validity. Gender, years of experience, and content area were control variables in this study. The results of the first model indicated that there was a significant relationship between the number of coaching instances and attitudes toward teaching reading in the content area with participation in instructional coaching accounting for 9.6% of the variance in scores on the SMATTRCC. The results of the second model indicated that there was a significant relationship between the number of coaching instances and teachers' self-efficacy with participation in instructional coaching accounting for 6.1% of the variance in scores on the TSES.
NASA Astrophysics Data System (ADS)
Choi, Giehae; Bell, Michelle L.; Lee, Jong-Tae
2017-04-01
The land-use regression (LUR) approach to estimate the levels of ambient air pollutants is becoming popular due to its high validity in predicting small-area variations. However, only a few studies have been conducted in Asian countries, and much less research has been conducted on comparing the performances and applied estimates of different exposure assessments including LUR. The main objectives of the current study were to conduct nitrogen dioxide (NO2) exposure assessment with four methods including LUR in the Republic of Korea, to compare the model performances, and to estimate the empirical NO2 exposures of a cohort. The study population was defined as the year 2010 participants of a government-supported cohort established for bio-monitoring in Ulsan, Republic of Korea. The annual ambient NO2 exposures of the 969 study participants were estimated with LUR, nearest station, inverse distance weighting, and ordinary kriging. Modeling was based on the annual NO2 average, traffic-related data, land-use data, and altitude of the 13 regularly monitored stations. The final LUR model indicated that area of transportation, distance to residential area, and area of wetland were important predictors of NO2. The LUR model explained 85.8% of the variation observed in the 13 monitoring stations of the year 2009. The LUR model outperformed the others based on leave-one-out cross-validation comparing the correlations and root-mean-square error. All NO2 estimates ranged from 11.3 to 18.0 ppb, with that of LUR having the widest range. The NO2 exposure levels of the residents differed by demographics. However, the average was below the national annual guideline of the Republic of Korea (30 ppb). The LUR models showed high performance in an industrial city in the Republic of Korea, despite the small sample size and limited data. Our findings suggest that the LUR method may be useful in similar settings in Asian countries where the target region is small and availability of data is low.
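Two of the simpler exposure-assessment methods compared above can be sketched as follows; the station coordinates and NO2 values are invented, and the LUR predictors and kriging step are not reproduced.

import numpy as np

# Hypothetical monitoring stations: x, y (km) and annual-mean NO2 (ppb)
xy = np.array([[0, 0], [3, 1], [1, 4], [5, 5], [2, 2], [4, 0]], float)
no2 = np.array([14.0, 16.5, 12.8, 17.9, 15.2, 16.0])

def idw(target, xy, values, power=2.0):
    """Inverse-distance-weighted estimate at a target location."""
    d = np.linalg.norm(xy - target, axis=1)
    if np.any(d == 0):                       # exact hit on a station
        return values[d == 0][0]
    wgt = 1.0 / d ** power
    return np.sum(wgt * values) / np.sum(wgt)

# Leave-one-out cross-validation of IDW across the stations
loo_pred = np.array([
    idw(xy[i], np.delete(xy, i, axis=0), np.delete(no2, i))
    for i in range(len(no2))
])
rmse = np.sqrt(np.mean((loo_pred - no2) ** 2))
r = np.corrcoef(loo_pred, no2)[0, 1]
print(f"LOOCV RMSE={rmse:.2f} ppb, r={r:.2f}")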
Maruyama, Toshisuke
2007-01-01
To estimate the amount of evapotranspiration in a river basin, the “short period water balance method” was formulated. Then, by introducing the “complementary relationship method,” the amount of evapotranspiration was estimated seasonally, and with reasonable accuracy, for both small and large areas. Moreover, to accurately estimate river discharge in the low water season, the “weighted statistical unit hydrograph method” was proposed and a procedure for the calculation of the unit hydrograph was developed. Also, a new model, based on the “equivalent roughness method,” was successfully developed for the estimation of flood runoff from newly reclaimed farmlands. Based on the results of this research, a “composite reservoir model” was formulated to analyze the repeated use of irrigation water in large spatial areas. The application of this model to a number of watershed areas provided useful information with regard to the realities of water demand-supply systems in watersheds predominately dedicated to paddy fields, in Japan. PMID:24367144
Managing landscape connectivity for a fragmented area using spatial analysis model at town scale
NASA Astrophysics Data System (ADS)
Liu, Shiliang; Dong, Yuhong; Fu, Wei; Zhang, Zhaoling
2009-10-01
Urban growth has a great effect on land use in its suburbs. Habitat loss and fragmentation in those areas are a main threat to the conservation of biodiversity. Enhancing landscape functional connectivity is usually an effective way to maintain a high biodiversity level in a disturbed area. Taking a small town in Beijing as an example, we designed potential landscape corridors based on identification of landscape element quality and "least-cost" path analysis. We described a general approach to establishing a corridor network in such a fragmented area at the town scale. The results showed that the position of landscape elements has various effects on landscape suitability. Small forest patches and other green lands such as meadow, shrub, and even farmland could serve as potential stepping-stones or corridors for animal movement. Also, the analysis reveals that critical areas should be managed to facilitate the movement of dispersers among habitat patches.
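A hedged sketch of the "least-cost" path analysis mentioned above, using Dijkstra's algorithm over a hypothetical resistance grid; the cost surface is invented, and the study's actual landscape-quality layers and GIS tooling are not reproduced.

import heapq
import numpy as np

def least_cost_path(cost, start, goal):
    """Dijkstra least-cost path on a 4-neighbour grid of per-cell traversal costs."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == goal:
            break
        if d > dist[y, x]:
            continue
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(pq, (nd, (ny, nx)))
    node, path = goal, [goal]
    while node != start:        # walk back from the goal to the start
        node = prev[node]
        path.append(node)
    return path[::-1]

# Hypothetical resistance surface: low cost for green land, high for built-up cells
cost = np.ones((20, 20))
cost[5:15, 8:12] = 10.0          # a block of unsuitable land use
path = least_cost_path(cost, (0, 0), (19, 19))
print(len(path), "cells; avoids the high-cost block:", all(cost[c] < 10 for c in path))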
NASA Astrophysics Data System (ADS)
Iwata, T.; Asano, K.; Sekiguchi, H.
2011-12-01
We propose a prototype of the procedure to construct source models for strong motion prediction during intraslab earthquakes based on the characterized source model (Irikura and Miyake, 2011). The key is the characterized source model, which is based on the empirical scaling relationships for intraslab earthquakes and involves the correspondence between the SMGA (strong motion generation area, Miyake et al., 2003) and the asperity (large slip area). Iwata and Asano (2011) obtained the empirical relationships of the rupture area (S) and the total asperity area (Sa) to the seismic moment (Mo), assuming a 2/3-power dependency of S and Sa on Mo: S (km^2) = 6.57 × 10^(-11) × Mo^(2/3) (1) and Sa (km^2) = 1.04 × 10^(-11) × Mo^(2/3) (2), with Mo in N m. Iwata and Asano (2011) also pointed out that the position and the size of the SMGA approximately correspond to the asperity area for several intraslab events. Based on the empirical relationships, we give a procedure for constructing source models of intraslab earthquakes for strong motion prediction. [1] Give the seismic moment, Mo. [2] Obtain the total rupture area and the total asperity area according to the empirical scaling relationships between S, Sa, and Mo given by Iwata and Asano (2011). [3] Assume a square rupture area and square asperities. [4] Assume the source mechanism to be the same as that of small events in the source region. [5] Prepare plural scenarios covering a variety of numbers of asperities and rupture starting points. We apply this procedure by simulating strong ground motions for several observed events to confirm the methodology.
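A short numerical illustration of relationships (1) and (2); the moment-magnitude conversion Mo = 10^(1.5 Mw + 9.1) N m is the standard one, and the example magnitude is arbitrary.

def rupture_and_asperity_area(mw):
    """Rupture area S and total asperity area Sa (km^2) from Mw,
    using the empirical intraslab scaling of Iwata and Asano (2011)."""
    mo = 10 ** (1.5 * mw + 9.1)          # seismic moment in N m
    s = 6.57e-11 * mo ** (2.0 / 3.0)     # relationship (1)
    sa = 1.04e-11 * mo ** (2.0 / 3.0)    # relationship (2)
    return s, sa

s, sa = rupture_and_asperity_area(7.0)   # arbitrary example magnitude
print(f"S = {s:.0f} km^2, Sa = {sa:.0f} km^2, Sa/S = {sa / s:.2f}")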
Impact of spatial variability and sampling design on model performance
NASA Astrophysics Data System (ADS)
Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes
2017-04-01
Many environmental physical and chemical parameters as well as species distributions display spatial variability at different scales. When measurements are very costly in labour time or money, a choice has to be made between a high sampling resolution at small scales with low spatial cover of the study area, and a lower sampling resolution at small scales, which results in local data uncertainties but better spatial cover of the whole area. This dilemma is often faced in the design of field sampling campaigns for large-scale studies. When the gathered field data are subsequently used for modelling purposes, the choice of sampling design and the resulting data quality influence the model performance criteria. We studied this influence with a virtual model study based on a large dataset of field information on the spatial variation of earthworms at different scales. Therefore we built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg, Germany). First, the field-scale abundance of earthworms was estimated using a catchment-scale model based on 65 field measurements. Subsequently, the high small-scale variability was added using semi-variograms, based on five fields with a total of 430 measurements divided in a spatially nested sampling design over these fields, to estimate the nugget, range and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one up to 50 random points per field. We then used these data to rebuild the catchment-scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The results of the models show clearly that a large part of the non-explained deviance of the models is due to the very high small-scale variability in earthworm abundance: the models based on single virtual sampling points on average obtain an explained deviance of 0.20 and a correlation coefficient of 0.64. With increasing sampling points per field, we averaged the measured abundance of the samplings within each field to obtain a more representative value of the field average. Doubling the samplings per field strongly improved the model performance criteria (explained deviance 0.38 and correlation coefficient 0.73). With 50 sampling points per field the performance criteria were 0.91 and 0.97, respectively, for explained deviance and correlation coefficient. The relationship between the number of samplings and the performance criteria can be described with a saturation curve; beyond five samples per field the model improvement becomes rather small. With this contribution we wish to discuss the impact of data variability at the sampling scale on model performance and the implications for sampling design and assessment of model results as well as ecological inferences.
Cataife, Guido
2014-03-01
We propose the use of previously developed small area estimation techniques to monitor obesity and dietary habits in developing countries and apply the model to Rio de Janeiro city. We estimate obesity prevalence rates at the Census Tract through a combinatorial optimization spatial microsimulation model that matches body mass index and socio-demographic data in Brazil's 2008-9 family expenditure survey with Census 2010 socio-demographic data. Obesity ranges from 8% to 25% in most areas and affects the poor almost as much as the rich. Male and female obesity rates are uncorrelated at the small area level. The model is an effective tool to understand the complexity of the problem and to aid in policy design. © 2013 Published by Elsevier Ltd.
TRANSIT BUS LOAD-BASED MODAL EMISSION RATE MODEL DEVELOPMENT
Heavy-duty diesel vehicle (HDDV) operations are a major source of oxides of nitrogen (NOx) and particulate matter (PM) emissions in metropolitan areas nationwide. Although HDDVs constitute a small portion of the on-road fleet, they typically contribute more than 45% of NOx and ...
NASA Astrophysics Data System (ADS)
Ma, Shutian; Motazedian, Dariush; Corchete, Victor
2013-04-01
Many crucial tasks in seismology, such as locating seismic events and estimating focal mechanisms, need crustal velocity models. The velocity models of shallow structures are particularly important in the simulation of ground motions. In southern Ontario, Canada, many small shallow earthquakes occur, generating high-frequency Rayleigh (Rg) waves that are sensitive to shallow structures. In this research, the dispersion of Rg waves was used to obtain shear-wave velocities in the top few kilometers of the crust in the Georgian Bay, Sudbury, and Thunder Bay areas of southern Ontario. Several shallow velocity models were obtained based on the dispersion of recorded Rg waves. The Rg waves generated by an mN 3.0 natural earthquake on the northern shore of Georgian Bay were used to obtain velocity models for the area of an earthquake swarm in 2007. The Rg waves generated by a mining-induced event in the Sudbury area in 2005 were used to retrieve velocity models between Georgian Bay and the Ottawa River. The Rg waves generated by the largest event in a natural earthquake swarm near Thunder Bay in 2008 were used to obtain a velocity model in that swarm area. The basic feature of all the investigated models is that there is a top low-velocity layer with a thickness of about 0.5 km. The seismic velocities changed mainly within the top 2 km, where small earthquakes often occur.
Hieu, Nguyen Trong; Brochier, Timothée; Tri, Nguyen-Huu; Auger, Pierre; Brehmer, Patrice
2014-09-01
We consider a fishery model with two sites: (1) a marine protected area (MPA) where fishing is prohibited and (2) an area where the fish population is harvested. We assume that fish can migrate from MPA to fishing area at a very fast time scale and fish spatial organisation can change from small to large clusters of school at a fast time scale. The growth of the fish population and the catch are assumed to occur at a slow time scale. The complete model is a system of five ordinary differential equations with three time scales. We take advantage of the time scales using aggregation of variables methods to derive a reduced model governing the total fish density and fishing effort at the slow time scale. We analyze this aggregated model and show that under some conditions, there exists an equilibrium corresponding to a sustainable fishery. Our results suggest that in small pelagic fisheries the yield is maximum for a fish population distributed among both small and large clusters of school.
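A hedged sketch of the kind of aggregated two-equation system such a reduction yields, with logistic stock growth, an MPA fraction shielded from harvesting, and effort adjusting to profit; the functional form and parameter values are illustrative, not the authors' reduced model.

import numpy as np
from scipy.integrate import solve_ivp

r, K = 0.8, 100.0      # growth rate and carrying capacity of the stock (illustrative)
q, m = 0.05, 0.4       # catchability and fraction of the stock inside the MPA
p, c = 2.0, 1.5        # price per unit catch and cost per unit effort

def aggregated_model(t, y):
    n, e = y                                # total fish density, fishing effort
    catch = q * (1 - m) * n * e             # only the unprotected fraction is fished
    dn = r * n * (1 - n / K) - catch
    de = 0.1 * (p * catch - c * e)          # effort grows when fishing is profitable
    return [dn, de]

sol = solve_ivp(aggregated_model, (0.0, 200.0), [30.0, 1.0])
n_eq, e_eq = sol.y[0, -1], sol.y[1, -1]
print(f"approx. equilibrium: stock={n_eq:.1f}, effort={e_eq:.1f}")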
Mullaney, John R.
2004-01-01
Ground-water budgets were developed for 32 small basin-based zones in the Greenwich area of southwestern Connecticut, where crystalline-bedrock aquifers supply private wells, to determine the status of residential ground-water consumption relative to rates of ground-water recharge and discharge. Estimated residential ground-water withdrawals for small basins (averaging 1.7 square miles (mi2)) ranged from 0 to 0.16 million gallons per day per square mile (Mgal/d/mi2). To develop these budgets, residential ground-water withdrawals were estimated using multiple-linear regression models that relate water use from public water supply to data on residential property characteristics. Average daily water use of households with public water supply ranged from 219 to 1,082 gallons per day (gal/d). A steady-state finite-difference ground-water- flow model was developed to track water budgets, and to estimate optimal values for hydraulic conductivity of the bedrock (0.05 feet per day) and recharge to the overlying till deposits (6.9 inches) using nonlinear regression. Estimated recharge rates to the small basins ranged from 3.6 to 7.5 inches per year (in/yr) and relate to the percentage of the basin underlain by coarse- grained glacial stratified deposits. Recharge was not applied to impervious areas to account for the effects of urbanization. Net residential ground-water consumption was estimated as ground-water withdrawals increased during the growing season, and ranged from 0 to 0.9 in/yr. Long-term average stream base flows simulated by the ground-water-flow model were compared to calculated values of average base flow and low flow to determine if base flow was substantially reduced in any of the basins studied. Three of the 32 basins studied had simulated base flows less than 3 in/yr, as a result of either ground-water withdrawals or reduced recharge due to urbanization. A water-availability criteria of the difference between the 30-day 2-year low flow and the recharge rate for each basin was explored as a method to rate the status of water consumption in each basin. Water consumption ranged from 0 to 14.3 percent of available water based on this criteria for the 32 basins studied. Base-flow water quality was related to the amount of urbanized area in each basin sampled. Concentrations of total nitrogen and phosphorus, chloride, indicator bacteria, and the number of pesticide detections increased with basin urbanization, which ranged from 18 to 63 percent of basin area.
A GIS-based time-dependent seismic source modeling of Northern Iran
NASA Astrophysics Data System (ADS)
Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza
2017-01-01
The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes while the area sources are developed based on spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using renewal approach while time-independent frequency-magnitude relationships are proposed for area sources based on Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 by 400 square kilometers around Tehran. Previous researches and reports are studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to uniform magnitude scale; duplicate events and dependent shocks are removed. Completeness and time distribution of the compiled catalog is taken into account. The proposed area and linear seismic sources in conjunction with defined recurrence relationships can be used to develop time-dependent probabilistic seismic hazard analysis of Northern Iran.
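The time-independent, Poisson-process side of such recurrence modelling can be sketched briefly; the Gutenberg-Richter a- and b-values below are placeholders, not the study's estimates.

import numpy as np

a, b = 4.2, 1.0        # hypothetical Gutenberg-Richter parameters for an area source

def annual_rate(m):
    """Annual rate of events with magnitude >= m (Gutenberg-Richter)."""
    return 10 ** (a - b * m)

def poisson_exceedance(m, years):
    """Probability of at least one event >= m in the exposure time (Poisson)."""
    lam = annual_rate(m) * years
    return 1.0 - np.exp(-lam)

for m in (5.0, 6.0, 7.0):
    print(f"M>={m}: rate={annual_rate(m):.4f}/yr, "
          f"P(>=1 in 50 yr)={poisson_exceedance(m, 50):.3f}")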
Analysis of traffic congestion induced by the work zone
NASA Astrophysics Data System (ADS)
Fei, L.; Zhu, H. B.; Han, X. L.
2016-05-01
Based on the cellular automata approach, a meticulous two-lane cellular automata model is proposed, in which differences in driving behavior and the difference between vehicles' accelerations in the moving state and the starting state are taken into account. Furthermore, the vehicles' motion is refined by using small cells of one meter in length. Then, together with a proposed traffic management measure, a two-lane highway traffic model containing a work zone is presented, in which the road is divided into a normal area, a merging area and the work zone. The vehicles in different areas move forward according to different lane-changing rules and position-updating rules. After simulation it is found that when the density is small, the cluster length in front of the work zone increases as the merging probability decreases. Then the suitable merging length and the appropriate speed limit value are recommended. The simulation result in the form of the speed-flow diagram is in good agreement with the empirical data. It indicates that the presented model is efficient and can partially reflect real traffic. The results may be meaningful for traffic optimization and road construction management.
The method for detecting small lesions in medical image based on sliding window
NASA Astrophysics Data System (ADS)
Han, Guilai; Jiao, Yuan
2016-10-01
At present, research on computer-aided diagnosis includes segmenting sample images, extracting visual features, generating a classification model by learning, and classifying and judging the inspected images according to the generated model. However, this method involves a large amount of calculation and is slow. Moreover, because medical images usually have low contrast, traditional image segmentation methods often fail completely when applied to them. To find the region of interest as soon as possible and improve detection speed, this work attempts to introduce the currently popular visual attention model into small lesion detection. However, the Itti model is designed mainly for natural images, and its effect is not ideal when it is applied to medical images, which are usually grayscale. Especially in the early stages of some cancers, the lesion is not the most significant region of the whole image and is sometimes very difficult to find, even though such lesions are prominent within their local areas. This paper proposes a visual attention mechanism based on a sliding window, and uses the sliding window to calculate the significance of a local area. Combined with the characteristics of the lesion, the features of gray level, entropy, corners and edges are selected to generate a saliency map. Then the significant region is segmented and distinguished. This method reduces the difficulty of image segmentation, improves the detection accuracy of small lesions, and is of great significance for the early discovery, early diagnosis and treatment of cancers.
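A minimal sketch of the sliding-window idea: score each window position by local features (here, contrast and gray-level entropy) and assemble a saliency map; the window size, features, and weighting are placeholders rather than the paper's exact formulation.

import numpy as np

def local_saliency(image, win=16, step=8):
    """Slide a window over a grayscale image, score each position by local
    contrast (std) plus gray-level entropy, and return the saliency map."""
    h, w = image.shape
    sal = np.zeros_like(image, dtype=float)
    hits = np.zeros_like(sal)
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            patch = image[y:y + win, x:x + win]
            counts, _ = np.histogram(patch, bins=32, range=(0, 256))
            p = counts[counts > 0] / counts.sum()
            entropy = -np.sum(p * np.log2(p))
            score = patch.std() + entropy
            sal[y:y + win, x:x + win] += score
            hits[y:y + win, x:x + win] += 1
    return sal / np.maximum(hits, 1)

# Synthetic test image: flat background with one small bright "lesion"
img = np.full((128, 128), 100.0)
img[60:68, 60:68] = 180.0
smap = local_saliency(img)
print("most salient pixel:", np.unravel_index(smap.argmax(), smap.shape))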
Code of Federal Regulations, 2014 CFR
2014-01-01
... activities by considering a savings association's home mortgage, small business, small farm, and community... business, small farm, and consumer loans, if applicable, in the savings association's assessment area(s..., small business, small farm, and consumer loans, if applicable, based on the loan location, including: (i...
Code of Federal Regulations, 2013 CFR
2013-01-01
... activities by considering a savings association's home mortgage, small business, small farm, and community... business, small farm, and consumer loans, if applicable, in the savings association's assessment area(s..., small business, small farm, and consumer loans, if applicable, based on the loan location, including: (i...
Ellipsoidal geometry in asteroid thermal models - The standard radiometric model
NASA Technical Reports Server (NTRS)
Brown, R. H.
1985-01-01
The major consequences of ellipsoidal geometry in an otherwise standard radiometric model for asteroids are explored. It is shown that for small deviations from spherical shape a spherical model of the same projected area gives a reasonable approximation to the thermal flux from an ellipsoidal body. It is suggested that large departures from spherical shape require that some correction be made for geometry. Systematic differences in the radii of asteroids derived radiometrically at 10 and 20 microns may result partly from nonspherical geometry. It is also suggested that extrapolations of the rotational variation of thermal flux from a nonspherical body based solely on the change in cross-sectional area are in error.
Stanford automatic photogrammetry research
NASA Technical Reports Server (NTRS)
Quam, L. H.; Hannah, M. J.
1974-01-01
A feasibility study on the problem of computer automated aerial/orbital photogrammetry is documented. The techniques investigated were based on correlation matching of small areas in digitized pairs of stereo images taken from high altitude or planetary orbit, with the objective of deriving a 3-dimensional model for the surface of a planet.
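The core operation described, correlation matching of small areas, can be sketched as normalized cross-correlation of a patch against a search region; synthetic arrays stand in for the digitized stereo pair.

import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_patch(patch, search, step=1):
    """Find the offset in 'search' that best matches 'patch'."""
    ph, pw = patch.shape
    best, best_off = -2.0, (0, 0)
    for y in range(0, search.shape[0] - ph + 1, step):
        for x in range(0, search.shape[1] - pw + 1, step):
            score = ncc(patch, search[y:y + ph, x:x + pw])
            if score > best:
                best, best_off = score, (y, x)
    return best_off, best

rng = np.random.default_rng(1)
left = rng.random((60, 60))
patch = left[20:28, 30:38]                    # template from the "left" image
right = np.roll(left, (2, -3), axis=(0, 1))   # "right" image shifted by a fake disparity
print(match_patch(patch, right))              # expected best offset near (22, 27)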
The solution of a model problem of the atmospheric entry of a small meteoroid
NASA Astrophysics Data System (ADS)
Zalogin, G. N.; Kusov, A. L.
2016-03-01
Direct simulation Monte Carlo modeling (DSMC) is used to solve the problem of the entry into the Earth's atmosphere of a small meteoroid. The main aspects of the physical theory of meteors, such as mass loss (ablation) and effects of aerodynamic and thermal shielding, are considered based on the numerical solution of the model problem of the atmospheric entry of an iron meteoroid. The DSMC makes it possible to obtain insight into the structure of the disturbed area around the meteoroid (coma) and trace its evolution depending on entry velocity and height (Knudsen number) in a transitional flow regime where calculation methods used for free molecular and continuum regimes are inapplicable.
NASA Astrophysics Data System (ADS)
Reid, Lucas; Scherer, Ulrike; Zehe, Erwin
2016-04-01
Soil erosion modeling has always struggled with compensating for the difference in time and spatial scale between model, data and the actual processes involved. This is especially the case with non-event-based long-term models built on the Universal Soil Loss Equation (USLE), yet USLE-based soil erosion models are among the most common and widely used, as they have rather low data requirements and can be applied to large areas. But the majority of mass from soil erosion is eroded within short periods of time during heavy rain events, often within minutes or hours. Advancements of the USLE (e.g. the Modified Universal Soil Loss Equation, MUSLE) allow for a daily time step, but still apply the same empirical methods derived from the USLE. And to improve the actual quantification of sediment input into rivers, soil erosion models are often combined with a Sediment Delivery Ratio (SDR) to get results within the range of measurements. This is still a viable approach for many applications, yet it leaves much to be desired in terms of understanding and reproducing the processes behind soil erosion and sediment input into rivers. That is why, instead of refining and retuning the existing methods, we explore a more comprehensive, physically consistent description of soil erosion. The idea is to describe soil erosion as a dissipative process (Kleidon et al., 2013) and test it in a small sub-basin of the River Inn catchment area in the pre-Alpine foothills. We then compare the results to sediment load measurements from the sub-basin and discuss the advantages and issues with the application of such an approach.
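For reference, a tiny sketch of the USLE-type calculation these models build on, A = R K LS C P, with a crude sediment delivery ratio applied afterwards; all factor values are placeholders.

# Minimal USLE-style soil loss estimate (placeholder factor values)
R = 900.0    # rainfall erosivity (MJ mm ha^-1 h^-1 yr^-1)
K = 0.03     # soil erodibility (t ha h ha^-1 MJ^-1 mm^-1)
LS = 1.8     # slope length and steepness factor (-)
C = 0.15     # cover and management factor (-)
P = 1.0      # support practice factor (-)

gross_erosion = R * K * LS * C * P          # t ha^-1 yr^-1 on the hillslope
sdr = 0.3                                   # assumed sediment delivery ratio
sediment_yield = gross_erosion * sdr        # fraction reaching the river

print(f"gross erosion ~ {gross_erosion:.1f} t/ha/yr, "
      f"sediment yield ~ {sediment_yield:.1f} t/ha/yr")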
Holt, James B.; Zhang, Xingyou; Lu, Hua; Shah, Snehal N.; Dooley, Daniel P.; Matthews, Kevin A.; Croft, Janet B.
2017-01-01
Introduction Local health authorities need small-area estimates for prevalence of chronic diseases and health behaviors for multiple purposes. We generated city-level and census-tract–level prevalence estimates of 27 measures for the 500 largest US cities. Methods To validate the methodology, we constructed multilevel logistic regressions to predict 10 selected health indicators among adults aged 18 years or older by using 2013 Behavioral Risk Factor Surveillance System (BRFSS) data; we applied their predicted probabilities to census population data to generate city-level, neighborhood-level, and zip-code–level estimates for the city of Boston, Massachusetts. Results By comparing the predicted estimates with their corresponding direct estimates from a locally administered survey (Boston BRFSS 2010 and 2013), we found that our model-based estimates for most of the selected health indicators at the city level were close to the direct estimates from the local survey. We also found strong correlation between the model-based estimates and direct survey estimates at neighborhood and zip code levels for most indicators. Conclusion Findings suggest that our model-based estimates are reliable and valid at the city level for certain health outcomes. Local health authorities can use the neighborhood-level estimates if high quality local health survey data are not otherwise available. PMID:29049020
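The final step described, applying model-predicted probabilities to census population counts to obtain small-area prevalence, can be sketched as follows; the tract populations, subgroup split, and probabilities are invented rather than taken from the 500 Cities models.

import numpy as np

# Hypothetical census tracts: adult population split into two demographic groups
pop = np.array([[800, 450],      # tract 1: group A, group B
                [1200, 300],
                [500, 900]], dtype=float)

# Hypothetical model-predicted probability of the outcome by tract and group
p_hat = np.array([[0.22, 0.31],
                  [0.18, 0.27],
                  [0.25, 0.35]])

expected_cases = (pop * p_hat).sum(axis=1)          # expected count per tract
prevalence = expected_cases / pop.sum(axis=1)       # tract-level prevalence
city_prevalence = expected_cases.sum() / pop.sum()  # population-weighted city estimate

print(prevalence.round(3), round(city_prevalence, 3))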
Code of Federal Regulations, 2013 CFR
2013-01-01
... assessment area(s) through its lending activities by considering a bank's home mortgage, small business... and amount of the bank's home mortgage, small business, small farm, and consumer loans, if applicable... bank's home mortgage, small business, small farm, and consumer loans, if applicable, based on the loan...
Ackers, Steven H.; Davis, Raymond J.; Olsen, K.; Dugger, Catherine
2015-01-01
Wildlife habitat mapping has evolved at a rapid pace over the last few decades. Beginning with simple, often subjective, hand-drawn maps, habitat mapping now involves complex species distribution models (SDMs) using mapped predictor variables derived from remotely sensed data. For species that inhabit large geographic areas, remote sensing technology is often essential for producing range wide maps. Habitat monitoring for northern spotted owls (Strix occidentalis caurina), whose geographic range covers about 23 million ha, is based on SDMs that use Landsat Thematic Mapper imagery to create forest vegetation data layers using gradient nearest neighbor (GNN) methods. Vegetation data layers derived from GNN are modeled relationships between forest inventory plot data, climate and topographic data, and the spectral signatures acquired by the satellite. When used as predictor variables for SDMs, there is some transference of the GNN modeling error to the final habitat map. Recent increases in the use of light detection and ranging (lidar) data, coupled with the need to produce spatially accurate and detailed forest vegetation maps, have spurred interest in its use for SDMs and habitat mapping. Instead of modeling predictor variables from remotely sensed spectral data, lidar provides direct measurements of vegetation height for use in SDMs. We expect an SDM habitat map produced from directly measured predictor variables to be more accurate than one produced from modeled predictors. We used maximum entropy (Maxent) SDM modeling software to compare predictive performance and estimates of habitat area between Landsat-based and lidar-based northern spotted owl SDMs and habitat maps. We explored the differences and similarities between these maps and a pre-existing aerial photo-interpreted habitat map produced by local wildlife biologists. The lidar-based map had the highest predictive performance based on 10 bootstrapped replicate models (AUC = 0.809 ± 0.011), but the performance of the Landsat-based map was within acceptable limits (AUC = 0.717 ± 0.021). As is common with photo-interpreted maps, there was no accuracy assessment available for comparison. The photo-interpreted map produced the highest and lowest estimates of habitat area, depending on which habitat classes were included (nesting, roosting, and foraging habitat = 9962 ha, nesting habitat only = 6036 ha). The Landsat-based map produced an estimate of habitat area that was within this range (95% CI: 6679–9592 ha), while the lidar-based map produced an area estimate similar to what was interpreted by local wildlife biologists as nesting (i.e., high quality) habitat using aerial imagery (95% CI: 5453–7216). Confidence intervals of habitat area estimates from the SDMs based on Landsat and lidar overlapped. We concluded that both Landsat- and lidar-based SDMs produced reasonable maps and area estimates for northern spotted owl habitat within the study area. The lidar-based map was more precise and spatially similar to what local wildlife biologists considered spotted owl nesting habitat. The Landsat-based map provided a less precise spatial representation of habitat within the relatively small geographic confines of the study area, but habitat area estimates were similar to both the photo-interpreted and lidar-based maps. Photo-interpreted maps are time-consuming to produce, subjective in nature, and difficult to replicate.
SDMs provide a framework for efficiently producing habitat maps that can be replicated as habitat conditions change over time, provided that comparable remotely sensed data are available. When the SDM uses predictor variables extracted from lidar data, it can produce a habitat map that is both accurate and useful at large and small spatial scales. In comparison, SDMs using Landsat-based data are more appropriate for large scale analyses of amounts and general spatial patterns of habitat at regional scales.
ERIC Educational Resources Information Center
Gough, John
2007-01-01
Children's natural curiosity about numbers, big and small, can lead to exploring place-value ideas. But how can these abstract concepts be experienced more concretely? This article presents some practical approaches for conceptualising very small numbers using linear models, area models, volume models, and diagrams.
NASA Astrophysics Data System (ADS)
Stumpf, Felix; Goebes, Philipp; Schmidt, Karsten; Schindewolf, Marcus; Schönbrodt-Stitt, Sarah; Wadoux, Alexandre; Xiang, Wei; Scholten, Thomas
2017-04-01
Soil erosion by water poses a major threat to the Three Gorges Reservoir Area in China. A detailed assessment of soil conservation measures requires a tool that spatially identifies sediment reallocations due to rainfall-runoff events in catchments. We applied EROSION 3D as a physically based soil erosion and deposition model in a small mountainous catchment. Generally, we aim to provide a methodological frame that facilitates the model parametrization in a data scarce environment and to identify sediment sources and deposits. We used digital soil mapping techniques to generate spatially distributed soil property information for parametrization. For model calibration and validation, we continuously monitored the catchment on rainfall, runoff and sediment yield for a period of 12 months. The model performed well for large events (sediment yield > 1 Mg) with an averaged individual model error of 7.5%, while small events showed an average error of 36.2%. We focused on the large events to evaluate reallocation patterns. Erosion occurred in 11.1% of the study area with an average erosion rate of 49.9 Mg ha^-1. Erosion mainly occurred on crop rotation areas with a spatial proportion of 69.2% for 'corn-rapeseed' and 69.1% for 'potato-cabbage'. Deposition occurred on 11.0% of the study area. Forested areas (9.7%), infrastructure (41.0%), cropland (corn-rapeseed: 13.6%, potato-cabbage: 11.3%) and grassland (18.4%) were affected by deposition. Because the vast majority of annual sediment yields (80.3%) were associated with a few large erosive events, the modelling approach provides a useful tool to spatially assess soil erosion control and conservation measures.
Modeling seasonal dynamics of the small fish cohorts in fluctuating freshwater marsh landscapes
Jopp, Fred; DeAngelis, Donald L.; Trexler, Joel C.
2010-01-01
Small-bodied fishes constitute an important assemblage in many wetlands. In wetlands that dry periodically except for small permanent waterbodies, these fishes are quick to respond to change and can undergo large fluctuations in numbers and biomasses. An important aspect of landscapes that are mixtures of marsh and permanent waterbodies is that high rates of biomass production occur in the marshes during flooding phases, while the permanent waterbodies serve as refuges for many biotic components during the dry phases. The temporal and spatial dynamics of the small fishes are ecologically important, as these fishes provide a crucial food base for higher trophic levels, such as wading birds. We develop a simple model that is analytically tractable, describing the main processes of the spatio-temporal dynamics of a population of small-bodied fish in a seasonal wetland environment, consisting of marsh and permanent waterbodies. The population expands into newly flooded areas during the wet season and contracts during declining water levels in the dry season. If the marsh dries completely during these times (a drydown), the fish need refuge in permanent waterbodies. At least three new and general conclusions arise from the model: (1) there is an optimal rate at which fish should expand into a newly flooding area to maximize population production; (2) there is also a fluctuation amplitude of water level that maximizes fish production, and (3) there is an upper limit on the number of fish that can reach a permanent waterbody during a drydown, no matter how large the marsh surface area is that drains into the waterbody. Because water levels can be manipulated in many wetlands, it is useful to have an understanding of the role of these fluctuations.
A comparison between two inundation models for the 25 October 2010 Mentawai Islands Tsunami
NASA Astrophysics Data System (ADS)
Huang, Z.; Borrero, J. C.; Qiu, Q.; Hill, E. M.; Li, L.; Sieh, K. E.
2011-12-01
On 25 October 2010, an Mw~7.8 earthquake occurred on the Sumatra megathrust seaward of the Mentawai Islands, Indonesia, generating a tsunami which killed approximately 500 people. Following the event, the Earth Observatory of Singapore (EOS) initiated a post-tsunami field survey, collecting tsunami run-up data from more than 30 sites on Pagai Selatan, Pagai Utara and Sipora. The strongest tsunami effects were observed on several small islands offshore of Pagai Selatan, where runup exceeded 16 m. This presentation will focus on a detailed comparison between two tsunami propagation and inundation models: COMCOT (Cornell Multi-grid Coupled Tsunami model) and MOST (Method of Splitting Tsunami). Simulations are initialized using fault models based on data from a 1-hz GPS system that measured co-seismic deformation throughout the region. Preliminary simulations suggest that 2-m vertical seafloor deformation over a reasonably large area is required to recreate most of the observed tsunami effects. Since the GPS data suggest that subsidence of the islands is small, this implies that the tsunami source region is somewhat narrower and located further offshore than described in recently published earthquake source models based on teleseismic inversions alone. We will also discuss issues such as bathymetric and topographic data preparation and the uncertainty in the modeling results due to the lack of high resolution bathymetry and topography in the study area.
Matti, Jonathan C.; Cox, Brett F.; Rodriguez, Eduardo A.; Obi, Curtis M.; Powell, Robert E.; Hinkle, Margaret E.; Griscom, Andrew; Sabine, Charles; Cwick, Gary J.
1982-01-01
Geological, geochemical, and geophysical evidence, together with a review of historical mining and prospecting activities, suggests that most of the Bighorn Mountains Wilderness Study Area has low potential for the discovery of all types of mineral and energy resources, including precious and base metals, building stone and aggregate, fossil fuels, radioactive-mineral resources, and geothermal resources. Low-grade mineralization has been documented in one small area near Rattlesnake Canyon, and this area has low to moderate potential for future small-scale exploration and development of precious and base metals. Thorium and uranium enrichment have been documented in two small areas in the eastern part of the wilderness study area; these two areas have low to moderate potential for future small-scale exploration and development of radioactive-mineral resources.
NASA Technical Reports Server (NTRS)
Denardo, Billy Pat; Canning, Thomas N.
1952-01-01
Models of the Hermes A-3B missile were tested in the Ames supersonic free-flight wind tunnel to determine the static-longitudinal-stability characteristics at a Mach number of 5.0 and a Reynolds number based on body length of 10 million. The results indicated that the model center of pressure was 45.3 percent of the body length aft of the nose and the lift-curve slope based on body frontal area was 0.064 per degree. Estimates indicated that the effect on these characteristics of aeroelastic twisting of the model fins was small but important if a precise location of center of pressure is required. A comparison of the test results with predictions based on available theory showed that the theory was useful only for rough estimates. The drag coefficient at zero lift, based on body frontal area, was found to be 0.155.
A method for modelling GP practice level deprivation scores using GIS
Strong, Mark; Maheswaran, Ravi; Pearson, Tim; Fryers, Paul
2007-01-01
Background A measure of general practice level socioeconomic deprivation can be used to explore the association between deprivation and other practice characteristics. An area-based categorisation is commonly chosen as the basis for such a deprivation measure. Ideally a practice population-weighted area-based deprivation score would be calculated using individual level spatially referenced data. However, these data are often unavailable. One approach is to link the practice postcode to an area-based deprivation score, but this method has limitations. This study aimed to develop a Geographical Information Systems (GIS) based model that could better predict a practice population-weighted deprivation score in the absence of patient level data than simple practice postcode linkage. Results We calculated predicted practice level Index of Multiple Deprivation (IMD) 2004 deprivation scores using two methods that did not require patient level data. Firstly we linked the practice postcode to an IMD 2004 score, and secondly we used a GIS model derived using data from Rotherham, UK. We compared our two sets of predicted scores to "gold standard" practice population-weighted scores for practices in Doncaster, Havering and Warrington. Overall, the practice postcode linkage method overestimated "gold standard" IMD scores by 2.54 points (95% CI 0.94, 4.14), whereas our modelling method showed no such bias (mean difference 0.36, 95% CI -0.30, 1.02). The postcode-linked method systematically underestimated the gold standard score in less deprived areas, and overestimated it in more deprived areas. Our modelling method showed a small underestimation in scores at higher levels of deprivation in Havering, but showed no bias in Doncaster or Warrington. The postcode-linked method showed more variability when predicting scores than did the GIS modelling method. Conclusion A GIS based model can be used to predict a practice population-weighted area-based deprivation measure in the absence of patient level data. Our modelled measure generally had better agreement with the population-weighted measure than did a postcode-linked measure. Our model may also avoid an underestimation of IMD scores in less deprived areas, and overestimation of scores in more deprived areas, seen when using postcode linked scores. The proposed method may be of use to researchers who do not have access to patient level spatially referenced data. PMID:17822545
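The "gold standard" quantity being predicted, a practice population-weighted area deprivation score, amounts to a weighted mean of area-level IMD scores; the patient counts and scores below are invented.

import numpy as np

# Hypothetical lower-level areas served by one practice:
# number of registered patients in each area and the area's IMD 2004 score
patients = np.array([420, 130, 75, 260, 15])
imd_score = np.array([38.2, 12.5, 55.0, 21.7, 8.9])

practice_weighted_imd = np.average(imd_score, weights=patients)
print(f"practice population-weighted IMD: {practice_weighted_imd:.1f}")

# For comparison, the simple postcode-linked approach would take the score of
# the single area containing the practice postcode, here assumed to be area 0
print(f"postcode-linked IMD: {imd_score[0]:.1f}")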
Application of a process-based shallow landslide hazard model over a broad area in Central Italy
Gioia, Eleonora; Speranza, Gabriella; Ferretti, Maurizio; Godt, Jonathan W.; Baum, Rex L.; Marincioni, Fausto
2015-01-01
Process-based models are widely used for rainfall-induced shallow landslide forecasting. Previous studies have successfully applied the U.S. Geological Survey’s Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability (TRIGRS) model (Baum et al. 2002) to compute infiltration-driven changes in the hillslopes’ factor of safety on small scales (i.e., tens of square kilometers). Soil data input for such models are difficult to obtain across larger regions. This work describes a novel methodology for the application of TRIGRS over broad areas with relatively uniform hydrogeological properties. The study area is a 550-km2 region in Central Italy covered by post-orogenic Quaternary sediments. Due to the lack of field data, we assigned mechanical and hydrological property values through a statistical analysis based on literature review of soils matching the local lithologies. We calibrated the model using rainfall data from 25 historical rainfall events that triggered landslides. We compared the variation of pressure head and factor of safety with the landslide occurrence to identify the best fitting input conditions. Using calibrated inputs and a soil depth model, we ran TRIGRS for the study area. Receiver operating characteristic (ROC) analysis, comparing the model’s output with a shallow landslide inventory, shows that TRIGRS effectively simulated the instability conditions in the post-orogenic complex during historical rainfall scenarios. The implication of this work is that rainfall-induced landslides over large regions may be predicted by a deterministic model, even where data on geotechnical and hydraulic properties as well as temporal changes in topography or subsurface conditions are not available.
Localized Multi-Model Extremes Metrics for the Fourth National Climate Assessment
NASA Astrophysics Data System (ADS)
Thompson, T. R.; Kunkel, K.; Stevens, L. E.; Easterling, D. R.; Biard, J.; Sun, L.
2017-12-01
We have performed localized analysis of scenario-based datasets for the Fourth National Climate Assessment (NCA4). These datasets include CMIP5-based Localized Constructed Analogs (LOCA) downscaled simulations at daily temporal resolution and 1/16th-degree spatial resolution. Over 45 temperature and precipitation extremes metrics have been processed using LOCA data, including threshold, percentile, and degree-days calculations. The localized analysis calculates trends in the temperature and precipitation extremes metrics for relatively small regions such as counties, metropolitan areas, climate zones, administrative areas, or economic zones. For NCA4, we are currently addressing metropolitan areas as defined by U.S. Census Bureau Metropolitan Statistical Areas. Such localized analysis provides essential information for adaptation planning at scales relevant to local planning agencies and businesses. Nearly 30 such regions have been analyzed to date. Each locale is defined by a closed polygon that is used to extract LOCA-based extremes metrics specific to the area. For each metric, single-model data at each LOCA grid location are first averaged over several 30-year historical and future periods. Then, for each metric, the spatial average across the region is calculated using model weights based on both model independence and reproducibility of current climate conditions. The range of single-model results is also captured on the same localized basis, and then combined with the weighted ensemble average for each region and each metric. For example, Boston-area cooling degree days and maximum daily temperature are presented for the RCP8.5 and RCP4.5 scenarios. We also discuss inter-regional comparison of these metrics, as well as their relevance to risk analysis for adaptation planning.
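Two pieces of this workflow, a degree-day metric from daily temperatures and a weighted multi-model average for one locale, can be sketched as follows; the daily series and model weights are synthetic stand-ins for the LOCA data and the independence/skill weights.

import numpy as np

def cooling_degree_days(tmean_daily_c, base_c=18.3):
    """Annual cooling degree days from daily mean temperature (deg C)."""
    return np.sum(np.maximum(tmean_daily_c - base_c, 0.0))

rng = np.random.default_rng(42)
# Hypothetical daily means for 3 models over one year at one locale
daily = (12.0 + 12.0 * np.sin(np.linspace(0, 2 * np.pi, 365))
         + rng.normal(0.0, 2.0, size=(3, 365)))

cdd = np.array([cooling_degree_days(m) for m in daily])

# Hypothetical model weights (independence x skill), normalised to sum to 1
weights = np.array([0.5, 0.3, 0.2])
ensemble_cdd = np.sum(weights * cdd)
print(cdd.round(0), round(ensemble_cdd, 0))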
Teng, Shizhu; Jia, Qiaojuan; Huang, Yijian; Chen, Liangcao; Fei, Xufeng; Wu, Jiaping
2015-10-01
Sporadic cases occurring in small geographic units can lead to extreme values of incidence due to the small population bases, which would influence the analysis of actual incidence. This study introduced a method of hierarchical clustering and partitioning regionalization, which integrates areas with small populations into larger areas with sufficient population by using a Geographic Information System (GIS), based on the principles of spatial continuity and geographical similarity (homogeneity test). This method was applied in spatial epidemiology by using a data set of thyroid cancer incidence in Yiwu, Zhejiang province, between 2010 and 2013. Thyroid cancer incidence data were more reliable and stable in the newly regionalized areas. Hotspot analysis (Getis-Ord) of the incidence in the new areas indicated that there was obvious case clustering in the central area of Yiwu. This method can effectively solve the problem of small population bases in small geographic units in spatial epidemiological analyses of thyroid cancer incidence and can be used for other diseases and in other areas.
Night Attack Workload Steering Group. Volume 3. Simulation and Human Factors Subgroup
1982-06-01
information interpretation. The second is the use of pictorial formats or computer-generated displays that combine many present-day displays into a small number...base exists in any form (digital, film, or model) which supports the wide-area, long-track, low-level requirements levied by night attack training
NASA Astrophysics Data System (ADS)
Skok, Gregor; Žagar, Nedjeljka; Honzak, Luka; Žabkar, Rahela; Rakovec, Jože; Ceglar, Andrej
2016-01-01
The study presents a precipitation intercomparison based on two satellite-derived datasets (TRMM 3B42, CMORPH), four raingauge-based datasets (GPCC, E-OBS, Willmott & Matsuura, CRU), ERA Interim reanalysis (ERAInt), and a single climate simulation using the WRF model. The comparison was performed for a domain encompassing parts of Europe and the North Atlantic over the 11-year period of 2000-2010. The four raingauge-based datasets are similar to the TRMM dataset with biases over Europe ranging from -7 % to +4 %. The spread among the raingauge-based datasets is relatively small over most of Europe, although areas with greater uncertainty (more than 30 %) exist, especially near the Alps and other mountainous regions. There are distinct differences between the datasets over the European land area and the Atlantic Ocean in comparison to the TRMM dataset. ERAInt has a small dry bias over the land; the WRF simulation has a large wet bias (+30 %), whereas CMORPH is characterized by a large and spatially consistent dry bias (-21 %). Over the ocean, both ERAInt and CMORPH have a small wet bias (+8 %) while the wet bias in WRF is significantly larger (+47 %). ERAInt has the highest frequency of low-intensity precipitation while the frequency of high-intensity precipitation is the lowest due to its lower native resolution. Both satellite-derived datasets have more low-intensity precipitation over the ocean than over the land, while the frequency of higher-intensity precipitation is similar or larger over the land. This result is likely related to orography, which triggers more intense convective precipitation, while the Atlantic Ocean is characterized by more homogeneous large-scale precipitation systems associated with larger areas of lower-intensity precipitation. However, this is not observed in ERAInt and WRF, indicating the insufficient representation of convective processes in the models. Finally, the Fraction Skill Score confirmed that both models perform better over the Atlantic Ocean, with ERAInt outperforming WRF at low thresholds and WRF outperforming ERAInt at higher thresholds. The diurnal cycle is simulated better by WRF than by ERAInt, although WRF could not reproduce the amplitude of the diurnal cycle well. While the evaluation of the WRF model confirms earlier findings related to the model's wet bias over European land, the applied satellite-derived precipitation datasets revealed differences between the land and ocean areas along with uncertainties in the observation datasets.
NASA Astrophysics Data System (ADS)
Poletti, Maria Laura; Pignone, Flavio; Rebora, Nicola; Silvestro, Francesco
2017-04-01
Exposure to flash floods is particularly significant for Mediterranean coastal cities, which are generally densely inhabited. Severe rainfall events, often associated with intense and organized thunderstorms, produced flash floods and landslides during the last century, causing serious damage to urban areas and, in the worst events, human losses. The temporal scale of these events has been observed to be closely linked to the size of the catchments involved: in the Mediterranean area a great number of catchments that pass through coastal cities have a small drainage area (less than 100 km2) and a corresponding hydrologic response timescale on the order of a few hours. A suitable nowcasting chain is essential for the timely forecasting of this kind of event, since meteorological forecast systems are unable to predict precipitation at the scale of these events, which is small both in space (a few km) and in time (hourly). Nowcasting models, covering the time interval of the following two hours starting from the observation, try to extend the predictability limits of the forecasting models in support of real-time flood alert system operations. This work aims to present the use of hydrological models coupled with nowcasting techniques. The nowcasting model PhaSt furnishes an ensemble of equi-probable future precipitation scenarios on time horizons of 1-3 h starting from the most recent radar observations. Coupling the nowcasting model PhaSt with the hydrological model Continuum allows floods to be forecast a few hours in advance. In this way it is possible to generate different discharge predictions for the following hours and associated return-period maps: these maps can be used to support the decision process for the warning system.
Fossett, Mark
2011-01-01
This paper considers the potential for using agent models to explore theories of residential segregation in urban areas. Results of generative experiments conducted using an agent-based simulation of segregation dynamics document that varying a small number of model parameters representing constructs from urban-ecological theories of segregation can generate a wide range of qualitatively distinct and substantively interesting segregation patterns. The results suggest how complex, macro-level patterns of residential segregation can arise from a small set of simple micro-level social dynamics operating within particular urban-demographic contexts. The promise and current limitations of agent simulation studies are noted and optimism is expressed regarding the potential for such studies to engage and contribute to the broader research literature on residential segregation. PMID:21379372
NASA Technical Reports Server (NTRS)
Mascaro, Giuseppe; Vivoni, Enrique R.; Deidda, Roberto
2010-01-01
Accounting for small-scale spatial heterogeneity of soil moisture (theta) is required to enhance the predictive skill of land surface models. In this paper, we present the results of the development, calibration, and performance evaluation of a downscaling model based on multifractal theory using aircraft-based (800 m) theta estimates collected during the Southern Great Plains experiment in 1997 (SGP97). We first demonstrate the presence of scale invariance and multifractality in theta fields of nine square domains of size 25.6 x 25.6 sq km, approximately a satellite footprint. Then, we estimate the downscaling model parameters and evaluate the model performance using a set of different calibration approaches. Results reveal that small-scale theta distributions are adequately reproduced across the entire region when coarse predictors include a dynamic component (i.e., the spatial mean soil moisture
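The following minimal sketch illustrates the general idea of multifractal (multiplicative cascade) downscaling that the abstract refers to: a single coarse-cell soil moisture value is disaggregated by repeatedly splitting cells and multiplying by unit-mean random weights. The cascade generator, its parameters, and the lognormal weight choice are assumptions made for illustration, not the calibrated SGP97 model.

```python
import numpy as np

def cascade_downscale(coarse_value, levels=5, sigma=0.4, seed=1):
    """Disaggregate one coarse-cell value through a multiplicative cascade:
    each cell is split into 2x2 children multiplied by unit-mean lognormal
    weights, so the coarse-scale mean is preserved on average.  The generator
    and its parameters are illustrative, not the calibrated SGP97 model."""
    rng = np.random.default_rng(seed)
    field = np.array([[coarse_value]], dtype=float)
    for _ in range(levels):
        w = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma,
                          size=(field.shape[0] * 2, field.shape[1] * 2))
        field = np.kron(field, np.ones((2, 2))) * w
    return field

fine = cascade_downscale(coarse_value=0.25, levels=5)   # one footprint cell -> 32 x 32 cells
print(fine.shape, float(fine.mean()))
```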
Overland Flow Analysis Using Time Series of Suas-Derived Elevation Models
NASA Astrophysics Data System (ADS)
Jeziorska, J.; Mitasova, H.; Petrasova, A.; Petras, V.; Divakaran, D.; Zajkowski, T.
2016-06-01
With the advent of innovative techniques for generating high temporal and spatial resolution terrain models from Unmanned Aerial Systems (UAS) imagery, it has become possible to precisely map overland flow patterns. Furthermore, the process has become more affordable and efficient through the coupling of small UAS (sUAS) that are easily deployed with Structure from Motion (SfM) algorithms that can efficiently derive 3D data from RGB imagery captured with consumer grade cameras. We propose applying a robust overland flow algorithm based on the path sampling technique for mapping flow paths in arable land at a small test site in Raleigh, North Carolina. By comparing a time series of five flights in 2015 with the results of a simulation based on the most recent lidar-derived DEM (2013), we show that the sUAS-based data are suitable for overland flow predictions and have several advantages over the lidar data. The sUAS-based data capture preferential flow along tillage and more accurately represent gullies. Furthermore, the simulated water flow patterns over the sUAS-based terrain models are consistent throughout the year. When terrain models are reconstructed only from sUAS-captured RGB imagery, however, water flow modeling is only appropriate in areas with sparse or no vegetation cover.
Infrared Spectroscopic Imaging for Prostate Pathology Practice
2011-04-01
features – geometric properties of epithelial cells/nuclei and lumens – that are quantified based on H&E stained images as well as FT-IR images of...the samples. By restricting the features used to geometric measures, we sought to mimic the pattern recognition process employed by human experts, and...relatively dark and can be modeled as small elliptical areas in the stained images. This geometrical model is often confounded as multiple nuclei can be
Predicting red wolf release success in the southeastern United States
van Manen, Frank T.; Crawford, Barron A.; Clark, Joseph D.
2000-01-01
Although the red wolf (Canis rufus) was once found throughout the southeastern United States, indiscriminate killing and habitat destruction reduced its range to a small section of coastal Texas and Louisiana. Wolves trapped from 1973 to 1980 were taken to establish a captive breeding program that was used to repatriate 2 mainland and 3 island red wolf populations. We collected data from 320 red wolf releases in these areas and classified each as a success or failure based on survival and reproductive criteria, and whether recaptures were necessary to resolve conflicts with humans. We evaluated the relations between release success and conditions at the release sites, characteristics of released wolves, and release procedures. Although <44% of the variation in release success was explained, model performance based on jackknife tests indicated a 72-80% correct prediction rate for the 4 operational models we developed. The models indicated that success was associated with human influences on the landscape and the level of wolf habituation to humans prior to release. We applied the models to 31 prospective areas for wolf repatriation and calculated an index of release success for each area. Decision-makers can use these models to objectively rank prospective release areas and compare strengths and weaknesses of each.
Jung, Ho-Won; El Emam, Khaled
2014-05-29
A linear programming (LP) model was proposed to create de-identified data sets that maximally include spatial detail (e.g., geocodes such as ZIP or postal codes, census blocks, and locations on maps) while complying with the HIPAA Privacy Rule's Expert Determination method, i.e., ensuring that the risk of re-identification is very small. The LP model determines the transition probability from an original location of a patient to a new randomized location. However, it has a limitation for the cases of areas with a small population (e.g., a median of 10 people in a ZIP code). We extend the previous LP model to accommodate the cases of a smaller population in some locations, while creating de-identified patient spatial data sets which ensure the risk of re-identification is very small. Our LP model was applied to a data set of 11,740 postal codes in the City of Ottawa, Canada. On this data set we demonstrated the limitations of the previous LP model, in that it produces improbable results, and showed how our extensions to deal with small areas allow the de-identification of the whole data set. The LP model described in this study can be used to de-identify geospatial information for areas with small populations with minimal distortion to postal codes. Our LP model can be extended to include other information, such as age and gender.
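A toy linear program in the spirit of the abstract is sketched below with scipy.optimize.linprog; it is not the authors' exact formulation. The decision variables are transition probabilities from an original area to a released area, the objective keeps as many records as possible in their original area, and the risk bound is approximated linearly by requiring that no single origin contribute more than a 1/k share of the expected population released to any area. The populations, the value of k, and this particular risk constraint are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy illustration (not the authors' exact formulation): m small areas with
# populations n, and decision variables x[i, j] = P(a record from area i is
# released as area j).  We maximize the population-weighted chance of keeping
# the original area, subject to a linearized "hide in a crowd of k" bound: no
# single origin may contribute more than a 1/k share of the expected population
# released to any area.  k must stay small relative to the number of areas that
# can plausibly share a released location for the LP to remain feasible.
m, k = 6, 3.0
n = np.array([500.0, 400.0, 300.0, 200.0, 100.0, 50.0])   # hypothetical populations
idx = lambda i, j: i * m + j                               # flatten x[i, j]

c = np.zeros(m * m)                    # objective: minimize -sum_i n_i * x[i, i]
for i in range(m):
    c[idx(i, i)] = -n[i]

A_eq = np.zeros((m, m * m))            # row-stochastic: sum_j x[i, j] = 1
for i in range(m):
    A_eq[i, i * m:(i + 1) * m] = 1.0
b_eq = np.ones(m)

A_ub, b_ub = [], []                    # risk: k*n_i*x[i,j] - sum_l n_l*x[l,j] <= 0
for i in range(m):
    for j in range(m):
        row = np.zeros(m * m)
        for l in range(m):
            row[idx(l, j)] -= n[l]
        row[idx(i, j)] += k * n[i]
        A_ub.append(row)
        b_ub.append(0.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(res.x.reshape(m, m).round(3))    # transition probability matrix
```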
Geomorphically based predictive mapping of soil thickness in upland watersheds
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.; Rasmussen, Craig
2009-09-01
The hydrologic response of upland watersheds is strongly controlled by soil (regolith) thickness. Despite the need to quantify soil thickness for input into hydrologic models, there is currently no widely used, geomorphically based method for doing so. In this paper we describe and illustrate a new method for predictive mapping of soil thicknesses using high-resolution topographic data, numerical modeling, and field-based calibration. The model framework works directly with input digital elevation model data to predict soil thicknesses assuming a long-term balance between soil production and erosion. Erosion rates in the model are quantified using one of three geomorphically based sediment transport models: nonlinear slope-dependent transport, nonlinear area- and slope-dependent transport, and nonlinear depth- and slope-dependent transport. The model balances soil production and erosion locally to predict a family of solutions corresponding to a range of values of two unconstrained model parameters. A small number of field-based soil thickness measurements can then be used to calibrate the local value of those unconstrained parameters, thereby constraining which solution is applicable at a particular study site. As an illustration, the model is used to predictively map soil thicknesses in two small, ˜0.1 km2, drainage basins in the Marshall Gulch watershed, a semiarid drainage basin in the Santa Catalina Mountains of Pima County, Arizona. Field observations and calibration data indicate that the nonlinear depth- and slope-dependent sediment transport model is the most appropriate transport model for this site. The resulting framework provides a generally applicable, geomorphically based tool for predictive mapping of soil thickness using high-resolution topographic data sets.
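For the simplest member of the transport family mentioned above (linear, slope-dependent transport), the steady-state balance between exponential soil production and erosion has a closed form, h = h0 ln(P0 / E) with E = -K ∇²z on eroding terrain. The sketch below applies that closed form to a synthetic DEM; the parameter values, the synthetic hill, and the treatment of depositional cells are assumptions for illustration, not the paper's calibrated model, which also handles nonlinear and depth-dependent transport.

```python
import numpy as np
from scipy import ndimage

def steady_state_soil_thickness(z, dx, P0=1e-4, h0=0.5, K=0.003, h_max=5.0):
    """Closed-form steady-state thickness for the simplest transport law in the
    framework above (linear slope-dependent transport): soil production
    P0*exp(-h/h0) balances erosion E = -K * laplacian(z).  Parameter values and
    units (m, m/yr) are illustrative assumptions, not calibrated values."""
    lap = ndimage.laplace(z) / dx**2            # finite-difference Laplacian of the DEM
    erosion = -K * lap                           # positive on convex (eroding) terrain
    h = np.full_like(z, h_max)                   # depositional / near-flat cells capped
    eroding = (erosion > 0) & (erosion <= P0)
    h[eroding] = np.minimum(h0 * np.log(P0 / erosion[eroding]), h_max)
    h[erosion > P0] = 0.0                        # erosion outpaces production: bare rock
    return h

# Synthetic hillslope DEM standing in for high-resolution topographic data
x, y = np.meshgrid(np.linspace(0, 200, 201), np.linspace(0, 200, 201))
z = 20.0 * np.exp(-((x - 100.0)**2 + (y - 100.0)**2) / 4000.0)   # smooth hill, 1 m grid
h = steady_state_soil_thickness(z, dx=1.0)
print(float(h.min()), float(h.max()))
```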
Computational simulations of vocal fold vibration: Bernoulli versus Navier-Stokes.
Decker, Gifford Z; Thomson, Scott L
2007-05-01
The use of the mechanical energy (ME) equation for fluid flow, an extension of the Bernoulli equation, to predict the aerodynamic loading on a two-dimensional finite element vocal fold model is examined. Three steady, one-dimensional ME flow models, incorporating different methods of flow separation point prediction, were compared. For two models, determination of the flow separation point was based on fixed ratios of the glottal area at separation to the minimum glottal area; for the third model, the separation point determination was based on fluid mechanics boundary layer theory. Results of flow rate, separation point, and intraglottal pressure distribution were compared with those of an unsteady, two-dimensional, finite element Navier-Stokes model. Cases were considered with a rigid glottal profile as well as with a vibrating vocal fold. For small glottal widths, the three ME flow models yielded good predictions of flow rate and intraglottal pressure distribution, but poor predictions of separation location. For larger orifice widths, the ME models were poor predictors of flow rate and intraglottal pressure, but they satisfactorily predicted separation location. For the vibrating vocal fold case, all models resulted in similar predictions of mean intraglottal pressure, maximum orifice area, and vibration frequency, but vastly different predictions of separation location and maximum flow rate.
[Ecological environmental quality assessment of Hangzhou urban area based on RS and GIS].
Xu, Pengwei; Zhao, Duo
2006-06-01
To address the shortcomings of traditional ecological environmental quality assessment, this paper studied the spatial distribution of assessment factors at a medium-to-small scale and the conversion of area-wide characteristics into gridded assessment cells. The main assessment factors, covering natural environmental conditions, environmental quality, natural landscape and urbanization pressure, were classified into four types comprising about eleven assessment factors and were derived from RS images and a GIS-based spatial analysis of environmental quality vector maps. Based on GIS, a comprehensive assessment model for the ecological environmental quality of the Hangzhou urban area was established. Compared with observed urban heat island effects, the assessment results were in good agreement with the ecological environmental quality in the urban area of Hangzhou.
An area model for on-chip memories and its application
NASA Technical Reports Server (NTRS)
Mulder, Johannes M.; Quach, Nhon T.; Flynn, Michael J.
1991-01-01
An area model suitable for comparing data buffers of different organizations and arbitrary sizes is described. The area model considers the supplied bandwidth of a memory cell and includes such buffer overhead as control logic, driver logic, and tag storage. The model gave less than 10 percent error when verified against real caches and register files. It is shown that, comparing caches and register files in terms of area for the same storage capacity, caches generally occupy more area per bit than register files for small caches because the overhead dominates the cache area at these sizes. For larger caches, the smaller storage cells in the cache provide a smaller total cache area per bit than the register set. Studying cache performance (traffic ratio) as a function of area, it is shown that, for small caches, direct-mapped caches perform significantly better than four-way set-associative caches and, for caches of medium areas, both direct-mapped and set-associative caches perform better than fully associative caches.
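A back-of-the-envelope version of such an area model is sketched below: total area is data cells plus tag storage plus a fixed overhead for control and drivers, and area per bit is compared against a register file across capacities. All constants (relative cell sizes, tag bits per block, overheads) are invented for illustration and are not the paper's calibrated values; the qualitative crossover, with overhead dominating small caches, is what the sketch is meant to show.

```python
# Illustrative area model in the spirit of the abstract; all constants (relative
# cell areas, tag bits per block, fixed overheads) are invented, expressed in
# arbitrary "register-cell" units, and are not the paper's calibrated values.
def cache_area(capacity_bits, block_bits=256, tag_bits_per_block=20,
               cell=0.6, tag_cell=0.6, fixed_overhead=5000.0):
    """Data array + tag array + fixed control/driver overhead."""
    blocks = capacity_bits // block_bits
    return capacity_bits * cell + blocks * tag_bits_per_block * tag_cell + fixed_overhead

def regfile_area(capacity_bits, cell=1.0, fixed_overhead=1000.0):
    """Larger multi-ported cells but much smaller fixed overhead."""
    return capacity_bits * cell + fixed_overhead

for kbits in (1, 4, 16, 64, 256):
    bits = kbits * 1024
    print(f"{kbits:>4} Kbit: cache {cache_area(bits)/bits:.2f} vs "
          f"register file {regfile_area(bits)/bits:.2f} area/bit")
```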
NASA Astrophysics Data System (ADS)
Seok, Song Young; Ho, Song Yang; Ho, Lee Jung; Moo Jong, Park
2015-04-01
Due to the increase in impervious surfaces caused by urbanization, together with the increased rainfall brought about by climate change since the late 1990s, flood damage in urban watersheds is rising. Recent flood damage has occurred in medium and small streams rather than in large streams. In particular, in medium streams that pass through cities, sudden floods occur due to short, concentrated rainfall, and urban areas suffer large damage even when the flooding itself is small, because residential areas and social infrastructure are concentrated there. Despite the importance of the medium and small streams that pass through cities, there is no established standard for classifying natural and urban streams, and existing studies mostly focus on the impervious area among the land use characteristics of watersheds. Most existing river studies are based on the watershed scale, but in most urban watersheds the urban areas are concentrated near the confluence and occupy less than 10% of the whole watershed, so there is high uncertainty in classifying urban areas on the basis of the stream's watershed. This study aims to suggest a classification standard for the medium and small streams, between local streams and small streams, that suffer flood damage. For the streams classified in this way, this study analyzed the stream area in relation to stream width and distance using the ArcGIS Buffer tool, based on the stream line rather than the existing watershed scale. Urban watersheds were then selected by analyzing, in different ways, the river area at certain intervals from the centerline of the chosen medium and small streams. Among the land use characteristics of urban areas, the impervious area was applied as the selection standard for urban watersheds, and the characteristics of urban watersheds were presented by calculating the ratio of the stream area to the impervious area using the Buffer tool. Acknowledgement "This research was supported by a grant [NEMA-NH-2011-45] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea." Keywords: land use, urban watershed, medium and small stream, impervious area
Throughput assurance of wireless body area networks coexistence based on stochastic geometry
Wang, Yinglong; Shu, Minglei; Wu, Shangbin
2017-01-01
Wireless body area networks (WBANs) are expected to influence the traditional medical model by assisting caretakers with health telemonitoring. Within WBANs, the transmit power of the nodes should be as small as possible owing to their limited energy capacity but should be sufficiently large to guarantee the quality of the signal at the receiving nodes. When multiple WBANs coexist in a small area, the communication reliability and overall throughput can be seriously affected due to resource competition and interference. We show that the total network throughput largely depends on the WBANs distribution density (λp), transmit power of their nodes (Pt), and their carrier-sensing threshold (γ). Using stochastic geometry, a joint carrier-sensing threshold and power control strategy is proposed to meet the demand of coexisting WBANs based on the IEEE 802.15.4 standard. Given different network distributions and carrier-sensing thresholds, the proposed strategy derives a minimum transmit power according to varying surrounding environment. We obtain expressions for transmission success probability and throughput adopting this strategy. Using numerical examples, we show that joint carrier-sensing thresholds and transmit power strategy can effectively improve the overall system throughput and reduce interference. Additionally, this paper studies the effects of a guard zone on the throughput using a Matern hard-core point process (HCPP) type II model. Theoretical analysis and simulation results show that the HCPP model can increase the success probability and throughput of networks. PMID:28141841
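The guard-zone analysis mentioned at the end relies on a Matern type II hard-core point process. The sketch below simulates such a process by thinning a Poisson point process of WBAN coordinators and compares the retained density with the standard closed-form Matern II intensity; the deployment area, density, and hard-core radius are illustrative, and no SIR or throughput calculation from the paper is reproduced.

```python
import numpy as np

# Illustrative guard-zone sketch via a Matern type II hard-core process: WBAN
# coordinators are dropped as a Poisson point process with density lambda_p,
# each gets a random mark, and a point is retained only if no other point with
# a smaller mark lies within the hard-core (guard-zone) radius r.  The area,
# density and radius are invented; no SIR or throughput model is reproduced.
rng = np.random.default_rng(7)
side, lambda_p, r = 50.0, 0.05, 3.0              # metres, points/m^2, metres

n = rng.poisson(lambda_p * side * side)
pts = rng.uniform(0.0, side, size=(n, 2))
marks = rng.uniform(size=n)

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
keep = np.array([np.all(marks[d[i] < r] > marks[i]) for i in range(n)])

print(f"parent density {n / side**2:.3f}, retained density {keep.sum() / side**2:.3f}")
# Closed-form Matern II intensity (edge effects ignored in the simulation above):
print((1 - np.exp(-lambda_p * np.pi * r**2)) / (np.pi * r**2))
```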
Atmospheric dispersion modelling over complex terrain at small scale
NASA Astrophysics Data System (ADS)
Nosek, S.; Janour, Z.; Kukacka, L.; Jurcakova, K.; Kellnerova, R.; Gulikova, E.
2014-03-01
A previous study concerned with qualitative modelling of neutrally stratified flow over an open-cut coal mine and the important surrounding topography at meso-scale (1:9000) revealed an important area for quantitative modelling of atmospheric dispersion at small scale (1:3300). The selected area includes a necessary part of the coal mine topography with respect to its future expansion and the surrounding populated areas. At this small scale, simultaneous measurements of velocity components and concentrations at specified points of vertical and horizontal planes were performed by two-dimensional Laser Doppler Anemometry (LDA) and a Fast-Response Flame Ionization Detector (FFID), respectively. The impact of the complex terrain on passive pollutant dispersion with respect to the prevailing wind direction was observed, and the prediction of air quality in the populated areas is discussed. The measured data will be used for comparison with another model taking into account the future coal mine transformation, so that the impact of the coal mine transformation on pollutant dispersion can be assessed.
Using small-area variations to inform health care service planning: what do we 'need' to know?
Mercuri, Mathew; Birch, Stephen; Gafni, Amiram
2013-12-01
Allocating resources on the basis of population need is a health care policy goal in many countries. Thus, resources must be allocated in accordance with need if stakeholders are to achieve policy goals. Small area methods have been presented as a means for revealing important information that can assist stakeholders in meeting policy goals. The purpose of this review is to examine the extent to which small area methods provide information relevant to meeting the goals of a needs-based health care policy. We present a conceptual framework explaining the terms 'demand', 'need', 'use' and 'supply', as commonly used in the literature. We critically review the literature on small area methods through the lens of this framework. 'Use' cannot be used as a proxy or surrogate of 'need'. Thus, if the goal of health care policy is to provide equal access for equal need, then traditional small area methods are inadequate because they measure small area variations in use of services in different populations, independent of the levels of need in those populations. Small area methods can be modified by incorporating direct measures of relative population need from population health surveys or by adjusting population size for levels of health risks in populations such as the prevalence of smoking and low birth weight. This might improve what can be learned from studies employing small area methods if they are to inform needs-based health care policies. © 2013 John Wiley & Sons Ltd.
Talmud, Philippa J; Hingorani, Aroon D; Cooper, Jackie A; Marmot, Michael G; Brunner, Eric J; Kumari, Meena; Kivimäki, Mika; Humphries, Steve E
2010-01-14
To assess the performance of a panel of common single nucleotide polymorphisms (genotypes) associated with type 2 diabetes in distinguishing incident cases of future type 2 diabetes (discrimination), and to examine the effect of adding genetic information to previously validated non-genetic (phenotype based) models developed to estimate the absolute risk of type 2 diabetes. Workplace based prospective cohort study with three 5 yearly medical screenings. 5535 initially healthy people (mean age 49 years; 33% women), of whom 302 developed new onset type 2 diabetes over 10 years. Non-genetic variables included in two established risk models-the Cambridge type 2 diabetes risk score (age, sex, drug treatment, family history of type 2 diabetes, body mass index, smoking status) and the Framingham offspring study type 2 diabetes risk score (age, sex, parental history of type 2 diabetes, body mass index, high density lipoprotein cholesterol, triglycerides, fasting glucose)-and 20 single nucleotide polymorphisms associated with susceptibility to type 2 diabetes. Cases of incident type 2 diabetes were defined on the basis of a standard oral glucose tolerance test, self report of a doctor's diagnosis, or the use of anti-diabetic drugs. A genetic score based on the number of risk alleles carried (range 0-40; area under receiver operating characteristics curve 0.54, 95% confidence interval 0.50 to 0.58) and a genetic risk function in which carriage of risk alleles was weighted according to the summary odds ratios of their effect from meta-analyses of genetic studies (area under receiver operating characteristics curve 0.55, 0.51 to 0.59) did not effectively discriminate cases of diabetes. The Cambridge risk score (area under curve 0.72, 0.69 to 0.76) and the Framingham offspring risk score (area under curve 0.78, 0.75 to 0.82) led to better discrimination of cases than did genotype based tests. Adding genetic information to phenotype based risk models did not improve discrimination and provided only a small improvement in model calibration and a modest net reclassification improvement of about 5% when added to the Cambridge risk score but not when added to the Framingham offspring risk score. The phenotype based risk models provided greater discrimination for type 2 diabetes than did models based on 20 common independently inherited diabetes risk alleles. The addition of genotypes to phenotype based risk models produced only minimal improvement in accuracy of risk estimation assessed by recalibration and, at best, a minor net reclassification improvement. The major translational application of the currently known common, small effect genetic variants influencing susceptibility to type 2 diabetes is likely to come from the insight they provide on causes of disease and potential therapeutic targets.
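The two genetic scores described above, an unweighted count of risk alleles and a score weighted by summary log odds ratios, can be illustrated on simulated genotypes as in the sketch below; allele frequencies, effect sizes, and the outcome model are all invented, so the resulting AUCs only illustrate the computation, not the study's findings.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Sketch of the two genetic scores described above on simulated data
# (20 biallelic SNPs coded as 0/1/2 risk-allele counts).  Allele frequencies,
# effect sizes and the outcome model are invented, so the printed AUCs only
# illustrate the computation, not the study's results.
rng = np.random.default_rng(42)
n, n_snp = 5000, 20
freq = rng.uniform(0.1, 0.5, n_snp)                    # risk-allele frequencies
geno = rng.binomial(2, freq, size=(n, n_snp))          # allele counts per person
log_or = rng.normal(0.10, 0.05, n_snp)                 # small per-allele log odds ratios
lin = geno @ log_or
p = 1.0 / (1.0 + np.exp(-(lin - lin.mean() - 2.8)))    # roughly 5-6% incidence
y = rng.binomial(1, p)

count_score = geno.sum(axis=1)       # unweighted: number of risk alleles (0-40)
weighted_score = geno @ log_or       # weighted by (assumed) summary log odds ratios

print("allele-count score AUC:", round(roc_auc_score(y, count_score), 3))
print("weighted score AUC:   ", round(roc_auc_score(y, weighted_score), 3))
```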
Sohl, Terry L.; Dornbierer, Jordan; Wika, Steve; Sayler, Kristi L.; Quenzer, Robert
2017-01-01
Land use and land cover (LULC) change occurs at a local level within contiguous ownership and management units (parcels), yet LULC models primarily use pixel-based spatial frameworks. The few parcel-based models being used overwhelmingly focus on small geographic areas, limiting the ability to assess LULC change impacts at regional to national scales. We developed a modified version of the Forecasting Scenarios of land use change model to project parcel-based agricultural change across a large region in the United States Great Plains. A scenario representing an agricultural biofuel scenario was modeled from 2012 to 2030, using real parcel boundaries based on contiguous ownership and land management units. The resulting LULC projection provides a vastly improved representation of landscape pattern over existing pixel-based models, while simultaneously providing an unprecedented combination of thematic detail and broad geographic extent. The conceptual approach is practical and scalable, with potential use for national-scale projections.
Modelling wetland-groundwater interactions in the boreal Kälväsvaara esker, Northern Finland
NASA Astrophysics Data System (ADS)
Jaros, Anna; Rossi, Pekka; Ronkanen, Anna-Kaisa; Kløve, Bjørn
2016-04-01
Many types of boreal peatland ecosystems, such as alkaline fens, aapa mires and Fennoscandian spring fens, rely on the presence of groundwater. In these ecosystems groundwater creates unique conditions for flora and fauna by providing water, nutrients and constant water temperature, enriching local biodiversity. The groundwater-peatland interactions and their dynamics are, however, in many cases not fully understood, and their measurement and quantification are difficult due to the highly heterogeneous structure of peatlands and the large spatial extent of these ecosystems. Understanding these interactions and their changes due to anthropogenic impacts on groundwater resources would benefit the protection of groundwater-dependent peatlands. The groundwater-peatland interactions were investigated using the fully integrated, physically based groundwater-surface water code HydroGeoSphere in a case study of the Kälväsvaara esker aquifer, Northern Finland. The Kälväsvaara is a geologically complex esker surrounded by a vast aapa mire system including alkaline and spring fens. In addition, numerous small springs occur in the discharge zone of the esker. In order to quantify groundwater-peatland interactions, a simple steady-state model was built and the results were evaluated using expected trends and field measurements. The model reproduced spatially distributed hydrological variables such as soil water content, water depths and groundwater-surface water exchange fluxes within the wetland and esker areas relatively well. The wetlands emerged in the simulations as a result of geological and topographical conditions. They could be identified by high saturation levels at the ground surface and by the presence of shallow ponded water over some areas. The model outputs also exhibited strong surface water-groundwater interactions in some parts of the aapa system. These areas were noted to be regions of substantial diffusive groundwater discharge by earlier studies. In contrast, the simulations were not able to capture small-scale point groundwater discharge, i.e. springs. This reflects that modelling small-scale groundwater input to wetland ecosystems can be challenging without detailed information on the aquifer and wetland geology. Overall, the good consistency between simulations and observations demonstrated that wetland-groundwater interactions can be studied using fully integrated, physically based groundwater-surface water models.
Automated map sharpening by maximization of detail and connectivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.
An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement factors; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and to evaluate map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
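A rough sketch of how a detail/connectivity score of this kind could be computed for a gridded map is given below: the iso-contour level enclosing a fixed volume fraction is found, the surface area of that contour and the number of connected regions above it are measured, and the two are combined. The specific combination used here (surface area divided by region count), the volume fraction, and the synthetic maps are assumptions for illustration, not the published adjusted-surface-area definition or its implementation.

```python
import numpy as np
from scipy import ndimage
from skimage import measure

def adjusted_surface_area(rho, volume_fraction=0.2):
    """Detail/connectivity score in the spirit of the abstract: find the
    iso-contour level enclosing a fixed fraction of the map volume, measure the
    surface area of that contour (detail) and the number of connected regions
    above it (connectivity).  Dividing area by region count to combine the two
    is an assumption of this sketch, not the published definition."""
    level = np.quantile(rho, 1.0 - volume_fraction)
    verts, faces, _, _ = measure.marching_cubes(rho, level=level)
    area = measure.mesh_surface_area(verts, faces)
    n_regions = ndimage.label(rho > level)[1]
    return area / max(n_regions, 1), area, n_regions

# Two independent synthetic maps of different smoothness, standing in for a
# blurred and a more sharply detailed version of a density map.
rng = np.random.default_rng(3)
smooth = ndimage.gaussian_filter(rng.normal(size=(48, 48, 48)), 4.0)
detailed = ndimage.gaussian_filter(rng.normal(size=(48, 48, 48)), 2.0)
for name, m in [("smooth", smooth), ("detailed", detailed)]:
    score, area, n = adjusted_surface_area(m)
    print(f"{name}: surface area={area:.0f}, regions={n}, adjusted={score:.1f}")
```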
Zhang, Jisheng; Jia, Limin; Niu, Shuyun; Zhang, Fan; Tong, Lu; Zhou, Xuesong
2015-01-01
It is essential for transportation management centers to equip and manage a network of fixed and mobile sensors in order to quickly detect traffic incidents and further monitor the related impact areas, especially for high-impact accidents with dramatic traffic congestion propagation. As emerging small Unmanned Aerial Vehicles (UAVs) start to have a more flexible regulation environment, it is critically important to fully explore the potential of using UAVs for monitoring recurring and non-recurring traffic conditions and special events on transportation networks. This paper presents a space-time network-based modeling framework for integrated fixed and mobile sensor networks, in order to provide a rapid and systematic road traffic monitoring mechanism. By constructing a discretized space-time network to characterize not only the speed for UAVs but also the time-sensitive impact areas of traffic congestion, we formulate the problem as a linear integer programming model to minimize the detection delay cost and operational cost, subject to feasible flying route constraints. A Lagrangian relaxation solution framework is developed to decompose the original complex problem into a series of computationally efficient time-dependent and least cost path finding sub-problems. Several examples are used to demonstrate the results of the proposed models in UAVs’ route planning for small and medium-scale networks. PMID:26076404
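The least-cost path sub-problem that the Lagrangian relaxation repeatedly solves can be illustrated on a tiny time-expanded (space-time) network, as in the sketch below using networkx; the grid size, horizon, congested cells, and cost terms are invented stand-ins for the paper's detection-delay and operating costs.

```python
import networkx as nx

# Tiny time-expanded network: nodes are (cell, t); at each step the UAV can stay
# or move to a 4-neighbour cell.  Arc costs mix a unit operating cost with a
# made-up detection-delay term that penalizes reaching congested cells late.
cols, rows, horizon = 4, 4, 6
congested = {(2, 1), (3, 3)}                  # cells we want visited quickly

G = nx.DiGraph()
for t in range(horizon):
    for x in range(cols):
        for y in range(rows):
            for dx, dy in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
                x2, y2 = x + dx, y + dy
                if 0 <= x2 < cols and 0 <= y2 < rows:
                    operating = 1.0
                    delay = 0.5 * t if (x2, y2) in congested else 2.0
                    G.add_edge(((x, y), t), ((x2, y2), t + 1),
                               weight=operating + delay)

source, sink = ((0, 0), 0), ((3, 3), horizon)
path = nx.shortest_path(G, source, sink, weight="weight")
print([cell for cell, t in path])             # the UAV's cell sequence over time
```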
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heberle, Frederick A; Pan, Jianjun; Standaert, Robert F
2012-01-01
Some of our recent work has resulted in the detailed structures of fully hydrated, fluid phase phosphatidylcholine (PC) and phosphatidylglycerol (PG) bilayers. These structures were obtained from the joint refinement of small-angle neutron and X-ray data using the scattering density profile (SDP) models developed by Kučerka et al. (Kučerka et al. 2012; Kučerka et al. 2008). In this review, we first discuss models for the standalone analysis of neutron or X-ray scattering data from bilayers, and assess the strengths and weaknesses inherent in these models. In particular, it is recognized that standalone data do not contain enough information to fully resolve the structure of inherently disordered fluid bilayers, and therefore may not provide a robust determination of bilayer structural parameters, including the much sought after area per lipid. We then discuss the development of matter density-based models (including the SDP model) that allow for the joint refinement of different contrast neutron and X-ray data sets, as well as the implementation of local volume conservation in the unit cell (i.e., ideal packing). Such models provide natural definitions of bilayer thicknesses (most importantly the hydrophobic and Luzzati thicknesses) in terms of Gibbs dividing surfaces, and thus allow for the robust determination of lipid areas through equivalent slab relationships between bilayer thickness and lipid volume. In the final section of this review, we discuss some of the significant findings/features pertaining to structures of PC and PG bilayers as determined from SDP model analyses.
Scenario Analysis of Soil and Water Conservation in Xiejia Watershed Based on Improved CSLE Model
NASA Astrophysics Data System (ADS)
Liu, Jieying; Yu, Ming; Wu, Yong; Huang, Yao; Nie, Yawen
2018-01-01
Based on existing research results and related data, the scenario analysis method is used to evaluate the effects of different soil and water conservation measures on soil erosion in a small watershed. The analysis of soil erosion scenarios and model simulation budgets in the study area shows that the soil erosion rates simulated under all scenarios are lower than the 2013 baseline erosion. Soil and water conservation engineering measures are more effective in reducing soil erosion than soil and water conservation biological measures and soil and water conservation tillage measures.
NASA Astrophysics Data System (ADS)
Baum, R. L.; Coe, J. A.; Godt, J.; Kean, J. W.
2014-12-01
Heavy rainfall during 9 - 13 September 2013 induced about 1100 debris flows in the foothills and mountains of the northern Colorado Front Range. Eye-witness accounts and fire-department records put the times of greatest landslide activity during the times of heaviest rainfall on September 12 - 13. Antecedent soil moisture was relatively low, particularly at elevations below 2250 m where many of the debris flows occurred, based on 45 - 125 mm of summer precipitation and absence of rainfall for about 2 weeks before the storm. Mapping from post-event imagery and field observations indicated that most debris flows initiated as small, shallow landslides. These landslides typically formed in colluvium that consisted of angular clasts in a sandy or silty matrix, depending on the nature of the parent bedrock. Weathered bedrock was partially exposed in the basal surfaces of many of the shallow source areas at depths ranging from 0.2 to 5 m, and source areas commonly occupied less than 500 m2. Although 49% of the source areas occurred in swales and 3 % in channels, where convergent flow might have contributed to pore-pressure build up during the rainfall, 48% of the source areas occurred on open slopes. Upslope contributing areas of most landslides (58%) were small (< 1000 m2) and 78% of the slides occurred on south-facing slopes (90°≤ aspect ≤270°). These observations pose challenges for modeling initiation of the debris flows. Effects of variable soil depth and properties, vegetation, and rainfall must be examined to explain the dominance of debris flows on south-facing slopes. Accounting for the small sizes and mixed swale and open-slope settings of source areas demands new approaches for resolving soil-depth and physical-properties variability. The low-moisture initial conditions require consideration of unsaturated zone effects. Ongoing fieldwork and computational modeling are aimed at addressing these challenges related to initiation of the September 2013 debris flows.
Feng, Dai; Cortese, Giuliana; Baumgartner, Richard
2017-12-01
The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on the construction of the CI for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
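One of the simpler intervals of the kind compared in the paper combines the Mann-Whitney estimate of the AUC with the Hanley-McNeil (1982) standard error; a sketch is given below. The data are drawn from a binormal model with unequal variances purely for illustration, and the paper's broader finding is precisely that small-sample behaviour differs substantially across such methods.

```python
import numpy as np
from scipy import stats

def auc_hanley_mcneil_ci(x_neg, x_pos, alpha=0.05):
    """AUC via the Mann-Whitney statistic with the Hanley-McNeil (1982)
    standard error; one of the simpler intervals among the many compared in
    the paper (small-sample behaviour differs by method)."""
    n_neg, n_pos = len(x_neg), len(x_pos)
    # Mann-Whitney estimate of the AUC: P(X_pos > X_neg) + 0.5 * P(ties)
    gt = (x_pos[:, None] > x_neg[None, :]).sum()
    ties = (x_pos[:, None] == x_neg[None, :]).sum()
    auc = (gt + 0.5 * ties) / (n_pos * n_neg)
    q1 = auc / (2.0 - auc)
    q2 = 2.0 * auc**2 / (1.0 + auc)
    var = (auc * (1 - auc) + (n_pos - 1) * (q1 - auc**2)
           + (n_neg - 1) * (q2 - auc**2)) / (n_pos * n_neg)
    z = stats.norm.ppf(1.0 - alpha / 2.0)
    half = z * np.sqrt(var)
    return auc, max(0.0, auc - half), min(1.0, auc + half)

# Small-sample example drawn from a binormal model with unequal variances
rng = np.random.default_rng(11)
controls = rng.normal(0.0, 1.0, 15)
cases = rng.normal(1.2, 1.5, 12)
print(auc_hanley_mcneil_ci(controls, cases))
```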
Doi, Shunsuke; Ide, Hiroo; Takeuchi, Koichi; Fujita, Shinsuke; Takabayashi, Katsuhiko
2017-01-01
Accessibility to healthcare service providers, the quantity, and the quality of them are important for national health. In this study, we focused on geographic accessibility to estimate and evaluate future demand and supply of healthcare services. We constructed a simulation model called the patient access area model (PAAM), which simulates patients’ access time to healthcare service institutions using a geographic information system (GIS). Using this model, to evaluate the balance of future healthcare services demand and supply in small areas, we estimated the number of inpatients every five years in each area and compared it with the number of hospital beds within a one-hour drive from each area. In an experiment with the Tokyo metropolitan area as a target area, when we assumed hospital bed availability to be 80%, it was predicted that over 78,000 inpatients would not receive inpatient care in 2030. However, this number would decrease if we lowered the rate of inpatient care by 10% and the average length of the hospital stay. Using this model, recommendations can be made regarding what action should be undertaken and by when to prevent a dramatic increase in healthcare demand. This method can help plan the geographical resource allocation in healthcare services for healthcare policy. PMID:29125585
A simple distributed sediment delivery approach for rural catchments
NASA Astrophysics Data System (ADS)
Reid, Lucas; Scherer, Ulrike
2014-05-01
The transfer of sediments from source areas to surface waters is a complex process. In process-based erosion models, sediment input is thus quantified by representing all relevant sub-processes such as detachment, transport and deposition of sediment particles along the flow path to the river. A successful application of these models requires, however, a large amount of spatially highly resolved data on physical catchment characteristics, which is only available for a few well-examined small catchments. For lack of appropriate models, the empirical Universal Soil Loss Equation (USLE) is widely applied to quantify the sediment production in meso- to large-scale basins. As the USLE provides long-term mean soil loss rates, it is often combined with spatially lumped models to estimate the sediment delivery ratio (SDR). In these models, the SDR is related to data on morphological characteristics of the catchment such as average local relief, drainage density, proportion of depressions or soil texture. Some approaches include the relative distance between sediment source areas and the river channels. However, several studies showed that spatially lumped parameters describing the morphological characteristics are only of limited value for representing the factors influencing sediment transport at the catchment scale. Sediment delivery is controlled by the location of the sediment source areas in the catchment and the morphology along the flow path to the surface water bodies. This complex interaction of spatially varied physiographic characteristics cannot be adequately represented by lumped morphological parameters. The objective of this study is to develop a simple but spatially distributed approach to quantify the sediment delivery ratio by considering the characteristics of the flow paths in a catchment. We selected a small catchment located in an intensively cultivated loess region in Southwest Germany as the study area for the development of the SDR approach. The flow pathways were extracted in a geographic information system. Then the sediment delivery ratio for each source area was determined using an empirical approach considering the slope, morphology and land use properties along the flow path. As a benchmark for the calibration of the model parameters we used results of a detailed process-based erosion model available for the study area. Afterwards the approach was tested in larger catchments located in the same loess region.
David, Elizabeth A; Andersen, Stina W; Beckett, Laurel A; Melnikow, Joy; Kelly, Karen; Cooke, David T; Brown, Lisa M; Canter, Robert J
2017-11-01
For advanced-stage non-small cell lung cancer, chemotherapy and chemoradiotherapy are the primary treatments. Although surgical intervention in these patients is associated with improved survival, the effect of selection bias is poorly defined. Our objective was to characterize selection bias and identify potential surgical candidates by constructing a Surgical Selection Score (SSS). Patients with clinical stage IIIA, IIIB, or IV non-small cell lung cancer were identified in the National Cancer Data Base from 1998 to 2012. Logistic regression was used to develop the SSS based on clinical characteristics. Estimated area under the receiver operating characteristic curve was used to assess discrimination performance of the SSS. Kaplan-Meier analysis was used to compare patients with similar SSSs. We identified 300,572 patients with stage IIIA, IIIB, or IV non-small cell lung cancer without missing data; 6% (18,701) underwent surgical intervention. The surgical cohort was 57% stage IIIA (n = 10,650), 19% stage IIIB (n = 3,483), and 24% stage IV (n = 4,568). The areas under the receiver operating characteristic curve from the best-fit logistic regression model in the training and validation sets were not significantly different, at 0.83 (95% confidence interval, 0.82 to 0.83) and 0.83 (95% confidence interval, 0.82 to 0.83). The range of SSS is 43 to 1,141. As expected, SSS was a good predictor of survival. Within each quartile of SSS, patients in the surgical group had significantly longer survival than nonsurgical patients (p < 0.001). A prediction model for selection of patients for surgical intervention was created. Once validated and prospectively tested, this model may be used to identify patients who may benefit from surgical intervention. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Seliske, L; Norwood, T A; McLaughlin, J R; Wang, S; Palleschi, C; Holowaty, E
2016-06-07
An important public health goal is to decrease the prevalence of key behavioural risk factors, such as tobacco use and obesity. Survey information is often available at the regional level, but heterogeneity within large geographic regions cannot be assessed. Advanced spatial analysis techniques are demonstrated to produce sensible micro area estimates of behavioural risk factors that enable identification of areas with high prevalence. A spatial Bayesian hierarchical model was used to estimate the micro area prevalence of current smoking and excess bodyweight for the Erie-St. Clair region in southwestern Ontario. Estimates were mapped for male and female respondents of five cycles of the Canadian Community Health Survey (CCHS). The micro areas were 2006 Census Dissemination Areas, with an average population of 400-700 people. Two individual-level models were specified: one controlled for survey cycle and age group (model 1), and one controlled for survey cycle, age group and micro area median household income (model 2). Post-stratification was used to derive micro area behavioural risk factor estimates weighted to the population structure. SaTScan analyses were conducted on the granular, postal-code level CCHS data to corroborate findings of elevated prevalence. Current smoking was elevated in two urban areas for both sexes (Sarnia and Windsor), and an additional small community (Chatham) for males only. Areas of excess bodyweight were prevalent in an urban core (Windsor) among males, but not females. Precision of the posterior post-stratified current smoking estimates was improved in model 2, as indicated by narrower credible intervals and a lower coefficient of variation. For excess bodyweight, both models had similar precision. Aggregation of the micro area estimates to CCHS design-based estimates validated the findings. This is among the first studies to apply a full Bayesian model to complex sample survey data to identify micro areas with variation in risk factor prevalence, accounting for spatial correlation and other covariates. Application of micro area analysis techniques helps define areas for public health planning, and may be informative to surveillance and research modeling of relevant chronic disease outcomes.
Davila-Payan, Carlo; DeGuzman, Michael; Johnson, Kevin; Serban, Nicoleta
2015-01-01
Introduction Interventions for pediatric obesity can be geographically targeted if high-risk populations can be identified. We developed an approach to estimate the percentage of overweight or obese children aged 2 to 17 years in small geographic areas using publicly available data. We piloted our approach for Georgia. Methods We created a logistic regression model to estimate the individual probability of high body mass index (BMI), given data on the characteristics of the survey participants. We combined the regression model with a simulation to sample subpopulations and obtain prevalence estimates. The models used information from the 2001–2010 National Health and Nutrition Examination Survey, the 2010 Census, and the 2010 American Community Survey. We validated our results by comparing 1) estimates for adults in Georgia produced by using our approach with estimates from the Centers for Disease Control and Prevention (CDC) and 2) estimates for children in Arkansas produced by using our approach with school examination data. We generated prevalence estimates for census tracts in Georgia and prioritized areas for interventions. Results In DeKalb County, the mean prevalence among census tracts varied from 27% to 40%. For adults, the median difference between our estimates and CDC estimates was 1.3 percentage points; for Arkansas children, the median difference between our estimates and examination-based estimates data was 1.7 percentage points. Conclusion Prevalence estimates for census tracts can be different from estimates for the county, so small-area estimates are crucial for designing effective interventions. Our approach validates well against external data, and it can be a relevant aid for planning local interventions for children. PMID:25764138
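The general two-step approach described above, an individual-level regression model followed by simulation over a small area's demographic composition, can be sketched as follows; the predictors, category codings, coefficients, and tract composition are all hypothetical and are not the survey or Census variables used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2015)

# 1) Fit an individual-level model on survey-like data (here simulated);
#    predictors are illustrative and coded ordinally for simplicity
#    (age group, sex, household income bracket).
n = 8000
X = np.column_stack([
    rng.integers(0, 4, n),      # age group 0-3
    rng.integers(0, 2, n),      # sex
    rng.integers(0, 5, n),      # income bracket 0-4 (low to high)
])
logit = -1.2 + 0.15 * X[:, 0] + 0.10 * X[:, 1] - 0.20 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
model = LogisticRegression().fit(X, y)

# 2) Simulate a census tract: draw synthetic residents from the tract's
#    (hypothetical) demographic composition and average the predicted risk.
def tract_prevalence(model, age_p, sex_p, income_p, n_sim=20000):
    sim = np.column_stack([
        rng.choice(len(age_p), n_sim, p=age_p),
        rng.choice(len(sex_p), n_sim, p=sex_p),
        rng.choice(len(income_p), n_sim, p=income_p),
    ])
    return model.predict_proba(sim)[:, 1].mean()

prev = tract_prevalence(model,
                        age_p=[0.3, 0.3, 0.2, 0.2],
                        sex_p=[0.49, 0.51],
                        income_p=[0.35, 0.25, 0.2, 0.12, 0.08])
print(f"estimated tract prevalence: {prev:.1%}")
```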
NASA Technical Reports Server (NTRS)
Stevens, Joseph E.
1955-01-01
Low-lift drag data are presented herein for one 1/7.5-scale rocket-boosted model and three 1/45.85-scale equivalent-body models of the Grumman F9F-9 airplane. The data were obtained over a Reynolds number range of about 5 x 10(exp 6) to 10 x 10(exp 6) based on wing mean aerodynamic chord for the rocket model and total body length for the equivalent-body models. The rocket-boosted model showed a drag rise of about 0.037 (based on included wing area) between the subsonic level and the peak supersonic drag coefficient at the maximum Mach number of this test. The base drag coefficient measured on this model varied from a value of -0.0015 in the subsonic range to a maximum of about 0.0020 at a Mach number of 1.28. Drag coefficients for the equivalent-body models varied from about 0.125 (based on body maximum area) in the subsonic range to about 0.300 at a Mach number of 1.25. Increasing the total fineness ratio by a small amount raised the drag-rise Mach number slightly.
When Will the Antarctic Ozone Hole Recover?
NASA Technical Reports Server (NTRS)
Newman, Paul A.
2006-01-01
The Antarctic ozone hole demonstrates large-scale, man-made effects on our atmosphere. Surface observations now show that human-produced ozone depleting substances (ODSs) are declining. The ozone hole should soon start to diminish because of this decline. In this talk we will demonstrate an ozone hole parametric model. This model is based upon: 1) a new algorithm for estimating Cl and Br levels over Antarctica and 2) late-spring Antarctic stratospheric temperatures. This parametric model explains 95% of the ozone hole area's variance. We use future ODS levels to predict ozone hole recovery. Full recovery to 1980 levels will occur in approximately 2068. The ozone hole area will very slowly decline over the next 2 decades. Detection of a statistically significant decrease of area will not occur until approximately 2024. We further show that nominal Antarctic stratospheric greenhouse gas forced temperature change should have a small impact on the ozone hole.
When Will the Antarctic Ozone Hole Recover?
NASA Technical Reports Server (NTRS)
Newman, Paul A.; Nash, Eric R.; Kawa, S. Randolph; Montzka, Stephen A.; Schauffler, Sue
2006-01-01
The Antarctic ozone hole demonstrates large-scale, man-made effects on our atmosphere. Surface observations now show that human-produced ozone depleting substances (ODSs) are declining. The ozone hole should soon start to diminish because of this decline. Herein we demonstrate an ozone hole parametric model. This model is based upon: 1) a new algorithm for estimating Cl and Br levels over Antarctica and 2) late-spring Antarctic stratospheric temperatures. This parametric model explains 95% of the ozone hole area's variance. We use future ODS levels to predict ozone hole recovery. Full recovery to 1980 levels will occur in approximately 2068. The ozone hole area will very slowly decline over the next 2 decades. Detection of a statistically significant decrease of area will not occur until approximately 2024. We further show that nominal Antarctic stratospheric greenhouse gas forced temperature change should have a small impact on the ozone hole.
Modelling Coastal Cliff Recession Based on the GIM-DDD Method
NASA Astrophysics Data System (ADS)
Gong, Bin; Wang, Shanyong; Sloan, Scott William; Sheng, Daichao; Tang, Chun'an
2018-04-01
The unpredictable and instantaneous collapse behaviour of coastal rocky cliffs may cause damage that extends significantly beyond the area of failure. Gravitational movements that occur during coastal cliff recession involve two major stages: the small deformation stage and the large displacement stage. In this paper, a method of simulating the entire progressive failure process of coastal rocky cliffs is developed based on the gravity increase method (GIM), the rock failure process analysis method and the discontinuous deformation analysis method, and it is referred to as the GIM-DDD method. The small deformation stage, which includes crack initiation, propagation and coalescence processes, and the large displacement stage, which includes block translation and rotation processes during the rocky cliff collapse, are modelled using the GIM-DDD method. In addition, acoustic emissions, stress field variations, crack propagation and failure mode characteristics are further analysed to provide insights that can be used to predict, prevent and minimize potential economic losses and casualties. The calculation and analytical results are consistent with previous studies, which indicate that the developed method provides an effective and reliable approach for performing rocky cliff stability evaluations and coastal cliff recession analyses and has considerable potential for improving the safety and protection of seaside cliff areas.
Shi, Lei; Zhang, Jianjun; Shi, Yi; Ding, Xu; Wei, Zhenchun
2015-01-14
We consider the base station placement problem for wireless sensor networks with successive interference cancellation (SIC) to improve throughput. We build a mathematical model for SIC. Although this model cannot be solved directly, it enables us to identify a necessary condition for SIC on distances from sensor nodes to the base station. Based on this relationship, we propose to divide the feasible region of the base station into small pieces and choose a point within each piece for base station placement. The point with the largest throughput is identified as the solution. The complexity of this algorithm is polynomial. Simulation results show that this algorithm can achieve about 25% improvement compared with the case that the base station is placed at the center of the network coverage area when using SIC.
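The placement strategy, dividing the feasible region into small pieces and keeping the candidate with the largest throughput, can be sketched as a grid search. The throughput function below is a toy stand-in, not the paper's SIC model.

```python
import numpy as np

rng = np.random.default_rng(1)
nodes = rng.uniform(0.0, 100.0, size=(20, 2))   # sensor node positions in a 100x100 m area

def throughput(bs, nodes):
    """Toy throughput proxy: SIC-style decoding favours spread-out node-to-BS
    distances, so reward distance diversity and penalise the farthest node."""
    d = np.linalg.norm(nodes - bs, axis=1)
    return np.std(d) - 0.05 * d.max()

# Divide the feasible region into small pieces (grid cells) and evaluate one
# candidate point per piece, as the abstract suggests.
xs = np.linspace(0.0, 100.0, 21)
ys = np.linspace(0.0, 100.0, 21)
best_point, best_val = None, -np.inf
for x in xs:
    for y in ys:
        val = throughput(np.array([x, y]), nodes)
        if val > best_val:
            best_point, best_val = (x, y), val

print("chosen base station location:", best_point, "score:", round(best_val, 3))
```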
NASA Astrophysics Data System (ADS)
Emoto, K.; Saito, T.; Shiomi, K.
2017-12-01
Short-period (<1 s) seismograms are strongly affected by small-scale (<10 km) heterogeneities in the lithosphere. In general, short-period seismograms are analysed based on the statistical method by considering the interaction between seismic waves and randomly distributed small-scale heterogeneities. Statistical properties of the random heterogeneities have been estimated by analysing short-period seismograms. However, generally, the small-scale random heterogeneity is not taken into account for the modelling of long-period (>2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation. It is not necessary to assume a uniform background velocity, body or surface waves and scattering properties considered in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range including the intensity around the corner wavenumber as P(m) = 8πε²a³/(1 + a²m²)², where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which leads the way to characterizing a wider wavenumber range of spectra, including the corner wavenumber.
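For readability, the recovered heterogeneity spectrum can be checked numerically; the short sketch below evaluates P(m) = 8πε²a³/(1 + a²m²)² with the reported ε = 0.05 and a = 3.1 km around the corner wavenumber 1/a. It is only a numerical illustration of the quoted formula.

```python
import numpy as np

eps, a = 0.05, 3.1  # RMS fractional fluctuation and correlation length (km)

def P(m):
    """Power spectral density P(m) = 8*pi*eps^2*a^3 / (1 + a^2*m^2)^2."""
    return 8.0 * np.pi * eps**2 * a**3 / (1.0 + (a * m) ** 2) ** 2

m = np.array([0.01, 0.1, 1.0 / a, 1.0, 10.0])  # wavenumbers (1/km); 1/a is the corner
print(np.round(P(m), 5))
```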
NASA Astrophysics Data System (ADS)
Rossi, Mauro; Torri, Dino; Santi, Elisa; Bacaro, Giovanni; Marchesini, Ivan
2014-05-01
Landslide phenomena and erosion processes are widespread and every year cause extensive damage to the environment and a significant reduction of ecosystem services. These processes compete with each other, and their complex interactions control landscape evolution. Landslide phenomena and erosion processes can be strongly influenced by land use, vegetation, soil characteristics and anthropic actions. Such phenomena are mainly modeled separately, using empirical and physically based approaches. The former rely upon the identification of simple empirical laws relating the occurrence of instability processes to some of their potential causes. The latter are based on physical descriptions of the processes and, depending on their degree of complexity, can integrate different variables characterizing the process and its triggers. These models often couple a hydrological model with an erosion or a landslide model. The spatial modeling schemes are heterogeneous, but mostly the raster (i.e. matrices of data) or the conceptual (i.e. cascading planes and channels) description of the terrain is used. The two model types are generally designed and applied at different scales. Empirical models, less demanding in terms of input data, cannot consider explicitly the real process triggering mechanisms and are commonly exploited to assess the potential occurrence of instability phenomena over large areas (small scale assessment). Physically-based models are highly demanding in terms of input data, which are difficult to obtain over large areas without large uncertainty, and their applicability is often limited to small catchments or single slopes (large scale assessment). Moreover, these models, even if physically based, are simplified descriptions of the instability processes and can neglect significant aspects of the real triggering mechanisms; for instance, the influence of vegetation has been considered only partially. Although a variety of approaches have been proposed in the literature to model landslide and erosion processes separately, only few attempts have been made to model both jointly, mostly by integrating pre-existing models. To overcome this limitation we developed a new model called LANDPLANER (LANDscape, Plants, LANdslide and ERosion), specifically designed to describe the dynamic response of slopes (or basins) under different changing scenarios including: (i) changes of meteorological factors, (ii) changes of vegetation or land use, and (iii) changes of slope morphology. The model was applied in different study areas in order to check its basic assumptions and to test its general operability and applicability. Results show reasonable model behaviour and confirm its easy applicability in real cases.
DOT National Transportation Integrated Search
2008-10-31
This study explores the application of mileage-based user fees, or vehicle-miles traveled (VMT) fees, as an alternative to the fuel tax in rural and small urban areas. The purpose of the study is to identify the issues associated with implementat...
Fernandes, J P; Freire, M; Guiomar, N; Gil, A
2017-03-15
The present study deals with the development of systematic conservation planning as a management instrument for small oceanic islands, ensuring open systems of governance that are able to integrate an informed and involved participation of the stakeholders. Marxan software was used to define management areas according to a set of alternative land use scenarios considering different conservation and management paradigms. Modeled conservation zones were interpreted and compared with the existing protected areas, allowing more fused information for future trade-offs and stakeholder involvement. The results, allowing the identification of Target Management Units (TMU) based on the consideration of different development scenarios, proved to be consistent with a feasible development of evaluation approaches able to support sound governance systems. Moreover, the detailed geographic identification of TMU seems able to support participatory policies towards a more sustainable management of the entire island. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Arief, A. B.; Yudono, A.; Akil, A.; Ramli, I.
2017-08-01
The lack of social and public facilities on the seven small islands around Makassar causes commuters to experience inefficiency in fulfilling their basic needs on the mainland of Makassar city. The purpose of this study is to find the location of coastal TOD in accordance with the principles of the coastal TOD development model. The results showed that inefficiencies of time, cost and distance could be eliminated by applying a vertical, united and integrated coastal TOD development model. The coastal TOD is delineated using survey, interview and literature study through a GIS-based expert system analysis.
NASA Technical Reports Server (NTRS)
Miller, L. D.; Tom, C.; Nualchawee, K.
1977-01-01
A tropical forest area of Northern Thailand provided a test case of the application of the approach in more natural surroundings. Remote sensing imagery subjected to proper computer analysis has been shown to be a very useful means of collecting spatial data for the science of hydrology. Remote sensing products provide direct input to hydrologic models and practical data bases for planning large and small-scale hydrologic developments. Combining the available remote sensing imagery together with available map information in the landscape model provides a basis for substantial improvements in these applications.
Automated high resolution mapping of coffee in Rwanda using an expert Bayesian network
NASA Astrophysics Data System (ADS)
Mukashema, A.; Veldkamp, A.; Vrieling, A.
2014-12-01
African highland agro-ecosystems are dominated by small-scale agricultural fields that often contain a mix of annual and perennial crops. This makes such systems difficult to map by remote sensing. We developed an expert Bayesian network model to extract the small-scale coffee fields of Rwanda from very high resolution data. The model was subsequently applied to aerial orthophotos covering more than 99% of Rwanda and on one QuickBird image for the remaining part. The method consists of a stepwise adjustment of pixel probabilities, which incorporates expert knowledge on size of coffee trees and fields, and on their location. The initial naive Bayesian network, which is a spectral-based classification, yielded a coffee map with an overall accuracy of around 50%. This confirms that standard spectral variables alone cannot accurately identify coffee fields from high resolution images. The combination of spectral and ancillary data (DEM and a forest map) allowed mapping of coffee fields and associated uncertainties with an overall accuracy of 87%. Aggregated to district units, the mapped coffee areas demonstrated a high correlation with the coffee areas reported in the detailed national coffee census of 2009 (R2 = 0.92). Unlike the census data our map provides high spatial resolution of coffee area patterns of Rwanda. The proposed method has potential for mapping other perennial small scale cropping systems in the East African Highlands and elsewhere.
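The stepwise adjustment of pixel probabilities is, at heart, repeated Bayesian updating with one evidence layer at a time. The sketch below assumes made-up likelihoods for a DEM-suitability layer and a forest mask; it shows the mechanics only, not the trained network of the study.

```python
import numpy as np

# Per-pixel prior probability of "coffee" from the spectral (naive Bayes) step.
p_spectral = np.array([0.50, 0.62, 0.35, 0.80])

def bayes_update(prior, lik_given_coffee, lik_given_other):
    """One evidence layer: P(coffee|e) is proportional to P(e|coffee) * P(coffee)."""
    num = lik_given_coffee * prior
    den = num + lik_given_other * (1.0 - prior)
    return num / den

# Hypothetical likelihood ratios: elevation suitable for coffee, and not-forest.
p = bayes_update(p_spectral, lik_given_coffee=0.9, lik_given_other=0.4)    # DEM layer
p = bayes_update(p,          lik_given_coffee=0.95, lik_given_other=0.5)   # forest-mask layer
print(np.round(p, 2))  # adjusted coffee probabilities per pixel
```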
Where to Dig for Fossils: Combining Climate-Envelope, Taphonomy and Discovery Models
Block, Sebastián; Saltré, Frédérik; Rodríguez-Rey, Marta; Fordham, Damien A.; Unkel, Ingmar; Bradshaw, Corey J. A.
2016-01-01
Fossils represent invaluable data to reconstruct the past history of life, yet fossil-rich sites are often rare and difficult to find. The traditional fossil-hunting approach focuses on small areas and has not yet taken advantage of modelling techniques commonly used in ecology to account for an organism’s past distributions. We propose a new method to assist finding fossils at continental scales based on modelling the past distribution of species, the geological suitability of fossil preservation and the likelihood of fossil discovery in the field, and apply it to several genera of Australian megafauna that went extinct in the Late Quaternary. Our models predicted higher fossil potentials for independent sites than for randomly selected locations (mean Kolmogorov-Smirnov statistic = 0.66). We demonstrate the utility of accounting for the distribution history of fossil taxa when trying to find the most suitable areas to look for fossils. For some genera, the probability of finding fossils based on simple climate-envelope models was higher than the probability based on models incorporating current conditions associated with fossil preservation and discovery as predictors. However, combining the outputs from climate-envelope, preservation, and discovery models resulted in the most accurate predictions of potential fossil sites at a continental scale. We proposed potential areas to discover new fossils of Diprotodon, Zygomaturus, Protemnodon, Thylacoleo, and Genyornis, and provide guidelines on how to apply our approach to assist fossil hunting in other continents and geological settings. PMID:27027874
Where to Dig for Fossils: Combining Climate-Envelope, Taphonomy and Discovery Models.
Block, Sebastián; Saltré, Frédérik; Rodríguez-Rey, Marta; Fordham, Damien A; Unkel, Ingmar; Bradshaw, Corey J A
2016-01-01
Fossils represent invaluable data to reconstruct the past history of life, yet fossil-rich sites are often rare and difficult to find. The traditional fossil-hunting approach focuses on small areas and has not yet taken advantage of modelling techniques commonly used in ecology to account for an organism's past distributions. We propose a new method to assist finding fossils at continental scales based on modelling the past distribution of species, the geological suitability of fossil preservation and the likelihood of fossil discovery in the field, and apply it to several genera of Australian megafauna that went extinct in the Late Quaternary. Our models predicted higher fossil potentials for independent sites than for randomly selected locations (mean Kolmogorov-Smirnov statistic = 0.66). We demonstrate the utility of accounting for the distribution history of fossil taxa when trying to find the most suitable areas to look for fossils. For some genera, the probability of finding fossils based on simple climate-envelope models was higher than the probability based on models incorporating current conditions associated with fossil preservation and discovery as predictors. However, combining the outputs from climate-envelope, preservation, and discovery models resulted in the most accurate predictions of potential fossil sites at a continental scale. We proposed potential areas to discover new fossils of Diprotodon, Zygomaturus, Protemnodon, Thylacoleo, and Genyornis, and provide guidelines on how to apply our approach to assist fossil hunting in other continents and geological settings.
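One way to read the combined approach is as the cell-wise combination of three probability surfaces: past occurrence, preservation and discovery. The sketch below uses a simple product as the combination rule, which is an assumption for illustration rather than the authors' exact formulation.

```python
import numpy as np

# Hypothetical per-cell probabilities on a small 2x3 grid.
p_climate   = np.array([[0.8, 0.2, 0.6], [0.4, 0.9, 0.1]])  # species occurred here
p_preserve  = np.array([[0.5, 0.7, 0.3], [0.6, 0.8, 0.2]])  # sediments could preserve fossils
p_discovery = np.array([[0.9, 0.4, 0.5], [0.3, 0.7, 0.6]])  # fossils could be found today

fossil_potential = p_climate * p_preserve * p_discovery
best = np.unravel_index(np.argmax(fossil_potential), fossil_potential.shape)
print(np.round(fossil_potential, 3), "most promising cell:", best)
```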
Infrared Spectroscopic Imaging for Prostate Pathology Practice
2010-03-01
classification algorithm uses morphological features – geometric properties of epithelial cells/nuclei and lumens – that are quantified based on H&E stained... images as well as FT-IR images of the samples. By restricting the features used to geometric measures, we sought to mimic the pattern r... be modeled as small elliptical areas in the stained images. This geometrical model is often confounded as multiple nuclei can be so close as to ap...
Predictive modeling of slope deposits and comparisons of two small areas in Northern Germany
NASA Astrophysics Data System (ADS)
Shary, Peter A.; Sharaya, Larisa S.; Mitusov, Andrew V.
2017-08-01
Methods for correct quantitative comparison of several terrains are important in the development and use of quantitative landscape evolution models, and they need to introduce specific modeling parameters. We introduce such parameters and compare two small terrains with respect to the link slope-valley for the description of slope deposits (colluvium) in them. We show that colluvium accumulation in small areas cannot be described by linear models and thus introduce non-linear models. Two small areas, Perdoel (0.29 ha) and Bornhöved (3.2 ha), are studied. Slope deposits in both are mainly in dry valleys, with a total thickness Mtotal up to 2.0 m in Perdoel and up to 1.2 m in Bornhöved. Parent materials are mainly Pleistocene sands aged 30 kyr BP. Exponential models of multiple regression that use a 1-m LiDAR DEM (digital elevation model) explained 70-93% of spatial variability in Mtotal. Parameters DH12 and DV12 of horizontal and vertical distances are introduced that permit characterization and comparison of the conditions of colluvium formation for various terrains. The study areas differ 3.7 times in the parameter DH12, which describes the horizontal distance from thalwegs at which Mtotal diminishes 2.72 times. DH12 is greater in Bornhöved (29.7 m) than in Perdoel (8.12 m). We relate this difference in DH12 to the distinction between types of the link slope-valley: a regional type, if the catchment area of a region outside a given small area plays an important role, and a local type, when accumulation of colluvium from valley banks within a small area is of more importance. We argue that the link slope-valley is regional in Perdoel and local in Bornhöved. Peaks of colluvium thickness were found on the thalwegs of three studied valleys both by direct measurements in a trench and from model surfaces of Mtotal. A hypothesis on the formation mechanism of such peaks is discussed. The parameter DV12 describes the vertical distance from a peak of colluvium thickness along the valley bottom at which Mtotal diminishes 2.72 times; values of this parameter differ 1.4 times for the study areas. DV12 is greater in Perdoel (3.0 m) than in Bornhöved (2.1 m), indicating sharper peaks of Mtotal in Bornhöved. The exponential models allow construction of predictive maps of buried Pleistocene surfaces for both terrains and calculation of colluvium volumes with an error of 4.2% for Perdoel and 7.1% for Bornhöved. Comparisons of buried and present surfaces showed that the latter are more smoothed, more strongly in valleys where flow branching is increased.
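A parameter such as DH12 is the e-folding distance of an exponential thickness model, i.e. the distance at which Mtotal falls by a factor of about 2.72. The sketch below fits a single-predictor exponential by log-linear least squares to made-up data; the paper itself uses multiple regression on a LiDAR DEM.

```python
import numpy as np

# Hypothetical thickness observations (m) at distances from the thalweg (m).
dist = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 40.0])
m_total = np.array([1.9, 1.1, 0.65, 0.25, 0.09, 0.04])

# Fit M = M0 * exp(-dist / DH12), i.e. log(M) = log(M0) - dist / DH12.
slope, intercept = np.polyfit(dist, np.log(m_total), 1)
dh12 = -1.0 / slope          # distance at which thickness falls by a factor e (about 2.72)
m0 = np.exp(intercept)
print(f"M0 = {m0:.2f} m, DH12 = {dh12:.1f} m")
```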
Tree Branching: Leonardo da Vinci's Rule versus Biomechanical Models
Minamino, Ryoko; Tateno, Masaki
2014-01-01
This study examined Leonardo da Vinci's rule (i.e., the sum of the cross-sectional area of all tree branches above a branching point at any height is equal to the cross-sectional area of the trunk or the branch immediately below the branching point) using simulations based on two biomechanical models: the uniform stress and elastic similarity models. Model calculations of the daughter/mother ratio (i.e., the ratio of the total cross-sectional area of the daughter branches to the cross-sectional area of the mother branch at the branching point) showed that both biomechanical models agreed with da Vinci's rule when the branching angles of daughter branches and the weights of lateral daughter branches were small; however, the models deviated from da Vinci's rule as the weights and/or the branching angles of lateral daughter branches increased. The calculated values of the two models were largely similar but differed in some ways. Field measurements of Fagus crenata and Abies homolepis also fit this trend, wherein models deviated from da Vinci's rule with increasing relative weights of lateral daughter branches. However, this deviation was small for a branching pattern in nature, where empirical measurements were taken under realistic measurement conditions; thus, da Vinci's rule did not critically contradict the biomechanical models in the case of real branching patterns, though the model calculations described the contradiction between da Vinci's rule and the biomechanical models. The field data for Fagus crenata fit the uniform stress model best, indicating that stress uniformity is the key constraint of branch morphology in Fagus crenata rather than elastic similarity or da Vinci's rule. On the other hand, mechanical constraints are not necessarily significant in the morphology of Abies homolepis branches, depending on the number of daughter branches. Rather, these branches were often in agreement with da Vinci's rule. PMID:24714065
Tree branching: Leonardo da Vinci's rule versus biomechanical models.
Minamino, Ryoko; Tateno, Masaki
2014-01-01
This study examined Leonardo da Vinci's rule (i.e., the sum of the cross-sectional area of all tree branches above a branching point at any height is equal to the cross-sectional area of the trunk or the branch immediately below the branching point) using simulations based on two biomechanical models: the uniform stress and elastic similarity models. Model calculations of the daughter/mother ratio (i.e., the ratio of the total cross-sectional area of the daughter branches to the cross-sectional area of the mother branch at the branching point) showed that both biomechanical models agreed with da Vinci's rule when the branching angles of daughter branches and the weights of lateral daughter branches were small; however, the models deviated from da Vinci's rule as the weights and/or the branching angles of lateral daughter branches increased. The calculated values of the two models were largely similar but differed in some ways. Field measurements of Fagus crenata and Abies homolepis also fit this trend, wherein models deviated from da Vinci's rule with increasing relative weights of lateral daughter branches. However, this deviation was small for a branching pattern in nature, where empirical measurements were taken under realistic measurement conditions; thus, da Vinci's rule did not critically contradict the biomechanical models in the case of real branching patterns, though the model calculations described the contradiction between da Vinci's rule and the biomechanical models. The field data for Fagus crenata fit the uniform stress model best, indicating that stress uniformity is the key constraint of branch morphology in Fagus crenata rather than elastic similarity or da Vinci's rule. On the other hand, mechanical constraints are not necessarily significant in the morphology of Abies homolepis branches, depending on the number of daughter branches. Rather, these branches were often in agreement with da Vinci's rule.
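The daughter/mother ratio examined in both abstracts is straightforward to compute from branch diameters; the numbers below are made up, and a ratio near 1 is what da Vinci's rule predicts.

```python
import numpy as np

def davinci_ratio(mother_diameter, daughter_diameters):
    """Ratio of total daughter cross-sectional area to mother cross-sectional area."""
    mother_area = np.pi * (mother_diameter / 2.0) ** 2
    daughter_area = np.sum(np.pi * (np.asarray(daughter_diameters) / 2.0) ** 2)
    return daughter_area / mother_area

# Hypothetical branching point: one 10 cm mother branch, two daughters of 7 cm each.
print(round(davinci_ratio(10.0, [7.0, 7.0]), 2))  # about 0.98, close to da Vinci's rule
```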
Growth and modelling of spherical crystalline morphologies of molecular materials
NASA Astrophysics Data System (ADS)
Shalev, O.; Biswas, S.; Yang, Y.; Eddir, T.; Lu, W.; Clarke, R.; Shtein, M.
2014-10-01
Crystalline, yet smooth, sphere-like morphologies of small molecular compounds are desirable in a wide range of applications but are very challenging to obtain using common growth techniques, where either amorphous films or faceted crystallites are the norm. Here we show solvent-free, guard flow-assisted organic vapour jet printing of non-faceted, crystalline microspheroids of archetypal small molecular materials used in organic electronic applications. We demonstrate how process parameters control the size distribution of the spheroids and propose an analytical model and a phase diagram predicting the surface morphology evolution of different molecules based on processing conditions, coupled with the thermophysical and mechanical properties of the molecules. This experimental approach opens a path for exciting applications of small molecular organic compounds in optical coatings, textured surfaces with controlled wettability, pharmaceutical and food substance printing and others, where thick organic films and particles with high surface area are needed.
Murrihy, Rachael C; Byrne, Mitchell K; Gonsalvez, Craig J
2009-02-01
Internationally, family doctors seeking to enhance their skills in evidence-based mental health treatment are attending brief training workshops, despite clear evidence in the literature that short-term, massed formats are not likely to improve skills in this complex area. Reviews of the educational literature suggest that an optimal model of training would incorporate distributed practice techniques; repeated practice over a lengthy time period, small-group interactive learning, mentoring relationships, skills-based training and an ongoing discussion of actual patients. This study investigates the potential role of group-based training incorporating multiple aspects of good pedagogy for training doctors in basic competencies in brief cognitive behaviour therapy (BCBT). Six groups of family doctors (n = 32) completed eight 2-hour sessions of BCBT group training over a 6-month period. A baseline control design was utilised with pre- and post-training measures of doctors' BCBT skills, knowledge and engagement in BCBT treatment. Family doctors' knowledge, skills in and actual use of BCBT with patients improved significantly over the course of training compared with the control period. This research demonstrates preliminary support for the efficacy of an empirically derived group training model for family doctors. Brief CBT group-based training could prove to be an effective and viable model for future doctor training.
Noise characteristics of upper surface blown configurations. Experimental program and results
NASA Technical Reports Server (NTRS)
Brown, W. H.; Searle, N.; Blakney, D. F.; Pennock, A. P.; Gibson, J. S.
1977-01-01
An experimental data base was developed from the model upper surface blowing (USB) propulsive lift system hardware. While the emphasis was on far field noise data, a considerable amount of relevant flow field data were also obtained. The data were derived from experiments in four different facilities resulting in: (1) small scale static flow field data; (2) small scale static noise data; (3) small scale simulated forward speed noise and load data; and (4) limited larger-scale static noise flow field and load data. All of the small scale tests used the same USB flap parts. Operational and geometrical variables covered in the test program included jet velocity, nozzle shape, nozzle area, nozzle impingement angle, nozzle vertical and horizontal location, flap length, flap deflection angle, and flap radius of curvature.
NASA Astrophysics Data System (ADS)
Méndez-Barroso, Luis A.; Zárate-Valdez, Jose L.; Robles-Morúa, Agustín
2018-07-01
Structure from Motion (SfM) represents a good low-cost alternative for generating high resolution topography where LiDAR (Light Detection and Ranging) data are scarce or unaffordable. In this work, we demonstrate the advantages of high resolution digital elevation models (DEMs) obtained using the SfM technique to delineate catchment boundaries and the stream network. The SfM-based DEM was compared with LiDAR data, distributed by the Mexican Government, and a previous high resolution topographic map generated by an RTK-GPS system. Aerial images were collected at a forested ecohydrological monitoring site in northwest Mexico using a commercial grade digital camera attached to a tethered helium balloon. Here we applied the SfM method with removal of the vegetation, similarly to the more advanced LiDAR methods. This was achieved by adjusting the point cloud classification parameters (maximum angle, maximum distance and cell size), which, to our knowledge, has not been reported in the available SfM literature. The SfM terrain model showed minimal differences in ground elevation in the center of the image domain (0-0.5 m), while errors increased at the edges of the domain. The SfM model generated the largest catchment area, main channel length and total channel length (1.07 ha, 106.1 and 223 m, respectively), while the LiDAR model yielded the smallest area and main channel length (0.77 ha and 92.9 m, respectively). On the other hand, the SfM model had the most accurate representation of the river network among all models evaluated due to its closest proximity to the observed GPS-tracked main channel. We concluded that the integration of low cost unmanned aerial vehicles and the SfM method is a good alternative for estimating hydro-morphological attributes in small catchments. Furthermore, we found that high resolution SfM-based terrain models had a fairly good representation of small catchments, which is useful in regions with limited data availability. The main findings of this research provide scientific value within the field of hydrological remote sensing, in particular in the acquisition of high resolution topography in remote areas without access to more expensive LiDAR or survey techniques. High resolution DEMs allow for a better characterization of catchment area size and stream network delineation, which influence hydrological processes (i.e. soil moisture redistribution, runoff, ET).
Using an agent-based model to simulate children’s active travel to school
2013-01-01
Background Despite the multiple advantages of active travel to school, only a small percentage of US children and adolescents walk or bicycle to school. Intervention studies are in a relatively early stage and evidence of their effectiveness over long periods is limited. The purpose of this study was to illustrate the utility of agent-based models in exploring how various policies may influence children’s active travel to school. Methods An agent-based model was developed to simulate children’s school travel behavior within a hypothetical city. The model was used to explore the plausible implications of policies targeting two established barriers to active school travel: long distance to school and traffic safety. The percent of children who walk to school was compared for various scenarios. Results To maximize the percent of children who walk to school the school locations should be evenly distributed over space and children should be assigned to the closest school. In the case of interventions to improve traffic safety, targeting a smaller area around the school with greater intensity may be more effective than targeting a larger area with less intensity. Conclusions Despite the challenges they present, agent based models are a useful complement to other analytical strategies in studying the plausible impact of various policies on active travel to school. PMID:23705953
Using an agent-based model to simulate children's active travel to school.
Yang, Yong; Diez-Roux, Ana V
2013-05-26
Despite the multiple advantages of active travel to school, only a small percentage of US children and adolescents walk or bicycle to school. Intervention studies are in a relatively early stage and evidence of their effectiveness over long periods is limited. The purpose of this study was to illustrate the utility of agent-based models in exploring how various policies may influence children's active travel to school. An agent-based model was developed to simulate children's school travel behavior within a hypothetical city. The model was used to explore the plausible implications of policies targeting two established barriers to active school travel: long distance to school and traffic safety. The percent of children who walk to school was compared for various scenarios. To maximize the percent of children who walk to school the school locations should be evenly distributed over space and children should be assigned to the closest school. In the case of interventions to improve traffic safety, targeting a smaller area around the school with greater intensity may be more effective than targeting a larger area with less intensity. Despite the challenges they present, agent based models are a useful complement to other analytical strategies in studying the plausible impact of various policies on active travel to school.
Autonomous vertical autorotation for unmanned helicopters
NASA Astrophysics Data System (ADS)
Dalamagkidis, Konstantinos
Small Unmanned Aircraft Systems (UAS) are considered the stepping stone for the integration of civil unmanned vehicles in the National Airspace System (NAS) because of their low cost and risk. Such systems are aimed at a variety of applications including search and rescue, surveillance, communications, traffic monitoring and inspection of buildings, power lines and bridges. Amidst these systems, small helicopters play an important role because of their capability to hold a position, to maneuver in tight spaces and to take off and land from virtually anywhere. Nevertheless civil adoption of such systems is minimal, mostly because of regulatory problems that in turn are due to safety concerns. This dissertation examines the risk to safety imposed by UAS in general and small helicopters in particular, focusing on accidents resulting in a ground impact. To improve the performance of small helicopters in this area, the use of autonomous autorotation is proposed. This research goes beyond previous work in the area of autonomous autorotation by developing an on-line, model-based, real-time controller that is capable of handling constraints and different cost functions. The approach selected is based on a non-linear model-predictive controller, that is augmented by a neural network to improve the speed of the non-linear optimization. The immediate benefit of this controller is that a class of failures that would otherwise result in an uncontrolled crash and possible injuries or fatalities can now be accommodated. Furthermore besides simply landing the helicopter, the controller is also capable of minimizing the risk of serious injury to people in the area. This is accomplished by minimizing the kinetic energy during the last phase of the descent. The presented research is designed to benefit the entire UAS community as well as the public, by allowing for safer UAS operations, which in turn also allow faster and less expensive integration of UAS in the NAS.
Burned area detection based on Landsat time series in savannas of southern Burkina Faso
NASA Astrophysics Data System (ADS)
Liu, Jinxiu; Heiskanen, Janne; Maeda, Eduardo Eiji; Pellikka, Petri K. E.
2018-02-01
West African savannas are subject to regular fires, which have impacts on vegetation structure, biodiversity and carbon balance. An efficient and accurate mapping of burned area associated with seasonal fires can greatly benefit decision making in land management. Since coarse resolution burned area products cannot meet the accuracy needed for fire management and climate modelling at local scales, the medium resolution Landsat data is a promising alternative for local scale studies. In this study, we developed an algorithm for continuous monitoring of annual burned areas using Landsat time series. The algorithm is based on burned pixel detection using harmonic model fitting with Landsat time series and breakpoint identification in the time series data. This approach was tested in a savanna area in southern Burkina Faso using 281 images acquired between October 2000 and April 2016. An overall accuracy of 79.2% was obtained with balanced omission and commission errors. This represents a significant improvement in comparison with MODIS burned area product (67.6%), which had more omission errors than commission errors, indicating underestimation of the total burned area. By observing the spatial distribution of burned areas, we found that the Landsat based method misclassified cropland and cloud shadows as burned areas due to the similar spectral response, and MODIS burned area product omitted small and fragmented burned areas. The proposed algorithm is flexible and robust against decreased data availability caused by clouds and Landsat 7 missing lines, therefore having a high potential for being applied in other landscapes in future studies.
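The burned-pixel logic, fit a harmonic (seasonal) model to a stable history and flag observations that break from it, can be sketched on a synthetic single-pixel series as below; the real algorithm operates on Landsat reflectance time series with a more careful breakpoint test.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 4 * 365, 16) / 365.25                 # ~4 years of 16-day observations
signal = 0.3 + 0.1 * np.sin(2 * np.pi * t) + rng.normal(0, 0.01, t.size)
signal[t > 2.6] -= 0.15                                # synthetic burn: abrupt drop after year 2.6

# Fit a first-order harmonic (annual) model to a stable two-year history.
train = t < 2.0
X = np.column_stack([np.ones_like(t), np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
beta, *_ = np.linalg.lstsq(X[train], signal[train], rcond=None)
resid = signal - X @ beta

# Flag a candidate breakpoint where the signal drops well below the model prediction.
sigma = resid[train].std()
flagged = t[resid < -4 * sigma]
print("first flagged observation (years since start):", flagged[:1] if flagged.size else "none")
```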
NASA Astrophysics Data System (ADS)
Singh, V. K.; Jha, A. K.; Gupta, K.; Srivastav, S. K.
2017-12-01
Recent studies indicate that there is a significant improvement in urban land use dynamics through modeling at finer spatial resolutions. Geo-computational models such as cellular automata and agent based models have given clear evidence regarding the quantification of urban growth patterns within the urban boundary. In recent studies, socio-economic factors such as demography, education rate, household density, current-year parcel price, and distance to roads, schools, hospitals, commercial centers and police stations are considered to be the major factors influencing the Land Use Land Cover (LULC) pattern of a city. These factors have a unidirectional approach to the land use pattern, which makes it difficult to analyze the spatial aspects of model results both quantitatively and qualitatively. In this study, a cellular automata model is combined with a generic model known as an Agent Based Model to evaluate the impact of socio-economic factors on the land use pattern. For this purpose, Dehradun, an Indian city, is selected as a case study. Socio-economic factors were collected from field survey, the Census of India, and the Directorate of Economic Census, Uttarakhand, India. A 3x3 simulation window is used to consider the impact on LULC. The cellular automata model results are examined for the identification of hot spot areas within the urban area, and the agent based model uses a logistic regression approach to identify the correlation between each factor and LULC and to classify the available area into low density, medium density, high density residential or commercial area. In the modeling phase, transition rules, neighborhood effects and cell change factors are used to improve the representation of built-up classes. Significant improvement is observed in the built-up classes, from 84% to 89%. After incorporating the agent based model with the cellular automata model, the accuracy improved from 89% to 94% in the three urban classes, i.e. low density, medium density and commercial. A sensitivity study of the model indicated that the southern and south-western parts of the city show improvement, and small patches of growth are also observed in the north-western part of the city. The study highlights the growing importance of socio-economic factors and geo-computational modeling approaches in the changing LULC of newly growing cities of modern India.
Topography changes monitoring of small islands using camera drone
NASA Astrophysics Data System (ADS)
Bang, E.
2017-12-01
Drone aerial photogrammetry was conducted to monitor topography changes of small islands in the East Sea of Korea. Severe weather and sea waves are eroding the islands and sometimes cause landslides and rockfalls. Due to rugged cliffs in all directions and poor accessibility, ground based survey methods are less efficient for monitoring topography changes over the whole area. Camera drones can provide digital images and video of every corner of the islands, and drone aerial photogrammetry is a powerful way to obtain a precise digital surface model (DSM) for a limited area. We have acquired a set of digital images to construct a textured 3D model of the project area every year since 2014. Flight height is less than 100 m above the tops of the islands to obtain a sufficient ground sampling distance (GSD). Most images were captured vertically in automatic flights, but we also flew drones around the islands with a camera angle of about 30°-45° to better construct the 3D model. Every digital image is geo-referenced, but we also set several ground control points (GCPs) on the islands and measured their coordinates with RTK surveying methods to increase the absolute accuracy of the project. We constructed a 3D textured model using a photogrammetry tool, which generates 3D spatial information from digital images. From the polygonal model, we could derive a DSM with contour lines. Thematic maps such as a hillshade relief map, aspect map and slope map were also produced. These maps help us better understand the topographic conditions of the project area. The purpose of this project is monitoring topography changes of these small islands. An elevation difference map between the DSMs of each year is constructed. There are two regions showing large negative difference values. By comparing the constructed textured models and the captured digital images around these regions, it was confirmed that one region has experienced real topography change, due to a huge rockfall near the center of the east island. The size of the fallen rock can be measured exactly on the digital model, which is about 13 m × 6 m × 2 m (height × width × thickness). We believe that drone aerial photogrammetry can be an efficient topography change detection method for a complicated terrain area.
Wind-Tunnel Investigations of Blunt-Body Drag Reduction Using Forebody Surface Roughness
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Sprague, Stephanie; Naughton, Jonathan W.; Curry, Robert E. (Technical Monitor)
2001-01-01
This paper presents results of wind-tunnel tests that demonstrate a novel drag reduction technique for blunt-based vehicles. For these tests, the forebody roughness of a blunt-based model was modified using micromachined surface overlays. As forebody roughness increases, the boundary layer at the model aft thickens and reduces the shearing effect of the external flow on the separated flow behind the base region, resulting in reduced base drag. For vehicle configurations with large base drag, existing data predict that a small increment in forebody friction drag will result in a relatively large decrease in base drag. If the added increment in forebody skin drag is optimized with respect to base drag, reducing the total drag of the configuration is possible. The wind-tunnel test results conclusively demonstrate the existence of a forebody drag/base drag optimal point. The data demonstrate that the base drag coefficient corresponding to the drag minimum lies between 0.225 and 0.275, referenced to the base area. Most importantly, the data show a drag reduction of approximately 15% when the drag optimum is reached. When this drag reduction is scaled to the X-33 base area, drag savings approaching 45,000 N (10,000 lbf) can be realized.
Digital Biomass Accumulation Using High-Throughput Plant Phenotype Data Analysis.
Rahaman, Md Matiur; Ahsan, Md Asif; Gillani, Zeeshan; Chen, Ming
2017-09-01
Biomass is an important phenotypic trait in functional ecology and growth analysis. The typical methods for measuring biomass are destructive, and they require numerous individuals to be cultivated for repeated measurements. With the advent of image-based high-throughput plant phenotyping facilities, non-destructive biomass measuring methods have attempted to overcome this problem. Thus, the estimation of plant biomass of individual plants from their digital images is becoming more important. In this paper, we propose an approach to biomass estimation based on image derived phenotypic traits. Several image-based biomass studies state that the estimation of plant biomass is only a linear function of the projected plant area in images. However, we modeled the plant volume as a function of plant area, plant compactness, and plant age to generalize the linear biomass model. The obtained results confirm the proposed model and can explain most of the observed variance during image-derived biomass estimation. Moreover, a small difference was observed between actual and estimated digital biomass, which indicates that our proposed approach can be used to estimate digital biomass accurately.
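The generalized model amounts to regressing digital volume on projected area, compactness and age. The sketch below does this with ordinary least squares on synthetic data; the functional form and variable ranges are assumptions, not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
area = rng.uniform(5, 50, n)            # projected plant area from images (cm^2)
compactness = rng.uniform(0.3, 0.9, n)  # e.g. plant area / convex-hull area
age = rng.uniform(5, 40, n)             # days after sowing
volume = 0.8 * area + 10.0 * compactness + 0.3 * age + rng.normal(0, 1.0, n)

# Digital volume (biomass proxy) as a linear function of area, compactness and age.
X = np.column_stack([np.ones(n), area, compactness, age])
beta, *_ = np.linalg.lstsq(X, volume, rcond=None)
r2 = 1 - np.sum((volume - X @ beta) ** 2) / np.sum((volume - volume.mean()) ** 2)
print("coefficients:", np.round(beta, 2), "R^2:", round(r2, 3))
```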
NASA Astrophysics Data System (ADS)
Johansson, Emma; Lindborg, Tobias
2017-04-01
The Arctic region is sensitive to global warming, and permafrost thaw and the release of old carbon are examples of processes that may have a positive feedback effect on the global climate system. Quantification and assumptions about future change are often based on model predictions. Such models require cross-disciplinary data of high quality that are often lacking. Biogeochemical processes in the landscape are highly influenced by hydrology, which in turn is intimately related to permafrost processes. Thus, a multidisciplinary approach is needed when collecting data and setting up field experiments aimed at increasing the understanding of these processes. Here we summarize and present data collected in GRASP, the Greenland Analogue Surface Project. GRASP is a catchment-scale field study of the periglacial area in the Kangerlussuaq region, West Greenland, focusing on hydrological and biogeochemical processes in the landscape. The site investigations were initiated in 2010 and have since resulted in three separate data sets published in ESSD (Earth System Science Data), each one focusing on i) meteorological data and hydrology, ii) biogeochemistry and iii) geometries of sediments and the active layer. The three data sets, which are freely available via the PANGAEA database, enable conceptual and coupled numerical modeling of hydrological and biogeochemical processes. An important strength of the GRASP data is that all data are collected within the same, relatively small, catchment area. This implies that measurements are more easily linked to the right source area or process. Despite the small catchment area, it includes the major units of the periglacial hydrological system: a lake, a talik, a supra- and a subpermafrost aquifer, and, consequently, biogeochemical processes in each of these units may be studied. The new GRASP data are used both to increase knowledge of present-day periglacial hydrology and biogeochemistry and to predict the consequences of future climate change in these areas.
Sand deposition in the Colorado River in the Grand Canyon from flooding of the Little Colorado River
Wiele, S.M.; Graf, J.B.; Smith, J.D.
1996-01-01
Methods for computing the volume of sand deposited in the Colorado River in Grand Canyon National Park by floods in major tributaries and for determining redistribution of that sand by main-channel flows are required for successful management of sand-dependent riparian resources. We have derived flow, sediment transport, and bed evolution models based on a gridded topography developed from measured channel topography and used these models to compute deposition in a short reach of the river just downstream from the Little Colorado River, the largest tributary in the park. Model computations of deposition from a Little Colorado River flood in January 1993 were compared to bed changes measured at 15 cross sections. The total difference between changes in cross-sectional area due to deposition computed by the model and the measured changes was 6%. A wide reach with large areas of recirculating flow and large depressions in the main channel accumulated the most sand, whereas a reach with similar planimetric area but a long, narrow shape and relatively small areas of recirculating flow and small depressions in the main channel accumulated only about a seventh as much sand. About 32% of the total deposition was in recirculation zones, 65% was in the main channel, and 3% was deposited along the channel margin away from the recirculation zone. Overall, about 15% of the total input of sand from this Little Colorado River flood was deposited in the first 3 km below the confluence, suggesting that deposition of the flood-derived material extended for only several tens of kilometers downstream from the confluence.
Flat-Sky Pseudo-Cls Analysis for Weak Gravitational Lensing
NASA Astrophysics Data System (ADS)
Asgari, Marika; Taylor, Andy; Joachimi, Benjamin; Kitching, Thomas D.
2018-05-01
We investigate the use of estimators of weak lensing power spectra based on a flat-sky implementation of the 'Pseudo-Cl' (PCl) technique, where the masked shear field is transformed without regard for masked regions of sky. This masking mixes power, and 'E'-convergence and 'B'-modes. To study the accuracy of forward-modelling and full-sky power spectrum recovery we consider both large-area survey geometries, and small-scale masking due to stars and a checkerboard model for field-of-view gaps. The power spectrum for the large-area survey geometry is sparsely-sampled and highly oscillatory, which makes modelling problematic. Instead, we derive an overall calibration for large-area mask bias using simulated fields. The effects of small-area star masks can be accurately corrected for, while the checkerboard mask has oscillatory and spiky behaviour which leads to percent-level biases. Apodisation of the masked fields leads to increased biases and a loss of information. We find that we can construct an unbiased forward-model of the raw PCls, and recover the full-sky convergence power to within a few percent accuracy for both Gaussian and lognormal-distributed shear fields. Propagating this through to cosmological parameters using a Fisher-matrix formalism, we find we can make unbiased estimates of parameters for surveys up to 1,200 deg² with 30 galaxies per arcmin², beyond which the percent-level biases become larger than the statistical accuracy. This implies a flat-sky PCl analysis is accurate for current surveys but a Euclid-like survey will require higher accuracy.
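A bare-bones flat-sky pseudo-power-spectrum estimator simply transforms the masked field and bins the squared modes in radial wavenumber. The toy field, mask and fsky correction below only illustrate that step; a full PCl analysis would forward-model the mask-induced mode mixing as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 128
field = rng.normal(0, 1, (n, n))            # toy Gaussian "convergence" map
mask = np.ones((n, n))
mask[40:60, 40:60] = 0.0                    # a masked square (e.g. a bright-star region)

fsky = mask.mean()                          # fraction of unmasked pixels
f_k = np.fft.fft2(field * mask)             # transform the masked field directly (pseudo-Cl)
power2d = np.abs(f_k) ** 2 / (n * n)

# Bin the squared modes in annuli of radial wavenumber.
kx = np.fft.fftfreq(n)
k = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
bins = np.linspace(0, 0.5, 11)
which = np.digitize(k.ravel(), bins)
pcl = np.array([power2d.ravel()[which == i].mean() for i in range(1, bins.size)])

print(np.round(pcl / fsky, 3))              # crude fsky correction of the pseudo spectrum
```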
Shi, Lei; Zhang, Jianjun; Shi, Yi; Ding, Xu; Wei, Zhenchun
2015-01-01
We consider the base station placement problem for wireless sensor networks with successive interference cancellation (SIC) to improve throughput. We build a mathematical model for SIC. Although this model cannot be solved directly, it enables us to identify a necessary condition for SIC on distances from sensor nodes to the base station. Based on this relationship, we propose to divide the feasible region of the base station into small pieces and choose a point within each piece for base station placement. The point with the largest throughput is identified as the solution. The complexity of this algorithm is polynomial. Simulation results show that this algorithm can achieve about 25% improvement compared with the case that the base station is placed at the center of the network coverage area when using SIC. PMID:25594600
A longitudinal analysis of alcohol outlet density and domestic violence.
Livingston, Michael
2011-05-01
A small number of studies have identified a positive relationship between alcohol outlet density and domestic violence. These studies have all been based on cross-sectional data and have been limited to the assessment of ecological correlations between outlet density and domestic violence rates. This study provides the first longitudinal examination of this relationship. Cross-sectional time-series using aggregated data from small areas. The relationships between alcohol outlet density and domestic violence were assessed over time using a fixed-effects model. Controls for the spatial autocorrelation of the data were included in the model. The study uses data for 186 postcodes from within the metropolitan area of Melbourne, Australia for the years 1996 to 2005. Alcohol outlet density measures for three different types of outlets (hotel/pub, packaged liquor, on-premise) were derived from liquor licensing records and domestic violence rates were calculated from police-recorded crime data, based on the victim's postcode. Alcohol outlet density was associated significantly with rates of domestic violence, over time. All three licence categories were positively associated with domestic violence rates, with small effects for general (pub) and on-premise licences and a large effect for packaged liquor licences. In Melbourne, the density of liquor licences is positively associated with rates of domestic violence over time. The effects were particularly large for packaged liquor outlets, suggesting a need for licensing policies that pay more attention to off-premise alcohol availability. © 2011 The Authors, Addiction © 2011 Society for the Study of Addiction.
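The fixed-effects estimate used in such longitudinal small-area studies can be obtained with the 'within' transformation, demeaning each area's series before regression. The sketch below uses synthetic postcode-by-year data and a single predictor; it omits the year effects and spatial-autocorrelation controls of the actual model.

```python
import numpy as np

rng = np.random.default_rng(6)
n_areas, n_years = 30, 10
area_id = np.repeat(np.arange(n_areas), n_years)
outlet_density = rng.gamma(2.0, 1.0, n_areas * n_years)
area_effect = rng.normal(0, 2.0, n_areas)[area_id]
dv_rate = 5.0 + 0.8 * outlet_density + area_effect + rng.normal(0, 1.0, area_id.size)

def demean_by(values, groups):
    """Subtract each group's mean (the 'within' transformation of a fixed-effects model)."""
    sums = np.zeros(groups.max() + 1)
    np.add.at(sums, groups, values)
    counts = np.bincount(groups)
    return values - (sums / counts)[groups]

y_w = demean_by(dv_rate, area_id)
x_w = demean_by(outlet_density, area_id)
slope = np.sum(x_w * y_w) / np.sum(x_w ** 2)     # within (fixed-effects) estimate
print("estimated effect of outlet density:", round(slope, 2))
```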
NASA Astrophysics Data System (ADS)
Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho
2017-03-01
So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are still limited to the district level. Sample sizes for smaller area levels are insufficient, so direct estimation of poverty indicators produces high standard errors, and analysis based on it is unreliable. To solve this problem, an estimation method that can provide better accuracy by combining survey data with other auxiliary data is required. One method often used for this purpose is Small Area Estimation (SAE). There are many methods used in SAE, one of them being Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method with maximum likelihood (ML) procedures does not consider the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the MSE (Mean Square Error) in order to compare the accuracy of the EBLUP method with that of the direct estimation method. Results show that the EBLUP method reduced the MSE in small area estimation.
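The structure of an area-level EBLUP can be sketched as a shrinkage between each area's direct estimate and a regression-synthetic estimate. The example below uses a crude method-of-moments variance estimate in place of the REML step described in the abstract, and the data are hypothetical.

```python
import numpy as np

# Hypothetical area-level data: direct estimates of mean per-capita expenditure,
# their sampling variances, and one auxiliary covariate per small area.
y_direct = np.array([102.0, 87.0, 95.0, 120.0, 78.0, 110.0])
d_var = np.array([25.0, 40.0, 30.0, 20.0, 50.0, 35.0])     # known sampling variances
x = np.array([1.0, 0.7, 0.8, 1.3, 0.5, 1.1])               # auxiliary variable

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y_direct, rcond=None)
synthetic = X @ beta

# Crude moment estimate of the area-effect variance (REML would refine this).
resid = y_direct - synthetic
m, p = y_direct.size, X.shape[1]
sigma_v2 = max(0.0, (np.sum(resid ** 2) - np.sum(d_var)) / (m - p))

gamma = sigma_v2 / (sigma_v2 + d_var)                       # shrinkage weights
eblup = gamma * y_direct + (1.0 - gamma) * synthetic
print(np.round(eblup, 1))
```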
2015-01-01
economically and socially integrated regions surrounding dense urban areas and have at least 1 million inhabitants, such as Cleveland, Chicago, and... group. Examples: New York, Chicago. Small metropolitan area: meets criteria for a large or small urban area and small CBSA. This group includes... Lynch, Donald S. Shepard, and Helen M. Pettinati, "The Effectiveness of Telephone-Based Continuing Care for Alcohol and Cocaine Dependence: 24-Month...
NASA Astrophysics Data System (ADS)
Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.
2017-10-01
Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs with respect to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately in order to achieve a more accurate distribution of the inputs' influence and a more reliable interpretation of the mathematical model results.
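First-order Sobol indices can be estimated with a pick-and-freeze (Saltelli-type) scheme, sketched below for a toy three-parameter function; plain pseudo-random sampling stands in for the Sobol sequences studied in the abstract.

```python
import numpy as np

def model(x):
    """Toy model standing in for the air-pollution model output."""
    return np.sin(x[:, 0]) + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 2]

rng = np.random.default_rng(5)
n, d = 20000, 3
A = rng.uniform(0, 1, (n, d))
B = rng.uniform(0, 1, (n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

s1 = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                            # freeze all inputs except the i-th
    yABi = model(ABi)
    s1.append(np.mean(yB * (yABi - yA)) / var_y)   # Saltelli-type first-order estimator

print("first-order Sobol indices:", np.round(s1, 3))
```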
Kura, Nura Umar; Ramli, Mohammad Firuz; Ibrahim, Shaharin; Sulaiman, Wan Nor Azmin; Aris, Ahmad Zaharin; Tanko, Adamu Idris; Zaudi, Muhammad Amar
2015-01-01
In this work, the DRASTIC and GALDIT models were employed to determine the groundwater vulnerability to contamination from anthropogenic activities and seawater intrusion on Kapas Island. In addition, the work also utilized sensitivity analysis to evaluate the influence of each individual parameter used in developing the final models. Based on these effects and the variation indices of the said parameters, new effective weights were determined and used to create modified DRASTIC and GALDIT models. The final DRASTIC model classified the island into five vulnerability classes: no risk (110-140), low (140-160), moderate (160-180), high (180-200), and very high (>200), covering 4, 26, 59, 4, and 7 % of the island, respectively. Likewise, for seawater intrusion, the modified GALDIT model delineates the island into four vulnerability classes: very low (<90), low (90-110), moderate (110-130), and high (>130), covering 39, 33, 18, and 9 % of the island, respectively. Both models show that the areas that are likely to be affected by anthropogenic pollution and seawater intrusion are within the alluvial deposit in the western part of the island. Pearson correlation was used to verify the reliability of the two models in predicting their respective contaminants. The correlation matrix showed a good relationship between the DRASTIC model and nitrate (r = 0.58). Similarly, the correlation also reveals a very strong negative relationship between the GALDIT model and the seawater contaminant indicator (resistivity, Ωm) values (r = -0.86), suggesting that the model predicts more than 86 % of seawater intrusion. In order to facilitate a management strategy, suitable areas for artificial recharge were identified through modeling. The result suggested some areas within the alluvial deposit in the western part of the island as suitable for artificial recharge. This work can serve as a guide for a full vulnerability assessment of anthropogenic pollution and seawater intrusion on small islands and will help policy makers and managers with the understanding needed to ensure the sustainability of the island's aquifer.
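The single-parameter sensitivity analysis behind the 'effective weights' compares each parameter's contribution (rating times weight) with the total vulnerability index. The ratings and weights below are placeholders, not the DRASTIC or GALDIT tables.

```python
import numpy as np

# Hypothetical ratings (1-10) for each parameter at three locations, and weights.
ratings = np.array([
    [7, 5, 9],   # e.g. depth to water
    [4, 6, 8],   # e.g. net recharge
    [6, 6, 5],   # e.g. aquifer media
])
weights = np.array([5, 4, 3])

# Vulnerability index at each location: sum of rating * weight over the parameters.
index = weights @ ratings
print("index:", index)

# Effective weight (%) of each parameter at each location:
# its contribution (rating * weight) divided by the total index.
effective = ratings * weights[:, None] / index * 100.0
print("effective weights (%):\n", np.round(effective, 1))
```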
Identification of phosphorus emission hotspots in agricultural catchments
Kovacs, Adam; Honti, Mark; Zessner, Matthias; Eder, Alexander; Clement, Adrienne; Blöschl, Günter
2012-01-01
An enhanced transport-based management approach is presented, which is able to support cost-effective water quality management with respect to diffuse phosphorus pollution. Suspended solids and particulate phosphorus emissions and their transport were modeled in two hilly agricultural watersheds (Wulka River in Austria and Zala River in Hungary) with an improved version of the catchment-scale PhosFate model. Source and transmission areas were ranked by an optimization method in order to provide a priority list of the areas of economically efficient (optimal) management alternatives. The model was calibrated and validated at different gauges and for various years. The spatial distribution of the emissions shows that approximately one third of the catchment area is responsible for the majority of the emissions. However, only a few percent of the source areas can transport fluxes to the catchment outlet. These effective source areas, together with the main transmission areas are potential candidates for improved management practices. In accordance with the critical area concept, it was shown that intervention with better management practices on a properly selected small proportion of the total area (1–3%) is sufficient to reach a remarkable improvement in water quality. If soil nutrient management is also considered in addition to water quality, intervention on 4–12% of the catchment areas can fulfill both aspects. PMID:22771465
NASA Astrophysics Data System (ADS)
Ye, L.; Wu, J.; Wang, L.; Song, T.; Ji, R.
2017-12-01
Flooding in small-scale watersheds in hilly areas is characterized by short durations and rapid rise and recession due to complex underlying surfaces, varied climate types and strong human influence. It is almost impossible for a single hydrological model to describe the variation of flooding in both time and space accurately for all catchments in hilly areas, because hydrological characteristics can vary significantly among catchments. In this study, we compare the performance of 5 hydrological models with varying degrees of complexity for flash flood simulation in 14 small-scale watersheds in China in order to find the relationship between the applicability of the hydrological models and the catchment characteristics. Meanwhile, given that hydrological data are sparse in hilly areas, the effects of precipitation data, DEM resolution and their interaction on the uncertainty of flood simulation are also illustrated. In general, the results showed that the distributed hydrological model (HEC-HMS in this study) performed better than the lumped hydrological models. The Xinanjiang and API models simulated the humid catchments well when long-term, continuous rainfall data were provided. The Dahuofang model simulated the flood peak well, while its runoff generation module performed relatively poorly. In addition, the effects of the different modelling data on the simulations are not simply additive; there is a complex interaction among the data sources. Overall, both the catchment hydrological characteristics and the available modelling data should be taken into consideration in order to choose a suitable hydrological model for flood simulation in small-scale catchments in hilly areas.
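Intercomparisons of this kind are usually summarised with an efficiency score per model and catchment; the abstract does not state which criterion was used, but the Nash-Sutcliffe efficiency is a common choice. A minimal sketch, assuming observed and simulated discharge series are available as arrays:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the observed mean."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical hourly discharge (m^3/s) for one flash-flood event
obs = [2.0, 5.0, 18.0, 42.0, 30.0, 15.0, 7.0, 3.0]
simulations = {
    "HEC-HMS":    [2.1, 4.5, 16.0, 40.0, 32.0, 16.0, 8.0, 3.5],
    "Xinanjiang": [1.8, 6.0, 20.0, 35.0, 28.0, 14.0, 6.5, 3.0],
}
for name, sim in simulations.items():
    print(f"{name}: NSE = {nash_sutcliffe(obs, sim):.3f}")
```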
NASA Astrophysics Data System (ADS)
Feng, Jiandi; Jiang, Weiping; Wang, Zhengtao; Zhao, Zhenzhen; Nie, Linjuan
2017-08-01
Global empirical total electron content (TEC) models based on TEC maps effectively describe the average behavior of the ionosphere. However, the accuracy of these global models for a certain region may not be ideal. Due to the number and distribution of the International GNSS Service (IGS) stations, the accuracy of TEC maps is geographically different. The modeling database derived from the global TEC maps with different accuracy is likely one of the main reasons that limits the accuracy of the new models. Moreover, many anomalies in the ionosphere are geographically or geomagnetically dependent, and as such the accuracy of global models can deteriorate if these anomalies are not fully incorporated into the modeling approach. For regional models built in small areas, these influences on modeling are greatly weakened. Thus, regional TEC models may better reflect the temporal and spatial variations of TEC. In our previous work (Feng et al., 2016), a regional TEC model, TECM-NEC, was proposed for northeast China. However, that model is directed only at the typical region of Mid-latitude Summer Nighttime Anomaly (MSNA) occurrence and is not applicable to regions without MSNA. Following the technique of the TECM-NEC model, this study proposes another regional empirical TEC model for other mid-latitude regions. Taking the small Beijing-Tianjin-Tangshan (JJT) region (37.5°-42.5° N, 115°-120° E) in China as an example, a regional empirical TEC model (TECM-JJT) is proposed using TEC grid data from January 1, 1999 to June 30, 2015 provided by the Center for Orbit Determination in Europe (CODE) under quiet geomagnetic conditions. The TECM-JJT model fits the input CODE TEC data with a bias of 0.11 TECU and a root mean square error of 3.26 TECU. Results show that the regional model TECM-JJT is consistent with CODE TEC data and GPS-TEC data.
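The reported fit statistics are simply the mean bias and root-mean-square error of the regional model against the CODE grid values. A small sketch with made-up TEC values:

```python
import numpy as np

def bias_rmse(reference, model):
    """Mean bias and RMSE of model TEC against reference TEC (both in TECU)."""
    diff = np.asarray(model) - np.asarray(reference)
    return float(diff.mean()), float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical CODE grid TEC and regional-model TEC over the JJT region (TECU)
code_tec  = [12.3, 15.1, 20.4, 25.7, 18.9, 14.2]
model_tec = [12.5, 14.8, 21.0, 26.1, 18.2, 14.6]
b, r = bias_rmse(code_tec, model_tec)
print(f"bias = {b:.2f} TECU, RMSE = {r:.2f} TECU")
```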
NASA Astrophysics Data System (ADS)
Kaminski, Jacek W.; Struzewska, Joanna; Markowicz, Krzysztof; Jefimow, Maciej
2015-04-01
Within the scope of the iAREA project (Impact of absorbing aerosols on radiative forcing in the European Arctic - http://www.igf.fuw.edu.pl/iAREA), a field campaign was undertaken in March and April 2014 on Spitzbergen. Analysis of the measurements was supported by GEM-AQ model simulations. GEM-AQ is a chemical weather model: its core is a weather prediction model with environmental processes (chemistry and aerosols) implemented on-line and treated interactively (i.e. providing feedback of chemistry on radiation and dynamics). Numerical experiments were performed with a computational grid resolution of ˜15 km. The emission inventory developed by NILU in the ECLIPSE project was used. Preliminary analysis revealed a small but systematic overestimation of modelled AOD and background BC levels. We will present the analysis of the vertical distribution of different aerosol species and their contribution to AOD for two stations on Svalbard. Changes of the modelled chemical composition of aerosols with altitude will also be analyzed.
NASA Astrophysics Data System (ADS)
Oliveira, Sérgio C.; Zêzere, José L.; Lajas, Sara; Melo, Raquel
2017-07-01
Approaches used to assess shallow slide susceptibility at the basin scale are conceptually different depending on the use of statistical or physically based methods. The former are based on the assumption that the same causes are more likely to produce the same effects, whereas the latter are based on the comparison between forces which tend to promote movement along the slope and the counteracting forces that are resistant to motion. Within this general framework, this work tests two hypotheses: (i) although conceptually and methodologically distinct, the statistical and deterministic methods generate similar shallow slide susceptibility results regarding the model's predictive capacity and spatial agreement; and (ii) the combination of shallow slide susceptibility maps obtained with statistical and physically based methods, for the same study area, generate a more reliable susceptibility model for shallow slide occurrence. These hypotheses were tested at a small test site (13.9 km2) located north of Lisbon (Portugal), using a statistical method (the information value method, IV) and a physically based method (the infinite slope method, IS). The landslide susceptibility maps produced with the statistical and deterministic methods were combined into a new landslide susceptibility map. The latter was based on a set of integration rules defined by the cross tabulation of the susceptibility classes of both maps and analysis of the corresponding contingency tables. The results demonstrate a higher predictive capacity of the new shallow slide susceptibility map, which combines the independent results obtained with statistical and physically based models. Moreover, the combination of the two models allowed the identification of areas where the results of the information value and the infinite slope methods are contradictory. Thus, these areas were classified as uncertain and deserve additional investigation at a more detailed scale.
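The integration rules are derived from a cross-tabulation of the class maps produced by the two methods. A minimal sketch of such a contingency table and one possible (illustrative, not the study's) combination rule, assuming both maps share the same ordinal classes:

```python
import numpy as np
import pandas as pd

# Hypothetical per-pixel susceptibility classes (0 = low ... 3 = very high)
iv_classes = np.array([0, 1, 1, 2, 3, 2, 0, 3, 1, 2])   # information value method
is_classes = np.array([0, 1, 2, 2, 3, 1, 1, 3, 0, 2])   # infinite slope method

# Contingency table of the two classifications
table = pd.crosstab(pd.Series(iv_classes, name="IV"),
                    pd.Series(is_classes, name="IS"))
print(table)

# Example integration rule: keep the higher class where the two methods agree
# within one class, otherwise flag the pixel as "uncertain" (-1)
combined = np.where(np.abs(iv_classes - is_classes) <= 1,
                    np.maximum(iv_classes, is_classes), -1)
print(combined)
```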
Enabling Quantitative Optical Imaging for In-die-capable Critical Dimension Targets
Barnes, B.M.; Henn, M.-A.; Sohn, M. Y.; Zhou, H.; Silver, R. M.
2017-01-01
Dimensional scaling trends will eventually bring semiconductor critical dimensions (CDs) down to only a few atoms in width. New optical techniques are required to address the measurement and variability for these CDs using sufficiently small in-die metrology targets. Recently, Qin et al. [Light Sci Appl, 5, e16038 (2016)] demonstrated quantitative model-based measurements of finite sets of lines with features as small as 16 nm using 450 nm wavelength light. This paper uses simulation studies, augmented with experiments at 193 nm wavelength, to adapt and optimize the finite sets of features that work as in-die-capable metrology targets with minimal increases in parametric uncertainty. A finite element based solver for time-harmonic Maxwell's equations yields two- and three-dimensional simulations of the electromagnetic scattering for optimizing the design of such targets as functions of reduced line lengths, fewer number of lines, fewer focal positions, smaller critical dimensions, and shorter illumination wavelength. Metrology targets that exceeded performance requirements are as short as 3 μm for 193 nm light, feature as few as eight lines, and are extensible to sub-10 nm CDs. Target areas measured at 193 nm can be fifteen times smaller in area than current state-of-the-art scatterometry targets described in the literature. This new methodology is demonstrated to be a promising alternative for optical model-based in-die CD metrology. PMID:28757674
Congdon, Peter
2009-01-30
Estimates of disease prevalence for small areas are increasingly required for the allocation of health funds according to local need. Both individual level and geographic risk factors are likely to be relevant to explaining prevalence variations, and in turn relevant to the procedure for small area prevalence estimation. Prevalence estimates are of particular importance for major chronic illnesses such as cardiovascular disease. A multilevel prevalence model for cardiovascular outcomes is proposed that incorporates both survey information on patient risk factors and the effects of geographic location. The model is applied to derive micro area prevalence estimates, specifically estimates of cardiovascular disease for Zip Code Tabulation Areas in the USA. The model incorporates prevalence differentials by age, sex, ethnicity and educational attainment from the 2005 Behavioral Risk Factor Surveillance System survey. Influences of geographic context are modelled at both county and state level, with the county effects relating to poverty and urbanity. State level influences are modelled using a random effects approach that allows both for spatial correlation and spatial isolates. To assess the importance of geographic variables, three types of model are compared: a model with person level variables only; a model with geographic effects that do not interact with person attributes; and a full model, allowing for state level random effects that differ by ethnicity. There is clear evidence that geographic effects improve statistical fit. Geographic variations in disease prevalence partly reflect the demographic composition of area populations. However, prevalence variations may also show distinct geographic 'contextual' effects. The present study demonstrates by formal modelling methods that improved explanation is obtained by allowing for distinct geographic effects (for counties and states) and for interaction between geographic and person variables. Thus an appropriate methodology to estimate prevalence at small area level should include geographic effects as well as person level demographic variables.
Congdon, Peter
2009-01-01
Background Estimates of disease prevalence for small areas are increasingly required for the allocation of health funds according to local need. Both individual level and geographic risk factors are likely to be relevant to explaining prevalence variations, and in turn relevant to the procedure for small area prevalence estimation. Prevalence estimates are of particular importance for major chronic illnesses such as cardiovascular disease. Methods A multilevel prevalence model for cardiovascular outcomes is proposed that incorporates both survey information on patient risk factors and the effects of geographic location. The model is applied to derive micro area prevalence estimates, specifically estimates of cardiovascular disease for Zip Code Tabulation Areas in the USA. The model incorporates prevalence differentials by age, sex, ethnicity and educational attainment from the 2005 Behavioral Risk Factor Surveillance System survey. Influences of geographic context are modelled at both county and state level, with the county effects relating to poverty and urbanity. State level influences are modelled using a random effects approach that allows both for spatial correlation and spatial isolates. Results To assess the importance of geographic variables, three types of model are compared: a model with person level variables only; a model with geographic effects that do not interact with person attributes; and a full model, allowing for state level random effects that differ by ethnicity. There is clear evidence that geographic effects improve statistical fit. Conclusion Geographic variations in disease prevalence partly reflect the demographic composition of area populations. However, prevalence variations may also show distinct geographic 'contextual' effects. The present study demonstrates by formal modelling methods that improved explanation is obtained by allowing for distinct geographic effects (for counties and states) and for interaction between geographic and person variables. Thus an appropriate methodology to estimate prevalence at small area level should include geographic effects as well as person level demographic variables. PMID:19183458
USDA-ARS?s Scientific Manuscript database
Numerous modeling and field studies have evaluated the effectiveness of vegetative treatment systems in treating runoff from animal feeding operations; however, none have evaluated the effectiveness of vegetative treatment areas (VTA’s) receiving direct runoff from small swine operations during natu...
Impact of small-scale structures on estuarine circulation
NASA Astrophysics Data System (ADS)
Liu, Zhuo; Zhang, Yinglong J.; Wang, Harry V.; Huang, Hai; Wang, Zhengui; Ye, Fei; Sisson, Mac
2018-05-01
We present a novel and challenging application of a 3D estuary-shelf model to the study of the collective impact of many small-scale structures (bridge pilings of 1 m × 2 m in size) on larger-scale circulation in a tributary (James River) of Chesapeake Bay. We first demonstrate that the model is capable of effectively transitioning grid resolution from 400 m down to 1 m near the pilings without introducing undue numerical artifacts. We then show that, despite their small sizes and collectively small area as compared to the total channel cross-sectional area, the pilings exert a noticeable impact on the large-scale circulation and also create a rich structure of vortices and wakes around them. As a result, the water quality and local sedimentation patterns near the bridge piling area are likely to be affected as well. However, when evaluated over the entire waterbody of the project area, the near-field effects are weighted by the areal percentage of the piling area, which is small compared with that of the larger unaffected area, and the impact on the lower James River as a whole therefore becomes relatively insignificant. The study highlights the importance of the use of high resolution in assessing the near-field impact of structures.
Sunlight reflection off the spacecraft with a solar sail on the surface of Mars
NASA Astrophysics Data System (ADS)
Starinova, O. L.; Rozhkov, M. A.; Gorbunova, I. V.
2018-05-01
Modern technologies make it possible to carry out many projects in the field of space exploration. One such project is the colonization of Mars and the provision of favorable living conditions on it. The authors propose principles for the operation of a spacecraft with a solar sail intended to create a thermal and light spot in a predetermined area of the Martian surface. This additional illumination can maintain and support certain climatic conditions in a small area where a Mars base could be located. This paper investigates the possibility of the spacecraft continuously reflecting sunlight off the solar sail onto a small area of the Martian surface. A mathematical motion model for this condition of solar sail orientation is formulated and used in motion simulations, and the resulting motion is analyzed. Parameters of the synchronous non-Keplerian orbit and of the spacecraft design were obtained, and recommendations are given for the further use of satellites to reflect sunlight onto a planet's surface.
Modelling urban growth in the Indo-Gangetic plain using nighttime OLS data and cellular automata
NASA Astrophysics Data System (ADS)
Roy Chowdhury, P. K.; Maithani, Sandeep
2014-12-01
The present study demonstrates the applicability of the Operational Linescan System (OLS) sensor in modelling urban growth at the regional level. Nighttime OLS data provide an easy, inexpensive way to map urban areas at a regional scale, requiring a very small volume of data. A cellular automata (CA) model was developed for simulating urban growth in the Indo-Gangetic plain, using maps derived from OLS data as input. In the proposed CA model, urban growth was expressed in terms of causative factors such as economy, topography, accessibility and urban infrastructure. The model was calibrated and validated against OLS data of 2003 and 2008, respectively, using spatial metrics, and urban growth was subsequently predicted for the year 2020. The model predicted high urban growth in the north-western part of the study area; in the south-eastern part, growth would be concentrated around two cities, Kolkata and Howrah, while in the middle portion of the study area, i.e., Jharkhand, Bihar and eastern Uttar Pradesh, urban growth is predicted in the form of clusters, mostly around the present big cities. These results will not only provide an input to urban planning but can also be used in hydrological and ecological modelling, which require estimates of future built-up areas, especially at the regional level.
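In a CA model of this type, a non-urban cell typically converts to urban when a score combining the causative factors and the share of urban neighbours exceeds a threshold. A toy sketch of a single transition step; the weights and threshold are illustrative, not the calibrated values of the study:

```python
import numpy as np

def ca_step(urban, suitability, threshold=0.4):
    """One CA transition: non-urban cells convert when neighbourhood pressure plus suitability is high."""
    padded = np.pad(urban, 1)
    # Count urban cells in the 3x3 Moore neighbourhood (zero padding at the edges)
    neigh = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1))[1:-1, 1:-1] - urban
    score = 0.5 * suitability + 0.5 * neigh / 8.0
    return np.where((urban == 0) & (score > threshold), 1, urban)

urban = np.zeros((5, 5), dtype=int)
urban[2, 2] = 1                                            # an existing urban seed
suitability = np.random.default_rng(0).random((5, 5))      # proxy for economy/topography/accessibility
print(ca_step(urban, suitability))
```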
Guymon, Gary L.; Yen, Chung-Cheng
1990-01-01
The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three variables: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position in most of the basin is small, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are over estimated by the two-point probability method.
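The "two-point probability model" evaluates the deterministic model at two points per uncertain variable (roughly the mean plus and minus one standard deviation) and combines the outputs to approximate the mean and variance of the prediction, in the spirit of Rosenblueth's point-estimate method. A minimal sketch for three lumped uncertain variables, with a stand-in water-table function in place of the two-layer model:

```python
import itertools
import numpy as np

def water_table(K, S, Q):
    """Stand-in for the deterministic two-layer model: returns a water-table elevation (m)."""
    return 100.0 - 5.0 * np.log(K) + 2.0 / S - 0.01 * Q

# Mean and standard deviation of the three lumped uncertain variables
means = {"K": 10.0, "S": 0.15, "Q": 500.0}     # conductivity, specific yield, source-sink term
stds  = {"K": 3.0,  "S": 0.05, "Q": 100.0}

# Evaluate the model at all 2^3 combinations of (mean - std, mean + std)
outputs = [water_table(means["K"] + sk * stds["K"],
                       means["S"] + ss * stds["S"],
                       means["Q"] + sq * stds["Q"])
           for sk, ss, sq in itertools.product((-1, 1), repeat=3)]
mean_h = np.mean(outputs)
cv = np.std(outputs) / mean_h           # analogue of the coefficient of variation of the water table
print(f"mean water table = {mean_h:.2f} m, CV = {cv:.3f}")
```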
NASA Astrophysics Data System (ADS)
Naumenko, Mikhail; Guzivaty, Vadim; Sapelko, Tatiana
2016-04-01
Lake morphometry refers to the physical factors (shape, size, structure, etc.) that characterize the lake depression. Morphology has a great influence on lake ecological characteristics, especially on water thermal conditions and mixing depth. Depth analyses, including sediment measurement at various depths, volumes of strata and shoreline characteristics, are often critical to the investigation of biological, chemical and physical properties of fresh waters as well as theoretical retention time. Management techniques such as loading capacity for effluents and selective removal of undesirable components of the biota also depend on detailed knowledge of the morphometry and flow characteristics. In recent years, lake bathymetric surveys were carried out using an echo sounder with high bottom-depth resolution and GPS coordinate determination. A few digital bathymetric models have been created on a 10 × 10 m spatial grid for some small lakes of the Russian Plain whose areas do not exceed 1-2 sq. km. The statistical characteristics of the depth and slope distributions of these lakes were calculated on an equidistant grid. This will provide the level-surface-volume variations of small lakes and reservoirs, calculated through a combination of various satellite images. We discuss the methodological aspects of creating morphometric models of depths and slopes of small lakes as well as the advantages of digital models over traditional methods.
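The level-surface-volume relation of a digital bathymetric model follows directly from the depth grid: for each water level, count the wetted cells and integrate the remaining depth. A small sketch on a synthetic 10 m grid (illustrative only):

```python
import numpy as np

cell = 10.0 * 10.0                         # cell area, m^2 (10 x 10 m grid)
depth = np.random.default_rng(1).uniform(0.0, 12.0, size=(200, 200))  # synthetic depths, m

for drawdown in (0.0, 2.0, 4.0):           # lake-level lowering relative to the surveyed level
    remaining = np.clip(depth - drawdown, 0.0, None)
    area_km2 = (remaining > 0).sum() * cell / 1e6
    volume_mcm = remaining.sum() * cell / 1e6
    print(f"level -{drawdown:.0f} m: area = {area_km2:.3f} km2, volume = {volume_mcm:.2f} million m3")
```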
Stanton, Neville A; Walker, Guy H; Sorensen, Linda J
2012-01-01
This article presents the rationale behind an important enhancement to a socio-technical model of organisations and teams derived from military research. It combines this with empirical results which take advantage of these enhancements. In Part 1, a new theoretical legacy for the model is developed based on Ergonomics theories and insights. This allows team communications data to be plotted into the model and for it to demonstrate discriminate validity between alternative team structures. Part 2 presents multinational data from the Experimental Laboratory for Investigating Collaboration, Information-sharing, and Trust (ELICIT) community. It was surprising to see that teams in both traditional hierarchical command and control and networked 'peer-to-peer' organisations operate in broadly the same area of the model, a region occupied by networks of communication exhibiting 'small world' properties. Small world networks may be of considerable importance for the Ergonomics analysis of team organisation and performance. This article is themed around macro and systems Ergonomics, and examines the effects of command and control structures. Despite some differences in behaviour and measures of agility, when given the freedom to do so, participants organised themselves into a small world network. This network type has important and interesting implications for the Ergonomics design of teams and organisations.
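Whether a communication network exhibits "small world" properties is commonly judged by comparing its clustering coefficient and characteristic path length with those of an equivalent random graph. A minimal sketch using networkx on a synthetic team network (not the ELICIT data):

```python
import networkx as nx

# Hypothetical team communication graph (who talks to whom)
G = nx.connected_watts_strogatz_graph(n=20, k=4, p=0.2, seed=42)

C = nx.average_clustering(G)
L = nx.average_shortest_path_length(G)

# Reference random graph with the same number of nodes and edges
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=42)
if not nx.is_connected(R):                       # path length requires a connected graph
    R = R.subgraph(max(nx.connected_components(R), key=len)).copy()
C_rand = nx.average_clustering(R)
L_rand = nx.average_shortest_path_length(R)

# A small-world index well above 1 suggests 'small world' structure
sigma = (C / C_rand) / (L / L_rand)
print(f"C = {C:.3f}, L = {L:.3f}, sigma = {sigma:.2f}")
```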
Assignment of boundary conditions in embedded ground water flow models
Leake, S.A.
1998-01-01
Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
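Bilinear interpolation of heads from the cell centres of the regional model to perimeter points of the embedded model can be written in a few lines. A minimal sketch, assuming a uniform large-scale grid with heads defined at cell centres:

```python
import numpy as np

def head_at(point_x, point_y, head, dx, dy):
    """Bilinearly interpolate head (defined at cell centres) at an arbitrary point."""
    # Fractional position in cell-centre coordinates
    fx = point_x / dx - 0.5
    fy = point_y / dy - 0.5
    i, j = int(np.floor(fy)), int(np.floor(fx))
    ty, tx = fy - i, fx - j
    return ((1 - ty) * (1 - tx) * head[i, j]     + (1 - ty) * tx * head[i, j + 1] +
            ty       * (1 - tx) * head[i + 1, j] + ty       * tx * head[i + 1, j + 1])

# Hypothetical regional-model heads (m) on a 500 m grid
heads = np.array([[100.0, 101.0, 102.5],
                  [ 99.0, 100.5, 101.8],
                  [ 98.2,  99.4, 100.9]])
print(head_at(600.0, 750.0, heads, dx=500.0, dy=500.0))   # head at one perimeter point
```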
NASA Astrophysics Data System (ADS)
Lateh, Masitah Abdul; Kamilah Muda, Azah; Yusof, Zeratul Izzah Mohd; Azilah Muda, Noor; Sanusi Azmi, Mohd
2017-09-01
The emerging era of big data over the past few years has led to large and complex datasets that demand faster and better decision making. However, small-dataset problems still arise in certain areas, making analysis and decisions hard to reach. In order to build a prediction model, a large sample is required for training, and a small dataset is insufficient to produce an accurate prediction model. This paper reviews artificial data generation approaches as one of the solutions to the small dataset problem.
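One common family of artificial data generation methods creates virtual samples around the observed ones, for example by resampling with small feature-scaled perturbations, before fitting the prediction model. A minimal, generic sketch of that idea (one simple variant, not the specific methods reviewed in the paper):

```python
import numpy as np

def virtual_samples(X, n_new, noise_scale=0.1, seed=0):
    """Generate virtual samples by resampling rows and adding feature-scaled Gaussian noise."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X), size=n_new)
    noise = rng.normal(0.0, noise_scale * X.std(axis=0), size=(n_new, X.shape[1]))
    return X[idx] + noise

# Hypothetical small training set: 8 samples, 3 features
X_small = np.random.default_rng(1).random((8, 3))
X_augmented = np.vstack([X_small, virtual_samples(X_small, n_new=40)])
print(X_augmented.shape)   # (48, 3)
```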
Lowrey, Chris E.; Longshore, Kathleen M.; Riddle, Brett R.; Mantooth, Stacy
2016-01-01
Although montane sky islands surrounded by desert scrub and shrub steppe comprise a large part of the biological diversity of the Basin and Range Province of southwestern North America, comprehensive ecological and population demographic studies for high-elevation small mammals within these areas are rare. Here, we examine the ecology and population parameters of the Palmer’s chipmunk (Tamias palmeri) in the Spring Mountains of southern Nevada, and present a predictive GIS-based distribution and probability of occurrence model at both home range and geographic spatial scales. Logistic regression analyses and Akaike Information Criterion model selection found variables of forest type, slope, and distance to water sources as predictive of chipmunk occurrence at the geographic scale. At the home range scale, increasing population density, decreasing overstory canopy cover, and decreasing understory canopy cover contributed to increased survival rates.
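Occurrence modelling of this kind fits candidate logistic regressions and ranks them by AIC. A minimal sketch with statsmodels and entirely synthetic covariates standing in for forest type, slope and distance to water:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
slope = rng.uniform(0, 40, n)                  # degrees
dist_water = rng.uniform(0, 2000, n)           # metres
forest = rng.integers(0, 2, n)                 # 1 = suitable forest type
logit_p = -1.0 + 1.5 * forest - 0.03 * slope - 0.001 * dist_water
occupied = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

candidates = {
    "forest only":        sm.add_constant(np.column_stack([forest])),
    "forest+slope+water": sm.add_constant(np.column_stack([forest, slope, dist_water])),
}
for name, X in candidates.items():
    fit = sm.Logit(occupied, X).fit(disp=0)
    print(f"{name}: AIC = {fit.aic:.1f}")      # lower AIC = better-supported model
```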
A large-scale deforestation experiment: Effects of patch area and isolation on Amazon birds
Ferraz, G.; Nichols, J.D.; Hines, J.E.; Stouffer, P.C.; Bierregaard, R.O.; Lovejoy, T.E.
2007-01-01
As compared with extensive contiguous areas, small isolated habitat patches lack many species. Some species disappear after isolation; others are rarely found in any small patch, regardless of isolation. We used a 13-year data set of bird captures from a large landscape-manipulation experiment in a Brazilian Amazon forest to model the extinction-colonization dynamics of 55 species and tested basic predictions of island biogeography and metapopulation theory. From our models, we derived two metrics of species vulnerability to changes in isolation and patch area. We found a strong effect of area and a variable effect of isolation on the predicted patch occupancy by birds.
Chromospheric and Transition region He lines during a flare
NASA Astrophysics Data System (ADS)
Falchi, A.; Mauas, P. J. D.; Andretta, V.; Teriaca, L.; Cauzzi, G.; Falciani, R.; Smaldone, L. A.
An observing campaign (SOHO JOP 139), coordinated between ground-based and SOHO instruments, has been planned to obtain simultaneous spectroheliograms of the same area in several spectral lines. The chromospheric lines Ca II K, Hα and Na I D as well as the He I 10830, 5876, 584 and 304 Å lines have been observed. These observations allow us to build semi-empirical models of the atmosphere before and during a small flare. With these models, constructed to match the observed line profiles, we can test the He abundance value.
Predicting herbicide and biocide concentrations in rivers across Switzerland
NASA Astrophysics Data System (ADS)
Wemyss, Devon; Honti, Mark; Stamm, Christian
2014-05-01
Pesticide concentrations vary strongly in space and time. Accordingly, intensive sampling is required to achieve a reliable quantification of pesticide pollution. As this requires substantial resources, loads and concentration ranges in many small and medium streams remain unknown. Here, we propose partially filling the information gap for herbicides and biocides by using a modelling approach that predicts stream concentrations without site-specific calibration simply based on generally available data like land use, discharge and nation-wide consumption data. The simple, conceptual model distinguishes herbicide losses from agricultural fields, private gardens and biocide losses from buildings (facades, roofs). The herbicide model is driven by river discharge and the applied herbicide mass; the biocide model requires precipitation and the footprint area of urban areas containing the biocide. The model approach allows for modelling concentrations across multiple catchments at the daily, or shorter, time scale and for small to medium-sized catchments (1 - 100 km2). Four high resolution sampling campaigns in the Swiss Plateau were used to calibrate the model parameters for six model compounds: atrazine, metolachlor, terbuthylazine, terbutryn, diuron and mecoprop. Five additional sampled catchments across Switzerland were used to directly compare the predicted to the measured concentrations. Analysis of the first results reveals a reasonable simulation of the concentration dynamics for specific rainfall events and across the seasons. Predicted concentration ranges are reasonable even without site-specific calibration. This indicates the transferability of the calibrated model directly to other areas. However, the results also demonstrate systematic biases in that the highest measured peaks were not attained by the model. Probable causes for these deviations are conceptual model limitations and input uncertainty (pesticide use intensity, local precipitation, etc.). Accordingly, the model will be conceptually improved. This presentation will present the model simulations and compare the performance of the original and the modified model versions. Finally, the model will be applied across approximately 50% of the catchments in the Swiss Plateau, where necessary input data is available and where the model concept can be reasonably applied.
NASA Astrophysics Data System (ADS)
Kyllmar, K.; Mårtensson, K.; Johnsson, H.
2005-03-01
A method to calculate N leaching from arable fields using model-calculated N leaching coefficients (NLCs) was developed. Using the process-based modelling system SOILNDB, leaching of N was simulated for four leaching regions in southern Sweden with 20-year climate series and a large number of randomised crop sequences based on regional agricultural statistics. To obtain N leaching coefficients, mean values of annual N leaching were calculated for each combination of main crop, following crop and fertilisation regime for each leaching region and soil type. The field-NLC method developed could be useful for following up water quality goals in e.g. small monitoring catchments, since it allows normal leaching from actual crop rotations and fertilisation to be determined regardless of the weather. The method was tested using field data from nine small intensively monitored agricultural catchments. The agreement between calculated field N leaching and measured N transport in catchment stream outlets, 19-47 and 8-38 kg ha^-1 yr^-1, respectively, was satisfactory in most catchments when contributions from land uses other than arable land and uncertainties in groundwater flows were considered. The possibility of calculating effects of crop combinations (crop and following crop) is of considerable value since changes in crop rotation constitute a large potential for reducing N leaching. When the effect of a number of potential measures to reduce N leaching (i.e. applying manure in spring instead of autumn; postponing ploughing-in of ley and green fallow in autumn; undersowing a catch crop in cereals and oilseeds; and increasing the area of catch crops by substituting winter cereals and winter oilseeds with corresponding spring crops) was calculated for the arable fields in the catchments using field-NLCs, N leaching was reduced by between 34 and 54% for the separate catchments when the best possible effect on the entire potential area was assumed.
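With leaching coefficients in hand, field and catchment loads follow from a lookup and an area-weighted sum. A minimal sketch, assuming a hypothetical NLC table keyed by region, soil, crop, following crop and fertilisation regime:

```python
# Hypothetical N leaching coefficients, kg N per ha and year
nlc = {
    ("south", "sand", "spring barley", "winter wheat", "manure"): 38.0,
    ("south", "sand", "winter wheat", "ley", "mineral"): 22.0,
    ("south", "clay", "ley", "ley", "none"): 9.0,
}

# Fields in one small catchment: (key, area in ha)
fields = [
    (("south", "sand", "spring barley", "winter wheat", "manure"), 12.5),
    (("south", "sand", "winter wheat", "ley", "mineral"), 20.0),
    (("south", "clay", "ley", "ley", "none"), 7.5),
]

total_area = sum(area for _, area in fields)
total_load = sum(nlc[key] * area for key, area in fields)
print(f"catchment arable N leaching: {total_load:.0f} kg "
      f"({total_load / total_area:.1f} kg ha^-1 yr^-1)")
```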
ISSUES IN THE STATISTICAL ANALYSIS OF SMALL-AREA HEALTH DATA. (R825173)
The availability of geographically indexed health and population data, with advances in computing, geographical information systems and statistical methodology, have opened the way for serious exploration of small area health statistics based on routine data. Such analyses may be...
Doyle, W. Harry
1981-01-01
A requirement of Public Law 95-87, the Surface Mining Control and Reclamation Act of 1977, is the understanding of the hydrology in actual and proposed surface-mined areas. Surface-water data for small specific sites and for larger areas, such as adjacent and general areas, are also needed to satisfy the hydrologic requirements of the Act. The Act specifies that surface-water modeling techniques may be used to generate the data and information. The purpose of this report is to describe how this can be achieved for smaller watersheds. This report also characterizes 12 'state-of-the-art' strip-mining assessment models that are to be tested with data from two data-intensive studies involving small watersheds in Tennessee and Indiana. Watershed models are best applied to small watersheds with specific-site data. Extending the use of modeling techniques to larger watersheds remains relatively untested, and to date the upper limits for application have not been established. The U.S. Geological Survey is currently collecting regional hydrologic data in the major coal provinces of the United States, and these data will be used to help satisfy the 'general-area' data requirements of the Act. This program is reviewed and described in this report. (USGS)
The application of a Web-geographic information system for improving urban water cycle modelling.
Mair, M; Mikovits, C; Sengthaler, M; Schöpf, M; Kinzel, H; Urich, C; Kleidorfer, M; Sitzenfrei, R; Rauch, W
2014-01-01
Research in urban water management has experienced a transition from traditional model applications to modelling water cycles as an integrated part of urban areas. This includes the interlinking of models from many research areas (e.g. urban development, socio-economy, urban water management). The integration and simulation are realized in newly developed frameworks (e.g. DynaMind and OpenMI) and often assume extensive programming knowledge. This work presents a Web-based urban water management modelling platform that simplifies the setup and use of complex integrated models. The platform is demonstrated with a small application example for a case study in the Alpine region. The model used is a DynaMind model benchmarking the impact of newly connected catchments on the flooding behaviour of an existing combined sewer system. As a result, the user's workflow within a Web browser is demonstrated and benchmark results are shown. The presented platform hides implementation-specific aspects behind Web-service based technologies so that users can focus on their main aim, which is urban water management modelling and benchmarking. Moreover, the platform offers centralized data management, automatic software updates and access to high-performance computers from desktop computers and mobile devices.
NASA Technical Reports Server (NTRS)
Illingworth, Garth; Savage, Blair; Angel, J. Roger; Blandford, Roger D.; Boggess, Albert; Bowyer, C. Stuart; Carruthers, George R.; Cowie, Lennox L.; Doschek, George A.; Dupree, Andrea K.
1991-01-01
The following subject areas are covered: (1) the science program (star formation and origins of planetary systems; structure and evolution of the interstellar medium; stellar population; the galactic and extragalactic distance scale; nature of galaxy nuclei, AGNs, and QSOs; formation and evolution of galaxies at high redshifts; and cosmology); (2) implementation of the science program; (3) the observatory-class missions (HST; LST - the 6m successor to HST; and next-generation 16m telescope); (4) moderate and small missions (Delta-class Explorers; imaging astrometric interferometer; small Explorers; optics development and demonstrations; and supporting ground-based capabilities); (5) prerequisites - the current science program (Lyman-FUSE; HTS optimization; the near-term science program; data analysis, modeling, and theory funding; and archives); (6) technologies for the next century; and (7) lunar-based telescopes and instruments.
NASA Astrophysics Data System (ADS)
Schindewolf, Marcus; Kaiser, Andreas; Buchholtz, Arno; Schmidt, Jürgen
2017-04-01
Extreme rainfall events and the resulting flash floods led to massive devastation in Germany during spring 2016. The study presented aims at the development of an early warning system that allows the simulation and assessment of negative effects on infrastructure from radar-based heavy rainfall predictions, which serve as input data for the process-based soil loss and deposition model EROSION 3D. Our approach enables a detailed identification of runoff and sediment fluxes in agriculturally used landscapes. In a first step, documented historical events were analyzed with respect to the agreement between measured radar rainfall and large-scale erosion risk maps. A second step focused on small-scale erosion monitoring via UAV of the source areas of heavy flooding events and a model reconstruction of the processes involved. In all examples, damage was caused to local infrastructure. Both analyses are promising for detecting runoff and sediment delivering areas at high temporal and spatial resolution. The results demonstrate the important role of late-covering crops such as maize, sugar beet or potatoes in runoff generation. While winter wheat, for example, affects runoff generation on undulating landscapes positively, massive soil loss and thus muddy flows are observed and depicted in the model results. Future research aims at large-scale model parameterization and application in real time, uncertainty estimation of the precipitation forecast, and interface development.
NASA Astrophysics Data System (ADS)
Rinehart, A. J.; Vivoni, E. R.
2005-12-01
Snow processes play a significant role in the hydrologic cycle of mountainous and high-latitude catchments in the western United States. Snowmelt runoff contributes a large percentage of stream runoff, while snow-covered regions remain highly localized to small portions of the catchment area. The appropriate representation of snow dynamics at a given range of spatial and temporal scales is critical for adequately predicting runoff responses in snowmelt-dominated watersheds. In particular, the accurate depiction of snow cover patterns is important as a range of topographic, land-use and geographic parameters create zones of preferential snow accumulation or ablation that significantly affect the timing of a region's snow melt and the persistence of a snow pack. In this study, we present the development and testing of a distributed snow model designed for simulations over complex terrain. The snow model is developed within the context of the TIN-based Real-time Integrated Basin Simulator (tRIBS), a fully-distributed watershed model capable of continuous simulations of coupled hydrological processes, including unsaturated-saturated zone dynamics, land-atmosphere interactions and runoff generation via multiple mechanisms. The use of triangulated irregular networks as a domain discretization allows tRIBS to accurately represent topography with a reduced number of computational nodes, as compared to traditional grid-based models. This representation is developed using a Delaunay optimization criterion that causes areas of topographic homogeneity to be represented at larger spatial scales than the original grid, while more heterogeneous areas are represented at higher resolutions. We utilize the TIN-based terrain representation to simulate microscale (10-m to 100-m) snow pack dynamics over a catchment. The model includes processes such as the snow pack energy balance, wind and bulk redistribution, and snow interception by vegetation. For this study, we present tests from a distributed one-layer energy balance model as applied to a northern New Mexico hillslope in a ponderosa pine forest using both synthetic and real meteorological forcing. We also provide tests of the model's capability to represent spatial patterns within a small watershed in the Jemez Mountain region. Finally, we discuss the interaction of the tested snow process module with existing components in the watershed model and additional applications and capabilities under development.
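A one-layer snow energy balance of the general kind described advances the pack's cold content and melts snow once it reaches 0 °C. A highly simplified single-timestep sketch; the constants and fluxes are illustrative and this is not the tRIBS formulation:

```python
LATENT_FUSION = 334_000.0      # J kg^-1
C_ICE = 2100.0                 # J kg^-1 K^-1

def snow_step(swe_mm, t_snow_c, net_energy_wm2, dt_s=3600.0):
    """Advance snow water equivalent (mm) and snow temperature (C) by one timestep.

    Assumes snow is present (swe_mm > 0); 1 mm SWE corresponds to 1 kg m^-2.
    """
    energy_j = net_energy_wm2 * dt_s                      # J m^-2 supplied over the timestep
    swe_kg = swe_mm
    cold_content = -t_snow_c * C_ICE * swe_kg             # energy needed to warm the pack to 0 C
    if energy_j <= cold_content:                          # only warming, no melt
        return swe_mm, t_snow_c + energy_j / (C_ICE * swe_kg)
    melt_kg = (energy_j - cold_content) / LATENT_FUSION   # remaining energy melts snow
    return max(swe_mm - melt_kg, 0.0), 0.0

print(snow_step(swe_mm=150.0, t_snow_c=-5.0, net_energy_wm2=200.0))
```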
Masked areas in shear peak statistics. A forward modeling approach
Bard, D.; Kratochvil, J. M.; Dawson, W.
2016-03-09
The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint of models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.
NASA Astrophysics Data System (ADS)
Wiemann, Stefan; Eltner, Anette; Sardemann, Hannes; Spieler, Diana; Singer, Thomas; Thanh Luong, Thi; Janabi, Firas Al; Schütze, Niels; Bernard, Lars; Bernhofer, Christian; Maas, Hans-Gerd
2017-04-01
Flash floods regularly cause severe socio-economic damage worldwide. In parallel, climate change is very likely to increase the number of such events, due to an increasing frequency of extreme precipitation events (EASAC 2013). Whereas recent work primarily addresses the resilience of large catchment areas, the major impact of hydro-meteorological extremes caused by heavy precipitation is on small areas. These areas are very difficult to observe and predict, due to sparse monitoring networks and only limited means for hydro-meteorological modelling, especially in small catchment areas. The objective of the EXTRUSO project is to identify and implement appropriate means to close this gap by an interdisciplinary approach, combining comprehensive research expertise from meteorology, hydrology, photogrammetry and geoinformatics. The project targets innovative techniques for achieving spatio-temporally densified monitoring and simulations for the analysis, prediction and warning of local hydro-meteorological extreme events. The following four aspects are of particular interest: 1. The monitoring, analysis and combination of relevant hydro-meteorological parameters from various sources, including existing monitoring networks, ground radar, specific low-cost sensors and crowdsourcing. 2. The determination of relevant hydro-morphological parameters from different photogrammetric sensors (e.g. camera, laser scanner) and sensor platforms (e.g. UAV (unmanned aerial vehicle) and UWV (unmanned water vehicle)). 3. The continuous hydro-meteorological modelling of precipitation, soil moisture and water flows by means of conceptual and data-driven modelling. 4. The development of a collaborative, web-based service infrastructure as an information and communication point, especially in the case of an extreme event. There are three major applications for the planned information system: first, the warning of local extreme events for the population in potentially affected areas; second, the support for decision makers and emergency responders in the case of an event; and third, the development of open, interoperable tools for other researchers to apply and further develop. The test area of the project is the Free State of Saxony (Germany) with a number of small and medium catchment areas. However, the whole system, comprising models, tools and sensor setups, is planned to be transferred and tested in other areas, within and outside Europe, as well. The team working on the project consists of eight researchers, including five PhD students and three postdocs. The EXTRUSO project is funded by the European Social Fund (ESF grant nr. 100270097) with a project duration of three years until June 2019. EASAC (2013): Trends in extreme weather events in Europe: implications for national and European Union adaptation strategies. European Academies Science Advisory Council. Policy report 22, November 2013.
Electrical and optical 3D modelling of light-trapping single-photon avalanche diode
NASA Astrophysics Data System (ADS)
Zheng, Tianzhe; Zang, Kai; Morea, Matthew; Xue, Muyu; Lu, Ching-Ying; Jiang, Xiao; Zhang, Qiang; Kamins, Theodore I.; Harris, James S.
2018-02-01
Single-photon avalanche diodes (SPADs) have been widely used to push the frontier of scientific research (e.g., quantum science and single-molecule fluorescence) and practical applications (e.g., Lidar). However, there is typically a compromise between photon detection efficiency and jitter distribution. The light-trapping SPAD has been proposed to break this trade-off by coupling the vertically incoming photons into a laterally propagating mode while maintaining a small jitter and a thin Si device layer. In this work, we provide a 3D-based optical and electrical model based on practical fabrication conditions and discuss design parameters, which include surface texturing, photon injection position, device area, and other features.
Analysis of exhaled breath by laser detection
NASA Astrophysics Data System (ADS)
Thrall, Karla D.; Toth, James J.; Sharpe, Steven W.
1996-04-01
The goal of our work is twofold: (1) to develop a portable, rapid, laser-based breath analyzer for monitoring metabolic processes, and (2) to predict these metabolic processes through physiologically based pharmacokinetic (PBPK) modeling. Small infrared-active molecules such as ammonia, carbon monoxide, carbon dioxide, methane and ethane are present in exhaled breath and can be readily detected by laser absorption spectroscopy. In addition, many of the stable isotopomers of these molecules can be accurately detected, making it possible to follow specific metabolic processes. Potential areas of application for this technology include the diagnosis of certain pathologies (e.g. Helicobacter pylori infection), detection of trauma due to either physical or chemical causes, and monitoring nutrient uptake (i.e., malnutrition). In order to understand the origin of and elucidate the metabolic processes associated with these small molecules, we are employing physiologically based pharmacokinetic (PBPK) models. A PBPK model is founded on known physiological processes (i.e., blood flow rates, tissue volumes, breathing rate, etc.), chemical-specific processes (i.e., tissue solubility coefficients, molecular weight, chemical density, etc.), and metabolic processes (tissue site and rate of metabolic biotransformation). Since many of these processes are well understood, a PBPK model can be developed and validated against the more readily available experimental animal data, and then, by extrapolating the parameters to apply to man, the model can predict chemical behavior in humans.
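A PBPK model is a set of mass-balance ODEs over tissue compartments with blood-flow, partition and metabolism terms. A deliberately minimal two-compartment sketch (blood plus a metabolising liver, with exchange through the lungs), using illustrative parameters rather than values for any particular compound:

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative parameters (not for any specific chemical)
Q_ALV = 300.0                # alveolar ventilation, L/h
Q_LIV = 90.0                 # liver blood flow, L/h
V_BLOOD, V_LIV = 5.0, 1.5    # compartment volumes, L
P_LIV = 2.0                  # liver:blood partition coefficient
P_BA = 10.0                  # blood:air partition coefficient
K_MET = 5.0                  # first-order hepatic metabolism, 1/h
C_INH = 0.0                  # inhaled concentration, mg/L air

def pbpk(y, t):
    c_blood, c_liv = y
    lung_exchange = Q_ALV * (C_INH - c_blood / P_BA)       # uptake/loss via the lungs
    liver_exchange = Q_LIV * (c_blood - c_liv / P_LIV)     # blood <-> liver
    dc_blood = (lung_exchange - liver_exchange) / V_BLOOD
    dc_liv = (liver_exchange - K_MET * c_liv * V_LIV) / V_LIV
    return [dc_blood, dc_liv]

t = np.linspace(0, 8, 100)                                 # hours
sol = odeint(pbpk, y0=[1.0, 0.0], t=t)                     # start with 1 mg/L in blood
print("exhaled-breath concentration after 8 h:", sol[-1, 0] / P_BA)
```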
Infrared moving small target detection based on saliency extraction and image sparse representation
NASA Astrophysics Data System (ADS)
Zhang, Xiaomin; Ren, Kan; Gao, Jin; Li, Chaowei; Gu, Guohua; Wan, Minjie
2016-10-01
Moving small target detection in infrared images is a crucial technique in infrared search and tracking systems. This paper presents a novel small target detection technique based on frequency-domain saliency extraction and image sparse representation. First, we exploit the Fourier spectrum image and the magnitude spectrum of the Fourier transform to make a rough extraction of salient regions and use threshold segmentation to separate the regions that appear salient from the background, which yields a binary image. Second, a new patch-image model and an over-complete dictionary are introduced into the detection system, and infrared small target detection is converted into an optimization problem of patch-image reconstruction based on sparse representation. More specifically, the test image and the binary image are decomposed into image patches following certain rules. We select potential target areas according to the binary patch-image, which contains the salient-region information, and then exploit the over-complete infrared small target dictionary to reconstruct the test image blocks that may contain targets; the coefficients of the target image patches are sparse. Finally, for image sequences, the Euclidean distance between detections is used to reduce the false alarm rate and increase the detection accuracy of moving small targets in infrared images, exploiting the correlation of target positions between frames.
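The frequency-domain saliency step can be illustrated with the classic spectral-residual idea: subtract a smoothed log-magnitude spectrum from the original, invert the transform and threshold the result. A minimal sketch with numpy/scipy; this is a generic variant, not necessarily the exact formulation of the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def saliency_map(image):
    """Spectral-residual saliency: salient (e.g. small-target) regions get high values."""
    f = np.fft.fft2(image)
    log_mag = np.log1p(np.abs(f))
    phase = np.angle(f)
    residual = log_mag - uniform_filter(log_mag, size=3)        # spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return uniform_filter(sal, size=5)                          # smooth the saliency map

rng = np.random.default_rng(0)
frame = rng.normal(0.2, 0.05, (64, 64))                         # cluttered background
frame[30:32, 40:42] += 1.0                                      # small bright target
sal = saliency_map(frame)
binary = sal > sal.mean() + 3 * sal.std()                       # threshold segmentation
print(np.argwhere(binary))                                      # candidate target pixels
```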
Water resources of the Wild Rice River watershed, northwestern Minnesota
Winter, Thomas C.; Bidwell, L.E.; Maclay, Robert W.
1970-01-01
The area of the watershed is about 2,600 square miles and includes most of Mahnomen and Norman Counties and parts of Becker, Clay, Clearwater, and Polk Counties. The population of the area is about 37,000 people of which about 70 percent live on farms. The economy is based principally on farming. The area of lake clay and silt is used mostly for raising sugar beets and wheat; and potatoes are grown largely on the sandy soils. In the western part of the morainal area small grain, dairy, and cattle farming is the most common. The eastern part of the morainal area is important for forest products and recreation. Industries in the area are small and are based on agricultural processing and service.
NASA Astrophysics Data System (ADS)
Fast, Jerome D.; Osteen, B. Lance
In this study, a four-dimensional data assimilation technique based on Newtonian relaxation is incorporated into the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) and evaluated using data taken from one experiment of the US Department of Energy's (DOE) 1991 Atmospheric Studies in COmplex Terrain (ASCOT) field study along the front range of the Rockies in Colorado. The main objective of this study is to determine the ability of the model to predict small-scale circulations influenced by terrain, such as drainage flows, and assess the impact of data assimilation on the numerical results. In contrast to previous studies in which the smallest horizontal grid spacing was 10 km and 8 km, data assimilation is applied in this study to domains with a horizontal grid spacing as small as 1 km. The prognostic forecasts made by RAMS are evaluated by comparing simulations that employ static initial conditions, with simulations that incorporate continuous data assimilation, and data assimilation for a fixed period of time (dynamic initialization). This paper will also elaborate on the application and limitation of the Newtonian relaxation technique in limited-area mesoscale models with a relatively small grid spacing.
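Newtonian relaxation (nudging) adds a term that pulls the model state toward observations or analyses with a prescribed relaxation timescale. A one-variable sketch of the idea:

```python
def integrate_with_nudging(u0, forcing, u_obs, g=1.0 / 3600.0, dt=60.0, n_steps=360):
    """du/dt = F(u) + G * (u_obs - u); G is the inverse relaxation timescale (1/s)."""
    u = u0
    for _ in range(n_steps):
        u = u + dt * (forcing(u) + g * (u_obs - u))
    return u

# Toy example: the model wind tends to decay, while observations say 8 m/s
decay = lambda u: -1e-4 * u
print(integrate_with_nudging(u0=2.0, forcing=decay, u_obs=8.0))
```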
NASA Astrophysics Data System (ADS)
Bartu, Petr; Koeppe, Robert; Arnold, Nikita; Neulinger, Anton; Fallon, Lisa; Bauer, Siegfried
2010-06-01
Position sensitive detection schemes based on the lateral photoeffect rely on inorganic semiconductors. Such position sensitive devices (PSDs) are reliable and robust, but preparation with large active areas is expensive and use on curved substrates is impossible. Here we present a novel route for the fabrication of conformable PSDs which allows easy preparation over large areas and use on curved surfaces. Our device is based on stretchable silicone waveguides with embedded fluorescent dyes, used in conjunction with small silicon photodiodes. Impinging laser light (e.g., from a laser pointer) is absorbed by the dye in the PSD and re-emitted as fluorescence at a longer wavelength. Due to the isotropic emission from the fluorescent dye molecules, most of the re-emitted light is coupled into the planar silicone waveguide and directed to the edges of the device. There the light signals are detected via embedded small silicon photodiodes arranged in a regular pattern. Using a mathematical algorithm derived from models used extensively in global positioning system (GPS) applications and human activity monitoring, the position of light spots is easily calculated. Additionally, the device shows high durability against mechanical stress when clamped in a uniaxial stretcher and mechanically loaded up to 15% strain. The ease of fabrication, conformability, and durability of the device suggest its use as an interface device and as a sensor skin for future robots.
Kukec, Andreja; Boznar, Marija Z; Mlakar, Primoz; Grasic, Bostjan; Herakovic, Andrej; Zadnik, Vesna; Zaletel-Kragelj, Lijana; Farkas, Jerneja; Erzen, Ivan
2014-05-01
Research on atmospheric air pollution in complex terrain is challenged by the lack of appropriate methodology supporting the analysis of spatial relationships between phenomena affected by a multitude of factors. The key is the optimal design of a meaningful approach based on small spatial units of observation. The Zasavje region, Slovenia, was chosen as the study area with the main objective of investigating in practice the role of such units in a test environment. The process consisted of three steps: modelling of pollution in the atmosphere with dispersion models, transfer of the results to geographical information system software, and final determination of the function of the small spatial units. A methodology capable of designing useful units for atmospheric air pollution research in highly complex terrain was created, and the results were deemed useful in offering starting points for further research in the field of geospatial health.
NASA Astrophysics Data System (ADS)
Feng, Xingru; Li, Mingjie; Yin, Baoshu; Yang, Dezhou; Yang, Hongwei
2018-06-01
This is a study of the storm surge trends in some of the typhoon-prone coastal areas of China. An unstructured-grid, storm surge-wave-tide coupled model was established for the coastal areas of Zhejiang, Fujian and Guangdong provinces. The coupled model has a high resolution in coastal areas, and the simulated results compared well with the in situ observations and satellite altimeter data. The typhoon-induced storm surges along the coast of the study areas were simulated based on the established coupled model for the past 20 years (1997-2016). The simulated results were used to analyze the trends of the storm surges in the study area. The extreme storm surge trends along the central coast of Fujian Province reached up to 0.06 m/y, significant at the 90% confidence level. The duration of the storm surges greater than 1.0 and 0.7 m had an increasing trend along the coastal area of northern Fujian Province, significant at confidence levels of 70%-91%. The simulated trends of the extreme storm surges were also validated by observations from two tide gauge stations. Further studies show that the correlation coefficient (RTE) between the duration of the storm surge greater than 1 m and the annual ENSO index can reach as high as 0.62, significant at the 99% confidence level. This occurred in a location where the storm surge trend was not significant. For the areas with significant increasing storm surge trends, RTE was small and not significant. This study identified the storm surge trends for the full complex coastline of the study area. These results are useful both for coastal management by the government and for coastal engineering design.
Debris Detector Verification by HVI Tests
NASA Astrophysics Data System (ADS)
Bauer, Waldemar; Drolshagen, Gerhard; Vörsmann, Peter; Romberg, Oliver; Putzar, Robin
Information regarding Space Debris (SD) or Micrometeoroids (MM) impacting on spacecraft (S/C) or payloads (P/L) can be obtained by using environmental models, e.g. MASTER (ESA) or ORDEM (NASA). The validation of such models is performed by comparison of simulated results with measured or orbit-observed data. The latter are used for large particles and can be obtained from ground-based or space-based radars or telescopes. Data regarding very small but abundant particles can also be gained by analysis of retrieved hardware (e.g. Hubble Space Telescope, Space Shuttle windows), which is brought back from orbit to Earth. Furthermore, in-situ impact detectors are an essential source of information on small-size meteoroids and space debris. These kinds of detectors are placed in orbit and collect impact data on SD and MM, sending the data in near real time via telemetry. Compared to the impact data gained by analysis of retrieved surfaces, the detected data comprise additional information on the exact impact time and, depending on the type of detector, on the orbit and composition of the particles. Nevertheless, existing detectors have limitations. Since the detection area is small, a statistically meaningful number of impacts is obtained for very small particles only. Measurements of particles in the size range of hundreds of microns to millimetres, which are potentially damaging to S/C, require larger sensor areas. To make use of the advantages of in-situ impact detectors and to increase the amount of impact data, an innovative impact detector concept is currently under development at DLR in Bremen. Unlike all previous impact detectors, the Solar Generator based Impact Detector (SOLID) is not an add-on component on the S/C. SOLID makes use of existing subsystems of the S/C and adapts them for impact detection purposes. Since the number of impacts on a target in space depends linearly on the exposed area, the S/C solar panels offer a unique opportunity to use them for impact detection. Considering that the SOLID method could be applied to several S/Cs in different orbits, the spatial coverage in space concerning SD and MM can be significantly increased. In this way the method allows a large amount of impact data to be generated, which can be used for environmental model validation. This paper focuses on the verification of the SOLID method by Hypervelocity Impact (HVI) tests performed at Fraunhofer EMI. The test set-up and the achieved results are presented and discussed.
Evaluation of the Williams-type model for barley yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1981-01-01
The Williams-type yield model is based on multiple regression analysis of historical time series data at CRD level pooled to regional level (groups of similar CRDs). Basic variables considered in the analysis include USDA yield, monthly mean temperature, monthly precipitation, soil texture and topographic information, and variables derived from these. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-1979) demonstrate that biases are small and that performance based on root mean square error appears to be acceptable for the intended AgRISTARS large area applications. The model is objective, adequate, timely, simple, and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan Hruska
Currently, small Unmanned Aerial Vehicles (UAVs) are primarily used for capturing and down-linking real-time video. To date, their role as a low-cost airborne platform for capturing high-resolution, georeferenced still imagery has not been fully utilized. On-going work within the Unmanned Vehicle Systems Program at the Idaho National Laboratory (INL) is attempting to exploit this small UAV-acquired, still imagery potential. Initially, a UAV-based still imagery work flow model was developed that includes initial UAV mission planning, sensor selection, UAV/sensor integration, and imagery collection, processing, and analysis. Components to support each stage of the work flow are also being developed. Critical to the use of acquired still imagery is the ability to detect changes between images of the same area over time. To enhance the analysts’ change detection ability, a UAV-specific, GIS-based change detection system called SADI or System for Analyzing Differences in Imagery is under development. This paper will discuss the associated challenges and approaches to collecting still imagery with small UAVs. Additionally, specific components of the developed work flow system will be described and graphically illustrated using varied examples of small UAV-acquired still imagery.
A Gaussian random field model for similarity-based smoothing in Bayesian disease mapping.
Baptista, Helena; Mendes, Jorge M; MacNab, Ying C; Xavier, Miguel; Caldas-de-Almeida, José
2016-08-01
Conditionally specified Gaussian Markov random field (GMRF) models with an adjacency-based neighbourhood weight matrix, commonly known as neighbourhood-based GMRF models, have been the mainstream approach to spatial smoothing in Bayesian disease mapping. In the present paper, we propose a conditionally specified Gaussian random field (GRF) model with a similarity-based non-spatial weight matrix to facilitate non-spatial smoothing in Bayesian disease mapping. The model, named the similarity-based GRF, is motivated by disease mapping situations in which the underlying small area relative risks and the associated determinant factors do not vary systematically in space, and similarity is defined with respect to the associated disease determinant factors. The neighbourhood-based GMRF and the similarity-based GRF are compared and assessed via a simulation study and two case studies, using new data on alcohol abuse in Portugal collected by the World Mental Health Survey Initiative and the well-known lip cancer data in Scotland. In the presence of disease data with no evidence of positive spatial correlation, the simulation study showed a consistent gain in efficiency from the similarity-based GRF compared with the adjacency-based GMRF with the determinant risk factors as covariates. This new approach broadens the scope of the existing conditional autocorrelation models. © The Author(s) 2016.
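A minimal sketch of the distinction between an adjacency-based and a similarity-based weight matrix, assuming each small area is described by a vector of determinant-factor covariates; the Gaussian kernel and bandwidth below are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def adjacency_weights(adjacency):
    """Neighbourhood-based weights: 1 if two areas share a border, else 0."""
    W = np.asarray(adjacency, dtype=float)
    np.fill_diagonal(W, 0.0)
    return W

def similarity_weights(X, bandwidth=1.0):
    """Similarity-based weights from covariates X (n_areas x n_covariates):
    areas with similar determinant factors receive large weights regardless
    of whether they are spatial neighbours."""
    X = np.asarray(X, dtype=float)
    sq_dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-sq_dist / (2.0 * bandwidth ** 2))
    np.fill_diagonal(W, 0.0)
    return W
```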
NASA Technical Reports Server (NTRS)
Selna, James; Schlaff, Bernard A
1951-01-01
The drag and pressure recovery of an NACA submerged-inlet model and an NACA series I nose-inlet model were investigated in the transonic flight range. The tests were conducted over a mass-flow-ratio range of 0.4 to 0.8 and a Mach number range of about 0.8 to 1.10 employing large-scale recoverable free-fall models. The results indicate that the Mach number of drag divergence of the inlet models was about the same as that of a basic model without inlets. The external drag coefficients of the nose-inlet model were less than those of the submerged-inlet model throughout the test range. The difference in drag coefficient based on the maximum cross-sectional area of the models was about 0.02 at supersonic speeds and about 0.015 at subsonic speeds. For a hypothetical airplane with a ratio of maximum fuselage cross-sectional area to wing area of 0.06, the difference in airplane drag coefficient would be relatively small, about 0.0012 at supersonic speeds and about 0.0009 at subsonic speeds. Additional drag comparisons between the two inlet models are made considering inlet incremental and additive drag.
Estimating influence of stocking regimes on livestock grazing distributions
USDA-ARS?s Scientific Manuscript database
Ungulates often concentrate grazing at small hotspots in the larger landscape, and dispersing livestock away from these intensively grazed areas is one of the central challenges in range management. We evaluated a technique based on shifting the stocking date to prevent overgrazing of small areas co...
Evaluation of Resources Carrying Capacity in China Based on Remote Sensing and GIS
NASA Astrophysics Data System (ADS)
Liu, K.; Gan, Y. H.; Zhang, T.; Luo, Z. Y.; Wang, J. J.; Lin, F. N.
2018-04-01
This paper extracted information on arable land, grassland (wetland), forest land, water areas and construction land based on 1:250,000 basic geographic information data. The comprehensive CCRR model was modified so that the carrying capacity calculation takes resource quality into consideration, ultimately yielding a comprehensive assessment of CCRR status in China. The ten cities where the carrying capacity of resources was most overloaded were Wenzhou, Shanghai, Chengdu, Baoding, Shantou, Jieyang, Dongguan, Fuyang, Zhoukou and Handan. These cities are mostly distributed in central and southern China, in areas with convenient transportation and more developed economies. Among the cities in surplus status, the resources carrying capacity of Hulun Buir was the most abundant, followed by Heihe, Bayingolin Mongol Autonomous Prefecture, Qiqihar, Chifeng and Jiamusi, all of which are located in northeastern China, with small populations and plentiful cultivated land.
Seafood prices reveal impacts of a major ecological disturbance
Smith, Martin D.; Oglend, Atle; Kirkpatrick, A. Justin; Asche, Frank; Bennear, Lori S.; Craig, J. Kevin; Nance, James M.
2017-01-01
Coastal hypoxia (dissolved oxygen ≤ 2 mg/L) is a growing problem worldwide that threatens marine ecosystem services, but little is known about economic effects on fisheries. Here, we provide evidence that hypoxia causes economic impacts on a major fishery. Ecological studies of hypoxia and marine fauna suggest multiple mechanisms through which hypoxia can skew a population’s size distribution toward smaller individuals. These mechanisms produce sharp predictions about changes in seafood markets. Hypoxia is hypothesized to decrease the quantity of large shrimp relative to small shrimp and increase the price of large shrimp relative to small shrimp. We test these hypotheses using time series of size-based prices. Naive quantity-based models using treatment/control comparisons in hypoxic and nonhypoxic areas produce null results, but we find strong evidence of the hypothesized effects in the relative prices: Hypoxia increases the relative price of large shrimp compared with small shrimp. The effects of fuel prices provide supporting evidence. Empirical models of fishing effort and bioeconomic simulations explain why quantifying effects of hypoxia on fisheries using quantity data has been inconclusive. Specifically, spatial-dynamic feedbacks across the natural system (the fish stock) and human system (the mobile fishing fleet) confound “treated” and “control” areas. Consequently, analyses of price data, which rely on a market counterfactual, are able to reveal effects of the ecological disturbance that are obscured in quantity data. Our results are an important step toward quantifying the economic value of reduced upstream nutrient loading in the Mississippi Basin and are broadly applicable to other coupled human-natural systems. PMID:28137850
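The core empirical idea, that hypoxia should raise the price of large shrimp relative to small shrimp, can be sketched as a time-series regression of the log price ratio on a hypoxia measure plus controls; the variable names and the single fuel-price control below are placeholders, not the authors' specification.

```python
import numpy as np
import statsmodels.api as sm

def relative_price_test(p_large, p_small, hypoxia, fuel_price):
    """Regress log(P_large / P_small) on a hypoxia measure and a fuel-price
    control; a positive, significant hypoxia coefficient is consistent with
    the hypothesised shift of the catch toward smaller shrimp."""
    y = np.log(np.asarray(p_large, float) / np.asarray(p_small, float))
    X = sm.add_constant(np.column_stack([hypoxia, fuel_price]))
    return sm.OLS(y, X).fit()
```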
Re-Engineering Complex Legacy Systems at NASA
NASA Technical Reports Server (NTRS)
Ruszkowski, James; Meshkat, Leila
2010-01-01
The Flight Production Process (FPP) Re-engineering project has established a Model-Based Systems Engineering (MBSE) methodology and the technological infrastructure for the design and development of a reference, product-line architecture as well as an integrated workflow model for the Mission Operations System (MOS) for human space exploration missions at NASA Johnson Space Center. The design and architectural artifacts have been developed based on the expertise and knowledge of numerous Subject Matter Experts (SMEs). The technological infrastructure developed by the FPP Re-engineering project has enabled the structured collection and integration of this knowledge and further provides simulation and analysis capabilities for optimization purposes. A key strength of this strategy has been the judicious combination of COTS products with custom coding. The lean management approach that has led to the success of this project is based on having a strong vision for the whole lifecycle of the project and its progress over time, a goal-based design and development approach, a small team of highly specialized people in areas that are critical to the project, and an interactive approach for infusing new technologies into existing processes. This project, which has had a relatively small amount of funding, is on the cutting edge with respect to the utilization of model-based design and systems engineering. An overarching challenge that was overcome by this project was to convince upper management of the needs and merits of giving up more conventional design methodologies (such as paper-based documents and unwieldy and unstructured flow diagrams and schedules) in favor of advanced model-based systems engineering approaches.
Small target pre-detection with an attention mechanism
NASA Astrophysics Data System (ADS)
Wang, Yuehuan; Zhang, Tianxu; Wang, Guoyou
2002-04-01
We introduce the concept of pre-detection based on an attention mechanism to improve the efficiency of small-target detection by limiting the image region over which detection is performed. According to the characteristics of small-target detection, local contrast is taken as the only feature in pre-detection, and a nonlinear sampling model is adopted to make the pre-detection adaptive to small targets of different sizes. To simplify the pre-detection itself and decrease the false alarm probability, neighboring nodes in the sampling grid are used to generate a saliency map, and a short-term memory is adopted to accelerate the 'pop-out' of targets. The proposed approach has low computational complexity, and, even in a cluttered background, attention can be directed to targets within a satisfyingly small number of iterations, which ensures that detection efficiency is not degraded by false alarms. Experimental results are presented to demonstrate the applicability of the approach.
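A minimal sketch of a local-contrast map of the kind that can steer attention toward candidate small targets before running a full detector; the window sizes, the contrast definition and the retained fraction are illustrative assumptions rather than the authors' nonlinear sampling model.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_map(image, inner=3, outer=9):
    """Contrast of a small inner window against its local background:
    high values flag pixels that may belong to small targets."""
    img = np.asarray(image, dtype=float)
    centre = uniform_filter(img, size=inner)
    surround = uniform_filter(img, size=outer)
    return np.abs(centre - surround)

def candidate_regions(image, keep_fraction=0.01):
    """Keep only the most salient fraction of pixels, limiting the region
    that the (more expensive) detection stage has to examine."""
    saliency = local_contrast_map(image)
    threshold = np.quantile(saliency, 1.0 - keep_fraction)
    return saliency >= threshold
```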
Study of research and development requirements of small gas-turbine combustors
NASA Technical Reports Server (NTRS)
Demetri, E. P.; Topping, R. F.; Wilson, R. P., Jr.
1980-01-01
A survey of the major small-engine manufacturers and governmental users is presented, and a consensus on small-combustor requirements was established. The results presented are based on an evaluation of the information obtained in the course of the study. The current status of small-combustor technology is reviewed. The principal problems lie in liner cooling, fuel injection, part-power performance, and ignition. Projections of future engine requirements and their effect on the combustor are discussed. The major changes anticipated are significant increases in operating pressure and temperature levels and a greater capability of using heavier alternative fuels. All aspects of combustor design are affected, but the principal impact is on liner durability. An R&D plan which addresses the critical combustor needs is described. The plan consists of 15 recommended programs for achieving necessary advances in the areas of liner thermal design, primary-zone performance, fuel injection, dilution, analytical modeling, and alternative-fuel utilization.
Small Subunits of Serine Palmitoyltransferase (ssSPTs) and Their Physiological Roles
2014-02-12
showing that organisms also have unique sphingoid base chain lengths. In insects, such as Drosophila melanogaster, the predominant chain lengths of the ... Drosophila melanogaster mutant defective in male meiotic cytokinesis (‘Ghiberti’) has a mutation in a gene with low homology to the ssSPT subunits of... INTRODUCTION: Sphingolipid metabolism in Drosophila melanogaster (fly) is an active area of research. It is a good model system to study the roles of
Simulating Complex Satellites and a Space-Based Surveillance Sensor Simulation
2009-09-01
high-resolution imagery (Fig. 1). Thus other means for characterizing satellites will need to be developed. Research into non-resolvable space object... computing power and time. The second way, which we are using here, is to create simpler models of satellite bodies and use albedo-area calculations... their position, movement, size, and physical features. However, there are many satellites in orbit that are simply too small or too far away to resolve by
Ferguson, Neil S; Lamb, Karen E; Wang, Yang; Ogilvie, David; Ellaway, Anne
2013-01-01
Obesity and other chronic conditions linked with low levels of physical activity (PA) are associated with deprivation. One reason for this could be that it is more difficult for low-income groups to access recreational PA facilities such as swimming pools and sports centres than high-income groups. In this paper, we explore the distribution of access to PA facilities by car and bus across mainland Scotland by income deprivation at datazone level. GIS car and bus networks were created to determine the number of PA facilities accessible within travel times of 10, 20 and 30 minutes. Multilevel negative binomial regression models were then used to investigate the distribution of the number of accessible facilities, adjusting for datazone population size and local authority. Access to PA facilities by car was significantly (p<0.01) higher for the most affluent quintile of area-based income deprivation than for most other quintiles in small towns and all other quintiles in rural areas. Accessibility by bus was significantly lower for the most affluent quintile than for other quintiles in urban areas and small towns, but not in rural areas. Overall, we found that the most disadvantaged groups were those without access to a car and living in the most affluent areas or in rural areas.
Lin, Yu-Hsiu; McLain, Alexander C; Probst, Janice C; Bennett, Kevin J; Qureshi, Zaina P; Eberth, Jan M
2017-01-01
The purpose of this study was to develop county-level estimates of poor health-related quality of life (HRQOL) among U.S. adults aged 65 years and older and to identify spatial clusters of poor HRQOL using a multilevel, poststratification approach. Multilevel, random-intercept models were fit to HRQOL data (two domains: physical health and mental health) from the 2011-2012 Behavioral Risk Factor Surveillance System. Using a poststratification, small area estimation approach, we generated county-level probabilities of having poor HRQOL for each domain in U.S. adults aged 65 and older, and validated our model-based estimates against state and county direct estimates. County-level estimates of poor HRQOL in the United States ranged from 18.07% to 44.81% for physical health and 14.77% to 37.86% for mental health. Correlations between model-based and direct estimates were higher for physical than mental HRQOL. Counties located in Arkansas, Kentucky, and Mississippi exhibited the worst physical HRQOL scores, but this pattern did not hold for mental HRQOL, for which the probability of mentally unhealthy days was highest in Illinois, Indiana, and Vermont. Substantial geographic variation in physical and mental HRQOL scores exists among older U.S. adults. State and local policy makers should consider these local conditions in targeting interventions and policies to counties with high levels of poor HRQOL scores. Copyright © 2016 Elsevier Inc. All rights reserved.
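In a multilevel regression and poststratification setup of this kind, the county estimate is a census-weighted average of cell-level model predictions. The expression below is a generic statement of the poststratification step; the exact cell definitions used by the authors may differ:

```latex
\hat{p}_c \;=\; \frac{\sum_{j} N_{cj}\,\hat{\theta}_{cj}}{\sum_{j} N_{cj}}
```

where j indexes poststratification cells (e.g., demographic subgroups) within county c, N_{cj} is the census population count for cell j in county c, and \hat{\theta}_{cj} is the probability of poor HRQOL predicted for that cell by the multilevel random-intercept model.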
Global fire emissions estimates during 1997-2016
NASA Astrophysics Data System (ADS)
van der Werf, Guido R.; Randerson, James T.; Giglio, Louis; van Leeuwen, Thijs T.; Chen, Yang; Rogers, Brendan M.; Mu, Mingquan; van Marle, Margreet J. E.; Morton, Douglas C.; Collatz, G. James; Yokelson, Robert J.; Kasibhatla, Prasad S.
2017-09-01
Climate, land use, and other anthropogenic and natural drivers have the potential to influence fire dynamics in many regions. To develop a mechanistic understanding of the changing role of these drivers and their impact on atmospheric composition, long-term fire records are needed that fuse information from different satellite and in situ data streams. Here we describe the fourth version of the Global Fire Emissions Database (GFED) and quantify global fire emissions patterns during 1997-2016. The modeling system, based on the Carnegie-Ames-Stanford Approach (CASA) biogeochemical model, has several modifications from the previous version and uses higher quality input datasets. Significant upgrades include (1) new burned area estimates with contributions from small fires, (2) a revised fuel consumption parameterization optimized using field observations, (3) modifications that improve the representation of fuel consumption in frequently burning landscapes, and (4) fire severity estimates that better represent continental differences in burning processes across boreal regions of North America and Eurasia. The new version has a higher spatial resolution (0.25°) and uses a different set of emission factors that separately resolves trace gas and aerosol emissions from temperate and boreal forest ecosystems. Global mean carbon emissions using the burned area dataset with small fires (GFED4s) were 2.2 × 10^15 grams of carbon per year (Pg C yr-1) during 1997-2016, with a maximum in 1997 (3.0 Pg C yr-1) and minimum in 2013 (1.8 Pg C yr-1). These estimates were 11 % higher than our previous estimates (GFED3) during 1997-2011, when the two datasets overlapped. This net increase was the result of a substantial increase in burned area (37 %), mostly due to the inclusion of small fires, and a modest decrease in mean fuel consumption (-19 %) to better match estimates from field studies, primarily in savannas and grasslands. For trace gas and aerosol emissions, differences between GFED4s and GFED3 were often larger due to the use of revised emission factors. If small fire burned area was excluded (GFED4, i.e. without the 's' for small fires), average emissions were 1.5 Pg C yr-1. The addition of small fires had the largest impact on emissions in temperate North America, Central America, Europe, and temperate Asia. This small fire layer carries substantial uncertainties; improving these estimates will require use of new burned area products derived from high-resolution satellite imagery. Our revised dataset provides an internally consistent set of burned area and emissions that may contribute to a better understanding of multi-decadal changes in fire dynamics and their impact on the Earth system. GFED data are available from http://www.globalfiredata.org.
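The bookkeeping behind such emissions estimates is commonly written as burned area times available fuel times combustion completeness times an emission factor (the Seiler and Crutzen formulation); the sketch below is illustrative only, not the GFED code, and the example numbers are rough placeholders.

```python
def fire_emissions(burned_area_m2, fuel_load_kg_m2, combustion_completeness,
                   emission_factor_g_per_kg):
    """Emitted mass of a trace species (g) for one grid cell and time step:
    burned area x fuel load x fraction of fuel combusted x emission factor
    (grams of species per kilogram of dry matter burned)."""
    dry_matter_kg = burned_area_m2 * fuel_load_kg_m2 * combustion_completeness
    return dry_matter_kg * emission_factor_g_per_kg

# Placeholder example: 1 km^2 of savanna, 0.6 kg dry matter per m^2, 85%
# combusted, and a CO2 emission factor of roughly 1700 g per kg dry matter.
print(fire_emissions(1.0e6, 0.6, 0.85, 1700.0))
```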
Effects of surface tension and intraluminal fluid on mechanics of small airways.
Hill, M J; Wilson, T A; Lambert, R K
1997-01-01
Airway constriction is accompanied by folding of the mucosa to form ridges that run axially along the inner surface of the airways. The mucosa has been modeled (R. K. Lambert, J. Appl. Physiol. 71:666-673, 1991) as a thin elastic layer with a finite bending stiffness, and the contribution of its bending stiffness to airway elastance has been computed. In this study, we extend that work by including surface tension and intraluminal fluid in the model. With surface tension, the pressure on the inner surface of the elastic mucosa is modified by the pressure difference across the air-liquid interface. As folds form in the mucosa, intraluminal fluid collects in pools in the depressions formed by the folds, and the curvature of the air-liquid interface becomes nonuniform. If the amount of intraluminal fluid is small, < 2% of luminal volume, the pools of intraluminal fluid are small, the air-liquid interface nearly coincides with the surface of the mucosa, and the area of the air-liquid interface remains constant as airway cross-sectional area decreases. In that case, surface energy is independent of airway area, and surface tension has no effect on airway mechanics. If the amount of intraluminal fluid is > 2%, the area of the air-liquid interface decreases as airway cross-sectional area decreases, and surface tension contributes to airway compression. The model predicts that surface tension plus intraluminal fluid can cause an instability in the area-pressure curve of small airways. This instability provides a mechanism for abrupt airway closure and abrupt reopening at a higher opening pressure.
Busing, Richard T.; Solomon, Allen M.
2005-01-01
An individual-based model of forest dynamics (FORCLIM) was tested for its ability to simulate forest composition and structure in the Pacific Northwest region of North America. Simulation results across gradients of climate and disturbance were compared to forest survey data from several vegetation zones in western Oregon. Modelled patterns of tree species composition, total basal area and stand height across climate gradients matched those in the forest survey data. However, the density of small stems (<50 cm DBH) was underestimated by the model. Thus actual size-class structure and other density-based parameters of stand structure were not simulated with high accuracy. The addition of partial-stand disturbances at moderate frequencies (<0.01 yr-1) often improved agreement between simulated and actual results. Strengths and weaknesses of the FORCLIM model in simulating forest dynamics and structure in the Pacific Northwest are discussed.
Liu, Lian; Zhang, Shao-Wu; Huang, Yufei; Meng, Jia
2017-08-31
As a newly emerging research area, RNA epigenetics has recently drawn increasing attention because of the participation of RNA methylation and other modifications in a number of crucial biological processes. Thanks to high-throughput sequencing techniques such as MeRIP-Seq, transcriptome-wide RNA methylation profiles are now available in the form of count-based data, with which it is often of interest to study the dynamics of the epitranscriptomic layer. However, the sample size of an RNA methylation experiment is usually very small due to its cost; additionally, there is usually a large number of genes whose methylation level cannot be accurately estimated due to their low expression level, making differential RNA methylation analysis a difficult task. We present QNB, a statistical approach for differential RNA methylation analysis with count-based small-sample sequencing data. Compared with previous approaches such as the DRME model, which is based on a statistical test covering only the IP samples with two negative binomial distributions, QNB is based on four independent negative binomial distributions whose variances and means are linked by local regressions; in this way, the input control samples are also properly accounted for. In addition, unlike the DRME approach, which relies on the input control sample alone for estimating the background, QNB uses a more robust estimator of gene expression that combines information from both input and IP samples, which can largely improve testing performance for very lowly expressed genes. QNB showed improved performance on both simulated and real MeRIP-Seq datasets when compared with competing algorithms. The QNB model is also applicable to other datasets related to RNA modifications, including but not limited to RNA bisulfite sequencing, m1A-Seq, PAR-CLIP, RIP-Seq, etc.
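To make the count-based setting concrete, the sketch below parameterises a negative binomial by mean and dispersion and evaluates a crude log-likelihood-ratio between a shared-methylation and a condition-specific model for one gene; it omits the local-regression shrinkage of variances and the robust expression estimator that distinguish QNB, so it is only an illustration of the testing idea, with all names and the enrichment heuristic being assumptions.

```python
import numpy as np
from scipy.stats import nbinom

def nb_loglik(counts, mean, dispersion):
    """Negative binomial log-likelihood with a mean/dispersion parameterisation:
    size r = 1/dispersion and success probability p = r / (r + mean)."""
    r = 1.0 / dispersion
    p = r / (r + mean)
    return nbinom.logpmf(np.asarray(counts), r, p).sum()

def lr_statistic(ip_a, input_a, ip_b, input_b, dispersion=0.1):
    """Likelihood-ratio statistic comparing condition-specific IP means
    (differential methylation) against a single shared IP mean, with the IP
    mean expressed as input mean x enrichment ratio; dispersion is held fixed
    for simplicity."""
    ip_a, input_a = np.asarray(ip_a, float), np.asarray(input_a, float)
    ip_b, input_b = np.asarray(ip_b, float), np.asarray(input_b, float)

    def cond_loglik(ip, inp):
        enrichment = (ip.mean() + 0.5) / (inp.mean() + 0.5)  # crude estimate
        return nb_loglik(ip, inp.mean() * enrichment, dispersion)

    ll_separate = cond_loglik(ip_a, input_a) + cond_loglik(ip_b, input_b)
    ll_shared = cond_loglik(np.concatenate([ip_a, ip_b]),
                            np.concatenate([input_a, input_b]))
    return 2.0 * (ll_separate - ll_shared)
```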
Downscaling Ocean Conditions: Initial Results using a Quasigeostrophic and Realistic Ocean Model
NASA Astrophysics Data System (ADS)
Katavouta, Anna; Thompson, Keith
2014-05-01
Previous theoretical work (Henshaw et al., 2003) has shown that the small-scale modes of variability of solutions of the unforced, incompressible Navier-Stokes equation, and Burgers' equation, can be reconstructed with surprisingly high accuracy from the time history of a few of the large-scale modes. Motivated by this theoretical work, we first describe a straightforward method for assimilating information on the large scales in order to recover the small-scale oceanic variability. The method is based on nudging in specific wavebands and frequencies and is similar to the so-called spectral nudging method that has been used successfully for atmospheric downscaling with limited-area models (e.g. von Storch et al., 2000). The validity of the method is tested using a quasi-geostrophic model configured to simulate a double ocean gyre separated by an unstable mid-ocean jet. It is shown that important features of the ocean circulation, including the position of the meandering mid-ocean jet and associated pinch-off eddies, can indeed be recovered from the time history of a small number of large-scale modes. The benefit of assimilating additional time series of observations from a limited number of locations, which alone are too sparse to significantly improve the recovery of the small scales using traditional assimilation techniques, is also demonstrated using several twin experiments. The final part of the study outlines the application of the approach using a realistic high-resolution (1/36 degree) model, based on the NEMO (Nucleus for European Modelling of the Ocean) modeling framework, configured for the Scotian Shelf off the east coast of Canada. The large-scale conditions used in this application are obtained from the HYCOM (HYbrid Coordinate Ocean Model) + NCODA (Navy Coupled Ocean Data Assimilation) global 1/12 degree analysis product. Henshaw, W., Kreiss, H.-O., Ystrom, J., 2003. Numerical experiments on the interaction between the larger- and the small-scale motion of the Navier-Stokes equations. Multiscale Modeling and Simulation 1, 119-149. von Storch, H., Langenberg, H., Feser, F., 2000. A spectral nudging technique for dynamical downscaling purposes. Monthly Weather Review 128, 3664-3673.
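A minimal one-dimensional sketch of the spectral nudging idea: only the lowest wavenumbers of the model field are relaxed toward the driving (large-scale) field, leaving the smaller scales free to develop. The cutoff and relaxation coefficient are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def spectral_nudge(model_field, driving_field, n_large_modes=4, alpha=0.1):
    """Relax only the large-scale Fourier modes of a periodic 1-D field
    toward the driving field; small-scale modes are left untouched."""
    fm = np.fft.rfft(model_field)
    fd = np.fft.rfft(driving_field)
    fm[:n_large_modes] += alpha * (fd[:n_large_modes] - fm[:n_large_modes])
    return np.fft.irfft(fm, n=len(model_field))
```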
Effect of slip-area scaling on the earthquake frequency-magnitude relationship
NASA Astrophysics Data System (ADS)
Senatorski, Piotr
2017-06-01
The earthquake frequency-magnitude relationship is considered from the maximum entropy principle (MEP) perspective. The MEP suggests sampling with constraints as a simple stochastic model of seismicity. The model is based on von Neumann's acceptance-rejection method, with the b-value as the parameter that breaks the symmetry between small and large earthquakes. The Gutenberg-Richter law's b-value forms a link between earthquake statistics and physics. A dependence between the b-value and the rupture area versus slip scaling exponent is derived. The relationship enables us to explain the observed ranges of b-values for different types of earthquakes. Specifically, the different b-value ranges for tectonic and induced, hydraulic-fracturing seismicity are explained in terms of their different triggering mechanisms: by applied stress increase and by fault strength reduction, respectively.
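As a concrete illustration of the sampling-with-constraints idea, the sketch below uses von Neumann acceptance-rejection to draw magnitudes whose exceedance frequencies follow the Gutenberg-Richter relation log10 N(>=m) = a - b m; the magnitude range and the uniform proposal are illustrative assumptions, not the author's exact construction.

```python
import numpy as np

def sample_magnitudes(n, b=1.0, m_min=2.0, m_max=8.0, rng=None):
    """Draw n magnitudes consistent with the Gutenberg-Richter law using
    von Neumann acceptance-rejection: propose uniformly on [m_min, m_max]
    and accept with probability exp(-beta * (m - m_min)), beta = b * ln(10).
    Larger b-values make large earthquakes increasingly rare relative to
    small ones."""
    rng = np.random.default_rng() if rng is None else rng
    beta = b * np.log(10.0)
    samples = []
    while len(samples) < n:
        m = rng.uniform(m_min, m_max)
        if rng.uniform() < np.exp(-beta * (m - m_min)):
            samples.append(m)
    return np.array(samples)
```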
Chen, Jing; Sun, Bo-Ming; Chen, Dan; Wu, Xin; Guo, Long-Zhu; Wang, Gang
2014-01-01
The small Sanjiang plain is one of the most important commodity grain production bases and the largest fresh water wetland in China. Due to the rapid expansion of agricultural activities in the past 30 years, the contradiction between economic development and the loss of ecosystem services has become an issue of increasing concern in the area. In this study, we analysed land use changes and the loss of ecosystem services value caused by these changes. We found that cropland sprawl was predominant and occurred in forest, wetland, and grassland areas in the small Sanjiang plain from 1980 to 2010. Using a model to evaluate ecosystem services value, we calculated that the decreased values of ecosystem services were 169.88 × 10^8 Yuan from 1980 to 2000 and 120.00 × 10^8 Yuan from 2000 to 2010. All of the ecosystem services were diminished from 1980 to 2010 except for food production. Therefore, the loss of ecosystem services value should be considered by the policymakers of land use and development.
Two physically based, deterministic models, CASC2-D and KINEROS, are evaluated and compared for their performance in modeling sediment movement on a small agricultural watershed over several events. Each model has a different conceptualization of a watershed. CASC...
Downscaling ocean conditions: Experiments with a quasi-geostrophic model
NASA Astrophysics Data System (ADS)
Katavouta, A.; Thompson, K. R.
2013-12-01
The predictability of small-scale ocean variability, given the time history of the associated large scales, is investigated using a quasi-geostrophic model of two wind-driven gyres separated by an unstable, mid-ocean jet. Motivated by the recent theoretical study of Henshaw et al. (2003), we propose a straightforward method for assimilating information on the large scales in order to recover the small-scale details of the quasi-geostrophic circulation. The similarity of this method to the spectral nudging of limited-area atmospheric models is discussed. Results from the spectral nudging of the quasi-geostrophic model, and an independent multivariate regression-based approach, show that important features of the ocean circulation, including the position of the meandering mid-ocean jet and the associated pinch-off eddies, can be recovered from the time history of a small number of large-scale modes. We next propose a hybrid approach for assimilating both the large scales and additional observed time series from a limited number of locations that alone are too sparse to recover the small scales using traditional assimilation techniques. The hybrid approach significantly improved the recovery of the small scales. The results highlight the importance of the coupling between length scales in downscaling applications, and the value of assimilating limited point observations after the large scales have been set correctly. The application of the hybrid and spectral nudging approaches to practical ocean forecasting, and to projecting changes in ocean conditions on climate time scales, is discussed briefly.
NASA Astrophysics Data System (ADS)
Al-Abadi, Alaa M.; Al-Shamma'a, Ayser M.; Aljabbari, Mukdad H.
2017-03-01
In this study, the intrinsic groundwater vulnerability of the shallow aquifer in northeastern Missan governorate, southern Iraq, is evaluated using the commonly used DRASTIC model within a GIS framework. The DRASTIC parameters are prepared by gathering data from different sources, including a field survey, geological and meteorological data, a digital elevation model (DEM) of the study area, archival databases, and published research. The different data used to build the DRASTIC model are arranged in a geospatial database using the Spatial Analyst extension of ArcGIS 10.2 software. The results for vulnerability to general contaminants show that the study area is characterized by two vulnerability zones: low and moderate. Ninety-four per cent (94%) of the study area has low groundwater vulnerability to contamination, whereas 6% has moderate vulnerability. The pesticide DRASTIC index map shows that the study area is also characterized by two vulnerability zones: low and moderate. The map for this version shows that a small percentage (13%) of the study area has low vulnerability to contamination, while most of the area (about 87%) has moderate vulnerability. The final results indicate that the aquifer system in the study area is relatively protected from contamination originating at the ground surface. To mitigate contamination risks in the moderate vulnerability zones, protective measures must be put in place before the aquifer is exploited and before comprehensive agricultural activities begin in the area.
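The DRASTIC index itself is a weighted sum of seven rated hydrogeological factors. The sketch below uses the weights commonly cited for the generic and pesticide versions of the method; the ratings and the example values are placeholders and should be checked against the original DRASTIC documentation and the layers built for this study.

```python
# Commonly cited DRASTIC weights (generic / pesticide versions); ratings
# (1-10) come from reclassified GIS layers for each parameter.
GENERIC_WEIGHTS   = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}
PESTICIDE_WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 5, "T": 3, "I": 4, "C": 2}

def drastic_index(ratings, weights=GENERIC_WEIGHTS):
    """DRASTIC index = sum of weight x rating over Depth to water, net
    Recharge, Aquifer media, Soil media, Topography, Impact of the vadose
    zone and hydraulic Conductivity; higher values mean higher vulnerability."""
    return sum(weights[k] * ratings[k] for k in weights)

# Hypothetical cell with moderate ratings for every parameter
print(drastic_index({"D": 5, "R": 6, "A": 5, "S": 4, "T": 9, "I": 5, "C": 2}))
```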
No Place To Hide: Substance Abuse in Mid-Size Cities and Rural America.
ERIC Educational Resources Information Center
Columbia Univ., New York, NY. National Center on Addiction and Substance Abuse.
America's substance abuse epidemic has come to rural America. Adults in small metropolitan and rural areas are just as likely as those in urban America to use and abuse illegal drugs, alcohol, and tobacco. Young teens in small metropolitan and rural areas are more likely to abuse substances than those in large metro areas. Based on a wide variety…
NASA Astrophysics Data System (ADS)
Deguchi, T.; Kim, H. J.; Ikeda, T.
2017-05-01
The mechanical behavior of ductile cast iron is governed by graphite particles and casting defects in the microstructure, which can significantly decrease the fatigue strength. In our previous study, the fatigue limit of ferritic-pearlitic ductile cast iron specimens with small defects (√area = 80-1500 μm) could successfully be predicted based on the √area parameter model, using √area as a geometrical parameter of the defect and the tensile strength as a material parameter. In addition, the fatigue limit for larger defects could be predicted based on the conventional fracture mechanics approach. In this study, rotating bending and tension-compression fatigue tests with ferritic-pearlitic ductile cast iron containing circumferential sharp notches, as well as smooth specimens, were performed to investigate quantitatively the effects of defects. The notch depths ranged from 10 to 2500 μm and the notch root radii were 5 and 50 μm. The stress ratios were R = -1 and 0.1. Microscopic observation of crack propagation near the fatigue limit revealed that the fatigue limit was determined by the threshold condition for propagation of a small crack emanating from graphite particles. The fatigue limit could be successfully predicted as a function of R using a method proposed in this study.
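For reference, the √area parameter model is most often quoted in the Murakami form below for surface defects, with the Vickers hardness HV as the material parameter and a correction for the stress ratio R; the study above replaces the hardness term with one based on tensile strength, so this is the generic textbook form rather than the authors' exact expression:

```latex
\sigma_w = \frac{1.43\,(HV + 120)}{(\sqrt{\mathrm{area}})^{1/6}}
\left(\frac{1-R}{2}\right)^{\alpha}, \qquad \alpha = 0.226 + HV \times 10^{-4}
```

with \sigma_w the fatigue limit in MPa, HV in kgf/mm², and \sqrt{\mathrm{area}} the defect size in μm.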
Diana, M; Agnus, V; Halvax, P; Liu, Y-Y; Dallemagne, B; Schlagowski, A-I; Geny, B; Diemunsch, P; Lindner, V; Marescaux, J
2015-01-01
Fluorescence videography is a promising technique for assessing bowel perfusion. Fluorescence-based enhanced reality (FLER) is a novel concept, in which a dynamic perfusion cartogram, generated by computer analysis, is superimposed on to real-time laparoscopic images. The aim of this experimental study was to assess the accuracy of FLER in detecting differences in perfusion in a small bowel resection-anastomosis model. A small bowel ischaemic segment was created laparoscopically in 13 pigs. Animals were allocated to having anastomoses performed at either low perfusion (25 per cent; n = 7) or high perfusion (75 per cent; n = 6), as determined by FLER analysis. Capillary lactate levels were measured in blood samples obtained by serosal puncturing in the ischaemic area, resection lines and vascularized areas. Pathological inflammation scoring of the anastomosis was carried out. Lactate levels in the ischaemic area (mean(s.d.) 5·6(2·8) mmol/l) were higher than those in resection lines at 25 per cent perfusion (3·7(1·7) mmol/l; P = 0·010) and 75 per cent perfusion (2·9(1·3) mmol/l; P < 0·001), and higher than levels in vascular zones (2·5(1·0) mmol/l; P < 0·001). Lactate levels in resection lines with 75 per cent perfusion were lower than those in lines with 25 per cent perfusion (P < 0·001), and similar to those in vascular zones (P = 0·188). Levels at resection lines with 25 per cent perfusion were higher than those in vascular zones (P = 0·001). Mean(s.d.) global inflammation scores were higher in the 25 per cent perfusion group compared with the 75 per cent perfusion group for mucosa/submucosa (2·1(0·4) versus 1·2(0·4); P = 0·003) and serosa (1·8(0·4) versus 0·8(0·8); P = 0·014). A ratio of preanastomotic lactate levels in the ischaemic area relative to the resection lines of 2 or less was predictive of a more severe inflammation score. In an experimental model, FLER appeared accurate in discriminating bowel perfusion levels. Surgical relevance Clinical assessment has limited accuracy in evaluating bowel perfusion before anastomosis. Fluorescence videography estimates intestinal perfusion based on the fluorescence intensity of injected fluorophores, which is proportional to bowel vascularization. However, evaluation of fluorescence intensity remains a static and subjective measure. Fluorescence-based enhanced reality (FLER) is a dynamic fluorescence videography technique integrating near-infrared endoscopy and specific software. The software generates a virtual perfusion cartogram based on time to peak fluorescence, which can be superimposed on to real-time laparoscopic images. This experimental study demonstrates the accuracy of FLER in detecting differences in bowel perfusion in a survival model of laparoscopic small bowel resection-anastomosis, based on biochemical and histopathological data. It is concluded that real-time imaging of bowel perfusion is easy to use and accurate, and should be translated into clinical use. © 2015 BJS Society Ltd. Published by John Wiley & Sons Ltd.
Simulating the hydrologic cycle in coal mining subsidence areas with a distributed hydrologic model
Wang, Jianhua; Lu, Chuiyu; Sun, Qingyan; Xiao, Weihua; Cao, Guoliang; Li, Hui; Yan, Lingjia; Zhang, Bo
2017-01-01
Large-scale ground subsidence caused by coal mining and subsequent water-filling leads to serious environmental problems and economic losses, especially in plains with a high phreatic water level. Clarifying the hydrologic cycle in subsidence areas has important practical value for environmental remediation, and provides a scientific basis for water resource development and utilisation of the subsidence areas. Here we present a simulation approach to describe interactions between subsidence area water (SW) and several hydrologic factors from the River-Subsidence-Groundwater Model (RSGM), which is developed based on the distributed hydrologic model. Analysis of water balance shows that the recharge of SW from groundwater only accounts for a small fraction of the total water source, due to weak groundwater flow in the plain. The interaction between SW and groundwater has an obvious annual cycle. The SW basically performs as a net source of groundwater in the wet season, and a net sink for groundwater in the dry season. The results show there is an average 905.34 million m3 per year of water available through the Huainan coal mining subsidence areas (HCMSs). If these subsidence areas can be integrated into water resource planning, the increasingly precarious water supply infrastructure will be strengthened. PMID:28106048
Xiao, Nina J; Medley, Colin D; Shieh, Ian C; Downing, Gregory; Pizarro, Shelly; Liu, Jun; Patel, Ankit R
Leachables from single-use bioprocess containers (BPCs) are a source of process-related impurities that have the potential to alter the product quality of biotherapeutics and affect patient health. Leachables often exist at very low concentrations, making it difficult to detect their presence and challenging to assess their impact on protein quality. A small-scale stress model based on assessing protein stability was developed to evaluate the potential risks associated with storing biotherapeutics in disposable bags caused by the presence of leachables. Small-scale BPCs were filled with protein solution at high surface area-to-volume ratios (≥3× the surface area-to-volume ratio of manufacturing-scale BPCs) and incubated at stress temperatures (e.g., 25 °C or 30 °C for up to 12 weeks) along with an appropriate storage vessel (e.g., glass vial or stainless steel) as a control for side-by-side comparison. Changes in protein size variants measured by size exclusion chromatography, capillary electrophoresis, and particle formation for two monoclonal antibodies using both the small-scale stress model and a control revealed a detrimental effect of gamma-irradiated BPCs on protein aggregation and a significant BPC difference between earlier and later batches. It was found that preincubation of the empty BPCs prior to protein storage improved protein stability, suggesting the presence of volatile or heat-sensitive leachables (heat-labile or thermally degraded). In addition, increasing the polysorbate 20 concentration lowered, but did not completely mitigate, the leachable-protein interactions, indicating the presence of a hydrophobic leachable. Overall, this model can inform the risk of BPC leachables on biotherapeutics during routine manufacturing and assist in making decisions on the selection of a suitable BPC for the manufacturing process by assessing changes in product quality. Leachables from single-use systems often exist in small quantities and are difficult to detect with existing analytical methods. The presence of relevant detrimental leachables from single-use bioprocess containers (BPCs) can be indirectly detected by studying the stability of monoclonal antibodies via changes measured by size exclusion chromatography, capillary electrophoresis sodium dodecyl sulfate, and visible/sub-visible particles, using a small-scale stress model with a high surface area-to-volume ratio at elevated temperature alongside an appropriate control (e.g., glass vials or stainless steel containers). These changes in protein quality attributes allowed the evaluation of potential risks associated with adopting single-use bioprocess containers for storage, as well as bag quality and bag differences between earlier and later batches. These leachables appear to be generated during the bag sterilization process by gamma irradiation. Improvements in protein stability after storage in "preheated" bags indicated that these leachables may be thermally unstable or volatile. The effects of surfactant levels, storage temperatures, surface area-to-volume ratios, filtration, and buffer exchange on leachables and protein stability were also assessed. © PDA, Inc. 2016.
Color analysis and image rendering of woodblock prints with oil-based ink
NASA Astrophysics Data System (ADS)
Horiuchi, Takahiko; Tanimoto, Tetsushi; Tominaga, Shoji
2012-01-01
This paper proposes a method for analyzing the color characteristics of woodblock prints having oil-based ink and rendering realistic images based on camera data. The analysis results of woodblock prints show some characteristic features in comparison with oil paintings: 1) A woodblock print can be divided into several cluster areas, each with similar surface spectral reflectance; and 2) strong specular reflection from the influence of overlapping paints arises only in specific cluster areas. By considering these properties, we develop an effective rendering algorithm by modifying our previous algorithm for oil paintings. A set of surface spectral reflectances of a woodblock print is represented by using only a small number of average surface spectral reflectances and the registered scaling coefficients, whereas the previous algorithm for oil paintings required surface spectral reflectances of high dimension at all pixels. In the rendering process, in order to reproduce the strong specular reflection in specific cluster areas, we use two sets of parameters in the Torrance-Sparrow model for cluster areas with or without strong specular reflection. An experiment on a woodblock printing with oil-based ink was performed to demonstrate the feasibility of the proposed method.
Performance-based planning for small metropolitan areas
DOT National Transportation Integrated Search
2015-01-31
This report provides insights on effective practices in performance based planning by Metropolitan Planning Organizations (MPOs) that plan for Urbanized Areas with populations less than 200,000. It references existing best practices research on perfo...
NASA Astrophysics Data System (ADS)
Defrance, Dimitri; Javelle, Pierre; Ecrepont, Stéphane; Andreassian, Vazken
2013-04-01
In Europe, flash floods mainly occur in the Mediterranean area on small catchments with a short concentration time. Anticipating this kind of event is a major issue in order to reduce the resulting damage. However, for many of the impacted catchments, no data are available to calibrate and evaluate hydrological models. In this context, the aim of this study is to develop and evaluate a warning method for the Southern French Alps. This area is of particular interest because it encompasses different hydrological regimes, from purely Mediterranean to purely Alpine influences. Two main issues must be addressed: - How to define the hydrological model and its parameterization for an application in an ungauged context? - How to evaluate the final results on 'real' ungauged catchments? The first issue is a classic one. Using an 'observed' data set (154 streamflow stations with catchment areas ranging from 5 to 1000 km² and distributed rainfall available for the 1997-2006 period), we developed a regional model specifically for the studied area. For this purpose, the AIGA method, initially developed for Mediterranean catchments, was adapted to take snowmelt into account and to produce baseflow. Then, different parameterizations derived from simple regionalisation techniques were tested: - the same parameter set for the whole area, defined as the median of the locally calibrated parameters; - the same technique as in the previous case, but considering different sub-areas defined as "hydro-climatically" homogeneous by previous studies; - and finally the neighbours method. The second issue is more original. Indeed, in most studies the final evaluation is done using gauged stations as if they were 'ungauged', i.e. keeping the at-site discharge data for validation only and not for calibration. The main disadvantage of this approach is that the evaluation is made at the scale of the gauged catchments, which are in general larger than the catchments impacted by flash floods. Furthermore, many events are missed, since flash floods can occur very locally. In this study, we try to evaluate the results against observations collected by witnesses on 'real' ungauged catchments. The proposed method consists of using a historical database of flood damage reports. These data have been collected by local authorities (RTM). Finally, 139 ungauged locations were considered, for which we simulated discharges for the entire 1997-2006 period. The comparison of these modelled discharges with the occurrence of observed damage makes it possible to determine a local 'modelled' discharge threshold above which most of the damage is observed. The pertinence of this threshold (and consequently of the model used for the simulation) is assessed by considering classical contingency statistics: probability of detection (POD), false alarm rate (FAR) and critical success index (CSI). The main advantage of this historical approach is the availability of many events in the database on very small catchments (50% smaller than 20 km²). The preliminary results show that on gauged basins, the added baseflow and snowmelt modules improve the performance of the AIGA method when locally calibrated. However, when the method is applied to real ungauged catchments, the improvements become less obvious, with a small advantage for the neighbours method. These results show the difficulty arising with ungauged catchments, especially when the target catchments are smaller than the gauged 'parents'. They also illustrate the value of the damage database, used as 'proxy' data, for investigating model performance at smaller scales. This work has been done in the framework of the RHYTMME project, with the financial support of the European Union, the Provence-Alpes-Côte d'Azur Region and the French Ministry in charge of Ecology.
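For reference, the verification scores named above are standard contingency-table statistics. Writing a for hits (damage reported and threshold exceeded), b for false alarms and c for misses, the usual definitions are as follows (FAR is taken here as the false-alarm ratio, the convention most often paired with POD and CSI; the authors' exact convention may differ):

```latex
\mathrm{POD} = \frac{a}{a+c}, \qquad
\mathrm{FAR} = \frac{b}{a+b}, \qquad
\mathrm{CSI} = \frac{a}{a+b+c}
```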
Probabilistic prediction models for aggregate quarry siting
Robinson, G.R.; Larkins, P.M.
2007-01-01
Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, the Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be good predictors of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
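In the weights-of-evidence formulation, each binary evidence layer B (for example, proximity to a road class or a favourable geologic unit) receives positive and negative weights from its spatial overlap with the known quarry sites D; the expressions below are the standard definitions rather than code or values from this study:

```latex
W^{+} = \ln\frac{P(B \mid D)}{P(B \mid \bar{D})}, \qquad
W^{-} = \ln\frac{P(\bar{B} \mid D)}{P(\bar{B} \mid \bar{D})}, \qquad
C = W^{+} - W^{-}
```

The contrast C summarizes the strength of association between a layer and quarry occurrence, and the posterior log-odds of a grid cell are obtained by adding the appropriate weight from each layer to the prior log-odds, under an assumption of conditional independence among layers.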
Princé, Karine; Lorrillière, Romain; Barbet-Massin, Morgane; Léger, François; Jiguet, Frédéric
2015-01-01
Climate and land use changes are key drivers of current biodiversity trends, but interactions between these drivers are poorly modeled, even though they could amplify or mitigate negative impacts of climate change. Here, we attempt to predict the impacts of different agricultural change scenarios on common breeding birds within farmland included in the potential future climatic suitable areas for these species. We used the Special Report on Emissions Scenarios (SRES) to integrate likely changes in species climatic suitability, based on species distribution models, and changes in area of farmland, based on the IMAGE model, inside future climatic suitable areas. We also developed six farmland cover scenarios, based on expert opinion, which cover a wide spectrum of potential changes in livestock farming and cropping patterns by 2050. We ran generalized linear mixed models to calibrate the effects of farmland cover and climate change on bird specific abundance within 386 small agricultural regions. We used model outputs to predict potential changes in bird populations on the basis of predicted changes in regional farmland cover, in area of farmland and in species climatic suitability. We then examined the species sensitivity according to their habitat requirements. A scenario based on extensification of agricultural systems (i.e., low-intensity agriculture) showed the greatest potential to reduce or reverse current declines in breeding birds. To meet ecological requirements of a larger number of species, agricultural policies accounting for regional disparities and landscape structure appear more efficient than global policies uniformly implemented at national scale. Interestingly, we also found evidence that farmland cover changes can mitigate the negative effect of climate change. Here, we confirm that there is a potential for countering negative effects of climate change by adaptive management of the landscape. We argue that such studies will help inform sustainable agricultural policies for the future.
Tracing the Attention of Moving Citizens
NASA Astrophysics Data System (ADS)
Wu, Lingfei; Wang, Cheng-Jun
2016-09-01
With the widespread use of mobile computing devices in contemporary society, our trajectories in the physical space and virtual world are increasingly closely connected. Using the anonymous smartphone data of 1 × 10⁵ users in a major city of China, we study the interplay between online and offline human behaviors by constructing the mobility network (offline) and the attention network (online). Using the network renormalization technique, we find that they belong to two different classes: the mobility network is small-world, whereas the attention network is fractal. We then divide the city into different areas based on the features of the mobility network discovered under renormalization. Interestingly, this spatial division manifests the location-based online behaviors, for example shopping, dating, and taxi-requesting. Finally, we offer a geometric network model to help us understand the relationship between small-world and fractal networks.
Hartman, J.S.; Weisberg, P.J.; Pillai, R.; Ericksen, J.A.; Kuiken, T.; Lindberg, S.E.; Zhang, H.; Rytuba, J.J.; Gustin, M.S.
2009-01-01
Ecosystems that have low mercury (Hg) concentrations (i.e., not enriched or impacted by geologic or anthropogenic processes) cover most of the terrestrial surface area of the earth, yet their role as a net source or sink for atmospheric Hg is uncertain. Here we use empirical data to develop a rule-based model implemented within a geographic information system framework to estimate the spatial and temporal patterns of Hg flux for semiarid deserts, grasslands, and deciduous forests representing 45% of the continental United States. This exercise provides an indication of whether these ecosystems are a net source or sink for atmospheric Hg as well as a basis for recommendation of data to collect in future field sampling campaigns. Results indicated that soil alone was a small net source of atmospheric Hg and that emitted Hg could be accounted for based on Hg input by wet deposition. When foliar assimilation and wet deposition are added to the area estimate of soil Hg flux, these biomes are a sink for atmospheric Hg. © 2009 American Chemical Society.
Adaptively Parameterized Tomography of the Western Hellenic Subduction Zone
NASA Astrophysics Data System (ADS)
Hansen, S. E.; Papadopoulos, G. A.
2017-12-01
The Hellenic subduction zone (HSZ) is the most seismically active region in Europe and plays a major role in the active tectonics of the eastern Mediterranean. This complicated environment has the potential to generate both large magnitude (M > 8) earthquakes and tsunamis. Situated above the western end of the HSZ, Greece faces a high risk from these geologic hazards, and characterizing this risk requires detailed understanding of the geodynamic processes occurring in this area. However, despite previous investigations, the kinematics of the HSZ are still controversial. Regional tomographic studies have yielded important information about the shallow seismic structure of the HSZ, but these models only image down to 150 km depth within small geographic areas. Deeper structure is constrained by global tomographic models but with coarser resolution (approximately 200-300 km). Additionally, current tomographic models focused on the HSZ were generated with regularly-spaced gridding, and this type of parameterization often over-emphasizes poorly sampled regions of the model or under-represents small-scale structure. Therefore, we are developing a new, high-resolution image of the mantle structure beneath the western HSZ using an adaptively parameterized seismic tomography approach. By combining multiple, regional travel-time datasets in the context of a global model, with adaptable gridding based on the sampling density of high-frequency data, this method generates a composite model of mantle structure that is being used to better characterize geodynamic processes within the HSZ, thereby allowing for improved hazard assessment. Preliminary results will be shown.
Global Burned Area and Biomass Burning Emissions from Small Fires
NASA Technical Reports Server (NTRS)
Randerson, J. T.; Chen, Y.; vanderWerf, G. R.; Rogers, B. M.; Morton, D. C.
2012-01-01
In several biomes, including croplands, wooded savannas, and tropical forests, many small fires occur each year that are well below the detection limit of the current generation of global burned area products derived from moderate resolution surface reflectance imagery. Although these fires often generate thermal anomalies that can be detected by satellites, their contributions to burned area and carbon fluxes have not been systematically quantified across different regions and continents. Here we developed a preliminary method for combining 1-km thermal anomalies (active fires) and 500 m burned area observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) to estimate the influence of these fires. In our approach, we calculated the number of active fires inside and outside of 500 m burn scars derived from reflectance data. We estimated small fire burned area by computing the difference normalized burn ratio (dNBR) for these two sets of active fires and then combining these observations with other information. In a final step, we used the Global Fire Emissions Database version 3 (GFED3) biogeochemical model to estimate the impact of these fires on biomass burning emissions. We found that the spatial distributions of active fires and 500 m burned areas were in close agreement in ecosystems that experience large fires, including savannas across southern Africa and Australia and boreal forests in North America and Eurasia. In other areas, however, we observed many active fires outside of burned area perimeters. Fire radiative power was lower for this class of active fires. Small fires substantially increased burned area in several continental-scale regions, including Equatorial Asia (157%), Central America (143%), and Southeast Asia (90%) during 2001-2010. Globally, accounting for small fires increased total burned area by approximately 35%, from 345 Mha/yr to 464 Mha/yr. A formal quantification of uncertainties was not possible, but sensitivity analyses of key model parameters caused estimates of global burned area increases from small fires to vary between 24% and 54%. Biomass burning carbon emissions increased by 35% at a global scale when small fires were included in GFED3, from 1.9 Pg C/yr to 2.5 Pg C/yr. The contribution of tropical forest fires to year-to-year variability in carbon fluxes increased because small fires amplified emissions from Central America, South America and Southeast Asia, regions where drought stress and burned area varied considerably from year to year in response to El Niño-Southern Oscillation and other climate modes.
NASA Astrophysics Data System (ADS)
Agnes, Debrina; Nandatama, Akbar; Isdyantoko, Bagus Andi; Aditya Nugraha, Fajri; Ghivarry, Giusti; Putra Aghni, Perwira; ChandraWijaya, Renaldi; Widayani, Prima
2016-11-01
The Gili Indah area, located in Jerowaru, East Lombok Regency, is a region classified as a farming area in the spatial layout planning map of West Nusa Tenggara province. The area has potential as a new tourism attraction within its gilis (the local term for 'small island'). An assessment should be done to prevent ecological disturbance and infringement of the spatial layout planning map caused by incorrect land use. Land suitability assessment is carried out using a remote sensing approach, with satellite imagery used to obtain information about ocean ecology and the spatial distribution of land physical characteristics that serve as parameters of tourism land suitability, such as water clarity, ocean current, type of beach substrate, and beach typology. Field observation is then used to evaluate the accuracy of the data extraction and as material for reinterpretation. The actual physical condition is depicted after the spatial model is built with GIS using a tiered qualitative analysis approach. The result of the assessment and mapping of tourism land suitability is that parts of the Gili Indah area (Gili Maringkik, Greater Gili Bembeq, and Small Gili Bembeq) are suitable for archipelago tourism while the others are not.
NASA Astrophysics Data System (ADS)
McGuire, Jeffrey J.; Kaneko, Yoshihiro
2018-06-01
The key kinematic earthquake source parameters (rupture velocity, duration, and area) shed light on earthquake dynamics, provide direct constraints on stress-drop, and have implications for seismic hazard. However, for moderate and small earthquakes, these parameters are usually poorly constrained due to limitations of the standard analysis methods. Numerical experiments by Kaneko and Shearer [2014, 2015] demonstrated that standard spectral fitting techniques can lead to roughly 1 order of magnitude variation in stress-drop estimates that do not reflect the actual rupture properties even for simple crack models. We utilize these models to explore an alternative approach where we estimate the rupture area directly. For the suite of models, the area averaged static stress drop is nearly constant for models with the same underlying friction law, yet corner frequency based stress-drop estimates vary by a factor of 5-10 even for noise free data. Alternatively, we simulated inversions for the rupture area as parameterized by the second moments of the slip distribution. A natural estimate for the rupture area derived from the second moments is A = πL_cW_c, where L_c and W_c are the characteristic rupture length and width. This definition yields estimates of stress drop that vary by only 10% between the models but are slightly larger than the true area-averaged values. We simulate inversions for the second moments for the various models and find that the area can be estimated well when there are at least 15 available measurements of apparent duration at a variety of take-off angles. The improvement compared to azimuthally-averaged corner-frequency based approaches results from the second moments accounting for directivity and removing the assumption of a circular rupture area, both of which bias the standard approach. We also develop a new method that determines the minimum and maximum values of rupture area that are consistent with a particular dataset at the 95% confidence level. For the Kaneko and Shearer models with 20+ randomly distributed observations and ~10% noise levels, we find that the maximum and minimum bounds on rupture area typically vary by a factor of two and that the minimum stress drop is often more tightly constrained than the maximum.
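To make the area parameterization concrete, the sketch below computes characteristic rupture length and width from the slip-weighted second central moments of a gridded slip distribution and forms A = πL_cW_c as in the abstract; the convention of taking L_c and W_c as the square roots of the covariance eigenvalues is an assumption for illustration (published definitions may include an extra scale factor), and the slip patch is synthetic.

```python
import numpy as np

def rupture_area_from_second_moments(x, y, slip):
    """Estimate rupture area A = pi * L_c * W_c from a gridded slip distribution.

    x, y : 1-D arrays of subfault coordinates (km)
    slip : 1-D array of slip values (m), used as weights
    L_c and W_c are taken here as the square roots of the eigenvalues of the
    slip-weighted spatial covariance (assumed convention for illustration).
    """
    w = slip / slip.sum()                      # normalized slip weights
    mx, my = np.sum(w * x), np.sum(w * y)      # slip centroid (first moments)
    dx, dy = x - mx, y - my
    cov = np.array([[np.sum(w * dx * dx), np.sum(w * dx * dy)],
                    [np.sum(w * dx * dy), np.sum(w * dy * dy)]])
    lam = np.linalg.eigvalsh(cov)              # eigenvalues, ascending order
    w_c, l_c = np.sqrt(lam[0]), np.sqrt(lam[1])
    return np.pi * l_c * w_c, l_c, w_c

# Hypothetical elliptical slip patch, just to exercise the function:
xx, yy = np.meshgrid(np.linspace(-10, 10, 81), np.linspace(-5, 5, 41))
s = np.maximum(0.0, 1 - (xx / 8) ** 2 - (yy / 3) ** 2)
area, l_c, w_c = rupture_area_from_second_moments(xx.ravel(), yy.ravel(), s.ravel())
print(f"L_c={l_c:.2f} km, W_c={w_c:.2f} km, A={area:.1f} km^2")
```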
Östberg, Anna-Lena; Kjellström, Anna N; Petzold, Max
2017-06-01
The objective was to examine associations between a primary Care Need Index (CNI) and dental caries experience. Dental journal records for 300 988 individuals in western Sweden, aged 3-19 years in 2007-09, were complemented with official socioeconomic information. The CNI (independent variable), originally developed for assessing primary care need, was calculated for residential areas (small areas, parishes, dental clinics) based on markers of material deprivation, sociodemographic characteristics, social instability and cultural needs. Dental caries (dependent variable) was registered using the decayed, missing, filled teeth (DMFT) system. Multilevel Poisson regression and logistic regression models were used. All analyses were adjusted for age and gender. In the most deprived areas, the incidence rate ratio (IRR) for dental caries was up to five times higher than in the most affluent areas (reference); in small areas, the IRR for decayed teeth (DT) was 3.74 (95% CI: 3.39-4.12) and 5.11 (CI: 4.45-5.87) for decayed surfaces approximally (DSa). Caries indices including fillings (decayed filled teeth [DFT], decayed filled surfaces approximally [DFSa]) produced lower IRRs, with similar pictures at the parish and dental clinic level. The intracluster correlation was low overall, but stronger at lower geographical levels. The odds ratios for ≥3 caries lesions in the two most deprived areas of the CNI deciles were high, with a DT OR of 3.55 in small areas (95% CI: 3.39-3.73), compared with the eight more affluent deciles. There were strong associations between an index for assessing need in primary care, the CNI and dental caries in Swedish children and adolescents. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Large Scale Anthropogenic Reduction of Forest Cover in Last Glacial Maximum Europe
Pfeiffer, Mirjam; Kolen, Jan C. A.; Davis, Basil A. S.
2016-01-01
Reconstructions of the vegetation of Europe during the Last Glacial Maximum (LGM) are an enigma. Pollen-based analyses have suggested that Europe was largely covered by steppe and tundra, and forests persisted only in small refugia. Climate-vegetation model simulations on the other hand have consistently suggested that broad areas of Europe would have been suitable for forest, even in the depths of the last glaciation. Here we reconcile models with data by demonstrating that the highly mobile groups of hunter-gatherers that inhabited Europe at the LGM could have substantially reduced forest cover through the ignition of wildfires. Similar to hunter-gatherers of the more recent past, Upper Paleolithic humans were masters of the use of fire, and preferred inhabiting semi-open landscapes to facilitate foraging, hunting and travel. Incorporating human agency into a dynamic vegetation-fire model and simulating forest cover shows that even small increases in wildfire frequency over natural background levels resulted in large changes in the forested area of Europe, in part because trees were already stressed by low atmospheric CO2 concentrations and the cold, dry, and highly variable climate. Our results suggest that the impact of humans on the glacial landscape of Europe may be one of the earliest large-scale anthropogenic modifications of the earth system. PMID:27902716
Grazing-incidence small angle x-ray scattering studies of nanoscale polymer gratings
NASA Astrophysics Data System (ADS)
Doxastakis, Manolis; Suh, Hyo Seon; Chen, Xuanxuan; Rincon Delgadillo, Paulina A.; Wan, Lingshu; Williamson, Lance; Jiang, Zhang; Strzalka, Joseph; Wang, Jin; Chen, Wei; Ferrier, Nicola; Ramirez-Hernandez, Abelardo; de Pablo, Juan J.; Gronheid, Roel; Nealey, Paul
2015-03-01
Grazing-Incidence Small Angle X-ray Scattering (GISAXS) offers the ability to probe large sample areas, providing three-dimensional structural information at high detail in a thin film geometry. In this study we exploit the application of GISAXS to structures formed at one step of the LiNe (Liu-Nealey) flow, which uses chemical patterns for directed self-assembly of block copolymer films. Experiments conducted at the Argonne National Laboratory provided scattering patterns probing film characteristics in directions both parallel and normal to the surface. We demonstrate the application of new computational methods to construct models based on the measured scattering. Such analysis allows for extraction of structural characteristics in unprecedented detail.
Small and big quality in health care.
Lillrank, Paul Martin
2015-01-01
The purpose of this paper is to clarify healthcare quality's ontological and epistemological foundations and to examine how these lead to different measurements and technologies. The approach is conceptual analysis. Small quality denotes conformance to ex ante requirements. Big quality includes product and service design, based on customer requirements and expectations. Healthcare quality can be divided into three areas: clinical decision making; patient safety; and patient experience, each with distinct measurement and improvement technologies. The conceptual model is expected to bring clarity to constructing specific definitions, measures, objectives and technologies for improving healthcare. This paper claims that before healthcare quality can be defined, measured and integrated into systems, it needs to be clearly separated into ontologically and epistemologically different parts.
NASA Astrophysics Data System (ADS)
Payrastre, Olivier; Bourgin, François; Lebouc, Laurent; Le Bihan, Guillaume; Gaume, Eric
2017-04-01
The October 2015 flash floods in south-eastern France caused more than twenty fatalities, heavy damage and large economic losses in high-density urban areas of the Mediterranean coast, including the cities of Mandelieu-La Napoule, Cannes and Antibes. Following a post-event survey and preliminary analyses conducted within the framework of the Hymex project, we set up an entire simulation chain at the regional scale to better understand this outstanding event. Rainfall-runoff simulations, inundation mapping and a first estimation of the impacts are conducted following the approach developed and successfully applied for two large flash-flood events in two different French regions (Gard in 2002 and Var in 2010) by Le Bihan (2016). A distributed rainfall-runoff model applied at high resolution for the whole area - including numerous small ungauged basins - is used to feed a semi-automatic hydraulic approach (Cartino method) applied along the river network - including small tributaries. Estimation of the impacts is then performed based on the delineation of the flooded areas and geographic databases identifying buildings and population at risk.
Multispectral photoacoustic microscopy of lipids using a pulsed supercontinuum laser.
Buma, Takashi; Conley, Nicole C; Choi, Sang Won
2018-01-01
We demonstrate optical resolution photoacoustic microscopy (OR-PAM) of lipid-rich tissue between 1050 and 1714 nm using a pulsed supercontinuum laser based on a large-mode-area photonic crystal fiber. OR-PAM experiments on lipid-rich samples show the expected optical absorption peaks near 1210 and 1720 nm. These results show that pulsed supercontinuum lasers are promising for OR-PAM applications such as label-free histology of lipid-rich tissue and imaging small animal models of disease.
A Refined Crop Drought Monitoring Method Based on the Chinese GF-1 Wide Field View Data
Chang, Sheng; Wu, Bingfang; Yan, Nana; Zhu, Jianjun; Wen, Qi; Xu, Feng
2018-01-01
In this study, modified perpendicular drought index (MPDI) models based on the red-near infrared spectral space are established for the first time through the analysis of the spectral characteristics of GF-1 wide field view (WFV) data, which have a high spatial resolution of 16 m and a revisit frequency as high as once every 4 days. The GF-1 data come from the Chinese-made, new-generation high-resolution GF-1 remote sensing satellite. Soil-type spatial data are introduced to simulate soil lines for different soil types, reducing the errors caused by using a single soil line. Multiple vegetation indices are employed to analyze their response within the MPDI models. Relative soil moisture content (RSMC) and precipitation data acquired at selected stations are used to optimize the drought models, and the best one is the two-band enhanced vegetation index (EVI2)-based MPDI model. The crop area statistically affected by drought, reported by a local governmental department, was used for validation. High correlations and small differences in drought-affected crop area were detected between the field observation data from the local governmental department and the EVI2-based MPDI results. The percentage bias is between −21.8% and 14.7% in five sub-areas, with an accuracy above 95% when evaluating the performance using the data for the whole study region. Generally, the proposed EVI2-based MPDI for GF-1 WFV data has great potential for reliably monitoring crop drought at a relatively high frequency and spatial scale. Currently there is almost no drought model based on GF-1 data; full exploitation of the advantages of GF-1 satellite data and further improvement of the capacity to observe ground surface objects can provide a high temporal and spatial resolution data source for refined monitoring of crop droughts. PMID:29690639
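For orientation, the sketch below implements the perpendicular drought index and a vegetation-corrected modified form in the red-NIR spectral space, following the commonly cited PDI/MPDI formulation; the soil-line slope m, the pure-vegetation reflectances and the NDVI endmembers are placeholders that would be fitted per soil type, as the abstract describes, and this is an illustration rather than the authors' exact code.

```python
import numpy as np

def pdi(red, nir, m):
    """Perpendicular drought index in red-NIR space; m is the soil-line slope."""
    return (red + m * nir) / np.sqrt(m ** 2 + 1)

def mpdi(red, nir, m, fv, red_veg=0.05, nir_veg=0.50):
    """Modified PDI: removes the vegetation contribution before projecting onto
    the soil line. fv is the vegetation fraction; red_veg/nir_veg are assumed
    pure-vegetation reflectances (placeholders, normally taken from dense-vegetation
    pixels or the literature)."""
    num = red + m * nir - fv * (red_veg + m * nir_veg)
    return num / ((1.0 - fv) * np.sqrt(m ** 2 + 1))

def vegetation_fraction(ndvi, ndvi_soil=0.05, ndvi_veg=0.85):
    """Simple NDVI-based vegetation fraction (one common choice of endmembers)."""
    fv = ((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)) ** 2
    return np.clip(fv, 0.0, 0.99)

# Hypothetical GF-1 WFV reflectances for a handful of pixels:
red = np.array([0.12, 0.08, 0.20])
nir = np.array([0.30, 0.45, 0.25])
ndvi = (nir - red) / (nir + red)
print(mpdi(red, nir, m=1.2, fv=vegetation_fraction(ndvi)))
```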
Asano, Junichi; Hirakawa, Akihiro; Hamada, Chikuma
2014-01-01
A cure rate model is a survival model incorporating the cure rate with the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer. The cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient. The model contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been a lack of previous research in this area. We used the Cox PH cure model for breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated because many patients were censored. In this study, we used imputation-based AUCs to assess the predictive accuracy of the cure rate from the PH cure model. We examined the precision of these AUCs using simulation studies. The results demonstrated that the imputation-based AUCs were estimable and their biases were negligibly small in many cases, although the ordinary AUC could not be estimated. Additionally, we introduced a bias-correction method for imputation-based AUCs and found that the bias-corrected estimate successfully compensated for the overestimation in the simulation studies. We also illustrated the estimation of the imputation-based AUCs using breast cancer data. Copyright © 2014 John Wiley & Sons, Ltd.
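The general idea of an imputation-based AUC can be sketched as follows (a generic illustration under stated assumptions, not necessarily the authors' exact procedure): observed events are known to be uncured, the cure status of censored patients is imputed from a model-based conditional probability of cure given censoring at time t, and the AUC of the estimated cure probability against the imputed status is averaged over imputations.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def impute_cure_auc(cure_prob, surv_uncured_at_c, event, n_imputations=200, rng=None):
    """Imputation-based AUC for a cure-rate model (generic illustration).

    cure_prob         : model-estimated cure probability pi(x) per patient
    surv_uncured_at_c : survival of the uncured, S_u(t|x), at each patient's censoring time
    event             : 1 if the event was observed (known uncured), 0 if censored
    For censored patients, cure status is imputed from
    P(cured | censored at t) = pi / (pi + (1 - pi) * S_u(t)).
    """
    rng = rng or np.random.default_rng(0)
    p_cured = cure_prob / (cure_prob + (1 - cure_prob) * surv_uncured_at_c)
    aucs = []
    for _ in range(n_imputations):
        draw = (rng.random(len(cure_prob)) < p_cured).astype(int)
        cured = np.where(event == 1, 0, draw)        # events are never cured
        if cured.min() == cured.max():               # need both classes for an AUC
            continue
        aucs.append(roc_auc_score(cured, cure_prob))
    return float(np.mean(aucs)) if aucs else float("nan")

# Hypothetical values for 8 patients:
pi = np.array([0.7, 0.2, 0.5, 0.9, 0.3, 0.6, 0.8, 0.1])
s_u = np.array([0.4, 0.9, 0.5, 0.2, 0.8, 0.3, 0.1, 0.95])
ev = np.array([0, 1, 0, 0, 1, 0, 0, 1])
print(impute_cure_auc(pi, s_u, ev))
```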
Analytical and regression models of glass rod drawing process
NASA Astrophysics Data System (ADS)
Alekseeva, L. B.
2018-03-01
The process of drawing glass rods (light guides) is being studied. The parameters of the process affecting the quality of the light guide have been determined. To solve the problem, mathematical models based on general equations of continuum mechanics are used. The conditions for the stable flow of the drawing process have been found, which are determined by the stability of the motion of the glass mass in the formation zone to small uncontrolled perturbations. The sensitivity of the formation zone to perturbations of the drawing speed and viscosity is estimated. Experimental models of the drawing process, based on the regression analysis methods, have been obtained. These models make it possible to customize a specific production process to obtain light guides of the required quality. They allow one to find the optimum combination of process parameters in the chosen area and to determine the required accuracy of maintaining them at a specified level.
NASA Astrophysics Data System (ADS)
Spieler, Diana; Schwarze, Robert; Schütze, Niels
2017-04-01
In the past a variety of different modeling approaches have been developed in catchment hydrology. Even though there is no argument on the relevant processes taking place, there is no unified theory on how best to represent them computationally. Thus a vast number of models have been developed, varying from lumped models to physically based models. Most of them have a more or less fixed model structure and follow the "one size fits all" paradigm. However, a more flexible approach could improve model realism by designing catchment specific model structures based on data availability. This study focuses on applying the flexible hydrological modelling framework RAVEN (Craig et al., 2013) to systematically test several conceptual model structures on the 19 km² Große Ohe catchment in the Bavarian Forest (Germany). By combining RAVEN with the DREAM algorithm (Vrugt et al., 2009), the relationships between catchment characteristics, model structure, parameter uncertainty and data availability are analyzed. The model structure is progressively developed based on the available data of the well-observed forested catchment. In a second step, the impact of the catchment discretization is analyzed by testing different spatial resolutions of topographic input data.
NASA Astrophysics Data System (ADS)
Castillo, Miguel; Bishop, Paul; Jansen, John D.
2013-01-01
A sudden drop in river base-level can trigger a knickpoint that propagates throughout the fluvial network causing a transient state in the landscape. Knickpoint retreat has been confirmed in large fluvial settings (drainage areas > 100 km²) and field data suggest that the same applies to the case of small bedrock river catchments (drainage areas < 100 km²). Nevertheless, knickpoint recession on resistant lithologies with structure that potentially affects the retreat rate needs to be confirmed with field-based data. Moreover, it remains unclear whether small bedrock rivers can absorb base-level fall via knickpoint retreat. Here we evaluate the response of small bedrock rivers to base-level fall on the isle of Jura in western Scotland (UK), where rivers incise into dipping quartzite. The mapping of raised beach deposits and strath terraces, and the analysis of stream long profiles, were used to identify knickpoints that had been triggered by base-level fall. Our results indicate that the distance of knickpoint retreat scales with drainage area as a power law, irrespective of structural setting. On the other hand, local channel slope and basin size influence the vertical distribution of knickpoints. Moreover, at small drainage areas (~4 km²) rivers are unable to absorb the full amount of base-level fall and channel reach morphology downstream of the knickpoint tends towards convexity. The results obtained here confirm that knickpoint retreat is mostly controlled by stream discharge, as has been observed for other transient landscapes. Local controls, reflecting basin size and channel slope, have an effect on the vertical distribution of knickpoints; such controls are also related to the ability of rivers to absorb the base-level fall.
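The power-law scaling reported above can be fitted with an ordinary least-squares regression in log-log space; the sketch below shows the idea with hypothetical retreat distances and drainage areas (not the Jura data).

```python
import numpy as np

# Fit a power law, distance = c * A**p, between knickpoint retreat distance and
# drainage area via least squares in log-log space (illustrative values only).
area_km2 = np.array([4.0, 8.5, 15.0, 32.0, 60.0, 95.0])      # drainage area A
distance_km = np.array([0.6, 1.1, 1.7, 2.9, 4.6, 6.2])       # retreat distance

p, log_c = np.polyfit(np.log(area_km2), np.log(distance_km), 1)
c = np.exp(log_c)
print(f"distance ~ {c:.2f} * A^{p:.2f}")
```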
Geomorphic Flood Area (GFA): a DEM-based tool for flood susceptibility mapping at large scales
NASA Astrophysics Data System (ADS)
Manfreda, S.; Samela, C.; Albano, R.; Sole, A.
2017-12-01
Flood hazard and risk mapping over large areas is a critical issue. Recently, many researchers have been trying to achieve global-scale mapping, encountering several difficulties, above all the lack of data and the implementation costs. In data-scarce environments, a preliminary and cost-effective floodplain delineation can be performed using geomorphic methods (e.g., Manfreda et al., 2014). We carried out several years of research on this topic, proposing a morphologic descriptor named Geomorphic Flood Index (GFI) (Samela et al., 2017) and developing a Digital Elevation Model (DEM)-based procedure able to identify flood susceptible areas. The procedure exhibited high accuracy in several test sites in Europe, the United States and Africa (Manfreda et al., 2015; Samela et al., 2016, 2017) and has recently been implemented in a QGIS plugin named Geomorphic Flood Area (GFA) tool. The tool automatically computes the GFI and turns it into a linear binary classifier capable of detecting flood-prone areas. To train this classifier, an inundation map derived using hydraulic models for a small portion of the basin is required (the minimum is 2% of the river basin's area). In this way, the GFA tool allows the classification of flood-prone areas to be extended across the entire basin. We are also defining a simplified procedure for the estimation of the river depth, which may be helpful for large-scale analyses to approximately evaluate the expected flood damages in the surrounding areas. References: Manfreda, S., Nardi, F., Samela, C., Grimaldi, S., Taramasso, A. C., Roth, G., & Sole, A. (2014). Investigation on the use of geomorphic approaches for the delineation of flood prone areas. J. Hydrol., 517, 863-876. Manfreda, S., Samela, C., Gioia, A., Consoli, G., Iacobellis, V., Giuzio, L., & Sole, A. (2016). Flood-prone areas assessment using linear binary classifiers based on flood maps obtained from 1D and 2D hydraulic models. Nat. Hazards, 79(2), 735-754. Samela, C., Manfreda, S., Paola, F. D., Giugni, M., Sole, A., & Fiorentino, M. (2016). DEM-based approaches for the delineation of flood-prone areas in an ungauged basin in Africa. J. Hydrol. Eng., 06015010. Samela, C., Troy, T. J., & Manfreda, S. (2017). Geomorphic classifiers for flood-prone areas delineation for data-scarce environments. Adv. Water Resour., 102, 13-28.
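A minimal per-cell sketch of the index is given below, assuming the published form GFI = ln(h_r / H), where h_r is a water-depth proxy obtained from the contributing area A_r of the nearest stream element through a hydraulic scaling h_r ≈ b·A_r^n and H is the elevation of the cell above that stream element; the coefficients b and n and the classification threshold tau are placeholders to be calibrated against the hydraulic inundation map, as the abstract explains.

```python
import numpy as np

def geomorphic_flood_index(upstream_area_km2, elev_above_stream_m, b=0.1, n=0.3):
    """GFI = ln(h_r / H): water-depth proxy h_r = b * A_r**n over the elevation
    difference H to the nearest stream cell. b and n are placeholder hydraulic
    scaling coefficients to be calibrated."""
    h_r = b * upstream_area_km2 ** n
    return np.log(h_r / np.maximum(elev_above_stream_m, 1e-3))

def classify_flood_prone(gfi, tau):
    """Linear binary classifier: a cell is flagged flood-prone if GFI >= tau,
    with tau chosen by matching a hydraulic inundation map on a training subset."""
    return gfi >= tau

# Hypothetical cells: contributing area of the nearest stream element and height above it.
gfi = geomorphic_flood_index(np.array([250.0, 40.0, 5.0]), np.array([1.5, 6.0, 20.0]))
print(gfi, classify_flood_prone(gfi, tau=-2.0))
```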
Permeability-porosity relationships in sedimentary rocks
Nelson, Philip H.
1994-01-01
In many consolidated sandstone and carbonate formations, plots of core data show that the logarithm of permeability (k) is often linearly proportional to porosity (φ). The slope, intercept, and degree of scatter of these log(k)-φ trends vary from formation to formation, and these variations are attributed to differences in initial grain size and sorting, diagenetic history, and compaction history. In unconsolidated sands, better sorting systematically increases both permeability and porosity. In sands and sandstones, an increase in gravel and coarse grain size content causes k to increase even while decreasing φ. Diagenetic minerals in the pore space of sandstones, such as cement and some clay types, tend to decrease log(k) proportionately as φ decreases. Models to predict permeability from porosity and other measurable rock parameters fall into three classes based on either grain, surface area, or pore dimension considerations. (Models that directly incorporate well log measurements but have no particular theoretical underpinnings form a fourth class.) Grain-based models show permeability proportional to the square of grain size times porosity raised to (roughly) the fifth power, with grain sorting as an additional parameter. Surface-area models show permeability proportional to the inverse square of pore surface area times porosity raised to (roughly) the fourth power; measures of surface area include irreducible water saturation and nuclear magnetic resonance. Pore-dimension models show permeability proportional to the square of a pore dimension times porosity raised to a power of (roughly) two and produce curves of constant pore size that transgress the linear data trends on a log(k)-φ plot. The pore dimension is obtained from mercury injection measurements and is interpreted as the pore opening size of some interconnected fraction of the pore system. The linear log(k)-φ data trends cut the curves of constant pore size from the pore-dimension models, which shows that porosity reduction is always accompanied by a reduction in characteristic pore size. The high powers of porosity of the grain-based and surface-area models are required to compensate for the inclusion of the small end of the pore size spectrum.
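Summarizing the three model classes in formula form (the exponents are the "rough" powers quoted above; proportionality constants, sorting terms and unit conventions are left unspecified):

```latex
% Three classes of permeability-porosity models (proportionalities only)
\begin{align*}
  \text{grain-based:}        \quad & k \propto d_g^{\,2}\,\phi^{5}    && d_g:\ \text{grain size (with an additional sorting parameter)}\\
  \text{surface-area-based:} \quad & k \propto \frac{\phi^{4}}{S^{2}} && S:\ \text{pore surface area}\\
  \text{pore-dimension:}     \quad & k \propto r_p^{\,2}\,\phi^{2}    && r_p:\ \text{characteristic pore opening size}
\end{align*}
```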
Xu, Fang; Wallace, Robyn C.; Garvin, William; Greenlund, Kurt J.; Bartoli, William; Ford, Derek; Eke, Paul; Town, G. Machell
2016-01-01
Public health researchers have used a class of statistical methods to calculate prevalence estimates for small geographic areas with few direct observations. Many researchers have used Behavioral Risk Factor Surveillance System (BRFSS) data as a basis for their models. The aims of this study were to 1) describe a new BRFSS small area estimation (SAE) method and 2) investigate the internal and external validity of the BRFSS SAEs it produced. The BRFSS SAE method uses 4 data sets (the BRFSS, the American Community Survey Public Use Microdata Sample, Nielsen Claritas population totals, and the Missouri Census Geographic Equivalency File) to build a single weighted data set. Our findings indicate that internal and external validity tests were successful across many estimates. The BRFSS SAE method is one of several methods that can be used to produce reliable prevalence estimates in small geographic areas. PMID:27418213
Gridded rainfall estimation for distributed modeling in western mountainous areas
NASA Astrophysics Data System (ADS)
Moreda, F.; Cong, S.; Schaake, J.; Smith, M.
2006-05-01
Estimation of precipitation in mountainous areas continues to be problematic. It is well known that radar-based methods are limited due to beam blockage. In these areas, in order to run a distributed model that accounts for spatially variable precipitation, we have generated hourly gridded rainfall estimates from gauge observations. These estimates will be used as basic data sets to support the second phase of the NWS-sponsored Distributed Hydrologic Model Intercomparison Project (DMIP 2). One of the major foci of DMIP 2 is to better understand the modeling and data issues in western mountainous areas in order to provide better water resources products and services to the Nation. We derive precipitation estimates using three data sources for the period of 1987-2002: 1) hourly cooperative observer (coop) gauges, 2) daily total coop gauges and 3) SNOw pack TELemetry (SNOTEL) daily gauges. The daily values are disaggregated using the hourly gauge values and then interpolated to approximately 4 km grids using an inverse-distance method. Following this, the estimates are adjusted to match monthly mean values from the Parameter-elevation Regressions on Independent Slopes Model (PRISM). Several analyses are performed to evaluate the gridded estimates for DMIP 2 experiments. These gridded inputs are used to generate mean areal precipitation (MAPX) time series for comparison to the traditional mean areal precipitation (MAP) time series derived by the NWS' California-Nevada River Forecast Center for model calibration. We use two of the DMIP 2 basins in California and Nevada: the North Fork of the American River (catchment area 885 sq. km) and the East Fork of the Carson River (catchment area 922 sq. km) as test areas. The basins are sub-divided into elevation zones. The North Fork American basin is divided into two zones, above and below an elevation threshold. Likewise, the Carson River basin is subdivided into four zones. For each zone, the analyses include: a) overall difference, b) annual difference, c) typical year monthly comparison, and d) regression fit of the MAPX and MAP data. In terms of mean areal precipitation, overall differences between the MAP and MAPX time series are very small for the North Fork American River elevation zones. For the East Fork Carson River zones, the overall difference is up to 10 percent. The difference tends to be high when the elevation zones are small in area. In our presentation, we will show the results of our analyses and discuss future evaluations of these precipitation estimates using distributed and lumped hydrologic models.
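A highly simplified sketch of the two gridding steps described above (inverse-distance interpolation of gauge values to grid cells, then scaling each cell so its monthly total matches a PRISM-derived target) is given below; it omits the disaggregation of daily gauges and the elevation-zone analysis, and all values are synthetic.

```python
import numpy as np

def idw(grid_xy, gauge_xy, gauge_vals, power=2.0):
    """Inverse-distance weighting of hourly gauge values onto grid cells."""
    d = np.linalg.norm(grid_xy[:, None, :] - gauge_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w * gauge_vals[None, :]).sum(axis=1) / w.sum(axis=1)

def scale_to_monthly_target(hourly_fields, monthly_target):
    """Adjust gridded hourly estimates so each cell's monthly total matches a
    climatological target (e.g. derived from PRISM)."""
    monthly_total = hourly_fields.sum(axis=0)
    factor = monthly_target / np.maximum(monthly_total, 1e-6)
    return hourly_fields * factor[None, :]

# Hypothetical example: 3 gauges, 4 grid cells, 720 hourly steps in the month.
rng = np.random.default_rng(0)
gauge_xy = rng.uniform(0, 10, size=(3, 2))
grid_xy = rng.uniform(0, 10, size=(4, 2))
hourly = np.stack([idw(grid_xy, gauge_xy, rng.gamma(0.5, 1.0, 3)) for _ in range(720)])
adjusted = scale_to_monthly_target(hourly, monthly_target=np.full(4, 150.0))
print(adjusted.sum(axis=0))   # each cell now totals 150 mm for the month
```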
Demolition waste generation for development of a regional management chain model.
Bernardo, Miguel; Gomes, Marta Castilho; de Brito, Jorge
2016-03-01
Even though construction and demolition waste (CDW) is the bulkiest waste stream, its estimation and composition in specific regions still face major difficulties. Therefore, new methods are required, especially when it comes to making predictions limited to small areas, such as counties. This paper proposes one such method, which makes use of data collected from real demolition works and statistical information on the geographical area under study. Based on a correlation analysis between the demolition waste estimates and indicators such as population density, buildings ageing index, buildings density and land occupation type, relationships are established that can be used to determine demolition waste outputs in a given area. The derived models are presented and explained. This methodology is independent of the specific region with which it is exemplified (the Lisbon Metropolitan Area) and can therefore be applied to any region of the world, from the country to the county level. Generation of demolition waste data at the county level is the basis of the design of a systemic model for CDW management in a region. Future developments proposed include a mixed-integer linear programming formulation of such a recycling network. Copyright © 2016 Elsevier Ltd. All rights reserved.
Moon, Graham; Aitken, Grant; Taylor, Joanna; Twigg, Liz
2017-08-28
This study aims to address, for the first time, the challenges of constructing small area estimates of health status using linked national surveys. The study also seeks to assess the concordance of these small area estimates with data from national censuses. Population level health status in England, Scotland and Wales. A linked integrated dataset of 23 374 survey respondents (16+ years) from the 2011 waves of the Health Survey for England (n=8603), the Scottish Health Survey (n=7537) and the Welsh Health Survey (n=7234). Population prevalence of poorer self-rated health and limiting long-term illness. A multilevel small area estimation modelling approach was used to estimate prevalence of these outcomes for middle super output areas in England and Wales and intermediate zones in Scotland. The estimates were then compared with matched measures from the contemporaneous 2011 UK Census. There was a strong positive association between the small area estimates and matched census measures for all three countries for both poorer self-rated health (r=0.828, 95% CI 0.821 to 0.834) and limiting long-term illness (r=0.831, 95% CI 0.824 to 0.837), although systematic differences were evident, and small area estimation tended to indicate higher prevalences than census data. Despite strong concordance, variations in the small area prevalences of poorer self-rated health and limiting long-term illness evident in census data cannot be replicated perfectly using small area estimation with linked national surveys. This reflects a lack of harmonisation between surveys over question wording and design. The nature of small area estimates as 'expected values' also needs to be better understood. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Small Business Management. Teacher Edition.
ERIC Educational Resources Information Center
Oklahoma State Dept. of Vocational and Technical Education, Stillwater. Curriculum and Instructional Materials Center.
This instructor's guide, which is designed to assist teachers in providing instruction and technical support to small business owners and managers, contains 17 competency-based units of instruction on the following areas that both small business instructors and small business owners have deemed critical to the success of any business:…
Chandra, Hukum; Aditya, Kaustav; Sud, U C
2018-01-01
Poverty affects many people, but the ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important parameter of the population for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage, small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic levels such as the national and state level. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. In many instances estimates are required for areas of the population for which the survey providing the data was unplanned. Then, for areas with small sample sizes, direct survey estimation of population characteristics based only on the data available from the particular area tends to be unreliable. This paper describes an application of the small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011-12 of NSSO and the Population Census 2011. The results show that the district level estimates generated by the SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable.
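As a generic illustration of area-level small area estimation of this kind (one standard formulation, not necessarily the exact model fitted for the Bihar districts), the sketch below computes Fay-Herriot-type composite estimates that shrink each district's direct survey estimate toward a regression prediction built from census covariates, with shrinkage governed by the sampling variance of the direct estimate; the variance update is a crude moment-type step where production code would use REML, and the data are hypothetical.

```python
import numpy as np

def fay_herriot_eblup(direct, sampling_var, X, n_iter=50):
    """Area-level (Fay-Herriot-type) small area estimates.

    direct       : direct survey estimates per area (e.g. district poverty rates)
    sampling_var : their design-based sampling variances
    X            : area-level auxiliary covariates (e.g. census variables), with intercept
    Returns the composite (shrinkage) estimates.
    """
    sigma2_v = np.var(direct)                        # crude start for the model variance
    for _ in range(n_iter):                          # simple fixed-point iteration
        w = 1.0 / (sigma2_v + sampling_var)          # GLS weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ direct)
        resid = direct - X @ beta
        # crude moment-type update of the area-level variance component
        sigma2_v = max(np.mean(resid ** 2) - np.mean(sampling_var), 1e-8)
    gamma = sigma2_v / (sigma2_v + sampling_var)     # shrinkage factors
    return gamma * direct + (1.0 - gamma) * (X @ beta)

# Hypothetical data for 5 districts: direct estimates, sampling variances,
# and one census covariate plus an intercept.
direct = np.array([0.32, 0.41, 0.25, 0.38, 0.29])
svar = np.array([0.004, 0.009, 0.002, 0.012, 0.003])
X = np.column_stack([np.ones(5), np.array([0.55, 0.70, 0.40, 0.65, 0.50])])
print(fay_herriot_eblup(direct, svar, X))
```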
NASA Astrophysics Data System (ADS)
Trigila, Alessandro; Iadanza, Carla; Esposito, Carlo; Scarascia-Mugnozza, Gabriele
2015-11-01
The aim of this work is to define reliable susceptibility models for shallow landslides using Logistic Regression and Random Forests multivariate statistical techniques. The study area, located in North-East Sicily, was hit on October 1st 2009 by a severe rainstorm (225 mm of cumulative rainfall in 7 h) which caused flash floods and more than 1000 landslides. Several small villages, such as Giampilieri, were hit, with 31 fatalities, 6 missing persons and damage to buildings and transportation infrastructures. Landslides, mainly earth and debris translational slides evolving into debris flows, were triggered on steep slopes and involved colluvium and regolith materials which cover the underlying metamorphic bedrock. The work has been carried out with the following steps: i) realization of a detailed event landslide inventory map through field surveys coupled with observation of high-resolution aerial colour orthophotos; ii) identification of landslide source areas; iii) data preparation of landslide controlling factors and descriptive statistics based on a bivariate method (Frequency Ratio) to get an initial overview of existing relationships between causative factors and shallow landslide source areas; iv) choice of criteria for the selection and sizing of the mapping unit; v) implementation of 5 multivariate statistical susceptibility models based on Logistic Regression and Random Forests techniques and focused on landslide source areas; vi) evaluation of the influence of sample size and type of sampling on results and performance of the models; vii) evaluation of the predictive capabilities of the models using ROC curve, AUC and contingency tables; viii) comparison of model results and obtained susceptibility maps; and ix) analysis of temporal variation of landslide susceptibility related to input parameter changes. Models based on Logistic Regression and Random Forests have demonstrated excellent predictive capabilities. Land use and wildfire variables were found to have a strong control on the occurrence of very rapid shallow landslides.
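The susceptibility modelling and validation steps described above (multivariate classifiers, then ROC/AUC evaluation) can be illustrated with a small scikit-learn sketch: fit Logistic Regression and Random Forest classifiers on a table of mapping units with candidate controlling factors, and compare their ROC AUC on a held-out subset. Everything below (predictors, synthetic generating process, hyperparameters) is a placeholder, not the study's actual dataset or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical mapping-unit table: slope, land-use class and burned flag as predictors,
# presence/absence of a shallow-landslide source area as the target.
rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.uniform(5, 45, n),          # slope (degrees)
    rng.integers(0, 4, n),          # land-use class (coded)
    rng.integers(0, 2, n),          # recently burned (0/1)
])
logit = -6 + 0.12 * X[:, 0] + 0.8 * X[:, 2]        # synthetic generating process
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(n_estimators=300, random_state=0))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```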
NASA Astrophysics Data System (ADS)
O'Connor, J. E.; Wise, D. R.; Mangano, J.; Jones, K.
2015-12-01
Empirical analyses of suspended sediment and bedload transport give estimates of sediment flux for western Oregon and northwestern California. The estimates of both bedload and suspended load are from regression models relating measured annual sediment yield to geologic, physiographic, and climatic properties of contributing basins. The best models include generalized geology and either slope or precipitation. The best-fit suspended-sediment model is based on basin geology, precipitation, and area of recent wildfire. It explains 65% of the variance for 68 suspended sediment measurement sites within the model area. Predicted suspended sediment yields range from no yield from the High Cascades geologic province to 200 tonnes/km²-yr in the northern Oregon Coast Range and 1000 tonnes/km²-yr in recently burned areas of the northern Klamath terrain. Bed-material yield is similarly estimated from a regression model based on 22 sites of measured bed-material transport, mostly from reservoir accumulation analyses but also from several bedload measurement programs. The resulting best-fit regression is based on basin slope and the presence/absence of the Klamath geologic terrane. For the Klamath terrane, bed-material yield is twice that of the other geologic provinces. This model explains more than 80% of the variance of the better-quality measurements. Predicted bed-material yields range up to 350 tonnes/km²-yr in steep areas of the Klamath terrane. Applying these regressions to small individual watersheds (mean size: 66 km² for bed material, 3 km² for suspended sediment) and cumulating totals down the hydrologic network (but also decreasing the bed-material flux by experimentally determined attrition rates) gives spatially explicit estimates of both bed-material and suspended sediment flux. This enables assessment of several management issues, including the effects of dams on bedload transport, instream gravel mining, habitat formation processes, and water quality. The combined fluxes can also be compared to long-term rock uplift and cosmogenically determined landscape erosion rates.
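A schematic sketch of the downstream accumulation step is given below: per-catchment yields (from whatever regression is in use) are summed down a tree-structured network, with the bed-material flux attenuated by an assumed exponential attrition per kilometre of travel; the network, yields and attrition coefficient are hypothetical placeholders.

```python
import math

def accumulate_flux(downstream, local_yield, reach_km, attrition_per_km=0.01):
    """Cumulate bed-material flux down a network.

    Nodes are assumed ordered so every catchment appears before the one downstream
    of it. downstream[i] is the downstream node index (or -1 at the outlet);
    local_yield[i] is the regression-predicted yield produced in catchment i; the
    flux is attenuated by exp(-attrition_per_km * reach_km[i]) before being passed on.
    """
    flux = list(local_yield)
    for i in range(len(flux)):
        j = downstream[i]
        if j >= 0:
            flux[j] += flux[i] * math.exp(-attrition_per_km * reach_km[i])
    return flux

# Tiny hypothetical network: catchments 0 and 1 drain into 2, which drains into outlet 3.
print(accumulate_flux(downstream=[2, 2, 3, -1],
                      local_yield=[120.0, 80.0, 60.0, 40.0],
                      reach_km=[5.0, 8.0, 12.0, 0.0]))
```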
Making Small Schools Work: A Handbook for Teachers in Small Rural Schools.
ERIC Educational Resources Information Center
Sigsworth, Alan; Solstad, Karl Jan
This handbook addresses the provision of an equitable basic education in rural areas, particularly in developing countries, by means of small schools located close to the pupils' homes. It is based on beliefs that small schools can be good schools; the appropriate place for a small school and its teachers is within the community; and small schools…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-20
... rent and the core-based statistical area (CBSA) rent as applied to the 40th percentile FMR for that..., calculated on the basis of the core-based statistical area (CBSA) or the metropolitan Statistical Area (MSA... will be ranked according to each of the statistics specified above, and then a weighted average ranking...
Structural Similitude and Scaling Laws for Plates and Shells: A Review
NASA Technical Reports Server (NTRS)
Simitses, G. J.; Starnes, J. H., Jr.; Rezaeepazhand, J.
2000-01-01
This paper deals with the development and use of scaled-down models in order to predict the structural behavior of large prototypes. The concept is fully described and examples are presented which demonstrate its applicability to beam-plates, plates and cylindrical shells of laminated construction. The concept is based on the use of field equations, which govern the response behavior of both the small model as well as the large prototype. The conditions under which the experimental data of a small model can be used to predict the behavior of a large prototype are called scaling laws or similarity conditions and the term that best describes the process is structural similitude. Moreover, since the term scaling is used to describe the effect of size on strength characteristics of materials, a discussion is included which should clarify the difference between "scaling law" and "size effect". Finally, a historical review of all published work in the broad area of structural similitude is presented for completeness.
NASA Astrophysics Data System (ADS)
Steenhuis, T. S.; Azzaino, Z.; Hoang, L.; Pacenka, S.; Worqlul, A. W.; Mukundan, R.; Stoof, C.; Owens, E. M.; Richards, B. K.
2017-12-01
The New York City source watersheds in the Catskill Mountains' humid, temperate climate have long-term hydrological and water quality monitoring data. They are among the few catchments where implementation of source and landscape management practices has led to decreased phosphorus concentrations in the receiving surface waters. One of the reasons is that landscape measures correctly targeted the saturated variable source runoff areas (VSA) in the valley bottoms as the location where most of the runoff and other nonpoint pollutants originated. Measures targeting these areas were instrumental in lowering phosphorus concentration. Further improvements in water quality can be made based on a better understanding of the flow processes and water table fluctuations in the VSA. For that reason, we instrumented a self-contained upland variable source watershed with a landscape characteristic of the Catskill watersheds: soil underlain by glacial till at shallow depth. In this presentation, we will discuss our experimental findings and present a mathematical model. Variable source areas have a small slope, making gravity the driving force for the flow and greatly simplifying the simulation of the flow processes. The experimental data and the model simulations agreed for both outflow and water table fluctuations. We found that while the flows to the outlet were similar throughout the year, the discharge of the VSA varied greatly. This was due to transpiration by the plants, which became active when soil temperatures were above 10°C. We found that shortly after the temperature increased above 10°C the baseflow stopped and only surface runoff occurred when rainstorms exceeded the storage capacity of the soil in at least a portion of the variable source area. Since plant growth in the variable source area was a major variable determining the base flow behavior, changes in temperature in the future - affecting the duration of the growing season - will affect baseflow and related transport of nutrients and other chemicals many times more than small temperature-related increases in potential evaporation rate. This in turn will directly change the water availability and pollutant transport in the many surface source watersheds with variable source area hydrology.
Jennings, Cecil A.; Sundmark, Aaron P.
2017-01-01
The relationships between environmental variables and the growth rates of fishes are important and rapidly expanding topics in fisheries ecology. We used an information-theoretic approach to evaluate the influence of lake surface area and total phosphorus on the age-specific growth rates of Lepomis macrochirus (Bluegill) in 6 small impoundments in central Georgia. We used model averaging to create composite models and determine the relative importance of the variables within each model. Results indicated that surface area was the most important factor in the models predicting growth of Bluegills aged 1–4 years; total phosphorus was also an important predictor for the same age-classes. These results suggest that managers can use water quality and lake morphometry variables to create predictive models specific to their waterbody or region to help develop lake-specific management plans that select for and optimize local-level habitat factors for enhancing Bluegill growth.
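The information-theoretic machinery referred to above boils down to Akaike weights; the sketch below converts AIC values for a candidate model set into weights and sums them to obtain the relative importance of a predictor (the AIC values and model set are hypothetical, not the study's results).

```python
import numpy as np

def akaike_weights(aic_values):
    """Convert AIC values for a candidate model set into Akaike weights."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical candidate set: {area}, {phosphorus}, {area + phosphorus}
weights = akaike_weights([112.4, 115.9, 110.8])
print("Akaike weights:", weights)

# Relative importance of a predictor = sum of weights of the models containing it.
contains_area = np.array([1, 0, 1])
print("relative importance of surface area:", np.sum(weights * contains_area))
```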
NASA Astrophysics Data System (ADS)
Moncoulon, D.; Labat, D.; Ardon, J.; Onfroy, T.; Leblois, E.; Poulard, C.; Aji, S.; Rémy, A.; Quantin, A.
2013-07-01
The analysis of flood exposure at a national scale for the French insurance market must combine the generation of a probabilistic event set of all possible, but not yet observed, flood situations with hazard and damage modeling. In this study, hazard and damage models are calibrated on a 1995-2012 historical event set, both for hazard results (river flow, flooded areas) and loss estimations. Thus, uncertainties in the deterministic estimation of a single event loss are known before simulating a probabilistic event set. To take into account at least 90% of the insured flood losses, the probabilistic event set must combine the river overflow (small and large catchments) with the surface runoff due to heavy rainfall on the slopes of the watershed. Indeed, internal studies of the CCR claims database have shown that approximately 45% of the insured flood losses are located inside the floodplains and 45% outside; the remaining 10% are due to sea-surge floods and groundwater rise. In this approach, two independent probabilistic methods are combined to create a single flood loss distribution: generation of fictive river flows based on the historical records of the river gauge network and generation of fictive rain fields on small catchments, calibrated on the 1958-2010 Météo-France rain database SAFRAN. All the events in the probabilistic event sets are simulated with the deterministic model. This hazard and damage distribution is used to simulate the flood losses at the national scale for an insurance company (MACIF) and to generate flood areas associated with hazard return periods. The flood maps concern river overflow and surface water runoff. Validation of these maps is conducted by comparison with address-located claim data on a small catchment (downstream Argens).
Mineral resource potential of the Middle Santiam Roadless Area, Linn County, Oregon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, G.W.
1984-01-01
This report discusses the results of a mineral survey of the Middle Santiam Roadless Area (06929), Willamette National Forest, Linn County, Oregon. The Middle Santiam Roadless Area lies immediately east of the Quartzville mining district, a district that has yielded small amounts of base- and precious-metal ores. Many rock types and alteration features that characterize the mining district occur only in the western part of the roadless area, and analysis of a few samples from this part of the roadless area indicates evidence of weak mineralization. The western part of the roadless area is therefore identified as having a moderate potential for small deposits of base and precious metals and a low potential for large very low-grade precious-metal deposits. The eastern part of the roadless area has a low potential for metalliferous deposits. 7 refs., 4 figs., 1 tab.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burns, B.A.
This report reviews social and behavioral science models and techniques for their possible use in understanding and predicting consumer energy decision making and behaviors. A number of models and techniques have been developed that address different aspects of the decision process, use different theoretical bases and approaches, and have been aimed at different audiences. Three major areas of discussion were selected: (1) models of adaptation to social change, (2) decision making and choice, and (3) diffusion of innovation. Within these three areas, the contributions of psychologists, sociologists, economists, marketing researchers, and others were reviewed. Five primary components of the models were identified and compared. The components are: (1) situational characteristics, (2) product characteristics, (3) individual characteristics, (4) social influences, and (5) the interaction or decision rules. The explicit use of behavioral and social science models in energy decision-making and behavior studies has been limited. Examples are given of a small number of energy studies which applied and tested existing models in studying the adoption of energy conservation behaviors and technologies, and solar technology.
Zeka, Ariana; Melly, Steve J; Schwartz, Joel
2008-01-01
Background: Air pollution and social characteristics have been shown to affect indicators of health. While the use of spatial methods to estimate exposure to air pollution has increased the power to detect effects, questions have been raised about the potential for confounding by social factors. Methods: A study of singleton births in Eastern Massachusetts was conducted between 1996 and 2002 to examine the association between indicators of traffic, land use, individual and area-based socioeconomic measures (SEM), and birth outcomes (birth weight, small for gestational age, and preterm births), in a two-level hierarchical model. Results: We found effects of both individual (education, race, prenatal care index) and area-based (median household income) SEM with all birth outcomes. The associations for traffic and land use variables were seen mainly with birth weight, with an exception for an effect of cumulative traffic density on small for gestational age. Race/ethnicity of the mother was an important predictor of birth outcomes and a strong confounder for both area-based SEM and indices of the physical environment. The effects of traffic and land use differed by level of education and median household income. Conclusion: Overall, the findings of the study suggested a greater likelihood of reduced birth weight and preterm births among the more socially disadvantaged, and a greater risk of reduced birth weight associated with traffic exposures. The results revealed the importance of controlling simultaneously for SEM and environmental exposures as the way to better understand determinants of health. PMID:19032747
Ohio Teacher Professional Development in the Physical Sciences
NASA Astrophysics Data System (ADS)
Cervenec, Jason; Harper, Kathleen A.
2006-02-01
An in-service teacher program held during the summers of 2004 and 2005 is described. This program, sponsored with state funds, drew a varied group of participants to learn Modeling Instruction in physics. The workshop leaders used the state science proficiency standards and physics education research (PER) results to guide many of the workshop's activities. In 2004, the participants experienced the Modeling mechanics curriculum while pretending to be students; in 2005, the teachers worked in small teams to develop Modeling-consistent units in other areas, often utilizing PER-based materials. Indications are that the experience was valuable to the teachers and that the workshop series should be offered for a new cohort.
Lonati, Giovanni; Cernuschi, Stefano; Sidi, Shelina
2010-12-01
This work is intended to assess the impact on local air quality of atmospheric emissions from port activities for a new port planned in the Mediterranean Sea. The sources of air pollutants in the harbour area are the auxiliary engines used by ships at berth during loading/offloading operations. A fleet activity-based methodology is first applied to evaluate annual pollutant emissions (NO(X), SO(X), PM, CO and VOC) based on vessel traffic data, ship tonnage and in-port hotelling time for loading/offloading operations. The 3-dimensional Calpuff transport and dispersion model is then applied for the subsequent assessment of the ground-level spatial distribution of atmospheric pollutants for both long-term and short-term averaging times. Compliance with current air quality standards in the port area is finally evaluated and indications for port operation are provided. Some methodological aspects of the impact assessment procedure, namely the definition of emission scenarios and the set-up of model simulations at the project stage, are specifically addressed, suggesting a pragmatic approach for similar evaluations of planned small new ports. Copyright © 2010 Elsevier B.V. All rights reserved.
De Roeck, Els; Van Coillie, Frieke; De Wulf, Robert; Soenen, Karen; Charlier, Johannes; Vercruysse, Jozef; Hantson, Wouter; Ducheyne, Els; Hendrickx, Guy
2014-12-01
The visualization of vector occurrence in space and time is an important aspect of studying vector-borne diseases. Detailed maps of possible vector habitats provide valuable information for the prediction of infection risk zones but are currently lacking for most parts of the world. Nonetheless, monitoring vector habitats from the finest scales up to farm level is of key importance to refine currently existing broad-scale infection risk models. Using Fasciola hepatica, a parasitic liver fluke, as a case in point, this study illustrates the potential of very high resolution (VHR) optical satellite imagery to efficiently and semi-automatically detect detailed vector habitats. A WorldView2 satellite image with <5 m resolution was acquired in the spring of 2013 for the area around Bruges, Belgium, a region where dairy farms suffer from liver fluke infections transmitted by freshwater snails. The vector thrives in small water bodies (SWBs), such as ponds, ditches and other humid areas consisting of open water, aquatic vegetation and/or inundated grass. These water bodies can be as small as a few m² and are most often not present on existing land cover maps because of their small size. We present a classification procedure based on object-based image analysis (OBIA) that proved valuable for detecting SWBs at a fine scale in an operational and semi-automated way. The classification results were compared to field and other reference data such as existing broad-scale maps and expert knowledge. Overall, the SWB detection accuracy reached up to 87%. The resulting fine-scale SWB map can be used as input for spatial distribution modelling of the liver fluke snail vector to enable development of improved infection risk mapping and management advice adapted to specific, local farm situations.
Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators
Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.
2003-01-01
Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches.
Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
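As a minimal illustration of the grid-based estimator described above, the sketch below computes D̂ = N̂/Â, widening the nominal trapping-grid area by a boundary strip based on the mean maximum distance moved (MMDM). The numbers and the strip width are hypothetical; the CAPTURE/DISTANCE analyses in the study are far richer than this.

```python
import numpy as np

def effective_area_ha(grid_side_m: float, strip_width_m: float) -> float:
    """Grid area plus a surrounding boundary strip, in hectares."""
    side = grid_side_m + 2.0 * strip_width_m
    return side ** 2 / 10_000.0  # m^2 -> ha

def grid_density(n_hat: float, grid_side_m: float, mmdm_m: float) -> float:
    """D-hat = N-hat / A-hat, with A-hat widened by the MMDM-based strip,
    in the spirit of the grid-based estimators discussed above."""
    return n_hat / effective_area_ha(grid_side_m, mmdm_m)

# Hypothetical numbers, for illustration only.
print(grid_density(n_hat=38.0, grid_side_m=100.0, mmdm_m=25.0))  # animals per ha
```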
Lin, Jie; Carter, Corey A; McGlynn, Katherine A; Zahm, Shelia H; Nations, Joel A; Anderson, William F; Shriver, Craig D; Zhu, Kangmin
2015-12-01
Accurate prognosis assessment after non-small-cell lung cancer (NSCLC) diagnosis is an essential step for making effective clinical decisions. This study aimed to develop a prediction model with routinely available variables to assess prognosis in patients with NSCLC in the U.S. Military Health System. We used the linked database from the Department of Defense's Central Cancer Registry and the Military Health System Data Repository. The data set was randomly and equally split into a training set to guide model development and a testing set to validate the model prediction. Stepwise Cox regression was used to identify predictors of survival. Model performance was assessed by calculating the area under the receiver operating characteristic (ROC) curves and constructing calibration plots. A simple risk scoring system was developed to aid quick risk score calculation and risk estimation for NSCLC clinical management. The study subjects were 5054 patients diagnosed with NSCLC between 1998 and 2007. Age, sex, tobacco use, tumor stage, histology, surgery, chemotherapy, peripheral vascular disease, cerebrovascular disease, and diabetes mellitus were identified as significant predictors of survival. Calibration showed high agreement between predicted and observed event rates. The area under the ROC curve reached 0.841, 0.849, 0.848, and 0.838 at 1, 2, 3, and 5 years, respectively. This is the first NSCLC prognosis model for quick risk assessment within the Military Health System. After external validation, the model can be translated into clinical use both as a web-based tool and through mobile applications easily accessible to physicians, patients, and researchers.
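The abstract does not give the published coefficients or point values, so the sketch below only illustrates, with hypothetical Cox log-hazard-ratio coefficients, how a simple risk scoring system can be derived from a fitted Cox model: compute the linear predictor and rescale it to integer points.

```python
import numpy as np

# Hypothetical Cox coefficients (log hazard ratios); the actual model in the
# study used age, sex, tobacco use, stage, histology, surgery, comorbidities, etc.
beta = {"age_per_10yr": 0.25, "stage_III_IV": 0.90, "no_surgery": 0.70, "diabetes": 0.20}

def risk_score(patient: dict) -> float:
    """Linear predictor sum(beta_i * x_i); higher values mean worse prognosis."""
    return sum(beta[k] * patient.get(k, 0.0) for k in beta)

def simple_points(patient: dict) -> int:
    """Quick scoring: scale the linear predictor to integer points (here, x10)."""
    return int(round(10 * risk_score(patient)))

print(simple_points({"age_per_10yr": 7, "stage_III_IV": 1, "no_surgery": 1}))
```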
A Practical, Robust and Fast Method for Location Localization in Range-Based Systems.
Huang, Shiping; Wu, Zhifeng; Misra, Anil
2017-12-11
Location localization technology is used in a number of industrial and civil applications. Real-time localization accuracy is highly dependent on the quality of the distance measurements and the efficiency of solving the localization equations. In this paper, we provide a novel approach to solve the nonlinear localization equations efficiently while simultaneously eliminating bad measurement data in range-based systems. A geometric intersection model was developed to narrow the target search area, where Newton's Method and the Direct Search Method are used to search for the unknown position. Not only does the geometric intersection model offer a small bounded search domain for Newton's Method and the Direct Search Method, but it can also self-correct bad measurement data. The Direct Search Method is useful for coarse localization or a small target search domain, while Newton's Method can be used for accurate localization. For accurate localization, the proposed Modified Newton's Method (MNM) addresses the challenges of avoiding local extrema, singularities, and the choice of initial value. The applicability and robustness of the developed method have been demonstrated by experiments with an indoor system.
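As a rough illustration of solving the nonlinear range equations ||x − a_i|| = d_i with a Newton-type iteration, the sketch below runs a plain Gauss-Newton least-squares update from a starting point inside a bounded search domain. The anchors and ranges are synthetic, and the paper's geometric intersection model and Modified Newton's Method details are not reproduced here.

```python
import numpy as np

def gauss_newton_localize(anchors, dists, x0, iters=20):
    """Solve ||x - a_i|| = d_i in the least-squares sense with a
    Newton-type (Gauss-Newton) iteration starting from x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors                   # (n, 2) vectors to anchors
        r = np.linalg.norm(diff, axis=1)     # predicted ranges
        residual = r - dists
        J = diff / r[:, None]                # Jacobian of ranges w.r.t. x
        step, *_ = np.linalg.lstsq(J, residual, rcond=None)
        x -= step
        if np.linalg.norm(step) < 1e-9:
            break
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)  # noise-free example
print(gauss_newton_localize(anchors, dists, x0=[5.0, 5.0]))
```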
Processing techniques development, volume 3
NASA Technical Reports Server (NTRS)
Landgrebe, D. A. (Principal Investigator); Anuta, P. E.; Hixson, M. M.; Swain, P. H.
1978-01-01
The author has identified the following significant results. Analysis of the geometric characteristics of the aircraft synthetic aperture radar (SAR) relative to LANDSAT indicated that relatively low-order polynomials would model the distortions to subpixel accuracy and bring SAR into registration for good quality imagery. The area analyzed was also small, about 10 miles square, which is an additional constraint. For the Air Force/ERIM data, none of the tested methods could achieve subpixel accuracy. The reasons for this are unknown; however, the noisy (high scintillation) nature of the data and the attendant unrecognizability of features contribute to this error. It is concluded that the quadratic model would adequately provide distortion modeling for small areas, i.e., 10 to 20 miles square.
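A low-order polynomial distortion model of the kind the study found adequate can be fit by ordinary least squares on control points. The sketch below fits a full quadratic 2D mapping; the control points and the synthetic distortion are hypothetical.

```python
import numpy as np

def quad_terms(x, y):
    """Design matrix for a full quadratic 2D polynomial."""
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_quadratic_warp(src, dst):
    """Least-squares coefficients mapping src (x, y) to dst (u, v)."""
    A = quad_terms(src[:, 0], src[:, 1])
    coef_u, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    coef_v, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return coef_u, coef_v

def apply_warp(coef_u, coef_v, pts):
    A = quad_terms(pts[:, 0], pts[:, 1])
    return np.column_stack([A @ coef_u, A @ coef_v])

# Hypothetical control points (SAR pixel coords -> LANDSAT pixel coords).
rng = np.random.default_rng(0)
src = rng.uniform(0, 500, size=(12, 2))
dst = src * 1.01 + 3.0 + 0.00002 * src**2           # mild synthetic distortion
cu, cv = fit_quadratic_warp(src, dst)
print(np.abs(apply_warp(cu, cv, src) - dst).max())   # residual, ideally subpixel
```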
Numerical Simulation of Ground-Water Withdrawals in the Southern Lihue Basin, Kauai, Hawaii
Izuka, Scot K.; Oki, Delwyn S.
2002-01-01
Numerical simulations indicate that ground-water withdrawals from the Hanamaulu and Puhi areas of the southern Lihue Basin will result in a decline in water levels and reductions in base flows of streams near proposed new water-supply wells. Most of the changes will be attained within 10 to 20 years of the start of pumping. Except for areas such as Puhi and Kilohana, the freshwater lens in most inland areas of the southern Lihue Basin is thick and model simulations indicate that changes in water level and the position of the freshwater-saltwater interface in response to pumping will be small relative to the present thickness of the freshwater lens. Effects of the proposed withdrawals on streamflow depend on withdrawal rate and proximity of the wells to streams. Placing pumped wells away from streams with low base flow and toward streams with high base flow can reduce the relative effect on individual streams. Simulation of the 0.42-million-gallon-per-day increase in withdrawal projected for 2000 indicates that the resulting changes in water levels and interface position, relative to conditions prior to the withdrawal increase, will be small, and that stream base flow will be reduced by less than 10 percent. Simulation of the 0.83-million-gallon-per-day withdrawal projected for 2010 indicates further thinning of the freshwater lens in the Puhi area, where the lens already may be thin, as well as base-flow reduction in Nawiliwili Stream. Simulation of an alternative distribution of the 0.83-million-gallon-per-day withdrawal indicates that the effects can be reduced by shifting most of the new withdrawal to the Hanamaulu area where the freshwater lens is thicker and stream base flows are greater. Simulation of the 1.16-million-gallon-per-day increase in withdrawal projected for 2020 indicates that if withdrawal is distributed only among Hanamaulu wells 1, 3, and 4, and Puhi well 5A, further thinning of the already-thin freshwater lens in the Puhi area would occur. Such a distribution would also exceed the maximum draft recommended by the water-systems standards used in Hawaii. Another simulation in which part of the 1.16 million gallons per day was distributed among three additional hypothetical wells in the Hanamaulu area showed that the pumping effects could be shifted from the Puhi area to the Hanamaulu area, where the freshwater lens is thicker, but that base flow in Hanamaulu Stream may decrease by as much as 16 percent.
Ferguson, Neil S.; Lamb, Karen E.; Wang, Yang; Ogilvie, David; Ellaway, Anne
2013-01-01
Obesity and other chronic conditions linked with low levels of physical activity (PA) are associated with deprivation. One reason for this could be that it is more difficult for low-income groups to access recreational PA facilities such as swimming pools and sports centres than high-income groups. In this paper, we explore the distribution of access to PA facilities by car and bus across mainland Scotland by income deprivation at datazone level. GIS car and bus networks were created to determine the number of PA facilities accessible within travel times of 10, 20 and 30 minutes. Multilevel negative binomial regression models were then used to investigate the distribution of the number of accessible facilities, adjusting for datazone population size and local authority. Access to PA facilities by car was significantly (p<0.01) higher for the most affluent quintile of area-based income deprivation than for most other quintiles in small towns and all other quintiles in rural areas. Accessibility by bus was significantly lower for the most affluent quintile than for other quintiles in urban areas and small towns, but not in rural areas. Overall, we found that the most disadvantaged groups were those without access to a car and living in the most affluent areas or in rural areas. PMID:23409012
Landmeyer, J.E.
1994-01-01
Ground-water samples were collected from four shallow water-table aquifer observation wells beneath the Small-Arms Firing Range study area at Shaw Air Force Base. Water-chemistry analyses indicated that total lead concentrations in shallow ground water beneath the study area do not exceed the U.S. Environmental Protection Agency maximum contaminant level established for lead in drinking water (0.05 milligrams per liter). All other trace element total concentrations in ground water beneath the study area were at or below the detection limit of the analytical methodology.
Juckem, Paul F.
2009-01-01
A regional, two-dimensional, areal ground-water-flow model was developed to simulate the ground-water-flow system and ground-water/surface-water interaction in the Rock River Basin. The model was developed by the U.S. Geological Survey (USGS), in cooperation with the Rock River Coalition. The objectives of the regional model were to improve understanding of the ground-water-flow system and to develop a tool suitable for evaluating the effects of potential regional water-management programs. The computer code GFLOW was used because of the ease with which the model can simulate ground-water/surface-water interactions, provide a framework for simulating regional ground-water-flow systems, and be refined in a stepwise fashion to incorporate new data and simulate ground-water-flow patterns at multiple scales. The ground-water-flow model described in this report simulates the major hydrogeologic features of the modeled area, including bedrock and surficial aquifers, ground-water/surface-water interactions, and ground-water withdrawals from high-capacity wells. The steady-state model treats the ground-water-flow system as a single layer with hydraulic conductivity and base elevation zones that reflect the distribution of lithologic groups above the Precambrian bedrock and a regionally significant confining unit, the Maquoketa Formation. In the eastern part of the Basin where the shale-rich Maquoketa Formation is present, deep ground-water flow in the sandstone aquifer below the Maquoketa Formation was not simulated directly, but flow into this aquifer was incorporated into the GFLOW model from previous work in southeastern Wisconsin. Recharge was constrained primarily by stream base-flow estimates and was applied uniformly within zones guided by regional infiltration estimates for soils. The model includes average ground-water withdrawals from 1997 to 2006 for municipal wells and from 1997 to 2005 for high-capacity irrigation, industrial, and commercial wells. In addition, the model routes tributary base flow through the river network to the Rock River. The parameter-estimation code PEST was linked to the GFLOW model to select the combination of parameter values best able to match more than 8,000 water-level measurements and base-flow estimates at 9 streamgages. Results from the calibrated GFLOW model show simulated (1) ground-water-flow directions, (2) ground-water/surface-water interactions, as depicted in a map of gaining and losing river and lake sections, (3) ground-water contributing areas for selected tributary rivers, and (4) areas of relatively local ground water captured by rivers. Ground-water flow patterns are controlled primarily by river geometries, with most river sections gaining water from the ground-water-flow system; losing sections are most common on the downgradient shore of lakes and reservoirs or near major pumping centers. Ground-water contributing areas to tributary rivers generally coincide with surface watersheds; however the locations of ground-water divides are controlled by the water table, whereas surface-water divides are controlled by surface topography. Finally, areas of relatively local ground water captured by rivers generally extend upgradient from rivers but are modified by the regional flow pattern, such that these areas tend to shift toward regional ground-water divides for relatively small rivers. It is important to recognize the limitations of this regional-scale model. 
Heterogeneities in subsurface properties and in recharge rates are considered only at a very broad scale (miles to tens of miles). No account is taken of vertical variations in properties or pumping rates, and no provision is made to account for stacked ground-water-flow systems that have different flow patterns at different depths. Small-scale flow systems (hundreds to thousands of feet) associated with minor water bodies are not considered; as a result, the model is not currently designed for simulating site-specific conditions.
The simcyp population based simulator: architecture, implementation, and quality assurance.
Jamei, Masoud; Marciniak, Steve; Edwards, Duncan; Wragg, Kris; Feng, Kairui; Barnett, Adrian; Rostami-Hodjegan, Amin
2013-01-01
Developing a user-friendly platform that can handle a vast number of complex physiologically based pharmacokinetic and pharmacodynamic (PBPK/PD) models both for conventional small molecules and larger biologic drugs is a substantial challenge. Over the last decade the Simcyp Population Based Simulator has gained popularity in major pharmaceutical companies (70% of the top 40 in terms of R&D spending). Under the Simcyp Consortium guidance, it has evolved from a simple drug-drug interaction tool to a sophisticated and comprehensive Model Based Drug Development (MBDD) platform that covers a broad range of applications spanning from early drug discovery to late drug development. This article provides an update on the latest architectural and implementation developments within the Simulator. Interconnection between peripheral modules, the dynamic model building process and compound and population data handling are all described. The Simcyp Data Management (SDM) system, which contains the system and drug databases, can help with implementing quality standards by seamless integration and tracking of any changes. This also helps with internal approval procedures, validation and auto-testing of the newly implemented models and algorithms, an area of high interest to regulatory bodies.
Effects of climate change on soil moisture over China from 1960-2006
Zhu, Q.; Jiang, H.; Liu, J.
2009-01-01
Soil moisture is an important variable in the climate system and has a sensitive impact on the global climate; it is therefore one of the essential components in climate change studies. The Integrated Biosphere Simulator (IBIS) is used to evaluate the spatial and temporal patterns of soil moisture across China under climate change conditions for the period 1960-2006. Results show that the model performed better in the warm season than in the cold season. Mean errors (ME) are within 10% for all months and root mean squared errors (RMSE) are within 10% except in the winter season. The model captured more than 50% of the spatial variability in warm seasons. Trend analysis based on the Mann-Kendall method indicated that soil moisture decreased in most areas of China, especially in northern China. The areas with significant increasing trends in soil moisture are mainly located in northwestern China and small areas in southeastern China and the eastern Tibetan plateau. © 2009 IEEE.
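For readers unfamiliar with the trend test used here, the sketch below is a minimal Mann-Kendall implementation (without tie or autocorrelation corrections) applied to a synthetic soil-moisture series; a negative Z indicates a decreasing trend.

```python
import numpy as np
from math import erf, sqrt

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): returns S, Z and the
    two-sided p-value. Negative Z indicates a decreasing trend."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / sqrt(var_s)
    elif s < 0:
        z = (s + 1) / sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal p-value
    return s, z, p

# Illustrative soil-moisture-like series with a slight downward drift.
series = 0.30 - 0.001 * np.arange(47) + 0.01 * np.random.default_rng(1).standard_normal(47)
print(mann_kendall(series))
```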
Wang, Yueyan; Ponce, Ninez A; Wang, Pan; Opsomer, Jean D; Yu, Hongjian
2015-12-01
We propose a method to meet challenges in generating health estimates for granular geographic areas in which the survey sample size is extremely small. Our generalized linear mixed model predicts health outcomes using both individual-level and neighborhood-level predictors. The model's feature of nonparametric smoothing function on neighborhood-level variables better captures the association between neighborhood environment and the outcome. Using 2011 to 2012 data from the California Health Interview Survey, we demonstrate an empirical application of this method to estimate the fraction of residents without health insurance for Zip Code Tabulation Areas (ZCTAs). Our method generated stable estimates of uninsurance for 1519 of 1765 ZCTAs (86%) in California. For some areas with great socioeconomic diversity across adjacent neighborhoods, such as Los Angeles County, the modeled uninsured estimates revealed much heterogeneity among geographically adjacent ZCTAs. The proposed method can increase the value of health surveys by providing modeled estimates for health data at a granular geographic level. It can account for variations in health outcomes at the neighborhood level as a result of both socioeconomic characteristics and geographic locations.
A New Framework for Cumulus Parametrization - A CPT in action
NASA Astrophysics Data System (ADS)
Jakob, C.; Peters, K.; Protat, A.; Kumar, V.
2016-12-01
The representation of convection in climate models remains a major Achilles heel in our pursuit of better predictions of global and regional climate. The basic principle underpinning the parametrisation of tropical convection in global weather and climate models is that there exist discernible interactions between the resolved model scale and the parametrised cumulus scale. Furthermore, there must be at least some predictive power in the larger scales for the statistical behaviour on small scales for us to be able to formally close the parametrised equations. The presentation will discuss a new framework for cumulus parametrisation based on the idea of separating the prediction of cloud area from that of velocity. This idea is put into practice by combining an existing multi-scale stochastic cloud model with observations to arrive at the prediction of the area fraction for deep precipitating convection. Using mid-tropospheric humidity and vertical motion as predictors, the model is shown to reproduce the observed behaviour of both the mean and the variability of deep convective area fraction well. The framework allows for the inclusion of convective organisation and can - in principle - be made resolution-aware or resolution-independent. When combined with simple assumptions about cloud-base vertical motion, the model can be used as a closure assumption in any existing cumulus parametrisation. Results of applying this idea in the ECHAM model indicate significant improvements in the simulation of tropical variability, including but not limited to the MJO. This presentation will highlight how the close collaboration of the observational, theoretical and model development communities in the spirit of the climate process teams can lead to significant progress in long-standing issues in climate modelling while preserving the freedom of individual groups in pursuing their specific implementation of an agreed framework.
Price, Charles A; Knox, Sarah-Jane C; Brodribb, Tim J
2013-01-01
Models that predict the form of hierarchical branching networks typically invoke optimization based on biomechanical similitude, the minimization of impedance to fluid flow, or construction costs. Unfortunately, due to the small size and high number of vein segments found in real biological networks, complete descriptions of networks needed to evaluate such models are rare. To help address this we report results from the analysis of the branching geometry of 349 leaf vein networks comprising over 1.5 million individual vein segments. In addition to measuring the diameters of individual veins before and after vein bifurcations, we also assign vein orders using the Horton-Strahler ordering algorithm adopted from the study of river networks. Our results demonstrate that across all leaves, both radius tapering and the ratio of daughter to parent branch areas for leaf veins are in strong agreement with the expectation from Murray's law. However, as veins become larger, area ratios shift systematically toward values expected under area-preserving branching. Our work supports the idea that leaf vein networks differentiate roles of leaf support and hydraulic supply between hierarchical orders.
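A quick way to see the two branching regimes discussed above is to compare, at a single bifurcation, the Murray's-law ratio (daughter radii cubed over parent radius cubed) with the area ratio (daughter areas over parent area). The radii below are hypothetical; the study's Horton-Strahler ordering and network-wide statistics are not reproduced.

```python
import numpy as np

def murray_exponent_check(r_parent, r_daughters, exponent=3.0):
    """Ratio sum(r_d^k) / r_p^k; values near 1.0 mean the bifurcation obeys a
    Murray-type conservation law with the given exponent k."""
    return np.sum(np.asarray(r_daughters) ** exponent) / r_parent ** exponent

def area_ratio(r_parent, r_daughters):
    """Total daughter cross-sectional area over parent area;
    1.0 corresponds to area-preserving branching."""
    return np.sum(np.asarray(r_daughters) ** 2) / r_parent ** 2

# Hypothetical vein segment radii (arbitrary units).
print(murray_exponent_check(1.0, [0.79, 0.79]))  # ~0.99, close to Murray's law
print(area_ratio(1.0, [0.79, 0.79]))             # ~1.25, not area-preserving
```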
Alwee, Razana; Hj Shamsuddin, Siti Mariyam; Sallehuddin, Roselina
2013-01-01
Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, real crime data commonly consist of both linear and nonlinear components, and a single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and the autoregressive integrated moving average (ARIMA) for crime rate forecasting. SVR is very robust with small training data and high-dimensional problems, while ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, and ARIMA is not robust when applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model produces more accurate forecasts than the individual models. PMID:23766729
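A sketch of a hybrid of the kind described, with ARIMA capturing the linear component and SVR fit to lagged ARIMA residuals for the nonlinear part, is shown below. The data are synthetic, and the model order and SVR parameters are set by hand here rather than by particle swarm optimization.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.svm import SVR

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.2, 1.0, 120)) + 2 * np.sin(np.arange(120) / 6)  # synthetic series

# 1) Linear component with ARIMA (order chosen by hand here, not by PSO).
arima = ARIMA(y, order=(1, 1, 1)).fit()
resid = arima.resid

# 2) Nonlinear component: SVR on lagged residuals.
lags = 3
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
t = resid[lags:]
svr = SVR(C=10.0, epsilon=0.1).fit(X, t)

# 3) Hybrid one-step-ahead forecast = ARIMA forecast + predicted residual.
next_resid = svr.predict(resid[-lags:].reshape(1, -1))[0]
print(arima.forecast(1)[0] + next_resid)
```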
Tracing the Attention of Moving Citizens
Wu, Lingfei; Wang, Cheng-Jun
2016-01-01
With the widespread use of mobile computing devices in contemporary society, our trajectories in the physical space and virtual world are increasingly closely connected. Using the anonymous smartphone data of 1 × 10^5 users in a major city of China, we study the interplay between online and offline human behaviors by constructing the mobility network (offline) and the attention network (online). Using the network renormalization technique, we find that they belong to two different classes: the mobility network is small-world, whereas the attention network is fractal. We then divide the city into different areas based on the features of the mobility network discovered under renormalization. Interestingly, this spatial division manifests the location-based online behaviors, for example shopping, dating, and taxi-requesting. Finally, we offer a geometric network model to help us understand the relationship between small-world and fractal networks. PMID:27608929
NASA Astrophysics Data System (ADS)
Hanzelka, Pavel; Vonka, Jakub; Musilova, Vera
2013-08-01
We have designed a supporting system to fix a sample holder of a scanning tunneling microscope in an UHV chamber at room temperature. The microscope will operate down to a temperature of 20 K. Low thermal conductance, high mechanical stiffness, and small dimensions are the main features of the supporting system. Three sets of four glass balls placed in vertices of a tetrahedron are used for thermal insulation based on small contact areas between the glass balls. We have analyzed the thermal conductivity of the contacts between the balls mutually and between a ball and a metallic plate while the results have been applied to the entire support. The calculation based on a simple model of the setup has been verified with some experimental measurements. In comparison with other feasible supporting structures, the designed support has the lowest thermal conductance.
Modelling the dependence of contrast sensitivity on grating area and spatial frequency.
Rovamo, J; Luntinen, O; Näsänen, R
1993-12-01
We modelled the human foveal visual system in a detection task as a simple image processor comprising (i) low-pass filtering due to the optical transfer function of the eye, (ii) high-pass filtering of neural origin, (iii) addition of internal neural noise, and (iv) detection by a local matched filter. Its detection efficiency for gratings was constant up to a critical area but then decreased with increasing area. To test the model we measured Michelson contrast sensitivity as a function of grating area at spatial frequencies of 0.125-32 c/deg for simple vertical and circular cosine gratings. In circular gratings luminance was sinusoidally modulated as a function of the radius of the grating field. In agreement with the model, contrast sensitivity at all spatial frequencies increased in proportion to the square-root of grating area at small areas. When grating area exceeded critical area, the increase saturated and contrast sensitivity became independent of area at large grating areas. Spatial integration thus obeyed Piper's law at small grating areas. The critical area of spatial integration, marking the cessation of Piper's law, was constant in solid degrees at low spatial frequencies but inversely proportional to spatial frequency squared at medium and high spatial frequencies. At low spatial frequencies the maximum contrast sensitivity obtainable by spatial integration increased in proportion to spatial frequency but at high spatial frequencies it decreased in proportion to the cube of the increasing spatial frequency. The increase was due to high-pass filtering of neural origin (lateral inhibition) and the decrease was mainly due to the optical transfer function of the eye. Our model explained 95% of the total variance of the contrast sensitivity data.
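The spatial-integration behaviour described above (sensitivity rising with the square root of grating area and saturating at the critical area) can be written as a simple piecewise function. The sketch below captures only this qualitative relation, not the authors' full filter-plus-noise model, and the parameter values are illustrative.

```python
import numpy as np

def contrast_sensitivity(area, a_critical, s_max):
    """Piper's-law-style spatial integration: sensitivity rises with the
    square root of grating area and saturates at the critical area."""
    area = np.asarray(area, dtype=float)
    return s_max * np.sqrt(np.minimum(area, a_critical) / a_critical)

areas = np.array([0.25, 1.0, 4.0, 16.0, 64.0])   # deg^2, illustrative values
print(contrast_sensitivity(areas, a_critical=4.0, s_max=200.0))
```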
Lòpez-De Fede, Ana; Stewart, John E; Hardin, James W; Mayfield-Smith, Kathy
2016-06-10
Measures of small-area deprivation may be valuable in geographically targeting limited resources to prevent, diagnose, and effectively manage chronic conditions in vulnerable populations. We developed a census-based small-area socioeconomic deprivation index specifically to predict chronic disease burden among publicly insured Medicaid recipients in South Carolina, a relatively poor state in the southern United States. We compared the predictive ability of the new index with that of four other small-area deprivation indicators. To derive the ZIP Code Tabulation Area-Level Palmetto Small-Area Deprivation Index (Palmetto SADI), we evaluated ten census variables across five socioeconomic deprivation domains, identifying the combination of census indicators most highly correlated with a set of five chronic disease conditions among South Carolina Medicaid enrollees. In separate validation studies, we used both logistic and spatial regression methods to assess the ability of Palmetto SADI to predict chronic disease burden among state Medicaid recipients relative to four alternative small-area socioeconomic deprivation measures: the Townsend index of material deprivation; a single-variable poverty indicator; and two small-area designations of health care resource deprivation, Primary Care Health Professional Shortage Area and Medically Underserved Area/Medically Underserved Population. Palmetto SADI was the best predictor of chronic disease burden (presence of at least one condition and presence of two or more conditions) among state Medicaid recipients compared to all alternative deprivation measures tested. A low-cost, regionally optimized socioeconomic deprivation index, Palmetto SADI can be used to identify areas in South Carolina at high risk for chronic disease burden among Medicaid recipients and other low-income Medicaid-eligible populations for targeted prevention, screening, diagnosis, disease self-management, and care coordination activities.
Rooney, James P K; Tobin, Katy; Crampsie, Arlene; Vajda, Alice; Heverin, Mark; McLaughlin, Russell; Staines, Anthony; Hardiman, Orla
2015-10-01
Evidence of an association between areal ALS risk and population density has been previously reported. We aim to examine ALS spatial incidence in Ireland using small areas, to compare this analysis with our previous analysis of larger areas and to examine the associations between population density, social deprivation and ALS incidence. Residential area social deprivation has not been previously investigated as a risk factor for ALS. Using the Irish ALS register, we included all cases of ALS diagnosed in Ireland from 1995-2013. 2006 census data was used to calculate age and sex standardised expected cases per small area. Social deprivation was assessed using the pobalHP deprivation index. Bayesian smoothing was used to calculate small area relative risk for ALS, whilst cluster analysis was performed using SaTScan. The effects of population density and social deprivation were tested in two ways: (1) as covariates in the Bayesian spatial model; (2) via post-Bayesian regression. 1701 cases were included. Bayesian smoothed maps of relative risk at small area resolution matched closely to our previous analysis at a larger area resolution. Cluster analysis identified two areas of significant low risk. These areas did not correlate with population density or social deprivation indices. Two areas showing low frequency of ALS have been identified in the Republic of Ireland. These areas do not correlate with population density or residential area social deprivation, indicating that other reasons, such as genetic admixture may account for the observed findings. Copyright © 2015 Elsevier Inc. All rights reserved.
Winds of Change: Reflections on Community Based Child Development in Nepal. Lessons Learnt No. 6.
ERIC Educational Resources Information Center
Wright, Chris
In March 1989, a new program was started in the middle hills area of Nepal. A community-based child development project entered the community by concentrating first on the children, and starting in one small area, and extending to new village areas in subsequent years. As of March 1993, there were four Village Development Areas participating in…
Rollable nano-etched diffractive low-concentration PV sheets for small satellites
NASA Astrophysics Data System (ADS)
Brac-de-la-Perriere, Vincent; Kress, Bernard; Ben-Menahem, Shahar; Ishihara, Abraham K.; Dorais, Greg
2014-09-01
This paper discusses novel, rollable, mass-fabricable, low-concentration photovoltaic sheets for Cubesats, providing them with efficient photoelectric conversion of sunlight and secondary diffuse light. The wrap consists of three thin (of order a millimeter or less), cheap plastic-sheet layers, which can be rolled together in a spiral wrapping configuration when stowed. Preliminary simulations based on the above modeling approaches show that the designs (a) achieve comparable photovoltaic power (area for area) and (b) result in a flat angular response curve which remains flat from normal incidence out to over 35 degrees off the normal. The simulations were performed using a ray-tracing simulator built in Matlab. In addition, we have constructed a demonstrator using quartz wafers based on the optimized design to show the technology. Details of its fabrication are also provided.
DTMs Assessment to the Definition of Shallow Landslides Prone Areas
NASA Astrophysics Data System (ADS)
Martins, Tiago D.; Oka-Fiori, Chisato; Carvalho Vieira, Bianca; Montgomery, David R.
2017-04-01
Predictive methods have been developed, especially since the 1990s, to identify landslide-prone areas. One example is the physically based model SHALSTAB (Shallow Landsliding Stability Model), which calculates the potential instability for shallow landslides based on topography and physical soil properties. In such applications in Brazil, the Digital Terrain Model (DTM) has normally been obtained from conventional contour lines; recently, however, the LiDAR (Light Detection and Ranging) system has been widely used in Brazil. Thus, this study aimed to evaluate different DTMs, generated from conventional data and from LiDAR, and their influence on susceptibility maps for shallow landslides produced with the SHALSTAB model. To that end, we analyzed the physical properties of the soil, the response of the model when applying conventional and LiDAR topographical data to the generation of the DTM, and the shallow landslide susceptibility maps based on the different topographical data. The selected area is in the urban perimeter of the municipality of Antonina (PR), affected by widespread landslides in March 2011. Among the results, different LiDAR data interpolations were evaluated using GIS tools, with Triangulation/Natural Neighbor presenting the best performance. It was also found that for one of the evaluation indices (Scars Concentration), the LiDAR-derived DTM performed best compared with the DTM originating from contour lines, whereas the Landslide Potential index presented only a small increase. Consequently, it was possible to assess the DTMs, and the one derived from LiDAR improved the certainty percentage very little. A gap is also noted in research carried out in Brazil on the use of products generated from LiDAR data in geomorphological analysis.
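For context, SHALSTAB couples an infinite-slope stability criterion with steady-state, topographically driven hydrology. The sketch below evaluates a common cohesionless form of the critical steady-state recharge per grid cell; the soil parameters are hypothetical, unconditionally stable or unstable cells are not handled, and published implementations differ in details.

```python
import numpy as np

def shalstab_qcrit(slope_rad, area_per_width, transmissivity,
                   phi_rad, rho_ratio=1.6):
    """Critical steady-state recharge q_cr for the cohesionless SHALSTAB form:
        q_cr = T * sin(theta) / (a/b) * (rho_s/rho_w) * (1 - tan(theta)/tan(phi))
    where a/b is the upslope drainage area per unit contour length.
    Unconditionally stable/unstable cells need a separate check (omitted)."""
    ratio = 1.0 - np.tan(slope_rad) / np.tan(phi_rad)
    return transmissivity * np.sin(slope_rad) / area_per_width * rho_ratio * ratio

# Hypothetical grid cell: 30 deg slope, a/b = 50 m, T = 65 m^2/day, phi = 35 deg.
q = shalstab_qcrit(np.radians(30), 50.0, 65.0, np.radians(35))
print(q * 1000.0, "mm/day")  # cells needing less rain than a design storm are flagged unstable
```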
Rose, Amy N.; Nagle, Nicholas N.
2016-08-01
Techniques such as Iterative Proportional Fitting have been previously suggested as a means to generate new data with the demographic granularity of individual surveys and the spatial granularity of small area tabulations of censuses and surveys. This article explores internal and external validation approaches for synthetic, small area, household- and individual-level microdata using a case study for Bangladesh. Using data from the Bangladesh Census 2011 and the Demographic and Health Survey, we produce estimates of infant mortality rate and other household attributes for small areas using a variation of an iterative proportional fitting method called P-MEDM. We conduct an internal validation to determine: whether the model accurately recreates the spatial variation of the input data, how each of the variables performed overall, and how the estimates compare to the published population totals. We conduct an external validation by comparing the estimates with indicators from the 2009 Multiple Indicator Cluster Survey (MICS) for Bangladesh to benchmark how well the estimates compared to a known dataset which was not used in the original model. The results indicate that the estimation process is viable for regions that are better represented in the microdata sample, but also revealed the possibility of strong overfitting in sparsely sampled sub-populations.
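P-MEDM itself is a penalized, entropy-based variant, but its core reweighting idea can be seen in classical iterative proportional fitting: rescale a survey seed table until its margins match small-area totals. The sketch below uses toy marginals; it is not the P-MEDM algorithm.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, iters=100, tol=1e-8):
    """Classical iterative proportional fitting: rescale a seed table so its
    row and column sums match the target marginals."""
    w = seed.astype(float).copy()
    for _ in range(iters):
        w *= (row_targets / w.sum(axis=1))[:, None]   # match row sums
        w *= (col_targets / w.sum(axis=0))[None, :]   # match column sums
        if np.allclose(w.sum(axis=1), row_targets, rtol=tol):
            break
    return w

# Toy example: survey households (rows = household size class,
# cols = asset class) fit to hypothetical small-area census marginals.
seed = np.array([[20., 10.], [15., 30.]])
print(ipf(seed, row_targets=np.array([120., 180.]), col_targets=np.array([140., 160.])))
```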
Kang, Youngsig; Hahm, Hyojoon; Yang, Sunghwan; Kim, Taegu
2008-10-01
Behavior models have provided an accident proneness concept based on life change unit (LCU) factors. This paper describes the development of a Korean Life Change Unit (KLCU) model for workers and managers in fatal accident areas, as well as an evaluation of its application. Results suggest that the death of parents is the highest stress-giving factor for employees of small and medium sized industries, a rational finding from the viewpoint of Korean culture. The next stress-giving factors were shown to be the death of a spouse or loved ones, followed by the death of close family members, the death of close friends, changes in family members' health, unemployment, and jail terms. It turned out that these factors have a serious effect on industrial accidents and work-related diseases. The death of parents and close friends is ranked higher in the KLCU model than in that of Western society. Crucial information for industrial accident prevention in real fields will be provided, and the provided information will be useful for safety management programs related to accident prevention.
Serial grouping of 2D-image regions with object-based attention in humans.
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2016-06-13
After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas.
An analysis of spatial representativeness of air temperature monitoring stations
NASA Astrophysics Data System (ADS)
Liu, Suhua; Su, Hongbo; Tian, Jing; Wang, Weizhen
2018-05-01
Surface air temperature is an essential variable for monitoring the atmosphere, and it is generally acquired at meteorological stations that can provide information about only a small area within a radius of r meters (the r-neighborhood) of the station, which is called the representable radius. In studies on a local scale, ground-based observations of surface air temperatures obtained from scattered stations are usually interpolated using a variety of methods without ascertaining their effectiveness. Thus, it is necessary to evaluate the spatial representativeness of ground-based observations of surface air temperature before conducting studies on a local scale. The present study used remote sensing data to estimate the spatial distribution of surface air temperature using the advection-energy balance for air temperature (ADEBAT) model. Two target stations in the study area were selected to conduct an analysis of spatial representativeness. The results showed that one station (AWS 7) had a representable radius of about 400 m with a possible error of less than 1 K, while the other station (AWS 16) had a radius of about 250 m. The representable radius was large when the heterogeneity of land cover around the station was small.
Classification of Birds and Bats Using Flight Tracks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cullinan, Valerie I.; Matzner, Shari; Duberstein, Corey A.
Classification of birds and bats that use areas targeted for offshore wind farm development and the inference of their behavior is essential to evaluating the potential effects of development. The current approach to assessing the number and distribution of birds at sea involves transect surveys using trained individuals in boats or airplanes or using high-resolution imagery. These approaches are costly and have safety concerns. Based on a limited annotated library extracted from a single-camera thermal video, we provide a framework for building models that classify birds and bats and their associated behaviors. As an example, we developed a discriminant model for theoretical flight paths and applied it to data (N = 64 tracks) extracted from 5-min video clips. The agreement between model- and observer-classified path types was initially only 41%, but it increased to 73% when small-scale jitter was censored and path types were combined. Classification of 46 tracks of bats, swallows, gulls, and terns on average was 82% accurate, based on a jackknife cross-validation. Model classification of bats and terns (N = 4 and 2, respectively) was 94% and 91% correct, respectively; however, the variance associated with the tracks from these targets is poorly estimated. Model classification of gulls and swallows (N ≥ 18) was on average 73% and 85% correct, respectively. The models developed here should be considered preliminary because they are based on a small data set both in terms of the numbers of species and the identified flight tracks. Future classification models would be greatly improved by including a measure of distance between the camera and the target.
A new contrast-assisted method in microcirculation volumetric flow assessment
NASA Astrophysics Data System (ADS)
Lu, Sheng-Yi; Chen, Yung-Sheng; Yeh, Chih-Kuang
2007-03-01
Microcirculation volumetric flow rate is a significant index in disease diagnosis and treatment, for example in diabetes and cancer. In this study, we propose an integrated algorithm to assess microcirculation volumetric flow rate, including estimation of the blood-perfused area and the corresponding flow velocity maps, based on a high-frequency destruction/contrast-replenishment imaging technique. The perfused area indicates the blood flow regions, including capillaries, arterioles and venules. Because of the echo variance changes between the two images acquired before and after destruction of the ultrasonic contrast agents (UCAs), the perfused area can be estimated by a correlation-based approach. The flow velocity distribution within the perfused area can be estimated from the refilling time-intensity curves (TICs) after UCA destruction. Most studies use the rising exponential model proposed by Wei (1998) to fit the TICs. Nevertheless, we found that the TIC profile closely resembles a sigmoid function in simulations and in vitro experiments. The good fitting correlation reveals that the sigmoid model describes the destruction/contrast-replenishment process more faithfully. We derived that the saddle point of the sigmoid model is proportional to blood flow velocity. A strong linear relationship (R = 0.97) between the actual flow velocities (0.4-2.1 mm/s) and the estimated saddle constants was found in M-mode and B-mode flow phantom experiments. Potential applications of this technique include high-resolution volumetric flow rate assessment in small animal tumors and the evaluation of superficial vasculature in clinical studies.
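A minimal version of the sigmoid TIC fit can be done with scipy's curve_fit: fit a logistic curve to the replenishment signal and read off the inflection ("saddle") time, which the study relates to flow velocity. The parameterisation and the synthetic data below are assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, a, k, t0):
    """Logistic time-intensity curve: plateau a, rate k, inflection time t0."""
    return a / (1.0 + np.exp(-k * (t - t0)))

# Synthetic replenishment curve after UCA destruction (arbitrary units).
t = np.linspace(0, 10, 60)
rng = np.random.default_rng(2)
tic = sigmoid(t, a=1.0, k=1.2, t0=3.5) + 0.02 * rng.standard_normal(t.size)

(a_hat, k_hat, t0_hat), _ = curve_fit(sigmoid, t, tic, p0=[1.0, 1.0, 5.0])
print(t0_hat)  # the inflection ("saddle") time used as the velocity index
```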
A 3D modeling approach to complex faults with multi-source data
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan
2015-04-01
Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand sufficient fault data to construct complex fault models; however, the available fault data are generally sparse and undersampled. In this paper, we propose a fault-modeling workflow that can integrate multi-source data to construct fault models. For faults that are not covered by these data, especially those that are small-scale or approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, a fault-cutting algorithm can supplement the available fault points at locations where faults cut each other. Adding fault points in poorly sampled areas not only allows fault models to be constructed efficiently, but also reduces manual intervention. By using a fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures regardless of whether the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.
NASA Astrophysics Data System (ADS)
Lee, Ho Ki; Baek, Kye Hyun; Shin, Kyoungsub
2017-06-01
As semiconductor devices are scaled down to sub-20 nm, the process window of plasma etching becomes extremely small, so that process drift or shift becomes more significant. This study addresses one of the typical process-drift issues caused by consumable parts erosion over time and provides a feasible solution using virtual metrology (VM)-based wafer-to-wafer control. Since erosion of a shower head has a center-to-edge area dependency, critical dimensions (CDs) at the wafer center and edge area get reversed over time. That CD trend is successfully estimated on a wafer-to-wafer basis by a partial least squares (PLS) model which combines variables from optical emission spectroscopy (OES), VI-probe and equipment state gauges. R² of the PLS model reaches 0.89 and its prediction performance is confirmed in a mass production line. As a result, the model can be exploited as a VM for wafer-to-wafer control. With the VM, an advanced process control (APC) strategy is implemented to solve the CD drift. The 3σ of CD across the wafer is improved from 1.3-2.9 nm to 0.79-1.7 nm. The results introduced in this paper should help accelerate the implementation of VM-based APC strategies in the semiconductor industry.
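A minimal sketch of a PLS virtual-metrology model predicting CD from sensor variables, assuming wafer-by-wafer summary features; the synthetic data layout and component count are illustrative assumptions, not the study's model.

```python
# Sketch: PLS regression as a virtual-metrology (VM) model for CD prediction.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))          # wafer-by-wafer sensor summary features (synthetic)
cd = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=200)  # synthetic CD response

pls = PLSRegression(n_components=5).fit(X[:150], cd[:150])
pred = pls.predict(X[150:]).ravel()
print(f"R^2 on held-out wafers: {r2_score(cd[150:], pred):.2f}")
```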
MEMS-based liquid lens for capsule endoscope
NASA Astrophysics Data System (ADS)
Seo, S. W.; Han, S.; Seo, J. H.; Kim, Y. M.; Kang, M. S.; Min, N. G.; Choi, W. B.; Sung, M. Y.
2008-03-01
The capsule endoscope, a new application area of digital imaging, is growing rapidly but needs versatile imaging capabilities such as auto-focusing and zoom to become an active diagnostic tool. A liquid lens based on MEMS technology is a strong candidate because it can be made small enough. In this paper, a cylinder-type liquid lens was designed based on the Young-Lippmann model and then fabricated with MEMS technology combining a silicon thin-film process and a wafer bonding process. The focal length of the lens module including the fabricated liquid lens changed reproducibly as a function of the applied voltage. With a 30 V change in the applied bias, the focal length of the constructed lens module could be tuned over a range of about 42 cm. The fabricated liquid lens was also proven to be small enough to be adopted in the capsule endoscope, which means the liquid lens can be utilized to improve the imaging capability of the capsule endoscope.
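A rough sketch of the Young-Lippmann relation that underlies electrowetting liquid-lens tuning: the contact angle, and hence the meniscus curvature and focal length, changes with applied voltage. All parameter values, the cylindrical-cavity geometry and the thin-lens conversion are illustrative assumptions, not those of the fabricated device.

```python
# Sketch: voltage-dependent focal length via the Young-Lippmann equation,
# cos(theta(V)) = cos(theta0) + eps_r*eps0*V^2 / (2*gamma*d).
# All constants below are assumed; a negative f indicates a diverging meniscus.
import numpy as np

eps0 = 8.854e-12          # vacuum permittivity, F/m
eps_r = 3.0               # relative permittivity of the dielectric (assumed)
d = 0.3e-6                # dielectric thickness, m (assumed)
gamma = 0.04              # liquid-liquid interfacial tension, N/m (assumed)
theta0 = np.deg2rad(140)  # initial contact angle (assumed)
r = 1.0e-3                # cylindrical cavity radius, m (assumed)
n_rel = 0.15              # refractive-index contrast across the meniscus (assumed)

def focal_length(V):
    cos_theta = np.cos(theta0) + eps_r * eps0 * V**2 / (2 * gamma * d)
    cos_theta = np.clip(cos_theta, -1.0, 1.0)
    R = r / cos_theta      # meniscus radius in a cylindrical cavity (simplified)
    return R / n_rel       # thin-lens estimate: 1/f = (n1 - n2)/R

for V in (0.0, 15.0, 30.0):
    print(f"V = {V:4.1f} V  ->  f ≈ {focal_length(V) * 100:.1f} cm")
```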
Miskell, Georgia; Salmond, Jennifer A; Williams, David E
2018-04-01
Portable low-cost instruments have been validated and used to measure ambient nitrogen dioxide (NO2) at multiple sites over a small urban area with 20-min time resolution. We use these results combined with land use regression (LUR) and rank correlation methods to explore the effects of traffic, urban design features, and local meteorology and atmospheric chemistry on small-scale spatio-temporal variations. We measured NO2 at 45 sites around the downtown area of Vancouver, BC, in spring 2016, and constructed four different models: i) a model based on averaging concentrations observed at each site over the whole measurement period, and separate temporal models for ii) morning, iii) midday, and iv) afternoon. Redesign of the temporal models using the average model predictors as constants gave three 'hybrid' models that used both spatial and temporal variables. These accounted for approximately 50% of the total variation with a mean absolute error of ±5 ppb. Ranking sites by concentration and by change in concentration across the day showed a shift of high NO2 concentrations across the central city from morning to afternoon. Some locations could be identified in which the NO2 concentration was determined by the geography of the site, and others in which the concentration changed markedly from morning to afternoon, indicating the importance of temporal controls. Rank correlation results complemented LUR in identifying significant urban design variables that impacted NO2 concentration. High variability across a relatively small space was partially described by predictor variables related to traffic (bus stop density, speed limits, traffic counts, distance to traffic lights), atmospheric chemistry (ozone, dew point), and environment (land use, trees). A high-density network recording continuously would be needed to fully capture local variations. Copyright © 2017 Elsevier B.V. All rights reserved.
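A minimal sketch combining the two analysis ingredients named above, assuming a simple linear LUR on site-mean NO2 and a Spearman rank correlation between morning and afternoon concentrations; predictor names and data are illustrative assumptions, not the study's variables.

```python
# Sketch: land-use regression (LUR) for site-mean NO2 plus rank correlation.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
sites = pd.DataFrame({
    "bus_stop_density": rng.uniform(0, 10, 45),
    "traffic_count": rng.uniform(0, 5000, 45),
    "dist_traffic_lights": rng.uniform(10, 500, 45),
})
no2_mean = (2.0 * sites["bus_stop_density"] + 0.004 * sites["traffic_count"]
            - 0.01 * sites["dist_traffic_lights"] + rng.normal(0, 3, 45))

lur = LinearRegression().fit(sites, no2_mean)
print("LUR R^2 on site means:", round(lur.score(sites, no2_mean), 2))

no2_morning = no2_mean + rng.normal(0, 2, 45)
no2_afternoon = no2_mean[::-1].to_numpy() + rng.normal(0, 2, 45)  # shifted spatial pattern
rho, p = spearmanr(no2_morning, no2_afternoon)
print(f"Spearman rank correlation, morning vs afternoon: rho={rho:.2f}, p={p:.3f}")
```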
Local Wind Influence on Freshwater Plume Behavior: Application to the Catalan Shelf.
NASA Astrophysics Data System (ADS)
Liste, Maria; Grifoll, Manel; Monbaliu, Jaak; Keupers, Ingrid; Komijani, Homayoon
2013-04-01
Introduction Freshwater fluxes are not always considered, and often their 3D character is neglected. The "distributed" continental run-off is seldom taken into consideration. The main aim of the EU-FP7 Field_Ac project (www.field_ac.eu), was to improve operational service for coastal areas and to generate added value for shelf and regional scale predictions including land discharge as a boundary condition. In this paper the dispersal of a fresh water plume in a small part of the Catalan Coast (NW Mediterranean Sea) caused by a flash flood event in March 2011 is presented in response to the local wind forcing. Observations and modeling results are shown for a short period but with a large impact on the receiving coastal waters. Methodology and aim For the coastal circulation model, version 3.0 of the Regional Ocean Modeling System [ROMS, Shchepetkin and McWilliams, 2005] has been chosen. ROMS solves the 3-D Reynolds-Averaged Navier-Stokes equations in sigma coordinates. The code design is modular, so that different choices for advection and mixing, for example, may be applied by simply modifying preprocessor flags. Nested increasing-resolution models have been implemented in order to reproduce with sufficient spatial resolution the coastal circulation and the river plume evolution in a small portion of the Catalan coastal area. The boundary conditions for the largest domain model are obtained from the MyOcean products. River and urban run-off are estimated based on measured or predicted rainfall in the contributing catchments areas. Conceptual models based on a reservoir-type schematization of the river and sewer network have been set up to allow for fast prediction of the different point source boundary conditions [Keupers et al., 2011]. Model output data are compared to in situ data from dedicated campaigns during the Field_AC Project and to data from operational buoys in the Catalan coastal area. Results Wind forcing leads to freshwater spreading. As expected, wind speed and direction and the magnitude of the fresh water discharge affect substantially the plume behavior. This case study illustrates clearly the need to consider both the wind forcing and the fresh water discharge as part of a single system. References Field_AC project (www.field_ac.eu), EU- FP7-SPACE-2009-1-242284. Keupers, I., Willems, P., Fernandez Sainz, J., Bricheno, L., Wolf, J., Polton, J., Howarth, J., Carniel, S., Staneva, J. (2011). Methodology (including best practice guidelines) on how to identify and incorporate 'concentrated' and 'distributed' run-off in pre-operational forecasts, based on the input and requirements from our users. FIELD_AC project, D3.1, 90 pp. MyOcean products (http://www.myocean.eu/). Shchepetkin and McWilliams, 2005. The Regional Ocean Modeling System (ROMS): A split-explicit, free-surface, topography-following coordinates ocean model. Ocean Modelling. Vol. 9 pp. 347-404.
Measuring distance “as the horse runs”: Cross-scale comparison of terrain-based metrics
Buttenfield, Barbara P.; Ghandehari, M; Leyk, S; Stanislawski, Larry V.; Brantley, M E; Qiang, Yi
2016-01-01
Distance metrics play significant roles in spatial modeling tasks, such as flood inundation (Tucker and Hancock 2010), stream extraction (Stanislawski et al. 2015), power line routing (Kiessling et al. 2003) and analysis of surface pollutants such as nitrogen (Harms et al. 2009). Avalanche risk is based on slope, aspect, and curvature, all directly computed from distance metrics (Gutiérrez 2012). Distance metrics anchor variogram analysis, kernel estimation, and spatial interpolation (Cressie 1993). Several approaches are employed to measure distance. Planar metrics measure straight line distance between two points (“as the crow flies”) and are simple and intuitive, but suffer from uncertainties. Planar metrics assume that Digital Elevation Model (DEM) pixels are rigid and flat, as tiny facets of ceramic tile approximating a continuous terrain surface. In truth, terrain can bend, twist and undulate within each pixel. Working with Light Detection and Ranging (lidar) data or High Resolution Topography to achieve precise measurements presents challenges, as filtering can eliminate or distort significant features (Passalacqua et al. 2015). The current availability of lidar data is far from comprehensive in developed nations, and non-existent in many rural and undeveloped regions. Notwithstanding computational advances, distance estimation on DEMs has never been systematically assessed, due to the assumption that improvements are so small that surface adjustment is unwarranted. For individual pixels inaccuracies may be small, but additive effects can propagate dramatically, especially in regional models (e.g., disaster evacuation) or global models (e.g., sea level rise) where pixels span dozens to hundreds of kilometers (Usery et al. 2003). Such models are increasingly common, lending compelling reasons to understand shortcomings in the use of planar distance metrics. Researchers have studied curvature-based terrain modeling. Jenny et al. (2011) use curvature to generate hierarchical terrain models. Schneider (2001) creates a ‘plausibility’ metric for DEM-extracted structure lines. d’Oleire-Oltmanns et al. (2014) adopt object-based image processing as an alternative to working with DEMs, acknowledging that the pre-processing involved in converting terrain into an object model is computationally intensive, and likely infeasible for some applications. This paper compares planar distance with surface-adjusted distance, evolving from distance “as the crow flies” to distance “as the horse runs”. Several methods are compared for DEMs spanning a range of resolutions for the study area and validated against a 3 meter (m) lidar data benchmark. Error magnitudes vary with pixel size and with the method of surface adjustment. The rate of error increase may also vary with landscape type (terrain roughness, precipitation regimes and land settlement patterns). Cross-scale analysis for a single study area is reported here. Additional areas will be presented at the conference.
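A minimal sketch of the contrast drawn above, assuming a synthetic elevation profile sampled along the path between two points: planar distance ignores relief, surface-adjusted distance adds the vertical change per pixel step.

```python
# Sketch: planar ("as the crow flies") vs surface-adjusted ("as the horse runs")
# distance along a DEM profile. The elevations are synthetic placeholders.
import numpy as np

dx = 30.0                                                   # pixel size, m
z = np.array([100., 112., 130., 125., 140., 160., 155.])    # elevations along the path

planar = dx * (len(z) - 1)                     # flat-pixel assumption
surface = np.sum(np.hypot(dx, np.diff(z)))     # add the vertical relief per step

print(f"planar distance : {planar:7.1f} m")
print(f"surface distance: {surface:7.1f} m (+{100 * (surface / planar - 1):.1f}%)")
```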
NASA Astrophysics Data System (ADS)
Cristiano, Elena; ten Veldhuis, Marie-claire; van de Giesen, Nick
2017-07-01
In urban areas, hydrological processes are characterized by high variability in space and time, making them sensitive to small-scale temporal and spatial rainfall variability. In the last decades new instruments, techniques, and methods have been developed to capture rainfall and hydrological processes at high resolution. Weather radars have been introduced to estimate high spatial and temporal rainfall variability. At the same time, new models have been proposed to reproduce hydrological response, based on small-scale representation of urban catchment spatial variability. Despite these efforts, interactions between rainfall variability, catchment heterogeneity, and hydrological response remain poorly understood. This paper presents a review of our current understanding of hydrological processes in urban environments as reported in the literature, focusing on their spatial and temporal variability aspects. We review recent findings on the effects of rainfall variability on hydrological response and identify gaps where knowledge needs to be further developed to improve our understanding of and capability to predict urban hydrological response.
Markov-random-field-based super-resolution mapping for identification of urban trees in VHR images
NASA Astrophysics Data System (ADS)
Ardila, Juan P.; Tolpekin, Valentyn A.; Bijker, Wietske; Stein, Alfred
2011-11-01
Identification of tree crowns from remote sensing requires detailed spectral information and submeter spatial resolution imagery. Traditional pixel-based classification techniques do not fully exploit the spatial and spectral characteristics of remote sensing datasets. We propose a contextual and probabilistic method for detection of tree crowns in urban areas using a Markov random field based super resolution mapping (SRM) approach in very high resolution images. Our method defines an objective energy function in terms of the conditional probabilities of panchromatic and multispectral images and it locally optimizes the labeling of tree crown pixels. Energy and model parameter values are estimated from multiple implementations of SRM in tuning areas and the method is applied in QuickBird images to produce a 0.6 m tree crown map in a city of The Netherlands. The SRM output shows an identification rate of 66% and commission and omission errors in small trees and shrub areas. The method outperforms tree crown identification results obtained with maximum likelihood, support vector machines and SRM at nominal resolution (2.4 m) approaches.
NASA Astrophysics Data System (ADS)
Ramsdale, Jason D.; Balme, Matthew R.; Conway, Susan J.; Gallagher, Colman; van Gasselt, Stephan A.; Hauber, Ernst; Orgel, Csilla; Séjourné, Antoine; Skinner, James A.; Costard, Francois; Johnsson, Andreas; Losiak, Anna; Reiss, Dennis; Swirad, Zuzanna M.; Kereszturi, Akos; Smith, Isaac B.; Platz, Thomas
2017-06-01
The increased volume, spatial resolution, and areal coverage of high-resolution images of Mars over the past 15 years have led to an increased quantity and variety of small-scale landform identifications. Though many such landforms are too small to represent individually on regional-scale maps, determining their presence or absence across large areas helps form the observational basis for developing hypotheses on the geological nature and environmental history of a study area. The combination of improved spatial resolution and near-continuous coverage significantly increases the time required to analyse the data. This becomes problematic when attempting regional or global-scale studies of metre and decametre-scale landforms. Here, we describe an approach for mapping small features (from decimetre to kilometre scale) across large areas, formulated for a project to study the northern plains of Mars, and provide context on how this method was developed and how it can be implemented. Rather than "mapping" with points and polygons, grid-based mapping uses a "tick box" approach to efficiently record the locations of specific landforms (we use an example suite of glacial landforms, including viscous flow features, the latitude-dependent mantle and polygonised ground). A grid of squares (e.g. 20 km by 20 km) is created over the mapping area. Then the basemap data are systematically examined, grid-square by grid-square at full resolution, in order to identify the landforms while recording the presence or absence of selected landforms in each grid-square to determine spatial distributions. The result is a series of grids recording the distribution of all the mapped landforms across the study area. In some ways, these are equivalent to raster images, as they show a continuous distribution-field of the various landforms across a defined (rectangular, in most cases) area. When overlain on context maps, these form a coarse, digital landform map. We find that grid-based mapping provides an efficient solution to the problems of mapping small landforms over large areas, by providing a consistent and standardised approach to spatial data collection. The simplicity of the grid-based mapping approach makes it extremely scalable and workable for group efforts, requiring minimal user experience and producing consistent and repeatable results. The discrete nature of the datasets, simplicity of approach, and divisibility of tasks open up the possibility of citizen science, in which crowdsourcing could be applied to large grid-based mapping areas.
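A minimal sketch of the grid-based "tick box" record as a data structure, assuming one row per grid square and one boolean column per landform; the grid dimensions and presence values are placeholders to be replaced by the mapper's input.

```python
# Sketch: grid-based recording of landform presence/absence per grid square,
# and conversion of each landform column into a coarse raster-like layer.
import numpy as np
import pandas as pd

landforms = ["viscous_flow_features", "latitude_dependent_mantle", "polygonised_ground"]

# One row per 20 km x 20 km grid square, one boolean column per landform.
grid = pd.DataFrame([(i, j) for i in range(5) for j in range(8)], columns=["row", "col"])
rng = np.random.default_rng(0)
for lf in landforms:
    grid[lf] = rng.random(len(grid)) > 0.6      # placeholder for the mapper's tick-box entry

# Per-landform distribution maps (equivalent to coarse raster layers).
for lf in landforms:
    raster = grid.pivot(index="row", columns="col", values=lf).to_numpy()
    print(lf, "present in", int(raster.sum()), f"of {raster.size} squares")
```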
NASA Astrophysics Data System (ADS)
Wildmann, N.; Kaufmann, F.; Bange, J.
2014-09-01
The measurement of water vapour concentration in the atmosphere is an ongoing challenge in environmental research. Satisfactory solutions exist for ground-based meteorological stations and measurements of mean values. However, carrying out advanced research on thermodynamic processes aloft as well, above the surface layer and especially in the atmospheric boundary layer (ABL), requires the resolution of small-scale turbulence. Sophisticated optical instruments are used in airborne meteorology with manned aircraft to achieve the necessary fast-response measurements of the order of 10 Hz (e.g. LiCor 7500). Since these instruments are too large and heavy for application on small remotely piloted aircraft (RPA), a method is presented in this study that enhances small capacitive humidity sensors to be able to resolve turbulent eddies of the order of 10 m. The sensor examined here is a polymer-based sensor of the type P14-Rapid, by the Swiss company Innovative Sensor Technologies (IST) AG, with a surface area of less than 10 mm² and a negligible weight. A physical and dynamical model of this sensor is described and then inverted in order to restore the original water vapour fluctuations from the sensor measurements. Examples of flight measurements show how the method can be used to correct vertical profiles and resolve turbulence spectra up to about 3 Hz. At an airspeed of 25 m s⁻¹ this corresponds to a spatial resolution of less than 10 m.
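A minimal sketch of the inversion idea, assuming the sensor behaves like a first-order low-pass system, m' = (x - m)/τ, so the original signal can be approximated as x ≈ m + τ·dm/dt; the time constant, sampling rate and signals are illustrative assumptions, not the published sensor model.

```python
# Sketch: restoring fast humidity fluctuations by inverting a first-order
# sensor response model. In practice dm/dt amplifies noise and is smoothed.
import numpy as np

fs = 100.0                     # sampling rate, Hz (assumed)
tau = 0.5                      # assumed sensor time constant, s
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
x_true = np.cumsum(rng.normal(0, 0.05, t.size))        # "true" humidity fluctuations

# Forward model: first-order low-pass response of the sensor (Euler step).
m = np.zeros_like(x_true)
for i in range(1, t.size):
    m[i] = m[i - 1] + (x_true[i - 1] - m[i - 1]) / (tau * fs)

# Inversion: x ≈ m + tau * dm/dt.
x_rec = m + tau * np.gradient(m, 1 / fs)
print("correlation of reconstruction with truth:",
      round(np.corrcoef(x_true, x_rec)[0, 1], 3))
```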
Lee, Cheng-Kuang; Pao, Chun-Wei
2016-08-17
Solution-processed small-molecule organic solar cells are a promising renewable energy source because of their low production cost, mechanical flexibility, and light weight relative to their pure inorganic counterparts. In this work, we developed a coarse-grained (CG) Gay-Berne ellipsoid molecular simulation model based on atomistic trajectories from all-atom molecular dynamics simulations of smaller system sizes to systematically study the nanomorphology of the SMDPPEH/PCBM/solvent ternary blend during solution processing, including the blade-coating process by applying external shear to the solution. With the significantly reduced overall system degrees of freedom and computational acceleration from GPU, we were able to go well beyond the limitation of conventional all-atom molecular simulations with a system size on the order of hundreds of nanometers with mesoscale molecular detail. Our simulations indicate that, similar to polymer solar cells, the optimal blending ratio in small-molecule organic solar cells must provide the highest specific interfacial area for efficient exciton dissociation, while retaining balanced hole/electron transport pathway percolation. We also reveal that blade-coating processes have a significant impact on nanomorphology. For given donor/acceptor blending ratios, applying an external shear force can effectively promote donor/acceptor phase segregation and stacking in the SMDPPEH domains. The present study demonstrated the capability of an ellipsoid-based coarse-grained model for studying the nanomorphology evolution of small-molecule organic solar cells during solution processing/blade-coating and provided links between fabrication protocols and device nanomorphologies.
High resolution climate scenarios for snowmelt modelling in small alpine catchments
NASA Astrophysics Data System (ADS)
Schirmer, M.; Peleg, N.; Burlando, P.; Jonas, T.
2017-12-01
Snow in the Alps is affected by climate change with regard to duration, timing and amount. This has implications for important societal issues such as drinking water supply and hydropower generation. In Switzerland, the latter has received a lot of attention following the political decision to phase out nuclear electricity production. An increasing number of authorization requests for small hydropower plants located in small alpine catchments has been observed in recent years. This situation generates ecological conflicts, while the expected climate change poses a threat to water availability, thus putting investments in such hydropower plants at risk. Reliable high-resolution climate scenarios are thus required, which account for small-scale processes to achieve realistic predictions of snowmelt runoff and its variability in small alpine catchments. We therefore used a novel model chain by coupling a stochastic 2-dimensional weather generator (AWE-GEN-2d) with a state-of-the-art energy balance snow cover model (FSM). AWE-GEN-2d was applied to generate ensembles of climate variables at very fine temporal and spatial resolution, thus providing all climatic input variables required for the energy balance modelling. The land-surface model FSM was used to describe spatially variable snow cover accumulation and melt processes. The FSM was refined to allow applications at very high spatial resolution by specifically accounting for small-scale processes, such as a subgrid parametrization of snow-covered area or an improved representation of forest-snow processes. For the present study, the model chain was tested for current climate conditions using an extensive observational dataset of differing spatial and temporal coverage. Small-scale spatial processes such as elevation gradients or aspect differences in the snow distribution were evaluated using airborne LiDAR data. Forty years of monitoring data for snow water equivalent, snowmelt and snow-covered area for the whole of Switzerland were used to verify snow distribution patterns at coarser spatial and temporal scales. The ability of the model chain to reproduce current climate conditions in small alpine catchments makes this model combination an outstanding candidate for producing high-resolution climate scenarios of snowmelt in such catchments.
Testing models for the formation of the equatorial ridge on Iapetus via crater counting
NASA Astrophysics Data System (ADS)
Damptz, Amanda L.; Dombard, Andrew J.; Kirchoff, Michelle R.
2018-03-01
Iapetus's equatorial ridge, visible in global views of the moon, is unique in the Solar System. The formation of this feature is likely attributed to a key event in the evolution of Iapetus, and various models have been proposed as the source of the ridge. By surveying imagery from the Cassini and Voyager missions, this study aims to compile a database of the impact crater population on and around Iapetus's equatorial ridge, assess the relative age of the ridge from differences in cratering between on ridge and off ridge, and test the various models of ridge formation. This work presents a database that contains 7748 craters ranging from 0.83 km to 591 km in diameter. The database includes the study area in which the crater is located, the latitude and longitude of the crater, the major and minor axis lengths, and the azimuthal angle of orientation of the major axis. Analysis of crater orientation over the entire study area reveals that there is no preference for long-axis orientation, particularly in the area with the highest resolution. Comparison of the crater size-frequency distributions show that the crater distribution on the ridge appears to be depleted in craters larger than 16 km with an abruptly enhanced crater population less than 16 km in diameter up to saturation. One possible interpretation is that the ridge is a relatively younger surface with an enhanced small impactor population. Finally, the compiled results are used to examine each ridge formation hypothesis. Based on these results, a model of ridge formation via a tidally disrupted sub-satellite appears most consistent with our interpretation of a younger ridge with an enhanced small impactor population.
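A minimal sketch of the basic measurement behind the comparison above, assuming a list of crater diameters and a counting area: the cumulative size-frequency distribution N(≥D) per unit area, which can then be compared between on-ridge and off-ridge counting areas. The diameters and area are synthetic placeholders.

```python
# Sketch: cumulative crater size-frequency distribution (SFD) from diameters.
import numpy as np

rng = np.random.default_rng(0)
diam_km = rng.pareto(2.0, 500) + 0.83      # crater diameters, km (synthetic)
area_km2 = 1.0e5                           # counting area, km^2 (assumed)

bins = np.geomspace(0.83, diam_km.max(), 20)               # log-spaced diameter bins
cumulative = np.array([(diam_km >= d).sum() for d in bins]) / area_km2

for d, n in zip(bins[::5], cumulative[::5]):
    print(f"N(>= {d:6.2f} km) = {n:.2e} craters / km^2")
```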
Vegetation Removal from Uav Derived Dsms, Using Combination of RGB and NIR Imagery
NASA Astrophysics Data System (ADS)
Skarlatos, D.; Vlachos, M.
2018-05-01
Current advancements in photogrammetric software, along with the affordability and widespread adoption of Unmanned Aerial Vehicles (UAV), allow for rapid, timely and accurate 3D modelling and mapping of small to medium sized areas. Although the importance and applications of large-format aerial cameras and overlapping photographs in Digital Surface Model (DSM) production, as well as of LIDAR data, are well documented in the literature, this is not the case for UAV photography. Additionally, the main disadvantage of photogrammetry is its inability to map the dead ground (terrain) in areas that include vegetation. This paper assesses the use of near-infrared imagery captured by small UAV platforms to automatically remove vegetation from Digital Surface Models (DSMs) and obtain a Digital Terrain Model (DTM). Two areas were tested, based on the availability of ground reference points both under trees and among vegetation, as well as on open terrain. In addition, RGB and near-infrared UAV photography was captured and processed using Structure from Motion (SfM) and Multi View Stereo (MVS) algorithms to generate DSMs and corresponding colour and NIR orthoimages, with pixel sizes of 0.2 m and 0.25 m respectively for the two test sites. Moreover, the orthophotos were used to eliminate the vegetation from the DSMs using the NDVI, thresholding and masking. Following that, different interpolation algorithms, chosen according to the test sites, were applied to fill in the gaps and create DTMs. Finally, a statistical analysis was made using reference terrain points captured in the field, both on dead ground and under vegetation, to evaluate the accuracy of the whole process and assess the overall accuracy of the derived DTMs in contrast with the DSMs.
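A minimal sketch of the masking and gap-filling step, assuming co-registered red/NIR orthoimage bands and a DSM raster, an NDVI threshold for vegetation, and linear interpolation to fill the masked cells; the arrays and threshold are synthetic assumptions.

```python
# Sketch: NDVI-based vegetation masking of a DSM and interpolation to a DTM.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.3, (50, 50))
nir = rng.uniform(0.1, 0.6, (50, 50))
dsm = 100 + rng.normal(0, 0.5, (50, 50))           # synthetic surface heights

ndvi = (nir - red) / (nir + red)
veg = ndvi > 0.3                                   # assumed vegetation threshold

rows, cols = np.indices(dsm.shape)
known = ~veg                                       # keep only non-vegetated cells
dtm = griddata(
    points=np.column_stack([rows[known], cols[known]]),
    values=dsm[known],
    xi=(rows, cols),
    method="linear",
)
print("vegetated cells filled:", int(veg.sum()))
```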
Using airborne laser scanning profiles to validate marine geoid models
NASA Astrophysics Data System (ADS)
Julge, Kalev; Gruno, Anti; Ellmann, Artu; Liibusk, Aive; Oja, Tõnis
2014-05-01
Airborne laser scanning (ALS) is a remote sensing method which utilizes LiDAR (Light Detection And Ranging) technology. The datasets collected are important sources for a large range of scientific and engineering applications. ALS is mostly used to measure terrain surfaces for the compilation of Digital Elevation Models, but it can also be used in other applications. This contribution focuses on the use of an ALS system for measuring sea surface heights and validating gravimetric geoid models over marine areas. This is based on the ability of ALS to register echoes of the LiDAR pulse from the water surface. A case study was carried out to analyse the possibilities for validating marine geoid models by using ALS profiles. A test area at the southern shores of the Gulf of Finland was selected for regional geoid validation. ALS measurements were carried out by the Estonian Land Board in spring 2013 at different altitudes and using different scan rates. The one-wavelength Leica ALS50-II laser scanner on board a small aircraft was used to determine the sea level (with respect to the GRS80 reference ellipsoid), which follows roughly the equipotential surface of the Earth's gravity field. For the validation, a high-resolution (1'x2') regional gravimetric GRAV-GEOID2011 model was used. This geoid model covers the entire area of Estonia and the surrounding waters of the Baltic Sea. The fit between the geoid model and GNSS/levelling data within the Estonian dry land revealed an RMS of residuals of ±1 to ±2 cm. Note that such a fitting validation cannot proceed over marine areas. Therefore, an ALS observation-based methodology was developed to evaluate the GRAV-GEOID2011 quality over marine areas. The accuracy of the acquired ALS dataset was analysed, and an optimal width of the nadir corridor containing good-quality ALS data was determined. The impact of the ALS scan-angle range and flight altitude on the obtainable vertical accuracy was investigated as well. The quality of the point cloud was analysed by cross-validation between overlapping flight lines and by comparison with tide gauge readings. The comparisons revealed that the ALS-based profiles of sea level heights agree reasonably with the regional geoid model (within the accuracy of the ALS data and after applying corrections for sea level variations). Thus ALS measurements are suitable for measuring sea surface heights and validating marine geoid models.
Tateshima, Satoshi; Tanishita, Kazuo; Hakata, Yasuhiro; Tanoue, Shin-ya; Viñuela, Fernando
2009-07-01
Development of a flexible self-expanding stent system and stent-assisted coiling technique facilitates endovascular treatment of wide-necked brain aneurysms. The hemodynamic effect of self-expandable stent placement across the neck of a brain aneurysm has not been well documented in patient-specific aneurysm models. Three patient-specific silicone aneurysm models based on clinical images were used in this study. Model 1 was constructed from a wide-necked internal carotid artery-ophthalmic artery aneurysm, and Models 2 and 3 were constructed from small wide-necked middle cerebral artery aneurysms. Neuroform stents were placed in the in vitro aneurysm models, and flow structures were compared before and after the stent placements. Flow velocity fields were acquired with particle imaging velocimetry. In Model 1, a clockwise, single-vortex flow pattern was observed in the aneurysm dome before stenting was performed. There were multiple vortices, and a very small fast flow stream was newly formed in the aneurysm dome after stenting. The mean intraaneurysmal flow velocity was reduced by approximately 23-40%. In Model 2, there was a clockwise vortex flow in the aneurysm dome and another small counterclockwise vortex in the tip of the aneurysm dome before stenting. The small vortex area disappeared after stenting, and the mean flow velocity in the aneurysm dome was reduced by 43-64%. In Model 3, a large, counterclockwise, single vortex was seen in the aneurysm dome before stenting. Multiple small vortices appeared in the aneurysm dome after stenting, and the mean flow velocity became slower by 22-51%. The flexible self-expandable stents significantly altered flow velocity and also flow structure in these aneurysms. Overall flow alterations by the stent appeared favorable for the long-term durability of aneurysm embolization. The possibility that the placement of a low-profile self-expandable stent might induce unfavorable flow patterns such as a fast flow stream in the aneurysm dome cannot be excluded.
Dulau, Violaine; Estrade, Vanessa; Fayan, Jacques
2017-01-01
Photo-identification surveys of Indo-Pacific bottlenose dolphins were conducted from 2009 to 2014 off Reunion Island (55°E33'/21°S07'), in the Indian Ocean. Robust Design models were applied to produce the most reliable estimate of population abundance and survival rate, while accounting for temporary emigration from the survey area (west coast). The sampling scheme consisted of a five-month (June-October) sampling period in each year of the study. The overall population size at Reunion was estimated to be 72 individuals (SE = 6.17, 95%CI = 61-85), based on a random temporary emigration (γ") of 0.096 and a proportion of 0.70 (SE = 0.03) distinct individuals. The annual survival rate was 0.93 (±0.018 SE, 95%CI = 0.886-0.958) and was constant over time and between sexes. Models considering gender groups indicated different movement patterns between males and females. Males showed null or quasi-null temporary emigration (γ" = γ' < 0.01), while females showed a random temporary emigration (γ") of 0.10, suggesting that a small proportion of females was outside the survey area during each primary sampling period. Sex-specific temporary migration patterns were consistent with movement and residency patterns observed in other areas. The Robust Design approach provided an appropriate sampling scheme for deriving island-associated population parameters, while allowing to restrict survey effort both spatially (i.e. west coast only) and temporally (five months per year). Although abundance and survival were stable over the six years, the small population size of fewer than 100 individuals suggested that this population is highly vulnerable. Priority should be given to reducing any potential impact of human activity on the population and its habitat.
An initial model for estimating soybean development stages from spectral data
NASA Technical Reports Server (NTRS)
Henderson, K. E.; Badhwar, G. D.
1982-01-01
A model, utilizing a direct relationship between remotely sensed spectral data and soybean development stage, has been proposed. The model is based upon transforming the spectral data in Landsat bands to greenness values over time and relating the area under this curve to soybean development stage. Soybean development stages were estimated from data acquired in 1978 from research plots at the Purdue University Agronomy Farm as well as Landsat data acquired over sample areas of the U.S. Corn Belt in 1978 and 1979. Analysis of spectral data from research plots revealed that the model works well with reasonable variation in planting date, row spacing, and soil background. The R-squared of calculated versus observed development stage exceeded 0.91 for all treatment variables. Using Landsat data, the calculated versus observed development stage gave an R-squared of 0.89 in 1978 and 0.87 in 1979. No difference in the model's performance could be detected between early and late planted fields, small and large fields, or high and low yielding fields.
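A minimal sketch of the core computation, assuming greenness values at a few acquisition dates: accumulate the area under the greenness-versus-time curve and relate it to development stage with a simple fit; the values and stage labels are synthetic placeholders, and the linear fit is a stand-in for the published relationship.

```python
# Sketch: cumulative area under a greenness curve as a predictor of stage.
import numpy as np

days = np.array([10, 26, 42, 58, 74, 90])               # days after planting (acquisitions)
greenness = np.array([5., 18., 34., 40., 33., 20.])      # transformed spectral greenness

# Cumulative trapezoidal area under the curve at each acquisition date.
area = np.concatenate(([0.0], np.cumsum(np.diff(days) * 0.5 *
                                         (greenness[:-1] + greenness[1:]))))

observed_stage = np.array([1.0, 2.5, 4.0, 5.5, 6.5, 7.5])    # synthetic field observations
coeffs = np.polyfit(area, observed_stage, 1)                  # stage ≈ a * area + b
pred = np.polyval(coeffs, area)
ss_res = np.sum((observed_stage - pred) ** 2)
ss_tot = np.sum((observed_stage - observed_stage.mean()) ** 2)
print("R-squared of calculated vs observed stage:", round(1 - ss_res / ss_tot, 2))
```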
NASA Astrophysics Data System (ADS)
Wu, D.; Du, Y.; Su, F.; Huang, W.; Zhang, L.
2018-04-01
The topographic measurement of muddy tidal flats is restricted by difficult access and by complex, wide-ranging and dynamic tidal conditions. The waterline detection method (WDM) therefore has the potential to investigate the morphodynamics quantitatively by utilizing large archives of satellite images. The study explores the potential for using the WDM with BJ-1 small satellite images to construct a digital elevation model (DEM) of a wide and grading mudflat. Three major conclusions of the study are as follows: (1) A new intelligent correlating model of waterline detection, considering different tidal stages and local geographic conditions, was explored. With this correlative waterline detection model, a series of waterlines were extracted from multi-temporal remote sensing images collected over the period of a year. The model proved to detect waterlines more efficiently and accurately. (2) The spatial structure of elevation superimposed on the waterline points was first constructed, and a more accurate hydrodynamic ocean tide grid model was used. With the newly constructed abnormal hydrology evaluation model, a more reasonable and reliable set of waterline points was acquired to construct smoother TIN and GRID DEMs. (3) DEM maps of Bohai Bay, with a spatial resolution of about 30 m and a height accuracy of about 0.35 m relative to LiDAR and 0.19 m relative to RTK surveying, were constructed over an area of about 266 km². The results show that remote sensing research in extremely turbid estuaries and tidal areas is possible and is an effective tool for monitoring tidal flats.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-18
... voucher programs, but it is unclear what the net effect will be. For example, small area FMRs are likely... standards for definition of CBSAs are based on a review of journey-to-work data, or commuting patterns, as... adequate technical assistance to the participating PHAs and monitor the effects and effectiveness of the...
Clemens, Tom
2017-01-01
Abstract Background: Patterns of adverse birth outcomes vary spatially and there is evidence that this may relate to features of the physical environment such as air pollution. However, other social characteristics of the environment such as levels of crime are relatively understudied. This study examines the association between crime rates and birth weight and prematurity. Methods: Maternity inpatient data recorded at birth, including residential postcode, was linked to a representative 5% sample of Scottish Census data and small area crime rates from Scottish Police forces. Coefficients associated with crime were reported from crude and confounder adjusted models predicting low birth weight (< 2500 g), mean birthweight, small for gestational age and prematurity for all singleton live births. Results: Total crime rates were associated with strong and significant reductions in mean birth weight and increases in the risks of both a small for gestational age baby and premature birth. These effects, with the exception of prematurity, were robust to adjustment for individual characteristics including smoking, ethnicity and other socio-economic variables as well as area based confounders including air pollution. Mean birth weight was robust to additional adjustment for neighbourhood income deprivation. Conclusion: The level of crime in a mother’s area of residence, which may be a proxy for the degree of threat felt and therefore stress experienced, appears to be an important determinant of the risk of adverse birth outcomes. PMID:27578830
Emissions, dispersion and human exposure of mercury from a Swedish chlor-alkali plant
NASA Astrophysics Data System (ADS)
Wängberg, I.; Barregard, L.; Sällsten, G.; Haeger-Eugensson, M.; Munthe, J.; Sommar, J.
Mercury in air near a mercury cell chlor-alkali plant in Sweden has been measured within the EU-project EMECAP. Based on the measurements and modelling, the annual distributions of GEM and RGM have been calculated for the local area around the plant. The average concentration of GEM in residential areas near the plant was found to be 1-3.5 ng m⁻³ higher in comparison to the background concentration in this part of Sweden. The emission of RGM (0.55 kg year⁻¹) results in elevated RGM concentrations close to the plant. The greatest impact on the local area is due to wet deposition of RGM. However, only a small fraction (0.4%) of all mercury being emitted was found to be deposited in the local area. No impact on urinary mercury could be demonstrated in the population living close to the plant.
Ocean outfalls as an alternative to minimizing risks to human and environmental health.
Feitosa, Renato Castiglia
2017-06-01
Submarine outfalls are proposed as an efficient alternative for the final destination of wastewater in densely populated coastal areas, due to the high dispersal capacity and the clearance of organic matter in the marine environment, and because they require small areas for implementation. This paper evaluates the probability of unsuitable bathing conditions in coastal areas nearby to the Ipanema, Barra da Tijuca and Icaraí outfalls based on a computational methodology gathering hydrodynamic, pollutant transport, and bacterial decay modelling. The results show a strong influence of solar radiation and all factors that mitigate its levels in the marine environment on coliform concentration. The aforementioned outfalls do not pollute the coastal areas, and unsuitable bathing conditions are restricted to nearby effluent launching points. The pollution observed at the beaches indicates that the contamination occurs due to the polluted estuarine systems, rivers and canals that flow to the coast.
A biological decontamination process for small, privately owned buildings.
Krauter, Paula; Tucker, Mark
2011-09-01
An urban wide-area recovery and restoration effort following a large-scale biological release will require extensive resources and tax the capabilities of government authorities. Further, the number of private decontamination contractors available may not be sufficient to respond to the needs. These resource limitations could create the need for decontamination by the building owner/occupant. This article provides owners/occupants with a simple method to decontaminate a building or area following a wide-area release of Bacillus anthracis using liquid sporicidal decontamination materials, such as pH-amended bleach or activated peroxide; simple application devices; and high-efficiency particulate air-filtered vacuums. Owner/occupant decontamination would be recommended only after those charged with overseeing decontamination-the Unified Command/Incident Command-identify buildings and areas appropriate for owner/occupant decontamination based on modeling and environmental sampling and conduct health and safety training for cleanup workers.
Constitutive behavior and fracture toughness properties of the F82H ferritic/martensitic steel
NASA Astrophysics Data System (ADS)
Spätig, P.; Odette, G. R.; Donahue, E.; Lucas, G. E.
2000-12-01
A detailed investigation of the constitutive behavior of the International Energy Agency (IEA) program heat of 8 Cr unirradiated F82H ferritic-martensitic steel has been undertaken in the temperature range of 80-723 K. The overall tensile flow stress is decomposed into temperature-dependent and athermal yield stress contributions plus a mildly temperature-dependent strain-hardening component. The fitting forms are based on a phenomenological dislocation mechanics model. This formulation provides a more accurate and physically based representation of the flow stress as a function of the key variables of test temperature, strain and strain rate compared to simple power law treatments. Fracture toughness measurements from small compact tension specimens are also reported and analyzed in terms of a critical stress-critical area local fracture model.
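A rough sketch of the decomposition named above, sigma(T, strain) = sigma_athermal + sigma_thermal(T) + sigma_hardening(strain, T); the functional forms and constants below are illustrative assumptions, not the fitted parameters of the F82H study, and the strain-rate dependence is omitted.

```python
# Sketch: flow stress as athermal + thermally activated + strain-hardening terms.
import numpy as np

def flow_stress(T_K, strain):
    sigma_ath = 250.0                                   # athermal contribution, MPa (assumed)
    sigma_th = 600.0 * np.exp(-T_K / 300.0)             # thermal part decaying with T (assumed form)
    sigma_h = (300.0 - 0.2 * T_K) * (1.0 - np.exp(-10.0 * strain))  # Voce-type hardening (assumed)
    return sigma_ath + sigma_th + max(sigma_h, 0.0)

for T in (80, 293, 723):
    print(f"T = {T:3d} K, strain 0.05 -> sigma ≈ {flow_stress(T, 0.05):.0f} MPa")
```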
Polarizable atomic multipole-based force field for DOPC and POPE membrane lipids
NASA Astrophysics Data System (ADS)
Chu, Huiying; Peng, Xiangda; Li, Yan; Zhang, Yuebin; Min, Hanyi; Li, Guohui
2018-04-01
A polarizable atomic multipole-based force field for the membrane bilayer models 1,2-dioleoyl-phosphocholine (DOPC) and 1-palmitoyl-2-oleoyl-phosphatidylethanolamine (POPE) has been developed. The force field adopts the same framework as the Atomic Multipole Optimized Energetics for Biomolecular Applications (AMOEBA) model, in which the charge distribution of each atom is represented by the permanent atomic monopole, dipole and quadrupole moments. Many-body polarization including the inter- and intra-molecular polarization is modelled in a consistent manner with distributed atomic polarizabilities. The van der Waals parameters were first transferred from existing AMOEBA parameters for small organic molecules and then optimised by fitting to ab initio intermolecular interaction energies between models and a water molecule. Molecular dynamics simulations of the two aqueous DOPC and POPE membrane bilayer systems, consisting of 72 model molecules, were then carried out to validate the force field parameters. Membrane width, area per lipid, volume per lipid, deuterium order parameters, electron density profile, etc. were consistent with experimental values.
Herman, Peter; Sanganahalli, Basavaraju G.; Coman, Daniel; Blumenfeld, Hal; Rothman, Douglas L.
2011-01-01
Abstract A primary objective in neuroscience is to determine how neuronal populations process information within networks. In humans and animal models, functional magnetic resonance imaging (fMRI) is gaining increasing popularity for network mapping. Although neuroimaging with fMRI—conducted with or without tasks—is actively discovering new brain networks, current fMRI data analysis schemes disregard the importance of the total neuronal activity in a region. In task fMRI experiments, the baseline is differenced away to disclose areas of small evoked changes in the blood oxygenation level-dependent (BOLD) signal. In resting-state fMRI experiments, the spotlight is on regions revealed by correlations of tiny fluctuations in the baseline (or spontaneous) BOLD signal. Interpretation of fMRI-based networks is obscured further, because the BOLD signal indirectly reflects neuronal activity, and difference/correlation maps are thresholded. Since the small changes of BOLD signal typically observed in cognitive fMRI experiments represent a minimal fraction of the total energy/activity in a given area, the relevance of fMRI-based networks is uncertain, because the majority of neuronal energy/activity is ignored. An alternative for quantitative neuroimaging of fMRI-based networks is therefore a perspective in which the activity of a neuronal population is accounted for by the demanded oxidative energy (CMRO2). In this article, we argue that network mapping can be improved by including information about both the baseline neuronal energy/activity and the small differences/fluctuations of the BOLD signal. Thus, total energy/activity information can be obtained through use of calibrated fMRI to quantify differences of ΔCMRO2 and through resting-state positron emission tomography/magnetic resonance spectroscopy measurements for average CMRO2. PMID:22433047
NASA Astrophysics Data System (ADS)
Griffa, Annalisa; Carlson, Daniel; Berta, Maristella; Sciascia, Roberta; Corgnati, Lorenzo; Mantovani, Carlo; Fredji, Erick; Magaldi, Marcello; Zambianchi, Enrico; Poulain, Pierre Marie; Russo, Aniello; Carniel, Sandro
2017-04-01
Surface transport in the Adriatic Sea is investigated using historic drifter data, HF radar and virtual particles computed from a numerical model. Alongshore coastal currents and cyclonic gyres are the primary circulation features that connect regions in the Adriatic Sea. Their strength is highly dependent on the wind, with southeasterly Sirocco winds driving eastward cross-Adriatic transport from the Italian coasts and northwesterly Mistral winds enhancing east-to-west transport. Results from the analysis show that cross-Adriatic connection percentages were higher for east-to-west transport, with westward (eastward) transport observed mostly in the northern (southern) arms of the central and southern gyres. These transport pathways influence the connection between Marine Protected Areas (MPAs) and between spawning and nursery areas for small pelagic fish. Percentage connections between MPAs are computed, showing that while the highest percentages occur through boundary currents, significant percentages also occur through cross-gyre transport, suggesting the concept of cell-based ecosystems. The nursery area of the Manfredonia Gulf has limited retention properties, and eggs and larvae are likely to reach the Gulf mostly from remote spawning areas through current transport.
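A minimal sketch of how connection percentages between areas can be derived from particle trajectories, assuming each virtual particle has a known release area and an arrival area after a fixed tracking time; the areas and trajectories are synthetic placeholders, not the study's Lagrangian output.

```python
# Sketch: connectivity matrix between areas as the percentage of particles
# released in area i that end up in area j.
import numpy as np

rng = np.random.default_rng(0)
areas = ["MPA_A", "MPA_B", "MPA_C"]
release_area = rng.choice(len(areas), size=1000)      # where each particle starts
arrival_area = rng.choice(len(areas), size=1000)      # where it is after N days

conn = np.zeros((len(areas), len(areas)))
for i in range(len(areas)):
    released = release_area == i
    for j in range(len(areas)):
        conn[i, j] = 100.0 * np.mean(arrival_area[released] == j)

for i, src in enumerate(areas):
    print(src, "->", {dst: round(conn[i, j], 1) for j, dst in enumerate(areas)})
```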
NASA Technical Reports Server (NTRS)
Grubb, Matt
2016-01-01
The NASA Operational Simulator for Small Satellites (NOS3) is a suite of tools to aid in areas such as software development, integration test (IT), mission operations training, verification and validation (V&V), and software systems check-out. NOS3 provides a software development environment, a multi-target build system, an operator interface and ground station, dynamics and environment simulations, and software-based hardware models. NOS3 enables the development of flight software (FSW) early in the project life cycle, when access to hardware is typically not available. For small satellites there are extensive lead times on many of the commercial-off-the-shelf (COTS) components as well as limited funding for engineering test units (ETU). Considering the difficulty of providing a hardware test-bed to each developer and tester, hardware models are built based upon characteristic data or manufacturers' data sheets for each individual component. The fidelity of each hardware model is such that FSW executes unaware that physical hardware is not present. This allows binaries to be compiled for both the simulation environment and the flight computer without changing the FSW source code. For hardware models that provide data dependent on the environment, such as a GPS receiver or magnetometer, an open-source tool from NASA GSFC (42 Spacecraft Simulation) is used to provide the necessary data. The underlying infrastructure used to transfer messages between FSW and the hardware models can also be used to monitor, intercept, and inject messages, which has proven to be beneficial for V&V of larger missions such as the James Webb Space Telescope (JWST). As hardware is procured, drivers can be added to the environment to enable hardware-in-the-loop (HWIL) testing. When strict time synchronization is not vital, any number of combinations of hardware components and software-based models can be tested. The open-source operator interface used in NOS3 is COSMOS from Ball Aerospace. For testing, plug-ins are implemented in COSMOS to control the NOS3 simulations, while the command and telemetry tools available in COSMOS are used to communicate with FSW. NOS3 is actively being used for FSW development and component testing of the Simulation-to-Flight 1 (STF-1) CubeSat. As NOS3 matures, hardware models have been added for common CubeSat components such as Novatel GPS receivers, ClydeSpace electrical power systems and batteries, ISISpace antenna systems, etc. In the future, NASA IV&V plans to distribute NOS3 to other CubeSat developers and release the suite to the open-source community.
Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders
2007-01-01
Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…
Docherty, Neil G; le Roux, Carel W
2016-03-01
Alterations in small intestinal physiology are proposed to play a causative role in the beneficial impact of Roux-en-Y gastric bypass on type 2 diabetes mellitus. The present article describes the key proposed mechanisms implicated with an emphasis on some of the newer findings in the field. Augmented incretin and diminished anti-incretin effects postsurgery are explored and a model proposed that reconciles the hindgut and foregut hypotheses of improved glycaemic control as being complementary rather than mutually exclusive. Synthesis of recent findings on postbypass changes in intestinal glucose handling then follows. Finally an updated view of the role of distal bile diversion and changes in the microbiota on enteroendocrine signalling is presented. A series of nonmutually exclusive changes in small intestinal physiology likely make a significant contribution to improved glycaemic control postgastric bypass. Longitudinal data indicate that these effects do not translate into a long-term cure. A number of surgery-induced changes, however, are amenable to device-based and pharmacology-based mimicry, and this is an area for prioritization of future research focus.
NASA Astrophysics Data System (ADS)
Strahm, Ivo; Munz, Nicole; Braun, Christian; Gälli, René; Leu, Christian; Stamm, Christian
2014-05-01
Water quality in the Swiss river network is affected by many micropollutants from a variety of diffuse sources. This study compares, for the first time and in a comprehensive manner, the diffuse sources and the substance groups that contribute most to water contamination in Swiss streams, and highlights the major regions of water pollution. For this purpose, a simple but comprehensive model was developed to estimate emissions from diffuse sources for the entire Swiss river network of 65 000 km. Based on emission factors, the model calculates catchment-specific losses to streams for more than 15 diffuse sources (such as crop land, grassland, vineyards, fruit orchards, roads, railways, facades, roofs, green space in urban areas, landfills, etc.) and more than 130 different substances from 5 substance groups (pesticides, biocides, heavy metals, human drugs, animal drugs). For more than 180 000 stream sections, estimates of mean annual pollutant loads and mean annual concentration levels were modeled. These estimates were validated against a set of monitoring data and evaluated based on annual average environmental quality standards (AA-EQS). Model validation showed that the estimated mean annual concentration levels are within the range of the measured data, so the simulations were considered adequately robust for identifying the major sources of diffuse pollution. The analysis showed that widespread pollution of streams can be expected in Switzerland. Along more than 18 000 km of the river network, one or more simulated substances has a concentration exceeding its AA-EQS; in individual stream sections this can be more than 50 different substances. Moreover, the simulations showed that in two-thirds of small streams (Strahler order 1 and 2) at least one AA-EQS is always exceeded. The highest numbers of substances exceeding the AA-EQS occur in areas with large fractions of arable cropping, vineyards and fruit orchards. Urban areas are also of concern, even without considering wastewater treatment plants. Only a small number of problematic substances are expected from grassland. Landfills and roadways are insignificant within the entire Swiss river network, but may locally lead to considerable water pollution. Considering all substance groups, pesticides and some heavy metals are the main contributors. Many pesticides are expected to exceed their AA-EQS, and over a substantial percentage of the river network. Modeling a large number of substances from many sources and a huge number of stream sections is only possible with a simple model. Nevertheless, the conclusions are robust and indicate where, and for which substance groups, additional efforts for water quality improvement should be undertaken.
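A minimal sketch of an emission-factor screening calculation of the kind described above, assuming per-source areas, substance-specific emission factors and a mean annual discharge for one stream section; all numbers, the source names and the AA-EQS value are illustrative assumptions.

```python
# Sketch: diffuse-source load, mean annual concentration and AA-EQS check
# for a single substance in a single catchment / stream section.
source_area_ha = {"arable_crops": 120.0, "vineyards": 8.0, "urban_roofs": 15.0}
emission_factor_g_per_ha = {"arable_crops": 1.5, "vineyards": 12.0, "urban_roofs": 0.4}

load_g = sum(source_area_ha[s] * emission_factor_g_per_ha[s] for s in source_area_ha)

mean_discharge_m3 = 2.5e6                  # mean annual discharge of the section (assumed)
conc_ug_L = load_g * 1e6 / (mean_discharge_m3 * 1000)    # g -> µg, m^3 -> L

aa_eqs_ug_L = 0.1                          # assumed standard for this substance
print(f"load = {load_g:.0f} g/yr, concentration = {conc_ug_L:.3f} µg/L, "
      f"exceeds AA-EQS: {conc_ug_L > aa_eqs_ug_L}")
```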
NASA Astrophysics Data System (ADS)
Steger, Stefan; Schmaltz, Elmar; Glade, Thomas
2017-04-01
Empirical landslide susceptibility maps spatially depict the areas where future slope failures are likely due to specific environmental conditions. The underlying statistical models are based on the assumption that future landsliding is likely to occur under similar circumstances (e.g. topographic conditions, lithology, land cover) as past slope failures. This principle is operationalized by applying a supervised classification approach (e.g. a regression model with a binary response: landslide presence/absence) that enables discrimination between conditions that favored past landslide occurrences and the circumstances typical for landslide absences. The derived empirical relation is then transferred to each spatial unit of an area. Literature reveals that the specific topographic conditions representative for landslide presences are frequently extracted from derivatives of digital terrain models at locations were past landslides were mapped. The underlying morphology-based landslide identification becomes possible due to the fact that the topography at a specific locality usually changes after landslide occurrence (e.g. hummocky surface, concave and steep scarp). In a strict sense, this implies that topographic predictors used within conventional statistical landslide susceptibility models relate to post-failure topographic conditions - and not to the required pre-failure situation. This study examines the assumption that models calibrated on the basis of post-failure topographies may not be appropriate to predict future landslide locations, because (i) post-failure and pre-failure topographic conditions may differ and (ii) areas were future landslides will occur do not yet exhibit such a distinct post-failure morphology. The study was conducted for an area located in the Walgau region (Vorarlberg, western Austria), where a detailed inventory consisting of shallow landslides was available. The methodology comprised multiple systematic comparisons of models generated on the basis of post-failure conditions (i.e. the standard approach) with models based on an approximated pre-failure topography. Pre-failure topography was approximated by (i) erasing the area of mapped landslide polygons within a digital terrain model and (ii) filling these "empty" areas by interpolating elevation points located outside the mapped landslides. Landslide presence information was extracted from the respective landslide scarp locations while an equal number of randomly sampled points represented landslide absences. After an initial exploratory data analysis, mixed-effects logistic regression was applied to model landslide susceptibility on the basis of two predictor sets (post-failure versus pre-failure predictors). Furthermore, all analyses were separately conducted for five different modelling resolutions to elaborate the suspicion that the degree of generalization of topographic parameters may as well play a role on how the respective models may differ. Model evaluation was conducted by means of multiple procedures (i.e. odds ratios, k-fold cross validation, permutation-based variable importance, difference maps of predictions). The results revealed that models based on highest resolutions (e.g. 1 m, 2.5 m) and post-failure topography performed best from a purely quantitative perspective. 
A comparison of models (post-failure versus pre-failure based models) built at an identical modelling resolution showed that validation results, modelled relationships and the prediction patterns tended to converge with decreasing raster resolution. Based on the results, we concluded that an approximation of pre-failure topography does not significantly contribute to improved landslide susceptibility models when (i) the underlying inventory consists of small landslide features and (ii) the models are based on coarse raster resolutions (e.g. 25 m). However, when modelling at high raster resolutions (e.g. 1 m, 2.5 m) is envisaged, or the inventory consists mainly of larger events, a reconstruction of pre-failure conditions might be highly expedient, even though conventional validation results might indicate the opposite tendency. Finally, we recommend keeping in mind that topographic predictors highly useful for detecting past slope movements (e.g. roughness) are not necessarily valuable for predicting future slope instabilities.
An analysis of population and social change in London wards in the 1980s.
Congdon, P
1989-01-01
"This paper discusses the estimation and projection of small area populations in London, [England] and considers trends in intercensal social and demographic indices which can be calculated using these estimates. Information available annually on vital statistics and electorates is combined with detailed data from the Census Small Area Statistics to derive demographic component based population estimates for London's electoral wards over five year periods. The availability of age disaggregated population estimates permits derivation of small area social indicators for intercensal years, for example, of unemployment and mortality. Trends in spatial inequality of such indicators during the 1980s are analysed and point to continuing wide differentials. A typology of population and social indicators gives an indication of the small area distribution of the recent population turnaround in inner London, and of its association with other social processes such as gentrification and ethnic concentration." excerpt
A new methodology for surcharge risk management in urban areas (case study: Gonbad-e-Kavus city).
Hooshyaripor, Farhad; Yazdi, Jafar
2017-02-01
This research presents a simulation-optimization model for urban flood mitigation that integrates the Non-dominated Sorting Genetic Algorithm (NSGA-II) with the Storm Water Management Model (SWMM) hydraulic model and a curve-number-based hydrologic representation of low impact development technologies in Gonbad-e-Kavus, a small city in the north of Iran. In the developed model, the best performance of the system relies on the optimal layout and capacity of retention ponds over the study area so as to reduce surcharge from manholes under a set of storm event loads, while the available investment plays a restricting role. The result is a multi-objective optimization problem with two conflicting objectives, solved successfully by NSGA-II to find a set of optimal solutions known as the Pareto front. To analyze the results, a new factor, the investment priority index (IPI), is defined, which reflects the risk of surcharging over the network and the priority of mitigation actions. The IPI is calculated from the probability of pond selection at candidate locations and the average depth of the ponds across all Pareto front solutions. The IPI can help decision makers arrange a long-term progressive plan that prioritizes high-risk areas once an optimal solution has been selected.
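As a purely illustrative sketch (the paper's exact formula is not reproduced here), the investment priority index described above can be assembled from a set of Pareto-front solutions as the probability that a candidate pond location is selected multiplied by the average designed pond depth at that location; the location identifiers and depths below are hypothetical.

    import numpy as np

    def investment_priority_index(pareto_solutions):
        """Illustrative sketch (not the authors' code): compute an IPI per candidate
        pond location from a list of Pareto-front solutions. Each solution is a dict
        mapping candidate location id -> designed pond depth (0 if the pond is unused)."""
        locations = sorted({loc for sol in pareto_solutions for loc in sol})
        ipi = {}
        for loc in locations:
            depths = np.array([sol.get(loc, 0.0) for sol in pareto_solutions])
            selected = depths > 0.0
            p_select = selected.mean()                      # probability of pond selection
            mean_depth = depths[selected].mean() if selected.any() else 0.0
            ipi[loc] = p_select * mean_depth                # higher IPI = higher priority / surcharge risk
        return ipi

    # toy usage with three hypothetical Pareto solutions over four candidate sites
    solutions = [{"A": 1.2, "B": 0.0, "C": 0.8, "D": 0.5},
                 {"A": 1.0, "B": 0.4, "C": 0.0, "D": 0.6},
                 {"A": 1.4, "B": 0.0, "C": 0.9, "D": 0.0}]
    print(investment_priority_index(solutions))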
Promoting small towns for rural development: a view from Nepal.
Bajracharya, B N
1995-06-01
Two small villages in Nepal are the subjects of case studies that illustrate the role of small towns in the provision of services, employment, and market operations. Some general findings are that small towns act as service centers for the distribution of basic essential goods such as food grains, salt, kerosene, and fabric for hill and mountain areas. The role of small towns as market centers and in the provision of employment is limited. In resource-poor areas small towns are less diversified; towns with agricultural surpluses are more developed. Small hill towns satisfy consumption rather than production needs. The growth of rural areas and of towns in rural areas in Nepal depends on arable land and levels of production in hill areas; limited land and low levels of production have an adverse impact. Movement of people, goods, and services is limited by difficult terrain and lack of access to good roads. Access to off-farm jobs and to the services available in small towns varies with ethnicity and place of residence. The best development strategy for small towns in Nepal is market-oriented territorial development, which retains surpluses in the local area and integrates markets into the larger economy. The strategy would decentralize planning into small territorial units that include both small towns and groups of villages, provide institutional support for the rural poor, expand off-farm employment, and include investment in region-serving functions. Subsistence agriculture needs to include diversification into high-value cash crops based on local comparative advantage and suitable for the hill climate and terrain. Small farmers must produce both cash and subsistence crops. Government should provide market space and paved areas, weighing facilities, and overnight storage facilities. Products would be processed at the village level. Subdistricts must be established according to spatial and social linkages between villages and the service center and coordinated at the district level. Group marketing, transport to large urban centers, and agricultural technical services are needed.
Lin, Mei; Zhang, Xingyou; Holt, James B; Robison, Valerie; Li, Chien-Hsun; Griffin, Susan O
2018-06-01
Because conducting population-based oral health screening is resource intensive, oral health data at small-area levels (e.g., county level) are not commonly available. We applied the multilevel logistic regression and poststratification method to estimate county-level prevalence of untreated dental caries among children aged 6-9 years in the United States, using data from the National Health and Nutrition Examination Survey (NHANES) 2005-2010 linked with various area-level data at the census tract, county and state levels. We validated model-based national estimates against direct estimates from NHANES. We also compared model-based estimates with direct estimates from select State Oral Health Surveys (SOHS) at state and county levels. The model with individual-level covariates only and the model with individual-, census tract- and county-level covariates explained 7.2% and 96.3%, respectively, of the overall county-level variation in untreated caries. Model-based county-level prevalence estimates ranged from 4.9% to 65.2% with a median of 22.1%. The model-based national estimate (19.9%) matched the NHANES direct estimate (19.8%). We found significantly positive correlations between model-based estimates for 8-year-olds and direct estimates from the third-grade State Oral Health Surveys (SOHS) at the state level for 34 states (Pearson coefficient: 0.54, P=0.001) and SOHS estimates at the county level for 53 New York counties (Pearson coefficient: 0.38, P=0.006). This methodology could be a useful tool to characterize county-level disparities in untreated dental caries among children aged 6-9 years and to complement oral health surveillance to inform public health programs, especially when local-level data are not available, although the lack of external validation due to data unavailability should be acknowledged. Published by Elsevier Inc.
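The regression-and-poststratification idea can be pictured with a deliberately simplified, single-level sketch: fit a logistic regression on survey microdata with individual- and area-level covariates, predict for every demographic cell of a poststratification table, and aggregate with population weights to the county level. All data, column names and counts below are made up for illustration; the actual study used a multilevel model fit to NHANES.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical survey microdata: outcome (untreated caries yes/no), individual
    # covariates, and an area-level covariate merged in by county of residence.
    survey = pd.DataFrame({
        "caries":         [1, 0, 0, 1, 1, 0, 0, 1, 0, 1],
        "age":            [6, 7, 8, 9, 6, 7, 8, 9, 6, 8],
        "low_income":     [1, 0, 1, 1, 0, 0, 1, 1, 0, 1],
        "county_poverty": [0.22, 0.10, 0.22, 0.30, 0.10, 0.10, 0.30, 0.22, 0.10, 0.30],
    })
    X_cols = ["age", "low_income", "county_poverty"]
    model = LogisticRegression().fit(survey[X_cols], survey["caries"])

    # Poststratification table: population counts for each demographic cell in each county.
    cells = pd.DataFrame({
        "county":         ["A", "A", "B", "B"],
        "age":            [7, 8, 7, 8],
        "low_income":     [1, 0, 1, 0],
        "county_poverty": [0.22, 0.22, 0.30, 0.30],
        "pop":            [1200, 800, 300, 900],
    })
    cells["p_hat"] = model.predict_proba(cells[X_cols])[:, 1]
    cells["expected_cases"] = cells["p_hat"] * cells["pop"]
    county = cells.groupby("county")[["expected_cases", "pop"]].sum()
    county["prevalence"] = county["expected_cases"] / county["pop"]
    print(county["prevalence"])   # model-based county-level prevalence estimates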
Unstructured P2P Network Load Balance Strategy Based on Multilevel Partitioning of Hypergraph
NASA Astrophysics Data System (ADS)
Feng, Lv; Chunlin, Gao; Kaiyang, Ma
2017-05-01
With the rapid development of computer performance and distributed technology, P2P-based resource sharing plays an important role in the Internet. As the number of P2P network users continues to increase, the highly dynamic character of the system makes it difficult for any node to obtain the load of other nodes. Therefore, a dynamic load balance strategy based on hypergraphs is proposed in this article. The scheme develops the idea of multilevel partitioning from hypergraph theory: an optimized multilevel partitioning algorithm partitions the P2P network into several small areas, and each area is assigned a supernode that manages the nodes in that area and transfers load among them. Where global scheduling is difficult to achieve, load balancing over a number of small areas can be ensured first; by balancing the node load within each small area, the whole network can reach a relative load balance. The experiments indicate that the load distribution of network nodes under our scheme is noticeably more compact. The strategy effectively mitigates load imbalance in P2P networks and also improves the scalability and bandwidth utilization of the system.
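The within-area balancing step can be pictured with a small sketch (this is not the hypergraph partitioning algorithm itself, and the area assignments and load units are hypothetical): each supernode repeatedly transfers load from its most loaded to its least loaded node until the spread falls under a tolerance, so that global balance emerges from purely local transfers.

    from statistics import mean

    areas = {  # area id -> current load of each node in that area (assumed units)
        "A": [90, 10, 20, 60],
        "B": [30, 35, 25, 40],
    }

    def balance_area(loads, tolerance=5):
        """Greedy local balancing managed by a supernode: move load from the most
        loaded node to the least loaded one until the spread is within tolerance."""
        target = mean(loads)
        balanced = list(loads)
        while max(balanced) - min(balanced) > tolerance:
            hi = balanced.index(max(balanced))
            lo = balanced.index(min(balanced))
            transfer = min(balanced[hi] - target, target - balanced[lo])
            balanced[hi] -= transfer
            balanced[lo] += transfer
        return balanced

    for area, loads in areas.items():
        print(area, balance_area(loads))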
Small ICBM area narrowing report. Volume 3: Hard silo in patterned array basing mode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The purpose of this report is to identify those areas that could potentially support deployment of the Small Intercontinental Ballistic Missile (ICBM) utilizing basing modes presently considered viable: the Hard Mobile Launcher in Random Movement, the Hard Mobile Launcher at Minuteman Facilities, or the Hard Silo in Patterned Array. Specifically, this report describes the process and the rationale supporting the application of Exclusionary and Evaluative Criteria and lists those locations that were eliminated through the application of these criteria. The remaining locations will be the subject of further investigations.
Gomez, Céline; Mangeas, Morgan; Curt, Thomas; Ibanez, Thomas; Munzinger, Jérôme; Dumas, Pascal; Jérémy, André; Despinoy, Marc; Hély, Christelle
2015-01-01
Wildfire has been recognized as one of the most ubiquitous disturbance agents impacting natural environments. In this study, our main objective was to propose a modeling approach to investigate the potential impact of wildfire on biodiversity. The method is illustrated with an application example in New Caledonia, where conservation and sustainable biodiversity management represent an important challenge. Firstly, a biodiversity loss index, combining diversity and vulnerability indexes, was calculated for every vegetation unit in New Caledonia and mapped according to its distribution over the New Caledonian mainland. Then, based on spatially explicit fire behavior simulations (using the FLAMMAP software) and fire ignition probabilities, two original fire risk assessment approaches were proposed: a one-off event model and a multi-event burn probability model. The spatial distribution of fire risk across New Caledonia was similar for both indices, with very small localized spots having high risk. The patterns with the highest risk are all located around the remaining sclerophyll forest fragments and represent 0.012% of the mainland surface. A small part of the maquis and areas adjacent to dense humid forest on ultramafic substrates should also be monitored. Vegetation interfaces between secondary and primary units displayed high risk and should represent priority zones for mitigating fire effects. Low fire ignition probability in areas free of anthropogenic influence drastically decreases the risk. The one-off event risk allowed localization of the most likely ignition areas with potential for extensive damage. Emergency actions could aim at limiting the spread of specific fires known to have high impact, or target high-risk areas to limit one-off fire ignitions. Spatially explicit information on burning probability is necessary for setting strategic fire and fuel management planning. Both risk indices provide clues for preserving the New Caledonian biodiversity hotspot in the face of wildfires.
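A toy sketch of the multi-event risk idea (an assumed formulation, on synthetic rasters): combine a per-pixel burn probability from fire-spread simulations with the per-pixel biodiversity loss index, and flag the very small fraction of cells with the highest product as priority zones.

    import numpy as np

    # Synthetic stand-ins for the two model inputs (not the study's data).
    rng = np.random.default_rng(0)
    burn_probability = rng.uniform(0.0, 0.05, size=(100, 100))   # from fire-spread simulations
    loss_index = rng.uniform(0.0, 1.0, size=(100, 100))          # diversity x vulnerability

    risk = burn_probability * loss_index                 # assumed multiplicative risk surface
    threshold = np.quantile(risk, 0.999)                 # keep only the small high-risk spots
    priority_cells = np.argwhere(risk >= threshold)
    print(f"{len(priority_cells)} priority cells out of {risk.size}")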
Sitek, Aneta; Rosset, Iwona; Żądzińska, Elżbieta; Kasielska-Trojan, Anna; Neskoromna-Jędrzejczak, Aneta; Antoszewski, Bogusław
2016-04-01
Light skin pigmentation is a known risk factor for skin cancer. Skin color parameters and Fitzpatrick phototypes were evaluated in terms of their usefulness in predicting the risk of skin cancer. A case-control study involved 133 individuals with skin cancer (100 with basal cell carcinoma, 21 with squamous cell carcinoma, 12 with melanoma) and 156 healthy individuals. All of them had their skin phototype determined, and spectrophotometric skin color measurements were taken on the inner surfaces of their arms and on the buttock. Using those data, prediction models were built and subjected to 17-fold stratified cross-validation. A model based on skin phototypes was characterized by an area under the receiver operating characteristic curve of 0.576 and exhibited a lower predictive power than the models based mostly on spectrophotometric variables describing pigmentation levels. The best predictors of skin cancer were the R coordinate of RGB color space (area under the receiver operating characteristic curve 0.687) and the melanin index (area under the receiver operating characteristic curve 0.683) for skin on the buttock. A small number of patients were studied, and the models were not externally validated. Skin color parameters are more accurate predictors of skin cancer occurrence than skin phototypes. Spectrophotometry is a quick, easy, and affordable method offering relatively good predictive power. Copyright © 2015 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
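The model comparison above can be mimicked with a hedged sketch on synthetic data (the variable effects and sample structure are assumed, not the study's measurements): estimate the cross-validated area under the ROC curve for a phototype-only model and for a model built on spectrophotometric predictors, using 17-fold stratified cross-validation.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n = 289                                              # cases + controls, as in the study
    y = rng.integers(0, 2, size=n)                       # 1 = skin cancer, 0 = control (synthetic)
    phototype = rng.integers(1, 5, size=n) - 0.3 * y     # weak association (assumed)
    melanin_index = rng.normal(50, 10, size=n) - 4.0 * y # stronger associations (assumed)
    r_coord = rng.normal(180, 15, size=n) + 5.0 * y

    cv = StratifiedKFold(n_splits=17, shuffle=True, random_state=0)
    clf = make_pipeline(StandardScaler(), LogisticRegression())
    for name, X in [("phototype only", phototype.reshape(-1, 1)),
                    ("spectrophotometric", np.column_stack([melanin_index, r_coord]))]:
        auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
        print(f"{name}: mean cross-validated AUC = {auc.mean():.3f}")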
NASA Astrophysics Data System (ADS)
Götz, Joachim; Buckel, Johannes; Heckmann, Tobias
2013-04-01
The analysis of alpine sediment cascades requires the identification, differentiation and quantification of sediment sources, storages, and transport processes. This study deals with the origin of alpine sediment transfer and relates primary talus deposits to corresponding rockwall source areas within the Gradenbach catchment (Schober Mountains, Austrian Alps). Sediment storage landforms were delineated from a detailed geomorphological map of the catchment, which was generated to analyse the sediment transfer system. Mapping was mainly performed in the field and supplemented by post-mapping analysis using LIDAR data and digital orthophotos. A fundamental part of the mapping procedure was to capture additional landform-based information with respect to morphometry, activity and connectivity. The applied procedure provides a detailed inventory of sediment storage landforms, including additional information on surface characteristics, dominant and secondary erosion and deposition processes, process activity and sediment storage coupling. We develop the working hypothesis that the present-day surface area ratio between rockfall talus (area as a proxy for volume, backed by geophysical analysis of selected talus cones) and the corresponding rockwall source area is a measure of rockfall activity since deglaciation; large talus cones derived from small rockwall catchments indicate high activity, while low activity can be inferred where rockfall from large rock faces has created only small deposits. The surface area ratio of talus and corresponding rockwalls is analysed using a landform-based and a process-based approach. For the landform-based approach, we designed a GIS procedure which derives the (hydrological) catchment area of the contact lines of talus and rockwall landforms in the geomorphological map. The process-based approach simulates rockfall trajectories, generated by a random-walk rockfall model, from steep (>45°) portions of the DEM. By back-tracing those trajectories that end on a selected talus landform, the 'rockfall contributing area' is delineated; this approach takes account of the stochastic nature of rockfall trajectories and is able to identify, for example, rockfall delivery from one rockwall segment to multiple talus landforms (or from multiple rockwall segments to the same deposit, respectively). Using both approaches, a total of 290 rockwall-talus subsystems are statistically analysed, indicating a nearly constant relationship of almost 1:1 between rockfall source areas and the corresponding areas of talus deposits. However, certain rockwall-talus subsystems deviate from this correlation, since sediment storage landforms of similar size originate from rockwall source areas of varying size and vice versa. This varying relationship is assumed to be strongly controlled by morphometric parameters, such as rockwall slope, altitudinal interval, and aspect. The impact of these parameters on the surface area ratio is finally discussed.
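The process-based delineation of a "rockfall contributing area" can be sketched as follows on a synthetic DEM (this is a minimal stand-in for the random-walk rockfall model, with assumed release criteria): trajectories are released from the steepest cells and step to a randomly chosen lower neighbour, weighted by elevation drop, until they reach a local sink.

    import numpy as np

    rng = np.random.default_rng(42)
    dem = np.add.outer(np.linspace(50, 0, 60), np.linspace(30, 0, 60))  # synthetic sloping DEM
    dem += rng.normal(0, 0.5, dem.shape)

    def run_trajectory(dem, start, max_steps=500):
        """Random-walk descent: move to a lower neighbour with probability
        proportional to the elevation drop; stop at a local sink (deposition)."""
        r, c = start
        for _ in range(max_steps):
            nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr or dc) and 0 <= r + dr < dem.shape[0] and 0 <= c + dc < dem.shape[1]]
            drops = np.array([dem[r, c] - dem[nr, nc] for nr, nc in nbrs])
            if (drops <= 0).all():
                break                                  # local sink reached
            p = np.clip(drops, 0, None)
            r, c = nbrs[rng.choice(len(nbrs), p=p / p.sum())]
        return r, c

    # release trajectories from the steepest 5% of cells (a proxy for rockwall source areas)
    gy, gx = np.gradient(dem)
    slope = np.hypot(gy, gx)
    sources = np.argwhere(slope > np.quantile(slope, 0.95))
    endpoints = [run_trajectory(dem, tuple(s)) for s in sources]
    print(f"{len(set(endpoints))} distinct deposition cells reached from {len(sources)} source cells")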
Terrestrial precipitation and soil moisture: A case study over southern Arizona and data development
NASA Astrophysics Data System (ADS)
Stillman, Susan
Quantifying climatological precipitation and soil moisture as well as interannual variability and trends requires extensive observation. This work focuses on the analysis of available precipitation and soil moisture data and the development of new ways to estimate these quantities. Precipitation and soil moisture characteristics are highly dependent on the spatial and temporal scales. We begin at the point scale, examining hourly precipitation and soil moisture at individual gauges. First, we focus on the Walnut Gulch Experimental Watershed (WGEW), a 150 km2 area in southern Arizona. The watershed has been measuring rainfall since 1956 with a very high density network of approximately 0.6 gauges per km2. Additionally, there are 19 soil moisture probes at 5 cm depth with data starting in 2002. In order to extend the measurement period, we have developed a water balance model which estimates monsoon season (Jul-Sep) soil moisture using only precipitation for input, and calibrated so that the modeled soil moisture fits best with the soil moisture measured by each of the 19 probes from 2002-2012. This observationally constrained soil moisture is highly correlated with the collocated probes (R=0.88), and extends the measurement period from 10 to 56 years and the number of gauges from 19 to 88. Then, we focus on the spatiotemporal variability within the watershed and the ability to estimate area averaged quantities. Spatially averaged precipitation and observationally constrained soil moisture from the 88 gauges is then used to evaluate various gridded datasets. We find that gauge-based precipitation products perform best followed by reanalyses and then satellite-based products. Coupled Model Intercomparison Project Phase 5 (CMIP5) models perform the worst and overestimate cold season precipitation while offsetting the monsoon peak precipitation forward or backward by a month. Satellite-based soil moisture is the best followed by land data assimilation systems and reanalyses. We show that while WGEW is small compared to the grid size of many of the evaluated products, unlike scaling from point to area, the effect of scaling from smaller to larger area is small. Finally, we focus on global precipitation. Global monthly gauge based precipitation data has become widely available in recent years and is necessary for analyzing the climatological and anomaly precipitation fields as well as for calibrating and evaluating other gridded products such as satellite-based and modeled precipitation. However, frequency and intensity of precipitation are also important in the partitioning of water and energy fluxes. Therefore, because daily and sub-daily observed precipitation is limited to recent years, the number of raining days per month (N) is needed. We show that the only currently available long-term N product, developed by the Climate Research Unit (CRU), is deficient in certain areas, particularly where CRU gauge data is sparse. We then develop a new global 110-year N product, which shows significant improvement over CRU using three regional daily precipitation products with far more gauges than are used in CRU.
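A minimal single-bucket water-balance sketch conveys the idea of observationally constrained soil moisture driven only by precipitation; the loss formulation, parameter ranges and the synthetic monsoon rainfall below are assumptions rather than the thesis model, and calibration is a crude grid search against a pseudo probe record.

    import numpy as np

    def bucket_model(precip, k_loss, capacity):
        """Daily bucket: add rainfall up to a storage capacity, remove a linear
        loss (evapotranspiration + drainage), return relative soil moisture."""
        s = np.zeros(len(precip))
        store = 0.3 * capacity
        for t, p in enumerate(precip):
            store = min(capacity, store + p)
            store -= k_loss * store
            s[t] = store / capacity
        return s

    rng = np.random.default_rng(3)
    precip = rng.gamma(0.3, 8.0, size=92)                                   # synthetic Jul-Sep rainfall (mm/day)
    observed = bucket_model(precip, 0.06, 80.0) + rng.normal(0, 0.02, 92)   # pseudo probe record

    # crude grid-search calibration maximising correlation with the probe record
    best = (-1.0, None, None)
    for k in np.linspace(0.01, 0.2, 20):
        for c in np.linspace(40, 150, 12):
            r = np.corrcoef(bucket_model(precip, k, c), observed)[0, 1]
            if r > best[0]:
                best = (r, k, c)
    print(f"best R = {best[0]:.2f} with k_loss = {best[1]:.3f}, capacity = {best[2]:.0f} mm")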
Linking flood peak, flood volume and inundation extent: a DEM-based approach
NASA Astrophysics Data System (ADS)
Rebolho, Cédric; Furusho-Percot, Carina; Blaquière, Simon; Brettschneider, Marco; Andréassian, Vazken
2017-04-01
Traditionally, flood inundation maps are computed based on the Shallow Water Equations (SWE) in one or two dimensions, with various simplifications that have proved to give good results. However, the complexity of the SWEs often requires a numerical resolution with long computing times, as well as detailed cross-section data; this often restricts these models to rather small areas abundant with high-quality data. These limitations, along with the need for fast inundation mapping, are the reason why rapid inundation models are being designed, working for (almost) any river with a minimum amount of easily available data. Our model follows this path by using a 100 m DEM over France, from which a drainage network and the associated drainage areas are extracted. It is based on two pre-existing methods: (1) SHYREG (Arnaud et al., 2013), a regionalized approach used to calculate the 2-year and 10-year flood quantiles (used as the approximate bankfull flow and maximum discharge, respectively) for each river pixel of the DEM (below a 10 000 km2 drainage area), and (2) SOCOSE (Mailhol, 1980), which gives, amongst other things, an empirical formula for a characteristic flood duration (for each pixel) based on catchment area, average precipitation and temperature. An overflow volume for each river pixel is extracted from a triangular-shaped synthetic hydrograph designed with the SHYREG quantiles and the SOCOSE flood duration. The volume is then spread from downstream to upstream, one river pixel at a time. When the entire hydrographic network has been processed, the model stops and generates a map of the potential inundation area associated with the 10-year flood quantile. Our model can also be calibrated using inundation maps of past events by adjusting two parameters: one modifies the overflow duration and the other is equivalent to a minimum drainage area for river pixels to be flooded. In calibration on a sample of 42 basins, the first draft of the model showed a median Fit of 0.51 (intersection of simulated and observed areas divided by their union, Bates and De Roo, 2000) and a maximum of 0.74. Obviously, this approach is quite rough and would require testing on events of homogeneous return periods (which is not the case for now). The next steps in the testing and development of our method include the use of the AIGA distributed model to simulate hydrographs of past events, the search for a new way to automatically estimate bankfull flow, and the integration of the results in our model to build dynamic maps of the flood. References: Arnaud, P., Eglin, Y., Janet, B., and Payrastre, O. (2013). Notice utilisateur: bases de données SHYREG-Débit. Méthode - Performances - Limites. Bates, P. D. and De Roo, A. P. J. (2000). A simple raster-based model for flood inundation simulation. Journal of Hydrology, 236(1-2):54-77. Mailhol, J. (1980). Pour une approche plus réaliste du temps caractéristique de crues des bassins versants. In Actes du Colloque d'Oxford, volume 129, pages 229-237, Oxford. IAHS-AISH.
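Two of the quantities used above can be sketched under simple assumed forms: the overflow volume of a triangular synthetic hydrograph (peak at the 10-year quantile, base equal to the characteristic flood duration, overflow being the part above the 2-year bankfull flow), and the Fit criterion of Bates and De Roo (2000) as intersection over union of the simulated and observed flooded areas.

    import numpy as np

    def overflow_volume(q2, q10, duration_s):
        """Volume (m3) of the part of a triangular hydrograph above the bankfull
        flow q2, for a peak discharge q10 and a base duration duration_s (assumed shape)."""
        if q10 <= q2:
            return 0.0
        frac = (q10 - q2) / q10                     # similar triangles: base of the top triangle
        return 0.5 * duration_s * frac * (q10 - q2)

    def fit_score(simulated, observed):
        """Fit of Bates and De Roo (2000): intersection / union of flooded areas."""
        sim, obs = np.asarray(simulated, bool), np.asarray(observed, bool)
        return np.logical_and(sim, obs).sum() / np.logical_or(sim, obs).sum()

    sim = np.zeros((50, 50), bool); sim[10:30, 10:30] = True   # toy simulated flood extent
    obs = np.zeros((50, 50), bool); obs[15:35, 12:32] = True   # toy observed flood extent
    print(f"overflow volume = {overflow_volume(q2=120.0, q10=310.0, duration_s=36 * 3600):.0f} m3")
    print(f"Fit = {fit_score(sim, obs):.2f}")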
Surface roughness effects on the solar reflectance of cool asphalt shingles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akbari, Hashem; Berdahl, Paul; Akbari, Hashem
2008-02-17
We analyze the solar reflectance of asphalt roofing shingles that are covered with pigmented mineral roofing granules. The reflecting surface is rough, with a total area approximately twice the nominal area. We introduce a simple analytical model that relates the 'micro-reflectance' of a small surface region to the 'macro-reflectance' of the shingle. This model uses a mean field approximation to account for multiple scattering effects. The model is then used to compute the reflectance of shingles with a mixture of different colored granules, when the reflectances of the corresponding mono-color shingles are known. Simple linear averaging works well, with small corrections to linear averaging derived for highly reflective materials. Reflective base granules and reflective surface coatings aid achievement of high solar reflectance. Other factors that influence the solar reflectance are the size distribution of the granules, coverage of the asphalt substrate, and orientation of the granules as affected by rollers during fabrication.
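The linear-averaging baseline mentioned above is easy to state; the sketch below applies area-weighted averaging of mono-colour shingle reflectances to a hypothetical granule mix (the mean-field multiple-scattering correction from the paper is not reproduced, and the reflectance values are illustrative).

    def mixed_shingle_reflectance(fractions, mono_reflectances):
        """Area-weighted linear average of mono-colour shingle reflectances."""
        assert abs(sum(fractions) - 1.0) < 1e-9, "granule area fractions must sum to 1"
        return sum(f * r for f, r in zip(fractions, mono_reflectances))

    # e.g. 60% white (0.55), 30% grey (0.25), 10% dark (0.08) granules (illustrative values)
    print(mixed_shingle_reflectance([0.6, 0.3, 0.1], [0.55, 0.25, 0.08]))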
NASA Astrophysics Data System (ADS)
Wang, Po-Hsun; Liu, Hao-Li; Hsu, Po-Hung; Lin, Chia-Yu; Chris Wang, Churng-Ren; Chen, Pin-Yuan; Wei, Kuo-Chen; Yen, Tzu-Chen; Li, Meng-Lin
2012-06-01
In this study, we develop a novel photoacoustic imaging technique based on gold nanorods (AuNRs) for quantitatively monitoring focused-ultrasound (FUS) induced blood-brain barrier (BBB) opening in a rat model in vivo. This study takes advantage of the strong near-infrared absorption (peak at ~800 nm) of AuNRs and the extravasation tendency from BBB opening foci due to their nano-scale size to passively label the BBB disruption area. Experimental results show that AuNR contrast-enhanced photoacoustic microscopy (PAM) successfully reveals the spatial distribution and temporal response of BBB disruption area in the rat brains. The quantitative measurement of contrast enhancement has potential to estimate the local concentration of AuNRs and even the dosage of therapeutic molecules when AuNRs are further used as nano-carrier for drug delivery or photothermal therapy. The photoacoustic results also provide complementary information to MRI, being helpful to discover more details about FUS induced BBB opening in small animal models.
Geostatistical modeling of riparian forest microclimate and its implications for sampling
Eskelson, B.N.I.; Anderson, P.D.; Hagar, J.C.; Temesgen, H.
2011-01-01
Predictive models of microclimate under various site conditions in forested headwater stream - riparian areas are poorly developed, and sampling designs for characterizing underlying riparian microclimate gradients are sparse. We used riparian microclimate data collected at eight headwater streams in the Oregon Coast Range to compare ordinary kriging (OK), universal kriging (UK), and kriging with external drift (KED) for point prediction of mean maximum air temperature (Tair). Several topographic and forest structure characteristics were considered as site-specific parameters. Height above stream and distance to stream were the most important covariates in the KED models, which outperformed OK and UK in terms of root mean square error. Sample patterns were optimized based on the kriging variance and the weighted means of shortest distance criterion using the simulated annealing algorithm. The optimized sample patterns outperformed systematic sample patterns in terms of mean kriging variance mainly for small sample sizes. These findings suggest methods for increasing efficiency of microclimate monitoring in riparian areas.
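A compact ordinary-kriging sketch illustrates the point-prediction step; the exponential variogram parameters, sensor coordinates and synthetic temperature field below are assumptions for illustration, not the fitted models from the study (which also used universal kriging and kriging with external drift).

    import numpy as np

    def variogram(h, nugget=0.1, sill=1.5, range_=60.0):
        """Assumed exponential variogram of Tair (units: degC^2, distances in m)."""
        return nugget + (sill - nugget) * (1.0 - np.exp(-h / range_))

    def ordinary_krige(xy_obs, z_obs, xy_new):
        """Solve the ordinary kriging system for each prediction point."""
        n = len(xy_obs)
        d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = variogram(d)
        A[-1, -1] = 0.0
        preds = []
        for p in xy_new:
            b = np.ones(n + 1)
            b[:n] = variogram(np.linalg.norm(xy_obs - p, axis=1))
            w = np.linalg.solve(A, b)
            preds.append(w[:n] @ z_obs)
        return np.array(preds)

    rng = np.random.default_rng(7)
    xy = rng.uniform(0, 200, size=(25, 2))                    # sensor coordinates (m)
    tair = 22 + 0.02 * xy[:, 0] + rng.normal(0, 0.5, 25)      # synthetic mean maximum Tair
    grid = np.array([[50.0, 50.0], [150.0, 120.0]])           # prediction points
    print(ordinary_krige(xy, tair, grid))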
Aeromagnetic survey by a model helicopter at the ruin of ironwork refinement
NASA Astrophysics Data System (ADS)
Funaki, M.; Nishioka, T.
2007-12-01
It is difficult to detect the magnetic anomalies of small-scale magnetic sources, such as archaeological or historical ruins, from a helicopter, because low-altitude flights over narrow areas are restricted. Although relatively small unmanned helicopters have been commercialized for agricultural and other uses, they are too expensive for aeromagnetic surveys. We have developed a small autonomous unmanned helicopter for aeromagnetic surveys by modifying a model helicopter. A model helicopter (Hirobo Co.; SF40) with a 40 cc gasoline engine, a length of 143 cm from nose to tail and a dry weight of 15 kg was selected for this study. The magnetic field radiated from the bottom-center of the skid of the SF40 was: total magnetic field (R) = 3511 nT, inclination (I) = 12 degrees and declination (D) = 138 degrees. It decreased to about 1 nT at 3 m below the skid during hovering. When the SF40 was covered with a magnetic shield film (Amolic sheet), the distance at which the field fell to 1 nT diminished to 2 m. As shielding the whole body with the film is not practical for reliable and safe flights, only the servomotors, which have strong magnetization, were shielded with the film. Autonomous flights based on GPS data succeeded. As the control system was too large and heavy for the SF40, we are developing a simpler and smaller navigation system for this project. The magnetometer system consists of a 3-axis fluxgate magnetometer, data logger, GPS and battery, recording the x, y and z magnetic fields, latitude, longitude, altitude and satellite count every second for 3 hours. The total weight of the system is 400 g. The system was either hung 2 m below the skid by a rope (bird magnetometer) or mounted 2 m in front of the nose on a carbon fiber pipe (stinger magnetometer) in order to avoid the magnetic field of the SF40. However, the bird magnetometer was not suitable because of the strong noise resulting from the swinging of the sensor. An archaeological ruin of 15th-century ironwork refinement in western Japan was surveyed with the stinger magnetometer. The survey area was 70x20 m with a gentle slope. The helicopter was controlled manually, keeping roughly constant altitude (4-8 m above the surface) and speed (1 m/s). The result showed strong anomalies of 500 nT at the NW corner of the area, consistent with the location of the refinement site. From these viewpoints, the model helicopter is useful for finding ironwork refinement sites, rather than relying on identification based on the intuition and experience of archaeologists.
A comparison of small-area hospitalisation rates, estimated morbidity and hospital access.
Shulman, H; Birkin, M; Clarke, G P
2015-11-01
Published data on hospitalisation rates tend to reveal marked spatial variations within a city or region. Such variations may simply reflect corresponding variations in need at the small-area level. However, they might also be a consequence of poorer accessibility to medical facilities for certain communities within the region. To help answer this question it is important to compare these variable hospitalisation rates with small-area estimates of need. This paper first maps hospitalisation rates at the small-area level across the region of Yorkshire in the UK to show the spatial variations present. Then the Health Survey of England is used to explore the characteristics of persons with heart disease, using chi-square and logistic regression analysis. Using the most significant variables from this analysis the authors build a spatial microsimulation model of morbidity for heart disease for the Yorkshire region. We then compare these estimates of need with the patterns of hospitalisation rates seen across the region. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
An Overview of Materials Structures for Extreme Environments Efforts for 2015 SBIR Phases I and II
NASA Technical Reports Server (NTRS)
Nguyen, Hung D.; Steele, Gynelle C.
2017-01-01
Technological innovation is the overall focus of NASA's Small Business Innovation Research (SBIR) program. The program invests in the development of innovative concepts and technologies to help NASA's mission directorates address critical research and development needs for Agency projects. This report highlights innovative SBIR 2015 Phase I and II projects that specifically address areas in Materials and Structures for Extreme Environments, one of six core competencies at NASA Glenn Research Center. Each article describes an innovation, defines its technical objective, and highlights NASA applications as well as commercial and industrial applications. Ten technologies are featured: metamaterials-inspired aerospace structures, metallic joining to advanced ceramic composites, multifunctional polyolefin matrix composite structures, integrated reacting fluid dynamics and predictive materials degradation models for propulsion system conditions, lightweight inflatable structural airlock (LISA), copolymer materials for fused deposition modeling 3-D printing of nonstandard plastics, Type II strained layer superlattice materials development for space-based focal plane array applications, hydrogenous polymer-regolith composites for radiation-shielding materials, a ceramic matrix composite environmental barrier coating durability model, and advanced composite truss printing for large solar array structures. This report serves as an opportunity for NASA engineers, researchers, program managers, and other personnel to learn about innovations in this technology area as well as possibilities for collaboration with innovative small businesses that could benefit NASA programs and projects.
NASA Technical Reports Server (NTRS)
2009-01-01
Topics covered include: Improved Instrument for Detecting Water and Ice in Soil; Real-Time Detection of Dust Devils from Pressure Readings; Determining Surface Roughness in Urban Areas Using Lidar Data; DSN Data Visualization Suite; Hamming and Accumulator Codes Concatenated with MPSK or QAM; Wide-Angle-Scanning Reflectarray Antennas Actuated by MEMS; Biasable Subharmonic Membrane Mixer for 520 to 600 GHz; Hardware Implementation of Serially Concatenated PPM Decoder; Symbolic Processing Combined with Model-Based Reasoning; Presentation Extensions of the SOAP; Spreadsheets for Analyzing and Optimizing Space Missions; Processing Ocean Images to Detect Large Drift Nets; Alternative Packaging for Back-Illuminated Imagers; Diamond Machining of an Off-Axis Biconic Aspherical Mirror; Laser Ablation Increases PEM/Catalyst Interfacial Area; Damage Detection and Self-Repair in Inflatable/Deployable Structures; Polyimide/Glass Composite High-Temperature Insulation; Nanocomposite Strain Gauges Having Small TCRs; Quick-Connect Windowed Non-Stick Penetrator Tips for Rapid Sampling; Modeling Unsteady Cavitation and Dynamic Loads in Turbopumps; Continuous-Flow System Produces Medical-Grade Water; Discrimination of Spore-Forming Bacilli Using spoIVA; nBn Infrared Detector Containing Graded Absorption Layer; Atomic References for Measuring Small Accelerations; Ultra-Broad-Band Optical Parametric Amplifier or Oscillator; Particle-Image Velocimeter Having Large Depth of Field; Enhancing SERS by Means of Supramolecular Charge Transfer; Improving 3D Wavelet-Based Compression of Hyperspectral Images; Improved Signal Chains for Readout of CMOS Imagers; SOI CMOS Imager with Suppression of Cross-Talk; Error-Rate Bounds for Coded PPM on a Poisson Channel; Biomorphic Multi-Agent Architecture for Persistent Computing; and Using Covariance Analysis to Assess Pointing Performance.
Probability surveys of stream and river resources (hereafter referred to as streams) provide reliable estimates of stream condition when the areas for the estimates have sufficient number of sample sites. Monitoring programs are frequently asked to provide estimates for areas th...
2007-12-01
... prototypes an architectural design which is generalizable, reusable, and extensible. We have created an initial set of model elements that demonstrate ... and a Behavioral model. Finally, we build a small agent-based model using the component architecture to demonstrate the library's functionality.
LeDell, Erin; Petersen, Maya; van der Laan, Mark
In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
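The influence-curve idea can be illustrated with a closely related, simpler estimator: a DeLong-style variance for a single empirical AUC built from placement values. This is a sketch on synthetic scores, not the authors' cross-validated estimator.

    import numpy as np

    def auc_with_variance(scores, labels):
        """Empirical AUC with a DeLong-style (influence-function) variance estimate."""
        pos = scores[labels == 1]
        neg = scores[labels == 0]
        m, n = len(pos), len(neg)
        psi = (pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :])
        auc = psi.mean()
        v10 = psi.mean(axis=1)                         # placement values for positives
        v01 = psi.mean(axis=0)                         # placement values for negatives
        var = v10.var(ddof=1) / m + v01.var(ddof=1) / n
        return auc, var

    rng = np.random.default_rng(5)
    labels = rng.integers(0, 2, 2000)
    scores = labels * 0.8 + rng.normal(0, 1, 2000)     # synthetic classifier scores
    auc, var = auc_with_variance(scores, labels)
    print(f"AUC = {auc:.3f}, SE = {np.sqrt(var):.4f}")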
GIS-based identification of active lineaments within the Krasnokamensk Area, Transbaikalia, Russia
NASA Astrophysics Data System (ADS)
Petrov, V. A.; Lespinasse, M.; Ustinov, S. A.; Cialec, C.
2017-07-01
Lineament analysis was carried out using detailed digital elevation models (DEM) of the Krasnokamensk Area, southeastern Transbaikalia (Russia). The results of this research confirm the presence of already known faults, but also identify previously unknown fault zones. The primary focus was identifying small discontinuities and their relationship with extended fault zones. The developed technique allowed identification of active lineaments together with the orientation of their compression and extension axes in the horizontal plane, their direction of shear movement (right- or left-lateral), and their geodynamic setting of formation (compression or extension). The identification of active faults and the definition of their kinematics on digital elevation models were confirmed by measuring the velocities and directions of modern horizontal surface motions using geodesic GPS, as well as by identifying the principal stress axis directions of the modern stress field using present-day earthquake data. The obtained results are considered necessary for making rational environmental management decisions.
Fishery consequences of marine reserves: short-term pain for longer-term gain.
Hopf, Jess K; Jones, Geoffrey P; Williamson, David H; Connolly, Sean R
2016-04-01
Marine reserves are often established in areas that support fisheries. Larval export from reserves is argued to help compensate for the loss of fishable habitat; however, previous modeling studies have focused on long-term equilibrium outcomes. We examined the transient consequences of reserve establishment for fished metapopulations, considering both a well-mixed larval pool and a spatially explicit model based on a coral trout (Plectropomus spp.) metapopulation. When fishing pressure was reallocated relative to the area protected, yields decreased initially, then recovered, and ultimately exceeded pre-reserve levels. However, recovery time was on the order of several years to decades. If fishing pressure intensified to maintain pre-reserve yields, reserves were sometimes unable to support the increased mortality and the metapopulation collapsed. This was more likely when reserves were small, or located peripherally within the metapopulation. Overall, reserves can achieve positive conservation and fishery benefits, but fisheries management complementary to reserve implementation is essential.
Bernard, Pierre-Yves; Benoît, Marc; Roger-Estrade, Jean; Plantureux, Sylvain
2016-12-01
The objectives of this comparison of two biophysical models of nitrogen losses were to evaluate, first, whether the results were similar and, second, whether both were equally practical for use by non-scientist users. Results were obtained with the crop model STICS and the environmental model AGRIFLUX, based on nitrogen loss simulations across a small groundwater catchment area (<1 km2) located in the Lorraine region in France. Both models simulate the influences of leaching and cropping systems on nitrogen losses in a relevant manner. The authors conclude that limiting the simulations to areas where soils with a greater risk of leaching cover a significant spatial extent would likely yield acceptable results, because those soils have more predictable nitrogen leaching. In addition, the choice of an environmental model such as AGRIFLUX, which requires fewer parameters and input variables, seems more user-friendly for agro-environmental assessment. The authors then discuss additional challenges for non-scientists, such as the lack of parameter optimization, which is essential for accurately assessing nitrogen fluxes and, indirectly, for not limiting the diversity of uses of the simulated results. Despite current restrictions, with some improvement, biophysical models could become useful environmental assessment tools for non-scientists. Copyright © 2016 Elsevier Ltd. All rights reserved.
Robustness of disaggregate oil and gas discovery forecasting models
Attanasi, E.D.; Schuenemeyer, J.H.
1989-01-01
The trend in forecasting oil and gas discoveries has been to develop and use models that allow forecasts of the size distribution of future discoveries. From such forecasts, exploration and development costs can more readily be computed. Two classes of these forecasting models are the Arps-Roberts type models and the 'creaming method' models. This paper examines the robustness of the forecasts made by these models when the historical data on which the models are based have been subject to economic upheavals or when historical discovery data are aggregated from areas having widely differing economic structures. Model performance is examined in the context of forecasting discoveries for offshore Texas State and Federal areas. The analysis shows how the model forecasts are limited by information contained in the historical discovery data. Because the Arps-Roberts type models require more regularity in discovery sequence than the creaming models, prior information had to be introduced into the Arps-Roberts models to accommodate the influence of economic changes. The creaming methods captured the overall decline in discovery size but did not easily allow introduction of exogenous information to compensate for incomplete historical data. Moreover, the predictive log normal distribution associated with the creaming model methods appears to understate the importance of the potential contribution of small fields. © 1989.
NASA Astrophysics Data System (ADS)
Dupuy, Stéphane; Lainé, Gérard; Tassin, Jacques; Sarrailh, Jean-Michel
2013-12-01
This article's goal is to explore the benefits of using a Digital Surface Model (DSM) and Digital Terrain Model (DTM) derived from LiDAR acquisitions for characterizing the horizontal structure of different facies in forested areas (primary forests vs. secondary forests) within the framework of an object-oriented classification. The area under study is the island of Mayotte in the western Indian Ocean. The LiDAR data were originally acquired by an airborne small-footprint discrete-return LiDAR for the "Litto3D" coastline mapping project. They were used to create a Digital Elevation Model (DEM) at a spatial resolution of 1 m and a Digital Canopy Model (DCM) using median filtering. The use of two successive segmentations at different scales allowed us to adjust the segmentation parameters to the local structure of the landscape and of the cover. Working in object-oriented mode with LiDAR allowed us to discriminate six vegetation classes based on canopy height and horizontal heterogeneity. This heterogeneity was assessed using a texture index calculated from the height-transition co-occurrence matrix. Overall accuracy exceeds 90%. The resulting product is the first vegetation map of Mayotte that emphasizes structure over composition.
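Two steps of this workflow can be sketched on synthetic rasters: the Digital Canopy Model as the difference between the surface and terrain models, and a crude height-transition heterogeneity index per window standing in for the co-occurrence-based texture index (the class breaks and window size are assumptions, not the study's settings).

    import numpy as np

    rng = np.random.default_rng(11)
    dtm = np.add.outer(np.linspace(0, 20, 200), np.linspace(0, 10, 200))   # synthetic terrain (m)
    canopy = rng.gamma(2.0, 4.0, dtm.shape)                                # synthetic vegetation heights (m)
    dsm = dtm + canopy
    dcm = dsm - dtm                                                        # Digital Canopy Model at 1 m resolution

    # classify canopy heights, then measure how often neighbouring pixels change class
    classes = np.digitize(dcm, bins=[2, 5, 10, 20])

    def transition_heterogeneity(block):
        """Fraction of horizontal/vertical neighbour pairs with different height classes
        (0 = homogeneous canopy, 1 = every neighbour pair differs)."""
        right = (block[:, :-1] != block[:, 1:]).mean()
        down = (block[:-1, :] != block[1:, :]).mean()
        return 0.5 * (right + down)

    w = 25
    texture = np.array([[transition_heterogeneity(classes[i:i + w, j:j + w])
                         for j in range(0, classes.shape[1] - w, w)]
                        for i in range(0, classes.shape[0] - w, w)])
    print(texture.round(2))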
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malhotra, Mini; New, Joshua Ryan; Im, Piljae
As part of DOE's support of ANSI/ASHRAE/IES Standard 90.1 and the IECC, researchers at Pacific Northwest National Laboratory (PNNL) apply a suite of prototype buildings covering 80% of the commercial building floor area in the U.S. for new construction. Efforts have started on expanding the prototype building suite to cover 90% of the commercial building floor area in the U.S. by developing prototype models for additional building types, including place of worship, public order and safety, and public assembly. Courthouse is a sub-category under the "Public Order and Safety" building type category; other sub-categories include police station, fire station, and jail, reformatory or penitentiary. ORNL used building design guides, databases, and documented courthouse projects, supplemented by personal communication with courthouse facility planning and design experts, to systematically conduct research on courthouse building and system characteristics. This report documents the research conducted for the courthouse building type and proposes building and system characteristics for developing a prototype building energy model to be included in the Commercial Building Prototype Model suite. According to the 2012 CBECS, courthouses occupy a total of 436 million sqft of floor space, or 0.5% of the total floor space in all commercial buildings in the US, next to the fast food (0.35%), grocery store or food market (0.88%), and restaurant or cafeteria (1.2%) building types currently included in the Commercial Prototype Building Model suite. Considering aggregated averages, courthouses fall among the larger building types, with a mean floor area of 69,400 sqft, and have an average fuel consumption intensity of 94.7 kBtu/sqft, compared to 77.8 kBtu/sqft for offices and 80 kBtu/sqft for all commercial buildings. Courthouses range in size from 1,000 sqft to over a million sqft of gross floor area, and from 1 courtroom to over 100 courtrooms. Small courthouses represent a majority of courthouse buildings; however, collectively they comprise a small fraction of total courthouse floor area in the US. Spaces and operation of courthouses also vary depending on the court type (federal court vs. state court; district, appellate, or supreme court) and jurisdiction (general jurisdiction, general jurisdiction trial, or special courts). Based on these statistics, a general jurisdiction trial court is considered for the prototype model. The model is assumed to be a small, 4-courtroom, 72,000 sqft, three-story building including a ground level/basement.
Taniguchi, Mizuki; Kajioka, Shunichi; Shozib, Habibul B.; Sawamura, Kenta; Nakayama, Shinsuke
2013-01-01
Smooth and elaborate gut motility is based on cellular cooperation among smooth muscle, enteric neurons and special interstitial cells acting as pacemaker cells. Therefore, spatial characterization of electric activity in tissues containing these electrically excitable cells is required for a precise understanding of gut motility. Furthermore, tools to evaluate spatial electric activity in a small area would be useful for the investigation of model animals. We thus employed a microelectrode array (MEA) system to simultaneously measure a set of 8×8 field potentials in a square area of ∼1 mm2. The size of each recording electrode was 50×50 µm2; however, the effective surface area was increased by fixing platinum black particles. The impedance of the microelectrodes was sufficiently low to apply a high-pass filter of 0.1 Hz. Mapping of spectral power, auto-correlation and cross-correlation parameters characterized the spatial properties of spontaneous electric activity in the ileum of wild-type (WT) and W/Wv mice, the latter serving as a model of an impaired network of pacemaking interstitial cells. The measured electric activities varied in both magnitude and cooperativity in W/Wv mice, despite the small area. In the ileum of WT mice, procedures suppressing the excitability of smooth muscle and neurons altered the propagation of spontaneous electric activity, but caused little change in the period of oscillations. In conclusion, MEA with low-impedance electrodes enables measurement of slowly oscillating electric activity and is useful for evaluating both histological and functional changes in the spatio-temporal properties of gut electric activity.
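One way to picture the cross-correlation mapping is the sketch below, which computes the maximum normalised cross-correlation between a reference channel and every channel of a synthetic 8×8 array; the sampling rate, oscillation frequency and noise level are assumed and do not correspond to the recordings in the study.

    import numpy as np

    rng = np.random.default_rng(2)
    fs = 20.0                                            # assumed sampling rate (Hz)
    t = np.arange(0, 60, 1 / fs)                         # 60 s record
    base = np.sin(2 * np.pi * 0.5 * t)                   # ~30 cycles/min slow-wave-like signal
    signals = np.array([base * (0.5 + rng.random()) + rng.normal(0, 0.3, t.size)
                        for _ in range(64)])             # 8x8 = 64 channels

    def max_xcorr(a, b, max_lag=40):
        """Maximum zero-normalised cross-correlation over lags of +/- max_lag samples."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return max(np.dot(a[max(0, k):len(a) + min(0, k)],
                          b[max(0, -k):len(b) + min(0, -k)]) / len(a)
                   for k in range(-max_lag, max_lag + 1))

    ref = 27                                             # correlate every channel with channel 27
    corr_map = np.array([max_xcorr(signals[ref], s) for s in signals]).reshape(8, 8)
    print(corr_map.round(2))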
NASA Astrophysics Data System (ADS)
Ohkura, Hiroshi
Full polarimetric SAR images from ALOS PALSAR of Shinmoe-dake volcano in Japan were analyzed. The volcano erupted in January 2011, and volcanic ash was deposited to a depth of more than 10 cm over 12 km2 and more than 1 m over 2 km2. Two images, before and after the eruption, were compared from the point of view of the four-component scattering model to detect changes in the polarimetric scattering characteristics. The main detected changes are as follows. The total power of the four-component scattering model decreased on far slopes after the eruption. The incidence angle on a far slope is larger than that on a fore slope, and the decrease in surface roughness due to the deposited volcanic ash reduces backscattering in areas with larger incidence angles. However, the proportion of the double-bounce component increased in forests at the foot of the mountain slope and on the plain, where the ground surface is almost horizontal and the incidence angle is relatively large. Decreased roughness of the forest floor increases forward scattering at larger incidence angles, which in turn increases double-bounce scattering between the forest floor and the trunks standing nearly perpendicular on the almost horizontal floor. The proportion of the surface scattering component increased around areas where layover occurred. In the study area, most layover occurred at ridges where the incidence angle was small; there, the decrease in surface roughness due to the ash deposit increases the surface scattering power.
NASA Astrophysics Data System (ADS)
Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire
2017-04-01
Nowadays, there is a growing interest in small-scale rainfall information, provided by weather radars, for use in urban water management and decision-making. In parallel, increasing attention is devoted to the development of fully distributed, grid-based models, following the increase in computation capabilities and the availability of the high-resolution GIS information needed to implement such models. However, the choice of an appropriate implementation scale that integrates both the catchment heterogeneity and the full rainfall variability provided by high-resolution radar technologies remains an open issue. This work proposes a two-step investigation of scale effects in urban hydrology and of their consequences for modeling. In the first step, fractal tools are used to highlight the scale dependency observed within the distributed data used to describe catchment heterogeneity; both the structure of the sewer network and the distribution of impervious areas are analyzed. Then, an intensive multi-scale modeling exercise is carried out to understand scaling effects on hydrological model performance. Investigations were conducted using a fully distributed, physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model was implemented at 17 spatial resolutions ranging from 100 m to 5 m, and modeling investigations were performed using both rain gauge rainfall information and high-resolution X-band radar data in order to assess the sensitivity of the model to small-scale rainfall variability. The results demonstrate the challenges posed by scale effects in urban hydrological modeling. The fractal analysis highlights the scale dependency observed within the distributed data used to implement hydrological models: patterns of geophysical data change when the observation pixel size changes. The multi-scale modeling investigation performed with the Multi-Hydro model at 17 spatial resolutions confirms the effect of scale on hydrological model performance. Results were analyzed at three ranges of scales identified in the fractal analysis and confirmed in the modeling work. The sensitivity of the model to small-scale rainfall variability is discussed as well.
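The fractal analysis of the distributed data can be illustrated with a standard box-counting sketch on a synthetic binary impervious-area raster (the raster, pixel size and imperviousness fraction are assumptions, not the Multi-Hydro input data).

    import numpy as np

    rng = np.random.default_rng(9)
    raster = rng.random((512, 512)) < 0.15                # binary impervious mask (synthetic)

    sizes = [1, 2, 4, 8, 16, 32, 64]
    counts = []
    for s in sizes:
        trimmed = raster[:512 - 512 % s, :512 - 512 % s]
        blocks = trimmed.reshape(512 // s, s, 512 // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())      # boxes containing impervious pixels

    # N(s) ~ s^(-D), so the slope of log N vs log s gives -D
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    print(f"estimated fractal dimension ~ {-slope:.2f}")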
ERIC Educational Resources Information Center
Brooks, Lara; Whitacre, Brian; Shideler, Dave; Muske, Glenn; Woods, Mike
2012-01-01
Small and home-based businesses have long been identified by Extension educators as an important component of economic development, particularly in rural areas. The services available to these businesses can take many forms, including management training, accessibility of local funding, providing incubation facilities, or setting up mentoring…
Focks, Andreas; Belgers, Dick; van der Steen, Jozef J.M.; Boesten, Jos J.T.I.; Roessink, Ivo
2016-01-01
Estimating the exposure of honeybees to pesticides on a landscape scale requires models of their spatial foraging behaviour. For this purpose, we developed a mechanistic, energetics-based model for a single day of nectar foraging in complex landscape mosaics. Net energetic efficiency determined resource patch choice. In one version of the model a single optimal patch was selected each hour. In another version, recruitment of foragers was simulated and several patches could be exploited simultaneously. Resource availability changed during the day due to depletion and/or intrinsic properties of the resource (anthesis). The model accounted for the impact of patch distance and size, resource depletion and replenishment, competition with other nectar foragers, and seasonal and diurnal patterns in the availability of nectar-providing crops and wild flowers. From the model we derived simple rules for resource patch selection; for example, for landscapes with mass-flowering crops only, net energetic efficiency would be proportional to the energetic content of the nectar divided by the distance to the hive. We also determined maximum distances at which resources like oilseed rape and clover were still energetically attractive. We used the model to assess the potential for pesticide exposure dilution in landscapes of different composition and complexity. Dilution means a lower concentration in nectar arriving at the hive compared with the concentration in nectar at a treated field, and can result from foraging effort being diverted away from treated fields. Applying the model for all possible hive locations over a large area, distributions of dilution factors were obtained and characterised by their 90th-percentile value. For an area for which detailed spatial data on crops and off-field semi-natural habitats were available, we tested three landscape management scenarios that were expected to lead to exposure dilution: providing resources other than the target crop (oilseed rape) in the form of (i) other untreated crop fields, (ii) flower strips of different widths at field edges (off-crop in-field resources), and (iii) resources on off-field (semi-natural) habitats. For both model versions, significant dilution occurred only when alternative resource patches were equally or more attractive than oilseed rape, nearby and numerous, and only in the case of flower strips and off-field habitats. On an area basis, flower strips were more than one order of magnitude more effective than off-field habitats, the main reason being that flower strips had an optimal location. The two model versions differed in the predicted number of resource patches exploited over the day, but mainly in landscapes with numerous small resource patches. In landscapes consisting of a few large resource patches (crop fields) both versions predicted the use of a small number of patches.
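The patch-selection rule and the dilution factor can be illustrated with a small sketch; the patch list, distances, nectar energy contents and the attractiveness-proportional split of foraging effort are hypothetical assumptions rather than the model's actual parameterisation.

    # Hypothetical resource patches: (name, distance to hive in m, nectar energy content, treated?)
    patches = [
        ("oilseed rape field", 600.0, 5.0, True),
        ("flower strip",       150.0, 4.5, False),
        ("clover off-field",  1200.0, 6.0, False),
    ]

    # simple rule from the abstract: attractiveness ~ nectar energy content / distance
    scored = sorted(patches, key=lambda p: p[2] / p[1], reverse=True)
    print("patch ranking by energy/distance:", [p[0] for p in scored])

    # dilution factor (assumed definition): concentration at hive / concentration at the
    # treated field, if foraged nectar splits in proportion to each patch's attractiveness
    weights = [p[2] / p[1] for p in patches]
    share_treated = sum(w for p, w in zip(patches, weights) if p[3]) / sum(weights)
    print(f"dilution factor ~ {share_treated:.2f}")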
A Systems Biology Approach to Small Fish Ecotoxicogenomics
At present, a large and integrated effort in the area of ecotoxicogenomics within USEPA's Office of Research and Development is a research program titled "Linkage of exposure and effects using genomics, proteomics, and metabolomics in small fish models." The effort involves coll...
Small-Area Estimation of Spatial Access to Care and Its Implications for Policy.
Gentili, Monica; Isett, Kim; Serban, Nicoleta; Swann, Julie
2015-10-01
Local or small-area estimates that capture emerging trends across large geographic regions are critical for identifying and addressing community-level health interventions. However, they are often unavailable due to a lack of analytic capability in compiling and integrating extensive datasets and complementing them with knowledge about variations in state-level health policies. This study introduces a modeling approach for small-area estimation of spatial access to pediatric primary care that is data-rich and mathematically rigorous, integrating data and health policy in a systematic way. We illustrate the sensitivity of the model to policy decision making across large geographic regions by performing a systematic comparison of the estimates at the census tract and county levels for Georgia and California. Our results show the proposed approach is able to overcome limitations of other existing models by capturing patient and provider preferences and by incorporating possible changes in health policies. The primary finding is a systematic underestimation of spatial access, and inaccurate estimates of disparities across populations and geography, at the county level relative to the census-tract level, with implications for where to focus and which types of interventions to consider.
Balderama, Orlando F
2010-01-01
An integrated computer program called the Cropping System and Water Management Model (CSWM), with a three-step structure (expert system, simulation, optimisation), was developed to provide a range of decision support for rainfed farming, i.e. crop selection, scheduling, and optimisation. The system was used for agricultural planning with an emphasis on sustainable agriculture in rainfed areas, through the use of small farm reservoirs for increased production and for resource conservation and management. The model was applied using crop, soil, climate, and water resource data from the Philippines. Four sets of data representing the different rainfall classifications of the country were collected, analysed, and used as input to the model. Simulations were also performed for different planting dates, probabilities of wet and dry periods, and various capacities of the water reservoir used for supplemental irrigation. Through the analysis, useful information was obtained on suitable crops for the region and on cropping schedules and patterns appropriate to the specific climate conditions. In addition, optimisation of the use of land and water resources can be achieved in areas partly irrigated by small reservoirs.
will be useful, for example, in providing initial and/or lateral boundary conditions for regional reforecasts with various limited-area models. To access a subset of model output, for example a small number
Testing of The Harp Guidelines On A Small Watershed In Finland
NASA Astrophysics Data System (ADS)
Granlund, K.; Rekolainen, S.
TESTING OF THE HARP GUIDELINES ON A SMALL WATERSHED IN FINLAND. K. Granlund, S. Rekolainen, Finnish Environment Institute, Research Department, kirsti.granlund@vyh.fi. Watersheds have emerged as environmental units for assessing, controlling and reducing non-point-source pollution. Within the framework of international conventions such as OSPARCOM and HELCOM, and in the implementation of the EU Water Framework Directive, the criteria for model selection are of key importance. The Harmonized Quantification and Reporting Procedures for Nutrients (HARP) aim at helping the implementation of the OSPAR (Convention for the Protection of the Marine Environment of the North-East Atlantic) strategy of controlling eutrophication and reducing nutrient input to marine ecosystems by 50%. The guidelines quantify nitrogen and phosphorus losses from both point and nonpoint sources and help assess the effectiveness of the pollution reduction strategy. The HARP guidelines related respectively to the "Quantification of Nitrogen and Phosphorus Losses from Diffuse Anthropogenic Sources and Natural Background Losses" and to the "Quantification and Reporting of the Retention of Nitrogen and Phosphorus in River Catchments" were tested on a small, well instrumented agricultural watershed in Finland. The project was coordinated by the Environment Institute of the Joint Research Centre. Three types of methodologies for estimating nutrient losses to watercourses were evaluated during the project. Simple methods based on regression equations or loading functions provide a quick way of estimating nutrient losses; through these methods the pollutant load can be related to parameters such as slope, soil type, land use, and management practices. Relevant nutrient loading functions for the study catchment were collected during the project. One mid-range model was applied to simulate the nitrogen cycle in a simplified manner in relation to climate, soil properties, land use and management practices. Physically based models describe in detail the water and nutrient cycle within the watershed; the ICECREAM and SWAT models were applied to the study watershed. ICECREAM is a management model based on the CREAMS model for predicting field-scale runoff and erosion; its nitrogen and phosphorus submodels are based on the GLEAMS model. SWAT is a continuous-time, spatially distributed model that includes hydrological, sediment and chemical processes in river basins. The simple methods and the mid-range model for nitrogen proved to be fast and easy to apply, but due to limited information on crop-specific loading functions and nitrogen process rates (e.g. mineralisation in soil), only order-of-magnitude estimates for nutrient loads could be calculated. The ICECREAM model was used to estimate crop-specific nutrient losses from the agricultural area. The potential annual nutrient loads for the whole catchment were then calculated by including estimates for nutrient loads from other land-use classes (forested area and scattered settlement). Finally, calibration of the SWAT model was started to study in detail the effects of catchment characteristics on nutrient losses. The preliminary results of model testing are presented and the suitability of different methodologies for estimating nutrient losses in Finnish catchments is discussed.
Carbon Storage in Urban Areas in the USA
NASA Astrophysics Data System (ADS)
Churkina, G.; Brown, D.; Keoleian, G.
2007-12-01
It is widely accepted that human settlements occupy a small proportion of the landmass and therefore play a relatively small role in the dynamics of the global carbon cycle. Most modeling studies focusing on the land carbon cycle use models of varying complexity to estimate carbon fluxes through forests, grasses, and croplands, but completely omit urban areas from their scope. Here, we estimate carbon storage in urban areas within the United States, defined to encompass a range of observed settlement densities, and its changes from 1950 to 2000. We show that this storage is not negligible and has been continuously increasing. We include natural- and human-related components of urban areas in our estimates. The natural component includes carbon storage in urban soil and vegetation. The human related component encompasses carbon stored long term in buildings, furniture, cars, and waste. The study suggests that urban areas should receive continued attention in efforts to accurately account for carbon uptake and storage in terrestrial systems.
Raffensperger, Jeff P.; Fleming, Brandon J.; Banks, William S.L.; Horn, Marilee A.; Nardi, Mark R.; Andreasen, David C.
2010-01-01
Increased groundwater withdrawals from confined aquifers in the Maryland Coastal Plain to supply anticipated growth at Fort George G. Meade (Fort Meade) and surrounding areas, resulting from the Department of Defense Base Realignment and Closure Program, may have adverse effects in the outcrop or near-outcrop areas. Specifically, increased pumping from the Potomac Group aquifers (principally the Patuxent aquifer) could potentially reduce base flow in small streams below rates necessary for healthy biological functioning. Additionally, water levels may be lowered near, or possibly below, the top of the aquifer within the confined-unconfined transition zone near the outcrop area. A three-dimensional groundwater flow model was created to incorporate and analyze data on water withdrawals, streamflow, and hydraulic head in the region. The model is based on an earlier model developed to assess the effects of future withdrawals from well fields in Anne Arundel County, Maryland, and surrounding areas, and includes some of the same features, including model extent, boundary conditions, and vertical discretization (layering). The resolution (horizontal grid discretization) of the earlier model limited its ability to simulate the effects of withdrawals on the outcrop and near-outcrop areas. The model developed for this study included a block-shaped higher-resolution local grid, referred to as the child model, centered on Fort Meade, which was coupled to the coarser-grid parent model using the shared-node Local Grid Refinement capability of MODFLOW-LGR. A more detailed stream network was incorporated into the child model. In addition, for part of the transient simulation period, stress periods were reduced in length from 1 year to 3 months, to allow for simulation of the effects of seasonally varying withdrawals and recharge on the groundwater-flow system and simulated streamflow. This required revision of the database on withdrawals and estimation of seasonal variations in recharge represented in the earlier model. The calibrated model provides a tool for future forecasts of changes in the system under different management scenarios, and for simulating potential effects of withdrawals at Fort Meade and the surrounding area on water levels in the near-outcrop area and base flow in the outcrop area. Model error was assessed by comparing observed and simulated water levels from 62 wells (55 in the parent model and 7 in the child model). The root-mean-square error values for the parent and child model were 8.72 and 11.91 feet, respectively. Root-mean-square error values for the 55 parent model observation wells range from 0.95 to 30.31 feet; the range for the 7 child model observation wells is 5.00 to 24.17 feet. Many of the wells with higher root-mean-square error values occur at the perimeter of the child model and near large pumping centers, as well as updip in the confined aquifers. Root-mean-square error values decrease downdip and away from the large pumping centers. Both the parent and child models are sensitive to increasing withdrawal rates. The parent model is more sensitive than the child model to decreasing transmissivity of layers 3, 4, 5, and 6. The parent model is relatively insensitive to riverbed vertical conductance; however, the child model does exhibit some sensitivity to decreasing riverbed conductance.
The overall water budget for the model included sources and sinks of water including recharge, surface-water bodies and rivers and streams, general-head boundaries, and withdrawals from permitted wells. Withdrawal from wells in 2005 was estimated to be equivalent to 8.5 percent of the total recharge rate.
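As a side note, the root-mean-square error statistic used above to compare observed and simulated water levels is straightforward to reproduce. The well readings in this sketch are hypothetical and only illustrate the calculation, not the study's data.

```python
import numpy as np

def rmse(observed, simulated):
    """Root-mean-square error between observed and simulated water levels (feet)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return float(np.sqrt(np.mean((observed - simulated) ** 2)))

# Hypothetical heads for a handful of child-model observation wells.
obs = [112.3, 98.7, 105.1, 120.4, 101.9, 97.2, 110.0]
sim = [118.0, 92.5, 110.3, 128.9, 95.4, 103.0, 104.6]
print(round(rmse(obs, sim), 2), "feet")
```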
Development of the monitoring system to detect the piping thickness reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, N. Y.; Ryu, K. H.; Oh, Y. J.
2006-07-01
As nuclear piping ages, secondary piping that was once considered safe now experiences wall-thickness reduction. After several accidents caused by Flow Accelerated Corrosion (FAC), guidelines and recommendations for thinned-pipe management were issued, and the need for monitoring has increased. Under thinned-pipe management programs, monitoring activities based on various analyses and on case studies from other plants also increase. As the number of monitoring points increases, the time needed to cover the recommended inspection area grows, while the time available to inspect the piping during an overhaul becomes shorter. The existing Ultrasonic Technique (UT) can cover only a small area in a given time. Moreover, it cannot be applied to piping with complex geometry or to certain locations such as welded parts. In this paper, we suggest a Switching Direct Current Potential Drop (S-DCPD) method with which the FAC-susceptible area can be narrowed down. To apply DCPD, we developed both a resistance model and a Finite Element Method (FEM) model to predict the feasibility of DCPD. We tested an elbow specimen to compare DCPD monitoring results with UT results and to check their consistency. For the validation test, we designed a simulation loop. To determine the test conditions, we analyzed environmental parameters and introduced an applicable wear-rate model. To obtain the model parameters, we developed electrodes and analyzed the velocity profile in the test loop using the CFX code. Based on the prediction model and prototype testing results, we are planning to perform a validation test to identify the applicability of S-DCPD in the NPP environment. The validation test plan is described as future work. (authors)
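For orientation, DCPD-based wall-thickness monitoring rests on the fact that, at constant injected current, the measured potential drop rises as the conducting cross-section thins. The sketch below uses a simple one-dimensional resistance approximation with invented readings; the paper itself relies on resistance and FEM models for real geometries, so this is illustrative only.

```python
def estimated_thickness(t_ref_mm, v_ref, v_now):
    """One-dimensional resistance approximation for DCPD monitoring:
    with constant injected current and fixed probe spacing, the potential
    drop scales inversely with the remaining wall thickness, so
    t ~ t_ref * (V_ref / V_now).  Real pipe geometries need FEM-based
    calibration, as described in the paper."""
    return t_ref_mm * v_ref / v_now

# Hypothetical readings: reference potential 1.00 mV on an 8.0 mm wall,
# current reading 1.12 mV after some wall loss.
print(round(estimated_thickness(8.0, 1.00, 1.12), 2), "mm")
```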
Vieira, D C S; Serpa, D; Nunes, J P C; Prats, S A; Neves, R; Keizer, J J
2018-08-01
Wildfires have become a recurrent threat for many Mediterranean forest ecosystems. The characteristics of the Mediterranean climate, with its warm and dry summers and mild and wet winters, make this a region prone to wildfire occurrence as well as to post-fire soil erosion. This threat is expected to be aggravated in the future due to climate change and land management practices and planning. The wide recognition of wildfires as a driver for runoff and erosion in burnt forest areas has created a strong demand for model-based tools for predicting the post-fire hydrological and erosion response and, in particular, for predicting the effectiveness of post-fire management operations to mitigate these responses. In this study, the effectiveness of two post-fire treatments (hydromulch and natural pine needle mulch) in reducing post-fire runoff and soil erosion was evaluated against control conditions (i.e. untreated conditions), at different spatial scales. The main objective of this study was to use field data to evaluate the ability of different erosion models: (i) empirical (RUSLE), (ii) semi-empirical (MMF), and (iii) physically-based (PESERA), to predict the hydrological and erosive response as well as the effectiveness of different mulching techniques in fire-affected areas. The results of this study showed that all three models were reasonably able to reproduce the hydrological and erosive processes occurring in burned forest areas. In addition, it was demonstrated that the models can be calibrated at a small spatial scale (0.5 m2) but provide accurate results at greater spatial scales (10 m2). From this work, the RUSLE model seems to be ideal for fast and simple applications (i.e. prioritization of areas-at-risk) mainly due to its simplicity and reduced data requirements. On the other hand, the more complex MMF and PESERA models would be valuable as a base of a possible tool for assessing the risk of water contamination in fire-affected water bodies and for testing different land management scenarios. Copyright © 2018 Elsevier Inc. All rights reserved.
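Of the three models compared above, RUSLE is the simplest: average annual soil loss is the product of five factors. The sketch below shows that calculation with illustrative factor values that are not taken from the study.

```python
def rusle_soil_loss(R, K, LS, C, P):
    """RUSLE average annual soil loss A = R * K * LS * C * P.
    R: rainfall-runoff erosivity, K: soil erodibility, LS: slope
    length/steepness factor, C: cover-management factor, P: support
    practice factor.  Units depend on the factor system used."""
    return R * K * LS * C * P

# Illustrative post-fire values (not from the study): high C after the
# burn, reduced C where mulch was applied.
print(rusle_soil_loss(R=900.0, K=0.03, LS=2.5, C=0.20, P=1.0))   # untreated
print(rusle_soil_loss(R=900.0, K=0.03, LS=2.5, C=0.05, P=1.0))   # mulched
```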
Yuan, Chengcheng; Liu, Liming; Ye, Jinwei; Ren, Guoping; Zhuo, Dong; Qi, Xiaoxing
2017-05-01
Water pollution caused by anthropogenic activities and driven by changes in rural livelihood strategies in an agricultural system has received increasing attention in recent decades. To simulate the effects of rural household livelihood transition on non-point source (NPS) pollution, a model combining an agent-based model (ABM) and an improved export coefficient model (IECM) was developed. The ABM was adopted to simulate the dynamic process of household livelihood transition, and the IECM was employed to estimate the effects of household livelihood transition on NPS pollution. The coupled model was tested in a small catchment in the Dongting Lake region, China. The simulated results reveal that the transition of household livelihood strategies occurred with the changes in the prices of rice, pig, and labor. Thus, the cropping system, land-use intensity, resident population, and number of pigs changed in the small catchment from 2000 to 2014. As a result of these changes, the total nitrogen load discharged into the river initially increased from 6841.0 kg in 2000 to 8446.3 kg in 2004 and then decreased to 6063.9 kg in 2014. Results also suggest that rural living, livestock, paddy field, and precipitation alternately became the main causes of NPS pollution in the small catchment, and the midstream region of the small catchment was the primary area for NPS pollution from 2000 to 2014. Despite some limitations, the coupled model provides an innovative way to simulate the effects of rural household livelihood transition on NPS pollution with the change of socioeconomic factors, and thereby identify the key factors influencing water pollution to provide valuable suggestions on how agricultural environmental risks can be reduced through the regulation of the behaviors of farming households in the future.
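The export-coefficient side of the coupled model can be illustrated with the classical formulation, in which the catchment load is the sum of land-use areas, livestock numbers, and population, each multiplied by an export coefficient. The coefficients and areas below are assumptions for illustration, not values from the Dongting Lake catchment or from the paper's improved model.

```python
def export_coefficient_load(land_use_areas, livestock_counts, population,
                            coef_land, coef_livestock, coef_person):
    """Classical export-coefficient estimate of an annual nutrient load:
    L = sum_i E_i * A_i  +  sum_j E_j * N_j  +  E_p * population.
    Coefficients (kg per ha, per head, per person) are illustrative."""
    load = sum(coef_land[u] * a for u, a in land_use_areas.items())
    load += sum(coef_livestock[s] * n for s, n in livestock_counts.items())
    load += coef_person * population
    return load

load_kg = export_coefficient_load(
    land_use_areas={"paddy": 120.0, "forest": 300.0},      # hectares
    livestock_counts={"pig": 450},                          # head
    population=800,
    coef_land={"paddy": 9.0, "forest": 2.4},                # kg N / ha / yr
    coef_livestock={"pig": 6.5},                            # kg N / head / yr
    coef_person=2.0)                                        # kg N / person / yr
print(round(load_kg, 1), "kg N per year")
```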
Analysis of Darwin Rainfall Data: Implications on Sampling Strategy
NASA Technical Reports Server (NTRS)
Li, Qihang; Bras, Rafael L.; Veneziano, Daniele
1996-01-01
Rainfall data collected by radar in the vicinity of Darwin, Australia, have been analyzed in terms of their mean, variance, autocorrelation of area-averaged rain rate, and diurnal variation. It is found that, when compared with the well-studied GATE (Global Atmospheric Research Program Atlantic Tropical Experiment) data, Darwin rainfall has a larger coefficient of variation (CV), a faster reduction of CV with increasing area size, weaker temporal correlation, and a strong diurnal cycle and intermittence. The coefficient of variation for Darwin rainfall has larger magnitude and exhibits larger spatial variability over the sea portion than over the land portion within the area of radar coverage. Stationary and nonstationary models have been used to study the sampling errors associated with space-based rainfall measurement. The nonstationary model shows that the sampling error is sensitive to the starting sampling time for some sampling frequencies, due to the diurnal cycle of rain, but not for others. Sampling experiments using data also show such sensitivity. When the errors are averaged over starting time, the results of the experiments and the stationary and nonstationary models match each other very closely. In the small areas for which data are available for both Darwin and GATE, the sampling error is expected to be larger for Darwin due to its larger CV.
Using CubeSats to Monitor Debris Flux
NASA Technical Reports Server (NTRS)
Matney, Mark
2016-01-01
Recent updates to NASA's Orbital Debris Engineering Model (ORDEM 3.0) include a population of small particles (1-2 mm in size) composed of high-density materials (e.g., steel) that drive much of the predicted risk for satellites in the 700-1000 km altitude regime. This modeled population was based on the analysis of returned surfaces of the Shuttle, which flew below 600 km altitude. The cessation of Shuttle missions, plus the lack of in situ data above 600 km means that a data source is being sought to either confirm or modify this high-density population. One possible data source would be a database of anomalous sporadic changes in spacecraft orbit/orientation that might be due to momentum transfer from small particles too small to seriously damage the spacecraft. Because the momentum imparted from an impact would be tiny, it would most likely show up in the orbital behavior of cubesats and other small satellites. While such small satellites were few in number, this was not a particularly attractive option, but now with the proliferation of cubesats in multiple orbit planes and altitudes, the possible collecting area has increased significantly. This presentation will discuss the physics of momentum-transferring impacts from hypervelocity collisions, and make predictions about rates, directions, and locations of such impacts. In addition, it will include recommendations for satellite users on what kind of data might be worth archiving and investigating.
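The momentum-transfer physics referred to above reduces, in the simplest case, to dividing the particle's (possibly ejecta-enhanced) momentum by the satellite's mass. The sketch below uses assumed particle, speed, and CubeSat values purely for illustration.

```python
def delta_v_from_impact(particle_mass_kg, impact_speed_m_s,
                        satellite_mass_kg, beta=1.0):
    """Velocity change of a satellite from a small hypervelocity impact,
    treated as a momentum transfer: dv = beta * m * v / M.
    beta > 1 allows for momentum enhancement by ejecta; all values used
    below are illustrative assumptions, not ORDEM outputs."""
    return beta * particle_mass_kg * impact_speed_m_s / satellite_mass_kg

# A ~1.5 mm steel sphere (~1.4e-5 kg) at 10 km/s hitting a 4 kg CubeSat.
dv = delta_v_from_impact(1.4e-5, 10_000.0, 4.0, beta=2.0)
print(f"{dv * 1000:.1f} mm/s")
```

Even a change of tens of millimetres per second is small, which is why, as the abstract notes, such events would most likely be visible only in the orbital behavior of very light satellites such as cubesats.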
An analytical model with flexible accuracy for deep submicron DCVSL cells
NASA Astrophysics Data System (ADS)
Valiollahi, Sepideh; Ardeshir, Gholamreza
2018-07-01
Differential cascoded voltage switch logic (DCVSL) cells are among the best candidates of circuit designers for a wide range of applications due to advantages such as low input capacitance, high switching speed, small area and noise-immunity; nevertheless, a proper model has not yet been developed to analyse them. This paper analyses deep submicron DCVSL cells based on a flexible accuracy-simplicity trade-off including the following key features: (1) the model is capable of producing closed-form expressions with an acceptable accuracy; (2) model equations can be solved numerically to offer higher accuracy; (3) the short-circuit currents occurring in high-low/low-high transitions are accounted in analysis and (4) the changes in the operating modes of transistors during transitions together with an efficient submicron I-V model, which incorporates the most important non-ideal short-channel effects, are considered. The accuracy of the proposed model is validated in IBM 0.13 µm CMOS technology through comparisons with the accurate physically based BSIM3 model. The maximum error caused by analytical solutions is below 10%, while this amount is below 7% for numerical solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qai, Qiang; Rushton, Gerald; Bhaduri, Budhendra L
The objective of this research is to compute population estimates by age and sex for small areas whose boundaries are different from those for which the population counts were made. In our approach, population surfaces and age-sex proportion surfaces are separately estimated. Age-sex population estimates for small areas and their confidence intervals are then computed using a binomial model with the two surfaces as inputs. The approach was implemented for Iowa using a 90 m resolution population grid (LandScan USA) and U.S. Census 2000 population. Three spatial interpolation methods, the areal weighting (AW) method, the ordinary kriging (OK) method, and a modification of the pycnophylactic method, were used on Census Tract populations to estimate the age-sex proportion surfaces. To verify the model, age-sex population estimates were computed for paired Block Groups that straddled Census Tracts and therefore were spatially misaligned with them. The pycnophylactic method and the OK method were more accurate than the AW method. The approach is general and can be used to estimate subgroup-count types of variables from information in existing administrative areas for custom-defined areas used as the spatial basis of support in other applications.
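A minimal sketch of the final estimation step described above, assuming the two input surfaces are available as per-cell population counts and age-sex proportions and treating the small-area count as binomial. The cell values are invented, and the paper's actual surfaces and uncertainty treatment are more refined than this normal-approximation interval.

```python
import math

def age_sex_estimate(cell_population, cell_proportion, z=1.96):
    """Small-area count for one age-sex group from two surfaces:
    a population-count surface and an age-sex proportion surface.
    The count is treated as Binomial(N, p), with N the summed population
    and p the population-weighted proportion."""
    N = sum(cell_population)
    p = sum(n * q for n, q in zip(cell_population, cell_proportion)) / N
    estimate = N * p
    half_width = z * math.sqrt(N * p * (1.0 - p))
    return estimate, (estimate - half_width, estimate + half_width)

# Hypothetical custom area made up of four grid cells.
pop = [120, 85, 60, 150]                 # persons per cell
prop = [0.031, 0.028, 0.035, 0.030]      # proportion in one age-sex group
est, ci = age_sex_estimate(pop, prop)
print(round(est, 1), tuple(round(x, 1) for x in ci))
```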
Zanobetti, Antonella; O’Neill, Marie S.; Gronlund, Carina J.; Schwartz, Joel D
2015-01-01
Background Extremes of temperature have been associated with short-term increases in daily mortality. We identified subpopulations with increased susceptibility to dying during temperature extremes, based on personal demographics, small-area characteristics and preexisting medical conditions. Methods We examined Medicare participants in 135 U.S. cities and identified preexisting conditions based on hospitalization records prior to their deaths, from 1985–2006. Personal characteristics were obtained from the Medicare records, and area characteristics were assigned based on zip-code of residence. We conducted a case-only analysis of over 11 million deaths, and evaluated modification of the risk of dying associated with extremely hot days and extremely cold days, continuous temperatures, and water-vapor pressure. Modifiers included preexisting conditions, personal characteristics, zip-code-level population characteristics, and land-cover characteristics. For each effect modifier, a city-specific logistic regression model was fitted and then an overall national estimate was calculated using meta-analysis. Results People with certain preexisting conditions were more susceptible to extreme heat, with an additional 6% (95% confidence interval= 4% – 8%) increase in the risk of dying on an extremely hot day in subjects with previous admission for atrial fibrillation, an additional 8% (4%–12%) in subjects with Alzheimer disease, and an additional 6% (3%–9%) in subjects with dementia. Zip-code level and personal characteristics were also associated with increased susceptibility to temperature. Conclusions We identified several subgroups of the population who are particularly susceptible to temperature extremes, including persons with atrial fibrillation. PMID:24045717
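The national estimates described above come from pooling city-specific regression coefficients. The sketch below shows the simplest fixed-effect inverse-variance pooling with invented city estimates; the paper's meta-analysis over 135 cities very likely also accounts for between-city heterogeneity, so this is only a schematic of the idea.

```python
import math

def pooled_estimate(betas, ses):
    """Fixed-effect inverse-variance pooling of city-specific coefficients
    (log odds ratios) into one overall estimate."""
    weights = [1.0 / se ** 2 for se in ses]
    beta_pooled = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return beta_pooled, se_pooled

# Hypothetical log-odds-ratio estimates from three cities.
betas = [0.055, 0.071, 0.048]
ses = [0.012, 0.020, 0.015]
b, se = pooled_estimate(betas, ses)
lo, hi = b - 1.96 * se, b + 1.96 * se
print(f"pooled increase in risk: {100 * (math.exp(b) - 1):.1f}% "
      f"(95% CI {100 * (math.exp(lo) - 1):.1f}% to {100 * (math.exp(hi) - 1):.1f}%)")
```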
Bridging the Gap between Policy-Driven Land Use Changes and Regional Climate Projections
NASA Astrophysics Data System (ADS)
Berckmans, J.; Hamdi, R.; Dendoncker, N.; Ceulemans, R.
2017-12-01
Land use and land cover changes (LULCC) can impact the regional climate through two mechanisms: biogeochemical and biogeophysical. The biogeochemical mechanism alters the chemical composition of the atmosphere through greenhouse gas emissions. The biogeophysical mechanism changes the heat and moisture transfer between the land and the atmosphere. The different representations of future LULCC under the biogeochemical mechanism are included in the IPCC Representative Concentration Pathways (RCPs). In contrast, the RCPs do not incorporate the biogeophysical effects. Although considerable research has been devoted to the biogeophysical effects of LULCC on climate, less attention has been paid to assessing the full (both biogeochemical and biogeophysical) impact of LULCC on the regional climate in modeling studies. Because of the large variety of small changes in the landscape of Western Europe, capturing the small-scale climate impact of LULCC requires high-resolution scenarios. The "ALARM" project, supported by the European Commission, generated LULCC data at a resolution of 250x250 m for three time steps: 2020, 2050 and 2080. The CNRM-CM5.1 global climate model was downscaled to perform simulations with ALARO-SURFEX for the near-term future. Both climate changes and land cover changes were assessed based on RCP and ALARM scenarios. The use of the land surface model SURFEX, with its tiling approach, allowed us to accurately represent the small-scale changes in the landscape. The largest landscape changes are the abandonment of agricultural land and the increase in forestry and urban areas. Our results show that the conversions from rural to urban areas and from arable land to forest in Western Europe considerably affect the near-surface temperature and, to a lesser extent, the precipitation. These results are related to modifications in the surface energy budget. The LULCC have a significant impact compared to the near-term future climate changes. They provide valuable information for landscape planning to mitigate and adapt to climate change. The strength of this study is the use of policy-driven LULCC data combined with an accurate representation of the land surface by the climate model.
The influence of the lysimeter filling on the soil monolith inside
NASA Astrophysics Data System (ADS)
Puetz, T.; Schilling, J.; Vereecken, H.
2009-04-01
In general, lysimeters are vessels containing disturbed or undisturbed soil blocks; for the most realistic representation of outdoor conditions, an undisturbed soil block, a so-called soil monolith, is preferable. The lower boundary condition was realized in two different ways: as a zero-tension lysimeter with a perforated bottom plate, or as a controlled lower boundary condition with a suction plate. The optimal surface area and the lysimeter length depend mainly on the scientific question. For cropped lysimeter experiments, the lysimeter length has to accommodate the maximum root length. The base area is strongly connected to the scale of observation, whereby small-scale heterogeneity is averaged out when large base areas are used. For our experiments, lysimeters with 2.5 m length, 2 m2 base area, and a round vessel with a wall thickness of 10 mm were used. A base frame weighted down by 120 t of concrete weights is necessary to press a lysimeter cylinder into the ground with the aid of a hydraulic press. The hydraulic press is connected to the base frame via chains. The control of the four hydraulic cylinders guarantees a very precise vertical pressing process. To visualize the impact of the lysimeter filling on the intactness of the soil monolith, a finite element computation was conducted. The finite element package ANSYS Release 11 was used to execute a nonlinear static analysis on a 2D-axisymmetric finite element model, simulating the pressing process starting from an initial soil stress state and ending with the full length of the vessel driven into the soil, after which the hydraulic press and the concrete weights are deactivated and the vessel-surrounding soil is excavated. The numerical model of the pressing process considers, among other things, a cap non-associative plasticity model with shear and volumetric hardening, soil-to-soil contact with cohesive zone modelling, soil-to-vessel contact with high friction, soil excavation using element birth and death, and a stagger loop over the complete pressing process to determine the actual cutting plane.
Predicting Grizzly Bear Density in Western North America
Mowat, Garth; Heard, Douglas C.; Schwarz, Carl J.
2013-01-01
Conservation of grizzly bears (Ursus arctos) is often controversial and the disagreement often is focused on the estimates of density used to calculate allowable kill. Many recent estimates of grizzly bear density are now available but field-based estimates will never be available for more than a small portion of hunted populations. Current methods of predicting density in areas of management interest are subjective and untested. Objective methods have been proposed, but these statistical models are so dependent on results from individual study areas that the models do not generalize well. We built regression models to relate grizzly bear density to ultimate measures of ecosystem productivity and mortality for interior and coastal ecosystems in North America. We used 90 measures of grizzly bear density in interior ecosystems, of which 14 were currently known to be unoccupied by grizzly bears. In coastal areas, we used 17 measures of density including 2 unoccupied areas. Our best model for coastal areas included a negative relationship with tree cover and positive relationships with the proportion of salmon in the diet and topographic ruggedness, which was correlated with precipitation. Our best interior model included 3 variables that indexed terrestrial productivity, 1 describing vegetation cover, 2 indices of human use of the landscape and, an index of topographic ruggedness. We used our models to predict current population sizes across Canada and present these as alternatives to current population estimates. Our models predict fewer grizzly bears in British Columbia but more bears in Canada than in the latest status review. These predictions can be used to assess population status, set limits for total human-caused mortality, and for conservation planning, but because our predictions are static, they cannot be used to assess population trend. PMID:24367552
Hauck, Markus
2005-05-01
Based on literature data, epiphytic lichen abundance was comparatively studied in montane woodlands on healthy versus dead or dying conifers of Europe and North America, in areas with different levels of atmospheric pollution. Study sites comprised Picea abies forests in the Harz Mountains and in the northern Alps, Germany, Picea rubens-Abies balsamea forests on Whiteface Mountain, Adirondacks, New York, U.S.A., and Picea engelmannii-Abies lasiocarpa forests in the Salish Mountains, Montana, U.S.A. Detrended correspondence analysis showed that epiphytic lichen vegetation differed more between healthy and dead or dying trees at highly polluted sites than at less polluted ones. This is attributed to greater differences in chemical habitat conditions between trees of different vitality in highly polluted areas. Based on these results, a hypothetical model of the relative importance of site factors for small-scale variation in epiphytic lichen abundance versus atmospheric pollutant load is discussed.
[Conceptualizations on care for persons with dementia in nursing homes].
Rodríguez-Martín, Beatriz; Martínez-Andrés, María; Notario-Pacheco, Blanca; Martínez-Vizcaíno, Vicente
2016-03-01
Despite the importance of family perceptions when analyzing care for the elderly in nursing homes, little is said about this aspect. This study aims to identify preferences and areas for improvement in care for persons with dementia, as perceived by families. A qualitative study was performed, based on Grounded Theory, combining two data collection techniques (participant observation and in-depth interviews) in a theoretical sample of institutionalized persons with dementia. The ideal model of care for persons with dementia, as perceived by participants, was based on specialized and individualized care and family participation in the care provided. Areas for improvement included aspects pertaining to specialized training in geriatrics, human relations, and the culture of institutional work. Faced with the current trend towards technification of care, families are now demanding personalized, small-scale care in which they form an active part of the team.
De Wachter, R; Neefs, J M; Goris, A; Van de Peer, Y
1992-01-01
The nucleotide sequence of the gene coding for small ribosomal subunit RNA in the basidiomycete Ustilago maydis was determined. It revealed the presence of a group I intron with a length of 411 nucleotides. This is the third occurrence of such an intron discovered in a small subunit rRNA gene encoded by a eukaryotic nuclear genome. The other two occurrences are in Pneumocystis carinii, a fungus of uncertain taxonomic status, and Ankistrodesmus stipitatus, a green alga. The nucleotides of the conserved core structure of 101 group I intron sequences present in different genes and genome types were aligned and their evolutionary relatedness was examined. This revealed a cluster including all group I introns hitherto found in eukaryotic nuclear genes coding for small and large subunit rRNAs. A secondary structure model was designed for the area of the Ustilago maydis small ribosomal subunit RNA precursor where the intron is situated. It shows that the internal guide sequence pairing with the intron boundaries fits between two helices of the small subunit rRNA, and that minimal rearrangement of base pairs suffices to achieve the definitive secondary structure of the 18S rRNA upon splicing. PMID:1561081
Arctic lead detection using a waveform unmixing algorithm from CryoSat-2 data
NASA Astrophysics Data System (ADS)
Lee, S.; Im, J.
2016-12-01
Arctic areas consist of ice floes, leads, and polynyas. While leads and polynyas account for only a small part of the Arctic Ocean, they play a key role in exchanging heat flux, moisture, and momentum between the atmosphere and the ocean in wintertime because of the large temperature difference between them. In this study, a linear waveform unmixing approach was proposed to detect lead fraction. CryoSat-2 waveforms for pure leads, sea ice, and ocean were used as end-members, based on visual interpretation of MODIS images coincident with CryoSat-2 data. The unmixing model produced lead, sea ice, and ocean abundances, and a threshold (> 0.7) was applied to make a binary classification between lead and sea ice. The unmixing model produced better results than existing models in the literature, which are based on simple thresholding approaches. The results were also comparable with our previous research using machine learning-based models (i.e., decision trees and random forest). A monthly lead fraction was calculated by dividing the number of detected leads by the total number of measurements. The lead fraction around the Beaufort Sea and Fram Strait was high due to the anti-cyclonic rotation of the Beaufort Gyre and the outflow of sea ice to the Atlantic. The lead fraction maps produced in this study matched well with monthly lead fraction maps in the literature. The areas with thin sea ice identified in our previous research correspond to the high lead fraction areas in the present study. Furthermore, sea ice roughness from the ASCAT scatterometer was compared to a lead fraction map to examine the relationship between surface roughness and lead distribution.
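A linear waveform unmixing step of the kind described above can be sketched with a non-negative least-squares solve. The synthetic waveforms below are Gaussian placeholders rather than real CryoSat-2 end-members, so this only illustrates the mixing-and-threshold idea.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_waveform(waveform, endmembers):
    """Linear waveform unmixing: model the observed waveform as a
    non-negative mixture of pure end-member waveforms (lead, sea ice,
    ocean) and return abundances normalised to sum to one."""
    A = np.column_stack(endmembers)          # (n_bins, n_endmembers)
    abundances, _ = nnls(A, waveform)
    return abundances / abundances.sum()

bins = np.arange(128)
lead = np.exp(-0.5 * ((bins - 60) / 1.5) ** 2)       # specular, very peaky
ice = np.exp(-0.5 * ((bins - 60) / 8.0) ** 2)        # diffuse
ocean = np.exp(-0.5 * ((bins - 60) / 20.0) ** 2)     # very diffuse

obs = 0.8 * lead + 0.2 * ice
frac = unmix_waveform(obs, [lead, ice, ocean])
print(frac.round(3), "-> lead" if frac[0] > 0.7 else "-> sea ice")
```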
Sloane, David C; Diamant, Allison L; Lewis, LaVonna B; Yancey, Antronette K; Flynn, Gwendolyn; Nascimento, Lori Miller; McCarthy, William J; Guinyard, Joyce Jones; Cousineau, Michael R
2003-01-01
OBJECTIVES To build health promotion capacity among community residents through a community-based participatory model, and to apply this model to study the nutritional environment of an urban area to better understand the role of such resources in residents' efforts to live a healthy life. DESIGN A multiphase collaborative study that inventoried selected markets in targeted areas of high African-American concentration in comparison with markets in a contrasting wealthier area with fewer African Americans. SETTING A community study set in the Los Angeles metropolitan area. PARTICIPANTS African-American community organizations and community residents in the target areas. INTERVENTIONS Two surveys of market inventories were conducted. The first was a single-sheet form profiling store conditions and the availability of a small selection of healthy foods. The second provided detailed information on whether the store offered fruit, vegetables, low-fat dairy products, dried goods and other items necessary for residents to consume a nutritious diet. RESULTS The targeted areas were significantly less likely to have important items for living a healthier life. The variety and quality of fresh fruit and vegetable produce was significantly lower in the target areas. Such products as 1% milk, skim milk, low-fat and nonfat cheese, soy milk, tofu, whole grain pasta and breads, and low-fat meat and poultry items were significantly less available. CONCLUSIONS Healthy food products were significantly less available in the target areas. The authors conclude from these results that the health disparities experienced by African-American communities have origins that extend beyond the health delivery system and individual behaviors inasmuch as adherence to the healthy lifestyle associated with low chronic disease risk is more difficult in resource-poor neighborhoods than in resource-rich ones. PMID:12848840
Object-oriented classification of drumlins from digital elevation models
NASA Astrophysics Data System (ADS)
Saha, Kakoli
Drumlins are common elements of glaciated landscapes that are easily identified by their distinct morphometric characteristics, including shape, length/width ratio, elongation ratio, and uniform direction. To date, most researchers have mapped drumlins by tracing contours on maps, or through on-screen digitization directly on top of hillshaded digital elevation models (DEMs). This paper utilizes the unique morphometric characteristics of drumlins and investigates automated extraction of the landforms as objects from DEMs with Definiens Developer software (V.7), using the 30 m United States Geological Survey National Elevation Dataset DEM as input. The Chautauqua drumlin field in Pennsylvania and upstate New York, USA was chosen as a study area. As the study area is large (covering approximately 2500 sq. km), small test areas were selected for initial testing of the method. Individual polygons representing the drumlins were extracted from the elevation data set by automated recognition, using Definiens' Multiresolution Segmentation tool, followed by rule-based classification. Subsequently, parameters such as length, width, length-width ratio, perimeter, and area were measured automatically. To test the accuracy of the method, a second base map was produced by manual on-screen digitization of drumlins from topographic maps, and the same morphometric parameters were extracted from the mapped landforms using Definiens Developer. Statistical comparison showed a high agreement between the two methods, confirming that object-oriented classification can be used for mapping these landforms. The proposed method represents an attempt to solve the problem of mass extraction of drumlins by providing a generalized rule-set. To check its transferability, the automated extraction process was next applied to a larger area. Results showed that the proposed method is as successful for the larger area as it was for the smaller test areas.
Rankl, James G.
1990-01-01
A physically based point-infiltration model was developed for computing infiltration of rainfall into soils and the resulting runoff from small basins in Wyoming. The user describes a 'design storm' in terms of average rainfall intensity and storm duration. Information required to compute runoff for the design storm by using the model includes (1) soil type and description, and (2) two infiltration parameters and a surface-retention storage parameter. Parameter values are tabulated in the report. Rainfall and runoff data for three ephemeral-stream basins that contain only one type of soil were used to develop the model. Two assumptions were necessary: antecedent soil moisture is some long-term average, and storm rainfall is uniform in both time and space. The infiltration and surface-retention storage parameters were determined for the soil of each basin. Observed rainstorm and runoff data were used to develop a separation curve, or incipient-runoff curve, which distinguishes between runoff and nonrunoff rainfall data. The position of this curve defines the infiltration and surface-retention storage parameters. A procedure for applying the model to basins that contain more than one type of soil was developed using data from 7 of the 10 study basins. For these multiple-soil basins, the incipient-runoff curve defines the infiltration and retention-storage parameters for the soil having the highest runoff potential. Parameters were defined by ranking the soils according to their relative permeabilities and optimizing the position of the incipient-runoff curve by using measured runoff as a control for the fit. Analyses of runoff from multiple-soil basins indicate that the effective contributing area of runoff is less than the drainage area of the basin. In this study, the effective drainage area ranged from 41.6 to 71.1 percent of the total drainage area. Information on effective drainage area is useful in evaluating drainage area as an independent variable in statistical analyses of hydrologic data, such as annual peak frequency distributions and sediment yield. A comparison was made of the sum of the simulated runoff and the sum of the measured runoff for all available records of runoff-producing storms in the 10 study basins. The sums of the simulated runoff ranged from 12.0 percent less than to 23.4 percent more than the sums of the measured runoff. A measure of the standard error of estimate was computed for each data set. These values ranged from 20 to 70 percent of the mean value of the measured runoff. Rainfall-simulator infiltrometer tests were made in two small basins. The amount of water uptake measured by the test in Dugout Creek tributary basin averaged about three times greater than the amount of water uptake computed from rainfall and runoff data. Therefore, infiltrometer data were not used to determine infiltration rates for this study.
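The report does not spell out the functional form of its two infiltration parameters, so the sketch below is only a loose illustration of a design-storm runoff calculation, assuming a Philip-type two-parameter infiltration curve plus a surface-retention storage term. The parameter values are invented, not taken from the report's tables.

```python
import math

def design_storm_runoff(intensity_mm_h, duration_h, sorptivity, steady_rate,
                        retention_mm):
    """Runoff depth (mm) for a uniform design storm, assuming a Philip-type
    two-parameter infiltration curve F(t) = S*sqrt(t) + A*t plus a
    surface-retention storage term.  This functional form is an assumption
    made for illustration; the report uses its own tabulated parameters."""
    rainfall = intensity_mm_h * duration_h
    infiltration = min(rainfall,
                       sorptivity * math.sqrt(duration_h)
                       + steady_rate * duration_h)
    return max(0.0, rainfall - infiltration - retention_mm)

# Hypothetical storm of 25 mm/h for 1.5 h on a soil with S = 8 mm/h^0.5,
# A = 4 mm/h, and 3 mm of retention storage.
print(round(design_storm_runoff(25.0, 1.5, 8.0, 4.0, 3.0), 1), "mm of runoff")
```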
NASA Technical Reports Server (NTRS)
Hoffer, R. M.; Landgrebe, D. A. (Principal Investigator); Goodrick, F. E.
1972-01-01
There are no author-identified significant results in this report. The principal problem encountered has been the lack of good quality, small scale baseline photography for the test areas. Analysis of the ERTS-1 data for the San Juan Site will emphasize development of a preliminary spectral classification defining grass cover categories, and then selection of subframes for intensive investigation of the forestry, geologic, and hydrologic properties of the area. Primary work has been devoted to the selection and digitization of areas for topographic modeling, and compilation of ground based data maps necessary for computer analysis. Study effort has emphasized: geomorphic features; macro-vegetation; micro-vegetation; snow-hydrology; insect/disease damage; and blow-down. Analysis of a frame of the Lake Texoma area indicates a great deal of potential in the analysis and interpretation of ERTS imagery. Preliminary results of investigations of geologic, forest, range, cropland, and water resources of the area are summarized.
Lin, Yan; Wimberly, Michael C
2017-04-01
The purpose of this study was to examine the geographic variations of late-stage diagnosis in colorectal cancer (CRC) and breast cancer as well as to investigate the effects of 3 neighborhood-level factors-socioeconomic deprivation, urban/rural residence, and spatial accessibility to health care-on the late-stage risks. This study used population-based South Dakota cancer registry data from 2001 to 2012. A total of 4,878 CRC cases and 6,418 breast cancer cases were included in the analyses. Two-level logistic regression models were used to analyze the risk of late-stage CRC and breast cancer. For CRC, there was a small geographic variation across census tracts in late-stage diagnosis, and residing in isolated small rural areas was significantly associated with late-stage risk. However, this association became nonsignificant after adjusting for census-tract level socioeconomic deprivation. Socioeconomic deprivation was an independent predictor of CRC late-stage risk, and it explained the elevated risk among American Indians. No relationship was found between spatial accessibility and CRC late-stage risk. For breast cancer, no geographic variation in the late-stage diagnosis was observed across census tracts, and none of the 3 neighborhood-level factors was significantly associated with late-stage risk. Results suggested that socioeconomic deprivation, rather than spatial accessibility, contributed to CRC late-stage risks in South Dakota as a rural state. CRC intervention programs could be developed to target isolated small rural areas, socioeconomically disadvantaged areas, as well as American Indians residing in these areas. © 2016 National Rural Health Association.
Small Schools, Large Districts: Small-School Reform and New York City's Students
ERIC Educational Resources Information Center
Iatarola, Patrice; Schwartz, Amy Ellen; Stiefel, Leanna; Chellman, Colin C.
2008-01-01
Background/Context: High school reform is currently at the top of the education policy making agenda after years of stagnant achievement and persistent racial and income test score gaps. Although a number of reforms offer some promise of improving U.S. high schools, small schools have emerged as the favored reform model, especially in urban areas,…
Robinson, Hugh S.; Abarca, Maria; Zeller, Katherine A.; Velasquez, Grisel; Paemelaere, Evi A. D.; Goldberg, Joshua F.; Payan, Esteban; Hoogesteijn, Rafael; Boede, Ernesto O.; Schmidt, Krzysztof; Lampo, Margarita; Viloria, Ángel L.; Carreño, Rafael; Robinson, Nathaniel; Lukacs, Paul M.; Nowak, J. Joshua; Salom-Pérez, Roberto; Castañeda, Franklin; Boron, Valeria; Quigley, Howard
2018-01-01
Broad scale population estimates of declining species are desired for conservation efforts. However, for many secretive species including large carnivores, such estimates are often difficult. Based on published density estimates obtained through camera trapping, presence/absence data, and globally available predictive variables derived from satellite imagery, we modelled density and occurrence of a large carnivore, the jaguar, across the species’ entire range. We then combined these models in a hierarchical framework to estimate the total population. Our models indicate that potential jaguar density is best predicted by measures of primary productivity, with the highest densities in the most productive tropical habitats and a clear declining gradient with distance from the equator. Jaguar distribution, in contrast, is determined by the combined effects of human impacts and environmental factors: probability of jaguar occurrence increased with forest cover, mean temperature, and annual precipitation and declined with increases in human footprint index and human density. Probability of occurrence was also significantly higher for protected areas than outside of them. We estimated the world’s jaguar population at 173,000 (95% CI: 138,000–208,000) individuals, mostly concentrated in the Amazon Basin; elsewhere, populations tend to be small and fragmented. The high number of jaguars results from the large total area still occupied (almost 9 million km2) and low human densities (< 1 person/km2) coinciding with high primary productivity in the core area of jaguar range. Our results show the importance of protected areas for jaguar persistence. We conclude that combining modelling of density and distribution can reveal ecological patterns and processes at global scales, can provide robust estimates for use in species assessments, and can guide broad-scale conservation actions. PMID:29579129
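The final step described above, combining a density model and an occurrence model into a range-wide total, can be sketched as a sum over grid cells of predicted density times occurrence probability times cell area. The cell values below are invented, and the authors' hierarchical framework also propagates uncertainty, which this sketch omits.

```python
def total_population(cells):
    """Range-wide abundance as a sum over grid cells of
    predicted density (animals per 100 km2) * probability of occurrence
    * cell area (in units of 100 km2).  Values are illustrative only."""
    return sum(density * p_occ * area for density, p_occ, area in cells)

# (density per 100 km2, occurrence probability, cell area in 100 km2)
cells = [(3.5, 0.9, 50.0),   # productive, well-protected forest
         (2.0, 0.6, 50.0),   # partly converted landscape
         (0.8, 0.2, 50.0)]   # heavily impacted area
print(round(total_population(cells), 1), "individuals")
```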
Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality.
Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel
2015-12-01
Many people living in low and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data, including many household sample surveys, are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor, spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought, since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991-2010 and two demographic surveillance system sites. We derive a variance estimator of under-five child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey, and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA).
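The design-based starting point for such small-area estimates is a survey-weighted mortality rate. The sketch below computes a weighted under-five mortality ratio with a crude linearisation-style variance that ignores the survey's cluster structure; the paper derives a variance estimator tailored to the complex weights before the spatio-temporal smoothing step, so this is only a rough stand-in with invented records.

```python
import numpy as np

def weighted_u5mr(deaths, exposures, weights):
    """Survey-weighted under-five mortality estimate for one region and
    period: a weighted ratio of deaths to child-years of exposure, with a
    simple linearisation-style variance that treats units as independent."""
    deaths = np.asarray(deaths, dtype=float)
    exposures = np.asarray(exposures, dtype=float)
    w = np.asarray(weights, dtype=float)
    rate = np.sum(w * deaths) / np.sum(w * exposures)
    resid = w * (deaths - rate * exposures) / np.sum(w * exposures)
    var = np.sum(resid ** 2) * len(w) / max(len(w) - 1, 1)
    return rate, float(np.sqrt(var))

# Hypothetical household-level records for one region and period.
rate, se = weighted_u5mr(deaths=[0, 1, 0, 0, 2, 0],
                         exposures=[4.0, 5.5, 3.0, 6.0, 8.0, 2.5],
                         weights=[1.2, 0.8, 1.0, 1.5, 0.9, 1.1])
print(round(rate, 4), round(se, 4))
```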
An Impact Ejecta Behavior Model for Small, Irregular Bodies
NASA Technical Reports Server (NTRS)
Richardson, J. E.; Melosh, H. J.; Greenberg, R.
2003-01-01
In recent years, spacecraft observations of asteroids 951 Gaspra, 243 Ida, 253 Mathilde, and 433 Eros have shown the overriding dominance of impact processes with regard to the structure and surface morphology of these small, irregular bodies. In particular, impact ejecta play an important role in regolith formation, ranging from small particles to large blocks, as well as surface feature modification and obscuration. To investigate these processes, a numerical model has been developed based upon the impact ejecta scaling laws provided by Housen, Schmidt, and Holsapple, and modified to more properly simulate the late-stage ejection velocities and ejecta plume shape changes (ejection angle variations) shown in impact cratering experiments. A target strength parameter has also been added to allow the simulation of strength-dominated cratering events in addition to the more familiar gravity-dominated cratering events. The result is a dynamical simulation which models -- via tracer particles -- the ejecta plume behavior, ejecta blanket placement, and impact crater area resulting from a specified impact on an irregularly shaped target body, which is modeled in 3-dimensional polygon fashion. This target body can be placed in a simple rotation state about one of its principal axes, with the impact site and projectile/target parameters selected by the user. The gravitational force from the irregular target body (on each tracer particle) is determined using the polygonized surface (polyhedron) gravity technique developed by Werner.
Radar Reflectivity in Wingtip-Generated Wake Vortices
NASA Technical Reports Server (NTRS)
Marshall, Robert E.; Mudukutore, Ashok; Wissel, Vicki
1997-01-01
This report documents new predictive models of radar reflectivity, with meter-scale resolution, for aircraft wakes in clear air and fog. The models result from a radar design program to locate and quantify wake vortices from commercial aircraft in support of the NASA Aircraft Vortex Spacing System (AVOSS). The radar reflectivity model for clear air assumes: 1) turbulent eddies in the wake produce small discontinuities in radar refractive index; and 2) these turbulent eddies are in the 'inertial subrange' of turbulence. From these assumptions, the maximum radar frequency for detecting a particular aircraft wake, as well as the refractive index structure constant and radar volume reflectivity in the wake, can be obtained from the NASA Terminal Area Simulation System (TASS) output. For fog conditions, an empirical relationship is used to calculate radar reflectivity factor from TASS output of bulk liquid water. Currently, two models exist: 1) the Atlas model, based on observations of liquid water and radar reflectivity factor in clouds; and 2) the de Wolf model, tailored to a particular measured dataset (1992 Vandenberg Air Force Base).
Computer 3D site model generation based on aerial images
NASA Astrophysics Data System (ADS)
Zheltov, Sergey Y.; Blokhinov, Yuri B.; Stepanov, Alexander A.; Skryabin, Sergei V.; Sibiriakov, Alexandre V.
1997-07-01
The technology for 3D model design of real world scenes and its photorealistic rendering are current topics of investigation. Development of such technology is attractive for a wide variety of applications: military mission planning, crew training, civil engineering, architecture, and virtual reality entertainment, to name just a few. 3D photorealistic models of urban areas are often discussed now as an upgrade from existing 2D geographic information systems. The possibility of generating a site model with small details depends on two main factors: the available source dataset and computing resources. In this paper a PC-based technology is presented, so that scenes of middle resolution (scale of 1:1000) can be constructed. The datasets are gray level aerial stereo pairs of photographs (scale of 1:14000) and true color ground-level photographs of buildings (scale ca. 1:1000). True color terrestrial photographs are also necessary for photorealistic rendering, which greatly improves human perception of the scene.
Financial analysis of brucellosis control for small-scale goat farming in the Bajío region, Mexico.
Oseguera Montiel, David; Bruce, Mieghan; Frankena, Klaas; Udo, Henk; van der Zijpp, Akke; Rushton, Jonathan
2015-03-01
Brucellosis is an endemic disease in small-scale goat husbandry systems in Mexico. It is a zoonosis and the economic consequences can be large, although estimates are not available for the Mexican goat sector. Our objective was to conduct a financial analysis of brucellosis control in a prominent dairy goat production area of the Bajío region, Mexico. We used three models: (1) a brucellosis transmission model at village flock level (n=1000 head), (2) a flock growth model at smallholder flock level (n=23 head) using output of model 1 and (3) cost-benefit analysis of several brucellosis control scenarios based on output of model 2. Scenarios consisted of test-and-slaughter or vaccination or a combination of both compared to the base situation (no control). The average net present value (NPV) of vaccination was 3.8 US$ per goat (90% CI: 1.3-6.6) over a 5-year period and 20 US$ per goat (90% CI: 11.3-28.6) over a 10-year period. The average benefit-cost ratios over a 5-year period and 10-year period were 4.3 (90% CI: 2.2-6.9) and 12.3 (90% CI: 7.5-17.3), respectively. For the total dairy goat population (38,462 head) of the study area (the Bajío of Jalisco and Michoacán) the NPVs over a 5-year and a 10-year period were 0.15 million US$ and 0.8 million US$. However, brucellosis prevalence was predicted to remain relatively high at about 12%. Control scenarios with test-and-slaughter were predicted to reduce brucellosis prevalence to less than 3%, but produced a negative NPV over a 5-year period ranging from -31.6 to -11.1 US$ and from -31.1 to 7.5 US$ over a 10-year period. A brucellosis control campaign based on vaccination with full coverage is economically profitable for the goat dairy sector of the region, although smallholders would need financial support if test-and-slaughter is applied to reduce the prevalence more quickly. Copyright © 2014 Elsevier B.V. All rights reserved.
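The financial metrics reported above follow standard discounted cash-flow arithmetic. The sketch below shows how an NPV and a benefit-cost ratio could be computed for a per-goat control scenario; the cash flows and discount rate are hypothetical and are not the values used in the paper's models.

```python
import numpy as np

def npv(cash_flows, rate):
    """Net present value of annual cash flows; year 0 is the first element."""
    cash_flows = np.asarray(cash_flows, dtype=float)
    years = np.arange(len(cash_flows))
    return np.sum(cash_flows / (1.0 + rate) ** years)

def benefit_cost_ratio(benefits, costs, rate):
    """Ratio of discounted benefits to discounted costs."""
    return npv(benefits, rate) / npv(costs, rate)

# Hypothetical per-goat figures (US$) for a 5-year vaccination scenario
benefits = [0.0, 1.5, 2.5, 3.5, 4.0]   # avoided losses per year
costs    = [1.0, 0.6, 0.6, 0.6, 0.6]   # vaccination campaign costs per year
r = 0.05                                # assumed annual discount rate
print("NPV:", round(npv(np.subtract(benefits, costs), r), 2))
print("BCR:", round(benefit_cost_ratio(benefits, costs, r), 2))
```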
Assessment of crack opening area for leak rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharples, J.K.; Bouchard, P.J.
1997-04-01
This paper outlines the background to recommended crack opening area solutions given in a proposed revision to leak before break guidance for the R6 procedure. Comparisons with experimental and analytical results are given for some selected cases of circumferential cracks in cylinders. It is shown that elastic models can provide satisfactory estimations of crack opening displacement (and area) but they become increasingly conservative for values of L_r greater than approximately 0.4. The Dugdale small scale yielding model gives conservative estimates of crack opening displacement with increasing enhancement for L_r values greater than 0.4. Further validation of the elastic-plastic reference stress method for up to L_r values of about 1.0 is presented by experimental and analytical comparisons. Although a more detailed method, its application gives a best estimate of crack opening displacement which may be substantially greater than small scale plasticity models. It is also shown that the local boundary conditions in pipework need to be carefully considered when evaluating crack opening area for through-wall bending stresses resulting from welding residual stresses or geometry discontinuities.
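For reference, the Dugdale strip-yield (small-scale yielding) model mentioned above gives the crack-tip opening displacement for a through crack of half-length a in an infinite plate under remote stress σ in the standard textbook form below; this is not necessarily the exact formulation adopted in the R6 guidance.

```latex
% Dugdale strip-yield crack-tip opening displacement (textbook form):
\delta = \frac{8\,\sigma_Y\, a}{\pi E}\,
         \ln\!\left[\sec\!\left(\frac{\pi\,\sigma}{2\,\sigma_Y}\right)\right]
```

Here σ_Y is the yield stress and E is Young's modulus (replaced by E/(1-ν²) in plane strain). The rapid growth of the log-secant term as σ approaches σ_Y mirrors the increasing enhancement of crack opening displacement noted above for L_r beyond about 0.4.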
NASA Astrophysics Data System (ADS)
Fradinata, Edy; Marli Kesuma, Zurnila
2018-05-01
Polynomial and spline regression are numerical models used here to compare method performance, to model distance relationships between cement retailers in Banda Aceh, and to predict retailers' market areas and the economic order quantity (EOQ). The models differ in accuracy as measured by the mean square error (MSE). Distance relationships between retailers are used to identify retailer density in the town. The dataset was collected from cement retailer sales together with global positioning system (GPS) locations. The sales dataset is plotted to assess the goodness of fit of the quadratic, cubic, and fourth-order polynomial methods. On the real sales dataset, the polynomials model the relationship between the x-abscissa and y-ordinate. This research offers several contributions: the four fitted models are useful for predicting retailers' market areas in a competitive setting, the performance of the methods is compared, the distance relationships between retailers are quantified, and an inventory policy based on the economic order quantity is derived. The results show that high-density retailer areas coincide with growing populations and construction projects. The spline performs better than the quadratic, cubic, and fourth-order polynomials in predicting the points, as indicated by its smaller MSE. The inventory policy adopted is of the periodic review type.
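The comparison described above can be sketched by fitting quadratic, cubic and fourth-order polynomials plus a smoothing spline to a synthetic sales-versus-distance dataset, comparing their in-sample MSE, and computing a classic (Wilson) economic order quantity; all numbers are hypothetical and the spline smoothing parameter is arbitrary.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical sales data: distance from town centre (x) vs monthly sales (y)
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 60))
y = 50 + 8 * x - 0.9 * x**2 + rng.normal(0, 5, x.size)

def poly_mse(x, y, degree):
    """Least-squares polynomial fit of given degree; return in-sample MSE."""
    coeffs = np.polyfit(x, y, degree)
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

for d in (2, 3, 4):   # quadratic, cubic, fourth-order
    print(f"degree {d}: MSE = {poly_mse(x, y, d):.2f}")

spline = UnivariateSpline(x, y, s=len(x) * 25)   # smoothing spline
print("spline: MSE =", round(np.mean((spline(x) - y) ** 2), 2))

# Economic order quantity (classic Wilson formula): sqrt(2*D*S/H)
D, S, H = 12000, 150.0, 2.5   # annual demand, order cost, holding cost per unit
print("EOQ =", round(np.sqrt(2 * D * S / H), 1), "units")
```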
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-08
... number). Persons with hearing or speech impairments may access this number through TTY by calling the... decreased in value by as much as 35 percent. These decreases may put some PBV properties at risk for... Market Rents for Project-Based Vouchers in the Dallas TX Metropolitan Area AGENCY: Office of the Assistant...
Low-Cost Ultra-High Spatial and Temporal Resolution Mapping of Intertidal Rock Platforms
NASA Astrophysics Data System (ADS)
Bryson, M.; Johnson-Roberson, M.; Murphy, R.
2012-07-01
Intertidal ecosystems have primarily been studied using field-based sampling; remote sensing offers the ability to collect data over large areas in a snapshot of time, which could complement field-based sampling methods by extrapolating them into the wider spatial and temporal context. Conventional remote sensing tools (such as satellite and aircraft imaging) provide data at relatively coarse, sub-meter resolutions or with limited temporal resolutions and relatively high costs for small-scale environmental science and ecology studies. In this paper, we describe a low-cost, kite-based imaging system and photogrammetric pipeline that was developed for constructing high-resolution, 3D, photo-realistic terrain models of intertidal rocky shores. The processing pipeline uses automatic image feature detection and matching, structure-from-motion and photo-textured terrain surface reconstruction algorithms that require minimal human input and only a small number of ground control points and allow the use of cheap, consumer-grade digital cameras. The resulting maps combine colour and topographic information at sub-centimeter resolutions over an area of approximately 100 m, thus enabling spatial properties of the intertidal environment to be determined across a hierarchy of spatial scales. Results of the system are presented for an intertidal rock platform at Cape Banks, Sydney, Australia. Potential uses of this technique include mapping of plant (micro- and macro-algae) and animal (e.g. gastropods) assemblages at multiple spatial and temporal scales.
Slutske, Wendy S; Deutsch, Arielle R; Statham, Dixie J; Martin, Nicholas G
2015-08-01
Previous research has demonstrated that local area characteristics (such as disadvantage and gambling outlet density) and genetic risk factors are associated with gambling involvement and disordered gambling. These 2 lines of research were brought together in the present study by examining the extent to which genetic contributions to individual differences in gambling involvement and disorder contributed to being exposed to, and were also accentuated by, local area disadvantage. Participants were members of the national community-based Australian Twin Registry who completed a telephone interview in which the past-year frequency of gambling and symptoms of disordered gambling were assessed. Indicators of local area disadvantage were based on census data matched to the participants' postal codes. Univariate biometric model-fitting revealed that exposure to area disadvantage was partially explained by genetic factors. Bivariate biometric model-fitting was conducted to examine the evidence for gene-environment interaction while accounting for gene-environment correlation. These analyses demonstrated that: (a) a small portion of the genetic propensity to gamble was explained by moving to or remaining in a disadvantaged area, and (b) the remaining genetic and unique environmental variation in the frequency of participating in electronic machine gambling (among men and women) and symptoms of disordered gambling (among women) was greater in more disadvantaged localities. As the gambling industry continues to grow, it will be important to take into account the multiple contexts in which problematic gambling behavior can emerge, from genes to geography, as well as the ways in which such contexts may interact with each other. (c) 2015 APA, all rights reserved.
Slutske, Wendy S.; Deutsch, Arielle R.; Statham, Dixie B.; Martin, Nicholas G.
2015-01-01
Previous research has demonstrated that local area characteristics (such as disadvantage and gambling outlet density) and genetic risk factors are associated with gambling involvement and disordered gambling. These two lines of research were brought together in the present study by examining the extent to which genetic contributions to individual differences in gambling involvement and disorder contributed to being exposed to, and were also accentuated by, local area disadvantage. Participants were members of the national community-based Australian Twin Registry who completed a telephone interview in which the past-year frequency of gambling and symptoms of disordered gambling were assessed. Indicators of local area disadvantage were based on census data matched to the participants' postal codes. Univariate biometric model-fitting revealed that exposure to area disadvantage was partially explained by genetic factors. Bivariate biometric model-fitting was conducted to examine the evidence for gene-environment interaction while accounting for gene-environment correlation. These analyses demonstrated that: (a) a small portion of the genetic propensity to gamble was explained by moving to or remaining in a disadvantaged area, and (b) the remaining genetic and unique environmental variation in the frequency of participating in electronic machine gambling (among men and women) and symptoms of disordered gambling (among women) was greater in more disadvantaged localities. As the gambling industry continues to grow, it will be important to take into account the multiple contexts in which problematic gambling behavior can emerge -- from genes to geography -- as well as the ways in which such contexts may interact with each other. PMID:26147321
Coexistence of Trees and Grass: Importance of climate and fire within the tropics
NASA Astrophysics Data System (ADS)
Shuman, J. K.; Fisher, R.; Koven, C.; Knox, R. G.; Andre, B.; Kluzek, E. B.
2017-12-01
Tropical forests are characterized by transition zones where dominance shifts between trees and grasses, with some areas exhibiting bistability of the two. The cause of this transition and bistability has been linked to the interacting effects of climate, vegetation structure and fire behavior. Utilizing the Functionally Assembled Terrestrial Ecosystem Simulator (FATES), a demographic vegetation model, and the Community Earth System Model (CESM), we explore the coexistence of trees and grass across the tropics with an active fire regime. FATES has been updated to use a fire module based on Spitfire. FATES-Spitfire tracks fire ignition, spread and impact based on fuel state and combustion. Fire occurs within the model with variable intensity that kills trees according to the combined effects of cambial damage and crown scorch due to flame height and fire intensity. As a size-structured model, FATES allows for variable mortality based on the size of tree cohorts, where larger trees experience lower mortality compared to small trees. Results for simulation scenarios where vegetation is represented by all trees, all grass, or a combination of competing trees and grass are compared to assess changes in biomass, fire regime and tree-grass coexistence. Within the forest-grass transition area there is a critical time during which grass fuels fire spread and prevents the establishment of trees. If trees are able to escape this mortality, a bistable tree-grass state can persist. The ability to simulate the bistability and transition of trees and grass throughout the tropics is critical to representing vegetation dynamics in response to changing climate and CO2.
Sandstorms are frequent in the northern Chihuahuan Desert in New Mexico, an area characterized by open areas lacking vegetation, individual mesquite bushes, and mesquite coppice dunes. Field measurements of sand fluxes and wind velocities over a two year period provided a descri...
Increase Economic Valuation of Marine Ecotourism Spots In Small Islands
NASA Astrophysics Data System (ADS)
Rahakbauw, Siska D.; Teniwut, Wellem A.; Renjaan, Meiskyana R.; Hungan, Marselus
2017-10-01
Ecotourism is one of the fast-growing sectors, especially in developing countries, as a source of revenue. Sustainable development of ecotourism requires broad and comprehensive effort from central and local government; prominent examples in Indonesia are Bali and Lombok. Other areas in Indonesia, such as the Kei Islands, which span two administrative governments, face a major problem in building sustainable nature-based tourism because their distance from the country's major cities makes travel costs high. This situation makes the role of the local community, as the backbone of the growth and development of nature-based tourism, critical. Using structural equation modeling (SEM), we constructed a model to enhance local community perception of the economic valuation of ecotourism spots in the area. Results showed that perceived quality, as a mediator driven by the intensity of exposure on national television and the internet, could increase community attachment and thereby the willingness to pay of the local community for ecotourism in the Kei Islands. The results also indicated that the WTP value for the local community on ecotourism in the Kei Islands was 10.81 per trip, with an average of 1 to 4 trips per month.
Application of a three-dimensional hydrodynamic model to the Himmerfjärden, Baltic Sea
NASA Astrophysics Data System (ADS)
Sokolov, Alexander
2014-05-01
Himmerfjärden is a coastal fjord-like bay situated in the north-western part of the Baltic Sea. The fjord has a mean depth of 17 m and a maximum depth of 52 m. The water is brackish (6 psu) with small salinity fluctuation (±2 psu). A sewage treatment plant, which serves about 300 000 people, discharges into the inner part of Himmerfjärden. This area is the subject of a long-term monitoring program. We are planning to develop a publicly available modelling system for this area, which will perform short-term forecast predictions of pertinent parameters (e.g., water-levels, currents, salinity, temperature) and disseminate them to users. A key component of the system is a three-dimensional hydrodynamic model. The open source Delft3D Flow system (http://www.deltaressystems.com/hydro) has been applied to model the Himmerfjärden area. Two different curvilinear grids were used to approximate the modelling domain (25 km × 50 km × 60 m). One grid has low horizontal resolution (cell size varies from 250 to 450 m) to perform long-term numerical experiments (modelling period of several months), while another grid has higher resolution (cell size varies from 120 to 250 m) to model short-term situations. In vertical direction both z-level (50 layers) and sigma coordinate (20 layers) were used. Modelling results obtained with different horizontal resolution and vertical discretisation will be presented. This model will be a part of the operational system which provides automated integration of data streams from several information sources: meteorological forecast based on the HIRLAM model from the Finnish Meteorological Institute (https://en.ilmatieteenlaitos.fi/open-data), oceanographic forecast based on the HIROMB-BOOS Model developed within the Baltic community and provided by the MyOcean Project (http://www.myocean.eu), riverine discharge from the HYPE model provided by the Swedish Meteorological Hydrological Institute (http://vattenwebb.smhi.se/modelarea/).
De Clercq, E M; Leta, S; Estrada-Peña, A; Madder, M; Adehan, S; Vanwambeke, S O
2015-01-01
Rhipicephalus microplus is one of the most widely distributed and economically important ticks, transmitting Babesia bigemina, B. bovis and Anaplasma marginale. It was recently introduced to West Africa on live animals originating from Brazil. Knowing the precise environmental suitability for the tick would allow veterinary health officials to draft vector control strategies for different regions of the country. To test the performance of modelling algorithms and different sets of environmental explanatory variables, species distribution models for this tick species in Benin were developed using generalized linear models, linear discriminant analysis and random forests. The training data for these models were a dataset containing reported absence or presence in 104 farms, randomly selected across Benin. These farms were sampled at the end of the rainy season, which corresponds with an annual peak in tick abundance. Two environmental datasets for the country of Benin were compared: one based on interpolated climate data (WorldClim) and one based on remotely sensed images (MODIS). The pixel size for both environmental datasets was 1 km. Highly suitable areas occurred mainly along the warmer and humid coast extending northwards to central Benin. The northern hot and drier areas were found to be unsuitable. The models developed and tested on data from the entire country were generally found to perform well, having an AUC value greater than 0.92. Although statistically significant, only small differences in accuracy measures were found between the modelling algorithms, or between the environmental datasets. The resulting risk maps differed nonetheless. Models based on interpolated climate suggested gradual variations in habitat suitability, while those based on remotely sensed data indicated a sharper contrast between suitable and unsuitable areas, and a patchy distribution of the suitable areas. Remotely sensed data yielded more spatial detail in the predictions. When computing accuracy measures on a subset of data along the invasion front, the modelling technique Random Forest outperformed the other modelling approaches, and results with MODIS-derived variables were better than those using WorldClim data. The high environmental suitability for R. microplus in the southern half of Benin raises concern at the regional level for animal health, including its potential to substantially alter transmission risk of Babesia bovis. The northern part of Benin appeared overall of low environmental suitability. Continuous surveillance in the transition zone however remains relevant, in relation to important cattle movements in the region, and to the invasive character of R. microplus. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
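A minimal sketch of the kind of presence/absence modelling and AUC evaluation described above is given below, using scikit-learn with synthetic farm-level data; it covers only two of the three algorithms compared (a logistic GLM and a random forest) and uses made-up covariates rather than the WorldClim or MODIS variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical farm-level data: rows = farms, columns = environmental covariates
rng = np.random.default_rng(1)
X = rng.normal(size=(104, 6))                   # e.g. temperature, rainfall, NDVI...
p = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 1])))
y = rng.binomial(1, p)                          # tick presence/absence

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("GLM (logistic)", LogisticRegression(max_iter=1000)),
                    ("Random forest", RandomForestClassifier(n_estimators=500, random_state=0))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```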
Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.
2015-01-01
Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
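A minimal sketch of regression on SVD-transformed predictors with ridge-type shrinkage is shown below; the paper's actual model is Bayesian, with priors and a noise model not reproduced here, and the LiDAR metrics and plot data are synthetic.

```python
import numpy as np

def svd_ridge(X, y, lam=1.0):
    """Ridge-type regression computed through the SVD of the centred predictor
    matrix; shrinkage handles multicollinearity and the p > n case."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    shrink = s / (s**2 + lam)            # shrink each singular-value direction
    beta = Vt.T @ (shrink * (U.T @ yc))
    intercept = y.mean() - X.mean(axis=0) @ beta
    return beta, intercept

# Hypothetical plot-level data: many LiDAR metrics, few field plots
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 80))            # 50 field plots, 80 LiDAR predictors
y = X[:, :3] @ np.array([5.0, -2.0, 1.5]) + rng.normal(0, 1, 50)  # e.g. biomass
beta, b0 = svd_ridge(X, y, lam=10.0)
print("first five coefficients:", np.round(beta[:5], 3))
```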
NASA Astrophysics Data System (ADS)
Oh, Sungmin; Hohmann, Clara; Foelsche, Ulrich; Fuchsberger, Jürgen; Rieger, Wolfgang; Kirchengast, Gottfried
2017-04-01
WegenerNet Feldbach region (WEGN), a pioneering experiment for weather and climate observations, has recently completed its first 10-year precipitation measurement cycle. The WEGN has measured precipitation, temperature, humidity, and other parameters since the beginning of 2007, supporting local-level monitoring and modeling studies, over an area of about 20 km x 15 km centered near the City of Feldbach (46.93 ˚ N, 15.90 ˚ E) in the Alpine forelands of southeast Austria. As of August 2016, all 151 stations in the network are equipped with high-quality Meteoservis sensors, replacing the Friedrichs sensors previously used at most stations, and continue to provide high-resolution (2 km2/5-min) gauge-based precipitation measurements for interested users in hydro-meteorological communities. Here we will present overall characteristics of the WEGN, with a focus on sub-daily precipitation measurements, from the data processing (data quality control, gridded data product generation, etc.) to data applications (e.g., ground validation of satellite estimates). The latter includes our recent study on the propagation of uncertainty from rainfall to runoff. The study assesses responses of small-catchment runoff to spatial rainfall variability in the WEGN region over the Raab valley, using a physics-based distributed hydrological model, the Water Flow and Balance Simulation Model (WaSiM), developed at ETH Zurich (Schulla, 1997). Given that uncertainty due to the resolution of rainfall measurements is believed to be a significant source of error in hydrologic modeling, especially for the convective rainfall that dominates in the region during summer, the high resolution of the WEGN data furnishes a great opportunity to analyze effects of rainfall events on the runoff at different spatial resolutions. Furthermore, the assessment can be conducted not only for the lower Raab catchment (area of about 500 km2) but also for its sub-catchments (areas of about 30-70 km2). Besides the question of how many stations are necessary for reliable hydrological modeling, different interpolation methods such as inverse distance interpolation, elevation-dependent regression, and combinations of the two will be tested. This presentation will show the first results from a scale-dependent analysis of spatial and temporal structures of heavy rainfall events and responses of simulated runoff at the event scale in the WEGN region.
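As a minimal illustration of one of the interpolation methods mentioned above (inverse distance weighting), the sketch below interpolates station rainfall onto arbitrary target points; coordinates and rainfall amounts are hypothetical, and no elevation-dependent regression is included.

```python
import numpy as np

def idw(stations_xy, values, targets_xy, power=2.0):
    """Inverse-distance-weighted interpolation of station rainfall onto
    target points (one of the interpolation methods mentioned above)."""
    d = np.linalg.norm(targets_xy[:, None, :] - stations_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)              # avoid division by zero at a station
    w = 1.0 / d**power
    return (w @ values) / w.sum(axis=1)

# Hypothetical station coordinates (km) and 5-min rainfall sums (mm)
stations = np.array([[0.0, 0.0], [2.0, 1.0], [1.0, 3.0], [4.0, 2.0]])
rain = np.array([0.4, 1.2, 0.8, 0.1])
targets = np.array([[1.0, 1.0], [3.0, 3.0]])
print(idw(stations, rain, targets))
```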
NASA Astrophysics Data System (ADS)
Liu, Rongjie; Zhang, Jie; Yao, Haiyan; Cui, Tingwei; Wang, Ning; Zhang, Yi; Wu, Lingjuan; An, Jubai
2017-09-01
In this study, we monitored hourly changes in sea surface salinity (SSS) in turbid coastal waters from geostationary satellite ocean color images for the first time, using the Bohai Sea as a case study. We developed a simple multi-linear statistical regression model to retrieve SSS data from the Geostationary Ocean Color Imager (GOCI) based on an in situ satellite matched-up dataset (R2 = 0.795; N = 41; Range: 26.4 to 31.9 psu). The model was then validated using independent continuous SSS measurements from buoys, with an average percentage difference of 0.65%. The model was applied to GOCI images from the dry season during an astronomical tide to characterize hourly changes in SSS in the Bohai Sea. We found that the model provided reasonable estimates of the hourly changes in SSS and that trends in the modeled and measured data were similar in magnitude and direction (0.43 vs 0.33 psu, R2 = 0.51). There were clear diurnal variations in the SSS of the Bohai Sea, with a regional average of 0.455 ± 0.079 psu (0.02-3.77 psu). The magnitude of the diurnal variations in SSS varied spatially, with large diurnal variability in the nearshore, particularly in the estuary, and small variability in the offshore area. The model for the riverine area was based on the inverse correlation between SSS and CDOM absorption. In the offshore area, the water mass of the North Yellow Sea, characterized by high SSS and low CDOM concentrations, dominated. Analysis of the driving mechanisms showed that the tidal current was the main control on hourly changes in SSS in the Bohai Sea.
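A minimal sketch of fitting a multi-linear regression retrieval of SSS from satellite reflectances is shown below; the band set, coefficients and matchup data are synthetic and are not those of the GOCI model developed in the study.

```python
import numpy as np

# Hypothetical matchup data: remote-sensing reflectances (sr^-1) at three bands
# plus in situ SSS (psu); the actual band combination used is not shown here.
rng = np.random.default_rng(3)
n = 41
Rrs = rng.uniform(0.002, 0.02, size=(n, 3))          # e.g. blue, green, red bands
sss = 31.5 - 120.0 * Rrs[:, 0] + 60.0 * Rrs[:, 1] + rng.normal(0, 0.3, n)

# Multi-linear regression: SSS = b0 + b1*R1 + b2*R2 + b3*R3
A = np.column_stack([np.ones(n), Rrs])
coef, *_ = np.linalg.lstsq(A, sss, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((sss - pred) ** 2) / np.sum((sss - sss.mean()) ** 2)
print("coefficients:", np.round(coef, 2), " R^2 =", round(r2, 3))
```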
Fuzzy Current-Mode Control and Stability Analysis
NASA Technical Reports Server (NTRS)
Kopasakis, George
2000-01-01
In this paper a current-mode control (CMC) methodology is developed for a buck converter by using a fuzzy logic controller. Conventional CMC methodologies are based on lead-lag compensation with voltage and inductor current feedback. In this paper the converter lead-lag compensation will be substituted with a fuzzy controller. A small-signal model of the fuzzy controller will also be developed in order to examine the stability properties of this buck converter control system. The paper develops an analytical approach, introducing fuzzy control into the area of CMC.
Cookson, Richard; Laudicella, Mauro; Donni, Paolo Li
2012-10-01
This study developed a method for measuring change in socio-economic equity in health care utilisation using small-area level administrative data. Our method provides more detailed information on utilisation than survey data but only examines socio-economic differences between neighbourhoods rather than individuals. The context was the English NHS from 2001 to 2008, a period of accelerated expenditure growth and pro-competition reform. Hospital records for all adults receiving non-emergency hospital care in the English NHS from 2001 to 2008 were aggregated to 32,482 English small areas with mean population about 1500 and combined with other small-area administrative data. Regression models of utilisation were used to examine year-on-year change in the small-area association between deprivation and utilisation, allowing for population size, age-sex composition and disease prevalence including (from 2003 to 2008) cancer, chronic kidney disease, coronary heart disease, diabetes, epilepsy, hypertension, hypothyroidism, stroke, transient ischaemic attack and (from 2006 to 2008) atrial fibrillation, chronic obstructive pulmonary disease, obesity and heart failure. There was no substantial change in small-area associations between deprivation and utilisation for outpatient visits, hip replacement, senile cataract, gastroscopy or coronary revascularisation, though overall non-emergency inpatient admissions rose slightly faster in more deprived areas than elsewhere. Associations between deprivation and disease prevalence changed little during the period, indicating that observed need did not grow faster in more deprived areas than elsewhere. We conclude that there was no substantial deterioration in socio-economic equity in health care utilisation in the English NHS from 2001 to 2008, and if anything, there may have been a slight improvement. Copyright © 2012 Elsevier Ltd. All rights reserved.
Cubesat Constellation Design for Air Traffic Monitoring
NASA Technical Reports Server (NTRS)
Nag, Sreeja; Rios, Joseph Lucio; Gerhardt, David; Pham, Camvu
2015-01-01
Global and local air traffic from suitably equipped aircraft can be tracked by receiving the Automatic Dependent Surveillance-Broadcast (ADS-B) signal, and the tracking information may then be used for control from ground-based stations. The ADS-B signal, emitted from the aircraft's Mode-S transponder, is currently tracked by terrestrial receivers but not over remote oceans or sparsely populated regions such as Alaska or the Pacific Ocean. Lack of real-time aircraft time/location information in remote areas significantly hinders optimal planning and control because bigger "safety bubbles" (lateral and vertical separation) are required around the aircraft until they reach radar-controlled airspace. Moreover, it presents a search-and-rescue bottleneck. Aircraft in distress, e.g. Air France AF447 that crashed in 2009, take days to be located or cannot be located at all, e.g. Malaysia Airlines MH370 in 2014. In this paper, we describe a tool for designing a constellation of small satellites which demonstrates, through high-fidelity modeling based on simulated air traffic data, the value of space-based ADS-B monitoring and provides recommendations for cost-efficient deployment of a constellation of small satellites to increase safety and situational awareness in the currently poorly-served surveillance area of Alaska. Air traffic data have been obtained from the Future ATM Concepts Evaluation Tool (FACET), developed at NASA Ames Research Center, simulated over the Alaskan airspace over a period of one day. The simulation is driven by MATLAB, with satellites propagated and coverage calculated using AGI's Satellite Tool Kit (STK 10).
NASA Astrophysics Data System (ADS)
Sayres, D. S.; Dobosy, R.; Healy, C. E.; Dumas, E. J.; Kochendorfer, J.; Munster, J. B.; Wilkerson, J.; Baker, B.; Anderson, J. G.
2016-12-01
The Arctic terrestrial and subsea permafrost region contains approximately 30% of the global carbon stock and therefore understanding Arctic methane emissions and how they might change with a changing climate is important for quantifying the global methane budget and understanding its growth in the atmosphere. Here we present measurements from a new in situ flux observation system designed for use on a small, low-flying aircraft that flew over the North Slope of Alaska during August of 2013. The system combines a small methane instrument based on Integrated Cavity Output Spectroscopy (ICOS) with an air turbulence probe to calculate methane fluxes based on eddy covariance. Surface fluxes are grouped by ecotope using a map based on LandSat 30 meter resolution data. We find that wet sedge areas dominate the methane fluxes during the first part of August, with methane emissions from the Sagavanirktok river being the second highest. We compare the aircraft measurements with an eddy covariance flux tower located in a wet sedge area and show that the two measurements agree quantitatively when the footprints of both overlap. However, fluxes from sedge vary at times by a factor of two or more even within a few kilometers of the tower demonstrating the importance of making regional measurements to map out methane emission spatial heterogeneity. Aircraft measurements of surface flux can play an important role in bridging the gap between ground-based measurements and regional measurements from remote sensing instruments and models.
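At its core, the eddy-covariance flux in such a system is the mean product of the fluctuations of vertical wind speed and methane concentration over an averaging interval. The sketch below shows only that covariance step, without the coordinate rotation, detrending, density (WPL) and spectral corrections a real processing chain applies; all data are synthetic.

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Eddy-covariance flux as the mean product of fluctuations of vertical
    wind speed (w, m/s) and methane concentration or density (c); no density
    or spectral corrections are applied in this sketch."""
    w_prime = w - np.mean(w)
    c_prime = c - np.mean(c)
    return np.mean(w_prime * c_prime)

# Hypothetical 10 Hz samples over one averaging segment
rng = np.random.default_rng(4)
w = rng.normal(0.0, 0.3, 6000)                    # vertical wind (m/s)
c = 1.9 + 0.05 * w + rng.normal(0, 0.01, 6000)    # CH4 (e.g. mg/m^3), correlated with w
print("flux =", round(eddy_covariance_flux(w, c), 4), "(units of c times m/s)")
```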
The Role of Small Impoundments on Flow Alteration Within River Networks
NASA Astrophysics Data System (ADS)
Brogan, C. O.; Keys, T.; Scott, D.; Burgholzer, R.; Kleiner, J.
2017-12-01
Numerous water quality and quantity models have been established to illustrate the ecologic and hydrologic effects of large reservoirs. Smaller, unregulated ponds are often assumed to have a negligible impact on watershed flow regimes even though they overwhelmingly outnumber larger waterbodies. Individually, these small impoundments impart merely a fraction of the flow alteration larger reservoirs do; however, a network of ponds may act cumulatively to alter the flow regime. Many models have attempted to study smaller impoundments but rely on selectively available rating curves or bathymetry surveys. This study created a generalized process to model impoundments of varying size across a 58 square mile watershed exclusively using satellite imagery and publicly available information as inputs. With information drawn from public Army Corps of Engineers databases and LiDAR surveys, it was found that impoundment surface and drainage area served as useful explanatory variables, capable of predicting both pond bathymetry and outlet structure area across the 37 waterbodies modeled within the study area. Working within a flow routing model with inputs from the Chesapeake Bay HSPF model and verified with USGS gauge data, flow simulations were conducted with increasing number of impoundments to quantify how small ponds affect the overall flow regime. As the total impounded volume increased, simulations showed a notable reduction in both low and peak flows. Medium-sized floods increased as the network of ponds and reservoirs stabilized the catchment's streamflow. The results of this study illustrate the importance of including ponded waters into river corridor models to improve downstream management of both water quantity and quality.
Relativistic GLONASS and geodesy
NASA Astrophysics Data System (ADS)
Mazurova, E. M.; Kopeikin, S. M.; Karpik, A. P.
2016-12-01
GNSS technology is playing a major role in civil, industrial and scientific applications. Nowadays, there are two fully functional GNSS: the American GPS and the Russian GLONASS. Their data processing algorithms have historically been based on the Newtonian theory of space and time, with only a few relativistic effects taken into account as small corrections that prevent the system from degrading over fairly long time spans. The continuously growing accuracy of geodetic measurements and atomic clocks suggests reconsidering the overall approach to the GNSS theoretical model, basing it on the Einstein theory of general relativity. This is an essentially more challenging but fundamentally consistent theoretical approach to relativistic space geodesy. In this paper, we overview the basic principles of the relativistic GNSS model and explain the advantages of such a system for GLONASS and other positioning systems. Keywords: relativistic GLONASS, Einstein theory of general relativity.
Gillis, Lucy G; Ziegler, Alan D; van Oevelen, Dick; Cathalot, Cecile; Herman, Peter M J; Wolters, Jan W; Bouma, Tjeerd J
2014-01-01
Ecosystems in the tropical coastal zone exchange particulate organic matter (POM) with adjacent systems, but differences in this function among ecosystems remain poorly quantified. Seagrass beds are often a relatively small section of this coastal zone, but have a potentially much larger ecological influence than suggested by their surface area. Using stable isotopes as tracers of oceanic, terrestrial, mangrove and seagrass sources, we investigated the origin of particulate organic matter in nine mangrove bays around the island of Phuket (Thailand). We used a linear mixing model based on bulk organic carbon, total nitrogen, δ13C and δ15N and found that oceanic sources dominated suspended particulate organic matter samples along the mangrove-seagrass-ocean gradient. Sediment trap samples showed contributions from all four sources (oceanic, mangrove forest, terrestrial and seagrass beds), with oceanic having the strongest contribution and seagrass beds the smallest. Based on ecosystem area, however, the contribution of suspended particulate organic matter derived from seagrass beds was disproportionally high relative to the areas occupied by mangrove forests, the terrestrial catchment and the seagrass beds themselves. The contribution from mangrove forests was approximately equal to their surface area, whereas terrestrial sources contributed less than expected given their relative catchment area. Interestingly, mangrove forest contribution at 0 m on the transects showed a positive relationship with the exposed frontal width of the mangrove, indicating that mangrove forest exposure to hydrodynamic energy may be a controlling factor in mangrove outwelling. However, we found no relationship between seagrass bed contribution and any of the physical factors we measured. Our results indicate that although seagrass beds occupy a relatively small area of the coastal zone, their role in the export of organic matter is disproportional and should be considered in coastal management, especially with respect to their importance as a nutrient source for other ecosystems and organisms.
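The bulk two-tracer mixing calculation can be sketched as a constrained least-squares problem: source fractions between 0 and 1 that reproduce the sample's δ13C and δ15N and sum to one. With four sources and two tracers the system is underdetermined, which the study's mixing model addresses more formally; the end-member signatures and sample values below are hypothetical.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical end-member signatures (columns: ocean, terrestrial, mangrove, seagrass)
d13C = np.array([-21.0, -28.0, -29.0, -12.0])
d15N = np.array([  6.0,   4.0,   3.0,   5.0])
sample = np.array([-22.5, 5.4])        # measured d13C, d15N of a POM sample

# Rows: d13C balance, d15N balance, mass balance (fractions sum to 1, weighted heavily)
A = np.vstack([d13C, d15N, 50.0 * np.ones(4)])
b = np.array([sample[0], sample[1], 50.0])
res = lsq_linear(A, b, bounds=(0.0, 1.0))
fractions = res.x / res.x.sum()        # renormalise in case the sum is not exactly 1
print(dict(zip(["ocean", "terrestrial", "mangrove", "seagrass"], np.round(fractions, 2))))
```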
SINMAP Modeling of an active landslide area in the Swabian Alb
NASA Astrophysics Data System (ADS)
Terhorst, Birgit; Jaeger, Daniel
2015-04-01
Landslides are a common hazard in German low mountain areas such as the Swabian Alb. As areas of former landslides are highly prone to secondary movements, this study aims to assess landslide susceptibility around Mössingen-Öschingen, a region repeatedly affected by landslides during recent decades. Based on the history and development of mass movements and a detailed geomorphological map, slope stability was calculated using SINMAP (Stability Index Mapping). SINMAP (Pack et al., 1998; Tarboton, 1997) is based on the "infinite slope stability model" by Hammond et al. (1992) and Montgomery and Dietrich (1994), which describes the ratio of slope-stabilizing factors (e.g. cohesion) to slope-destabilizing factors (e.g. gravitation) on a slip surface parallel to the slope. Most input parameters are determined by the relief and can therefore be calculated from a digital terrain model (DTM, resolution 5 m). Based on the local morphology and geology, a total of 10 'calibration regions', each with similar hydrogeological characteristics, were defined. Further input parameters were shear strength via the friction angle (Phi), cohesion (C) and hydraulic conductivity (T/R). The data were obtained from soil mechanical assessments and field/laboratory analyses. As a result, a specific stability index is calculated, describing the susceptibility to slope movement. In a first step, the 'topographic wetness index' (derived from catchment area, slope gradient and hydraulic conductivity) was calculated. Results show several preferred (natural) drainage channels with generally higher water saturations in morphological depressions. Several of them can be linked to the location of damaged houses in the settlement area on the lower slope. The SINMAP calculation clearly revealed the impermeable Callovian clay layers as most prone to slope movements. A comparison of the susceptibility map with slide masses which were mapped during a field survey showed generally good agreement. This was particularly true for the slopes of the "Landhaussiedlung", a small settlement area east of Mössingen-Öschingen. In the uphill areas, a large landslide was triggered on June 3rd, 2013, mainly caused by heavy rainfall during the preceding days. The scarp/slip surface was situated in the Callovian clay layers and in an area that had earlier been identified as susceptible to slope movements by the SINMAP model (Terhorst and Kreja, 2009). The movement processes reactivated an old slide mass, which reached the outermost parts of the settlement area and damaged the ground beneath the densely built-up Landhaussiedlung. Although no house was destroyed completely by the slide mass, the induced pressure caused severe damage, rendering the buildings uninhabitable and leading to the evacuation of the Landhaussiedlung. The results show that the modeling provided a solid identification of the vulnerable slope areas. The recent landslide area is almost completely situated in a region modeled as vulnerable to slope movements. Therefore, the landslide event of 2013 practically validated the susceptibility map. On the basis of solid data and detailed, differentiated information, SINMAP is a powerful tool for the assessment of susceptibility to translational slides. Hammond, C., Hall, D., Miller, S., Swetik, P., 1992. Level I Stability Analysis (LISA) documentation for version 2.0. General Technical Report, INT-285. U.S. Department of Agriculture, Forest Service, Intermountain Research Station, Ogden.
Montgomery, D.R., Dietrich, W.E., 1994. A Physically Based Model for the Topographic Control on Shallow Landsliding. Water Resources Research, 30(4), 1153-1171. Pack, R.T., Tarboton, D.G., Goodwin, C.N., 1998. The SINMAP approach to terrain stability mapping, 8th Congress of the International Association of Engineering Geology, Vancouver, Canada, pp. 8. Tarboton, D.G., 1997. A new method for the determination of flow directions and upslope areas in grid digital elevation models. Water Resources Research, 33(2), 309-319. Terhorst, B., Kreja, R., 2009. Slope stability modelling with SINMAP in a settlement area of the Swabian Alb. Landslides, 6(4), 309-319.
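The stability index above rests on the infinite-slope factor of safety. A minimal sketch of a SINMAP-style calculation (following the general form in Pack et al., 1998) is given below; all parameter values are illustrative and are not the calibrated values used for the Mössingen-Öschingen regions.

```python
import numpy as np

def sinmap_factor_of_safety(slope_rad, spec_catchment, C, phi_deg, T_over_R, r=0.5):
    """Infinite-slope factor of safety in the general SINMAP form (Pack et al., 1998).
    C is the dimensionless combined cohesion, phi the friction angle, T_over_R the
    ratio of soil transmissivity to steady-state recharge (m), spec_catchment the
    specific catchment area a (m), and r the water-to-soil density ratio."""
    # Relative wetness from the topographic wetness index, capped at saturation
    w = np.minimum(spec_catchment / (T_over_R * np.sin(slope_rad)), 1.0)
    tan_phi = np.tan(np.radians(phi_deg))
    fs = (C + np.cos(slope_rad) * (1.0 - w * r) * tan_phi) / np.sin(slope_rad)
    return fs, w

slope = np.radians(np.array([10.0, 20.0, 30.0]))   # slope angles
a = np.array([50.0, 200.0, 800.0])                 # specific catchment area (m)
fs, w = sinmap_factor_of_safety(slope, a, C=0.1, phi_deg=22.0, T_over_R=1500.0)
print("wetness:", np.round(w, 2), " FS:", np.round(fs, 2))
```

Cells with FS below 1 under the assumed parameter ranges map to the "vulnerable" classes of the susceptibility map discussed above.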
Small ICBM area narrowing report. Volume 1. Hard mobile launcher in random movement basing mode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The purpose of this report is to identify those areas that could potentially support deployment of the Small Intercontinental Ballistic Missile (ICBM) utilizing basing modes presently considered viable: the Hard Mobile Launcher in Random Movement, the Hard Mobile Launcher at Minuteman Facilities, and the Hard Silo in Patterned Array. Specifically, this report describes the process and the rationale supporting the application of Exclusionary and Evaluative Criteria and lists those locations that were eliminated through the application of these criteria. The remaining locations will be the subject of further investigations.
Small ICBM area narrowing report. Volume 2. Hard mobile launcher at minuteman facilities basing mode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The purpose of this report is to identify those areas that could potentially support deployment of the Small Intercontinental Ballistic Missile (ICBM) utilizing basing modes presently considered viable: the Hard Mobile Launcher in Random Movement, the Hard Mobile Launcher at Minuteman Facilities, and the Hard Silo in Patterned Array. Specifically, this report describes the process and the rationale supporting the application of Exclusionary and Evaluative Criteria and lists those locations that were eliminated through the application of these criteria. The remaining locations will be the subject of further investigations.
Eng, Lars; Nygren-Babol, Linnéa; Hanning, Anders
2016-10-01
Surface plasmon resonance (SPR) is a well-established method for studying interactions between small molecules and biomolecules. In particular, SPR is being increasingly applied within fragment-based drug discovery; however, within this application area, the limited sensitivity of SPR may constitute a problem. This problem can be circumvented by the use of label-enhanced SPR, which shows a 100-fold higher sensitivity as compared with conventional SPR. Truly label-free interaction data for small molecules can be obtained by applying label-enhanced SPR in a surface competition assay format. The enhanced sensitivity is accompanied by an increased specificity and inertness toward disturbances (e.g., bulk refractive index disturbances). Label-enhanced SPR can be used for fragment screening in a competitive assay format; the competitive format has the added advantage of confirming the specificity of the molecular interaction. In addition, label-enhanced SPR extends the accessible kinetic regime of SPR to the analysis of very fast fragment binding kinetics. In this article, we demonstrate the working principles and benchmark the performance of label-enhanced SPR in a model system: the interaction between carbonic anhydrase II and a number of small-molecule sulfonamide-based inhibitors. Copyright © 2016 Elsevier Inc. All rights reserved.
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.
Lin, Johnny; Bentler, Peter M
2012-01-01
Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and the Satorra-Bentler mean-scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.
Alberti, Luca; Colombo, Loris; Formentin, Giovanni
2018-04-15
The Lombardy Region in Italy is one of the most urbanized and industrialized areas in Europe. The presence of countless sources of groundwater pollution is therefore a matter of environmental concern. The sources of groundwater contamination can be classified into two different categories: 1) Point Sources (PS), which correspond to areas releasing plumes of high concentrations (i.e. hot-spots), and 2) Multiple-Point Sources (MPS), consisting of a series of unidentifiable small sources clustered within large areas and generating anthropogenic diffuse contamination. The latter category frequently predominates in European Functional Urban Areas (FUA) and cannot be managed through standard remediation techniques, mainly because detecting the many different source areas releasing small contaminant mass into groundwater is unfeasible. A specific legislative action has recently been enacted at the regional level (DGR IX/3510-2012) in order to identify areas prone to anthropogenic diffuse pollution and their level of contamination. With a view to defining a management plan, it is necessary to find where MPS are most likely positioned. This paper describes a methodology devised to identify the areas with the highest likelihood of hosting potential MPS. A groundwater flow model was implemented for a pilot area located in the Milan FUA, and through the PEST code a Null-Space Monte Carlo method was applied in order to generate a suite of several hundred hydraulic conductivity field realizations, each maintaining the model in a calibrated state and each consistent with the modelers' expert knowledge. Thereafter, the MODPATH code was applied to generate back-traced advective flowpaths for each of the models built using the conductivity field realizations. Maps were then created displaying the number of backtracked particles that crossed each model cell in each stochastic calibrated model. The result is considered to be representative of the FUA areas with the highest likelihood of hosting MPS responsible for diffuse contamination. Copyright © 2017 Elsevier B.V. All rights reserved.
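The final mapping step described above amounts to counting, per model cell, how many back-traced particles cross it across all stochastic realizations. A minimal sketch of that counting step is shown below, with synthetic particle positions standing in for MODPATH pathline output.

```python
import numpy as np

# Hypothetical back-traced particle positions (x, y in model coordinates) pooled
# over all stochastic calibrated models; in the actual workflow these come from
# MODPATH pathlines for each Null-Space Monte Carlo realization.
rng = np.random.default_rng(5)
particles_xy = rng.normal(loc=[2500.0, 1800.0], scale=[600.0, 400.0], size=(20000, 2))

# Count crossings per model cell on a regular grid (cell size 100 m)
x_edges = np.arange(0, 5001, 100)
y_edges = np.arange(0, 4001, 100)
counts, _, _ = np.histogram2d(particles_xy[:, 0], particles_xy[:, 1],
                              bins=[x_edges, y_edges])

# Cells with the highest counts are interpreted as most likely to host MPS
likelihood = counts / counts.max()
ix, iy = np.unravel_index(np.argmax(counts), counts.shape)
print("most crossed cell (x, y indices):", ix, iy, " relative likelihood:", likelihood.max())
```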
Small intestinal model for electrically propelled capsule endoscopy
2011-01-01
The aim of this research is to propose a small intestine model for electrically propelled capsule endoscopy. The electrical stimulus can cause contraction of the small intestine and propel the capsule along the lumen. The proposed model considered the drag and friction from the small intestine using a thin-walled model and Stokes' drag equation. Further, the contraction force from the small intestine was modeled using regression analysis. From the proposed model, the acceleration and velocity of various exterior shapes of capsule were calculated, and two exterior shapes of capsules were proposed based on the internal volume of the capsules. The proposed capsules were fabricated and animal experiments were conducted. One of the proposed capsules showed an average (SD) velocity of 2.91 ± 0.99 mm/s in the forward direction and 2.23 ± 0.78 mm/s in the backward direction, which was 5.2 times faster than that obtained in previous research. The proposed model can predict locomotion of the capsule based on various exterior shapes of the capsule. PMID:22177218
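The drag term in such a model can be illustrated with Stokes' drag equation, F = 6πμrv. The values below are hypothetical, and the thin-walled friction and contraction-force terms of the paper's model are not included.

```python
import numpy as np

def stokes_drag(viscosity, radius, velocity):
    """Stokes' drag on a small sphere-like body, F = 6*pi*mu*r*v; the paper
    combines a drag term of this form with a thin-walled friction model and a
    regression-based contraction force, which are not reproduced here."""
    return 6.0 * np.pi * viscosity * radius * velocity

# Hypothetical values: intestinal fluid viscosity (Pa*s), capsule radius (m),
# capsule velocity (m/s, about 3 mm/s as reported above)
mu, r, v = 1.0e-3, 5.5e-3, 2.9e-3
print(f"Stokes drag ~ {stokes_drag(mu, r, v):.2e} N")
```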
Andersen, Synnøve Thomassen; Jansen, Arild
2012-01-01
The paper addresses an ICT-based, user-driven innovation process in the health sector in rural areas of Norway. The empirical base is the introduction of a new model for psychiatric health provision. This model is supported by a technical solution based on mobile phones that is aimed at helping communication between professional health personnel and patients. This innovation was made possible through the use of standard mobile technology rather than more sophisticated systems. The users were heavily involved in the development work. Our analysis shows that thinking in terms of simple, small-scale solutions, taking the users' needs and premises as the point of departure rather than focusing on advanced technology, made the implementation process possible. We show that by combining theory on information infrastructures, user-oriented system development, and innovation in a three-layered analytical framework, we can explain the interrelationship between the technical, organizational, and health professional factors that made this innovation a success. PMID:23304134
Application of Persistent Scatterer Radar Interferometry to the New Orleans delta region
NASA Astrophysics Data System (ADS)
Lohman, R.; Fielding, E.; Blom, R.
2007-12-01
Subsidence in New Orleans and along the Gulf Coast is currently monitored using a variety of ground- and satellite-based methods, and extensive geophysical modeling of the area seeks to understand the inputs to subsidence rates from sediment compaction, salt evacuation, oxidation and anthropogenic forcings such as the withdrawal or injection of subsurface fluids. Better understanding of the temporal and spatial variability of these subsidence rates can help us improve civic planning and disaster mitigation efforts with the goal of protecting lives and property over the long term. Existing ground-based surveys indicate that subsidence gradients of up to 1 cm/yr or more over length scales of several 10's of km exist in the region, especially in the vicinity of the city of New Orleans. Modeling results based on sediment inputs and post-glacial sea level change tend to predict lower gradients, presumably because there is a large input from unmodeled crustal faults and anthropogenic activity. The broad spatial coverage of InSAR can both add to the existing network of ground-based geodetic surveys, and can help to identify areas that are deforming anomalously with respect to surrounding areas. Here we present the use of a modified point scatterer method applied to radar data from the Radarsat satellite for New Orleans and the Gulf Coast. Point target analysis of InSAR data has already been successfully applied to the New Orleans area by Dixon et al (2006). Our method is similar to the Stanford Method for PS (StaMPS) developed by Andy Hooper, adapted to rely on combinations of small orbital baselines and the inclusion of coherent regions from the time span of each interferogram during phase unwrapping rather than only using points that are stable within all interferograms.
Heterogeneity and scaling land-atmospheric water and energy fluxes in climate systems
NASA Technical Reports Server (NTRS)
Wood, Eric F.
1993-01-01
The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many of the climatology research experiments. The acquisition of high resolution land surface data through remote sensing and intensive land-climatology field experiments (like HAPEX and FIFE) has provided data to investigate the links between microscale land-atmosphere interactions and macroscale models. One essential research question is how to account for the small-scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, three modeling experiments were performed and are reviewed in the paper. The first is concerned with the aggregation of parameters and inputs for a terrestrial water and energy balance model. The second experiment analyzed the scaling behavior of hydrologic responses during rain events and between rain events. The third experiment compared the hydrologic responses from distributed models with a lumped model that uses spatially constant inputs and parameters. The results show that the patterns of small scale variations can be represented statistically if the scale is larger than a representative elementary area scale, which appears to be about 2 - 3 times the correlation length of the process. For natural catchments this appears to be about 1 - 2 sq km. The results concerning distributed versus lumped representations are more complicated. For conditions when the processes are nonlinear, lumping results in biases; otherwise a one-dimensional model based on 'equivalent' parameters provides quite good results. Further research is needed to fully understand these conditions.
Simulation of population-based commuter exposure to NO₂ using different air pollution models.
Ragettli, Martina S; Tsai, Ming-Yi; Braun-Fahrländer, Charlotte; de Nazelle, Audrey; Schindler, Christian; Ineichen, Alex; Ducret-Stich, Regina E; Perez, Laura; Probst-Hensch, Nicole; Künzli, Nino; Phuleria, Harish C
2014-05-12
We simulated commuter routes and long-term exposure to traffic-related air pollution during commute in a representative population sample in Basel (Switzerland), and evaluated three air pollution models with different spatial resolution for estimating commute exposures to nitrogen dioxide (NO2) as a marker of long-term exposure to traffic-related air pollution. Our approach includes spatially and temporally resolved data on actual commuter routes, travel modes and three air pollution models. Annual mean NO2 commuter exposures were similar between models. However, we found more within-city and within-subject variability in annual mean (±SD) NO2 commuter exposure with a high resolution dispersion model (40 ± 7 µg m(-3), range: 21-61) than with a dispersion model with a lower resolution (39 ± 5 µg m(-3); range: 24-51), and a land use regression model (41 ± 5 µg m(-3); range: 24-54). Highest median cumulative exposures were calculated along motorized transport and bicycle routes, and the lowest for walking. For estimating commuter exposure within a city and being interested also in small-scale variability between roads, a model with a high resolution is recommended. For larger scale epidemiological health assessment studies, models with a coarser spatial resolution are likely sufficient, especially when study areas include suburban and rural areas.
NASA Technical Reports Server (NTRS)
Moussavi, Mahsa S.; Abdalati, Waleed; Pope, Allen; Scambos, Ted; Tedesco, Marco; MacFerrin, Michael; Grigsby, Shane
2016-01-01
Supraglacial meltwater lakes on the western Greenland Ice Sheet (GrIS) are critical components of its surface hydrology and surface mass balance, and they also affect its ice dynamics. Estimates of lake volume, however, are limited by the availability of in situ measurements of water depth, which in turn also limits the assessment of remotely sensed lake depths. Given the logistical difficulty of collecting physical bathymetric measurements, methods relying upon in situ data are generally restricted to small areas and thus their application to large-scale studies is difficult to validate. Here, we produce and validate spaceborne estimates of supraglacial lake volumes across a relatively large area (1250 km²) of west Greenland's ablation region using data acquired by the WorldView-2 (WV-2) sensor, making use of both its stereo-imaging capability and its meter-scale resolution. We employ spectrally derived depth retrieval models, which are based either on absolute reflectance (single-channel model) or on a ratio of spectral reflectances in two bands (dual-channel model). These models are calibrated using WV-2 multispectral imagery acquired early in the melt season and depth measurements from a high-resolution WV-2 DEM over the same lake basins when devoid of water. The calibrated models are then validated with different lakes in the area, for which we determined depths. Lake depth estimates based on measurements recorded in WV-2's blue (450-510 nm), green (510-580 nm), and red (630-690 nm) bands and dual-channel modes (blue/green, blue/red, and green/red band combinations) had near-zero bias, an average root-mean-squared deviation of 0.4 m (relative to post-drainage DEMs), and an average volumetric error of <1%. The approach outlined in this study - image-based calibration of depth-retrieval models - significantly improves spaceborne supraglacial bathymetry retrievals, which are completely independent of in situ measurements.
Dudaniec, Rachael Y; Worthington Wilmer, Jessica; Hanson, Jeffrey O; Warren, Matthew; Bell, Sarah; Rhodes, Jonathan R
2016-01-01
Landscape genetics lacks explicit methods for dealing with the uncertainty in landscape resistance estimation, which is particularly problematic when sample sizes of individuals are small. Unless uncertainty can be quantified, valuable but small data sets may be rendered unusable for conservation purposes. We offer a method to quantify uncertainty in landscape resistance estimates using multimodel inference as an improvement over single model-based inference. We illustrate the approach empirically using co-occurring, woodland-preferring Australian marsupials within a common study area: two arboreal gliders (Petaurus breviceps and Petaurus norfolcensis) and one ground-dwelling antechinus (Antechinus flavipes). First, we use maximum likelihood and a bootstrap procedure to identify the best-supported isolation-by-resistance model out of 56 models defined by linear and non-linear resistance functions. We then quantify uncertainty in resistance estimates by examining parameter selection probabilities from the bootstrapped data. The selection probabilities provide estimates of uncertainty in the parameters that drive the relationships between landscape features and resistance. We then validate our method for quantifying uncertainty using simulated genetic and landscape data, showing that for most parameter combinations it provides sensible estimates of uncertainty. We conclude that small data sets can be informative in landscape genetic analyses provided uncertainty can be explicitly quantified. Being explicit about uncertainty in landscape genetic models will make results more interpretable and useful for conservation decision-making, where dealing with uncertainty is critical. © 2015 John Wiley & Sons Ltd.
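To make the bootstrap-based selection probabilities concrete, the sketch below resamples the data, picks the AIC-best candidate model in each replicate, and reports how often each candidate is selected. It is a heavily simplified stand-in for the study's approach: it uses ordinary least squares on synthetic pairwise distances (ignoring their non-independence) rather than the maximum-likelihood isolation-by-resistance models, and all variable names and values are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Synthetic stand-ins: pairwise genetic distances and two candidate landscape
# resistance predictors (e.g. "forest" and "roads"); not the marsupial data.
n_pairs = 120
forest_res = rng.uniform(0, 1, n_pairs)
road_res = rng.uniform(0, 1, n_pairs)
gen_dist = 0.8 * forest_res + rng.normal(0, 0.3, n_pairs)   # forest truly matters here

candidates = {
    "forest": forest_res.reshape(-1, 1),
    "roads": road_res.reshape(-1, 1),
    "forest+roads": np.column_stack([forest_res, road_res]),
}

def best_model(idx):
    """Return the name of the AIC-best candidate model on a bootstrap resample."""
    aics = {}
    for name, X in candidates.items():
        fit = sm.OLS(gen_dist[idx], sm.add_constant(X[idx])).fit()
        aics[name] = fit.aic
    return min(aics, key=aics.get)

n_boot = 500
picks = [best_model(rng.integers(0, n_pairs, n_pairs)) for _ in range(n_boot)]
selection_prob = {name: picks.count(name) / n_boot for name in candidates}
print(selection_prob)   # higher values indicate lower uncertainty about that model
```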
Two dimensional modelling of flood flows and suspended sediment transport: the case of Brenta River
NASA Astrophysics Data System (ADS)
D'Alpaos, L.; Martini, P.; Carniello, L.
2003-04-01
The paper deals with numerical modelling of flood waves and suspended sediment in plain river basins. The two-dimensional depth-integrated momentum and continuity equations, modified to take into account the bottom irregularities that strongly affect the hydrodynamics and continuity in partially dry areas (for example, during the first stages of plain flooding and in tidal flows), are solved with a standard Galerkin finite element method using a semi-implicit numerical scheme, considering the role of both the small channel network and the regulation structures in the propagation of the flood wave. Transport of suspended sediment and bed evolution are coupled with the flood propagation through the convection-dispersion equation and Exner's equation. Results of a real case study are presented in which the effects of an extreme flood of the Brenta River (Italy) are examined. The flooded areas (urban and rural) are identified and a mitigation solution based on a diversion channel flowing into the Venice Lagoon is proposed. We show that this solution strongly reduces the flood risk in the downstream areas and can provide an important sediment source to the Venice Lagoon. Finally, preliminary results of the sediment dispersion in the Venice Lagoon are presented.
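For readers unfamiliar with the equations named above, a schematic form of the coupled system (depth-integrated continuity, suspended-sediment convection-dispersion, and Exner bed evolution; the momentum equations are analogous and omitted for brevity) is sketched below. This is the standard textbook form, not necessarily the authors' exact discretized formulation:

```latex
\begin{aligned}
&\frac{\partial h}{\partial t} + \frac{\partial (Uh)}{\partial x} + \frac{\partial (Vh)}{\partial y} = 0
&&\text{(continuity)}\\
&\frac{\partial (hC)}{\partial t} + \frac{\partial (UhC)}{\partial x} + \frac{\partial (VhC)}{\partial y}
  = \frac{\partial}{\partial x}\!\left(hD_x\frac{\partial C}{\partial x}\right)
  + \frac{\partial}{\partial y}\!\left(hD_y\frac{\partial C}{\partial y}\right) + E - D
&&\text{(suspended sediment)}\\
&(1-\lambda_p)\,\frac{\partial z_b}{\partial t} = D - E
&&\text{(Exner)}
\end{aligned}
```

Here h is the flow depth, (U, V) the depth-averaged velocities, C the depth-averaged suspended-sediment concentration, D_x and D_y dispersion coefficients, E and D the entrainment and deposition fluxes, λ_p the bed porosity and z_b the bed elevation.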
A comparison between two simulation models for spread of foot-and-mouth disease.
Halasa, Tariq; Boklund, Anette; Stockmarr, Anders; Enøe, Claes; Christiansen, Lasse E
2014-01-01
Two widely used simulation models of foot-and-mouth disease (FMD) were used in order to compare the models' predictions in terms of disease spread, consequences, and the ranking of the applied control strategies, and to discuss how the way disease spread is modeled affects the predicted outcomes of each model. DTU-DADS (version 0.100) and ISP (version 2.001.11) were used to simulate a hypothetical spread of FMD in Denmark. Actual herd type, movement, and location data from the period 1 October 2006 to 30 September 2007 were used. The models simulated the spread of FMD using three different control scenarios: 1) a basic scenario representing EU and Danish control strategies, 2) pre-emptive depopulation of susceptible herds within a 500 m radius around detected herds, and 3) suppressive vaccination of susceptible herds within a 1,000 m radius around detected herds. Depopulation and vaccination started 14 days following the detection of the first infected herd. Five thousand index herds were selected randomly: 1,000 cattle herds located in high-density cattle areas and 1,000 in low-density cattle areas, 1,000 swine herds located in high-density swine areas and 1,000 in low-density swine areas, and 1,000 sheep herds. Generally, DTU-DADS predicted larger, longer-lasting and costlier epidemics than ISP, except when epidemics started in cattle herds located in high-density cattle areas. ISP favored suppressive vaccination over pre-emptive depopulation, while DTU-DADS was indifferent between the alternative control strategies. Nonetheless, the absolute differences between control strategies were small, so the choice of control strategy during an outbreak is most likely to be based on practical considerations.
NASA Astrophysics Data System (ADS)
Kukkonen, Jaakko; Kangas, Leena; Kauhaniemi, Mari; Sofiev, Mikhail; Aarnio, Mia; Jaakkola, Jouni J. K.; Kousa, Anu; Karppinen, Ari
2018-06-01
Reliable and self-consistent data on air quality are needed for an extensive period of time for conducting long-term, or even lifetime, health impact assessments. We have modelled the urban-scale concentrations of fine particulate matter (PM2.5) in the Helsinki Metropolitan Area for a period of 35 years, from 1980 to 2014. The regional background concentrations were evaluated based on reanalyses of the atmospheric composition on global and European scales, using the SILAM model. The high-resolution urban computations included both the emissions originating from vehicular traffic (separately exhaust and suspension emissions) and those from small-scale combustion, and were conducted using the road network dispersion model CAR-FMI and the multiple-source Gaussian dispersion model UDM-FMI. The modelled concentrations of PM2.5 agreed fairly well with the measured data at a regional background station and at four urban measurement stations during 1999-2014. The modelled concentration trends were also evaluated for earlier years, until 1988, using proxy analyses. There was no systematic deterioration in the agreement between predictions and data for earlier years (the 1980s and 1990s), compared with the results for more recent years (the 2000s and early 2010s). The local vehicular emissions were about 5 times higher in the 1980s than during the latest years considered. The local small-scale combustion emissions increased slightly over time. The highest urban concentrations of PM2.5 occurred in the 1980s; these have since decreased to about half of the highest values. In general, the regional background was the largest contribution in this area. Vehicular exhaust has been the most important local source, but the relative shares of both small-scale combustion and vehicular non-exhaust emissions have increased over time. The study has provided long-term, high-resolution concentration databases on regional and urban scales that can be used for the assessment of health effects associated with air pollution.
The large-scale removal of mammalian invasive alien species in Northern Europe.
Robertson, Peter A; Adriaens, Tim; Lambin, Xavier; Mill, Aileen; Roy, Sugoto; Shuttleworth, Craig M; Sutton-Croft, Mike
2017-02-01
Numerous examples exist of successful mammalian invasive alien species (IAS) eradications from small islands (<10 km²), but few from more extensive areas. We review 15 large-scale removals (mean area 2627 km²) from Northern Europe since 1900, including edible dormouse, muskrat, coypu, Himalayan porcupine, Pallas' and grey squirrels and American mink, each primarily based on daily checking of static traps. Objectives included true eradication or complete removal to a buffer zone, as distinct from other programmes that involved local control to limit damage or spread. Twelve eradication/removal programmes (80%) were successful. Cost increased with and was best predicted by area, while the cost per unit area decreased; the number of individual animals removed did not add significantly to the model. Doubling the area controlled reduced cost per unit area by 10%, but there was no evidence that cost effectiveness had increased through time. Compared with small islands, larger-scale programmes followed similar patterns of effort in relation to area. However, they brought challenges when defining boundaries and consequent uncertainties around costs, the definition of their objectives, confirmation of success and different considerations for managing recolonisation. Novel technologies or increased use of volunteers may reduce costs. Rapid response to new incursions is recommended as best practice rather than large-scale control to reduce the environmental, financial and welfare costs. © 2016 Crown copyright. Pest Management Science published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.
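To make the reported scaling concrete, the short sketch below works through the arithmetic implied by the cost-area relationship under a simple power-law assumption; the exponent is derived from the reported 10% reduction per doubling, not taken from the paper's fitted model.

```python
import numpy as np

# Illustrative only: the review reports that doubling the controlled area
# reduces cost per unit area by ~10%. Under a power-law assumption
# cost_per_area ∝ area**b, that statement implies 2**b = 0.9.
b = np.log2(0.9)             # ≈ -0.152 (exponent for cost per unit area)
total_cost_exponent = 1 + b  # ≈ 0.848  (total cost grows sub-linearly with area)

def relative_cost_per_area(area_ratio):
    """Cost per km² relative to a programme covering the reference area."""
    return area_ratio ** b

print(round(total_cost_exponent, 3))
print(round(relative_cost_per_area(4), 3))  # quadrupling the area -> ~0.81 of the per-km² cost
```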
Functional imaging of small tissue volumes with diffuse optical tomography
NASA Astrophysics Data System (ADS)
Klose, Alexander D.; Hielscher, Andreas H.
2006-03-01
Imaging of dynamic changes in blood parameters, functional brain imaging, and tumor imaging are the most advanced application areas of diffuse optical tomography (DOT). When dealing with the image reconstruction problem one is faced with the fact that near-infrared photons, unlike X-rays, are highly scattered when they traverse biological tissue. Image reconstruction schemes are therefore required that model the light propagation inside biological tissue and predict measurements on the tissue surface. By iteratively changing the tissue parameters until the predictions agree with the real measurements, a spatial distribution of optical properties inside the tissue is found. The optical properties can be related to tissue oxygenation, inflammation, or to the fluorophore concentration of a biochemical marker. If the model of light propagation is inaccurate, the reconstruction process will lead to an inaccurate result as well. Here, we focus on difficulties that are encountered when DOT is employed for functional imaging of small tissue volumes, for example, in cancer studies involving small animals, or in human finger joints for early diagnosis of rheumatoid arthritis. Most of the currently employed image reconstruction methods rely on diffusion theory, which is an approximation to the equation of radiative transfer. However, in the case of small tissue volumes and tissues that contain low-scattering regions, diffusion theory has been shown to be of limited applicability. Therefore, we employ a light propagation model that is based on the equation of radiative transfer, which promises to overcome these limitations.
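As background to the distinction drawn here, the standard time-independent forms of the two light-propagation models are sketched below; these are textbook expressions, not necessarily the exact formulation discretized by the authors:

```latex
\begin{aligned}
&\hat{\Omega}\cdot\nabla\psi(\mathbf{r},\hat{\Omega})
 + \bigl(\mu_a+\mu_s\bigr)\,\psi(\mathbf{r},\hat{\Omega})
 = \mu_s\int_{4\pi} p(\hat{\Omega}\cdot\hat{\Omega}')\,\psi(\mathbf{r},\hat{\Omega}')\,d\Omega'
 + q(\mathbf{r},\hat{\Omega})
 &&\text{(equation of radiative transfer)}\\
&-\nabla\cdot\bigl(D(\mathbf{r})\,\nabla\Phi(\mathbf{r})\bigr) + \mu_a\,\Phi(\mathbf{r}) = Q(\mathbf{r}),
 \qquad D = \frac{1}{3\,(\mu_a+\mu_s')}
 &&\text{(diffusion approximation)}
\end{aligned}
```

Here ψ is the radiance, Φ the fluence, μ_a and μ_s the absorption and scattering coefficients, μ_s' the reduced scattering coefficient, and p the scattering phase function. The diffusion approximation degrades when scattering does not dominate absorption or in low-scattering (void-like) regions, which is precisely the small-volume situation described above.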
NASA Astrophysics Data System (ADS)
Sarzalejo Silva, Sabrina Ester
Understanding the stratigraphic architecture of geologically complex reservoirs, such as the heavy oil deposits of Western Canada, is essential to achieve efficient hydrocarbon recovery. Borehole and 3-D seismic data were integrated to define the stratigraphic architecture and generate 3-dimensional geological models of the Mannville Group in Saskatchewan. The Mannville is a stratigraphically complex unit formed of fluvial to marine deposits. Two areas, in west-central and southern Saskatchewan, were examined in this study. In west-central Saskatchewan, the area corresponds to a stratigraphically controlled heavy oil reservoir with production from the undifferentiated Dina-Cummings Members of the Lower Cretaceous Mannville Group. The southern area, although non-prospective for hydrocarbons, shares many similarities with time-equivalent strata in areas of heavy oil production. Seismic sequence stratigraphic principles together with log signatures permitted the subdivision of the Mannville into different packages. An initial geological model was generated by integrating seismic and well-log data. Multiattribute analysis and neural networks were used to generate a pseudo-lithology or gamma-ray volume. The incorporation of borehole core data into the model and the subsequent integration with the lithological prediction were crucial to capture the distribution of reservoir and non-reservoir deposits in the study area. The ability to visualize the 3-D seismic data in a variety of ways, including arbitrary lines and stratal or horizon slicing techniques, helped the definition of stratigraphic features such as channels and scroll bars that affect fluid flow in hydrocarbon-producing areas. Small-scale heterogeneities in the reservoir were not resolved due to the resolution of the seismic data. Although not undertaken in this study, the resulting stratigraphic framework could be used to help construct a static reservoir model. Because of the small size of the 3-D seismic surveys, horizontal slices through the data volume generally imaged only small portions of the paleogeomorphologic features thought to be present in this area. As such, it was only through the integration of datasets that the geological models were established.
Hybrid model of arm for analysis of regional blood oxygenation in non-invasive optical diagnostics
NASA Astrophysics Data System (ADS)
Nowocień, Sylwester; Mroczka, Janusz
2017-06-01
The paper presents a new comprehensive approach to modeling and analyzing the processes occurring during blood flow in the small vessels of the arm, together with a non-invasive method for measuring mixed venous oxygen saturation. A meta-analysis of the available physiological data was performed and, based on its results, a hybrid model of the forearm vascular tree was proposed. The model combines a classical nonlinear hydro-electric analogy with light-tissue interaction. Several geometries of the arm vascular tree obtained from magnetic resonance angiography (MRA) images were analyzed, which allowed the structure of the electrical analog network to be proposed. The proposed model makes it possible to simulate the behavior of forearm blood flow from the point of view of vascular tree mechanics, as well as the effects of cuff and vessel-wall mechanics on the recorded photoplethysmographic signals. In particular, it allows analysis of the reactions and anatomical effects in small vessels and the microcirculation caused by the occlusive maneuver used in selected techniques, which was of particular interest to the authors and the motivation for undertaking research in this area. Preliminary studies using the proposed model showed that inappropriate selection of occlusion maneuver parameters (e.g. occlusion time, cuff pressure) causes dangerous turbulence of blood flow in the venous section of the vascular tree.
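As a minimal illustration of the hydro-electric analogy the abstract refers to (and emphatically not the authors' hybrid model), the sketch below simulates a single venous compartment as an RC analog whose outflow resistance is raised during a cuff occlusion; all component values and time constants are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hydro-electric analogy: a venous compartment modelled as a capacitor C
# (compliance) charged through R_in (arterial/capillary inflow resistance) and
# drained through R_out (venous outflow resistance). A cuff occlusion is
# represented by raising R_out, so the compartment pressure/volume builds up.
R_in, R_out_open, R_out_occluded = 1.0, 0.5, 50.0   # illustrative values (a.u.)
C = 2.0
P_art, P_ven = 80.0, 5.0                            # driving pressures (a.u.)

def dPdt(t, P):
    R_out = R_out_occluded if 10.0 <= t <= 40.0 else R_out_open  # cuff on 10-40 s
    inflow = (P_art - P[0]) / R_in
    outflow = (P[0] - P_ven) / R_out
    return [(inflow - outflow) / C]

sol = solve_ivp(dPdt, (0.0, 60.0), [10.0], max_step=0.1)
# sol.y[0] is the compartment pressure; pressure times compliance approximates
# the volume change that a photoplethysmographic signal would roughly follow.
print(float(sol.y[0].max()))
```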
Improved Small Baseline processing by means of CAESAR eigen-interferograms decomposition
NASA Astrophysics Data System (ADS)
Verde, Simona; Reale, Diego; Pauciullo, Antonio; Fornaro, Gianfranco
2018-05-01
The Component extrAction and sElection SAR (CAESAR) method is a recently proposed approach for the selection and filtering of scattering mechanisms in the multibaseline interferometric SAR framework. Its strength lies in the possibility of selecting and extracting multiple dominant scattering mechanisms, even when they interfere in the same pixel, starting from the interferogram generation stage, and of carrying out decorrelation-noise phase filtering. Up to now, the validation of CAESAR has been addressed in the framework of SAR tomography for the model-based detection of Persistent Scatterers (PSs). In this paper we investigate the effectiveness of using CAESAR eigen-interferograms in classical multi-baseline DInSAR processing based on the Small BAseline Subset (SBAS) strategy, typically adopted to extract large-scale distributed deformation and the atmospheric phase screen. Such components are also exploited for the calibration of the full-resolution data for PS or tomographic analysis. Using COSMO-SkyMed (CSK) SAR data, it is demonstrated that dominant scattering component filtering effectively improves the monitoring of distributed, spatially decorrelated areas (e.g. bare soil, rocks) and reveals man-made structures with dominant backscattering characteristics embedded in highly temporally decorrelated scenarios, such as isolated asphalt roads and blocks of buildings in non-urban areas. Moreover, it is shown that, thanks to the CAESAR separation of multiple scattering components, the layover mitigation in low-topography eigen-interferograms relieves Phase Unwrapping (PhU) errors in urban areas due to abrupt height variations.
Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction
NASA Technical Reports Server (NTRS)
Cohen, A. C.
1971-01-01
A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
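The abstract does not spell out how the two distributions are combined, so the sketch below simulates one plausible compound structure (a negative binomial number of thunderstorm events over the larger area, each contributing a zero-truncated Poisson count over the small area) purely to illustrate how probability tables of this kind can be generated. The structure and the parameter values are assumptions, not taken from the report.

```python
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(0)

# Hypothetical parameters (illustrative, not Cape Kennedy values)
n_param, p_param = 3.0, 0.4   # negative binomial: events over the larger area
mu = 0.8                      # truncated-Poisson parameter: storms per event, small area

def sample_truncated_poisson(mu, size, rng):
    """Zero-truncated Poisson draws via rejection of zeros."""
    out = np.empty(size, dtype=int)
    filled = 0
    while filled < size:
        draw = rng.poisson(mu, size=size)
        draw = draw[draw > 0]
        take = min(size - filled, draw.size)
        out[filled:filled + take] = draw[:take]
        filled += take
    return out

n_sim = 50_000
events = nbinom.rvs(n_param, p_param, size=n_sim, random_state=rng)
totals = np.zeros(n_sim, dtype=int)
mask = events > 0   # condition on at least one event over the larger area
totals[mask] = [sample_truncated_poisson(mu, k, rng).sum() for k in events[mask]]

# Conditional distribution of small-area thunderstorm counts given >= 1 event
probs = np.bincount(totals[mask]) / mask.sum()
print(dict(enumerate(np.round(probs[:6], 3))))
```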
Development of a volumetric projection technique for the digital evaluation of field of view.
Marshall, Russell; Summerskill, Stephen; Cook, Sharon
2013-01-01
Current regulations for field of view requirements in road vehicles are defined by 2D areas projected on the ground plane. This paper discusses the development of a new software-based volumetric field of view projection tool and its implementation within an existing digital human modelling system. In addition, the exploitation of this new tool is highlighted through its use in a UK Department for Transport funded research project exploring the current concerns with driver vision. Focusing specifically on rearwards visibility in small and medium passenger vehicles, the volumetric approach is shown to provide a number of distinct advantages. The ability to explore multiple projections of both direct vision (through windows) and indirect vision (through mirrors) provides a greater understanding of the field of view environment afforded to the driver whilst still maintaining compatibility with the 2D projections of the regulatory standards. Field of view requirements for drivers of road vehicles are defined by simplified 2D areas projected onto the ground plane. However, driver vision is a complex 3D problem. This paper presents the development of a new software-based 3D volumetric projection technique and its implementation in the evaluation of driver vision in small- and medium-sized passenger vehicles.
Heo, Youn-Jung; Jung, Yen-Sook; Hwang, Kyeongil; Kim, Jueng-Eun; Yeo, Jun-Seok; Lee, Sehyun; Jeon, Ye-Jin; Lee, Donmin; Kim, Dong-Yu
2017-11-15
For the first time, photovoltaic modules composed of a small molecule were successfully fabricated using roll-to-roll-compatible printing techniques. In this study, blend films of the small molecules BTR and PC71BM were slot-die coated using a halogen-free solvent system. As a result, high efficiencies of 7.46% and 6.56% were achieved from the time-consuming solvent vapor annealing (SVA) treatment and the roll-to-roll-compatible solvent additive approaches, respectively. After successful verification of our roll-to-roll-compatible method on small-area devices, we further fabricated large-area photovoltaic modules with a total active area of 10 cm², achieving a power conversion efficiency (PCE) of 4.83%. This demonstration of large-area photovoltaic modules through roll-to-roll-compatible printing methods, even based on a halogen-free solvent, suggests great potential for the industrial-scale production of organic solar cells (OSCs).
Gelcich, Stefan; Donlan, C Josh
2015-08-01
Territorial user rights for fisheries are being promoted to enhance the sustainability of small-scale fisheries. Using Chile as a case study, we designed a market-based program aimed at improving fishers' livelihoods while incentivizing the establishment and enforcement of no-take areas within areas managed under territorial user right regimes. Building on explicit enabling conditions (i.e., high levels of governance, participation, and empowerment), we used a place-based, human-centered approach to design a program that will have the necessary support and buy-in from local fishers to result in landscape-scale biodiversity benefits. Transactional infrastructure must be complex enough to capture the biodiversity benefits being created, but simple enough that the program can be scaled up and is attractive to potential financiers. Biodiversity benefits created must be commoditized, and desired behavioral changes must be verified within a transactional context. Demand must be generated for fisher-created biodiversity benefits in order to attract financing and to scale the market model. Important design decisions around these three components (supply, transactional infrastructure, and demand) must be made based on local social-ecological conditions. Our market model, which is being piloted in Chile, is a flexible foundation for scalable opportunities to operationalize a scheme that incentivizes local, verifiable biodiversity benefits via conservation behaviors by fishers, and could result in significant marine conservation gains and novel cross-sector alliances. © 2015, Society for Conservation Biology.
Clemens, Tom; Dibben, Chris
2017-04-01
Patterns of adverse birth outcomes vary spatially, and there is evidence that this may relate to features of the physical environment such as air pollution. However, other social characteristics of the environment, such as levels of crime, are relatively understudied. This study examines the association between crime rates and birth weight and prematurity. Maternity inpatient data recorded at birth, including residential postcode, were linked to a representative 5% sample of Scottish Census data and to small area crime rates from Scottish police forces. Coefficients associated with crime were reported from crude and confounder-adjusted models predicting low birth weight (<2500 g), mean birth weight, small for gestational age and prematurity for all singleton live births. Total crime rates were associated with strong and significant reductions in mean birth weight and increases in the risks of both a small-for-gestational-age baby and premature birth. These effects, with the exception of prematurity, were robust to adjustment for individual characteristics including smoking, ethnicity and other socio-economic variables, as well as area-based confounders including air pollution. Mean birth weight was robust to additional adjustment for neighbourhood income deprivation. The level of crime in a mother's area of residence, which may be a proxy for the degree of threat felt and therefore stress experienced, appears to be an important determinant of the risk of adverse birth outcomes. © The Author 2016. Published by Oxford University Press on behalf of the European Public Health Association.
Amirataee, Babak; Montaseri, Majid; Rezaie, Hossein
2018-01-15
Droughts are extreme events characterized by their temporal duration and large-scale spatial effects. In general, regional droughts are affected by the general circulation of the atmosphere (at large scale) and by regional natural factors, including topography, natural lakes, and the position relative to the center and path of ocean currents (at small scale), and their effects are not uniform across a wide area. Therefore, investigating drought Severity-Area-Frequency (S-A-F) curves is an essential task for developing decision-making rules for regional drought management. This study developed a copula-based joint probability distribution of drought severity and the percentage of area under drought across the Lake Urmia basin, Iran. To this end, one-month Standardized Precipitation Index (SPI) values during 1971-2013 were used for 24 rainfall stations in the study area. Then, seven copula functions of various families, including the Clayton, Gumbel, Frank, Joe, Galambos, Plackett and Normal copulas, were used to model the joint probability distribution of drought severity and drought area. Using AIC, BIC and RMSE criteria, the Frank copula was selected as the most appropriate copula for developing the joint probability distribution of severity and percentage of area under drought across the study area. Based on the Frank copula, the drought S-A-F curve for the study area was derived. The results indicated that severe/extreme drought and non-drought (wet) conditions have affected the majority of the study area (the Lake Urmia basin), whereas the area covered by semi-drought effects is limited and has been subject to significant variations. Copyright © 2017 Elsevier Ltd. All rights reserved.
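To illustrate the copula step described here, the sketch below fits a Frank copula to two dependent variables by maximum pseudo-likelihood on rank-based pseudo-observations. The data are synthetic placeholders for drought severity and the fraction of area under drought, not the study's SPI series, and the search is restricted to positive dependence for simplicity.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def frank_density(u, v, theta):
    """Frank copula density c(u, v; theta), theta != 0."""
    e = np.exp(-theta)
    eu, ev = np.exp(-theta * u), np.exp(-theta * v)
    num = theta * (1.0 - e) * eu * ev
    den = ((1.0 - e) - (1.0 - eu) * (1.0 - ev)) ** 2
    return num / den

def fit_frank(x, y):
    """Fit theta by maximum pseudo-likelihood on rank-based pseudo-observations."""
    n = len(x)
    u = rankdata(x) / (n + 1.0)
    v = rankdata(y) / (n + 1.0)
    nll = lambda theta: -np.sum(np.log(frank_density(u, v, theta)))
    # Positive dependence only; adequate for this illustrative data set.
    res = minimize_scalar(nll, bounds=(0.01, 50.0), method="bounded")
    return res.x

# Illustrative data standing in for monthly severity and area fraction under drought.
rng = np.random.default_rng(1)
severity = rng.gamma(2.0, 1.0, size=300)
area_frac = np.clip(0.3 * severity / severity.max() + rng.normal(0, 0.1, 300), 0, 1)
print(f"estimated Frank copula parameter: {fit_frank(severity, area_frac):.2f}")
```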
Impacts of rural land-use on overland flow and sediment transport
NASA Astrophysics Data System (ADS)
Fraser, S. L.; Jackson, B. M.; Norton, K. P.
2013-12-01
The loss of fertile topsoil over time, due to erosive processes, could have a major impact on New Zealand's economy as well as being devastating to individual land owners. Improved management of land use is needed to protect soil from erosion by overland flow and aeolian processes. The effects of soil erosion and sedimentation result in an annual nationwide cost of NZ$123 million. Many previous New Zealand studies have focused on large-scale soil movement from landsliding and gully erosion, including identifying risk areas. However, long-term small-scale erosion and degradation has been largely overlooked in the literature. Although small-scale soil erosion is less apparent than mass movement, cumulative small-scale soil loss over many years may have a significant impact on future land productivity. One approach to assessing the role of soil degradation is through the application of landscape models. Due to the time-consuming collection of data and the limited scales over which data can be collected, many models are unique to a particular land type, land use or locality. Collection of additional datasets can broaden the use of such models by informing model representation and enhancing parameterisation. The Land Use Capability Index (LUCI), developed by Jackson et al. (2013), is an example of a model that will benefit from additional datasets. LUCI is a multi-criteria GIS tool designed to inform land management decisions by identifying areas of potential change, based on land characteristics and land use options. LUCI topographically routes overland flow and sediment using existing land characteristic maps while additionally incorporating sub-field-scale data. The model then has the ability to utilise these data to enhance prediction at the landscape scale. This study focuses on the influence of land use on small-scale sediment transport and on enhancing process representation and parameterisation to improve the predictive ability of models such as LUCI. Data are currently being collected in a small catchment in the foothills of the Tararua Ranges, lower North Island of New Zealand. Gerlach traps are deployed in a step-like array on a number of hillslopes to provide a comprehensive dataset of overland flow and sediment volume for rainfall events of different magnitudes. ArcGIS is used to calculate the contributing area to each trap. The study provides quantitative data linking overland flow to event magnitude for the rural land uses of pasture versus regenerating native forest at multiple slope angles. These data, along with measured soil depth/slope relationships and stream monitoring data, are used to inform process representation and parameterisation of LUCI at the hillslope scale. LUCI is then used to explore implications at the landscape scale. The data and modelling are intended to provide information to help in long-term land management decisions. Jackson, B., Pagella, T., Sinclair, F., Orellana, B., Henshaw, A., Reynolds, B., McIntyre, N., Wheater, H., and Eycott, A. 2013. Polyscape: A GIS mapping framework providing efficient and spatially explicit landscape-scale valuation of multiple ecosystem services. Landscape and Urban Planning, 112(0): 74-88.
A model-based approach to estimating forest area
Ronald E. McRoberts
2006-01-01
A logistic regression model based on forest inventory plot data and transformations of Landsat Thematic Mapper satellite imagery was used to predict the probability of forest for 15 study areas in Indiana, USA, and 15 in Minnesota, USA. Within each study area, model-based estimates of forest area were obtained for circular areas with radii of 5 km, 10 km, and 15 km and...
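As a concrete illustration of this kind of estimator, the sketch below fits a logistic model of forest probability to plot-level predictors and sums the predicted probabilities over pixels to obtain a model-based area estimate. The data are synthetic stand-ins, not the Indiana or Minnesota inventory plots or the Thematic Mapper transformations used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic stand-ins for plot data: two spectral-band transformations per plot
# and a forest / non-forest label.
X_plots = rng.normal(size=(500, 2))
p_true = 1.0 / (1.0 + np.exp(-(1.5 * X_plots[:, 0] - 1.0 * X_plots[:, 1])))
y_plots = rng.binomial(1, p_true)

model = LogisticRegression().fit(X_plots, y_plots)

# Apply the fitted model to every pixel in a circular area of interest and
# estimate forest area as the sum of predicted probabilities times pixel area.
X_pixels = rng.normal(size=(100_000, 2))   # pixels inside, e.g., a 5 km radius
pixel_area_ha = 0.09                       # 30 m x 30 m Landsat pixel, in hectares
p_forest = model.predict_proba(X_pixels)[:, 1]
forest_area_ha = p_forest.sum() * pixel_area_ha
print(f"model-based forest area estimate: {forest_area_ha:.0f} ha")
```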
A low-cost drone based application for identifying and mapping of coastal fish nursery grounds
NASA Astrophysics Data System (ADS)
Ventura, Daniele; Bruno, Michele; Jona Lasinio, Giovanna; Belluscio, Andrea; Ardizzone, Giandomenico
2016-03-01
In marine ecology, acquiring seabed, landform or other topographic data plays a pivotal role in defining and mapping key marine habitats. However, acquiring this kind of data at a high level of detail for very shallow and inaccessible marine habitats has often been challenging and time-consuming, and spatial and temporal coverage often has to be compromised to make the monitoring routine more cost-effective. Nowadays, emerging technologies can overcome many of these constraints. Here we describe a recent development in remote sensing based on a small unmanned aerial vehicle (UAV, or drone) that produces very fine-scale maps of fish nursery areas. This technology is simple to use, inexpensive, and timely in producing aerial photographs of marine areas. Technical details regarding aerial photo acquisition (drone and camera settings) and the post-processing workflow (3D model generation with a Structure from Motion algorithm and photo-stitching) are given. Finally, by applying modern semi-automatic image analysis and classification algorithms (Maximum Likelihood, ECHO and Object-Based Image Analysis) we compared the results of three thematic maps of a nursery area for juvenile sparid fishes, highlighting the potential of this method for mapping and monitoring coastal marine habitats.
Serial grouping of 2D-image regions with object-based attention in humans
Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R
2016-01-01
After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. DOI: http://dx.doi.org/10.7554/eLife.14320.001 PMID:27291188
Calibration of limited-area ensemble precipitation forecasts for hydrological predictions
NASA Astrophysics Data System (ADS)
Diomede, Tommaso; Marsigli, Chiara; Montani, Andrea; Nerozzi, Fabrizio; Paccagnella, Tiziana
2015-04-01
The main objective of this study is to investigate the impact of calibration for limited-area ensemble precipitation forecasts, to be used for driving discharge predictions up to 5 days in advance. A reforecast dataset spanning 30 years, based on the Consortium for Small Scale Modeling Limited-Area Ensemble Prediction System (COSMO-LEPS), was used for testing the calibration strategy. Three calibration techniques were applied: quantile-to-quantile mapping, linear regression, and analogs. The performance of these methodologies was evaluated in terms of statistical scores for the precipitation forecasts operationally provided by COSMO-LEPS in the years 2003-2007 over Germany, Switzerland, and the Emilia-Romagna region (northern Italy). The analog-based method seemed to be preferred because of its capability to correct position errors and spread deficiencies. A suitable spatial domain for the analog search can help to handle model spatial errors as systematic errors. However, the performance of the analog-based method may degrade in cases where a limited training dataset is available. A sensitivity test on the length of the training dataset over which to perform the analog search was performed. The quantile-to-quantile mapping and linear regression methods were less effective, mainly because the forecast-analysis relation was not so strong for the available training dataset. A comparison between calibration based on the deterministic reforecast and calibration based on the full operational ensemble used as the training dataset was also considered, with the aim of evaluating whether reforecasts are really worthwhile for calibration, given their considerable computational cost. The verification of the calibration process was then performed by coupling ensemble precipitation forecasts with a distributed rainfall-runoff model. This test was carried out for a medium-sized catchment located in Emilia-Romagna, showing a beneficial impact of the analog-based method on the reduction of missed events for discharge predictions.
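As an illustration of the simplest of the three techniques, the sketch below applies quantile-to-quantile mapping with empirical quantiles estimated from a training period. The gamma-distributed training data are synthetic placeholders, not the COSMO-LEPS reforecast archive.

```python
import numpy as np

def quantile_mapping(forecast, train_forecasts, train_observations):
    """Map raw forecast values onto the observed climatology by matching
    empirical quantiles estimated from a training period."""
    probs = np.linspace(0.01, 0.99, 99)
    fcst_q = np.quantile(train_forecasts, probs)
    obs_q = np.quantile(train_observations, probs)
    # Each new forecast value is located in the forecast climatology and
    # replaced by the observed value at the same quantile.
    return np.interp(forecast, fcst_q, obs_q)

# Illustrative precipitation-like data (mm/24h).
rng = np.random.default_rng(3)
train_fcst = rng.gamma(shape=1.2, scale=6.0, size=3000)   # model overestimates
train_obs = rng.gamma(shape=1.2, scale=4.5, size=3000)
new_fcst = rng.gamma(shape=1.2, scale=6.0, size=5)
print(np.round(new_fcst, 1), "->",
      np.round(quantile_mapping(new_fcst, train_fcst, train_obs), 1))
```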
ERIC Educational Resources Information Center
Dolmans, Diana H. J. M.; Schmidt, Henk G.
2006-01-01
Students collaborating in small groups is a characteristic of problem-based learning (PBL) that is receiving increased consideration in the literature. In this paper findings from studies in this area are synthesized and discussed. A distinction is made between studies focusing on cognitive effects of group learning and studies focusing on…
NASA Astrophysics Data System (ADS)
Harris, B.; McDougall, K.; Barry, M.
2012-07-01
Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, they are rarely developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines the definition of waterways and catchments over an area of approximately 25,000 km² to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (the Wivenhoe catchment, 543 km², and a detailed 13 km² area within the Wivenhoe catchment) including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data were compared to high-resolution Lidar-based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad-scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
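For readers unfamiliar with how waterways are delineated from a DEM, the sketch below implements a basic D8 flow-accumulation and stream-threshold step on a synthetic 20 m grid. It is a simplified illustration of the general technique (assuming a depression-free surface), not the GIS workflow, data or thresholds used in the study.

```python
import numpy as np

def d8_flow_accumulation(dem):
    """Return the number of upstream cells draining through each cell (D8)."""
    rows, cols = dem.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    acc = np.ones_like(dem, dtype=float)      # each cell contributes itself
    # Process cells from highest to lowest so donors are handled before receivers.
    order = np.column_stack(np.unravel_index(np.argsort(-dem, axis=None), dem.shape))
    for r, c in order:
        best, target = 0.0, None
        for dr, dc in offsets:
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                if drop > best:
                    best, target = drop, (rr, cc)
        if target is not None:                # pass accumulated area downslope
            acc[target] += acc[r, c]
    return acc

# Synthetic 20 m DEM: a tilted plane with a central valley (not the Wivenhoe data).
x, y = np.meshgrid(np.arange(200), np.arange(200))
dem = 0.5 * y + 0.05 * (x - 100) ** 2 / 100.0
accumulation = d8_flow_accumulation(dem)
streams = accumulation * (20 * 20) > 50_000   # cells draining more than 5 ha
print("stream cells:", int(streams.sum()))
```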