An Advanced Multi-Junction Solar-Cell Design for Space Environments (AM0) Using Nearly Orthogonal Latin Hypercubes
Pueschel, Silvio
2017-06-01
...multi-junction solar cells with Silvaco Atlas simulation software. It introduces the nearly orthogonal Latin hypercube (NOLH) design of experiments (DoE)...
Identification of stochastic interactions in nonlinear models of structural mechanics
NASA Astrophysics Data System (ADS)
Kala, Zdeněk
2017-07-01
In the paper, a polynomial approximation is presented with which the Sobol sensitivity analysis can be evaluated for all sensitivity indices. The nonlinear FEM model is approximated, and the input space is mapped using simulation runs of the Latin hypercube sampling method. The domain of the approximating polynomial is chosen so that a large number of Latin hypercube simulation runs can be applied. The presented method also makes it possible to evaluate higher-order sensitivity indices, which could not be identified directly from the nonlinear FEM model.
NASA Astrophysics Data System (ADS)
Sheikholeslami, R.; Hosseini, N.; Razavi, S.
2016-12-01
Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models many times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, the computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; in contrast, PLHS generates a series of smaller sub-sets (also called `slices') such that: (1) each sub-set is a Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive union of sub-sets remains a Latin hypercube; and thus (3) the entire sample set is a Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over existing methods, particularly because it largely avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by running only the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
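The slicing idea lends itself to a compact illustration. The sketch below is not the authors' PLHS algorithm; it is a minimal sliced-Latin-hypercube construction in the same spirit, in which every slice is an LHS and the union of slices is again an LHS. All function names and sizes are illustrative.

```python
import numpy as np

def sliced_lhs(n_slices, n_per_slice, dim, rng=None):
    """Sliced Latin hypercube on [0,1)^dim: each slice of n_per_slice points is
    an LHS, and the union of all slices is an LHS of n_slices * n_per_slice."""
    rng = np.random.default_rng(rng)
    N = n_slices * n_per_slice
    X = np.empty((n_slices, n_per_slice, dim))
    for j in range(dim):
        levels = np.empty((n_slices, n_per_slice), dtype=int)
        for b in range(n_per_slice):
            # hand the n_slices fine cells of coarse bin b to the slices, one each
            levels[:, b] = b * n_slices + rng.permutation(n_slices)
        for s in range(n_slices):
            # within each slice, shuffle which run receives which coarse bin
            levels[s] = rng.permutation(levels[s])
        X[:, :, j] = (levels + rng.random((n_slices, n_per_slice))) / N
    return X

X = sliced_lhs(n_slices=4, n_per_slice=5, dim=2, rng=1)
union = X.reshape(-1, 2)
# Union of slices is a full 20-point LHS (one point per 1/20 bin per dimension)...
assert all(len(np.unique((union[:, j] * 20).astype(int))) == 20 for j in range(2))
# ...and every slice is itself a 5-point LHS.
for s in range(4):
    assert all(len(np.unique((X[s, :, j] * 5).astype(int))) == 5 for j in range(2))
```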
Latin hypercube approach to estimate uncertainty in ground water vulnerability
Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.
2007-01-01
A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. © 2007 National Ground Water Association.
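A minimal sketch of this propagation scheme follows. The coefficients, standard errors, and explanatory-variable values are invented for illustration, not taken from the High Plains model; each input is stratified into equal-probability intervals and paired at random, which is the LHS scheme referred to above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 10_000  # LHS sample size

def lhs_normal(n, mean, sd, rng):
    """1-D LHS from a normal: one draw per equal-probability stratum,
    shuffled so that pairing across variables is random."""
    u = (rng.permutation(n) + rng.random(n)) / n
    return norm.ppf(u, loc=mean, scale=sd)

# Hypothetical vulnerability model: logit(p) = b0 + b1*x1 + b2*x2,
# where x1, x2 are explanatory variables held in the GIS.
# Model error: coefficients with standard errors (illustrative values only).
b0 = lhs_normal(n, -2.0, 0.30, rng)
b1 = lhs_normal(n, 0.80, 0.15, rng)
b2 = lhs_normal(n, -0.50, 0.10, rng)
# Data error: uncertain explanatory-variable values at one grid cell (invented).
x1 = lhs_normal(n, 1.2, 0.20, rng)
x2 = lhs_normal(n, 2.5, 0.40, rng)

p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x1 + b2 * x2)))   # probability of contamination
lo, med, hi = np.percentile(p, [2.5, 50.0, 97.5])
print(f"median vulnerability {med:.3f}, 95% prediction interval [{lo:.3f}, {hi:.3f}]")
```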
A Novel Latin Hypercube Algorithm via Translational Propagation
Pan, Guang; Ye, Pengcheng
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is directly related to the experimental designs used. Optimal Latin hypercube designs are frequently used and have been shown to have good space-filling and projective properties. However, the high cost of constructing them limits their use. In this paper, a methodology for creating novel Latin hypercube designs via a translational propagation and successive local enumeration (TPSLE) algorithm is developed without using formal optimization. The TPSLE algorithm is based on the insight that a near-optimal Latin hypercube design can be constructed by translating a small initial block of points, generated by the successive local enumeration (SLE) algorithm, as a building block. In effect, the TPSLE algorithm offers a balanced trade-off between efficiency and sampling performance. The proposed algorithm is compared with two existing algorithms and is found to be much more efficient in terms of computation time while retaining acceptable space-filling and projective properties. PMID: 25276844
A user's guide to Sandia's latin hypercube sampling software : LHS UNIX library/standalone version.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Wyss, Gregory Dane
2004-07-01
This document is a reference guide for the UNIX Library/Standalone version of the Latin Hypercube Sampling Software. This software has been developed to generate Latin hypercube multivariate samples. This version runs on Linux or UNIX platforms. This manual covers the use of the LHS code in a UNIX environment, run either as a standalone program or as a callable library. The underlying code in the UNIX Library/Standalone version of LHS is almost identical to the updated Windows version of LHS released in 1998 (SAND98-0210). However, some modifications were made to customize it for a UNIX environment and as a library that is called from the DAKOTA environment. This manual covers the use of the LHS code as a library and in the standalone mode under UNIX.
Latin Hypercube Sampling (LHS) UNIX Library/Standalone
DOE Office of Scientific and Technical Information (OSTI.GOV)
2004-05-13
The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multivariate samples. The LHS samples can be generated either through a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, then the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
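The interval-stratification and random-pairing scheme described above can be sketched in a few lines (a minimal illustration, not the Sandia LHS code; the restricted-pairing step for correlated inputs is omitted):

```python
import numpy as np
from scipy import stats

def lhs(n, dists, rng=None):
    """Latin hypercube sampling as described above: divide each variable's
    range into n equal-probability intervals, draw one value per interval via
    the inverse CDF, then pair values across variables at random."""
    rng = np.random.default_rng(rng)
    sample = np.empty((n, len(dists)))
    for j, dist in enumerate(dists):
        u = (np.arange(n) + rng.random(n)) / n       # one point in each stratum
        sample[:, j] = rng.permutation(dist.ppf(u))  # random pairing
    return sample

# Example: three uncertain inputs with different distribution types.
X = lhs(100, [stats.norm(10, 2), stats.lognorm(s=0.5), stats.uniform(0, 5)], rng=42)
print(X.mean(axis=0))
```

The restricted pairing mentioned above (e.g., the Iman-Conover method) would replace the random permutation step in order to induce a target rank correlation among the columns.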
Lin, Yu-Pin; Chu, Hone-Jay; Huang, Yu-Long; Tang, Chia-Hsi; Rouhani, Shahrokh
2011-06-01
This study develops a stratified conditional Latin hypercube sampling (scLHS) approach for multiple, remotely sensed, normalized difference vegetation index (NDVI) images. The objective is to sample, monitor, and delineate spatiotemporal landscape changes, including spatial heterogeneity and variability, in a given area. The scLHS approach, which is based on the variance quadtree technique (VQT) and the conditional Latin hypercube sampling (cLHS) method, selects samples in order to delineate landscape changes from multiple NDVI images. The images are then mapped for calibration and validation by using sequential Gaussian simulation (SGS) with the scLHS-selected samples. Spatial statistical results indicate that in terms of their statistical distribution, spatial distribution, and spatial variation, the statistics and variograms of the scLHS samples resemble those of the multiple NDVI images more closely than those of cLHS and VQT samples. Moreover, the accuracy of simulated NDVI images based on SGS with scLHS samples is significantly better than that of simulated NDVI images based on SGS with cLHS samples or VQT samples. Overall, the proposed approach efficiently monitors the spatial characteristics of landscape changes, including the statistics, spatial variability, and heterogeneity of NDVI images. In addition, SGS with the scLHS samples effectively reproduces spatial patterns and landscape changes in multiple NDVI images.
Approximation of the exponential integral (well function) using sampling methods
NASA Astrophysics Data System (ADS)
Baalousha, Husam Musa
2015-04-01
The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral, based on sampling methods. Three different sampling methods, namely Latin hypercube sampling (LHS), orthogonal array (OA), and orthogonal array-based Latin hypercube (OA-LH), have been used to approximate the function. Argument values covering a wide range have been used. The results of the sampling methods were compared with results obtained by the Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology, such as the leaky aquifer integral.
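To make the sampling idea concrete, here is a hedged sketch (not the paper's method, which also uses orthogonal arrays): substituting t = u + s turns the well function into E1(u) = exp(-u) * E[1/(u + S)] with S ~ Exponential(1), which plain LHS can estimate; the result is checked against SciPy's exp1.

```python
import numpy as np
from scipy.special import exp1

def well_function_lhs(u, n=2000, rng=None):
    """Estimate W(u) = E1(u) by LHS: E1(u) = exp(-u) * E[1/(u + S)],
    S ~ Exponential(1), sampled with one draw per equal-probability stratum."""
    rng = np.random.default_rng(rng)
    p = (np.arange(n) + rng.random(n)) / n     # stratified uniforms on (0, 1)
    s = -np.log1p(-p)                          # inverse CDF of Exponential(1)
    return np.exp(-u) * np.mean(1.0 / (u + s))

for u in [0.01, 0.1, 1.0, 5.0]:
    print(f"u={u:<5}  LHS: {well_function_lhs(u, rng=0):.6e}  "
          f"scipy exp1: {exp1(u):.6e}")
```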
Hybrid real-code ant colony optimisation for constrained mechanical design
NASA Astrophysics Data System (ADS)
Pholdee, Nantiwat; Bureerat, Sujin
2016-01-01
This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
NASA Astrophysics Data System (ADS)
WANG, P. T.
2015-12-01
Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. A hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin hypercube sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to those of three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort. Fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
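A minimal unconditional sketch of the LULHS idea follows (illustrative grid and covariance parameters, not the study's setup): LHS-stratified standard normals are multiplied by the lower-triangular factor of the covariance matrix, so each realization carries the target spatial correlation.

```python
import numpy as np
from scipy.stats import norm
from scipy.linalg import cholesky

# 1-D grid and exponential covariance model (illustrative parameters).
x = np.linspace(0.0, 100.0, 50)
variance, corr_len = 1.0, 20.0
C = variance * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
L = cholesky(C + 1e-10 * np.eye(len(x)), lower=True)   # lower-triangular factor

def lhs_standard_normals(n_real, dim, rng):
    """LHS-stratified standard normals: each column is stratified over the
    n_real realizations, with independent permutations providing the pairing."""
    ranks = rng.permuted(np.tile(np.arange(n_real), (dim, 1)), axis=1).T
    u = (ranks + rng.random((n_real, dim))) / n_real
    return norm.ppf(u)

rng = np.random.default_rng(7)
Z = lhs_standard_normals(n_real=200, dim=len(x), rng=rng)
fields = Z @ L.T                     # each row is one correlated realization
# Compare the ensemble covariance with the target (error shrinks with n_real).
print("max covariance error:", np.abs(np.cov(fields, rowvar=False) - C).max())
```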
Extension of Latin hypercube samples with correlated variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hora, Stephen Curtis; Helton, Jon Craig; Sallaberry, Cedric J. PhD.
2006-11-01
A procedure for extending the size of a Latin hypercube sample (LHS) with rank correlated variables is described and illustrated. The extension procedure starts with an LHS of size m and associated rank correlation matrix C and constructs a new LHS of size 2m that contains the elements of the original LHS and has a rank correlation matrix that is close to the original rank correlation matrix C. The procedure is intended for use in conjunction with uncertainty and sensitivity analysis of computationally demanding models in which it is important to make efficient use of a necessarily limited number of model evaluations.
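The core of such an extension can be sketched as follows (a simplified illustration, assuming inputs already mapped to the unit hypercube; the report's rank-correlation restoration step is omitted): each occupied 1/m interval is split in two, and the new point takes the empty half, so the union of old and new points is a 2m-point LHS.

```python
import numpy as np

def double_lhs(X, rng=None):
    """Extend an m-point LHS on [0,1)^d to a 2m-point LHS containing the
    original points. For each dimension, every occupied 1/m cell is split in
    two; the new point takes the half-cell the old point left empty, and new
    coordinates are re-paired randomly across dimensions."""
    rng = np.random.default_rng(rng)
    m, d = X.shape
    X_new = np.empty((m, d))
    for j in range(d):
        cell = np.floor(X[:, j] * m).astype(int)           # occupied coarse cells
        half = np.floor(X[:, j] * 2 * m).astype(int) % 2   # which half is taken
        empty_half = 2 * cell + (1 - half)                 # fine cell left empty
        vals = (empty_half + rng.random(m)) / (2 * m)
        X_new[:, j] = rng.permutation(vals)                # random re-pairing
    return np.vstack([X, X_new])

rng = np.random.default_rng(3)
X = (rng.permuted(np.tile(np.arange(10), (2, 1)), axis=1).T
     + rng.random((10, 2))) / 10                           # a 10-point LHS
X2 = double_lhs(X, rng=rng)
# All 20 points occupy distinct 1/20 bins in each dimension.
assert all(len(np.unique((X2[:, j] * 20).astype(int))) == 20 for j in range(2))
```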
Improvements in sub-grid microphysics averages using quadrature-based approaches
NASA Astrophysics Data System (ADS)
Chowdhary, K.; Debusschere, B.; Larson, V. E.
2013-12-01
Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. the autoconversion rate, essentially interpreting each cloud microphysics quantity as a random variable in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin hypercube sampling, can be used to calculate statistics, e.g. averages, of the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternative approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size by almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
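The contrast between the two estimators can be sketched as follows (a hedged illustration: a Kessler-type rate with assumed constants and an assumed lognormal sub-grid distribution, not the paper's configuration):

```python
import numpy as np
from scipy.stats import norm

# Kessler-type autoconversion rate (illustrative form and constants):
k, q_crit = 1e-3, 0.5e-3                       # 1/s and kg/kg, assumed values
auto = lambda q: k * np.maximum(q - q_crit, 0.0)

# Sub-grid cloud water assumed lognormal: q = exp(mu + sigma*Z), Z ~ N(0,1).
mu, sigma = np.log(0.6e-3), 0.8

# (1) Deterministic Gauss-Hermite quadrature: a handful of fixed points.
z, w = np.polynomial.hermite_e.hermegauss(16)  # probabilists' Hermite rule
gh_mean = np.sum(w * auto(np.exp(mu + sigma * z))) / np.sqrt(2.0 * np.pi)

# (2) Latin hypercube sampling of the same grid-box average.
rng = np.random.default_rng(0)
n = 1000
u = (rng.permutation(n) + rng.random(n)) / n   # stratified uniforms
lhs_mean = auto(np.exp(mu + sigma * norm.ppf(u))).mean()

print(f"Gauss-Hermite (16 pts): {gh_mean:.4e}")
print(f"LHS        (1000 pts): {lhs_mean:.4e}")
```

Note that the kink in the autoconversion rate at q_crit slows quadrature convergence, so in practice the node count and placement deserve care.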
NASA Astrophysics Data System (ADS)
Hamalainen, Sampsa; Geng, Xiaoyuan; He, Juanxia
2017-04-01
Latin hypercube sampling (LHS) at variable resolutions for enhanced watershed-scale soil sampling and digital soil mapping. The LHS approach to assist with digital soil mapping has been developed for some time now; the purpose of this work was to complement LHS with the use of covariate datasets at multiple spatial resolutions and with variability in the number of sampling points produced. This allowed specific sets of LHS points to be produced to fulfil the needs of various partners from multiple projects working in the Ontario and Prince Edward Island provinces of Canada. Secondary soil and environmental attributes are critical inputs required in the development of sampling points by LHS. These include a digital elevation model (DEM) and covariate datasets produced by digital terrain analysis performed on the DEM. These additional covariates often include, but are not limited to, topographic wetness index (TWI), length-slope (LS) factor, and slope, which are continuous data. The number of points created by LHS ranged from 50 to 200, depending on the size of the watershed and, more importantly, the number of soil types found within it. The spatial resolution of the covariates ranged from 5 to 30 m. The iterations within the LHS sampling were run at a level at which the LHS model provided a good spatial representation of the environmental attributes within the watershed. Covariates that are categorical in nature, such as external surficial geology data, were also included in the LHS approach. Initial results of the work include the use of 1,000 iterations within the LHS model; 1,000 iterations consistently proved a reasonable value for producing sampling points that give a good spatial representation of the environmental attributes. When working at the same spatial resolution of covariates but modifying only the desired number of sampling points produced, the change in point locations showed a strong geospatial relationship when using continuous data. Access to agricultural fields and adjacent land uses is often cited as the greatest deterrent to performing soil sampling for both soil survey and soil attribute validation work. The lack of access can be a result of poor road access and/or geographical conditions that are difficult for field workers to navigate. This is a simple yet persistent issue for the scientific community and, in particular, soils professionals. Easing access to sampling points will be a future contribution to the LHS approach: by removing inaccessible locations from the DEM in the first instance, the LHS model can be restricted to locations with access from an adjacent road or trail. To further the approach, a road-network geospatial dataset can be included within geographic information system (GIS) applications to reach already-produced points by a shortest-distance network method.
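The accessibility-restricted sampling described above can be sketched as a conditioned-LHS-style selection over a masked candidate set (a minimal illustration with synthetic covariates; real cLHS implementations add correlation and categorical terms to the objective):

```python
import numpy as np

def clhs(candidates, n, n_iter=2000, rng=None):
    """Minimal conditioned-LHS-style selection: choose n rows of `candidates`
    (cells x covariates) so that each covariate's sample hits each of its n
    equal-probability strata about once. Simulated annealing over random swaps."""
    rng = np.random.default_rng(rng)
    edges = np.quantile(candidates, np.arange(1, n) / n, axis=0)   # (n-1, d)
    strata = np.stack([np.searchsorted(edges[:, j], candidates[:, j])
                       for j in range(candidates.shape[1])], axis=1)

    def cost(idx):
        # total deviation from "one point per stratum" over all covariates
        return sum(np.abs(np.bincount(strata[idx, j], minlength=n) - 1).sum()
                   for j in range(strata.shape[1]))

    idx = rng.choice(len(candidates), n, replace=False)
    cur = cost(idx)
    for it in range(n_iter):
        new_cell = rng.integers(len(candidates))
        if new_cell in idx:
            continue
        trial = idx.copy()
        trial[rng.integers(n)] = new_cell
        c = cost(trial)
        temp = max(1e-9, 1.0 - it / n_iter)          # cooling schedule
        if c <= cur or rng.random() < np.exp(-(c - cur) / temp):
            idx, cur = trial, c
    return idx

rng = np.random.default_rng(5)
cov = rng.normal(size=(5000, 3))            # e.g. TWI, LS factor, slope per cell
accessible = rng.random(5000) < 0.3         # e.g. cells near a road or trail
pool = np.flatnonzero(accessible)
chosen = pool[clhs(cov[pool], n=60, rng=rng)]
print(len(chosen), "accessible sampling sites selected")
```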
Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu
2005-01-01
Geostatistical stochastic simulation is usually combined with the Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models, a result of their complexity, it is often infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
Sensitivity analysis of static resistance of slender beam under bending
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valeš, Jan
2016-06-08
The paper deals with statistical and sensitivity analyses of the resistance of simply supported I-beams under bending. The resistance was solved by the geometrically nonlinear finite element method in the program ANSYS. The beams are modelled with initial geometrical imperfections following the first eigenmode of buckling. The imperfections were, together with the geometrical characteristics of the cross section and the material characteristics of steel, considered as random quantities. The Latin hypercube sampling method was applied to evaluate the statistical and sensitivity analyses of resistance.
NASA Astrophysics Data System (ADS)
Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher, M.; Lerner, D.
2006-12-01
Characterization of the uncertainty associated with groundwater quality models is often of critical importance, for example where environmental models are employed in risk assessment. Insufficient data, inherent variability, and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models involving parametric variability and uncertainty of different natures. Standard MCS and its variants, such as Latin hypercube sampling (LHS), treat variability and uncertainty as a single random entity, and the generated samples are treated as crisp values, with vagueness regarded as randomness. Moreover, when models are used as purely predictive tools, uncertainty and variability lead to the need to assess the plausible range of model outputs. An improved, systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates and can aid in assessing how various possible model estimates should be weighed. The present study introduces fuzzy Latin hypercube sampling (FLHS), a hybrid approach for incorporating cognitive and noncognitive uncertainties. Noncognitive uncertainty, such as physical randomness or statistical uncertainty due to limited information, can be described by its own probability density function (PDF), whereas cognitive uncertainty, such as estimation error, can be described by a membership function for its fuzziness and by confidence intervals through α-cuts. An important property of this theory is its ability to merge the inexact generated data of the LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled with proper incorporation of uncertainty and variability. A fuzzified statistical summary of the model results produces indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty of input variables to model predictions. The feasibility of the method is validated by assessing the uncertainty propagation of parameter values in estimating the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.
Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh
2009-01-01
This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of NDVI images. The variography of the NDVI image results demonstrates that spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of the typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from the 62,500 grid cells of the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced the spatial patterns of the NDVI images. Overall, the proposed approach, which integrates the conditional Latin hypercube sampling approach, variograms, kriging and sequential Gaussian simulation of remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on the spatial characteristics of landscape changes, including spatial variability and heterogeneity.
Convoy Protection under Multi-Threat Scenario
2017-06-01
Subject terms: antisubmarine warfare, convoy protection, screening, design of experiments, agent-based simulation. [Report front-matter fragment; the recoverable table of contents indicates Red submarine tactic scenarios, a design-of-experiments chapter built on a nearly orthogonal Latin hypercube design, and a data analysis chapter.]
FORMAL UNCERTAINTY ANALYSIS OF A LAGRANGIAN PHOTOCHEMICAL AIR POLLUTION MODEL. (R824792)
This study applied Monte Carlo analysis with Latin hypercube sampling to evaluate the effects of uncertainty in air parcel trajectory paths, emissions, rate constants, deposition affinities, mixing heights, and atmospheric stability on predictions from a vertically...
Comparison of Drainmod Based Watershed Scale Models
Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya
2004-01-01
Watershed scale hydrology and water quality models (DRAINMOD-DUFLOW, DRAINMOD-W, DRAINMOD-GIS and WATGIS) that describe the nitrogen loadings at the outlet of poorly drained watersheds were examined with respect to their accuracy and uncertainty in model predictions. Latin Hypercube Sampling (LHS) was applied to determine the impact of uncertainty in estimating field...
2006-12-01
The code was initially developed to be run within the NetBeans IDE 5.04 running J2SE 5.0. During the course of the development, Eclipse SDK 3.2... [text fragment] ...covers the results from the research. Chapter V concludes and recommends future research.
Comparison of empirical strategies to maximize GENEHUNTER lod scores.
Chen, C H; Finch, S J; Mendell, N R; Gordon, D
1999-01-01
We compare four strategies for finding the settings of genetic parameters that maximize the lod scores reported in GENEHUNTER 1.2. The four strategies are iterated complete factorial designs, iterated orthogonal Latin hypercubes, evolutionary operation, and numerical optimization. The genetic parameters that are set are the phenocopy rate, penetrance, and disease allele frequency; both recessive and dominant models are considered. We selected the optimization of a recessive model on the Collaborative Study on the Genetics of Alcoholism (COGA) data of chromosome 1 for complete analysis. Convergence to a setting producing a local maximum required the evaluation of over 100 settings (for a time budget of 800 minutes on a Pentium II 300 MHz PC). Two notable local maxima were detected, suggesting the need for a more extensive search before claiming that a global maximum had been found. The orthogonal Latin hypercube design was the best strategy for finding areas that produced high lod scores with small numbers of evaluations. Numerical optimization starting from a region producing high lod scores was the strategy that found the highest maximum observed.
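The interplay between a space-filling screen and local optimization can be illustrated with a short sketch (the GENEHUNTER lod score is replaced here by an invented two-peak stand-in; parameter names and ranges are illustrative only):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

# Invented two-peak stand-in for an expensive black-box score
# (e.g., a lod-score evaluation), mimicking the multimodality reported above.
def score(p):
    phenocopy, penetrance, freq = p
    peak1 = np.exp(-50.0 * ((phenocopy - 0.05) ** 2 + (penetrance - 0.80) ** 2
                            + (freq - 0.02) ** 2))
    peak2 = 0.8 * np.exp(-50.0 * ((phenocopy - 0.20) ** 2 + (penetrance - 0.50) ** 2
                                  + (freq - 0.10) ** 2))
    return peak1 + peak2

bounds = [(0.0, 0.3), (0.1, 1.0), (0.001, 0.2)]
pts = qmc.scale(qmc.LatinHypercube(d=3, seed=0).random(40), *zip(*bounds))

vals = np.array([score(p) for p in pts])        # cheap space-filling screen
for start in pts[np.argsort(vals)[-3:]]:        # refine from the top 3 regions
    res = minimize(lambda p: -score(p), start, bounds=bounds, method="L-BFGS-B")
    print(f"start {np.round(start, 3)} -> max {-res.fun:.4f} at {np.round(res.x, 3)}")
```

Starting the local optimizer from several high-scoring regions of the screen, rather than one, is what exposes distinct local maxima like those reported above.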
Influence of scanning parameters on the estimation accuracy of control points of B-spline surfaces
NASA Astrophysics Data System (ADS)
Aichinger, Julia; Schwieger, Volker
2018-04-01
This contribution deals with the influence of scanning parameters such as scanning distance, incidence angle, surface quality and sampling width on the average estimated standard deviations of the positions of control points of B-spline surfaces, which are used to model surfaces from terrestrial laser scanning data. The influence of the scanning parameters is analyzed by Monte Carlo based variance analysis. Samples were generated for uncorrelated and correlated data, using the Latin hypercube and replicated Latin hypercube sampling algorithms, respectively. Finally, the investigations show that the most influential scanning parameter is the distance from the laser scanner to the object. The angle of incidence shows a significant effect for distances of 50 m and longer, while the surface quality contributes only negligible effects. The sampling width has no influence. Optimal scanning parameters are obtained at the smallest possible object distance, at an angle of incidence close to 0°, and with the highest surface quality. The consideration of correlations improves the estimation accuracy and underlines the importance of complete stochastic models for TLS measurements.
NASA Astrophysics Data System (ADS)
Farzamian, Mohammad; Monteiro Santos, Fernando A.; Khalil, Mohamed A.
2017-12-01
The coupled hydrogeophysical approach has proved to be a valuable tool for improving the use of geoelectrical data for hydrological model parameterization. In the coupled approach, hydrological parameters are directly inferred from geoelectrical measurements in a forward manner to eliminate the uncertainty connected to the independent inversion of electrical resistivity data. Several numerical studies have been conducted to demonstrate the advantages of the coupled approach; however, only a few attempts have been made to apply it to actual field data. In this study, we developed a 1D coupled hydrogeophysical code to estimate the van Genuchten-Mualem model parameters K_s, n, θ_r and α from time-lapse vertical electrical sounding data collected during a constant-inflow infiltration experiment. The van Genuchten-Mualem parameters were sampled using the Latin hypercube sampling method to provide full coverage of the range of each parameter from their distributions. By applying the coupled approach, vertical electrical sounding data were coupled to the hydrological models inferred from the van Genuchten-Mualem parameter samples to investigate the feasibility of constraining the hydrological model. The key approaches taken in the study are to (1) integrate electrical resistivity and hydrological data while avoiding data inversion, (2) estimate the total water mass recovery of the electrical resistivity data and consider it in the evaluation of the van Genuchten-Mualem parameters and (3) correct for the influence of subsurface temperature fluctuations on the electrical resistivity data during the infiltration experiment. The results of the study revealed that the coupled hydrogeophysical approach can improve the value of geophysical measurements in hydrological model parameterization. However, the approach cannot overcome the technical limitations of the geoelectrical method associated with resolution and with water mass recovery.
What Friends Are For: Collaborative Intelligence Analysis and Search
2014-06-01
Subject terms: Intelligence Community, information retrieval, recommender systems, search engines, social networks, user profiling, Lucene. ...improvements over existing search systems. The improvements are shown to be robust to high levels of human error and low similarity between users. [Acronym-list fragment: NOLH, nearly orthogonal Latin hypercubes; RS, recommender systems; TREC, Text REtrieval Conference.]
Improvement of Simulation Method in Validation of Software of the Coordinate Measuring Systems
NASA Astrophysics Data System (ADS)
Nieciąg, Halina
2015-10-01
Software is used to accomplish various tasks at each stage of the functioning of modern measuring systems. Before metrological confirmation of measuring equipment, the system has to be validated. This paper discusses a method for conducting validation studies of a fragment of software that calculates the values of measurands. Due to the number and nature of the variables affecting coordinate measurement results and the complex, multi-dimensional character of measurands, the study used the Monte Carlo method of numerical simulation. The article presents an attempt at a possible improvement of the results obtained by classic Monte Carlo tools. The LHS (Latin hypercube sampling) algorithm was implemented as an alternative to the simple random sampling scheme of the classic algorithm.
Flexible Space-Filling Designs for Complex System Simulations
2013-06-01
...interior of the experimental region and cannot fit higher-order models. We present a genetic algorithm that constructs space-filling designs with minimal correlations. Subject terms: computer experiments, design of experiments, genetic algorithm, Latin hypercube, response surface methodology, nearly orthogonal.
Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleijnen, J.P.C.; Helton, J.C.
1999-04-01
The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from the analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
Analysis of hepatitis C viral dynamics using Latin hypercube sampling
NASA Astrophysics Data System (ADS)
Pachpute, Gaurav; Chakrabarty, Siddhartha P.
2012-12-01
We consider a mathematical model comprising four coupled ordinary differential equations (ODEs) to study hepatitis C viral dynamics. The model includes the efficacies of a combination therapy of interferon and ribavirin. There are two main objectives of this paper. The first is to approximate the percentage of cases in which there is viral clearance in the absence of treatment, as well as the percentage of response to treatment for various efficacy levels. The other is to better understand and identify the parameters that play a key role in the decline of viral load and can be estimated in a clinical setting. A condition for the stability of the uninfected and the infected steady states is presented. A large number of sample points for the model parameters (which are physiologically feasible) are generated using Latin hypercube sampling. An analysis of the simulated values identifies that approximately 29.85% of cases result in clearance of the virus during the early phase of the infection. Results from the χ² and Spearman's tests performed on the samples indicate a distinctly different distribution for certain parameters in the cases exhibiting viral clearance under the combination therapy.
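The LHS-plus-rank-statistics workflow can be sketched on a simplified viral dynamics model (three ODEs rather than the paper's four; all parameter ranges, constants, and initial conditions below are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import spearmanr, qmc

# Simplified viral dynamics (three ODEs; the paper's model has four):
# target cells T, infected cells I, virus V, with overall treatment efficacy eps.
def hcv(t, y, beta, delta, p, c, eps):
    T, I, V = y
    s, d = 1e4, 0.01                    # target-cell production and death (assumed)
    return [s - d * T - beta * V * T,
            beta * V * T - delta * I,
            (1.0 - eps) * p * I - c * V]

names = ["beta", "delta", "p", "c", "eps"]
lo = np.array([1e-8, 0.05, 1.0, 2.0, 0.50])   # illustrative plausible ranges
hi = np.array([1e-6, 0.50, 10.0, 10.0, 0.99])
samples = qmc.scale(qmc.LatinHypercube(d=5, seed=1).random(200), lo, hi)

outcome = []
for th in samples:
    sol = solve_ivp(hcv, (0.0, 14.0), [1e6, 1e4, 1e6], args=tuple(th),
                    method="LSODA", rtol=1e-6, atol=1e-3)
    outcome.append(np.log10(max(sol.y[2, -1], 1e-3)))   # log10 viral load, day 14

# Spearman rank correlations flag the parameters driving viral-load decline.
for j, name in enumerate(names):
    rho, pval = spearmanr(samples[:, j], outcome)
    print(f"{name:>6}: rho = {rho:+.2f} (p = {pval:.1e})")
```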
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Mayes, Alexander; Jauriqui, Leanne; Biedermann, Eric; Heffernan, Julieanne; Livings, Richard; Goodlet, Brent; Mazdiyasni, Siamack
2018-04-01
A case study is presented evaluating uncertainty in resonance ultrasound spectroscopy (RUS) inversion for single-crystal (SX) Ni-based superalloy Mar-M247 cylindrical dog-bone specimens. A number of surrogate models were developed from FEM model solutions, using different sampling schemes (regular grid, Monte Carlo sampling, Latin hypercube sampling) and model approaches, N-dimensional cubic spline interpolation and Kriging. Repeated studies were used to quantify the well-posedness of the inversion problem, and the uncertainty in material property and crystallographic orientation estimates was assessed given typical geometric dimension variability in aerospace components. Surrogate model quality was found to be an important factor in inversion results when the model closely represents the test data. One important discovery was that when the model matches the test data well, a Kriging surrogate model using unsorted Latin-hypercube-sampled data performed as well as the best results from an N-dimensional interpolation model using sorted data. However, both surrogate model quality and mode sorting were found to be less critical when inverting properties from either experimental data or simulated test cases with uncontrolled geometric variation.
Chaudhary, Neha; Tøndel, Kristin; Bhatnagar, Rakesh; dos Santos, Vítor A P Martins; Puchałka, Jacek
2016-03-01
Genome-Scale Metabolic Reconstructions (GSMRs), along with optimization-based methods, predominantly Flux Balance Analysis (FBA) and its derivatives, are widely applied for assessing and predicting the behavior of metabolic networks upon perturbation, thereby enabling identification of potential novel drug targets and biotechnologically relevant pathways. The abundance of alternate flux profiles has led to the evolution of methods to explore the complete solution space, aiming to increase the accuracy of predictions. Herein we present a novel, generic algorithm to characterize the entire flux space of a GSMR upon application of FBA, leading to the optimal value of the objective (the optimal flux space). Our method employs modified Latin hypercube sampling (LHS) to effectively border the optimal space, followed by principal component analysis (PCA) to identify and explain the major sources of variability within it. The approach was validated with the elementary mode analysis of a smaller network of Saccharomyces cerevisiae and applied to the GSMR of Pseudomonas aeruginosa PAO1 (iMO1086). It is shown to surpass the commonly used Monte Carlo sampling (MCS) in providing more uniform coverage for a much larger network with fewer samples. Results show that although many fluxes are identified as variable upon fixing the objective value, the majority of the variability can be reduced to several main patterns arising from a few alternative pathways. In iMO1086, the initial variability of 211 reactions could almost entirely be explained by 7 alternative pathway groups. These findings imply that the possibilities to reroute greater portions of flux may be limited within metabolic networks of bacteria. Furthermore, the optimal flux space is subject to change with environmental conditions. Our method may be a useful device for validating the predictions made by FBA-based tools, by describing the optimal flux space associated with these predictions, and thus for improving them.
Stochastic Investigation of Natural Frequency for Functionally Graded Plates
NASA Astrophysics Data System (ADS)
Karsh, P. K.; Mukhopadhyay, T.; Dey, S.
2018-03-01
This paper presents the stochastic natural frequency analysis of functionally graded plates by applying an artificial neural network (ANN) approach. Latin hypercube sampling is utilised to train the ANN model. The proposed algorithm for stochastic natural frequency analysis of FGM plates is validated and verified against the original finite element method and Monte Carlo simulation (MCS). The combined stochastic variation of input parameters such as elastic modulus, shear modulus, Poisson's ratio, and mass density is considered. A power law is applied to distribute the material properties across the thickness. The present ANN model reduces the sample size and is found to be computationally efficient compared with conventional Monte Carlo simulation.
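The LHS-train-then-sample workflow looks roughly as follows (a hedged sketch: the finite element solver is replaced by an invented closed-form frequency function, and the property ranges are assumptions):

```python
import numpy as np
from scipy.stats import qmc
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Stand-in for the FE solver: an invented closed-form "fundamental frequency"
# as a function of (E, G, nu, rho); illustrative only, not plate theory.
def frequency(X):
    E, G, nu, rho = X.T
    D = E / (12.0 * (1.0 - nu ** 2))           # stiffness-like term
    return np.sqrt(D / rho) * (1.0 + 0.05 * G / E)

# LHS training design over an assumed +/-10% scatter around nominal properties.
nominal = np.array([70e9, 27e9, 0.33, 2700.0])
lo, hi = 0.9 * nominal, 1.1 * nominal
X = qmc.scale(qmc.LatinHypercube(d=4, seed=2).random(500), lo, hi)
y = frequency(X)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
scale = ytr.mean()                              # normalize target for training
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
ann.fit(Xtr / nominal, ytr / scale)
print("surrogate R^2 on held-out LHS points:",
      round(ann.score(Xte / nominal, yte / scale), 4))

# Monte Carlo on the cheap surrogate instead of the FE model:
mc = nominal * (1.0 + 0.05 * np.random.default_rng(3).standard_normal((100_000, 4)))
freqs = scale * ann.predict(mc / nominal)
print("predicted frequency: mean", freqs.mean(), "std", freqs.std())
```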
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of the literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has previously been made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this relative improvement decreases with an increasing number of sample points and input parameter dimensions. Since the computational time and effort for generating the sample designs in the two approaches are identical, the use of midpoint LHS as the initial design in OLHS is thus recommended.
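The difference between the two initial designs can be probed with a short experiment (a sketch, using the minimum pairwise distance as the space-filling measure; no optimization step is applied, so this compares only the initial designs that would be fed to OLHS):

```python
import numpy as np

def lhs_design(n, d, midpoint, rng):
    """LHS on [0,1)^d: midpoint=True centers points in their intervals
    (midpoint LHS); midpoint=False jitters them uniformly (random LHS)."""
    cols = []
    for _ in range(d):
        perm = rng.permutation(n)
        u = 0.5 if midpoint else rng.random(n)
        cols.append((perm + u) / n)
    return np.column_stack(cols)

def min_pairwise_distance(X):
    """Maximin criterion: the smallest pairwise distance (larger is better)."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return dist[np.triu_indices(len(X), k=1)].min()

rng = np.random.default_rng(0)
n, d, reps = 30, 2, 200
mid = [min_pairwise_distance(lhs_design(n, d, True, rng)) for _ in range(reps)]
rnd = [min_pairwise_distance(lhs_design(n, d, False, rng)) for _ in range(reps)]
print("midpoint LHS, mean maximin distance:", np.mean(mid))
print("random   LHS, mean maximin distance:", np.mean(rnd))
```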
Assessing performance and validating finite element simulations using probabilistic knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolin, Ronald M.; Rodriguez, E. A.
Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability that each event causes failure, along with the event's likelihood of occurrence, contributes to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur, while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
Probabilistic Modeling of Intracranial Pressure Effects on Optic Nerve Biomechanics
NASA Technical Reports Server (NTRS)
Ethier, C. R.; Feola, Andrew J.; Raykin, Julia; Myers, Jerry G.; Nelson, Emily S.; Samuels, Brian C.
2016-01-01
Altered intracranial pressure (ICP) is implicated in several ocular conditions: papilledema, glaucoma, and Visual Impairment and Intracranial Pressure (VIIP) syndrome. The biomechanical effects of altered ICP on optic nerve head (ONH) tissues in these conditions are uncertain but likely important. We have quantified ICP-induced deformations of ONH tissues, using finite element (FE) modeling and probabilistic modeling (Latin hypercube simulations (LHS)) to consider a range of tissue properties and relevant pressures.
Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H
2018-06-01
Optimization is an important aspect of natural product extractions. Herein, an alternative approach for optimization in extraction is proposed, namely Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines Latin hypercube sampling, the feasible ranges of the independent variables, Monte Carlo simulation, and threshold criteria for the response variables. The GLUE method is tested on three different techniques, including ultrasound-assisted, microwave-assisted, and supercritical CO2 assisted extraction, utilizing data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables through dotty plots; deal with an unlimited number of independent and response variables; consider combined multiple threshold criteria, which are subjective depending on the target of the investigation for the response variables; and provide a range of values, with their distribution, for the optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
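The four ingredients named above combine into a very small amount of code. In this hedged sketch the measured extraction response is replaced by an invented yield surface, and the variable ranges and the 90th-percentile threshold are arbitrary choices:

```python
import numpy as np
from scipy.stats import qmc

# Invented extraction-yield surface standing in for the measured response.
def yield_model(temp, time, solvent):
    return (80.0 - 0.02 * (temp - 55.0) ** 2 - 0.05 * (time - 30.0) ** 2
            + 5.0 * np.exp(-((solvent - 70.0) / 15.0) ** 2))

# 1) Latin hypercube sample over the feasible ranges of the variables.
lo, hi = np.array([30.0, 5.0, 40.0]), np.array([80.0, 60.0, 100.0])
X = qmc.scale(qmc.LatinHypercube(d=3, seed=4).random(5000), lo, hi)

# 2) Monte Carlo evaluation of the response variable.
y = yield_model(*X.T)

# 3) Threshold criterion: keep "behavioural" runs (here, the top decile).
keep = y >= np.percentile(y, 90)

# 4) Report each variable as a range with its distribution, not a point optimum.
for j, name in enumerate(["temperature", "time", "% solvent"]):
    v = X[keep, j]
    print(f"{name:>12}: {np.percentile(v, 2.5):6.1f} to {np.percentile(v, 97.5):6.1f}"
          f"  (median {np.median(v):6.1f})")
# A "dotty plot" is simply the scatter of X[:, j] against y for each variable.
```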
NASA Astrophysics Data System (ADS)
García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier
2016-04-01
Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and on the use of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor in the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. The sensitivity of each variable involved in the calibration was computed by the Sobol method, which is based on the analysis of the variance breakdown, and by the Fourier amplitude sensitivity test method, which is based on Fourier analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
Fault tree models for fault tolerant hypercube multiprocessors
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Tuazon, Jezus O.
1991-01-01
Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.
2017-02-01
This paper proposes an uncertain modelling and computational method to analyze the dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model of the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, uncertainty in the geometry of the rigid parts is expressed through uniform random variables, while the uncertainty in the material properties of the flexible parts is modeled as a continuous random field, which is further discretized into Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin hypercube sampling (LHS) and polynomial chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.
Least squares polynomial chaos expansion: A review of sampling strategies
NASA Astrophysics Data System (ADS)
Hadigol, Mohammad; Doostan, Alireza
2018-04-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison of the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order designs are employed and/or the oversampling ratio is low.
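As a concrete baseline from this menu, here is a hedged sketch of least squares PCE with plain LHS (an invented smooth model with two uniform inputs; Legendre basis, total order 6, oversampling ratio 2):

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.stats import qmc

# Invented smooth model with two independent uniform inputs on [-1, 1]^2.
f = lambda X: np.exp(0.5 * X[:, 0]) * np.sin(2.0 * X[:, 1])

def legendre_basis(X, order):
    """Total-order Legendre polynomial basis evaluated at X (n x 2)."""
    cols = []
    for i in range(order + 1):
        for j in range(order + 1 - i):
            Pi = legendre.legval(X[:, 0], np.eye(order + 1)[i])
            Pj = legendre.legval(X[:, 1], np.eye(order + 1)[j])
            cols.append(Pi * Pj)
    return np.column_stack(cols)

order = 6
n_basis = (order + 1) * (order + 2) // 2       # 28 terms in total order 6
n_sample = 2 * n_basis                         # oversampling ratio of 2
X = 2.0 * qmc.LatinHypercube(d=2, seed=0).random(n_sample) - 1.0
coef, *_ = np.linalg.lstsq(legendre_basis(X, order), f(X), rcond=None)

# Validate on an independent random set.
Xv = np.random.default_rng(1).uniform(-1.0, 1.0, (2000, 2))
err = (np.linalg.norm(legendre_basis(Xv, order) @ coef - f(Xv))
       / np.linalg.norm(f(Xv)))
print(f"{n_basis} basis terms, {n_sample} LHS samples, relative L2 error {err:.2e}")
```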
Scott, Sarah N.; Dodd, Amanda B.; Larsen, Marvin E.; ...
2014-12-09
In this study, polymer foam encapsulants provide mechanical, electrical, and thermal isolation in engineered systems. It can be advantageous to surround objects of interest, such as electronics, with foams in a hermetically sealed container in order to protect them from hostile environments or from accidents such as fire. In fire environments, gas pressure from thermal decomposition of foams can cause mechanical failure of sealed systems. In this work, a detailed uncertainty quantification study of polymeric methylene diisocyanate (PMDI)-polyether-polyol based polyurethane foam is presented and compared to experimental results to assess the validity of a 3-D finite element model of the heat transfer and degradation processes. In this series of experiments, 320 kg/m3 PMDI foam in a 0.2 L sealed steel container is heated to 1,073 K at a rate of 150 K/min. The experiment ends when the can breaches due to the buildup of pressure. The temperature at key locations is monitored, as well as the internal pressure of the can. Both experimental uncertainty and computational uncertainty are examined and compared. The mean value (MV) method and the Latin hypercube sampling (LHS) approach are used to propagate the uncertainty through the model. The results of both the MV method and the LHS approach show that while the model can generally predict the temperature at given locations in the system, it is less successful at predicting the pressure response. The two approaches for propagating uncertainty also agree with each other. The importance of each input parameter for the simulation results is investigated as well, showing that for the temperature response the conductivity of the steel container and the effective conductivity of the foam are the most important parameters, while for the pressure response the activation energy, effective conductivity, and specific heat are most important. The comparison to experiments and the identification of the drivers of uncertainty allow for targeted development of the computational model and for definition of the experiments necessary to improve accuracy.
Design optimization of hydraulic turbine draft tube based on CFD and DOE method
NASA Astrophysics Data System (ADS)
Nam, Mun chol; Dechun, Ba; Xiangji, Yue; Mingri, Jin
2018-03-01
In order to improve the performance of the hydraulic turbine draft tube during its design process, an optimization of the draft tube is performed on a multi-disciplinary collaborative design optimization platform, combining computational fluid dynamics (CFD) and design of experiments (DOE). The geometrical design variables are taken as the median section of the draft tube and the cross section of its exit diffuser, and the objective is to maximize the pressure recovery factor (Cp). The sample matrices required for the shape optimization of the draft tube are generated by the optimal Latin hypercube (OLH) method of the DOE technique, and their performances are evaluated through CFD numerical simulation. Subsequently, the main effect analysis and the sensitivity analysis of the geometrical parameters of the draft tube are performed. Then, the optimum of the geometrical design variables is determined using the response surface method. The optimization result of the draft tube shows a marked performance improvement over the original design.
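The OLH-plus-response-surface loop can be sketched in a few lines (hedged: the CFD solver is replaced by an invented Cp function of two normalized design variables, and SciPy's optimized Latin hypercube stands in for the OLH design; the optimization flag requires SciPy 1.8 or newer):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

# Stand-in for a CFD evaluation of the pressure recovery factor Cp over two
# normalized geometric variables (median section, exit cross section).
def cp_cfd(x):
    return (0.8 - 0.6 * (x[0] - 0.55) ** 2 - 0.9 * (x[1] - 0.40) ** 2
            + 0.2 * x[0] * x[1])

# DoE: optimized Latin hypercube sample of the design space (SciPy >= 1.8).
X = qmc.LatinHypercube(d=2, seed=6, optimization="random-cd").random(50)
y = np.array([cp_cfd(x) for x in X])

# Full second-order response surface: Cp ~ [1, x1, x2, x1*x2, x1^2, x2^2] . beta
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)
cp_rsm = lambda x: float(features(np.atleast_2d(x)) @ beta)

res = minimize(lambda x: -cp_rsm(x), x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
print("RSM optimum:", res.x, " predicted Cp:", -res.fun)
print("stand-in 'CFD' Cp at that design:", cp_cfd(res.x))
```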
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.
2015-10-01
In this study, a fractional factorial probabilistic collocation method is proposed to reveal the statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) containing only statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of the reduced PCEs is verified by comparison against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.
Validation of Heat Transfer Thermal Decomposition and Container Pressurization of Polyurethane Foam.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Sarah Nicole; Dodd, Amanda B.; Larsen, Marvin E.
Polymer foam encapsulants provide mechanical, electrical, and thermal isolation in engineered systems. In fire environments, gas pressure from thermal decomposition of polymers can cause mechanical failure of sealed systems. In this work, a detailed uncertainty quantification study of PMDI-based polyurethane foam is presented to assess the validity of the computational model. Both experimental measurement uncertainty and model prediction uncertainty are examined and compared. Both the mean value method and the Latin hypercube sampling approach are used to propagate the uncertainty through the model. In addition to comparing computational and experimental results, the importance of each input parameter on the simulation result is also investigated. These results show that further development in the physics model of the foam and appropriate associated material testing are necessary to improve model accuracy.
Multi-objective aerodynamic shape optimization of small livestock trailers
NASA Astrophysics Data System (ADS)
Gilkeson, C. A.; Toropov, V. V.; Thompson, H. M.; Wilson, M. C. T.; Foxley, N. A.; Gaskell, P. H.
2013-11-01
This article presents a formal optimization study of the design of small livestock trailers, within which the majority of animals are transported to market in the UK. The benefits of employing a headboard fairing to reduce aerodynamic drag without compromising the ventilation of the animals' microclimate are investigated using a multi-stage process involving computational fluid dynamics (CFD), optimal Latin hypercube (OLH) design of experiments (DoE) and moving least squares (MLS) metamodels. Fairings are parameterized in terms of three design variables and CFD solutions are obtained at 50 permutations of design variables. Both global and local search methods are employed to locate the global minimum from metamodels of the objective functions and a Pareto front is generated. The importance of carefully selecting an objective function is demonstrated and optimal fairing designs, offering drag reductions in excess of 5% without compromising animal ventilation, are presented.
An uncertainty analysis of the flood-stage upstream from a bridge.
Sowiński, M
2006-01-01
The paper begins with the formulation of the problem in the form of a general performance function. Next, the Latin hypercube sampling (LHS) technique, a modified version of the Monte Carlo method, is briefly described. The uncertainty analysis of the flood stage upstream from a bridge starts with a description of the hydraulic model. The model concept is based on the HEC-RAS model developed for subcritical flow under a bridge without piers, in which the energy equation is applied. The next section characterizes the basic variables, including a specification of their statistics (means and variances). The problem of correlated variables is then discussed and assumptions concerning correlation among basic variables are formulated. The analysis of results is based on LHS ranking lists obtained from the computer package UNCSAM. Results for two examples are given: one for independent and the other for correlated variables.
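The following sketch illustrates, under stated assumptions, an LHS reliability calculation for a performance function with correlated basic variables; the performance function and the variable statistics are hypothetical, and the correlation is induced with a Gaussian-copula Cholesky shortcut rather than the Iman-Conover reordering typically paired with LHS.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def g(x):
    # Hypothetical performance function: allowable stage minus computed
    # upstream stage (positive = safe); not the HEC-RAS energy relation.
    q, n_man, z = x[:, 0], x[:, 1], x[:, 2]
    return 6.0 - (0.003 * q + 45.0 * n_man + 0.5 * z)

n = 5000
u = (np.argsort(rng.random((n, 3)), axis=0) + rng.random((n, 3))) / n
w = norm.ppf(u)                      # LHS sample in standard-normal space

# Induce correlation between discharge and roughness with a Cholesky factor.
# This Gaussian-copula shortcut preserves the LHS stratification only
# approximately; Iman-Conover reordering would preserve it exactly.
C = np.array([[1.0, 0.6, 0.0],
              [0.6, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
z = w @ np.linalg.cholesky(C).T

mu = np.array([1200.0, 0.035, 1.0])  # assumed means of basic variables
sd = np.array([250.0, 0.005, 0.3])   # assumed standard deviations
x = mu + sd * z

pf = np.mean(g(x) < 0.0)             # probability of exceeding the flood stage
print(f"estimated failure probability: {pf:.4f}")
```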
A flexible importance sampling method for integrating subgrid processes
Raut, E. K.; Larson, V. E.
2016-01-29
Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). Here, the resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
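The category-weighted importance sampling idea can be sketched in a few lines: draw sample points from prescribed category densities q rather than the true category probabilities p, and correct with weights p/q. The four categories and the toy process rate below are invented for illustration and are not the eight SILHS categories.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical subgrid categories with their within-grid-box probabilities
# (stand-ins for the cloud/precipitation categories in SILHS).
p = np.array([0.05, 0.15, 0.30, 0.50])   # true category probabilities
q = np.array([0.40, 0.30, 0.20, 0.10])   # prescribed sampling densities

def process_rate(k, u):
    # Toy physical process rate within category k; u is a subgrid variate.
    return (k + 1.0) * u ** 2

n = 2000
cat = rng.choice(len(p), size=n, p=q)    # draw categories from q, not p
u = rng.random(n)
weights = p[cat] / q[cat]                # importance weights correct the bias
estimate = np.mean(weights * process_rate(cat, u))

# Reference value computed category by category: E[(k+1) u^2] = (k+1)/3.
exact = sum(p[k] * (k + 1.0) / 3.0 for k in range(len(p)))
print(f"importance-sampling estimate: {estimate:.4f} (exact {exact:.4f})")
```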
NASA Astrophysics Data System (ADS)
Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.
2017-12-01
Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model's results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit `equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against error metrics more specific to low flows. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin hypercube sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin hypercube experiment approach in single-metric assessed performance. However, it is also shown that the more comprehensive assessment has many merits, allowing for probabilistic model results, multi-objective optimisation, and better tailoring of the calibration for specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
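A minimal sketch of the comprehensive-assessment workflow, with a single linear reservoir standing in for GR4J and Nash-Sutcliffe efficiency as the error metric; the parameter ranges and the synthetic observations are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def runoff(rain, k, s0):
    # Toy single-linear-reservoir model (a stand-in for GR4J).
    q, s = np.empty_like(rain), s0
    for t, r in enumerate(rain):
        s += r
        q[t] = k * s
        s -= q[t]
    return q

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs.
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rain = rng.gamma(2.0, 2.0, size=200)
obs = runoff(rain, 0.3, 5.0) + rng.normal(0.0, 0.2, size=200)  # synthetic obs

# LHS over two parameters: recession k in (0.05, 0.95), initial store in (0, 20).
n = 500
u = (np.argsort(rng.random((n, 2)), axis=0) + rng.random((n, 2))) / n
params = np.column_stack([0.05 + 0.9 * u[:, 0], 20.0 * u[:, 1]])

scores = np.array([nse(obs, runoff(rain, k, s0)) for k, s0 in params])
best = params[np.argmax(scores)]
print(f"best NSE {scores.max():.3f} at k={best[0]:.3f}, s0={best[1]:.2f}")
print(f"behavioural sets (NSE > 0.7): {(scores > 0.7).sum()}")
```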
Sensitivity analysis of infectious disease models: methods, advances and their application
Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.
2013-01-01
Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and a macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, which is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
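A sketch of the Latin hypercube sampling-partial rank correlation coefficient (LHS-PRCC) computation mentioned above: rank-transform the LHS parameter sample and the output, regress out the other parameters, and correlate the residuals. The three-parameter toy output is an invented stand-in for a transmission-model response.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(5)

# LHS sample of three parameters and a toy model output (hypothetical
# stand-in for a response such as epidemic peak size).
n = 300
u = (np.argsort(rng.random((n, 3)), axis=0) + rng.random((n, 3))) / n
y = 2.0 * u[:, 0] - 1.5 * u[:, 1] ** 2 + 0.1 * rng.normal(size=n)

def prcc(X, y):
    R = rankdata(np.column_stack([X, y]), axis=0)
    Xr, yr = R[:, :-1], R[:, -1]
    out = []
    for j in range(Xr.shape[1]):
        others = np.column_stack([np.ones(len(yr)),
                                  np.delete(Xr, j, axis=1)])
        # Residuals after removing the (rank) influence of the other parameters.
        rx = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out.append(np.corrcoef(rx, ry)[0, 1])
    return np.array(out)

# Expect strong positive, strong negative, and near-zero coefficients.
print("PRCC:", np.round(prcc(u, y), 3))
```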
Agent-Based Modeling of China's Rural-Urban Migration and Social Network Structure.
Fu, Zhaohao; Hao, Lingxin
2018-01-15
We analyze China's rural-urban migration and endogenous social network structures using agent-based modeling. The agents from census micro data are located in their rural origin with an empirically estimated prior propensity to move. The population-scale social network is a hybrid one, combining observed family ties and locations of the origin with a parameter space calibrated from census, survey and aggregate data and sampled using a stepwise Latin Hypercube Sampling method. At monthly intervals, some agents migrate and these migratory acts change the social network by turning within-nonmigrant connections to between-migrant-nonmigrant connections, turning local connections to nonlocal connections, and adding among-migrant connections. In turn, the changing social network structure updates migratory propensities of those well-connected nonmigrants who become more likely to move. These two processes iterate over time. Using a core-periphery method developed from the k-core decomposition method, we identify and quantify the network structural changes and map these changes with the migration acceleration patterns. We conclude that network structural changes are essential for explaining migration acceleration observed in China during the 1995-2000 period.
Hypercube matrix computation task
NASA Technical Reports Server (NTRS)
Calalo, Ruel H.; Imbriale, William A.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Lyons, James R.; Manshadi, Farzin; Patterson, Jean E.
1988-01-01
A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, give a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort are summarized, including both new developments and results as well as work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18).
Optimization of High-Dimensional Functions through Hypercube Evaluation
Abiyev, Rahib H.; Tunay, Mustafa
2015-01-01
A novel learning algorithm for solving global numerical optimization problems is proposed. The proposed learning algorithm is an intensive stochastic search method based on evaluation and optimization of a hypercube, called the hypercube optimization (HO) algorithm. The HO algorithm comprises an initialization and evaluation process, a displacement-shrink process, and a searching space process. The initialization and evaluation process initializes the solution and evaluates the solutions in the given hypercube. The displacement-shrink process determines the displacement and evaluates objective functions using new points, and the search area process determines the next hypercube using certain rules and evaluates the new solutions. The algorithms for these processes are designed and presented in the paper. The designed HO algorithm is tested on specific benchmark functions. Simulations of the HO algorithm have been performed for optimization of functions of 1000, 5000, or even 10000 dimensions. The comparative simulation results with other approaches demonstrate that the proposed algorithm is a potential candidate for optimization of both low- and high-dimensional functions. PMID:26339237
Non-intrusive reduced order modeling of nonlinear problems using neural networks
NASA Astrophysics Data System (ADS)
Hesthaven, J. S.; Ubbiali, S.
2018-06-01
We develop a non-intrusive reduced basis (RB) method for parametrized steady-state partial differential equations (PDEs). The method extracts a reduced basis from a collection of high-fidelity solutions via a proper orthogonal decomposition (POD) and employs artificial neural networks (ANNs), particularly multi-layer perceptrons (MLPs), to accurately approximate the coefficients of the reduced model. The search for the optimal number of neurons and the minimum amount of training samples to avoid overfitting is carried out in the offline phase through an automatic routine, relying upon a joint use of the Latin hypercube sampling (LHS) and the Levenberg-Marquardt (LM) training algorithm. This guarantees a complete offline-online decoupling, leading to an efficient RB method - referred to as POD-NN - suitable also for general nonlinear problems with a non-affine parametric dependence. Numerical studies are presented for the nonlinear Poisson equation and for driven cavity viscous flows, modeled through the steady incompressible Navier-Stokes equations. Both physical and geometrical parametrizations are considered. Several results confirm the accuracy of the POD-NN method and show the substantial speed-up enabled at the online stage as compared to a traditional RB strategy.
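The offline POD step of the method described above can be sketched as follows; the snapshot family is an invented analytic stand-in for the high-fidelity PDE solutions, and the final regression from parameters to coefficients (the MLP in POD-NN) is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical snapshot set: high-fidelity solutions u(x; mu) for parameters
# mu drawn by 1-D LHS (stand-in for the PDE solver outputs).
nx, ns = 200, 40
x = np.linspace(0.0, 1.0, nx)
mu = (np.argsort(rng.random(ns)) + rng.random(ns)) / ns
snapshots = np.array([np.sin(np.pi * x * (1.0 + m)) * (1.0 + 0.5 * m)
                      for m in mu]).T                    # shape (nx, ns)

# POD: left singular vectors of the snapshot matrix form the reduced basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1
V = U[:, :r]                                             # reduced basis

# Reduced coefficients for each snapshot; in POD-NN an MLP is trained to map
# the parameter mu to these coefficients, replacing the projection online.
coeffs = V.T @ snapshots
recon = V @ coeffs
print(f"basis size r={r}, max reconstruction error "
      f"{np.abs(recon - snapshots).max():.2e}")
```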
Concurrent Image Processing Executive (CIPE)
NASA Technical Reports Server (NTRS)
Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.
1988-01-01
The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation, are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented.
Cheng, Xianfu; Lin, Yuqun
2014-01-01
The performance of the suspension system is one of the most important factors in vehicle design. For the double wishbone suspension system, conventional deterministic optimization does not consider any deviations of design parameters, so design sensitivity analysis and robust design optimization are proposed. In this study, the design parameters of the robust optimization are the positions of the key points, and the random factors are the uncertainties in manufacturing. A simplified model of the double wishbone suspension is established with the software ADAMS. Sensitivity analysis is utilized to determine the main design variables. Then, the simulation experiment is arranged and a Latin hypercube design is adopted to find the initial points. The Kriging model is employed to fit the mean and variance of the quality characteristics according to the simulation results. Further, a particle swarm optimization method based on simple PSO is applied and a tradeoff between the mean and deviation of performance is made to solve the robust optimization problem of the double wishbone suspension system.
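A minimal sketch of the Latin hypercube design plus Kriging surrogate step; a zero-mean simple-Kriging interpolator with a squared-exponential correlation stands in for the full Kriging model, and an analytic response replaces the ADAMS simulation.

```python
import numpy as np

rng = np.random.default_rng(7)

def f(x):
    # Hypothetical quality characteristic versus a normalized hard-point
    # coordinate (a stand-in for the ADAMS suspension response).
    return np.sin(6.0 * x) + 0.5 * x

# LHS design points in one dimension.
n = 12
xd = (np.argsort(rng.random(n)) + rng.random(n)) / n
yd = f(xd)

# Simple kriging (GP interpolation) with a squared-exponential correlation;
# theta and the nugget are assumed tuning values.
theta, nugget = 40.0, 1e-10
def corr(a, b):
    return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

R = corr(xd, xd) + nugget * np.eye(n)
alpha = np.linalg.solve(R, yd)

xq = np.linspace(0.0, 1.0, 5)
pred = corr(xq, xd) @ alpha
print(np.column_stack([xq, pred, f(xq)]))   # query, surrogate, truth
```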
Maximum projection designs for computer experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joseph, V. Roshan; Gul, Evren; Ba, Shan
Space-filling properties are important in designing computer experiments. The traditional maximin and minimax distance designs only consider space-filling in the full dimensional space. This can result in poor projections onto lower dimensional spaces, which is undesirable when only a few factors are active. Restricting maximin distance design to the class of Latin hypercubes can improve one-dimensional projections, but cannot guarantee good space-filling properties in larger subspaces. We propose designs that maximize space-filling properties on projections to all subsets of factors. We call our designs maximum projection designs. As a result, our design criterion can be computed at a cost no more than a design criterion that ignores projection properties.
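The criterion itself is easy to compute; the sketch below evaluates the maximum projection criterion (the average over point pairs of the reciprocal product of squared coordinate-wise distances, raised to the power 1/p) and keeps the best of many random Latin hypercubes as a crude baseline, whereas the paper optimizes the criterion directly.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(8)

def maxpro_crit(D):
    # Maximum projection criterion (to be minimized).
    n, p = D.shape
    total = 0.0
    for i, j in combinations(range(n), 2):
        total += 1.0 / np.prod((D[i] - D[j]) ** 2)
    return (total / (n * (n - 1) / 2.0)) ** (1.0 / p)

def random_lhs(n, p):
    return (np.argsort(rng.random((n, p)), axis=0) + rng.random((n, p))) / n

# Crude search: keep the best of many random Latin hypercubes; this is only
# a baseline illustration, not the optimization used in the paper.
best = min((random_lhs(10, 3) for _ in range(2000)), key=maxpro_crit)
print(f"best criterion value: {maxpro_crit(best):.3f}")
```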
Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions
NASA Astrophysics Data System (ADS)
Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter
2017-11-01
Amagat and Dalton mixing-models were studied to compare their thermodynamic predictions of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamics code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
NASA Astrophysics Data System (ADS)
Stumpf, Felix; Schönbrodt-Stitt, Sarah; Schmidt, Karsten; Behrens, Thorsten; Scholten, Thomas
2013-04-01
The Three Gorges Dam at the Yangtze River in Central China is a prominent example of human-induced environmental impacts. Throughout the year the water table of the main river fluctuates by about 30 m due to impoundment and drainage activities. The dynamic water table implies a range of georisks such as soil erosion, mass movements, sediment transport and diffuse matter inputs into the reservoir. Within the framework of the joint Sino-German project YANGTZE GEO, the subproject "Soil Erosion" deals with soil erosion risks and sediment transport pathways into the reservoir. The study site is a small catchment (4.8 km²) in Badong, approximately 100 km upstream of the dam. It is characterized by scattered plots of agricultural land use and resettlements in a largely wooded, steeply sloping and mountainous area. Our research is focused on data acquisition and processing to develop a process-oriented erosion model. Here, area-covering knowledge of specific soil properties in the catchment is an essential input parameter. This is acquired by means of digital soil mapping (DSM), whereby soil properties are estimated from covariates and the estimation functions are calibrated with soil property samples. The DSM approach is based on an appropriate sample design that reflects the heterogeneity of the catchment with regard to the covariates influencing the relevant soil properties. In this approach the covariates, derived by digital terrain analysis, are slope, altitude, profile curvature, plan curvature, and aspect. For the development of the sample design, we chose the conditioned Latin hypercube sampling (cLHS) procedure (Minasny and McBratney, 2006). It provides an efficient method of sampling variables from their multivariate distribution: a sample of size n is drawn from multiple variables such that for each variable the sample is marginally maximally stratified. The method ensures maximal stratification by two features: first, the number of strata equals the sample size n, and second, the probability of falling in each stratum is 1/n (McKay et al., 1979). We extended the classical cLHS-with-extremes approach (Schmidt et al., 2012) by incorporating legacy data from previous field campaigns. Instead of identifying precise sample locations by cLHS, we demarcate the multivariate attribute space of the samples based on the histogram borders of each stratum. This widens the spatial scope of the actual cLHS sample locations and allows the incorporation of legacy data lying within that scope. Furthermore, this approach provides extended flexibility regarding the accessibility of sample sites in the field.
Data Analysis Approaches for the Risk-Informed Safety Margins Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Alfonsi, Andrea; Maljovec, Daniel P.
2016-09-01
In the past decades, several numerical simulation codes have been employed to simulate accident dynamics (e.g., RELAP5-3D, RELAP-7, MELCOR, MAAP). In order to evaluate the impact of uncertainties on accident dynamics, several stochastic methodologies have been coupled with these codes. These stochastic methods range from classical Monte Carlo and Latin hypercube sampling to stochastic polynomial methods. Similar approaches have been introduced into the risk and safety community, where stochastic methods (such as RAVEN, ADAPT, MCDET, ADS) have been coupled with safety analysis codes in order to evaluate the safety impact of the timing and sequencing of events. These approaches are usually called dynamic PRA or simulation-based PRA methods. These uncertainty and safety methods usually generate a large number of simulation runs (database storage may be on the order of gigabytes or higher). The scope of this paper is to present a broad overview of methods and algorithms that can be used to analyze and extract information from large data sets containing time dependent data. In this context, “extracting information” means constructing input-output correlations, finding commonalities, and identifying outliers. Some of the algorithms presented here have been developed or are under development within the RAVEN statistical framework.
Kolokotroni, Eleni; Dionysiou, Dimitra; Veith, Christian; Kim, Yoo-Jin; Franz, Astrid; Grgic, Aleksandar; Bohle, Rainer M.; Stamatakos, Georgios
2016-01-01
The 5-year survival of non-small cell lung cancer patients can be as low as 1% in advanced stages. For patients with resectable disease, the successful choice of preoperative chemotherapy is critical to eliminate micrometastasis and improve operability. In silico experimentation can suggest the optimal treatment protocol for each patient based on their own multiscale data. A determinant for reliable predictions is the a priori estimation of the drugs’ cytotoxic efficacy on cancer cells for a given treatment. In the present work a mechanistic model of cancer response to treatment is applied for the estimation of a plausible value range of the cell-killing efficacy of various cisplatin-based doublet regimens. Among others, the model incorporates the cancer-related mechanisms of uncontrolled proliferation, population heterogeneity, hypoxia and treatment resistance. The methodology is based on the provision of tumor volumetric data at two time points, before and after or during treatment. It takes into account the effect of tumor microenvironment and cell repopulation on treatment outcome. A thorough sensitivity analysis based on one-factor-at-a-time and Latin hypercube sampling/partial rank correlation coefficient approaches has established the volume growth rate and the growth fraction at diagnosis as key features for more accurate estimates. The methodology is applied to the retrospective data of thirteen patients with non-small cell lung cancer who received cisplatin in combination with gemcitabine, vinorelbine or docetaxel in the neoadjuvant context. The selection of model input values has been guided by a comprehensive literature survey on cancer-specific proliferation kinetics. Latin hypercube sampling has been employed to compensate for patient-specific uncertainties. Concluding, the present work provides a quantitative framework for the estimation of the in-vivo cell-killing ability of various chemotherapies. Correlation studies of such estimates with the molecular profile of patients could serve as a basis for reliable personalized predictions. PMID:27657742
Hypercube matrix computation task
NASA Technical Reports Server (NTRS)
Calalo, R.; Imbriale, W.; Liewer, P.; Lyons, J.; Manshadi, F.; Patterson, J.
1987-01-01
The Hypercube Matrix Computation (1986-1987) task investigated the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Two existing electromagnetic scattering codes were selected for conversion to the Mark III Hypercube concurrent computing environment. They were selected so that the underlying numerical algorithms utilized would be different, thereby providing a more thorough evaluation of the appropriateness of the parallel environment for these types of problems. The first code was a frequency domain method of moments solution, NEC-2, developed at Lawrence Livermore National Laboratory. The second code was a time domain finite difference solution of Maxwell's equations to solve for the scattered fields. Once the codes were implemented on the hypercube and verified to obtain correct solutions by comparing the results with those from sequential runs, several measures were used to evaluate the performance of the two codes. First, the problem size possible on the hypercube with 128 megabytes of memory for a 32-node configuration was compared with that available in a typical sequential user environment of 4 to 8 megabytes. Then, the performance of the codes was analyzed for the computational speedup attained by the parallel architecture.
NASA Astrophysics Data System (ADS)
Wu, Jie; Yan, Quan-sheng; Li, Jian; Hu, Min-yi
2016-04-01
In bridge construction, geometry control is critical to ensure that the final constructed bridge has a shape consistent with the design. A common method is to predict the deflections of the bridge during each construction phase through the associated finite element models, so that the cambers of the bridge during the different construction phases can be determined beforehand. These finite element models are mostly based on the design drawings and nominal material properties. However, the errors of these bridge models can be large due to significant uncertainties in the actual properties of the materials used in construction. Therefore, the predicted cambers may not be accurate enough to ensure agreement of the bridge geometry with the design, especially for long-span bridges. In this paper, an improved geometry control method is described, which incorporates finite element (FE) model updating during the construction process based on measured bridge deflections. A method based on the Kriging model and Latin hypercube sampling is proposed to perform the FE model updating due to its simplicity and efficiency. The proposed method has been applied to a long-span continuous girder concrete bridge during its construction. Results show that the method is effective in reducing construction error and ensuring the accuracy of the geometry of the final constructed bridge.
Optimal cube-connected cube multiprocessors
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Wu, Jie
1993-01-01
Many CFD (computational fluid dynamics) and other scientific applications can be partitioned into subproblems. However, in general the partitioned subproblems are very large. They demand high performance computing power themselves, and the solutions of the subproblems have to be combined at each time step. The cube-connected cube (CCCube) architecture is studied. The CCCube architecture is an extended hypercube structure with each node represented as a cube. It requires fewer physical links between nodes than the hypercube, and provides the same communication support as the hypercube does on many applications. The reduced physical links can be used to enhance the bandwidth of the remaining links and, therefore, enhance the overall performance. The concept of optimal CCCubes, which are the CCCubes with a minimum number of links under a given total number of nodes, and the method to obtain them are proposed. The superiority of optimal CCCubes over standard hypercubes is also shown in terms of link usage in the embedding of a binomial tree. A useful computation structure based on a semi-binomial tree for divide-and-conquer type parallel algorithms is identified. It is shown that this structure can be implemented in optimal CCCubes without performance degradation compared with regular hypercubes. The results presented should provide a useful approach to the design of scientific parallel computers.
Detailed design of a lattice composite fuselage structure by a mixed optimization method
NASA Astrophysics Data System (ADS)
Liu, D.; Lohse-Busch, H.; Toropov, V.; Hühne, C.; Armani, U.
2016-10-01
In this article, a procedure for designing a lattice fuselage barrel is developed. It comprises three stages: first, topology optimization of an aircraft fuselage barrel is performed with respect to weight and structural performance to obtain the conceptual design. The interpretation of the optimal result is given to demonstrate the development of this new lattice airframe concept for the fuselage barrel. Subsequently, parametric optimization of the lattice aircraft fuselage barrel is carried out using genetic algorithms on metamodels generated with genetic programming from a 101-point optimal Latin hypercube design of experiments. The optimal design is achieved in terms of weight savings subject to stability, global stiffness and strain requirements, and then verified by the fine mesh finite element simulation of the lattice fuselage barrel. Finally, a practical design of the composite skin complying with the aircraft industry lay-up rules is presented. It is concluded that the mixed optimization method, combining topology optimization with the global metamodel-based approach, allows the problem to be solved with sufficient accuracy and provides the designers with a wealth of information on the structural behaviour of the novel anisogrid composite fuselage design.
Concurrent Image Processing Executive (CIPE). Volume 1: Design overview
NASA Technical Reports Server (NTRS)
Lee, Meemong; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.
1990-01-01
The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation are described. The target machine for this software is a JPL/Caltech Mark 3fp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3, Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.
Simulation of Fault Tolerance in a Hypercube Arrangement of Discrete Processors.
1987-12-01
[Garbled table-of-contents fragment; recoverable headings: Geometric Properties; Binary Properties; Intel Hypercube Hardware Arrangement; Cube-Connected Cycles, Properties of the CCC; CCC Redundancy; Re-Configurable Cube-Connected Cycles; List of Figures: Figure 1, Hypercubes of Different Dimensions; Figure 2, Hypercube Properties.]
Parameter identification and optimization of slide guide joint of CNC machine tools
NASA Astrophysics Data System (ADS)
Zhou, S.; Sun, B. B.
2017-11-01
The joint surface has an important influence on the performance of CNC machine tools. In order to identify the dynamic parameters of the slide guide joint, a parametric finite element model of the joint is established and an optimum design method is applied based on finite element simulation and modal testing. The mode that has the most influence on the dynamics of the slide joint is then found through harmonic response analysis. Taking the frequency of this mode as the objective, a sensitivity analysis of the stiffness of each joint surface is carried out using Latin hypercube sampling and Monte Carlo simulation. The results show that the vertical stiffness of the slide joint surface formed by the bed and the slide plate has the most pronounced influence on the structure. Therefore, this stiffness is taken as the optimization variable and its optimal value is obtained by studying the relationship between structural dynamic performance and stiffness. Substituting the stiffness values from before and after optimization into the FE model of the machine tool shows that the dynamic performance of the machine tool is improved by the optimization.
The Influence of PV Module Materials and Design on Solder Joint Thermal Fatigue Durability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bosco, Nick; Silverman, Timothy J.; Kurtz, Sarah
Finite element model (FEM) simulations have been performed to elucidate the effect of flat plate photovoltaic (PV) module materials and design on PbSn eutectic solder joint thermal fatigue durability. The statistical method of Latin hypercube sampling was employed to investigate the sensitivity of simulated damage to each input variable. Variables of laminate material properties and their thicknesses were investigated. Using analysis of variance, we determined that the rate of solder fatigue was most sensitive to solder layer thickness, with copper ribbon and silicon thickness being the next two most sensitive variables. By simulating both accelerated thermal cycles (ATCs) and PV cell temperature histories through two characteristic days of service, we determined that the acceleration factor between the ATC and outdoor service was independent of the variables sampled in this study. This result implies that an ATC test will represent a similar time of outdoor exposure for a wide range of module designs. This is an encouraging result for the standard ATC that must be universally applied across all modules.
NASA Astrophysics Data System (ADS)
Doležel, Jiří; Novák, Drahomír; Petrů, Jan
2017-09-01
Transportation routes for oversize and excessive loads are currently planned to ensure the transit of the vehicle through critical points on the road, such as level intersections and bridges. This article presents a comprehensive procedure to determine the reliability and load-bearing capacity level of existing bridges on highways and roads using advanced reliability analysis methods based on Monte Carlo-type simulation techniques in combination with nonlinear finite element analysis. The safety index, as described in current structural design standards such as ISO and the Eurocodes, is considered the main criterion of the reliability level of existing structures. An example is given of a single-span slab bridge made of precast prestressed concrete girders, currently 60 years old, whose load-bearing capacity is determined for the ultimate limit state and the serviceability limit state. The design load capacity of the structure was estimated by fully probabilistic nonlinear FEM analysis using the Latin hypercube sampling (LHS) simulation technique. Load-bearing capacity values based on the fully probabilistic analysis are compared with the load-bearing capacity levels estimated by deterministic methods for the critical section of the most heavily loaded girders.
Louri, A; Furlonge, S; Neocleous, C
1996-12-10
A prototype of a novel topology for scaleable optical interconnection networks called the optical multi-mesh hypercube (OMMH) is experimentally demonstrated at data rates as high as 150 Mbit/s (2^7 - 1 non-return-to-zero pseudo-random data pattern) at a bit error rate of 10^-13 per link by the use of commercially available devices. OMMH is a scaleable network [Appl. Opt. 33, 7558 (1994); J. Lightwave Technol. 12, 704 (1994)] architecture that combines the positive features of the hypercube (small diameter, connectivity, symmetry, simple routing, and fault tolerance) and the mesh (constant node degree and size scaleability). The optical implementation method is divided into two levels: high-density local connections for the hypercube modules, and high-bit-rate, low-density, long connections for the mesh links connecting the hypercube modules. Free-space imaging systems utilizing vertical-cavity surface-emitting laser (VCSEL) arrays, lenslet arrays, space-invariant holographic techniques, and photodiode arrays are demonstrated for the local connections. Optobus fiber interconnects from Motorola are used for the long-distance connections. The OMMH was optimized to operate at the data rate of Motorola's Optobus (10-bit-wide, VCSEL-based bidirectional data interconnects at 150 Mbit/s). Difficulties encountered included the varying fan-out efficiencies of the different orders of the hologram, the misalignment sensitivity of the free-space links, the low power (1 mW) of the individual VCSELs, and noise.
Design Optimization of a Centrifugal Fan with Splitter Blades
NASA Astrophysics Data System (ADS)
Heo, Man-Woong; Kim, Jin-Hyuk; Kim, Kwang-Yong
2015-05-01
Multi-objective optimization of a centrifugal fan with additionally installed splitter blades was performed to simultaneously maximize the efficiency and pressure rise using three-dimensional Reynolds-averaged Navier-Stokes equations and a hybrid multi-objective evolutionary algorithm. Two design variables, defining the location of the splitter and the height ratio between the inlet and outlet of the impeller, were selected for the optimization. In addition, the aerodynamic characteristics of the centrifugal fan were investigated with the variation of the design variables in the design space. Latin hypercube sampling was used to select the training points, and response surface approximation models were constructed as surrogate models of the objective functions. With the optimization, both the efficiency and pressure rise of the centrifugal fan with splitter blades were improved considerably compared to the reference model.
Estimation of the discharges of the multiple water level stations by multi-objective optimization
NASA Astrophysics Data System (ADS)
Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Yanami, Hitoshi; Anai, Hirokazu; Iwami, Yoichi
2016-04-01
This presentation addresses two aspects of parameter identification for estimating the discharges of multiple water level stations by multi-objective optimization: how to adjust the parameters to estimate the discharges accurately, and which optimization algorithms are suitable for the parameter identification. In previous studies, either the weighted error of the discharges of multiple water level stations was minimized by single-objective optimization, or multiple error assessment functions of the discharge of a single water level station were minimized by multi-objective optimization. The distinguishing feature of this work is that the errors of the discharges of multiple water level stations are minimized simultaneously by multi-objective optimization. The Abe River basin in Japan is targeted. The basin area is 567.0 km². There are thirteen rainfall stations and three water level stations. Nine flood events are investigated; they occurred from 2005 to 2012 and their maximum discharges exceed 1,000 m³/s. The discharges are calculated with the PWRI distributed hydrological model. The basin is partitioned into meshes of 500 m x 500 m, and two-layer tanks are placed on each mesh. Fourteen parameters are adjusted to estimate the discharges accurately: twelve hydrological parameters and two parameters for the initial water levels of the tanks. The three objective functions are the mean squared errors between the observed and calculated discharges at the water level stations. Latin hypercube sampling is a uniform sampling algorithm. When the discharges are calculated with parameter values sampled by a simplified version of Latin hypercube sampling, the observed discharge is enveloped by the calculated discharges, which suggests that it might be possible to estimate the discharge accurately by adjusting the parameters. The discharge at a given water level station can indeed be estimated accurately by using the parameter values optimized for that station. However, in some cases the discharge calculated with the parameter values optimized for one water level station does not match the observed discharge at another station. It is important to estimate the discharges of all the water level stations with some degree of accuracy. It turns out to be possible to select parameter values from the Pareto-optimal solutions under the condition that, at every station, the error normalized by the minimum error of that station is under 3, as sketched below. The optimization performance of five implementations of multi-objective algorithms and a simplified version of Latin hypercube sampling is compared. The five implementations are NSGA2 and PAES of the optimization software inspyred, and MCO_NSGA2R, MOPSOCD and NSGA2R_NSGA2R of the statistical software R. NSGA2, PAES and MOPSOCD are a genetic algorithm, an evolution strategy and a particle swarm optimization, respectively. The number of evaluations of the objective functions is 10,000. The two NSGA2 implementations in R outperform the others and are promising candidates for the parameter identification of the PWRI distributed hydrological model.
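The Pareto-selection rule described above can be sketched as follows; the error matrix is randomly generated in place of the PWRI model results, and the threshold of 3 on station-wise normalized errors is applied to the nondominated set.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical mean-squared errors of many candidate parameter sets at three
# water level stations (stand-ins for the PWRI model calibration results).
errors = rng.gamma(2.0, 1.0, size=(400, 3)) + 0.2

def pareto_front(E):
    # A point is dominated if another point is <= in all objectives and
    # strictly < in at least one.
    keep = []
    for i in range(len(E)):
        dominated = np.any(np.all(E <= E[i], axis=1) &
                           np.any(E < E[i], axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

front = pareto_front(errors)

# Selection rule: accept a Pareto-optimal set only if, at every station, its
# error normalized by that station's minimum error is under 3.
norm_err = errors[front] / errors.min(axis=0)
selected = front[np.all(norm_err < 3.0, axis=1)]
print(f"Pareto-optimal sets: {len(front)}, selected: {len(selected)}")
```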
Mapping implicit spectral methods to distributed memory architectures
NASA Technical Reports Server (NTRS)
Overman, Andrea L.; Vanrosendale, John
1991-01-01
Spectral methods have proven invaluable in the numerical simulation of PDEs (partial differential equations), but the frequent global communication they require raises a fundamental barrier to their use on highly parallel architectures. To explore this issue, a 3-D implicit spectral method was implemented on an Intel hypercube. Utilization of about 50 percent was achieved on a 32-node iPSC/860 hypercube for a 64 x 64 x 64 Fourier-spectral grid; finer grids yield higher utilizations. Chebyshev-spectral grids are more problematic, since plane-relaxation based multigrid is required. However, by using a semicoarsening multigrid algorithm, and by relaxing all multigrid levels concurrently, relatively high utilizations were also achieved in this harder case.
Implementation and Performance Analysis of Parallel Assignment Algorithms on a Hypercube Computer.
1987-12-01
...coupled processors because of the degree of interaction between processors imposed by the global memory [HwB84]. Another sub-class of MIMD... interaction between the individual processors [MuA87]. Many of the commercial MIMD computers available today are loosely coupled [HwB84]. 2.1.3 The Hypercube... Alpha-beta is a method usually employed in the solution of two-person zero-sum games like chess and checkers [Qui87]. The basic approach of the alpha...
Concurrent Image Processing Executive (CIPE). Volume 2: Programmer's guide
NASA Technical Reports Server (NTRS)
Williams, Winifred I.
1990-01-01
This manual is intended as a guide for application programmers using the Concurrent Image Processing Executive (CIPE). CIPE is intended to become the support system software for a prototype high performance science analysis workstation. In its current configuration CIPE utilizes a JPL/Caltech Mark 3fp Hypercube with a Sun-4 host. CIPE's design is capable of incorporating other concurrent architectures as well. CIPE provides a programming environment to applications' programmers to shield them from various user interfaces, file transactions, and architectural complexities. A programmer may choose to write applications to use only the Sun-4 or to use the Sun-4 with the hypercube. A hypercube program will use the hypercube's data processors and optionally the Weitek floating point accelerators. The CIPE programming environment provides a simple set of subroutines to activate user interface functions, specify data distributions, activate hypercube resident applications, and to communicate parameters to and from the hypercube.
A discrete random walk on the hypercube
NASA Astrophysics Data System (ADS)
Zhang, Jingyuan; Xiang, Yonghong; Sun, Weigang
2018-03-01
In this paper, we study the scaling of the mean first-passage time (MFPT) of random walks on the hypercube and obtain a closed-form formula for the MFPT over all node pairs. We also determine the exponent of scaling efficiency characterizing the random walks and compare it with those of existing networks. Finally, we study random walks on the hypercube with a located trap and provide a solution for the Kirchhoff index of the hypercube.
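A quick Monte Carlo cross-check of MFPT behavior on small hypercubes; this simulates the walk directly and does not reproduce the paper's closed-form formula.

```python
import numpy as np

rng = np.random.default_rng(10)

def mfpt_hypercube(n_dim, trials=5000):
    # Monte Carlo estimate of the mean first-passage time of an unbiased
    # random walk on the n-cube, from a uniformly random start node to the
    # all-zeros node (one uniformly chosen coordinate is flipped per step).
    steps = np.empty(trials)
    for t in range(trials):
        state = rng.integers(0, 2, size=n_dim)
        k = 0
        while state.any():
            state[rng.integers(n_dim)] ^= 1
            k += 1
        steps[t] = k
    return steps.mean()

for n in (3, 4, 5):
    print(f"n={n}: simulated MFPT from a random start = {mfpt_hypercube(n):.2f}")
```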
Stochastic Multi-Commodity Facility Location Based on a New Scenario Generation Technique
NASA Astrophysics Data System (ADS)
Mahootchi, M.; Fattahi, M.; Khakbazan, E.
2011-11-01
This paper extends two models for the stochastic multi-commodity facility location problem, formulated as two-stage stochastic programming. As the main contribution of this study, a new algorithm is applied to efficiently generate scenarios for uncertain correlated customers' demands. This algorithm uses Latin hypercube sampling (LHS) and a scenario reduction approach. The relation between customer satisfaction level and cost is considered in model I. A risk measure using Conditional Value-at-Risk (CVaR) is embedded in optimization model II. The network structure contains three facility layers: plants, distribution centers, and retailers. The first-stage decisions are the number, locations, and capacities of the distribution centers. In the second stage, the decisions are the production amounts and the transportation volumes between plants and customers.
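A hedged sketch of the scenario generation step: correlated demand scenarios are drawn by LHS through a Gaussian-copula shortcut, then reduced to a small representative set by simple clustering. The demand statistics are assumptions, and the clustering stands in for formal scenario-reduction algorithms.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

# LHS scenarios of correlated demands for three commodities.
n, d = 1000, 3
u = (np.argsort(rng.random((n, d)), axis=0) + rng.random((n, d))) / n
C = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])
z = norm.ppf(u) @ np.linalg.cholesky(C).T
demand = 100.0 + 15.0 * z            # assumed mean 100, std 15 per commodity

# Crude scenario reduction: cluster scenarios and keep the centroids with
# probabilities proportional to cluster sizes.
k = 10
centers = demand[rng.choice(n, k, replace=False)]
for _ in range(25):
    labels = np.argmin(((demand[:, None, :] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([demand[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j]
                        for j in range(k)])
probs = np.bincount(labels, minlength=k) / n
print("reduced scenarios:\n", np.round(centers, 1))
print("probabilities:", np.round(probs, 3))
```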
Matsuyama, Ryota; Lee, Hyojung; Yamaguchi, Takayuki; Tsuzuki, Shinya
2018-01-01
Background: A Rohingya refugee camp in Cox’s Bazar, Bangladesh experienced a large-scale diphtheria epidemic in 2017. The previously immune fraction among refugees cannot be explicitly estimated, and thus we conducted an uncertainty analysis of the basic reproduction number, R0. Methods: A renewal process model was devised to estimate R0 and the ascertainment rate of cases; the loss of susceptible individuals was modeled as one minus the sum of the initially immune fraction and the fraction naturally infected during the epidemic. To account for the uncertainty of the initially immune fraction, we employed a Latin hypercube sampling (LHS) method. Results: R0 ranged from 4.7 to 14.8 with the median estimate at 7.2. R0 was positively correlated with ascertainment rates. Sensitivity analysis indicated that R0 would become smaller with greater variance of the generation time. Discussion: The estimated R0 was broadly consistent with a published estimate from endemic data, indicating that a vaccination coverage of 86% has to be achieved to prevent the epidemic by means of mass vaccination. LHS was particularly useful in the setting of a refugee camp in which the background health status is poorly quantified. PMID:29629244
Huang, X N; Ren, H P
2016-05-13
Robust adaptation is a critical ability of a gene regulatory network (GRN) to survive in a fluctuating environment; it requires the system to respond to an input stimulus rapidly and then return to its pre-stimulus steady state in a timely manner. In this paper, the GRN is modeled using the Michaelis-Menten rate equations, which are highly nonlinear differential equations containing 12 undetermined parameters. Robust adaptation is quantitatively described by two conflicting indices. Identifying the parameter sets that confer robust adaptation on the GRN is a multi-variable, multi-objective, multi-peak optimization problem for which it is difficult to acquire satisfactory solutions, especially high-quality ones. A new best-neighbor particle swarm optimization algorithm is proposed to accomplish this task. The proposed algorithm employs a Latin hypercube sampling method to generate the initial population, and also uses a particle crossover operation and an elitist preservation strategy. The simulation results show that the proposed algorithm can identify multiple solutions in a single run. Moreover, it demonstrates superior performance compared to previous methods in detecting more high-quality solutions within an acceptable time. The proposed methodology, owing to its universality and simplicity, is useful for guiding the design of GRNs with superior robust adaptation.
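A minimal sketch of LHS initialization for a standard PSO (not the best-neighbor variant of the paper), minimizing a multi-peak test function in place of the two robust-adaptation indices:

```python
import numpy as np

rng = np.random.default_rng(12)

def objective(x):
    # Rastrigin-style multi-peak test function (a stand-in for the GRN
    # robust-adaptation indices).
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0, axis=-1)

n, d, lo, hi = 30, 4, -5.12, 5.12
# Latin hypercube initialization spreads the initial swarm over the space.
u = (np.argsort(rng.random((n, d)), axis=0) + rng.random((n, d))) / n
pos = lo + (hi - lo) * u
vel = np.zeros((n, d))
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_val)]

w, c1, c2 = 0.72, 1.49, 1.49          # standard PSO coefficients
for _ in range(200):
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(f"best objective after 200 iterations: {pbest_val.min():.4f}")
```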
Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems
NASA Astrophysics Data System (ADS)
Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros
2015-04-01
In hydrogeological applications involving flow and transport in heterogeneous porous media, the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields and hence can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternatives to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the surface of an M-dimensional, unit-radius hypersphere, (ii) relocating the N points onto a representative set of N hyperspheres of different radii, and (iii) transforming the coordinates of those points to lie on N different hyperellipsoids spanning the multivariate Gaussian distribution. The above method is applied in a dimensionality-reduction context by defining flow-controlling points over which representative sampling of hydraulic conductivity is performed, thus also accounting for the sensitivity of the flow and transport model to the input hydraulic conductivity field. The performance of the various stratified sampling methods, LH, SL, and ME, is compared to that of SR sampling in terms of reproduction of ensemble statistics of hydraulic conductivity and solute concentration for different sample sizes N (numbers of realizations). The results indicate that ME sampling constitutes an equally if not more efficient simulation method than LH and SL sampling, as it can reproduce to a similar extent statistics of the conductivity and concentration fields, yet with smaller sampling variability than SR sampling. References: [1] Gutjahr A.L. and Bras R.L. Spatial variability in subsurface flow and transport: A review. Reliability Engineering & System Safety, 42, 293-316, (1993). [2] Helton J.C. and Davis F.J. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering & System Safety, 81, 23-69, (2003). [3] Switzer P. Multiple simulation of spatial fields.
In: Heuvelink G, Lemmens M (eds) Proceedings of the 4th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Coronet Books Inc., pp 629?635 (2000).
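For readers who want to experiment with the geometric construction behind ME sampling, the following Python sketch implements the three listed steps for a generic multivariate Gaussian. It is only an illustration of the idea: the directions in step (i) are drawn at random rather than energy-optimized as in the paper, and the function name and toy covariance are ours, not the authors'.

    import numpy as np
    from scipy import stats

    def me_gaussian_samples(mean, cov, n, seed=0):
        """Sketch of the three ME steps for n representative draws from
        an M-dimensional Gaussian (random directions stand in for the
        paper's minimum-energy point set)."""
        m = len(mean)
        rng = np.random.default_rng(seed)
        # (i) points on the surface of an M-dimensional unit hyper-sphere
        u = rng.standard_normal((n, m))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        # (ii) representative radii: stratified quantiles of the chi
        # distribution, the radial law of a standard Gaussian
        r = stats.chi.ppf((np.arange(n) + 0.5) / n, df=m)
        rng.shuffle(r)
        # (iii) map the spheres onto hyper-ellipsoids of the target
        # Gaussian via the Cholesky factor of the covariance
        L = np.linalg.cholesky(cov)
        return np.asarray(mean) + (r[:, None] * u) @ L.T

    mu = np.zeros(4)  # toy log-conductivity field at 4 locations
    Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
    print(me_gaussian_samples(mu, Sigma, 8).shape)  # (8, 4)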
Parallel and fault-tolerant algorithms for hypercube multiprocessors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aykanat, C.
1988-01-01
Several techniques for increasing the performance of parallel algorithms on distributed-memory message-passing multiprocessor systems are investigated. These techniques are effectively implemented for the parallelization of the Scaled Conjugate Gradient (SCG) algorithm on a hypercube-connected message-passing multiprocessor. Significant performance improvement is achieved by using these techniques. The SCG algorithm is used for the solution phase of an FE modeling system. Almost linear speed-up is achieved, and it is shown that the hypercube topology is scalable for this FE class of problems. The SCG algorithm is also shown to be suitable for vectorization, and near-supercomputer performance is achieved on a vector hypercube multiprocessor by exploiting both parallelization and vectorization. Fault-tolerance issues for the parallel SCG algorithm and for the hypercube topology are also addressed.
Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event
Strydom, Gerhard
2013-01-01
The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady-state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and the use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.
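Tolerance statements of the 95%/95% kind reported here are conventionally justified with Wilks' order-statistic formula; the sketch below assumes the first-order, one-sided form (a standard result, not anything specific to SUSA) and shows why a few dozen SRS or LHS runs already support such a statement.

    def wilks_sample_size(gamma=0.95, beta=0.95):
        """Smallest n such that the largest of n random runs exceeds the
        gamma-quantile with confidence beta (first-order, one-sided)."""
        n = 1
        while 1.0 - gamma ** n < beta:
            n += 1
        return n

    print(wilks_sample_size())  # 59 runs for a one-sided 95%/95% statement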
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Zhenhua; Rose, Adam Z.; Prager, Fynnwin
The state of the art approach to economic consequence analysis (ECA) is computable general equilibrium (CGE) modeling. However, such models contain thousands of equations and cannot readily be incorporated into computerized systems used by policy analysts to yield estimates of economic impacts of various types of transportation system failures due to natural hazards, human-related attacks or technological accidents. This paper presents a reduced-form approach to simplify the analytical content of CGE models to make them more transparent and enhance their utilization potential. The reduced-form CGE analysis is conducted by first running simulations one hundred times, varying key parameters, such as magnitude of the initial shock, duration, location, remediation, and resilience, according to a Latin hypercube sampling procedure. Statistical analysis is then applied to the “synthetic data” results in the form of both ordinary least squares and quantile regression. The analysis yields linear equations that are incorporated into a computerized system and utilized along with Monte Carlo simulation methods for propagating uncertainties in economic consequences. Although our demonstration and discussion focuses on aviation system disruptions caused by terrorist attacks, the approach can be applied to a broad range of threat scenarios.
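A minimal sketch of the reduced-form step: fit ordinary least squares and median (quantile) regressions to outputs from an LHS design. The two parameter names and the linear "loss" response are placeholders for the paper's CGE simulations, and statsmodels is our choice of fitting library.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n, d = 100, 2
    # plain LHS on [0, 1]^2: one draw per stratum, independently permuted
    u = (np.stack([rng.permutation(n) for _ in range(d)], axis=1)
         + rng.random((n, d))) / n
    shock, duration = u.T                           # hypothetical key parameters
    loss = 5.0 * shock + 2.0 * duration + rng.normal(0, 0.3, n)  # stand-in output
    X = sm.add_constant(u)
    print(sm.OLS(loss, X).fit().params)             # reduced-form linear equation
    print(sm.QuantReg(loss, X).fit(q=0.5).params)   # median-regression variant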
On the effect of model parameters on forecast objects
NASA Astrophysics Data System (ADS)
Marzban, Caren; Jones, Corinne; Li, Ning; Sandgathe, Scott
2018-04-01
Many physics-based numerical models produce a gridded, spatial field of forecasts, e.g., a temperature map. The field for some quantities generally consists of spatially coherent and disconnected objects. Such objects arise in many problems, including precipitation forecasts in atmospheric models, eddy currents in ocean models, and models of forest fires. Certain features of these objects (e.g., location, size, intensity, and shape) are generally of interest. Here, a methodology is developed for assessing the impact of model parameters on the features of forecast objects. The main ingredients of the methodology include the use of (1) Latin hypercube sampling for varying the values of the model parameters, (2) statistical clustering algorithms for identifying objects, (3) multivariate multiple regression for assessing the impact of multiple model parameters on the distribution (across the forecast domain) of object features, and (4) methods for reducing the number of hypothesis tests and controlling the resulting errors. The final output of the methodology is a series of box plots and confidence intervals that visually display the sensitivities. The methodology is demonstrated on precipitation forecasts from a mesoscale numerical weather prediction model.
Dynamics of hepatitis C under optimal therapy and sampling based analysis
NASA Astrophysics Data System (ADS)
Pachpute, Gaurav; Chakrabarty, Siddhartha P.
2013-08-01
We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both models is determined using the steepest gradient method, by defining an objective functional which minimizes infected hepatocyte levels, virion population and side-effects of the drug(s). The optimal therapies for both models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by a drop in viral load below detection levels) in the case of combination therapy (72%) as compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and the time required for the viral load to fall below the detection level show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it as an important factor in deciding individual drug regimens.
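The monotonic-correlation test in the last sentence can be reproduced in miniature with a Spearman rank test over an LHS sample; the parameter range and the synthetic clearance-time relation below are illustrative only, not the paper's model.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n = 1000
    # one-dimensional LHS draw of a hypothetical death rate of infected
    # hepatocytes (range chosen for illustration)
    delta = 0.1 + 0.9 * (rng.permutation(n) + rng.random(n)) / n
    # stand-in for time-to-undetectable viral load: decreasing in delta
    t_clear = 200.0 / delta + rng.normal(0.0, 20.0, n)
    rho, p = stats.spearmanr(delta, t_clear)
    print(rho, p)  # strong negative monotonic correlation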
Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator
NASA Astrophysics Data System (ADS)
Rehman, Naveed Ur; Siddiqui, Mubashir Ali
2017-03-01
In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra; ...
2017-11-20
The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.
Development of building energy asset rating using stock modelling in the USA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Na; Goel, Supriya; Makhmalbaf, Atefe
2016-01-29
The US Building Energy Asset Score helps building stakeholders quickly gain insight into the efficiency of building systems (envelope, electrical and mechanical systems). A robust, easy-to-understand 10-point scoring system was developed to facilitate an unbiased comparison of similar building types across the country. The Asset Score does not rely on a database or specific building baselines to establish a rating. Rather, distributions of energy use intensity (EUI) for various building use types were constructed using Latin hypercube sampling and converted to a series of stepped linear scales to score buildings. A score is calculated based on the modelled source EUI after adjusting for climate. A web-based scoring tool, which incorporates an analytical engine and a simulation engine, was developed to standardize energy modelling and reduce implementation cost. This paper discusses the methodology used to perform several hundred thousand building simulation runs and develop the scoring scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.
2000-05-19
Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment for the Waste Isolation Pilot Plant are presented for two-phase flow in the vicinity of the repository under undisturbed conditions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformation are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure is potentially the most important due to its influence on spallings and direct brine releases, with the uncertainty in its value being dominated by the extent to which the microbial degradation of cellulose takes place, the rate at which the corrosion of steel takes place, and the amount of brine that drains from the surrounding disturbed rock zone into the repository.
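Rank-transformed partial correlation of the kind used in these analyses can be sketched compactly: rank every variable, regress the other inputs out of both the input of interest and the output, and correlate the residuals. This is a generic partial rank correlation coefficient (PRCC) routine, not the actual WIPP analysis code.

    import numpy as np
    from scipy import stats

    def prcc(X, y):
        """Partial rank correlation of each column of X with y."""
        R = np.column_stack([stats.rankdata(c) for c in X.T])
        ry = stats.rankdata(y)
        out = []
        for j in range(R.shape[1]):
            A = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
            rx_res = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
            ry_res = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
            out.append(np.corrcoef(rx_res, ry_res)[0, 1])
        return np.array(out)

    rng = np.random.default_rng(4)
    X = rng.random((200, 3))                        # e.g., LHS-sampled inputs
    y = 3 * X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.05, 200)
    print(prcc(X, y))                               # first input dominates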
Optimisation of lateral car dynamics taking into account parameter uncertainties
NASA Astrophysics Data System (ADS)
Busch, Jochen; Bestle, Dieter
2014-02-01
Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a high influence on lateral car dynamics. This motivates the need for robust design against such parameter uncertainties. A specific parametrisation is established combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem where especially the lateral steady-state behaviour is improved by an adaption strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which offers the possibility to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces and the achieved improvements confirm the validity of the proposed procedure.
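Generating normally distributed uncertainties by Latin hypercube sampling usually amounts to pushing a stratified uniform design through the Gaussian quantile function; the sketch below shows that step (without the "optimal" space-filling refinement the paper applies), with hypothetical means and standard deviations standing in for the vehicle parameters.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, d = 50, 3                 # 50 samples of 3 uncertain vehicle parameters
    # midpoint LHS on [0, 1]^d, then map each stratum to the normal scale
    u = (np.stack([rng.permutation(n) for _ in range(d)], axis=1) + 0.5) / n
    z = stats.norm.ppf(u)        # standard-normal LHS design
    samples = np.array([1500.0, 0.35, 2.7]) + z * np.array([50.0, 0.02, 0.1])
    print(samples[:2].round(3))  # means/std devs above are hypothetical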
Efficient hybrid evolutionary algorithm for optimization of a strip coiling process
NASA Astrophysics Data System (ADS)
Pholdee, Nantiwat; Park, Won-Woong; Kim, Dong-Kyu; Im, Yong-Taek; Bureerat, Sujin; Kwon, Hyuck-Cheol; Chun, Myung-Sik
2015-04-01
This article proposes an efficient metaheuristic based on hybridization of teaching-learning-based optimization and differential evolution to improve the flatness of a strip during a strip coiling process. Differential evolution operators were integrated into teaching-learning-based optimization, with a Latin hypercube sampling technique used for generation of the initial population. The objective function was introduced to reduce axial inhomogeneity of the stress distribution and the maximum compressive stress, calculated by Love's elastic solution within the thin strip, which may cause an irregular surface profile of the strip during the strip coiling process. The hybrid optimizer and several well-established evolutionary algorithms (EAs) were used to solve the optimization problem. The comparative studies show that the proposed hybrid algorithm outperformed other EAs in terms of convergence rate and consistency. It was found that the proposed hybrid approach was powerful for process optimization, especially for large-scale design problems.
NASA Technical Reports Server (NTRS)
Dongarra, Jack (Editor); Messina, Paul (Editor); Sorensen, Danny C. (Editor); Voigt, Robert G. (Editor)
1990-01-01
Attention is given to such topics as an evaluation of block algorithm variants in LAPACK, a large-grain parallel sparse system solver, a multiprocessor method for the solution of the generalized eigenvalue problem on an interval, and a parallel QR algorithm for iterative subspace methods on the CM-2. A discussion of numerical methods includes the topics of asynchronous numerical solutions of PDEs on parallel computers, parallel homotopy curve tracking on a hypercube, and solving Navier-Stokes equations on the Cedar Multi-Cluster system. A section on differential equations includes a discussion of a six-color procedure for the parallel solution of elliptic systems using the finite quadtree structure, data-parallel algorithms for the finite element method, and domain decomposition methods in aerodynamics. Topics dealing with massively parallel computing include hypercubes vs. 2-dimensional meshes and massively parallel computation of conservation laws. Performance and tools are also discussed.
A Cascade Approach to Uncertainty Estimation for the Hydrological Simulation of Droughts
NASA Astrophysics Data System (ADS)
Smith, Katie; Tanguy, Maliko; Parry, Simon; Prudhomme, Christel
2016-04-01
Uncertainty poses a significant challenge in environmental research, and the characterisation and quantification of uncertainty has become a research priority over the past decade. Studies of extreme events are particularly affected by issues of uncertainty. This study focusses on the sources of uncertainty in the modelling of streamflow droughts in the United Kingdom. Droughts are a poorly understood natural hazard with no universally accepted definition. Meteorological, hydrological and agricultural droughts have different meanings and vary both spatially and temporally, yet each is inextricably linked to the others. The work presented here is part of two extensive interdisciplinary projects investigating drought reconstruction and drought forecasting capabilities in the UK. Lumped catchment models are applied to simulate streamflow drought, and uncertainties from five different sources are investigated: climate input data, potential evapotranspiration (PET) method, hydrological model, within-model structure, and model parameterisation. Latin hypercube sampling is applied to develop large parameter ensembles for each model structure, which are run using parallel computing on a high-performance computer cluster. Parameterisations are assessed using a multi-objective evaluation criterion which includes both general and drought performance metrics. The effect of different climate input data and PET methods on model output is then considered using the accepted model parameterisations. The uncertainty from each of the sources creates a cascade, and when presented as such, the relative importance of each aspect of uncertainty can be determined.
NASA Astrophysics Data System (ADS)
Alfano, M.; Bisagni, C.
2017-01-01
The objective of the running EU project DESICOS (New Robust DESign Guideline for Imperfection Sensitive COmposite Launcher Structures) is to formulate an improved shell design methodology in order to meet the demand of the aerospace industry for lighter structures. Within the project, this article discusses a probability-based methodology developed at Politecnico di Milano. It is based on the combination of the Stress-Strength Interference Method and the Latin Hypercube Method, with the aim of predicting the buckling response of three sandwich composite cylindrical shells, assuming a loading condition of pure compression. The three shells are made of the same material, but have different stacking sequences and geometric dimensions. One of them has three circular cut-outs. Different types of input imperfections, treated as random variables, are taken into account independently and in combination: variability in longitudinal Young's modulus, ply misalignment, geometric imperfections, and boundary imperfections. The methodology enables a first assessment of the structural reliability of the shells through the calculation of a probabilistic buckling factor for a specified level of probability. The factor depends strongly on the reliability level, on the number of adopted samples, and on the assumptions made in modeling the input imperfections. The main advantage of the developed procedure is its versatility, as it can be applied to the buckling analysis of laminated composite shells and sandwich composite shells including different types of imperfections.
A parallel simulated annealing algorithm for standard cell placement on a hypercube computer
NASA Technical Reports Server (NTRS)
Jones, Mark Howard
1987-01-01
A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
Benchmarking hypercube hardware and software
NASA Technical Reports Server (NTRS)
Grunwald, Dirk C.; Reed, Daniel A.
1986-01-01
It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.
NASA Astrophysics Data System (ADS)
Sugiyanto, S.; Hardyanto, W.; Marwoto, P.
2018-03-01
Transport phenomena are found in many problems in engineering and industrial sectors. We analyzed a Lattice Boltzmann method with Two-Relaxation-Time (LTRT) collision operators for simulation of a pollutant moving through a medium as a two-dimensional (2D) transport problem in a rectangular region model. This model consists of a 2D rectangular region of length 54 (x) and width 27 (y) with an isotropic, homogeneous medium. Initially, the concentration is zero, distributed evenly throughout the region of interest. A concentration of 1 is maintained at 9 < y < 18, whereas a concentration of zero is maintained at 0 < y < 9 and 18 < y < 27. A specific discharge (Darcy velocity) of 1.006 is assumed. A diffusion coefficient of 0.8333 is distributed uniformly with a uniform porosity of 0.35. A computer program is written in MATLAB to compute the concentration of pollutant at any specified place and time. The program shows that the LTRT solution with quadratic equilibrium distribution functions (EDFs) and relaxation time τa = 1.0 is in good agreement with other numerical solution methods such as 3DLEWASTE (Hybrid Three-dimensional Lagrangian-Eulerian Finite Element Model of Waste Transport Through Saturated-Unsaturated Media) obtained by Yeh and 3DFEMWATER-LHS (Three-dimensional Finite Element Model of Water Flow Through Saturated-Unsaturated Media with Latin Hypercube Sampling) obtained by Hardyanto.
Physically motivated correlation formalism in hyperspectral imaging
NASA Astrophysics Data System (ADS)
Roy, Ankita; Rafert, J. Bruce
2004-05-01
Most remote sensing data sets contain a limited number of independent spatial and spectral measurements, beyond which no effective increase in information is achieved. This paper presents a Physically Motivated Correlation Formalism (PMCF), which places both spatial and spectral data on an equivalent mathematical footing in the context of a specific kernel, such that optimal combinations of independent data can be selected from the entire hypercube via the method of "Correlation Moments". We present an experimental and computational analysis of hyperspectral data sets using the Michigan Tech VFTHSI [Visible Fourier Transform Hyperspectral Imager] based on a Sagnac interferometer, adjusted to obtain high SNR levels. The captured signal interferograms of different targets - aerial snaps of Houghton and lab-based data (white light, He-Ne laser, and discharge tube sources), acquired with customized scans of the targets at the same exposures - are processed using inverse imaging transformations and filtering techniques to obtain the spectral profiles and to generate hypercubes for computing spectral, spatial, and cross moments. PMCF answers the question of how the entire hypercube should be sampled optimally and finds how many spatial-spectral pixels are required for recognition of a particular target.
NASA Astrophysics Data System (ADS)
Pei, Ji; Wang, Wenjie; Yuan, Shouqi; Zhang, Jinfeng
2016-09-01
In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process considering efficiencies under 1.0Qd and 1.4Qd is proposed. Three parameters, namely, the blade outlet width b2, blade outlet angle β2, and blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated using the software CFX 14.5 at the two operating points selected as objectives. Surrogate models are also constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of the impeller parameters. The results show that the performance curve predicted by numerical simulation is in good agreement with the experimental results. Compared with the efficiencies of the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% under 1.0Qd and 1.4Qd, respectively. A comparison of the inner flow between the original pump and the optimized one illustrates the improvement in performance. The optimization process can provide a useful reference for performance improvement of other pumps, and even for reduction of pressure fluctuations.
Beddows, Andrew V; Kitwiroon, Nutthida; Williams, Martin L; Beevers, Sean D
2017-06-06
Gaussian process emulation techniques have been used with the Community Multiscale Air Quality model, simulating the effects of input uncertainties on ozone and NO2 output, to allow robust global sensitivity analysis (SA). A screening process ranked the effect of perturbations in 223 inputs, isolating the 30 most influential from emissions, boundary conditions (BCs), and reaction rates. Community Multiscale Air Quality (CMAQ) simulations of a July 2006 ozone pollution episode in the UK were made with input values for these variables plus ozone dry deposition velocity chosen according to a 576-point Latin hypercube design. Emulators trained on the output of these runs were used in variance-based SA of the model output to input uncertainties. Performing these analyses for every hour of a 21 day period spanning the episode and several days on either side allowed the results to be presented as a time series of sensitivity coefficients, showing how the influence of different input uncertainties changed during the episode. This is one of the most complex models to which these methods have been applied, and here, they reveal detailed spatiotemporal patterns of model sensitivities, with NO and isoprene emissions, NO2 photolysis, ozone BCs, and deposition velocity being among the most influential input uncertainties.
Matsuyama, Ryota; Akhmetzhanov, Andrei R; Endo, Akira; Lee, Hyojung; Yamaguchi, Takayuki; Tsuzuki, Shinya; Nishiura, Hiroshi
2018-01-01
A Rohingya refugee camp in Cox's Bazar, Bangladesh experienced a large-scale diphtheria epidemic in 2017. The previously immune fraction among refugees cannot be explicitly estimated from background information, and thus we conducted an uncertainty analysis of the basic reproduction number, R0. A renewal process model was devised to estimate the R0 and ascertainment rate of cases, and the loss of susceptible individuals was modeled as one minus the sum of the initially immune fraction and the fraction naturally infected during the epidemic. To account for the uncertainty in the initially immune fraction, we employed a Latin hypercube sampling (LHS) method. R0 ranged from 4.7 to 14.8 with the median estimate at 7.2. R0 was positively correlated with ascertainment rates. Sensitivity analysis indicated that R0 would become smaller with greater variance of the generation time. The estimated R0 was broadly consistent with a published estimate from endemic data, indicating that a vaccination coverage of 86% has to be attained to prevent the epidemic by means of mass vaccination. LHS was particularly useful in the setting of a refugee camp in which the background health status is poorly quantified.
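The quoted 86% coverage follows from the classical herd-immunity threshold 1 - 1/R0 evaluated at the median estimate; the one-line check below uses that standard formula, which is our inference rather than a statement of the paper's exact derivation.

    R0 = 7.2                  # median estimate from the LHS analysis
    print(1.0 - 1.0 / R0)     # ≈ 0.861, i.e., the quoted 86% coverage target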
A smart Monte Carlo procedure for production costing and uncertainty analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, C.; Stremel, J.
1996-11-01
Electric utilities using chronological production costing models to decide whether to buy or sell power over the next week or next few weeks need to determine potential profits or losses under a number of uncertainties. A large amount of money can be at stake--often $100,000 a day or more--and one party to the sale must always take on the risk. In the case of fixed price ($/MWh) contracts, the seller accepts the risk. In the case of cost-plus contracts, the buyer must accept the risk. So, modeling uncertainty and understanding the risk accurately can improve the competitive edge of the user. This paper investigates an efficient procedure for representing risks and costs from capacity outages. Typically, production costing models use an algorithm based on some form of random number generator to select resources as available or on outage. These algorithms allow experiments to be repeated and gains and losses to be observed in a short time. The authors perform several experiments to examine the capability of three unit-outage selection methods and measure their results. Specifically, a brute-force Monte Carlo procedure, a Monte Carlo procedure with Latin hypercube sampling, and a smart Monte Carlo procedure with cost stratification and directed sampling are examined.
NASA Astrophysics Data System (ADS)
Snow, Michael G.; Bajaj, Anil K.
2015-08-01
This work presents an uncertainty quantification (UQ) analysis of a comprehensive model for an electrostatically actuated microelectromechanical system (MEMS) switch. The goal is to elucidate the effects of parameter variations on certain key performance characteristics of the switch. A sufficiently detailed model of the electrostatically actuated switch in the basic configuration of a clamped-clamped beam is developed. This multi-physics model accounts for various physical effects, including the electrostatic fringing field, finite length of electrodes, squeeze film damping, and contact between the beam and the dielectric layer. The performance characteristics of immediate interest are the static and dynamic pull-in voltages for the switch. Numerical approaches for evaluating these characteristics are developed and described. Using Latin Hypercube Sampling and other sampling methods, the model is evaluated to find these performance characteristics when variability in the model's geometric and physical parameters is specified. Response surfaces of these results are constructed via a Multivariate Adaptive Regression Splines (MARS) technique. Using a Direct Simulation Monte Carlo (DSMC) technique on these response surfaces gives smooth probability density functions (PDFs) of the outputs characteristics when input probability characteristics are specified. The relative variation in the two pull-in voltages due to each of the input parameters is used to determine the critical parameters.
The Mark III Hypercube-Ensemble Computers
NASA Technical Reports Server (NTRS)
Peterson, John C.; Tuazon, Jesus O.; Lieberman, Don; Pniel, Moshe
1988-01-01
The Mark III Hypercube concept is applied in the development of a series of increasingly powerful computers. The processor at each node of a Mark III Hypercube ensemble is a specialized computer containing three subprocessors and shared main memory. The ensemble solves a problem quickly by simultaneously processing part of the problem at each node and passing the combined results to a host computer. Disciplines benefitting from the speed and memory capacity include astrophysics, geophysics, chemistry, weather, high-energy physics, applied mechanics, image processing, oil exploration, aircraft design, and microcircuit design.
NASA Technical Reports Server (NTRS)
Sargent, Jeff Scott
1988-01-01
A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than a previous program developed for the Hypercube, while producing placement of equivalent quality. An integrated place-and-route program for the Intel iPSC/2 Hypercube is currently being developed.
Hyperswitch Network For Hypercube Computer
NASA Technical Reports Server (NTRS)
Chow, Edward; Madan, Herbert; Peterson, John
1989-01-01
Data-driven dynamic switching enables high-speed data transfer. The proposed hyperswitch network is based on mixed static and dynamic topologies. The routing header is modified in response to congestion or faults encountered as the path is established. A static topology meets this requirement if the nodes have switching elements that perform the necessary routing-header revisions dynamically. The hypercube topology is now being implemented with a switching element in each computer node, aimed at designing a very richly interconnected multicomputer system. The interconnection network connects a great number of small computer nodes, using a fixed hypercube topology characterized by point-to-point links between nodes.
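The point-to-point structure such routers exploit is easy to state in code: in a d-dimensional hypercube, a node's neighbors are the addresses obtained by flipping one bit, so a route can be built by correcting differing address bits one at a time. The helpers below are a generic illustration of that topology, not the hyperswitch algorithm itself.

    def hypercube_neighbors(node: int, dim: int) -> list[int]:
        """Direct neighbors of a node: flip one address bit per link."""
        return [node ^ (1 << k) for k in range(dim)]

    def dimension_order_route(src: int, dst: int, dim: int) -> list[int]:
        """One shortest path, fixing differing bits low-to-high (e-cube)."""
        path, cur = [src], src
        for k in range(dim):
            if (cur ^ dst) & (1 << k):
                cur ^= 1 << k
                path.append(cur)
        return path

    print(hypercube_neighbors(5, 4))        # [4, 7, 1, 13]
    print(dimension_order_route(5, 10, 4))  # [5, 4, 6, 2, 10]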
Ordered fast Fourier transforms on a massively parallel hypercube multiprocessor
NASA Technical Reports Server (NTRS)
Tong, Charles; Swarztrauber, Paul N.
1991-01-01
The present evaluation of alternative designs for ordered radix-2 decimation-in-frequency FFT algorithms on massively parallel hypercube processors gives attention to reducing communication, which dominates computation time. A combination of the order and computational phases of the FFT is accordingly employed, in conjunction with sequence-to-processor maps which reduce communication. Two orderings, 'standard' and 'cyclic', in which the order of the transform is the same as that of the input sequence, can be implemented with ease on the Connection Machine (where orderings are determined by geometries and priorities). A parallel method for trigonometric coefficient computation is presented which does not employ trigonometric functions or interprocessor communication.
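A common sequence-to-processor map with the communication-reducing property mentioned here is the binary-reflected Gray code, under which consecutive sequence indices sit on directly connected nodes; the snippet shows the map as a generic illustration, with no claim that it matches the paper's 'standard' or 'cyclic' orderings.

    def gray(i: int) -> int:
        """Binary-reflected Gray code: gray(i) and gray(i+1) differ in one
        bit, so consecutive indices map to adjacent hypercube nodes."""
        return i ^ (i >> 1)

    print([gray(i) for i in range(8)])  # [0, 1, 3, 2, 6, 7, 5, 4]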
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S. M.; Kim, K. Y.
The printed circuit heat exchanger (PCHE) has recently been considered as a recuperator for the high-temperature gas-cooled reactor. In this work, the zigzag channels of a PCHE have been optimized by using three-dimensional Reynolds-Averaged Navier-Stokes (RANS) analysis and the response surface approximation (RSA) modeling technique to enhance thermal-hydraulic performance. The shear stress transport turbulence model is used as a turbulence closure. The objective function is defined as a linear combination of functions related to heat transfer and friction loss of the PCHE, respectively. Three geometric design variables, viz., the ratio of the radius of the fillet to the hydraulic diameter of the channels, the ratio of wavelength to the hydraulic diameter of the channels, and the ratio of wave height to the hydraulic diameter of the channels, are used for the optimization. Design points are selected through Latin hypercube sampling. The optimal design is determined through the RSA model, which uses RANS calculations at the design points. The results show that the optimum shape considerably enhances the thermal-hydraulic performance compared with a reference shape.
Gurkan-Cavusoglu, Evren; Avadhani, Sriya; Liu, Lili; Kinsella, Timothy J; Loparo, Kenneth A
2013-04-01
Base excision repair (BER) is a major DNA repair pathway involved in the processing of exogenous non-bulky base damages from certain classes of cancer chemotherapy drugs as well as ionising radiation (IR). Methoxyamine (MX) is a small molecule chemical inhibitor of BER that is shown to enhance chemotherapy and/or IR cytotoxicity in human cancers. In this study, the authors have analysed the inhibitory effect of MX on the BER pathway kinetics using a computational model of the repair pathway. The inhibitory effect of MX depends on the BER efficiency. The authors have generated variable efficiency groups using different sets of protein concentrations generated by Latin hypercube sampling, and they have clustered simulation results into high, medium and low efficiency repair groups. From analysis of the inhibitory effect of MX on each of the three groups, it is found that the inhibition is most effective for high efficiency BER, and least effective for low efficiency repair.
NASA Astrophysics Data System (ADS)
Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin
2017-01-01
Engineering design often involves different types of simulation, which results in expensive computational costs. Variable-fidelity approximation-based design optimization approaches can realize effective simulation and efficient optimization of the design space using approximation models with different levels of fidelity, and they have been widely used in different fields. As the foundation of variable-fidelity approximation models, the selection of sample points, called nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for a low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for a high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
Pu239 Cross-Section Variations Based on Experimental Uncertainties and Covariances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sigeti, David Edward; Williams, Brian J.; Parsons, D. Kent
2016-10-18
Algorithms and software have been developed for producing variations in plutonium-239 neutron cross sections based on experimental uncertainties and covariances. The varied cross-section sets may be produced as random samples from the multivariate normal distribution defined by an experimental mean vector and covariance matrix, or they may be produced as Latin-Hypercube/Orthogonal-Array samples (based on the same means and covariances) for use in parametrized studies. The variations obey two classes of constraints that are obligatory for cross-section sets and which put related constraints on the mean vector and covariance matrix that determine the sampling. Because the experimental means and covariances do not obey some of these constraints to sufficient precision, imposing the constraints requires modifying the experimental mean vector and covariance matrix. Modification is done with an algorithm based on linear algebra that minimizes changes to the means and covariances while ensuring that the operations that impose the different constraints do not conflict with each other.
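The random-sample branch of such a procedure reduces to drawing from a multivariate normal via a Cholesky factor of the covariance; the sketch below shows only that step. The three-group means and covariance are invented stand-ins, and the constraint-enforcement step described in the abstract is deliberately omitted.

    import numpy as np

    rng = np.random.default_rng(7)
    mu = np.array([1.2, 0.8, 2.1])              # hypothetical group means
    C = np.array([[0.010, 0.002, 0.000],
                  [0.002, 0.008, 0.001],
                  [0.000, 0.001, 0.020]])       # hypothetical covariance matrix
    L = np.linalg.cholesky(C)
    draws = mu + rng.standard_normal((1000, 3)) @ L.T   # varied cross-section sets
    print(draws.mean(axis=0))                   # approaches mu
    print(np.cov(draws.T))                      # approaches C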
High-Accuracy Finite Element Method: Benchmark Calculations
NASA Astrophysics Data System (ADS)
Gusev, Alexander; Vinitsky, Sergue; Chuluunbaatar, Ochbadrakh; Chuluunbaatar, Galmandakh; Gerdt, Vladimir; Derbov, Vladimir; Góźdź, Andrzej; Krassovitskiy, Pavel
2018-02-01
We describe a new high-accuracy finite element scheme with simplex elements for solving the elliptic boundary-value problems and show its efficiency on benchmark solutions of the Helmholtz equation for the triangle membrane and hypercube.
Determining Reduced Order Models for Optimal Stochastic Reduced Order Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonney, Matthew S.; Brake, Matthew R.W.
2015-08-01
The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against each other and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction having the least accurate results. The models are also compared based on the time required for the evaluation of each model, where the meta-model requires the least amount of computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use is dependent on the availability of the high-fidelity model and how many evaluations can be performed. Analysis of the output distribution is examined by using a large Monte Carlo simulation along with a reduced simulation using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to exhaustive sampling for the majority of methods.
Using Response Surface Methods to Correlate the Modal Test of an Inflatable Test Article
NASA Technical Reports Server (NTRS)
Gupta, Anju
2013-01-01
This paper presents a practical application of response surface methods (RSM) to correlate a finite element model of a structural modal test. The test article is a quasi-cylindrical inflatable structure which primarily consists of a fabric weave, with an internal bladder and metallic bulkheads on either end. To mitigate model size, the fabric weave was simplified by representing it with shell elements. The task at hand is to represent the material behavior of the weave. The success of the model correlation is measured by comparing the four major modal frequencies of the analysis model to the four major modal frequencies of the test article. Given that only individual strap material properties were provided and material properties of the overall weave were not available, defining the material properties of the finite element model became very complex. First it was necessary to determine which material properties (modulus of elasticity in the hoop and longitudinal directions, shear modulus, Poisson's ratio, etc.) affected the modal frequencies. Then a Latin Hypercube of the parameter space was created to form an efficiently distributed finite case set. Each case was then analyzed with the results input into RSM. In the resulting response surface it was possible to see how each material parameter affected the modal frequencies of the analysis model. If the modal frequencies of the analysis model and its corresponding parameters match the test with acceptable accuracy, it can be said that the model correlation is successful.
Robust human body model injury prediction in simulated side impact crashes.
Golman, Adam J; Danelson, Kerry A; Stitzel, Joel D
2016-01-01
This study developed a parametric methodology to robustly predict occupant injuries sustained in real-world crashes using a finite element (FE) human body model (HBM). One hundred and twenty near-side impact motor vehicle crashes were simulated over a range of parameters using Toyota RAV4 (bullet vehicle) and Ford Taurus (struck vehicle) FE models and a validated HBM, the Total HUman Model for Safety (THUMS). Three bullet vehicle crash parameters (speed, location and angle) and two occupant parameters (seat position and age) were varied using a Latin hypercube design of experiments. Four injury metrics (head injury criterion, half deflection, thoracic trauma index and pelvic force) were used to calculate injury risk. Rib fracture prediction and lung strain metrics were also analysed. As hypothesized, bullet speed had the greatest effect on each injury measure. Injury risk was reduced when the bullet location was further from the B-pillar or when the bullet angle was more oblique. Age correlated strongly with rib fracture frequency and lung strain severity. The injuries from a real-world crash were predicted using two different methods: (1) subsampling the injury predictors from the 12 simulations best matching the crush profile and (2) using regression models. Both injury prediction methods successfully predicted the case occupant's low risk for pelvic injury, high risk for thoracic injury, rib fractures and high lung strains with tight confidence intervals. This parametric methodology was successfully used to explore crash parameter interactions and to robustly predict real-world injuries.
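The five-factor Latin hypercube design of experiments can be sketched in a few lines; the parameter ranges below are placeholders chosen for illustration, not the study's actual bounds.

    import numpy as np

    rng = np.random.default_rng(11)
    n = 120                                   # one row per simulated crash
    names = ["speed", "location", "angle", "seat_position", "age"]
    lo = np.array([25.0, -0.30, 45.0, -0.05, 20.0])   # hypothetical lower bounds
    hi = np.array([50.0,  0.30, 90.0,  0.05, 80.0])   # hypothetical upper bounds
    u = (np.stack([rng.permutation(n) for _ in names], axis=1)
         + rng.random((n, len(names)))) / n
    design = lo + u * (hi - lo)               # 120 x 5 LHS crash matrix
    print(design[:2].round(2))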
Coelho, Antonio Augusto Rodrigues
2016-01-01
This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unit hypercube space enables multivariable interpolation in N dimensions. Since membership functions act as interpolation kernels, the choice of membership functions determines the interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities, since it is capable of modeling both a function and its inverse function. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. The results demonstrate the applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
Performance analysis of parallel branch and bound search with the hypercube architecture
NASA Technical Reports Server (NTRS)
Mraz, Richard T.
1987-01-01
With the availability of commercial parallel computers, researchers are examining new classes of problems which might benefit from parallel computing. This paper presents results of an investigation of the class of search-intensive problems. The specific problem discussed is the Least-Cost Branch and Bound search method for deadline job scheduling. The object-oriented design methodology was used to map the problem into a parallel solution. While the initial design was good for a prototype, the best performance resulted from fine-tuning the algorithm for a specific computer. The experiments analyze the computation time, the speed-up over a VAX 11/785, and the load balance of the problem when using a loosely coupled multiprocessor system based on the hypercube architecture.
Optimization design of energy deposition on single expansion ramp nozzle
NASA Astrophysics Data System (ADS)
Ju, Shengjun; Yan, Chao; Wang, Xiaoyong; Qin, Yupei; Ye, Zhifei
2017-11-01
Optimization design has been widely used in the aerodynamic design process of scramjets. The single expansion ramp nozzle is an important component of scramjets that produces most of the thrust. A new concept of improving the aerodynamics of the scramjet nozzle with energy deposition is presented. The essence of the method is to create a heated region in the inner flow field of the scramjet nozzle. In the current study, the two-dimensional coupled implicit compressible Reynolds-Averaged Navier-Stokes equations and Menter's shear stress transport turbulence model have been applied to numerically simulate the flow fields of the single expansion ramp nozzle with and without energy deposition. The numerical results show that energy deposition can be an effective method to increase the force characteristics of the scramjet nozzle: the thrust coefficient CT increases by 6.94% and the lift coefficient CN decreases by 26.89%. Further, the non-dominated sorting genetic algorithm coupled with a Radial Basis Function neural network surrogate model has been employed to determine the optimum location and density of the energy deposition. The thrust coefficient CT and lift coefficient CN are selected as objective functions, and the sampling points are obtained numerically by using a Latin hypercube design method. The optimized thrust coefficient CT increases by a further 1.94%, while the optimized lift coefficient CN decreases by a further 15.02%. At the same time, the optimized performances are in good and reasonable agreement with the numerical predictions. The findings suggest that scramjet nozzle design and performance can benefit from the application of energy deposition.
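The surrogate step can be imitated with SciPy's radial basis function interpolator standing in for the paper's RBF neural network; the two normalized design variables and the analytic stand-in for the CFD thrust response are assumptions made for illustration.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(3)
    n = 40
    # LHS design over normalized (deposition location, deposition density)
    X = (np.stack([rng.permutation(n) for _ in range(2)], axis=1)
         + rng.random((n, 2))) / n
    CT = 0.90 + 0.05 * X[:, 0] - 0.03 * (X[:, 1] - 0.5) ** 2  # fake CFD samples
    surrogate = RBFInterpolator(X, CT)        # cheap stand-in for RANS runs
    print(surrogate(np.array([[0.5, 0.5]])))  # prediction the optimizer would query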
Parameter Uncertainty on AGCM-simulated Tropical Cyclones
NASA Astrophysics Data System (ADS)
He, F.
2015-12-01
This work studies the effect of parameter uncertainty on tropical cyclone (TC) simulations in Atmospheric General Circulation Models (AGCMs) using the Reed-Jablonowski TC test case, illustrated here with the Community Atmosphere Model (CAM). It examines the impact of 24 parameters across the physical parameterization schemes that represent the convection, turbulence, precipitation and cloud processes in AGCMs. The one-at-a-time (OAT) sensitivity analysis method first quantifies their relative importance for TC simulations and identifies the key parameters for six different TC characteristics: intensity, precipitation, longwave cloud radiative forcing (LWCF), shortwave cloud radiative forcing (SWCF), cloud liquid water path (LWP) and ice water path (IWP). Then, 8 physical parameters are chosen and perturbed using the Latin hypercube sampling (LHS) method. The comparison between the OAT ensemble run and the LHS ensemble run shows that the simulated TC intensity is mainly affected by the parcel fractional mass entrainment rate in the Zhang-McFarlane (ZM) deep convection scheme. The nonlinear interactive effect among different physical parameters is negligible for simulated TC intensity. In contrast, this nonlinear interactive effect plays a significant role in the other simulated tropical cyclone characteristics (precipitation, LWCF, SWCF, LWP and IWP) and greatly enlarges their simulated uncertainties. The statistical emulator Extended Multivariate Adaptive Regression Splines (EMARS) is applied to characterize the response functions for the nonlinear effect. Last, we find that the intensity uncertainty caused by physical parameters is comparable in degree to the uncertainty caused by model structure (e.g., grid) and initial conditions (e.g., sea surface temperature, atmospheric moisture). These findings suggest the importance of using the perturbed physics ensemble (PPE) method to revisit tropical cyclone prediction under climate change scenarios.
NASA Astrophysics Data System (ADS)
Momoh, James A.; Salkuti, Surender Reddy
2016-06-01
This paper proposes a stochastic optimization technique for solving the Voltage/VAr control problem including load demand and Renewable Energy Resources (RERs) variation. RERs introduce stochastic behavior into the system inputs. Voltage/VAr control is central to managing power system complexity and reliability, and hence is a fundamental requirement for all utility companies. A robust and efficient Voltage/VAr optimization technique is needed to meet peak demand and to reduce system losses. Voltages beyond the limits may damage costly substation devices as well as equipment at the consumer end. RERs introduce additional disturbances, and some RERs are not even capable of meeting the VAr demand. Therefore, there is a strong need for Voltage/VAr control in an RER environment. This paper aims at the development of an optimal scheme for Voltage/VAr control involving RERs. Latin hypercube sampling (LHS) is used to cover the full range of variables while maximally satisfying the marginal distributions. A backward scenario reduction technique is then used to reduce the number of scenarios effectively while maximally retaining the fitting accuracy of the samples. The developed optimization scheme is tested on the IEEE 24-bus Reliability Test System (RTS) considering load demand and RERs variation.
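Backward scenario reduction of the kind invoked here is commonly implemented by repeatedly deleting the scenario whose probability-weighted distance to its nearest kept neighbor is smallest, transferring its probability to that neighbor; the function below is a generic sketch of that heuristic, not the paper's exact procedure.

    import numpy as np

    def backward_reduce(scen, prob, keep):
        """Reduce scenarios while redistributing deleted probabilities."""
        scen = np.asarray(scen, float)
        prob = np.asarray(prob, float).copy()
        alive = list(range(len(scen)))
        while len(alive) > keep:
            pts = scen[alive]
            D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
            np.fill_diagonal(D, np.inf)
            j = int(np.argmin(prob[alive] * D.min(axis=1)))  # cheapest to drop
            nn = alive[int(np.argmin(D[j]))]                 # nearest neighbor
            prob[nn] += prob[alive[j]]
            del alive[j]
        return scen[alive], prob[alive]

    rng = np.random.default_rng(6)
    wind = rng.random((200, 24))          # e.g., 200 LHS wind/load profiles
    p = np.full(200, 1 / 200)
    s, q = backward_reduce(wind, p, keep=10)
    print(s.shape, q.sum())               # (10, 24) 1.0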
NASA Astrophysics Data System (ADS)
Papaioannou, George; Vasiliades, Lampros; Loukas, Athanasios; Aronica, Giuseppe T.
2017-04-01
Probabilistic flood inundation mapping is performed and analysed at the ungauged Xerias stream reach, Volos, Greece. The study evaluates the uncertainty introduced by roughness coefficient values into hydraulic models for flood inundation modelling and mapping. The well-established one-dimensional (1-D) hydraulic model HEC-RAS is selected and linked to Monte Carlo simulations of hydraulic roughness. Terrestrial Laser Scanner data have been used to produce a high-quality DEM, minimising input data uncertainty and improving the accuracy of the stream channel topography required by the hydraulic model. Initial Manning's n roughness coefficient values are based on pebble-count field surveys and empirical formulas. Various theoretical probability distributions are fitted and evaluated on their accuracy in representing the estimated roughness values. Finally, Latin hypercube sampling has been used to generate different sets of Manning roughness values, and flood inundation probability maps have been created with the use of Monte Carlo simulations. Flood extent data from an extreme historical flash flood event are used for validation of the method. The calibration process is based on binary wet-dry reasoning with the use of the Median Absolute Percentage Error evaluation metric. The results show that the proposed procedure supports probabilistic flood hazard mapping at ungauged rivers and provides water resources managers with valuable information for planning and implementing flood risk mitigation strategies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL
2016-09-15
iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences, reservoir engineering, and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.
NASA Astrophysics Data System (ADS)
Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.
2016-11-01
With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble of the most sensitive parameters, determined by the Sobol method, to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that there is the potential to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
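The mapping from site attributes to calibrated parameters can be sketched with scikit-learn's Extra-Trees implementation; the covariates, array shapes, and hyperparameters below are illustrative assumptions rather than the study's configuration.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
# Hypothetical data: 85 FLUXNET sites x 4 environmental covariates
# (e.g., mean precipitation, temperature, LAI, sand fraction) ...
X = rng.random((85, 4))
# ... and the 3 calibrated Noah parameters per site (rs_min, Czil, fxexp).
y = rng.random((85, 3))

model = ExtraTreesRegressor(n_estimators=500, random_state=0)

# Leave-one-out cross validation, as in the study: hold out one site at a time.
errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    model.fit(X[train_idx], y[train_idx])
    errors.append(np.abs(model.predict(X[test_idx]) - y[test_idx]).mean())
print("mean LOO error:", np.mean(errors))
# A fitted model of this form could then predict parameter sets on a 5 km grid.
```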
Soil hydraulic material properties and layered architecture from time-lapse GPR
NASA Astrophysics Data System (ADS)
Jaumann, Stefan; Roth, Kurt
2018-04-01
Quantitative knowledge of the subsurface material distribution and its effective soil hydraulic material properties is essential to predict soil water movement. Ground-penetrating radar (GPR) is a noninvasive and nondestructive geophysical measurement method that is suitable to monitor hydraulic processes. Previous studies showed that the GPR signal from a fluctuating groundwater table is sensitive to the soil water characteristic and the hydraulic conductivity function. In this work, we show that the GPR signal originating from both the subsurface architecture and the fluctuating groundwater table is suitable to estimate, with inversion methods, the position of layers within the subsurface architecture together with the associated effective soil hydraulic material properties. To that end, we parameterize the subsurface architecture, solve the Richards equation, convert the resulting water content to relative permittivity with the complex refractive index model (CRIM), and solve Maxwell's equations numerically. In order to analyze the GPR signal, we implemented a new heuristic algorithm that detects relevant signals in the radargram (events) and extracts the corresponding signal travel time and amplitude. This algorithm is applied to simulated as well as measured radargrams, and the detected events are associated automatically. Using events instead of the full waveform regularizes the inversion by focusing it on the relevant measurement signal. For optimization, we use a global-local approach with preconditioning. Starting from an ensemble of initial parameter sets drawn with a Latin hypercube algorithm, we sequentially couple a simulated annealing algorithm with a Levenberg-Marquardt algorithm. The method is applied to synthetic as well as measured data from the ASSESS test site. We show that the method yields reasonable estimates for the position of the layers as well as for the soil hydraulic material properties by comparing the results to references derived from ground truth data as well as from time domain reflectometry (TDR).
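For reference, a standard form of the CRIM conversion expresses the bulk relative permittivity through the volume fractions of the three phases; the square-root (alpha = 1/2) exponent and the three-phase decomposition below are the textbook convention, not details given in the abstract:

```latex
\sqrt{\varepsilon_{\mathrm{bulk}}} \;=\; \theta\,\sqrt{\varepsilon_{\mathrm{water}}}
\;+\;(1-\phi)\,\sqrt{\varepsilon_{\mathrm{soil}}}
\;+\;(\phi-\theta)\,\sqrt{\varepsilon_{\mathrm{air}}}
```

where theta is the volumetric water content and phi the porosity; inverting this relation yields water content from the permittivity sensed by the GPR signal.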
High-speed mid-infrared hyperspectral imaging using quantum cascade lasers
NASA Astrophysics Data System (ADS)
Kelley, David B.; Goyal, Anish K.; Zhu, Ninghui; Wood, Derek A.; Myers, Travis R.; Kotidis, Petros; Murphy, Cara; Georgan, Chelsea; Raz, Gil; Maulini, Richard; Müller, Antoine
2017-05-01
We report on a standoff chemical detection system using widely tunable external-cavity quantum cascade lasers (ECQCLs) to illuminate target surfaces in the mid infrared (λ = 7.4 - 10.5 μm). Hyperspectral images (hypercubes) are acquired by synchronously operating the ECQCLs with a LN2-cooled HgCdTe camera. The use of rapidly tunable lasers and a high-frame-rate camera enables the capture of hypercubes with 128 x 128 pixels and >100 wavelengths in <0.1 s. Furthermore, raster scanning of the laser illumination allowed imaging of a 100-cm2 area at 5-m standoff. Raw hypercubes are post-processed to generate a hypercube that represents the surface reflectance relative to that of a diffuse reflectance standard. Results will be shown for liquids (e.g., silicone oil) and solid particles (e.g., caffeine, acetaminophen) on a variety of surfaces (e.g., aluminum, plastic, glass). Signature spectra are obtained for particulate loadings of RDX on glass of <1 μg/cm2.
Performance of a parallel code for the Euler equations on hypercube computers
NASA Technical Reports Server (NTRS)
Barszcz, Eric; Chan, Tony F.; Jesperson, Dennis C.; Tuminaro, Raymond S.
1990-01-01
The performance of hypercubes was evaluated on a computational fluid dynamics problem, and consideration was given to the parallel environment issues that must be addressed, such as algorithm changes, implementation choices, programming effort, and programming environment. The evaluation focuses on a widely used fluid dynamics code, FLO52, which solves the two-dimensional steady Euler equations describing flow around an airfoil. The code development experience is described, including interacting with the operating system, utilizing the message-passing communication system, and code modifications necessary to increase parallel efficiency. Results from two hypercube parallel computers (a 16-node iPSC/2 and a 512-node NCUBE/ten) are discussed and compared. In addition, a mathematical model of the execution time was developed as a function of several machine and algorithm parameters. This model accurately predicts the actual run times obtained and is used to explore the performance of the code in interesting but physically realizable regions of the parameter space. Based on this model, predictions about future hypercubes are made.
Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...
2014-01-01
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient, but less accurate and robust, than quantitative ones.
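A Morris One-At-a-Time screening of the kind benchmarked here can be sketched with the SALib package; the toy response function, parameter names, and sample counts below are placeholders, not the SAC-SMA setup.

```python
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze.morris import analyze as morris_analyze

# Toy stand-in for a hydrologic model with three tunable parameters.
problem = {
    "num_vars": 3,
    "names": ["uztwm", "uzfwm", "lzpk"],
    "bounds": [[10.0, 300.0], [5.0, 150.0], [0.001, 0.25]],
}

X = morris_sample(problem, N=20, num_levels=4)      # N trajectories -> N*(k+1) runs
Y = X[:, 0] + 2.0 * X[:, 1] + 50.0 * X[:, 2] ** 2   # placeholder model response

Si = morris_analyze(problem, X, Y, num_levels=4)
for name, mu_star in zip(problem["names"], Si["mu_star"]):
    print(f"{name:6s} mu* = {mu_star:.2f}")          # large mu* => influential
```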
Ordered fast fourier transforms on a massively parallel hypercube multiprocessor
NASA Technical Reports Server (NTRS)
Tong, Charles; Swarztrauber, Paul N.
1989-01-01
Design alternatives for ordered Fast Fourier Transform (FFT) algorithms were examined on massively parallel hypercube multiprocessors such as the Connection Machine. Particular emphasis is placed on reducing communication, which is known to dominate the overall computing time. To this end, the order and computational phases of the FFT were combined, and sequence-to-processor maps that reduce communication were used. The class of ordered transforms is expanded to include any FFT in which the order of the transform is the same as that of the input sequence. Two such orderings are examined, namely, standard-order and A-order, which can be implemented with equal ease on the Connection Machine, where orderings are determined by geometries and priorities. If the sequence has N = 2^r elements and the hypercube has P = 2^d processors, then a standard-order FFT can be implemented with d + r/2 + 1 parallel transmissions. An A-order sequence can be transformed with 2d - r/2 parallel transmissions, which is r - d + 1 fewer than the standard order. A parallel method for computing the trigonometric coefficients is presented that does not use trigonometric functions or interprocessor communication. A performance of 0.9 GFLOPS was obtained for an A-order transform on the Connection Machine.
Optimized Reduction of Unsteady Radial Forces in a Singlechannel Pump for Wastewater Treatment
NASA Astrophysics Data System (ADS)
Kim, Jin-Hyuk; Cho, Bo-Min; Choi, Young-Seok; Lee, Kyoung-Yong; Peck, Jong-Hyeon; Kim, Seon-Chang
2016-11-01
A single-channel pump for wastewater treatment was optimized to reduce unsteady radial force sources caused by impeller-volute interactions. The steady and unsteady Reynolds-averaged Navier-Stokes equations using the shear-stress transport turbulence model were discretized by finite volume approximations and solved on tetrahedral grids to analyze the flow in the single-channel pump. The sweep area of radial force during one revolution and the distance of the sweep-area center of mass from the origin were selected as the objective functions; the two design variables were related to the internal flow cross-sectional area of the volute. These objective functions were integrated into one objective function by applying the weighting factor for optimization. Latin hypercube sampling was employed to generate twelve design points within the design space. A response-surface approximation model was constructed as a surrogate model for the objectives, based on the objective function values at the generated design points. The optimized results showed considerable reduction in the unsteady radial force sources in the optimum design, relative to those of the reference design.
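The surrogate step (an LHS design followed by a response-surface fit) can be sketched as below; the two design variables, their ranges, the placeholder objective, and the quadratic form are illustrative assumptions, not the paper's CFD setup.

```python
import numpy as np
from scipy.stats import qmc

# 12 LHS design points in two volute cross-section variables (hypothetical ranges).
sampler = qmc.LatinHypercube(d=2, seed=1)
X = qmc.scale(sampler.random(12), l_bounds=[0.8, 0.8], u_bounds=[1.2, 1.2])

def cfd_objective(x):                 # placeholder for the weighted CFD objective
    return (x[0] - 1.05) ** 2 + 0.5 * (x[1] - 0.95) ** 2

F = np.array([cfd_objective(x) for x in X])

# Quadratic response surface: f ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2.
A = np.column_stack([np.ones(12), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, F, rcond=None)
print("surrogate coefficients:", np.round(coef, 3))
# An optimizer then searches this cheap surrogate instead of the CFD model.
```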
Determining the nuclear data uncertainty on MONK10 and WIMS10 criticality calculations
NASA Astrophysics Data System (ADS)
Ware, Tim; Dobson, Geoff; Hanlon, David; Hiles, Richard; Mason, Robert; Perry, Ray
2017-09-01
The ANSWERS Software Service is developing a number of techniques to better understand and quantify the uncertainty on calculations of the neutron multiplication factor, k-effective, in nuclear fuel and other systems containing fissile material. The uncertainty on the calculated k-effective arises from a number of sources, including nuclear data uncertainties, manufacturing tolerances, modelling approximations and, for Monte Carlo simulation, stochastic uncertainty. For determining the uncertainties due to nuclear data, a set of application libraries have been generated for use with the MONK10 Monte Carlo and the WIMS10 deterministic criticality and reactor physics codes. This paper gives an overview of the generation of these nuclear data libraries by Latin hypercube sampling of JEFF-3.1.2 evaluated data, based upon a library of covariance data taken from the JEFF, ENDF/B, JENDL and TENDL evaluations. Criticality calculations have been performed with MONK10 and WIMS10 using these sampled libraries for a number of benchmark models of fissile systems. Results are presented which show the uncertainty on k-effective for these systems arising from the uncertainty on the input nuclear data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.
2000-05-22
Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP) are presented for two-phase flow in the vicinity of the repository under disturbed conditions resulting from drilling intrusions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformations are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure and brine flow from the repository to the Culebra Dolomite are potentially the most important in PA for the WIPP. Subsequent to a drilling intrusion, repository pressure was dominated by borehole permeability and was generally below the level (i.e., 8 MPa) that could potentially produce spallings and direct brine releases. Brine flow from the repository to the Culebra Dolomite tended to be small or nonexistent, with its occurrence and size also dominated by borehole permeability.
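The rank-transformation step underlying these scatterplot and regression analyses can be sketched as follows; the variable names, sample counts, and toy response are a generic illustration of a partial rank correlation coefficient (PRCC) computation, not the WIPP PA code.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(3)
# Hypothetical LHS matrix: borehole permeability, halite permeability, gas rate.
X = rng.random((300, 3))
y = 3.0 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.1, 300)  # e.g., brine outflow

def prcc(X, y, j):
    """Partial rank correlation of X[:, j] with y, controlling for other columns."""
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    others = np.delete(R, j, axis=1)
    A = np.column_stack([np.ones(len(ry)), others])
    res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
    res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

for j, name in enumerate(["k_borehole", "k_halite", "gas_rate"]):
    print(f"PRCC({name}) = {prcc(X, y, j):+.2f}")
```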
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Po-Lun; Gattiker, J. R.; Liu, Xiaohong
2013-06-27
A Gaussian process (GP) emulator is applied to quantify the contribution of local and remote emissions of black carbon (BC) to the BC concentrations in different regions, using a Latin Hypercube sampling strategy for emission perturbations in the offline version of the Community Atmosphere Model Version 5.1 (CAM5) simulations. The source-receptor relationships are computed based on simulations constrained by a standard free-running CAM5 simulation and the ERA-Interim reanalysis product. The analysis demonstrates that the emulator is capable of retrieving the source-receptor relationships based on a small number of CAM5 simulations. Most regions are found susceptible to their local emissions. The emulator also finds that the source-receptor relationships retrieved from the model-driven and the reanalysis-driven simulations are very similar, suggesting that the simulated circulation in CAM5 resembles the assimilated meteorology in ERA-Interim. The robustness of the results provides confidence for applying the emulator to detect dose-response signals in the climate system.
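A GP emulator of this source-receptor kind can be sketched with scikit-learn; the emission scaling factors, kernel choice, and response values below are assumptions for illustration, not the CAM5 configuration.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# LHS perturbations of BC emissions from three hypothetical source regions.
X = qmc.scale(qmc.LatinHypercube(d=3, seed=7).random(30),
              l_bounds=[0.5] * 3, u_bounds=[1.5] * 3)

# Placeholder for the receptor-region BC burden from 30 CAM5-like runs.
y = 0.8 * X[:, 0] + 0.15 * X[:, 1] + 0.05 * X[:, 2]

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.5] * 3),
                              normalize_y=True).fit(X, y)

# The fitted emulator predicts unsampled emission scenarios cheaply; its
# response to each scaling factor approximates the source-receptor sensitivity.
mean, std = gp.predict([[1.0, 1.0, 1.0]], return_std=True)
print(f"emulated burden: {mean[0]:.3f} +/- {std[0]:.3f}")
```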
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom Elicson; Bentley Harwood; Jim Bouchard
Over a 12 month period, a fire PRA was developed for a DOE facility using the NUREG/CR-6850 EPRI/NRC fire PRA methodology. The fire PRA modeling included calculation of fire severity factors (SFs) and fire non-suppression probabilities (PNS) for each safe shutdown (SSD) component considered in the fire PRA model. The SFs were developed by performing detailed fire modeling through a combination of CFAST fire zone model calculations and Latin Hypercube Sampling (LHS). Component damage times and automatic fire suppression system actuation times calculated in the CFAST LHS analyses were then input to a time-dependent model of fire non-suppression probability. The fire non-suppression probability model is based on the modeling approach outlined in NUREG/CR-6850 and is supplemented with plant-specific data. This paper presents the methodology used in the DOE facility fire PRA for modeling fire-induced SSD component failures and includes discussions of modeling techniques for: • Development of time-dependent fire heat release rate profiles (required as input to CFAST), • Calculation of fire severity factors based on CFAST detailed fire modeling, and • Calculation of fire non-suppression probabilities.
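A minimal sketch of a time-dependent non-suppression model of the general exponential form used in NUREG/CR-6850 follows; the constant suppression rate and all numerical values are assumptions for illustration, not the facility's data.

```python
import numpy as np

def p_non_suppression(t_damage_min, t_detect_min, lam_per_min=0.1):
    """P that suppression fails before component damage, assuming the time to
    suppress a detected fire is exponentially distributed with rate lam."""
    t_avail = max(t_damage_min - t_detect_min, 0.0)   # time window for suppression
    return np.exp(-lam_per_min * t_avail)

# Hypothetical CFAST/LHS outputs: damage at 18 min, detection at 4 min.
print(f"PNS = {p_non_suppression(18.0, 4.0):.3f}")
```

In a fire PRA of this type, each LHS realization supplies its own damage and actuation times, so PNS becomes a distribution rather than a point value.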
NASA Astrophysics Data System (ADS)
Ojeda, David; Le Rolle, Virginie; Romero-Ugalde, Hector M.; Gallet, Clément; Bonnet, Jean-Luc; Henry, Christine; Bel, Alain; Mabo, Philippe; Carrault, Guy; Hernández, Alfredo I.
2017-11-01
Vagus nerve stimulation (VNS) is an established therapy for drug-resistant epilepsy and depression, and is considered a potential therapy for other pathologies, including heart failure (HF) and inflammatory diseases. In the case of HF, several experimental studies on animals have shown an improvement in cardiac function and a reverse remodeling of the cardiac cavity when VNS is applied. However, recent clinical trials have not been able to reproduce the same response in humans. One of the hypotheses to explain this lack of response is related to the way in which stimulation parameters are defined. The combined effect of VNS parameters is still poorly known, especially in the case of VNS delivered synchronously with cardiac activity. In this paper, we propose a methodology to analyze the acute cardiovascular effects of VNS parameters individually, as well as their interactive effects. A Latin hypercube sampling method was applied to design a uniform experimental plan. Data gathered from this experimental plan were used to produce a Gaussian process regression (GPR) model in order to estimate unobserved VNS sequences. Finally, a Morris screening sensitivity analysis method was applied to each obtained GPR model. Results highlight dominant effects of pulse current, pulse width and number of pulses over frequency and delay and, more importantly, the degree of interactions between these parameters on the most important acute cardiovascular responses. In particular, highly interacting effects between current and pulse width were found. Similar sensitivity profiles were observed for chronotropic, dromotropic and inotropic effects. These findings are of primary importance for the future development of closed-loop, personalized neuromodulation technologies.
NASA Astrophysics Data System (ADS)
Li, Shuai; Wang, Yiping; Wang, Tao; Yang, Xue; Deng, Yadong; Su, Chuqi
2017-05-01
Thermoelectric generators (TEGs) have become a topic of interest for vehicle exhaust energy recovery. Electrical power generation is deeply influenced by the temperature difference, temperature uniformity and topological structure of TEGs. When dimpled surfaces are adopted in heat exchangers, heat transfer rates can be augmented with a minimal pressure drop. However, the temperature distribution shows a large gradient along the flow direction, which has adverse effects on the power generation. In the current study, the heat exchanger performance was studied in a computational fluid dynamics (CFD) model. The dimple depth, dimple print diameter, and channel height were chosen as design variables. The objective function was defined as a combination of average temperature, temperature uniformity and pressure loss. The optimal Latin hypercube method was used to determine the experiment points as a design-of-experiments method in order to analyze the sensitivity of the design variables. A Kriging surrogate model was built and verified against the database resulting from the CFD simulations. A multi-island genetic algorithm was used to optimize the structure of the heat exchanger based on the surrogate model. The results showed that the average temperature of the heat exchanger was most sensitive to the dimple depth. The pressure loss and temperature uniformity were most sensitive to the channel rear height parameter, h2. With an optimal design of the channel structure, the temperature uniformity can be greatly improved compared with the initial exchanger, although the additional pressure loss also increased.
Requirements analysis for a hardware, discrete-event, simulation engine accelerator
NASA Astrophysics Data System (ADS)
Taylor, Paul J., Jr.
1991-12-01
An analysis of a general Discrete Event Simulation (DES), executing on the distributed architecture of an eight-node Intel iPSC/2 hypercube, was performed. The most time-consuming portions of the general DES algorithm were determined to be the functions associated with message passing of required simulation data between processing nodes of the hypercube architecture. A behavioral description, using the IEEE standard VHSIC Hardware Description and Design Language (VHDL), for a general DES hardware accelerator is presented. The behavioral description specifies the operational requirements for a DES coprocessor to augment the hypercube's execution of DES simulations. The DES coprocessor design implements the functions necessary to perform distributed discrete event simulations using a conservative time synchronization protocol.
An, Yongkai; Lu, Wenxi; Cheng, Weiguo
2015-01-01
This paper introduces a surrogate model to identify an optimal groundwater exploitation scheme; the western Jilin Province was selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu County and Qian Gorlos County so as to supply water to Daan County. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region of the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which is a high approximation accuracy. A comparison between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours whereas the latter needs 25 days. These results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process but also maintain high computational accuracy. This can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008
Multiphase complete exchange on Paragon, SP2 and CS-2
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1995-01-01
The overhead of interprocessor communication is a major factor in limiting the performance of parallel computer systems. The complete exchange is the severest communication pattern in that it requires each processor to send a distinct message to every other processor. This pattern is at the heart of many important parallel applications. On hypercubes, multiphase complete exchange has been developed and shown to provide optimal performance over varying message sizes. Most commercial multicomputer systems do not have a hypercube interconnect. However, they use special purpose hardware and dedicated communication processors to achieve very high performance communication and can be made to emulate the hypercube quite well. Multiphase complete exchange has been implemented on three contemporary parallel architectures: the Intel Paragon, IBM SP2 and Meiko CS-2. The essential features of these machines are described and their basic interprocessor communication overheads are discussed. The performance of multiphase complete exchange is evaluated on each machine. It is shown that the theoretical ideas developed for hypercubes are also applicable in practice to these machines and that multiphase complete exchange can lead to major savings in execution time over traditional solutions.
Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube
NASA Technical Reports Server (NTRS)
Joslin, Ronald D.; Zubair, Mohammad
1993-01-01
The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube is documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can be parallelized effectively on a distributed-memory parallel machine. By increasing the number of processors, nearly ideal linear speedups are achieved with nonoptimized routines; slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. This slower-than-linear speedup results because the Fast Fourier Transform (FFT) routine dominates the computational cost and exhibits less-than-ideal speedups. However, with the machine-dependent routines the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the amount of Cray supercomputer single-processor time to complete a comparable simulation; however, it is estimated that a subgrid-scale model, which reduces the required number of grid points and turns the computation into a large-eddy simulation (PSLES), would reduce the computational cost and memory requirements by a factor of 10 over the PSDNS. This PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.
NASA Technical Reports Server (NTRS)
Karmarkar, J. S.
1972-01-01
Proposal of an algorithmic procedure, based on mathematical programming methods, to design compensators for hyperstable discrete model-reference adaptive systems (MRAS). The objective of the compensator is to render the MRAS insensitive to initial parameter estimates within a maximized hypercube in the model parameter space.
Parallel Algorithm Solves Coupled Differential Equations
NASA Technical Reports Server (NTRS)
Hayashi, A.
1987-01-01
Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.
A 2D electrostatic PIC code for the Mark III Hypercube
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferraro, R.D.; Liewer, P.C.; Decyk, V.K.
We have implemented a 2D electrostatic plasma particle-in-cell (PIC) simulation code on the Caltech/JPL Mark IIIfp Hypercube. The code simulates plasma effects by evolving in time the trajectories of thousands to millions of charged particles subject to their self-consistent fields. Each particle's position and velocity is advanced in time using a leap-frog method for integrating Newton's equations of motion in electric and magnetic fields. The electric field due to these moving charged particles is calculated on a spatial grid at each time step by solving Poisson's equation in Fourier space. These two tasks represent the largest part of the computation. To obtain efficient operation on a distributed-memory parallel computer, we are using the General Concurrent PIC (GCPIC) algorithm previously developed for a 1D parallel PIC code.
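The particle push described here is the standard leap-frog scheme; the sketch below shows a 1D electrostatic version with assumed units and a fixed grid field (the production code is 2D and runs decomposed across hypercube nodes under the GCPIC algorithm).

```python
import numpy as np

# 1D electrostatic leap-frog push (illustrative units: q = m = 1, dx = 1).
def push(x, v, E_grid, dt, length):
    """Advance particle positions/velocities one step; E interpolated linearly."""
    idx = np.floor(x).astype(int) % len(E_grid)
    frac = x - np.floor(x)
    E = (1.0 - frac) * E_grid[idx] + frac * E_grid[(idx + 1) % len(E_grid)]
    v_new = v + E * dt                    # kick: v staggered half a step from x
    x_new = (x + v_new * dt) % length     # drift with periodic boundaries
    return x_new, v_new

rng = np.random.default_rng(0)
x = rng.uniform(0, 64, 10000)
v = rng.normal(0, 1, 10000)
E_grid = np.sin(2 * np.pi * np.arange(64) / 64)  # placeholder field from Poisson solve
x, v = push(x, v, E_grid, dt=0.1, length=64.0)
```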
NASA Technical Reports Server (NTRS)
Parker, Jay W.; Cwik, Tom; Ferraro, Robert D.; Liewer, Paulett C.; Patterson, Jean E.
1991-01-01
The JPL-designed Mark III hypercube supercomputer has been in application service since June 1988 and has been applied successfully to a broad problem set, including electromagnetic scattering, discrete event simulation, plasma transport, matrix algorithms, neural network simulation, image processing, and graphics. Currently, problems that are not homogeneous are being attempted, and, through this involvement with real-world applications, the software is evolving to handle this heterogeneous class of problems efficiently.
Nadkarni, P. M.; Miller, P. L.
1991-01-01
A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations. PMID:1807632
Communication overhead on the Intel iPSC-860 hypercube
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1990-01-01
Experiments were conducted on the Intel iPSC-860 hypercube in order to evaluate the overhead of interprocessor communication. It is demonstrated that: (1) contrary to popular belief, the distance between two communicating processors has a significant impact on communication time, (2) edge contention can increase communication time by a factor of more than 7, and (3) node contention has no measurable impact.
Global sensitivity analysis of water age and temperature for informing salmonid disease management
NASA Astrophysics Data System (ADS)
Javaheri, Amir; Babbar-Sebens, Meghna; Alexander, Julie; Bartholomew, Jerri; Hallett, Sascha
2018-06-01
Many rivers in the Pacific Northwest region of North America are anthropogenically manipulated via dam operations, leading to system-wide impacts on hydrodynamic conditions and aquatic communities. Understanding how dam operations alter abiotic and biotic variables is important for designing management actions. For example, in the Klamath River, dam outflows could be manipulated to alter water age and temperature to reduce the risk of parasite infections in salmon by diluting or altering the viability of parasite spores. However, the sensitivity of water age and temperature to riverine conditions such as bathymetry can affect outcomes from dam operations. To examine this issue in detail, we conducted a global sensitivity analysis of water age and temperature to a comprehensive set of hydraulic and meteorological parameters in the Klamath River, California, where management of salmonid disease is a high priority. We applied an analysis technique that combined Latin-hypercube and one-at-a-time sampling methods, and included simulation runs with the hydrodynamic numerical model of the Lower Klamath. We found that flow rate and bottom roughness were the two most important parameters influencing water age. Water temperature was most sensitive to inflow temperature, air temperature, solar radiation, wind speed, flow rate, and wet bulb temperature, in that order. Our results are relevant for managers because they provide a framework for predicting how water within 'high infection risk' sections of the river will respond to dam water (low infection risk) input. Moreover, these data will be useful for prioritizing the use of water age (dilution) versus temperature (spore viability) under certain contexts when considering flow manipulation as a method to reduce the risk of infection and disease in Klamath River salmon.
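The combined Latin-hypercube/one-at-a-time design can be sketched as follows; the parameter names, ranges, and perturbation size are placeholders. Each LHS point serves as a base from which one parameter is perturbed at a time, so local sensitivities are averaged over the global parameter space.

```python
import numpy as np
from scipy.stats import qmc

names = ["flow_rate", "bottom_roughness", "air_temp", "wind_speed"]
lo = np.array([50.0, 0.01, 5.0, 0.5])     # hypothetical lower bounds
hi = np.array([500.0, 0.08, 35.0, 10.0])  # hypothetical upper bounds

base = qmc.scale(qmc.LatinHypercube(d=4, seed=2).random(20), lo, hi)

runs = []
for b in base:                        # one LHS base point per row
    runs.append(b.copy())
    for j in range(4):                # perturb one factor at a time by +5% of range
        p = b.copy()
        p[j] = min(p[j] + 0.05 * (hi[j] - lo[j]), hi[j])
        runs.append(p)
runs = np.array(runs)                 # 20 * (1 + 4) = 100 model evaluations
print(runs.shape)
```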
Uncertainty analysis for absorbed dose from a brain receptor imaging agent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aydogan, B.; Miller, L.F.; Sparks, R.B.
Absorbed dose estimates are known to contain uncertainties. A recent literature search indicates that, prior to this study, no rigorous investigation of the uncertainty associated with absorbed dose had been undertaken. A method of uncertainty analysis for absorbed dose calculations has been developed and implemented for the brain receptor imaging agent 123I-IPT. The two major sources of uncertainty considered were the uncertainty associated with the determination of residence time and that associated with the determination of the S values. There are many sources of uncertainty in the determination of the S values, but only the inter-patient organ mass variation was considered in this work. The absorbed dose uncertainties were determined for lung, liver, heart and brain. Ninety-five percent confidence intervals of the organ absorbed dose distributions for each patient and for a seven-patient population group were determined by the Latin Hypercube Sampling method. For an individual patient, the upper bound of the 95% confidence interval of the absorbed dose was found to be about 2.5 times larger than the estimated mean absorbed dose. For the seven-patient population, the upper bound of the 95% confidence interval of the absorbed dose distribution was around 45% more than the estimated population mean. For example, the 95% confidence interval of the population liver dose distribution was found to be between 1.49E+07 Gy/MBq and 4.65E+07 Gy/MBq, with a mean of 2.52E+07 Gy/MBq. This study concluded that patients in a population receiving 123I-IPT could receive absorbed doses as much as twice as large as the standard estimated absorbed dose due to these uncertainties.
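The propagation step (sampling residence-time and S-value uncertainty and reading off percentile bounds) can be sketched as below; the MIRD-style product form, the lognormal shapes, and all parameter values are hypothetical, not the study's patient data.

```python
import numpy as np
from scipy.stats import qmc, lognorm

# LHS over the two uncertain inputs of a MIRD-style dose estimate D = tau * S.
u = qmc.LatinHypercube(d=2, seed=5).random(10000)
tau = lognorm(s=0.4, scale=2.0).ppf(u[:, 0])       # residence time (h), hypothetical
S = lognorm(s=0.3, scale=1.0e-5).ppf(u[:, 1])      # S value (Gy/MBq-h), hypothetical

dose_per_mbq = tau * S                             # absorbed dose per unit activity
lo_bound = np.percentile(dose_per_mbq, 2.5)
hi_bound = np.percentile(dose_per_mbq, 97.5)
print(f"mean {dose_per_mbq.mean():.2e}, "
      f"95% CI [{lo_bound:.2e}, {hi_bound:.2e}] Gy/MBq")
```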
NASA Astrophysics Data System (ADS)
Ki, Seo Jin; Ray, Chittaranjan
2015-03-01
A regional screening tool, which is useful in cases where few site-specific parameters are available for complex vadose zone models, assesses the leaching potential of pollutants to groundwater over large areas. In this study, the pesticide leaching tool previously used in Hawaii was revised to account for the release of new volatile organic compounds (VOCs) from the soil surface. The tool was modified to introduce expanded terms in the traditional pesticide ranking indices (i.e., the retardation and attenuation factors), allowing an updated estimation of the leaching fraction of volatile chemicals based on recharge, soil, and chemical properties. Results showed that the previous tool significantly overestimated the mass fraction of VOCs leached through soils as the recharge rates increased above 0.001801 m/d. In contrast, the revised tool successfully delineated areas vulnerable to the selected VOCs based on two reference chemicals, a known leacher and a known non-leacher, determined under local conditions. The sensitivity analysis with the Latin-Hypercube-One-factor-At-a-Time method revealed that the new leaching tool was most sensitive to changes in the soil organic carbon sorption coefficient, the fractional organic carbon content, and the Henry's law constant, and least sensitive to parameters such as the bulk density, the water content at field capacity, and the particle density in soils. When the revised tool was compared to the analytical (STANMOD) and numerical (HYDRUS-1D) models as a susceptibility measure, it ranked particular VOCs (e.g., benzene, carbofuran, and toluene) in a way consistent with the other two models under the given conditions. Therefore, the new leaching tool can be widely used to address intrinsic groundwater vulnerability to contamination by pesticides and VOCs, along with the DRASTIC method or similar Tier 1 models such as SCI-GROW and WIN-PST.
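The traditional retardation and attenuation indices that the tool extends are commonly written as in the sketch below; this Rao-style formulation is an assumption based on standard leaching indices, and all parameter values are placeholders.

```python
import numpy as np

def retardation_factor(rho_b, f_oc, K_oc, theta_fc):
    """RF = 1 + (bulk density * organic-carbon sorption) / field-capacity water."""
    return 1.0 + rho_b * f_oc * K_oc / theta_fc

def attenuation_factor(depth_m, recharge_m_d, half_life_d,
                       rho_b, f_oc, K_oc, theta_fc):
    """AF = exp(-0.693 * travel time / half-life): mass fraction leached past depth."""
    rf = retardation_factor(rho_b, f_oc, K_oc, theta_fc)
    travel_time_d = depth_m * theta_fc * rf / recharge_m_d
    return np.exp(-0.693 * travel_time_d / half_life_d)

# Hypothetical soil and chemical values (benzene-like chemical, 1 m to groundwater).
print(attenuation_factor(depth_m=1.0, recharge_m_d=0.002, half_life_d=180.0,
                         rho_b=1.4, f_oc=0.01, K_oc=60.0, theta_fc=0.3))
```

Extending indices of this form to volatile chemicals amounts to adding vapor-phase partitioning (via the Henry's law constant) to the retardation term.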
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Bixler, Nathan E.; Wagner, Kenneth Charles
2014-03-01
A methodology for using the MELCOR code with the Latin Hypercube Sampling method was developed to estimate the uncertainty in various predicted quantities, such as hydrogen generation or release of fission products under severe accident conditions. In this case, the emphasis was on estimating the range of hydrogen sources in station blackout conditions in the Sequoyah Ice Condenser plant, taking into account uncertainties in the modeled physics known to affect hydrogen generation. The method uses user-specified likelihood distributions for uncertain model parameters, which may include uncertainties of a stochastic nature, to produce a collection of code calculations, or realizations, characterizing the range of possible outcomes. Forty MELCOR code realizations of Sequoyah were conducted that included 10 uncertain parameters, producing a range of in-vessel hydrogen quantities. The range of total hydrogen produced was approximately 583 kg ± 131 kg. Sensitivity analyses revealed expected trends with respect to the parameters of greatest importance; however, considerable scatter was observed when the results were plotted against any of the uncertain parameters, with no parameter manifesting a dominant effect on hydrogen generation. It is concluded that, with respect to the physics parameters investigated, further reduction of the predicted hydrogen uncertainty would require reducing all physics parameter uncertainties similarly, bearing in mind that some parameters are inherently uncertain within a range. It is suspected that some residual uncertainty associated with modeling complex, coupled and synergistic phenomena is an inherent aspect of complex systems and cannot be reduced to point value estimates. Probabilistic analyses such as the one demonstrated in this work are important to properly characterize the response of complex systems such as severe accident progression in nuclear power plants.
NASA Astrophysics Data System (ADS)
Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang
2015-10-01
Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ESMDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information. The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
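A minimal ES-MDA update, following the general form of Emerick and Reynolds (2012), can be sketched as follows; the inflation schedule, ensemble size, Gaussian prior, and toy forward model are assumptions (the paper draws its 50 prior samples with LHS and runs MIKE SHE/MIKE 11 as the forward model).

```python
import numpy as np

rng = np.random.default_rng(8)

def forward(m):                     # toy stand-in for the distributed hydrological model
    return np.array([m[0] + m[1], 2.0 * m[0], m[1] ** 2])

d_obs = np.array([3.0, 4.0, 1.1])
C_d = 0.05 * np.eye(3)              # observation-error covariance
alphas = [4.0, 4.0, 4.0, 4.0]       # inflation factors; sum of 1/alpha must equal 1

M = rng.normal([1.0, 1.0], 0.5, size=(50, 2))      # 50 prior parameter samples
for a in alphas:
    D = np.array([forward(m) for m in M])
    dM, dD = M - M.mean(0), D - D.mean(0)
    C_md = dM.T @ dD / (len(M) - 1)                # parameter-output cross-covariance
    C_dd = dD.T @ dD / (len(M) - 1)
    K = C_md @ np.linalg.inv(C_dd + a * C_d)       # ES-MDA gain with inflated noise
    d_pert = d_obs + np.sqrt(a) * rng.multivariate_normal(np.zeros(3), C_d, len(M))
    M = M + (d_pert - D) @ K.T                     # perturbed-observation update
print("posterior mean:", M.mean(0))
```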
Estimating the Non-Monetary Burden of Neurocysticercosis in Mexico
Bhattarai, Rachana; Budke, Christine M.; Carabin, Hélène; Proaño, Jefferson V.; Flores-Rivera, Jose; Corona, Teresa; Ivanek, Renata; Snowden, Karen F.; Flisser, Ana
2012-01-01
Background Neurocysticercosis (NCC) is a major public health problem in many developing countries where health education, sanitation, and meat inspection infrastructure are insufficient. The condition occurs when humans ingest eggs of the pork tapeworm Taenia solium, which then develop into larvae in the central nervous system. Although NCC is endemic in many areas of the world and is associated with considerable socio-economic losses, the burden of NCC remains largely unknown. This study provides the first estimate of disability adjusted life years (DALYs) associated with NCC in Mexico. Methods DALYs lost for symptomatic cases of NCC in Mexico were estimated by incorporating morbidity and mortality due to NCC-associated epilepsy, and morbidity due to NCC-associated severe chronic headaches. Latin hypercube sampling methods were employed to sample the distributions of uncertain parameters and to estimate 95% credible regions (95% CRs). Findings In Mexico, 144,433 and 98,520 individuals are estimated to suffer from NCC-associated epilepsy and NCC-associated severe chronic headaches, respectively. A total of 25,341 (95% CR: 12,569–46,640) DALYs were estimated to be lost due to these clinical manifestations, with 0.25 (95% CR: 0.12–0.46) DALY lost per 1,000 person-years of which 90% was due to NCC-associated epilepsy. Conclusion This is the first estimate of DALYs associated with NCC in Mexico. However, this value is likely to be underestimated since only the clinical manifestations of epilepsy and severe chronic headaches were included. In addition, due to limited country specific data, some parameters used in the analysis were based on systematic reviews of the literature or primary research from other geographic locations. Even with these limitations, our estimates suggest that healthy years of life are being lost due to NCC in Mexico. PMID:22363827
RISC-type microprocessors may revolutionize aerospace simulation
NASA Astrophysics Data System (ADS)
Jackson, Albert S.
The author explores the application of RISC (reduced instruction set computer) processors in massively parallel computer (MPC) designs for aerospace simulation. The MPC approach is shown to be well adapted to the needs of aerospace simulation. It is shown that any of the three common types of interconnection schemes used with MPCs is effective for general-purpose simulation, although the bus- or switch-oriented machines are somewhat easier to use. For partial differential equation models, the hypercube approach at first glance appears more efficient because the nearest-neighbor connections required for three-dimensional models are hardwired in a hypercube machine. However, the data broadcast ability of a bus system, combined with the fact that data can be transmitted over a bus as soon as they have been updated, makes the bus approach very competitive with the hypercube approach even for these types of models.
Generalized hypercube structures and hyperswitch communication network
NASA Technical Reports Server (NTRS)
Young, Steven D.
1992-01-01
This paper discusses an ongoing study that uses a recent development in communication control technology to implement hybrid hypercube structures. These architectures are similar to binary hypercubes, but they also provide added connectivity between the processors. This added connectivity increases communication reliability while decreasing the latency of interprocessor message passing. Because these factors directly determine the speed that can be obtained by multiprocessor systems, these architectures are attractive for applications such as remote exploration and experimentation, where high performance and ultrareliability are required. This paper describes and enumerates these architectures and discusses how they can be implemented with a modified version of the hyperswitch communication network (HCN). The HCN is analyzed because it has three attractive features that enable these architectures to be effective: speed, fault tolerance, and the ability to pass multiple messages simultaneously through the same hyperswitch controller.
Photovoltaic frequency–watt curve design for frequency regulation and fast contingency reserves
Johnson, Jay; Neely, Jason C.; Delhotal, Jarod J.; ...
2016-09-02
When renewable energy resources are installed in electricity grids, they typically increase generation variability and displace thermal generator control action and inertia. Grid operators combat these emerging challenges with advanced distributed energy resource (DER) functions to support frequency and provide voltage regulation and protection mechanisms. This paper focuses on providing frequency reserves using autonomous IEC TR 61850-90-7 pointwise frequency-watt (FW) functions that adjust DER active power as a function of measured grid frequency. The importance of incorporating FW functions into a fleet of photovoltaic (PV) systems is demonstrated in simulation. Effects of FW curve design, including curtailment, deadband, and droop, were analyzed against performance metrics using Latin hypercube sampling for 20%, 70%, and 120% PV penetration scenarios on the Hawaiian island of Lanai. To understand the financial implications of FW functions for utilities, a performance function was defined based on monetary costs attributable to curtailed PV production, load shedding, and generator wear. An optimization wrapper was then created to find the best FW function curve for each penetration level. It was found that in all cases, the utility would save money by implementing appropriate FW functions.
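A pointwise frequency-watt characteristic of the IEC TR 61850-90-7 kind can be sketched as a piecewise-linear rule; the deadband, droop, and curtailment numbers below are illustrative assumptions, not the study's optimized curve.

```python
import numpy as np

def frequency_watt(f_hz, p_avail_kw, f_nom=60.0, deadband_hz=0.036,
                   droop=0.05, curtail=0.95):
    """Active-power command vs. frequency: hold inside the deadband, shed above."""
    p_cmd = curtail * p_avail_kw                    # headroom kept as reserve
    over = f_hz - (f_nom + deadband_hz)
    if over > 0:                                    # over-frequency: shed along droop
        p_cmd -= p_avail_kw * over / (droop * f_nom)
    return float(np.clip(p_cmd, 0.0, p_avail_kw))

for f in (60.00, 60.05, 60.30, 60.80):
    print(f"{f:.2f} Hz -> {frequency_watt(f, 100.0):6.1f} kW")
```

Curtailing below full output is what lets the fleet also respond to under-frequency events by releasing the reserved headroom, which is the trade-off the cost function in the study weighs against lost PV production.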
Concurrent electromagnetic scattering analysis
NASA Technical Reports Server (NTRS)
Patterson, Jean E.; Cwik, Tom; Ferraro, Robert D.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Parker, Jay
1989-01-01
The computational power of the hypercube parallel computing architecture is applied to the solution of large-scale electromagnetic scattering and radiation problems. Three analysis codes have been implemented. A Hypercube Electromagnetic Interactive Analysis Workstation was developed to aid in the design and analysis of metallic structures such as antennas and to facilitate the use of these analysis codes. The workstation provides a general user environment for specification of the structure to be analyzed and graphical representations of the results.
Asynchronous Communication Scheme For Hypercube Computer
NASA Technical Reports Server (NTRS)
Madan, Herb S.
1988-01-01
Scheme devised for asynchronous-message communication system for Mark III hypercube concurrent-processor network. Network consists of up to 1,024 processing elements connected electrically as though they were at the corners of a 10-dimensional cube. Each node contains two Motorola 68020 processors along with a Motorola 68881 floating-point processor utilizing up to 4 megabytes of shared dynamic random-access memory. Scheme intended to support applications requiring passage of both polled or solicited and unsolicited messages.
Image-Processing Software For A Hypercube Computer
NASA Technical Reports Server (NTRS)
Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.
1992-01-01
Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.
NASA Technical Reports Server (NTRS)
Jefferson, David; Beckman, Brian
1986-01-01
This paper describes the concept of virtual time and its implementation in the Time Warp Operating System at the Jet Propulsion Laboratory. Virtual time is a distributed synchronization paradigm that is appropriate for distributed simulation, database concurrency control, real time systems, and coordination of replicated processes. The Time Warp Operating System is targeted toward the distributed simulation application and runs on a 32-node JPL Mark II Hypercube.
Electrooptical adaptive switching network for the hypercube computer
NASA Technical Reports Server (NTRS)
Chow, E.; Peterson, J.
1988-01-01
An all-optical network design for the hyperswitch network using regular free-space interconnects between electronic processor nodes is presented. The adaptive routing model used is described, and an adaptive routing control example is presented. The design demonstrates that existing electrooptical techniques are sufficient for implementing efficient parallel architectures without the need for more complex means of implementing arbitrary interconnection schemes. The electrooptical hyperswitch network significantly improves the communication performance of the hypercube computer.
Novel snapshot hyperspectral imager for fluorescence imaging
NASA Astrophysics Data System (ADS)
Chandler, Lynn; Chandler, Andrea; Periasamy, Ammasi
2018-02-01
Hyperspectral imaging has emerged as a new technique for the identification and classification of biological tissue. Benefitting from recent developments in sensor technology, the new class of hyperspectral imagers can capture entire hypercubes in single-shot operation, showing great potential for real-time imaging in the biomedical sciences. This paper explores the use of a SnapShot imager in fluorescence imaging via microscope for the very first time. Utilizing the latest imaging sensor, the SnapShot imager is both compact and attachable via C-mount to any commercially available light microscope. Using this setup, fluorescence hypercubes of several cells were generated, containing both spatial and spectral information. The fluorescence images were acquired in one-shot operation for the entire emission range from visible to near infrared (VIS-IR). The paper presents the hypercube images obtained from example tissues (475-630 nm). This study demonstrates the potential for application in cell biology and biomedical settings for real-time monitoring.
Inferring patterns in mitochondrial DNA sequences through hypercube independent spanning trees.
Silva, Eduardo Sant Ana da; Pedrini, Helio
2016-03-01
Given a graph G, a set of spanning trees rooted at a vertex r of G is said to be vertex/edge independent if, for each vertex v of G, v ≠ r, the paths from r to v in any pair of trees are vertex/edge disjoint. Independent spanning trees (ISTs) provide a number of advantages in data broadcasting due to their fault-tolerant properties. For this reason, some studies have addressed the issue by providing mechanisms for constructing independent spanning trees efficiently. In this work, we investigate how to construct independent spanning trees on hypercubes, generated from spanning binomial trees, and how to use them to predict mitochondrial DNA sequence parts through paths on the hypercube. The prediction works both for inferring mitochondrial DNA sequences comprised of six bases and for inferring anomalies that probably should not belong to the mitochondrial DNA standard. Copyright © 2016 Elsevier Ltd. All rights reserved.
Multiphase complete exchange on a circuit switched hypercube
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1991-01-01
On a distributed memory parallel computer, the complete exchange (all-to-all personalized) communication pattern requires each of n processors to send a different block of data to each of the remaining n - 1 processors. This pattern is at the heart of many important algorithms, most notably the matrix transpose. For a circuit-switched hypercube of dimension d (n = 2^d), two algorithms for achieving complete exchange are known. These are (1) the Standard Exchange approach, which employs d transmissions of 2^(d-1) blocks each and is useful for small block sizes, and (2) the Optimal Circuit Switched algorithm, which employs 2^d - 1 transmissions of 1 block each and is best for large block sizes. A unified multiphase algorithm is described that includes these two algorithms as special cases. The complete exchange on a hypercube of dimension d and block size m is achieved by carrying out k partial exchanges on subcubes of dimension d_i, where d_1 + ... + d_k = d, with effective block size m_i = m * 2^(d - d_i). When k = d and all d_i = 1, this corresponds to algorithm (1) above. For the case of k = 1 and d_1 = d, this becomes the circuit-switched algorithm (2). Changing the subcube dimensions d_i varies the effective block size and permits a compromise between the data permutation and block transmission overhead of (1) and the startup overhead of (2). For a hypercube of dimension d, the number of possible combinations of subcubes is p(d), the number of partitions of the integer d. This is an exponential but very slowly growing function, and it is feasible to search over these partitions for the best combination for a given message size. The approach was analyzed for, and implemented on, the Intel iPSC-860 circuit-switched hypercube. Measurements show good agreement with predictions and demonstrate that the multiphase approach can substantially improve performance for block sizes in the 0 to 160 byte range. This range, which corresponds to 0 to 40 floating point numbers per processor, is commonly encountered in practical numeric applications. The multiphase technique is applicable to all circuit-switched hypercubes that use the common e-cube routing strategy.
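The search over subcube partitions can be sketched as below; the linear communication-cost model (a startup cost alpha per transmission plus beta per block unit) is an assumption used for illustration, not measured iPSC-860 constants.

```python
def partitions(d):
    """All multisets of positive integers summing to d (there are p(d) of them)."""
    def gen(rest, max_part):
        if rest == 0:
            yield ()
        for p in range(min(rest, max_part), 0, -1):
            for tail in gen(rest - p, p):
                yield (p,) + tail
    return list(gen(d, d))

def cost(part, d, m, alpha=100.0, beta=1.0):
    """Multiphase cost: phase i performs 2^d_i - 1 transmissions, each moving an
    effective block of m * 2^(d - d_i) units (alpha = startup, beta = per unit)."""
    return sum((2 ** di - 1) * (alpha + beta * m * 2 ** (d - di)) for di in part)

d, m = 5, 8                       # 32-processor cube, 8-unit blocks (hypothetical)
best = min(partitions(d), key=lambda p: cost(p, d, m))
print("best partition:", best, "with cost", cost(best, d, m))
```

With the single-part partition (d,) this reduces to the circuit-switched cost (2^d - 1)(alpha + beta m), and with d parts of 1 it reduces to d(alpha + beta m 2^(d-1)), matching the two limiting algorithms above.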
Parallel algorithms for quantum chemistry. I. Integral transformations on a hypercube multiprocessor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiteside, R.A.; Binkley, J.S.; Colvin, M.E.
1987-02-15
For many years it has been recognized that fundamental physical constraints, such as the speed of light, will limit the ultimate speed of single-processor computers to less than about three billion floating point operations per second (3 GFLOPS). This limitation is becoming increasingly restrictive as commercially available machines are now within an order of magnitude of this asymptotic limit. A natural way to avoid this limit is to harness together many processors to work on a single computational problem. In principle, these parallel processing computers have speeds limited only by the number of processors one chooses to acquire. The usefulness of potentially unlimited processing speed to a computationally intensive field such as quantum chemistry is obvious. If these methods are to be applied to significantly larger chemical systems, parallel schemes will have to be employed. For this reason we have developed distributed-memory algorithms for a number of standard quantum chemical methods. We are currently implementing these on a 32-processor Intel hypercube. In this paper we present our algorithm and benchmark results for one of the bottleneck steps in quantum chemical calculations: the four-index integral transformation.
Revolution or evolution? An analysis of E-health innovation and impact using a hypercube model.
Wu, Jen-Her; Huang, An-Sheng; Hisa, Tzyh-Lih; Tsai, Hsien-Tang
2006-01-01
This study utilises a hypercube innovation model to analyse the changes in both healthcare informatics and related medical delivery models arising from the successive innovations of tele-healthcare, electronic healthcare (E-healthcare), and mobile healthcare (M-healthcare). Further, the critical impacts of these E-health innovations on the stakeholders (healthcare customers, hospitals, healthcare complementary providers, and healthcare regulators) are identified. Thereafter, the critical capabilities for adopting each innovation are discussed.
West, Caroline; Ploth, David; Fonner, Virginia; Mbwambo, Jessie; Fredrick, Francis; Sweat, Michael
2016-04-01
Noncommunicable diseases are on pace to outnumber infectious diseases as the leading cause of death in sub-Saharan Africa, yet many questions remain unanswered about effective methods of screening for type II diabetes mellitus (DM) in this resource-limited setting. We aim to design a screening algorithm for type II DM that optimizes the sensitivity and specificity of identifying individuals with undiagnosed DM, as well as affordability to health systems and individuals. Baseline demographic and clinical data, including hemoglobin A1c (HbA1c), were collected from 713 participants using probability sampling of the general population. We used these data, along with model parameters obtained from the literature, to mathematically model 8 proposed DM screening algorithms, optimizing sensitivity and specificity using Monte Carlo and Latin hypercube simulation. An algorithm that combines risk assessment and measurement of fasting blood glucose was found to be superior for the most resource-limited settings (sensitivity 68%, specificity 99%, and a cost of $2.94 per DM case identified). Incorporating HbA1c testing improves the sensitivity to 75.62% but raises the cost per DM case identified to $6.04. The preferred algorithms are heavily biased toward diagnosing those with more severe cases of DM. Using basic risk assessment tools and fasting blood sugar testing in lieu of HbA1c testing in resource-limited settings could allow for significantly more feasible DM screening programs with reasonable sensitivity and specificity. Copyright © 2016 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Yu, Maolin; Du, R.
2005-08-01
Sheet metal stamping is one of the most commonly used manufacturing processes, and hence much research has been carried out on it for economic gain. A review of the literature, however, shows that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same set of dies, product quality may vary owing to a number of factors, such as inhomogeneity of the workpiece material, loading error, and lubrication. At present, few methods can predict the quality variation, let alone identify what contributes to it. As a result, trial-and-error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict product quality variation and identify the sensitive design and process parameters. The new approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With acceptable accuracy, the inverse FEM (also called one-step FEM) requires a much smaller computational load than the usual incremental FEM and hence can be used to predict the quality variations under various conditions. LHS is a statistical method through which the sensitivity analysis can be carried out. The result of the sensitivity analysis has a clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented: drawing a rectangular box and drawing a two-step rectangular box.
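As an illustration of the LHS step described above (a generic sketch, not the authors' stamping model; the bounds and parameter names are made up), scipy's quasi-Monte Carlo module can generate a Latin hypercube design over uncertain process parameters:

```python
import numpy as np
from scipy.stats import qmc

# Three illustrative uncertain inputs: friction coefficient,
# blank holder force (kN), and sheet thickness (mm).
sampler = qmc.LatinHypercube(d=3, seed=42)
unit = sampler.random(n=200)                    # 200 points in [0, 1)^3
lo, hi = [0.05, 50.0, 0.9], [0.15, 150.0, 1.1]
X = qmc.scale(unit, lo, hi)                     # stratified design points

# Each row of X would be fed to the (inverse) FEM solver; the spread of
# the outputs estimates the quality variation.
print(X[:5])
```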
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental Systems Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and to provide users of ArcGIS Desktop a geoprocessing tool and methodology for considering how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model: errors in the model input data and errors in the coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
Evaluation of incremental reactivity and its uncertainty in Southern California.
Martien, Philip T; Harley, Robert A; Milford, Jana B; Russell, Armistead G
2003-04-15
The incremental reactivity (IR) and relative incremental reactivity (RIR) of carbon monoxide and 30 individual volatile organic compounds (VOC) were estimated for the South Coast Air Basin using two photochemical air quality models: a 3-D, grid-based model and a vertically resolved trajectory model. Both models include an extended version of the SAPRC99 chemical mechanism. For the 3-D modeling, the decoupled direct method (DDM-3D) was used to assess reactivities. The trajectory model was applied to estimate uncertainties in reactivities due to uncertainties in chemical rate parameters, deposition parameters, and emission rates using Monte Carlo analysis with Latin hypercube sampling. For most VOC, RIRs were found to be consistent in rankings with those produced by Carter using a box model. However, 3-D simulations show that coastal regions, upwind of most of the emissions, have comparatively low IR but higher RIR than predicted by box models for C4-C5 alkenes and carbonyls that initiate the production of HOx radicals. Biogenic VOC emissions were found to have a lower RIR than predicted by box model estimates, because emissions of these VOC were mostly downwind of the areas of primary ozone production. Uncertainties in RIR of individual VOC were found to be dominated by uncertainties in the rate parameters of their primary oxidation reactions. The coefficient of variation (COV) of most RIR values ranged from 20% to 30%, whereas the COV of absolute incremental reactivity ranged from about 30% to 40%. In general, uncertainty and variability both decreased when relative rather than absolute reactivity metrics were used.
Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN
NASA Astrophysics Data System (ADS)
Talbot, Paul W.
As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have a significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect the maximum stress on the surrounding cladding. The difficulty of quantifying the impact of input uncertainty in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space are sufficiently explored with a few low-cost calculations. For other models, it is computationally costly to obtain a good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We extend existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. We apply these methods to a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem and a multiphysics fuel cell problem coupling fuel performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuel performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
ERIC Educational Resources Information Center
Emanouilidis, Emanuel
2005-01-01
Latin squares have existed for hundreds of years but it wasn't until rather recently that Latin squares were used in other areas such as statistics, graph theory, coding theory and the generation of random numbers as well as in the design and analysis of experiments. This note describes Latin and diagonal Latin squares, a method of constructing…
Guo, Guang-Hui; Wu, Feng-Chang; He, Hong-Ping; Feng, Cheng-Lian; Zhang, Rui-Qing; Li, Hui-Xian
2012-04-01
Probabilistic approaches, such as Monte Carlo Sampling (MCS) and Latin Hypercube Sampling (LHS), and non-probabilistic approaches, such as interval analysis, fuzzy set theory, and variance propagation, were used to characterize uncertainties associated with risk assessment of ΣPAH8 in surface water of Taihu Lake. The results from MCS and LHS were represented by probability distributions of hazard quotients of ΣPAH8 in surface waters of Taihu Lake, which indicated that the confidence intervals of the hazard quotient at the 90% confidence level were 0.00018-0.89 and 0.00017-0.92, with means of 0.37 and 0.35, respectively. In addition, the probabilities that the hazard quotients from MCS and LHS exceed the threshold of 1 were 9.71% and 9.68%, respectively. The sensitivity analysis suggested that the toxicity data contributed the most to the resulting distribution of quotients. The hazard quotient of ΣPAH8 to aquatic organisms ranged from 0.00017 to 0.99 using interval analysis. The confidence interval at the 90% confidence level was (0.0015, 0.0163) using fuzzy set theory and (0.00016, 0.88) using variance propagation. These results indicated that the ecological risk of ΣPAH8 to aquatic organisms is low. Each method has its own advantages and limitations, being based on a different theory; therefore, the appropriate method should be selected case by case to quantify the effects of uncertainties on an ecological risk assessment. The approach based on probabilistic theory was selected as the most appropriate method to assess the risk of ΣPAH8 in surface water of Taihu Lake, providing an important scientific foundation for risk management and control of organic pollutants in water.
Atlanta Public Schools Latin Guide.
ERIC Educational Resources Information Center
Atlanta Public Schools, GA.
This teacher's guide outlines the basic objectives and the content of the Atlanta Public Schools Latin program and suggests resources and methods to achieve the stated goals. The philosophy and general objectives of the program are presented. Course outlines include: (1) Beginning Latin, (2) Intermediate Latin, (3) Vergil's "Aeneid," (4) Ovid:…
Estimated Under-Five Deaths Associated with Poor-Quality Antimalarials in Sub-Saharan Africa
Renschler, John P.; Walters, Kelsey M.; Newton, Paul N.; Laxminarayan, Ramanan
2015-01-01
Many antimalarials sold in sub-Saharan Africa are poor-quality (falsified, substandard, or degraded), and the burden of disease caused by this problem is inadequately quantified. In this article, we estimate the number of under-five deaths caused by ineffective treatment of malaria associated with consumption of poor-quality antimalarials in 39 sub-Saharan countries. Using Latin hypercube sampling, our estimates were calculated as the product of the number of private sector antimalarials consumed by malaria-positive children in 2013; the proportion of private sector antimalarials consumed that were of poor quality; and the case fatality rate (CFR) of under-five malaria-positive children who did not receive appropriate treatment. An estimated 122,350 (interquartile range [IQR]: 91,577–154,736) under-five malaria deaths were associated with consumption of poor-quality antimalarials, representing 3.75% (IQR: 2.81–4.75%) of all under-five deaths in our sample of 39 countries. There is considerable uncertainty surrounding our results because of gaps in data on case fatality rates and the prevalence of poor-quality antimalarials. Our analysis highlights the need for further investigation into the distribution of poor-quality antimalarials and for stronger surveillance and regulatory efforts to prevent their sale. PMID:25897068
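A schematic version of this product-of-factors estimate (with entirely made-up placeholder distributions, not the study's data) can be sampled with a Latin hypercube design:

```python
import numpy as np
from scipy.stats import qmc, lognorm, beta

n = 10_000
u = qmc.LatinHypercube(d=3, seed=1).random(n)

# Hypothetical input distributions for one country (illustrative only):
doses = lognorm(s=0.5, scale=2e6).ppf(u[:, 0])   # antimalarials consumed
p_poor = beta(a=2, b=18).ppf(u[:, 1])            # fraction poor-quality
cfr = beta(a=3, b=200).ppf(u[:, 2])              # case fatality rate

deaths = doses * p_poor * cfr                    # product of the 3 factors
q25, q50, q75 = np.percentile(deaths, [25, 50, 75])
print(f"median {q50:.0f}, IQR {q25:.0f}-{q75:.0f}")
```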
NASA Astrophysics Data System (ADS)
Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.
2017-10-01
Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately to achieve a more reliable attribution of the inputs' influence and a more reliable interpretation of the mathematical model results.
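A small sketch of the kind of comparison the abstract describes, estimating a multidimensional integral with scrambled Sobol points versus Latin hypercube points (a generic test integrand, not the Danish Eulerian Model):

```python
import numpy as np
from scipy.stats import qmc

def f(x):
    # Separable test integrand on [0, 1]^d with known integral 1.
    return np.prod((np.abs(4 * x - 2) + 1) / 2, axis=1)

d, n = 6, 2**12
sob = qmc.Sobol(d, scramble=True, seed=0).random(n)
lhs = qmc.LatinHypercube(d, seed=0).random(n)
print("Sobol estimate:", f(sob).mean())   # both should be close to 1
print("LHS estimate:  ", f(lhs).mean())
```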
Zeelenberg, René; Pecher, Diane
2015-03-01
Counterbalanced designs are frequently used in the behavioral sciences. Studies often counterbalance either the order in which conditions are presented in the experiment or the assignment of stimulus materials to conditions. Occasionally, researchers need to simultaneously counterbalance both condition order and stimulus assignment to conditions. Lewis (1989; Behavior Research Methods, Instruments, & Computers 25:414-415, 1993) presented a method for constructing Latin squares that fulfill these requirements. The resulting Latin squares counterbalance immediate sequential effects, but not remote sequential effects. Here, we present a new method for generating Latin squares that simultaneously counterbalance both immediate and remote sequential effects and assignment of stimuli to conditions. An Appendix is provided to facilitate implementation of these Latin square designs.
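For context, the classic Williams construction (which counterbalances immediate sequential effects only, for an even number of conditions; it is not the new method the abstract introduces) can be generated in a few lines:

```python
def williams_square(n: int) -> list[list[int]]:
    """Balanced Latin square for an even number of conditions:
    each condition immediately follows every other exactly once."""
    assert n % 2 == 0, "the classic construction assumes an even n"
    # First row interleaves ascending and descending: 0, 1, n-1, 2, n-2, ...
    first = [0] + [(k + 1) // 2 if k % 2 else n - k // 2
                   for k in range(1, n)]
    return [[(c + i) % n for c in first] for i in range(n)]

for row in williams_square(4):
    print(row)   # condition order for one counterbalancing group
```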
NASA Astrophysics Data System (ADS)
Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.
2016-12-01
Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. For this, it is important to identify, before satellite-based hydrologic model calibration, the model parameters that can change simulated spatial patterns. Our study rests on two main pillars: first, we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET); second, we investigate the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale Hydrologic Model (mHM). This distributed model is selected because it allows the spatial distribution of key soil parameters to change through the calibration of pedo-transfer function (PTF) parameters and includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition, the simulated AET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed in MODIS data. We introduce a new dynamic scaling function employing remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 of the 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling methods. The spatial patterns are found to be sensitive to the vegetation parameters, whereas streamflow dynamics are sensitive to the PTF parameters. The results of multi-objective model calibration show that calibrating mHM against observed streamflow improves only the streamflow simulations and does not reduce the spatial errors in AET. We further examine the results of model calibration using only spatial objective functions, which measure the association between observed and simulated AET maps, and a case combining spatial and streamflow metrics together.
Bui, T D; Dabdub, D; George, S C
1998-06-01
The steady-state exchange of inert gases across an in situ canine trachea has recently been shown to be limited equally by diffusion and perfusion over a wide range (0.01-350) of blood solubilities (β_blood; ml·ml⁻¹·atm⁻¹). Hence, we hypothesize that the exchange of ethanol (β_blood = 1,756 at 37 °C) in the airways depends on the blood flow rate from the bronchial circulation. To test this hypothesis, the dynamics of the bronchial circulation were incorporated into an existing model that describes the simultaneous exchange of heat, water, and a soluble gas in the airways. A detailed sensitivity analysis of key model parameters was performed using the method of Latin hypercube sampling. The model accurately predicted a previously reported experimental exhalation profile of ethanol (R² = 0.991) as well as the end-exhalation airstream temperature (34.6 °C). The model predicts that 27, 29, and 44% of exhaled ethanol in a single exhalation are derived from the tissues of the mucosa and submucosa, the bronchial circulation, and the tissue exterior to the submucosa (which would include the pulmonary circulation), respectively. Although the concentration of ethanol in the bronchial capillary decreased during inspiration, the three key model outputs (end-exhaled ethanol concentration, the slope of phase III, and end-exhaled temperature) were all statistically insensitive (P > 0.05) to the parameters describing the bronchial circulation. In contrast, the model outputs were all sensitive (P < 0.05) to the thickness of tissue separating the core body conditions from the bronchial smooth muscle. We conclude that both the bronchial circulation and the pulmonary circulation impact soluble gas exchange when the entire conducting airway tree is considered.
NASA Astrophysics Data System (ADS)
Gan, Yanjun; Liang, Xin-Zhong; Duan, Qingyun; Choi, Hyun Il; Dai, Yongjiu; Wu, Huan
2015-06-01
An uncertainty quantification framework was employed to examine the sensitivities of 24 model parameters from a newly developed Conjunctive Surface-Subsurface Process (CSSP) land surface model (LSM). The sensitivity analysis (SA) was performed over 18 representative watersheds in the contiguous United States to examine the influence of model parameters on the simulation of terrestrial hydrological processes. Two normalized metrics, relative bias (RB) and Nash-Sutcliffe efficiency (NSE), were adopted to assess the fit between simulated and observed streamflow discharge (SD) and evapotranspiration (ET) for a 14-year period. SA was conducted using a multiobjective two-stage approach, in which the first stage was a qualitative SA using Latin Hypercube-based One-At-a-Time (LH-OAT) screening, and the second stage was a quantitative SA using Multivariate Adaptive Regression Splines (MARS)-based Sobol' sensitivity indices. This approach combines the merits of qualitative and quantitative global SA methods and is effective and efficient for understanding and simplifying large, complex system models. Ten of the 24 parameters were identified as important across different watersheds. The contribution of each parameter to the total response variance was then quantified by Sobol' sensitivity indices. Generally, parameter interactions contribute the most to the response variance of the CSSP, and only 5 of the 24 parameters dominate model behavior. Four photosynthetic and respiratory parameters are shown to be influential for ET, whereas the reference depth for saturated hydraulic conductivity is the most influential parameter for SD in most watersheds. Parameter sensitivity patterns depend mainly on hydroclimatic regime, as well as on vegetation type and soil texture.
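A minimal sketch of the LH-OAT screening idea (generic, with a toy model in place of the CSSP; van Griensven's exact weighting details are omitted): start from Latin hypercube base points and perturb one parameter at a time to obtain averaged elementary effects.

```python
import numpy as np
from scipy.stats import qmc

def lh_oat(model, lo, hi, n_base=20, frac=0.05, seed=0):
    """Average |relative change in output| per parameter from
    one-at-a-time perturbations around LHS base points."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    d = lo.size
    X = qmc.scale(qmc.LatinHypercube(d, seed=seed).random(n_base), lo, hi)
    effects = np.zeros(d)
    for x in X:
        y0 = model(x)
        for j in range(d):
            xp = x.copy()
            xp[j] += frac * (hi[j] - lo[j])   # perturb one parameter
            effects[j] += abs(model(xp) - y0) / abs(y0)
    return effects / n_base

toy = lambda x: x[0] ** 2 + 0.5 * x[1] + 0.01 * x[2]
print(lh_oat(toy, lo=[0.5, 0.5, 0.5], hi=[2.0, 2.0, 2.0]))
```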
Tang, Sanyi; Liang, Juhua; Tan, Yuanshun; Cheke, Robert A
2013-01-01
Impulsive differential equations (hybrid dynamical systems) can provide a natural description of pulse-like actions, such as when a pesticide kills a pest instantly. However, pesticides may have long-term residual effects, with some remaining active against pests for several weeks, months, or years. Therefore, a more realistic method for modelling chemical control in such cases is to use continuous or piecewise-continuous periodic functions which affect growth rates. How the duration of residual pesticide effectiveness affects the success of pest control is key to the implementation of integrated pest management (IPM) in practice. To address these questions in detail, we have modelled IPM including residual effects of pesticides in terms of fixed pulse-type actions. The stability threshold conditions for pest eradication are given. Moreover, the effects on the threshold conditions of the killing efficiency rate and the decay rate of the pesticide on the pest and its natural enemies, the duration of residual effectiveness, the number of pesticide applications, and the number of natural enemy releases are investigated with regard to the extent of depression or resurgence resulting from pulses of pesticide applications and predator releases. Latin Hypercube Sampling/Partial Rank Correlation (LHS/PRCC) uncertainty and sensitivity analysis techniques are employed to investigate the key control parameters which are most significantly related to threshold values. The findings, combined with Volterra's principle, confirm that when the pesticide has a strong effect on the natural enemies, repeated use of the same pesticide can result in target pest resurgence. The results also indicate that there exists an optimal number of pesticide applications which can suppress the pest most effectively, and this may help in the design of an optimal control strategy.
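A compact sketch of the LHS/PRCC technique named above (a generic implementation under the usual definition of partial rank correlation, not code from the paper): rank-transform inputs and output, then correlate the residuals after regressing out the other parameters.

```python
import numpy as np
from scipy.stats import rankdata

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    out = []
    for j in range(R.shape[1]):
        others = np.column_stack(
            [np.ones(len(ry)), np.delete(R, j, axis=1)])
        # Residuals of x_j and y after removing the other parameters.
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = 5 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=500)
print(prcc(X, y))   # strongly positive, strongly negative, near zero
```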
Choi, Kwanghun; Spohn, Marie; Park, Soo Jin; Huwe, Bernd; Ließ, Mareike
2017-01-01
Nitrogen (N) and phosphorus (P) in topsoils are critical for plant nutrition. Relatively little is known about the spatial patterns of N and P in the organic layer of mountainous landscapes. Therefore, the spatial distributions of N and P in both the organic layer and the A horizon were analyzed using a light detection and ranging (LiDAR) digital elevation model and vegetation metrics. The objective of the study was to analyze the effect of vegetation and topography on the spatial patterns of N and P in a small forested watershed in South Korea. Soil samples were collected using the conditioned Latin hypercube method. LiDAR vegetation metrics, the normalized difference vegetation index (NDVI), and terrain parameters were derived as predictors. Spatially explicit predictions of N/P ratios were obtained using a random forest with uncertainty analysis. We tested different strategies of model validation (repeated 2-fold to 20-fold and leave-one-out cross validation). Repeated 10-fold cross validation was selected for model validation because of its comparatively high accuracy and low variance of prediction. Surface curvature was the best predictor of P contents in the organic layer and in the A horizon, while LiDAR vegetation metrics and NDVI were important predictors of N in the organic layer. N/P ratios increased with surface curvature and were higher on the convex upper slope than on the concave lower slope. This was due to P enrichment of the soil on the lower slope and a more even spatial distribution of N. Our digital soil maps showed that the topsoils on the upper slopes contain relatively little P. These findings are critical for understanding N and P dynamics in mountainous ecosystems. PMID:28837590
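The validation strategy chosen in the abstract can be sketched with scikit-learn (a generic illustration with synthetic data, not the Korean watershed dataset):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_regression(n_samples=120, n_features=6, noise=10, random_state=0)

model = RandomForestRegressor(n_estimators=500, random_state=0)
cv = RepeatedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")

# Mean accuracy and its spread across repeats guide the choice of
# validation scheme, as in the comparison described above.
print(f"R2 = {scores.mean():.3f} +/- {scores.std():.3f}")
```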
Towards a consensus-based biokinetic model for green microalgae - The ASM-A.
Wágner, Dorottya S; Valverde-Pérez, Borja; Sæbø, Mariann; Bregua de la Sotilla, Marta; Van Wagenen, Jonathan; Smets, Barth F; Plósz, Benedek Gy
2016-10-15
Cultivation of microalgae in open ponds and closed photobioreactors (PBRs) using wastewater resources offers an opportunity for biochemical nutrient recovery. Effective reactor system design and process control of PBRs require process models. Several models of different complexities have been developed to predict microalgal growth. However, none of these models can effectively describe all the relevant processes when microalgal growth is coupled with nutrient removal and recovery from wastewaters. Here, we present a mathematical model developed to simulate green microalgal growth (ASM-A) using the systematic approach of the activated sludge modelling (ASM) framework. The process model, identified based on a literature review and new experimental data, accounts for factors influencing photoautotrophic and heterotrophic microalgal growth, nutrient uptake and storage (i.e., the Droop model), and decay of microalgae. Model parameters were estimated using laboratory-scale batch and sequenced batch experiments with the novel Latin Hypercube Sampling based Simplex (LHSS) method. The model was evaluated using independent data obtained in a 24-L PBR operated in sequenced batch mode, and its identifiability was assessed. The model can effectively describe microalgal biomass growth, ammonia and phosphate concentrations, and phosphorus storage using a set of average parameter values estimated from the experimental data. A statistical analysis of simulated and measured data suggests that culture history and substrate availability can introduce significant variability in parameter values for predicting the reaction rates of the bulk nitrate and intracellularly stored nitrogen state variables, thereby requiring scenario-specific model calibration. ASM-A was identified using a standard cultivation medium, and it can provide a platform for extensions accounting for factors influencing algal growth and nutrient storage using wastewater resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
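The LHSS estimation step can be read as a Latin-hypercube-seeded simplex search; here is a minimal sketch under that interpretation (our reading, not the authors' published algorithm), using scipy's Nelder-Mead from multiple LHS starting points and a toy logistic growth model:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

def sse(theta, t, obs):
    """Sum of squared errors of a toy logistic growth model."""
    mu_max, x0 = theta
    pred = x0 * np.exp(mu_max * t) / (1 + x0 * (np.exp(mu_max * t) - 1))
    return np.sum((pred - obs) ** 2)

t = np.linspace(0, 10, 25)
obs = 0.05 * np.exp(0.8 * t) / (1 + 0.05 * (np.exp(0.8 * t) - 1))

lo, hi = np.array([0.1, 0.01]), np.array([2.0, 0.5])
starts = qmc.scale(qmc.LatinHypercube(d=2, seed=3).random(10), lo, hi)
fits = [minimize(sse, x0, args=(t, obs), method="Nelder-Mead")
        for x0 in starts]
best = min(fits, key=lambda r: r.fun)
print(best.x, best.fun)   # recovers mu_max ~ 0.8, x0 ~ 0.05
```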
NASA Astrophysics Data System (ADS)
Trudel, Mélanie; Leconte, Robert; Paniconi, Claudio
2014-06-01
Data assimilation techniques not only enhance model simulations and forecasts, they also provide an opportunity to obtain a diagnostic of both the model and the observations used in the assimilation process. In this research, an ensemble Kalman filter was used to assimilate streamflow observations at a basin outlet and at interior locations, as well as soil moisture at two different depths (15 and 45 cm). The simulation model is the distributed physically based hydrological model CATHY (CATchment HYdrology), and the study site is the Des Anglais watershed, a 690 km² river basin located in southern Quebec, Canada. Use of Latin hypercube sampling instead of a conventional Monte Carlo method to generate the ensemble reduced the size of the ensemble, and therefore the calculation time. Different post-assimilation diagnostics, based on innovations (observation minus background), analysis residuals (observation minus analysis), and analysis increments (analysis minus background), were used to evaluate assimilation optimality. An important issue in data assimilation is the estimation of error covariance matrices. These diagnostics were also used in a calibration exercise to determine the standard deviations of model parameters, forcing data, and observations that led to optimal assimilations. The analysis of innovations showed a lag between the model forecast and the observations during rainfall events. Assimilation of streamflow observations corrected this discrepancy. Assimilation of outlet streamflow observations improved the Nash-Sutcliffe efficiencies (NSE) between the one-day model forecast and the observations at both the outlet and interior point locations, owing to the structure of the state vector used. However, assimilation of streamflow observations systematically increased the simulated soil moisture values.
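For readers unfamiliar with the filter, a generic stochastic EnKF analysis step looks like the following sketch (standard textbook form, not the CATHY-specific implementation):

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF analysis: X is (n_state, n_ens), y is (n_obs,),
    H maps state to observation space, R is the observation error cov."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)
    P_yy = HA @ HA.T / (n_ens - 1) + R             # innovation covariance
    P_xy = A @ HA.T / (n_ens - 1)                  # state-obs covariance
    K = P_xy @ np.linalg.inv(P_yy)                 # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=n_ens).T         # perturbed observations
    return X + K @ (Y - HX)                        # analysis ensemble

rng = np.random.default_rng(0)
X = rng.normal(1.0, 0.3, size=(4, 50))             # 50-member ensemble
H = np.eye(2, 4)                                   # observe first 2 states
y = np.array([1.2, 0.8])
Xa = enkf_update(X, y, H, 0.01 * np.eye(2), rng)
print(X.mean(axis=1), "->", Xa.mean(axis=1))
```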
Optimization of a simplified automobile finite element model using time varying injury metrics.
Gaewsky, James P; Danelson, Kerry A; Weaver, Caitlin M; Stitzel, Joel D
2014-01-01
In 2011, frontal crashes resulted in 55% of passenger car injuries in the United States, with 10,277 fatalities and 866,000 injuries. To better understand frontal crash injury mechanisms, human body finite element models (FEMs) can be used to reconstruct Crash Injury Research and Engineering Network (CIREN) cases. A limitation of this method is the paucity of vehicle FEMs; therefore, we developed a functionally equivalent simplified vehicle model. The New Car Assessment Program (NCAP) data for our selected vehicle were from a frontal collision with a Hybrid III (H3) Anthropomorphic Test Device (ATD) occupant. From NCAP test reports, the vehicle geometry was created and the H3 ATD was positioned. The material and component properties optimized using a variation study process were: steering column shear bolt fracture force and stroke resistance, seatbelt pretensioner force, frontal and knee bolster airbag stiffness, and belt friction through the D-ring. These parameters were varied using three successive Latin hypercube designs of experiments with 130-200 simulations each. The H3 injury response was compared to the reported NCAP frontal test results for the head, chest, and pelvis accelerations, and the seat belt and femur forces. The phase, magnitude, and comprehensive error factors from a Sprague and Geers analysis were calculated for each injury metric and then combined to determine the simulations with the best match to the crash test. Sprague and Geers analyses typically yield error factors ranging from 0 to 1, with lower scores being more optimized. The total body injury response error factor for the most optimized simulation from each round of the variation study decreased from 0.466 to 0.395 to 0.360. This procedure for optimizing vehicle FEMs is a valuable tool for conducting future CIREN case reconstructions in a variety of vehicles.
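The Sprague and Geers error factors referenced above have a standard closed form; a short sketch (using the commonly cited 2004 definitions, assumed here rather than taken from this paper) is:

```python
import numpy as np

def sprague_geers(measured, computed, t):
    """Magnitude (M), phase (P), and comprehensive (C) error factors."""
    i_mm = np.trapz(measured * measured, t)
    i_cc = np.trapz(computed * computed, t)
    i_mc = np.trapz(measured * computed, t)
    M = np.sqrt(i_cc / i_mm) - 1.0
    P = np.arccos(i_mc / np.sqrt(i_mm * i_cc)) / np.pi
    return M, P, np.hypot(M, P)

t = np.linspace(0, 0.15, 600)                  # 150 ms crash pulse
test = np.sin(2 * np.pi * 10 * t) * np.exp(-20 * t)
sim = 0.9 * np.sin(2 * np.pi * 10 * (t - 0.002)) * np.exp(-20 * t)
M, P, C = sprague_geers(test, sim, t)
print(f"M={M:.3f} P={P:.3f} C={C:.3f}")        # lower C = better match
```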
Simplifying the complexity of a coupled carbon turnover and pesticide degradation model
NASA Astrophysics Data System (ADS)
Marschmann, Gianna; Erhardt, André H.; Pagel, Holger; Kügler, Philipp; Streck, Thilo
2016-04-01
The mechanistic one-dimensional model PECCAD (PEsticide degradation Coupled to CArbon turnover in the Detritusphere; Pagel et al. 2014, Biogeochemistry 117, 185-204) has been developed as a tool to elucidate regulation mechanisms of pesticide degradation in soil. A feature of this model is that it integrates functional traits of microorganisms, identifiable by molecular tools, and physicochemical processes such as transport and sorption that control substrate availability. Predicting the behavior of microbially active interfaces demands a fundamental understanding of factors controlling their dynamics. Concepts from dynamical systems theory allow us to study general properties of the model such as its qualitative behavior, intrinsic timescales and dynamic stability: Using a Latin hypercube method we sampled the parameter space for physically realistic steady states of the PECCAD ODE system and set up a numerical continuation and bifurcation problem with the open-source toolbox MatCont in order to obtain a complete classification of the dynamical system's behaviour. Bifurcation analysis reveals an equilibrium state of the system entirely controlled by fungal kinetic parameters. The equilibrium is generally unstable in response to small perturbations except for a small band in parameter space where the pesticide pool is stable. Time scale separation is a phenomenon that occurs in almost every complex open physical system. Motivated by the notion of "initial-stage" and "late-stage" decomposers and the concept of r-, K- or L-selected microbial life strategies, we test the applicability of geometric singular perturbation theory to identify fast and slow time scales of PECCAD. Revealing a generic fast-slow structure would greatly simplify the analysis of complex models of organic matter turnover by reducing the number of unknowns and parameters and providing a systematic mathematical framework for studying their properties.
Energies and radial distributions of Bs mesons - the effect of hypercubic blocking
NASA Astrophysics Data System (ADS)
Koponen, Jonna
2006-12-01
This is a follow-up to our earlier work on the energies and the charge (vector) and matter (scalar) distributions of S-wave states in a heavy-light meson, where the heavy quark is static and the light quark has a mass about that of the strange quark. We study the radial distributions of higher angular momentum states, namely P- and D-wave states, using a "fuzzy" static quark. A new improvement is the use of hypercubic blocking in the time direction, which effectively constrains the heavy quark to move within a 2a hypercube (a is the lattice spacing). The calculation is carried out with dynamical fermions on a 16³ × 32 lattice with a ≈ 0.10 fm, generated using the non-perturbatively improved clover action. The configurations were generated by the UKQCD Collaboration using lattice action parameters β = 5.2, c_SW = 2.0171, and κ = 0.1350. In nature the closest equivalent of this heavy-light system is the Bs meson. Attempts are now being made to understand these results in terms of the Dirac equation.
Finite elements and the method of conjugate gradients on a concurrent processor
NASA Technical Reports Server (NTRS)
Lyzenga, G. A.; Raefsky, A.; Hager, B. H.
1985-01-01
An algorithm for the iterative solution of finite element problems on a concurrent processor is presented. The method of conjugate gradients is used to solve the system of matrix equations, which is distributed among the processors of a MIMD computer according to an element-based spatial decomposition. This algorithm is implemented in a two-dimensional elastostatics program on the Caltech Hypercube concurrent processor. The results of tests on up to 32 processors show nearly linear concurrent speedup, with efficiencies over 90 percent for sufficiently large problems.
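As a reminder of the kernel being parallelized, a serial conjugate gradient iteration for a symmetric positive-definite system is sketched below (textbook form; the element-based decomposition in the paper distributes exactly these matrix-vector and dot products):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                # the distributed matvec in the paper
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # -> [0.0909..., 0.6363...]
```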
Robust stabilization of the Space Station in the presence of inertia matrix uncertainty
NASA Technical Reports Server (NTRS)
Wie, Bong; Liu, Qiang; Sunkel, John
1993-01-01
This paper presents a robust H-infinity full-state feedback control synthesis method for uncertain systems with D11 ≠ 0. The method is applied to the robust stabilization problem of the Space Station in the face of inertia matrix uncertainty. The control design objective is to find a robust controller that yields the largest stable hypercube in the uncertain parameter space while satisfying the nominal performance requirements. The significance of employing an uncertain plant model with D11 ≠ 0 is demonstrated.
Optimization of the computational load of a hypercube supercomputer onboard a mobile robot.
Barhen, J; Toomarian, N; Protopopescu, V
1987-12-01
A combinatorial optimization methodology is developed, which enables the efficient use of hypercube multiprocessors onboard mobile intelligent robots dedicated to time-critical missions. The methodology is implemented in terms of large-scale concurrent algorithms based either on fast simulated annealing or on nonlinear asynchronous neural networks. In particular, analytic expressions are given for the effect of single-neuron perturbations on the system's configuration energy. Compact neuromorphic data structures are used to model effects such as precedence constraints, processor idling times, and task-schedule overlaps. Results for a typical robot-dynamics benchmark are presented.
Maximum Torque and Momentum Envelopes for Reaction Wheel Arrays
NASA Technical Reports Server (NTRS)
Reynolds, R. G.; Markley, F. Landis
2001-01-01
Spacecraft reaction wheel maneuvers are limited by the maximum torque and/or angular momentum which the wheels can provide. For an n-wheel configuration, the torque or momentum envelope can be obtained by projecting the n-dimensional hypercube, representing the domain boundary of individual wheel torques or momenta, into three-dimensional space via the 3×n matrix of wheel axes. In this paper, the properties of the projected hypercube are discussed, and algorithms are proposed for determining this maximal torque or momentum envelope for general wheel configurations. Practical implementation strategies for specific wheel configurations are also considered.
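The envelope computation described above can be prototyped directly: enumerate the 2^n vertices of the wheel-torque hypercube, project them through the wheel-axis matrix, and take the convex hull. The sketch below is a brute-force version that is exponential in n, with an illustrative four-wheel pyramid configuration, not a flight algorithm:

```python
import numpy as np
from itertools import product
from scipy.spatial import ConvexHull

# Columns are unit wheel axes of a 4-wheel pyramid; rows are x, y, z.
beta = np.deg2rad(54.7)
W = np.array([[np.sin(beta), 0, -np.sin(beta), 0],
              [0, np.sin(beta), 0, -np.sin(beta)],
              [np.cos(beta)] * 4])

tau_max = 1.0                                  # per-wheel torque limit
verts = tau_max * np.array(list(product([-1, 1], repeat=4)), dtype=float)
envelope = ConvexHull(verts @ W.T)             # project 2^4 vertices to 3-D

print("envelope volume:", envelope.volume)
print("facets:", len(envelope.simplices))
```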
Optimal design of a piezoelectric transducer for exciting guided wave ultrasound in rails
NASA Astrophysics Data System (ADS)
Ramatlo, Dineo A.; Wilke, Daniel N.; Loveday, Philip W.
2017-02-01
An existing Ultrasonic Broken Rail Detection System installed in South Africa on a heavy duty railway line is currently being upgraded to include defect detection and location. To accomplish this, an ultrasonic piezoelectric transducer to strongly excite a guided wave mode with energy concentrated in the web (web mode) of a rail is required. A previous study demonstrated that the recently developed SAFE-3D (Semi-Analytical Finite Element - 3 Dimensional) method can effectively predict the guided waves excited by a resonant piezoelectric transducer. In this study, the SAFE-3D model is used in the design optimization of a rail web transducer. A bound-constrained optimization problem was formulated to maximize the energy transmitted by the transducer in the web mode when driven by a pre-defined excitation signal. Dimensions of the transducer components were selected as the three design variables. A Latin hypercube sampled design of experiments that required a total of 500 SAFE-3D analyses in the design space was employed in a response surface-based optimization approach. The Nelder-Mead optimization algorithm was then used to find an optimal transducer design on the constructed response surface. The radial basis function response surface was first verified by comparing a number of predicted responses against the computed SAFE-3D responses. The performance of the optimal transducer predicted by the optimization algorithm on the response surface was also verified to be sufficiently accurate using SAFE-3D. The computational advantages of SAFE-3D in optimal transducer design are noteworthy as more than 500 analyses were performed. The optimal design was then manufactured and experimental measurements were used to validate the predicted performance. The adopted design method has demonstrated the capability to automate the design of transducers for a particular rail cross-section and frequency range.
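The surrogate-based loop described above (LHS design, radial basis function response surface, Nelder-Mead search) can be sketched generically as follows; the objective is a stand-in for the SAFE-3D web-mode energy, and all names and bounds are illustrative:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize
from scipy.stats import qmc

def energy(X):
    """Stand-in for the expensive SAFE-3D web-mode energy (to maximize)."""
    return -np.sum((X - [12.0, 6.0, 25.0]) ** 2, axis=1)

lo, hi = np.array([5.0, 2.0, 10.0]), np.array([20.0, 10.0, 40.0])
X = qmc.scale(qmc.LatinHypercube(d=3, seed=7).random(500), lo, hi)
surrogate = RBFInterpolator(X, energy(X))      # response surface fit

res = minimize(lambda x: -surrogate(x[None])[0],
               x0=X[np.argmax(energy(X))],     # best sampled design point
               method="Nelder-Mead")
print(res.x)   # candidate transducer dimensions to verify with SAFE-3D
```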
Designing an optimal software intensive system acquisition: A game theoretic approach
NASA Astrophysics Data System (ADS)
Buettner, Douglas John
The development of schedule-constrained software-intensive space systems is challenging. Case study data from national security space programs developed at the U.S. Air Force Space and Missile Systems Center (USAF SMC) provide evidence of a strong desire by contractors to skip or severely reduce software design and early defect detection activities in these schedule-constrained environments. The research findings suggest recommendations to address these issues at numerous levels. However, the observations lead us to investigate modeling and theoretical methods to understand what motivated this behavior in the first place. As a result, Madachy's inspection-based system dynamics model is modified to include unit testing and an integration test feedback loop. This Modified Madachy Model (MMM) is used as a tool to investigate the consequences of this behavior on the observed defect dynamics for two remarkably different case study software projects. Latin hypercube sampling of the MMM with sample distributions for quality-, schedule-, and cost-driven strategies demonstrates that the higher-cost, higher-effort quality-driven strategies consistently provide better schedule performance than the schedule-driven up-front effort-reduction strategies. Game-theoretic reasoning, grounded in the case study evidence and Austin's agency model, is used to explain why schedule-driven engineers cut corners on inspections and unit testing. Game theory concepts are then used to argue that the source of the problem, and hence the solution to developers cutting corners on quality in schedule-driven system acquisitions, ultimately lies with the government. The game theory arguments also suggest that a multi-player dynamic Nash bargaining game provides a solution to the observed lack-of-quality game between the government (the acquirer) and large-corporation software developers. A closing note argues that this multi-player dynamic Nash bargaining game also addresses Freeman Dyson's problem of finding a way to label systems as good or bad.
Seismic Fragility Analysis of a Condensate Storage Tank with Age-Related Degradations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nie, J.; Braverman, J.; Hofmayer, C
2011-04-01
The Korea Atomic Energy Research Institute (KAERI) is conducting a five-year research project to develop a realistic seismic risk evaluation system which includes the consideration of aging of structures and components in nuclear power plants (NPPs). The KAERI research project includes three specific areas that are essential to seismic probabilistic risk assessment (PRA): (1) probabilistic seismic hazard analysis, (2) seismic fragility analysis including the effects of aging, and (3) plant seismic risk analysis. Since 2007, Brookhaven National Laboratory (BNL) has been in a collaboration agreement with KAERI to support its development of seismic capability evaluation technology for degraded structures and components. The collaborative research effort is intended to continue over a five-year period. The goal of this collaboration is to assist KAERI in developing seismic fragility analysis methods that consider the potential effects of age-related degradation of structures, systems, and components (SSCs). The research results of this multi-year collaboration will be utilized as input to seismic PRAs. This report describes the research effort performed by BNL for the Year 4 scope of work. It was developed as an update to the Year 3 report by incorporating a major supplement to the Year 3 fragility analysis. In the Year 4 research scope, an additional study was carried out to consider a further degradation scenario, in which the three basic degradation scenarios, i.e., degraded tank shell, degraded anchor bolts, and cracked anchorage concrete, are combined in a non-perfect correlation manner. A representative operational water level is used for this effort. Building on the same CDFM procedure implemented for the Year 3 tasks, a simulation method was applied using optimum Latin hypercube samples to characterize the deterioration behavior of the fragility capacity as a function of age-related degradations. The results are summarized in Section 5 and Appendices G through I.
NASA Astrophysics Data System (ADS)
Sun, Mei; Zhang, Xiaolin; Huo, Zailin; Feng, Shaoyuan; Huang, Guanhua; Mao, Xiaomin
2016-03-01
Quantitatively ascertaining and analyzing the effects of model uncertainty on model reliability is a focal point for agricultural-hydrological models because of the many uncertainties in inputs and processes. In this study, the generalized likelihood uncertainty estimation (GLUE) method with Latin hypercube sampling (LHS) was used to evaluate the uncertainty of the RZWQM-DSSAT (RZWQM2) model output responses and the sensitivity of 25 parameters related to soil properties, nutrient transport, and crop genetics. To avoid the one-sided risk of model prediction caused by using a single calibration criterion, a combined likelihood (CL) function integrating information on water, nitrogen, and crop production was introduced in the GLUE analysis for the predictions of four model output responses: the total amount of water content (T-SWC) and nitrate nitrogen (T-NIT) within the 1-m soil profile, and the seed yields of waxy maize (Y-Maize) and winter wheat (Y-Wheat). In the process of evaluating RZWQM2, measurements and meteorological data were obtained from a field experiment on a winter wheat and waxy maize crop rotation conducted from 2003 to 2004 in southern Beijing. The calibration and validation results indicated that the RZWQM2 model can simulate crop growth and water-nitrogen migration and transformation in the wheat-maize rotation system. The GLUE uncertainty analysis showed that T-NIT was sensitive to parameters related to the nitrification coefficient, maize growth characteristics at the seedling stage, the wheat vernalization period, and the wheat photoperiod. Parameters for soil saturated hydraulic conductivity, nitrogen nitrification and denitrification, and urea hydrolysis played an important role in the crop yield components. The prediction errors for RZWQM2 outputs with the CL function were lower and more uniform than those of likelihood functions composed of individual calibration criteria. This successful application of the GLUE method for determining the uncertainty and sensitivity of RZWQM2 can provide a reference for optimizing model parameters with different emphases according to research interests.
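The GLUE procedure itself is straightforward to prototype; the sketch below shows the generic steps (LHS sampling, an informal likelihood, a behavioral threshold, likelihood-weighted prediction bounds) with a toy model standing in for RZWQM2:

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 30)
obs = 2.0 * t + rng.normal(0, 0.1, 30)     # synthetic observations
model = lambda a, b: a * t + b

# 1. Latin hypercube sample of the 2-D parameter space.
theta = qmc.scale(qmc.LatinHypercube(d=2, seed=1).random(5000),
                  [0.0, -1.0], [4.0, 1.0])

# 2. Informal likelihood: inverse mean squared error.
sims = np.array([model(a, b) for a, b in theta])
lik = 1.0 / np.mean((sims - obs) ** 2, axis=1)

# 3. Keep 'behavioral' parameter sets above a likelihood threshold.
keep = lik > np.quantile(lik, 0.95)
w = lik[keep] / lik[keep].sum()

# 4. Likelihood-weighted 5-95% prediction band at the final time step.
order = np.argsort(sims[keep, -1])
cdf = np.cumsum(w[order])
band = np.interp([0.05, 0.95], cdf, sims[keep, -1][order])
print("90% prediction band at t=1:", band)
```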
The Power of Tradition: Methods for Teaching Latin in the Context of History of Educational Thought
ERIC Educational Resources Information Center
Fomin, Andriy
2005-01-01
Many authors note that the history of teaching Latin would be a fruitful topic for a comprehensive treatise. Although intense debates about the quality and necessity of teaching Latin date back as early as in the eighteenth century, Latin courses have persisted into the present and, notably, with few changes in content. The author supports the…
A non-invasive implementation of a mixed domain decomposition method for frictional contact problems
NASA Astrophysics Data System (ADS)
Oumaziz, Paul; Gosselet, Pierre; Boucard, Pierre-Alain; Guinard, Stéphane
2017-11-01
A non-invasive implementation of the Latin domain decomposition method for frictional contact problems is described. The formulation requires dealing with mixed (Robin) conditions on the faces of the subdomains, which is not a standard feature of commercial software. We therefore propose a new implementation of the linear stage of the Latin method with a non-local search direction built from the stiffness of a layer of elements on the interfaces. This choice enables us to implement the method within the open-source software Code_Aster and to run 2D and 3D examples with performance similar to the standard Latin method.
Performance of a plasma fluid code on the Intel parallel computers
NASA Technical Reports Server (NTRS)
Lynch, V. E.; Carreras, B. A.; Drake, J. B.; Leboeuf, J. N.; Liewer, P.
1992-01-01
One approach to improving the real-time efficiency of plasma turbulence calculations is to use a parallel algorithm. A parallel algorithm for plasma turbulence calculations was tested on the Intel iPSC/860 hypercube and the Touchstone Delta machine. Using the 128 processors of the Intel iPSC/860 hypercube, a factor of 5 improvement over a single-processor CRAY-2 is obtained. For the Touchstone Delta machine, the corresponding improvement factor is 16. For plasma edge turbulence calculations, an extrapolation of the present results to the Intel Sigma machine gives an improvement factor close to 64 over the single-processor CRAY-2.
A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor
NASA Technical Reports Server (NTRS)
Rao, Hariprasad Nannapaneni
1989-01-01
The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event-driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors in parallel, for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.
A Bayesian ensemble data assimilation to constrain model parameters and land-use carbon emissions
NASA Astrophysics Data System (ADS)
Lienert, Sebastian; Joos, Fortunat
2018-05-01
A dynamic global vegetation model (DGVM) is applied in a probabilistic framework and benchmarking system to constrain uncertain model parameters by observations and to quantify carbon emissions from land-use and land-cover change (LULCC). Processes featured in DGVMs depend on parameters which are prone to substantial uncertainty. To cope with these uncertainties, Latin hypercube sampling (LHS) is used to create a 1000-member perturbed parameter ensemble, which is then evaluated with a diverse set of global and spatiotemporally resolved observational constraints. We discuss the performance of the constrained ensemble and use it to formulate a new best-guess version of the model (LPX-Bern v1.4). The observationally constrained ensemble is used to investigate historical emissions due to LULCC (ELUC) and their sensitivity to model parametrization. We find a global ELUC estimate of 158 (108, 211) PgC (median and 90% confidence interval) between 1800 and 2016. We compare ELUC to other estimates both globally and regionally. Spatial patterns are investigated and estimates of ELUC for the 10 countries with the largest contribution to the flux over the historical period are reported. We consider model versions with and without additional land-use processes (shifting cultivation and wood harvest) and find that the difference in global ELUC is of the same order of magnitude as the parameter-induced uncertainty and in some cases could potentially even be offset with an appropriate parameter choice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, Stefan A.
2010-11-01
iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. It performs sensitivity analysis, parameter estimation, and uncertainty propagation analysis in geosciences, reservoir engineering, and other application areas. It supports a number of different combinations of fluids and components [equation-of-state (EOS) modules]. In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files. This link is achieved by means of the PEST application programming interface. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin hypercube Monte Carlo simulations for uncertainty propagation analysis. A detailed residual and error analysis is provided. This upgrade includes new EOS modules (specifically EOS7c, ECO2N, and TMVOC), hysteretic relative permeability and capillary pressure functions, and the PEST API. More details can be found at http://esd.lbl.gov/iTOUGH2 and the publications cited there. Hardware Req.: Multi-platform; Related/auxiliary software: PVM (if running in parallel).
Machine learning from computer simulations with applications in rail vehicle dynamics
NASA Astrophysics Data System (ADS)
Taheri, Mehdi; Ahmadian, Mehdi
2016-05-01
The application of stochastic modelling for learning the behaviour of multibody dynamics (MBD) models is investigated. Post-processing data from a simulation run are used to train the stochastic model that estimates the relationship between model inputs (suspension relative displacement and velocity) and the output (sum of suspension forces). The stochastic model can be used to reduce the computational burden of the MBD model by replacing a computationally expensive subsystem in the model (suspension subsystem). With minor changes, the stochastic modelling technique is able to learn the behaviour of a physical system and integrate its behaviour within MBD models. The technique is highly advantageous for MBD models where real-time simulations are necessary, or with models that have a large number of repeated substructures, e.g. modelling a train with a large number of railcars. The fact that the training data are acquired prior to the development of the stochastic model precludes conventional sampling-plan strategies such as Latin hypercube sampling, where simulations are performed using the inputs dictated by the sampling plan. Since the sampling plan greatly influences the overall accuracy and efficiency of the stochastic predictions, a sampling plan suitable for the process is developed, in which the most space-filling subset of the acquired data, with a prescribed number of sample points, that best describes the dynamic behaviour of the system under study is selected as the training data.
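A hedged sketch of one way to select a space-filling training subset from already-acquired simulation data: greedy maximin selection, which at each step adds the candidate point farthest from the points chosen so far. The paper's exact selection criterion may differ.

import numpy as np

def maximin_subset(X, k, seed=0):
    """Select k rows of X (n_samples x n_features) that spread out maximally."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(X)))]            # arbitrary starting point
    d = np.linalg.norm(X - X[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))                     # farthest from current subset
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(chosen)

# e.g. columns = (suspension relative displacement, velocity) from a prior run
X = np.random.default_rng(1).random((5000, 2))
train = X[maximin_subset(X, k=100)]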
A Monte Carlo risk assessment model for acrylamide formation in French fries.
Cummins, Enda; Butler, Francis; Gormley, Ronan; Brunton, Nigel
2009-10-01
The objective of this study is to estimate the likely human exposure to the Group 2A carcinogen acrylamide from French fries by Irish consumers by developing a quantitative risk assessment model using Monte Carlo simulation techniques. Various stages in the French-fry-making process were modeled, from initial potato harvest through storage and processing procedures. The model was developed in Microsoft Excel with the @Risk add-on package. The model was run for 10,000 iterations using Latin hypercube sampling. The simulated mean acrylamide level in French fries was calculated to be 317 microg/kg. It was found that females are exposed to lower levels of acrylamide than males (mean exposure of 0.20 microg/kg bw/day and 0.27 microg/kg bw/day, respectively). Although the carcinogenic potency of acrylamide is not well known, the simulated probability of exceeding the average chronic human dietary intake of 1 microg/kg bw/day (as suggested by WHO) was 0.054 and 0.029 for males and females, respectively. A sensitivity analysis highlighted the importance of the selection of appropriate cultivars with known low reducing sugar levels for French fry production. Strict control of cooking conditions (correlation coefficients of 0.42 and 0.35 for frying time and temperature, respectively) and blanching procedures (correlation coefficient -0.25) were also found to be important in ensuring minimal acrylamide formation.
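A simplified sketch of the two headline calculations in this kind of model: the probability of exceeding a dietary intake threshold, and correlation-based sensitivity of the output to each input. The distributions and toy exposure model below are illustrative placeholders, not the paper's fitted inputs.

import numpy as np
from scipy.stats import qmc, spearmanr

n = 10_000
u = qmc.LatinHypercube(d=3, seed=7).random(n)
frying_time = 2.0 + 4.0 * u[:, 0]              # minutes
frying_temp = 160 + 30 * u[:, 1]               # degrees C
reducing_sugar = 0.1 + 0.9 * u[:, 2]           # arbitrary units

# toy exposure model (microg/kg bw/day), increasing in all three inputs
exposure = 0.15 * frying_time * (frying_temp / 170) ** 4 * reducing_sugar

p_exceed = np.mean(exposure > 1.0)             # cf. the 1 microg/kg bw/day level
print(f"P(exceed) = {p_exceed:.3f}")
for name, x in [("time", frying_time), ("temp", frying_temp),
                ("sugar", reducing_sugar)]:
    rho, _ = spearmanr(x, exposure)
    print(f"sensitivity to {name}: rho = {rho:+.2f}")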
Scalability of Parallel Spatial Direct Numerical Simulations on Intel Hypercube and IBM SP1 and SP2
NASA Technical Reports Server (NTRS)
Joslin, Ronald D.; Hanebutte, Ulf R.; Zubair, Mohammad
1995-01-01
The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube and IBM SP1 and SP2 parallel computers is documented. Spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows are computed with the PSDNS code. The feasibility of using the PSDNS to perform transition studies on these computers is examined. The results indicate that the PSDNS approach can effectively be parallelized on a distributed-memory parallel machine by remapping the distributed data structure during the course of the calculation. Scalability information is provided to estimate computational costs and how they scale with the number of grid points. By increasing the number of processors, slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. This slower-than-linear speedup results because the computational cost is dominated by the FFT routine, which yields less-than-ideal speedups. By using appropriate compile options and optimized library routines on the SP1, the serial code achieves 52-56 Mflops on a single node of the SP1 (45 percent of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a "real world" simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP supercomputer. For the same simulation, 32 nodes of the SP1 and SP2 are required to reach the performance of a Cray C-90. A 32-node SP1 (SP2) configuration is 2.9 (4.6) times faster than a Cray Y/MP for this simulation, while the hypercube is roughly 2 times slower than the Y/MP for this application. KEY WORDS: Spatial direct numerical simulations; incompressible viscous flows; spectral methods; finite differences; parallel computing.
Parallel, adaptive finite element methods for conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.
1994-01-01
We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.
Uncertainty Analysis of Decomposing Polyurethane Foam
NASA Technical Reports Server (NTRS)
Hobbs, Michael L.; Romero, Vicente J.
2000-01-01
Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
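A hedged sketch of the surrogate-based route described above: fit a quadratic response surface to a small number of expensive model runs, then propagate uncertainty through the cheap surrogate with Latin hypercube sampling. The "expensive model" below is a stand-in function, not the foam decomposition code.

import numpy as np
from scipy.stats import qmc

d = 3                                      # number of uncertain inputs

def expensive_model(x):                    # placeholder for the FEM response
    return 1.0 + x @ np.array([0.3, -0.2, 0.1]) + 0.05 * x[..., 0] * x[..., 1]

X = qmc.scale(qmc.LatinHypercube(d=d, seed=0).random(30),
              np.full(d, -1.0), np.full(d, 1.0))
y = np.array([expensive_model(x) for x in X])

def features(X):
    # quadratic design matrix: 1, x_i, x_i * x_j (i <= j)
    cols = [np.ones(len(X))] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# cheap uncertainty propagation on the fitted surrogate
Xs = qmc.scale(qmc.LatinHypercube(d=d, seed=1).random(100_000),
               np.full(d, -1.0), np.full(d, 1.0))
ys = features(Xs) @ coef
print(f"mean = {ys.mean():.4f}, std = {ys.std():.4f}")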
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space, under the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow, since adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples; furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
Coach simplified structure modeling and optimization study based on the PBM method
NASA Astrophysics Data System (ADS)
Zhang, Miaoli; Ren, Jindong; Yin, Ying; Du, Jian
2016-09-01
For the coach industry, rapid modeling and efficient optimization methods are desirable for structure modeling and optimization based on simplified structures, especially early in the concept phase, with capabilities of accurately expressing the mechanical properties of the structure and with flexible section forms. However, the present dimension-based methods cannot easily meet these requirements. To achieve these goals, the property-based modeling (PBM) beam modeling method is studied based on PBM theory and in conjunction with the characteristics of coach structures, which take beams as the main components. For a beam component of given length, its mechanical characteristics are primarily affected by the section properties. Four section parameters are adopted to describe the mechanical properties of a beam: the section area, the principal moments of inertia about the two principal axes, and the torsion constant of the section. Based on the equivalent stiffness strategy, expressions for the above section parameters are derived, and the PBM beam element is implemented in HyperMesh software. A case is realized using this method, in which the structure of a passenger coach is simplified. The model precision is validated by comparing the basic performance of the total structure with that of the original structure, including the bending and torsion stiffness and the first-order bending and torsional modal frequencies. Sensitivity analysis is conducted to choose design variables. The optimal Latin hypercube experiment design is adopted to sample the test points, and polynomial response surfaces are used to fit these points. To improve the bending and torsion stiffness and the first-order torsional frequency, and taking the allowable maximum stresses of the braking and left-turning conditions as constraints, the multi-objective optimization of the structure is conducted using the NSGA-II genetic algorithm on the ISIGHT platform. The result of the Pareto solution set is acquired, and the selection strategy of the final solution is discussed. The case study demonstrates that the mechanical performance of the structure can be well modeled and simulated by the PBM beam. Because of the merits of fewer parameters and convenience of use, this method is suitable for application in the concept stage. Another merit is that the optimization results are requirements for the mechanical performance of the beam section rather than its shape and dimensions, bringing flexibility to the succeeding design.
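A minimal sketch of why those four section properties suffice for a beam of given length: the classic Euler-Bernoulli/Saint-Venant stiffness terms depend on the cross-section only through A, Iy, Iz and J. The numbers below are illustrative, and this is not the PBM element implementation in HyperMesh.

# steel-like material, 1 m beam; section values are placeholders
E, G, L = 210e9, 81e9, 1.0
A, Iy, Iz, J = 1e-3, 2e-6, 1e-6, 5e-7

k_axial = E * A / L                  # axial stretch stiffness
k_torsion = G * J / L                # Saint-Venant torsion stiffness
k_bend_y = 12 * E * Iy / L**3        # transverse bending about y
k_bend_z = 12 * E * Iz / L**3        # transverse bending about z
print(k_axial, k_torsion, k_bend_y, k_bend_z)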
A hypercube compact neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rostykus, P.L.; Somani, A.K.
1988-09-01
A major problem facing implementation of neural networks is the connection problem. One popular tradeoff is to remove connections. Random disconnection severely degrades the capabilities. The hypercube-based Compact Neural Network (CNN) has a structured architecture that, combined with a rearrangement of the memory vectors, gives a larger input space and better degradation than a cost-equivalent network with more connections. The CNNs are based on a Hopfield network. The changes from the Hopfield net include states of -1 and +1; when a node evaluates to 0, it is not biased either positive or negative but instead resumes its previous state. Here L = number of processing elements (PEs), N = number of memories, and t_ij denotes the weight between nodes i and j.
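A small sketch of the update rule described above: states are -1/+1, and a node whose weighted input sums to exactly zero keeps its previous state rather than being biased either way. The weights follow the usual Hopfield outer-product construction; the hypercube interconnection and memory-vector rearrangement are not modeled here.

import numpy as np

def store(memories):                      # memories: (N, L) array of +/-1
    W = memories.T @ memories / memories.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def update(state, W):
    h = W @ state
    return np.where(h > 0, 1, np.where(h < 0, -1, state))   # hold on zero

mems = np.array([[1, -1, 1, -1],
                 [1, 1, -1, -1]])
W = store(mems)
s = np.array([1, -1, 1, 1])               # noisy probe of the first memory
for _ in range(5):
    s = update(s, W)
print(s)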
Maximum Torque and Momentum Envelopes for Reaction Wheel Arrays
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Reynolds, Reid G.; Liu, Frank X.; Lebsock, Kenneth L.
2009-01-01
Spacecraft reaction wheel maneuvers are limited by the maximum torque and/or angular momentum that the wheels can provide. For an n-wheel configuration, the torque or momentum envelope can be obtained by projecting the n-dimensional hypercube, representing the domain boundary of individual wheel torques or momenta, into three dimensional space via the 3xn matrix of wheel axes. In this paper, the properties of the projected hypercube are discussed, and algorithms are proposed for determining this maximal torque or momentum envelope for general wheel configurations. Practical strategies for distributing a prescribed torque or momentum among the n wheels are presented, with special emphasis on configurations of four, five, and six wheels.
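A hedged sketch of the brute-force version of this envelope construction: map the 2^n vertices of the unit torque hypercube through the 3 x n matrix of wheel axes and take the convex hull of the projected points. The paper's algorithms also characterize the envelope facets analytically; the four-wheel pyramid below is only an example configuration.

import itertools
import numpy as np
from scipy.spatial import ConvexHull

# example: four-wheel pyramid; columns are unit wheel-axis vectors
b = 1 / np.sqrt(3)
W = np.array([[ b, -b, -b,  b],
              [ b,  b, -b, -b],
              [ b,  b,  b,  b]])

# project all 2^n corners of the torque hypercube into 3-D torque space
vertices = np.array(list(itertools.product([-1.0, 1.0], repeat=W.shape[1])))
points = vertices @ W.T
hull = ConvexHull(points)
print(f"envelope volume = {hull.volume:.3f}, facets = {len(hull.simplices)}")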
Barnoya, J; Glantz, S
2002-01-01
Objective: To examine the tobacco industry's strategy to avoid regulations on secondhand smoke exposure in Latin America. Methods: Systematic search of tobacco industry documents available through the internet. All available materials, including confidential reports regarding research, lobbying, and internal memoranda exchanged between the tobacco industry representatives, tobacco industry lawyers, and key players in Latin America. Results: In Latin America, Philip Morris International and British American Tobacco, working through the law firm Covington & Burling, developed a network of well placed physicians and scientists through their "Latin Project" to generate scientific arguments minimising secondhand smoke as a health hazard, produce low estimates of exposure, and to lobby against smoke-free workplaces and public places. The tobacco industry's role was not disclosed. Conclusions: The strategies used by the industry have been successful in hindering development of public health programmes on secondhand smoke. Latin American health professionals need to be aware of this industry involvement and must take steps to counter it to halt the tobacco epidemic in Latin America. PMID:12432156
Reducing Latin America’s Bumper Crop: Babies.
birth control measures. Data were gathered using a literature search which relied heavily on periodicals and materials written as a result of on-site research in Latin America by the American Universities Field Staff. The high birth rate in Latin America is caused primarily neither by the prohibitions of the Catholic Church against artificial contraceptive methods nor by cultural attitudes towards large families. High birth rates are caused primarily by poverty, illiteracy, and underdevelopment. Where socioeconomic conditions have improved in Latin America, birth rates have
Boardman, Allison; Jayawardena, Asitha; Oprescu, Florin; Cook, Thomas; Morcuende, Jose A
2011-01-01
The Ponseti method for correcting clubfoot is a safe, effective, and minimally invasive treatment that has recently been implemented in Latin America. This study evaluates the initial impact and unique barriers to the diffusion of the Ponseti method throughout this region. Structured interviews were conducted with 30 physicians practicing the Ponseti method in three socioeconomically diverse countries: Chile, Peru and Guatemala. Since learning the Ponseti method, these physicians have treated approximately 1,740 clubfoot patients, with an estimated 1,705 (98%) patients treated using the Ponseti method, and 35 (2%) patients treated using surgical techniques. The barriers were classified into the following themes: physician education, health care system of the country, culture and beliefs of patients, physical distance and transport, financial barriers for patients, and parental compliance with the method. The results yielded several common barriers throughout Latin America including lack of physician education, physical distance to the treatment centers, and financial barriers for patients. Information from this study can be used to inform, and to implement and evaluate specific strategies to improve the diffusion of the Ponseti method for treating clubfoot throughout Latin America.
PC-CUBE: A Personal Computer Based Hypercube
NASA Technical Reports Server (NTRS)
Ho, Alex; Fox, Geoffrey; Walker, David; Snyder, Scott; Chang, Douglas; Chen, Stanley; Breaden, Matt; Cole, Terry
1988-01-01
PC-CUBE is an ensemble of IBM PCs or close compatibles connected in the hypercube topology with ordinary computer cables. Communication occurs at a rate of 115.2 kbaud via the RS-232 serial links. Available for PC-CUBE are the Crystalline Operating System III (CrOS III), the Mercury Operating System, and CUBIX and PLOTIX, which are parallel I/O and graphics libraries. A CrOS performance monitor was developed to facilitate the measurement of communication and computation time of a program and their effects on performance. Also available are CXLISP, a parallel version of the XLISP interpreter; GRAFIX, some graphics routines for the EGA and CGA; and a general execution profiler for determining execution time spent by program subroutines. PC-CUBE provides a programming environment similar to all hypercube systems running CrOS III, Mercury and CUBIX. In addition, every node (personal computer) has its own graphics display monitor and storage devices. These allow data to be displayed or stored at every processor, which has much instructional value and enables easier debugging of applications. Some application programs, taken from the book Solving Problems on Concurrent Processors (Fox 88), were implemented with graphics enhancement on PC-CUBE. The applications range from the Mandelbrot set, the Laplace equation, the wave equation, and long-range force interactions to WaTor, an ecological simulation.
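A small sketch of the hypercube topology such systems are cabled in: in a d-dimensional hypercube, node i is connected to the d nodes whose binary labels differ from i in exactly one bit.

def neighbors(i, d):
    # flipping each of the d bits gives the d directly cabled nodes
    return [i ^ (1 << k) for k in range(d)]

d = 3                                    # an 8-node cube
for i in range(2 ** d):
    links = [f"{j:0{d}b}" for j in neighbors(i, d)]
    print(f"node {i:0{d}b} <-> {links}")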
NASA Astrophysics Data System (ADS)
Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.
2018-01-01
Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere common place, run times for large complex basin models can still be on the order of days to weeks, thus, limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube simulation (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos for uncertainty quantification, which in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics as well as local and global sensitivity measures is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems. It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling based methods when the number of uncertain model parameters is modest ( ≤ 20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
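A hedged, minimal non-intrusive polynomial chaos sketch for independent uniform inputs on [-1, 1]: fit Legendre-chaos coefficients by least squares on sampled model runs, then read the mean and variance directly off the coefficients. The "model" is a stand-in, not the HydroGeoSphere simulations used in the paper.

import itertools
import numpy as np
from numpy.polynomial import legendre
from scipy.stats import qmc

def model(X):                                   # placeholder model response
    return np.exp(0.3 * X[..., 0]) * (1 + 0.5 * X[..., 1])

d, order = 2, 4
# multi-indices with total degree <= order
mi = [a for a in itertools.product(range(order + 1), repeat=d) if sum(a) <= order]

def basis(X):
    # tensorized Legendre polynomials, one column per multi-index
    cols = []
    for a in mi:
        col = np.ones(len(X))
        for j, deg in enumerate(a):
            c = np.zeros(deg + 1); c[deg] = 1.0
            col *= legendre.legval(X[:, j], c)
        cols.append(col)
    return np.column_stack(cols)

X = qmc.scale(qmc.LatinHypercube(d=d, seed=0).random(200),
              -np.ones(d), np.ones(d))
coef, *_ = np.linalg.lstsq(basis(X), model(X), rcond=None)

# on U(-1,1), E[P_k^2] = 1/(2k+1); the mean is the constant coefficient
norms = np.array([np.prod([1.0 / (2 * k + 1) for k in a]) for a in mi])
mean = coef[0]
var = np.sum(coef[1:] ** 2 * norms[1:])
print(f"PCE mean = {mean:.4f}, variance = {var:.4f}")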
Growing a hypercubical output space in a self-organizing feature map.
Bauer, H U; Villmann, T
1997-01-01
Neural maps project data from an input space onto a neuron position in a (often lower dimensional) output space grid in a neighborhood-preserving way, with neighboring neurons in the output space responding to neighboring data points in the input space. A map-learning algorithm can achieve an optimal neighborhood preservation only if the output space topology roughly matches the effective structure of the data in the input space. We here present a growth algorithm, called the GSOM or growing self-organizing map, which enhances a widespread map self-organization process, Kohonen's self-organizing feature map (SOFM), by an adaptation of the output space grid during learning. The GSOM restricts the output space structure to a general hypercubical shape, with the overall dimensionality of the grid and its extensions along the different directions being subject to adaptation. This constraint meets the demands of many larger information processing systems, of which the neural map can be a part. We apply our GSOM algorithm to three examples, two of which involve real-world data. Using recently developed methods for measuring the degree of neighborhood preservation in neural maps, we find the GSOM algorithm to produce maps which preserve neighborhoods in a nearly optimal fashion.
ERIC Educational Resources Information Center
Maylath, Bruce
1996-01-01
Uses 90 postsecondary writing instructors and their rankings of 9 student essays to determine whether they have biases for a Greco-Latinate or Anglo-Saxon vocabulary. Raises questions about assessment practices, teaching methods, and possible effects on students who are exposed to teachers who variously favor a Greco-Latinate or an Anglo-Saxon…
Stochastic species abundance models involving special copulas
NASA Astrophysics Data System (ADS)
Huillet, Thierry E.
2018-01-01
Copulas offer a very general tool to describe the dependence structure of random variables supported by the hypercube. Inspired by problems of species abundances in Biology, we study three distinct toy models where copulas play a key role. In the first, a Marshall-Olkin copula arises in a species extinction model with catastrophe. In the second, a quasi-copula problem arises in a flagged species abundance model. In the third model, we study completely random species abundance models in the hypercube, namely those that are not of product type, have uniform margins, and are singular. These can be understood from a singular copula supported by an inflated simplex. An exchangeable singular Dirichlet copula is also introduced, together with its induced completely random species abundance vector.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nash, T.; Atac, R.; Cook, A.
1989-03-06
The ACPMAPS multiprocessor is a highly cost-effective, local-memory parallel computer with a hypercube or compound-hypercube architecture. Communication requires the attention of only the two communicating nodes. The design is aimed at floating-point-intensive, grid-like problems, particularly those with extreme computing requirements. The processing nodes of the system are single-board array processors, each with a peak power of 20 Mflops, supported by 8 Mbytes of data and 2 Mbytes of instruction memory. The system currently being assembled has a peak power of 5 Gflops. The nodes are based on the Weitek XL chip set. The system delivers performance at approximately $300/Mflop. 8 refs., 4 figs.
A Heuristic Method of Optimal Generalized Hypercube Encoding for Pictorial Databases.
1980-01-15
Vindrola-Padros, Cecilia; Mertnoff, Rosa; Lasmarias, Cristina; Gómez-Batiste, Xavier
2018-02-01
The integration of palliative care (PC) education into medical and nursing curricula has been identified as an international priority. PC education has undergone significant development in Latin America, but gaps in the integration of PC courses into undergraduate and postgraduate curricula remain. The aim of our review was to systematically examine the delivery of PC education in Latin America in order to explore the content and method of delivery of current PC programs, identify gaps in the availability of education opportunities, and document common barriers encountered in the course of their implementation. We carried out a systematic review of peer-reviewed academic articles and grey literature. Peer-reviewed articles were obtained from the following databases: CINAHL Plus, Embase, the Web of Science, and Medline. Grey literature was obtained from the following directories: the International Association for Hospice and Palliative Care's Global Directory of Education in Palliative Care, the Worldwide Hospice Palliative Care Alliance's lists of palliative care resources, the Latin American Association for Palliative Care's training resources, and the Latin American Atlas of Palliative Care. The inclusion criteria were that the work: (1) focused on describing PC courses; (2) was aimed at healthcare professionals; and (3) was implemented in Latin America. The PRISMA checklist was employed to guide the reporting of methods and findings. We found 36 programs that were delivered in 8 countries. Most of the programs were composed of interdisciplinary teams, taught at a postgraduate level, focused on pain and symptom management, and utilized classroom-based methods. The tools for evaluating the courses were rarely reported. The main barriers during implementation included: a lack of recognition of the importance of PC education, a lack of funding, and the unavailability of trained teaching staff. Considerable work needs to be done to improve the delivery of PC education programs in Latin American countries. Practice-based methods and exposure to clinical settings should be integrated into ongoing courses to facilitate learning. A regional platform needs to be created to share experiences of successful training programs and foster the development of PC education throughout Latin America.
Colombo, Arnaldo Lopes; Cortes, Jorge Alberto; Zurita, Jeannete; Guzman-Blanco, Manuel; Alvarado Matute, Tito; de Queiroz Telles, Flavio; Santolaya, María E; Tiraboschi, Iris Nora; Echevarría, Juan; Sifuentes, Jose; Thompson-Moya, Luis; Nucci, Marcio
2013-01-01
Candidemia is one of the most frequent opportunistic mycoses worldwide. Limited epidemiological studies in Latin America indicate that incidence rates are higher in this region than in the Northern Hemisphere. Diagnosis is often made late in the infection, affecting the initiation of antifungal therapy. A more scientific approach, based on specific parameters, for diagnosis and management of candidemia in Latin America is warranted. 'Recommendations for the diagnosis and management of candidemia' are a series of manuscripts that have been developed by members of the Latin America Invasive Mycosis Network. They aim to provide a set of best-evidence recommendations for the diagnosis and management of candidemia. This publication, 'Recommendations for the diagnosis of candidemia in Latin America', was written to provide guidance to healthcare professionals on the diagnosis of candidemia, as well as on the usefulness and application of susceptibility testing in patients who have a confirmed diagnosis of candidemia. Computerized searches of existing literature were performed by PubMed. The data were extensively reviewed and analyzed by members of the group. The group also met on two occasions to pose questions, discuss conflicting views, and deliberate on a series of management recommendations. 'Recommendations for the diagnosis of candidemia in Latin America' includes diagnostic methods used to detect candidemia, Candida species identification, and susceptibility testing. The availability of methods, their costs and treatment settings are considered. This manuscript is the first of this series that deals with diagnosis and treatment of invasive candidiasis. Other publications in this series include: 'Recommendations for the management of candidemia in adults in Latin America', 'Recommendations for the management of candidemia in children in Latin America', and 'Recommendations for the management of candidemia in neonates in Latin America'. Copyright © 2013 Revista Iberoamericana de Micología. Published by Elsevier Espana. All rights reserved.
Polyhedra and Higher Dimensions.
ERIC Educational Resources Information Center
Scott, Paul
1988-01-01
Describes the definition and characteristics of a regular polyhedron, tessellation, and pseudopolyhedra with diagrams. Discusses the nature of simplex, hypercube, and cross-polytope in the fourth dimension and beyond. (YP)
Retrospective and current risks of mercury to panthers in the Florida Everglades.
Barron, Mace G; Duvall, Stephanie E; Barron, Kyle J
2004-04-01
Florida panthers are an endangered species inhabiting south Florida. Hg has been suggested as a causative factor for low populations and some reported panther deaths, but a quantitative assessment of risks has never been performed. This study quantitatively evaluated retrospective (pre-1992) and current (2002) risks of chronic dietary Hg exposures to panthers in the Florida Everglades. A probabilistic assessment of Hg risks was performed using a dietary exposure model and Latin hypercube sampling that incorporated the variability and uncertainty in ingestion rate, diet, body weight, and mercury exposure of panthers. Hazard quotients (HQs) for retrospective risks ranged from less than 0.1 to 20, with a 46% probability of exceeding chronic dietary thresholds for methylmercury. Retrospective risks of developing clinical symptoms, including ataxia and convulsions, had an HQ range of <0.1-5.4 with a 17% probability of exceeding an HQ of 1. Current risks were substantially lower (4% probability of exceedances; HQ range <0.1-3.5) because of an estimated 70-90% decline in Hg exposure to panthers over the last decade. Under worst-case conditions of panthers consuming only raccoons from the most contaminated area of the Everglades, the current risk of developing clinical symptoms that may lead to death was 4.6%. The current risk of mercury poisoning for panthers with a diversified diet was 0.1% (HQ range of <0.1-1.4). The results of this assessment indicate that past Hg exposures likely adversely affected panthers in the Everglades, but current risks of Hg are low.
Critical Concentration Ratio for Solar Thermoelectric Generators
NASA Astrophysics Data System (ADS)
ur Rehman, Naveed; Siddiqui, Mubashir Ali
2016-10-01
A correlation for determining the critical concentration ratio (CCR) of solar concentrated thermoelectric generators (SCTEGs) has been established, and the significance of the contributing parameters is discussed in detail. For any SCTEG, higher concentration ratio leads to higher temperatures at the hot side of modules. However, the maximum value of this temperature for safe operation is limited by the material properties of the modules and should be considered as an important design constraint. Taking into account this limitation, the CCR can be defined as the maximum concentration ratio usable for a particular SCTEG. The established correlation is based on factors associated with the material and geometric properties of modules, thermal characteristics of the receiver, installation site attributes, and thermal and electrical operating conditions. To reduce the number of terms in the correlation, these factors are combined to form dimensionless groups by applying the Buckingham Pi theorem. A correlation model containing these groups is proposed and fit to a dataset obtained by simulating a thermodynamic (physical) model over sampled values acquired by applying the Latin hypercube sampling (LHS) technique over a realistic distribution of factors. The coefficient of determination and relative error are found to be 97% and ±20%, respectively. The correlation is validated by comparing the predicted results with literature values. In addition, the significance and effects of the Pi groups on the CCR are evaluated and thoroughly discussed. This study will lead to a wide range of opportunities regarding design and optimization of SCTEGs.
'spup' - an R package for uncertainty propagation in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2016-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for a universal applicability, including case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes is also accommodated for. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. Selected static and interactive visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
'spup' - an R package for uncertainty propagation analysis in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2017-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for a universal applicability and being able to deal with case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes is also accommodated for. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. Selected visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
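'spup' itself is an R package; as a language-agnostic illustration, the core Monte Carlo propagation loop it automates looks roughly like the Python sketch below: draw realizations of the uncertain inputs (here via Latin hypercube sampling), push each through the model, and summarize the spread of the outputs. The input distributions and model are illustrative placeholders.

import numpy as np
from scipy.stats import qmc, norm

def environmental_model(rain, k):      # placeholder for the external model
    return k * np.sqrt(np.maximum(rain, 0.0))

u = qmc.LatinHypercube(d=2, seed=3).random(2000)
rain = norm(800, 150).ppf(u[:, 0])     # uncertain input, mm/yr
k = norm(0.4, 0.05).ppf(u[:, 1])       # uncertain model parameter

runoff = environmental_model(rain, k)
lo, med, hi = np.percentile(runoff, [2.5, 50, 97.5])
print(f"runoff median = {med:.1f}, 95% interval = [{lo:.1f}, {hi:.1f}]")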
Carabin, Hélène; Balsera-Rodríguez, Francisco J.; Rebollar-Sáenz, José; Benner, Christine T.; Benito, Aitziber; Fernández-Crespo, Juan C.; Carmena, David
2014-01-01
Cystic echinococcosis (CE) is endemic in Spain but has been considered non-endemic in the province of Álava, Northern Spain, since 1997. However, Álava is surrounded by autonomous regions with some of the highest CE prevalence proportions in the nation, casting doubts about the current classification. The purpose of this study is to estimate the frequency of CE in humans and animals and to use this data to determine the societal cost incurred due to CE in the Álava population in 2005. We have identified epidemiological and clinical data from surveillance and hospital records, prevalence data in intermediate (sheep and cattle) host species from abattoir records, and economical data from national and regional official institutions. Direct costs (diagnosis, treatment, medical care in humans and condemnation of offal in livestock species) and indirect costs (productivity losses in humans and reduction in growth, fecundity and milk production in livestock) were modelled using the Latin hypercube method under five different scenarios reflecting different assumptions regarding the prevalence of asymptomatic cases and associated productivity losses in humans. A total of 13 human CE cases were reported in 2005. The median total cost (95% credible interval) of CE in humans and animals in Álava in 2005 was estimated to range between €61,864 (95%CI%: €47,304–€76,590) and €360,466 (95%CI: €76,424–€752,469), with human-associated losses ranging from 57% to 93% of the total losses, depending on the scenario used. Our data provide evidence that CE is still very well present in Álava and incurs important cost to the province every year. We expect this information to prove valuable for public health agencies and policy-makers, as it seems advisable to reinstate appropriate surveillance and monitoring systems and to implement effective control measures that avoid the spread and recrudescence of the disease. PMID:25102173
NASA Astrophysics Data System (ADS)
Ha, Taesung
A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and the parameters of that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability, and its potential usefulness for quantifying model uncertainty through sensitivity analysis in the PRA model.
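A minimal sketch of the "two competing random variables" HEP estimate: the human error probability is the probability that the operators' performance time exceeds the phenomenological time available. The distributions below are illustrative placeholders for the fitted ones in the study.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
t_phenom = rng.lognormal(mean=np.log(30), sigma=0.4, size=n)   # minutes available
t_perform = rng.lognormal(mean=np.log(12), sigma=0.6, size=n)  # minutes needed

hep = np.mean(t_perform > t_phenom)
print(f"HEP estimate = {hep:.4f}")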
Probabilistic seismic history matching using binary images
NASA Astrophysics Data System (ADS)
Davolio, Alessandra; Schiozer, Denis Jose
2018-02-01
Currently, the goal of history-matching procedures is not only to provide a model matching any observed data but also to generate multiple matched models to properly handle uncertainties. One such approach is a probabilistic history-matching methodology based on the discrete Latin Hypercube sampling algorithm, proposed in previous works, which was particularly efficient for matching well data (production rates and pressure). 4D seismic (4DS) data have been increasingly included into history-matching procedures. A key issue in seismic history matching (SHM) is to transfer data into a common domain: impedance, amplitude or pressure, and saturation. In any case, seismic inversions and/or modeling are required, which can be time consuming. An alternative to avoid these procedures is using binary images in SHM as they allow the shape, rather than the physical values, of observed anomalies to be matched. This work presents the incorporation of binary images in SHM within the aforementioned probabilistic history matching. The application was performed with real data from a segment of the Norne benchmark case that presents strong 4D anomalies, including softening signals due to pressure build up. The binary images are used to match the pressurized zones observed in time-lapse data. Three history matchings were conducted using: only well data, well and 4DS data, and only 4DS. The methodology is very flexible and successfully utilized the addition of binary images for seismic objective functions. Results proved the good convergence of the method in few iterations for all three cases. The matched models of the first two cases provided the best results, with similar well matching quality. The second case provided models presenting pore pressure changes according to the expected dynamic behavior (pressurized zones) observed on 4DS data. The use of binary images in SHM is relatively new with few examples in the literature. This work enriches this discussion by presenting a new application to match pressure in a reservoir segment with complex pressure behavior.
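A hedged sketch of a binary-image objective of the kind described above: binarize the observed and simulated anomaly maps and score their overlap, so the shape of the pressurized zone is matched rather than its physical amplitude. A Jaccard distance is used here as one plausible mismatch measure; the paper's exact objective function may differ.

import numpy as np

def jaccard_mismatch(obs, sim, threshold=0.0):
    a = obs > threshold                      # observed anomaly mask
    b = sim > threshold                      # simulated anomaly mask
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 1.0 - inter / union if union else 0.0

obs = np.random.default_rng(1).normal(size=(50, 50))     # stand-in 4D map
sim = obs + 0.3 * np.random.default_rng(2).normal(size=(50, 50))
print(f"objective = {jaccard_mismatch(obs, sim):.3f}")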
NASA Astrophysics Data System (ADS)
Leta, O. T.; El-Kadi, A. I.; Dulaiova, H.
2016-12-01
Extreme events, such as flooding and drought, are expected to occur at increased frequencies worldwide due to climate change influencing the water cycle. This is particularly critical for tropical islands, where the local freshwater resources are very sensitive to climate. This study examined the impact of climate change on extreme streamflow, reservoir water volume, and outflow for the Nuuanu watershed, using the Soil and Water Assessment Tool (SWAT) model. Based on the sensitive parameters screened by the Latin Hypercube-One-factor-At-a-Time (LH-OAT) method, SWAT was calibrated and validated against daily streamflow using the SWAT Calibration and Uncertainty Program (SWAT-CUP) at three streamflow gauging stations. Results showed that SWAT adequately reproduced the observed daily streamflow hydrographs at all stations. This was verified with Nash-Sutcliffe Efficiency values in the acceptable range of 0.58 to 0.88, whereby more than 90% of observations were bracketed within the 95% model prediction uncertainty interval for both calibration and validation periods, signifying the potential applicability of SWAT for future prediction. The climate change impact on extreme flows, reservoir water volume, and outflow was assessed under the Representative Concentration Pathway (RCP) 4.5 and 8.5 scenarios. We found wide changes in extreme peak and low flows ranging from -44% to 20% and -50% to -2%, respectively, compared to baseline. Consequently, the amount of water stored in Nuuanu reservoir would decrease by up to 27%, while the corresponding outflow rates are expected to decrease by up to 37% relative to the baseline. In addition, the stored water and extreme flows are highly sensitive to rainfall change when compared to temperature and solar radiation changes. It is concluded that the changes in extreme low and peak flows can have serious consequences, such as flooding and drought, with detrimental effects on riparian ecological functioning. This study's results are expected to aid in reservoir operation as well as in identifying appropriate climate change adaptation strategies.
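A simplified sketch of the LH-OAT screening idea: start from each Latin hypercube base point and perturb one factor at a time, accumulating a sensitivity measure per parameter. This is a plain finite-difference variant; details such as the partial-effect normalization follow the published LH-OAT method, and the model below is a stand-in.

import numpy as np
from scipy.stats import qmc

def model(x):                                # placeholder hydrologic response
    return 2 * x[0] + x[1] ** 2 + 0.1 * x[2]

d, n_base, frac = 3, 20, 0.05
base = qmc.LatinHypercube(d=d, seed=0).random(n_base)
effects = np.zeros(d)
for x in base:
    y0 = model(x)
    for j in range(d):
        xp = x.copy()
        xp[j] *= (1 + frac)                  # one-at-a-time relative perturbation
        effects[j] += abs(model(xp) - y0) / (frac * max(x[j], 1e-12))
effects /= n_base
print({f"p{j}": round(e, 3) for j, e in enumerate(effects)})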
Nishida, Yoshifumi; Kobayashi, Hiromi; Nishida, Hideo; Sugimura, Kazuyuki
2013-05-01
The effect of the design parameters of a return channel on the performance of a multistage centrifugal compressor was numerically investigated, and the shape of the return channel was optimized using a multiobjective optimization method based on a genetic algorithm to improve the performance of the centrifugal compressor. The results of sensitivity analysis using Latin hypercube sampling suggested that the inlet-to-outlet area ratio of the return vane affected the total pressure loss in the return channel, and that the inlet-to-outlet radius ratio of the return vane affected the outlet flow angle from the return vane. Moreover, this analysis suggested that the number of return vanes affected both the loss and the flow angle at the outlet. As a result of optimization, the number of return vanes was increased from 14 to 22, and the area ratio was decreased from 0.71 to 0.66. The radius ratio was also decreased from 2.1 to 2.0. Performance tests on a centrifugal compressor with two return channels (the original design and optimized design) were carried out using a two-stage test apparatus. The measured flow distribution exhibited a swirl flow in the center region and a reversed swirl flow near the hub and shroud sides. The exit flow of the optimized design was more uniform than that of the original design. For the optimized design, the overall two-stage efficiency and pressure coefficient were increased by 0.7% and 1.5%, respectively. Moreover, the second-stage efficiency and pressure coefficient were respectively increased by 1.0% and 3.2%. It is considered that the increase in the second-stage efficiency was caused by the increased uniformity of the flow, and the rise in the pressure coefficient was caused by a decrease in the residual swirl flow. It was thus concluded from the numerical and experimental results that the optimized return channel improved the performance of the multistage centrifugal compressor.
Davidson, Ross S; McKendrick, Iain J; Wood, Joanna C; Marion, Glenn; Greig, Alistair; Stevenson, Karen; Sharp, Michael; Hutchings, Michael R
2012-09-10
A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. The technique developed has been shown to be applicable to a complex model incorporating realistic control options. For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
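A hedged sketch of the sample-reweighting idea: draw parameter sets by Latin hypercube sampling, run the model, and weight each parameter set by how well its simulated prevalence matches the observed prevalence. A Gaussian kernel stands in here for the paper's weighting scheme, and the two-parameter "herd model" is a placeholder; weighted outputs then reflect both prior uncertainty and the prevalence data.

import numpy as np
from scipy.stats import qmc

def simulate_prevalence(beta, recovery):     # placeholder herd model
    return np.clip(beta / (beta + recovery), 0, 1)

u = qmc.LatinHypercube(d=2, seed=5).random(1000)
beta = 0.01 + 0.5 * u[:, 0]                  # transmission rate
rec = 0.05 + 0.5 * u[:, 1]                   # removal rate

prev = simulate_prevalence(beta, rec)
obs, sd = 0.2, 0.05                          # observed prevalence and tolerance
w = np.exp(-0.5 * ((prev - obs) / sd) ** 2)  # weight by fit to prevalence data
w /= w.sum()

# reweighted prediction for a control scenario, e.g. halved transmission
controlled = simulate_prevalence(0.5 * beta, rec)
print(f"weighted mean controlled prevalence = {np.sum(w * controlled):.3f}")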
A probabilistic assessment of calcium carbonate export and dissolution in the modern ocean
NASA Astrophysics Data System (ADS)
Battaglia, G.; Steinacher, M.; Joos, F.
2015-12-01
The marine cycle of calcium carbonate (CaCO3) is an important element of the carbon cycle and co-governs the distribution of carbon and alkalinity within the ocean. However, CaCO3 fluxes and mechanisms governing CaCO3 dissolution are highly uncertain. We present an observationally-constrained, probabilistic assessment of the global and regional CaCO3 budgets. Parameters governing pelagic CaCO3 export fluxes and dissolution rates are sampled using a Latin-Hypercube scheme to construct a 1000 member ensemble with the Bern3D ocean model. Ensemble results are constrained by comparing simulated and observation-based fields of excess dissolved calcium carbonate (TA*). The minerals calcite and aragonite are modelled explicitly and ocean-sediment fluxes are considered. For local dissolution rates either a strong, a weak or no dependency on CaCO3 saturation is assumed. Median (68 % confidence interval) global CaCO3 export is 0.82 (0.67-0.98) Gt PIC yr-1, within the lower half of previously published estimates (0.4-1.8 Gt PIC yr-1). The spatial pattern of CaCO3 export is broadly consistent with earlier assessments. Export is large in the Southern Ocean, the tropical Indo-Pacific, the northern Pacific and relatively small in the Atlantic. Dissolution within the 200 to 1500 m depth range (0.33; 0.26-0.40 Gt PIC yr-1) is substantially lower than inferred from the TA*-CFC age method (1 ± 0.5 Gt PIC yr-1). The latter estimate is likely biased high as the TA*-CFC method neglects transport. The constrained results are robust across a range of diapycnal mixing coefficients and, thus, ocean circulation strengths. Modelled ocean circulation and transport time scales for the different setups were further evaluated with CFC11 and radiocarbon observations. Parameters and mechanisms governing dissolution are hardly constrained by either the TA* data or the current compilation of CaCO3 flux measurements such that model realisations with and without saturation-dependent dissolution achieve skill. We suggest to apply saturation-independent dissolution rates in Earth System Models to minimise computational costs.
NASA Technical Reports Server (NTRS)
Myers, J. G.; Feola, A.; Werner, C.; Nelson, E. S.; Raykin, J.; Samuels, B.; Ethier, C. R.
2016-01-01
The earliest manifestations of Visual Impairment and Intracranial Pressure (VIIP) syndrome become evident after months of spaceflight and include a variety of ophthalmic changes, including posterior globe flattening and distension of the optic nerve sheath. Prevailing evidence links the occurrence of VIIP to the cephalic fluid shift induced by microgravity and the subsequent pressure changes around the optic nerve and eye. Deducing the etiology of VIIP is challenging due to the wide range of physiological parameters that may be influenced by spaceflight and that must be considered to capture a realistic spectrum of physiological responses. Here, we report on the application of an efficient approach to interrogating physiological parameter space through computational modeling. Specifically, we assess the influence of uncertainty in input parameters for two models of VIIP syndrome: a lumped-parameter model (LPM) of the cardiovascular and central nervous systems, and a finite-element model (FEM) of the posterior eye, optic nerve head (ONH) and optic nerve sheath. Methods: To investigate the parameter space in each model, we employed Latin hypercube sampling partial rank correlation coefficient (LHSPRCC) strategies. LHS techniques outperform Monte Carlo approaches by enforcing efficient sampling across the entire range of all parameters. The PRCC method estimates the sensitivity of model outputs to these parameters while adjusting for the linear effects of all other inputs. The LPM analysis addressed uncertainties in 42 physiological parameters, such as initial compartmental volume and nominal compartment percentage of total cardiac output in the supine state, while the FEM evaluated the effects on biomechanical strain from uncertainties in 23 material and pressure parameters for the ocular anatomy. Results and Conclusion: The LPM analysis identified several key factors, including high sensitivity to the initial fluid distribution. The FEM study found that intraocular pressure and intracranial pressure had the dominant impact on the peak strains in the ONH and retro-laminar optic nerve, respectively; optic nerve and lamina cribrosa stiffness were also important. This investigation illustrates the ability of LHSPRCC to identify the most influential physiological parameters, which must therefore be well characterized to produce the most accurate numerical results.
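A minimal sketch of the generic LHS-PRCC recipe named above (not the authors' VIIP models): draw an LHS design, evaluate a toy response, rank-transform, remove the linear effect of all other ranked inputs from both sides, and correlate the residuals. The test function and sample sizes are arbitrary.

```python
import numpy as np
from scipy.stats import qmc, rankdata

def prcc(X, y):
    # Partial rank correlation of each input column with y.
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    coeffs = []
    for j in range(R.shape[1]):
        # Regress out all other ranked inputs (plus an intercept).
        A = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
        res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
        res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        coeffs.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(coeffs)

X = qmc.LatinHypercube(d=3, seed=0).random(200)      # LHS design in [0,1]^3
y = 5 * X[:, 0] + X[:, 1] ** 2 + 0.1 * np.random.default_rng(0).normal(size=200)
print(prcc(X, y))        # the first input should dominate
```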
Domain decomposition methods in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Gropp, William D.; Keyes, David E.
1991-01-01
The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamics applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
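As a toy illustration of the iterative domain-decomposition paradigm (far simpler than the paper's Newton/backstep setting), the sketch below runs classical alternating Schwarz on the 1-D Poisson problem with two overlapping subdomains; all grid sizes and overlaps are arbitrary choices.

```python
import numpy as np

n = 101                          # global grid on [0, 1], h = 1/100
h = 1.0 / (n - 1)
f = np.ones(n)                   # right-hand side of -u'' = 1, u(0)=u(1)=0
u = np.zeros(n)

def solve_subdomain(rhs, ua, ub):
    # Direct solve of the 3-point scheme for -u'' = rhs with Dirichlet
    # values ua, ub at the two ends of the subdomain.
    m = len(rhs)
    A = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    b = h * h * rhs
    b[0] += ua
    b[-1] += ub
    return np.linalg.solve(A, b)

# Two overlapping subdomains: interior points 1..59 and 41..99; each sweep
# exchanges the latest values on the overlap boundaries.
for sweep in range(30):
    u[1:60] = solve_subdomain(f[1:60], u[0], u[60])
    u[41:100] = solve_subdomain(f[41:100], u[40], u[100])

x = np.linspace(0.0, 1.0, n)
print("max error:", np.max(np.abs(u - 0.5 * x * (1 - x))))  # exact: x(1-x)/2
```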
Blastocyst classification systems used in Latin America: is a consensus possible?
Puga-Torres, Tatiana; Blum-Rojas, Xavier; Blum-Narváez, Medardo
2017-01-01
Objective To identify the different blastocyst classification systems used by embryologists in Latin American countries and to evaluate the possibility of establishing a consensus among these countries. Methods An e-mail survey was carried out through the Latin American Network of Assisted Reproduction (REDLARA), aimed at embryologists from assisted reproduction centers in Latin American countries. Results Sixty surveys were collected from 12 Latin American countries; 66.7% of respondents had >10 years of professional practice as embryologists. Seven different blastocyst classification systems were reported, of which 5 have previously been described in the literature. Conclusion Although the embryologists surveyed use different blastocyst classification systems, most consider that the embryo scoring system should be unified in their countries as well as in the region. PMID:28837032
Possible treatments for arsenic removal in Latin American waters for human consumption.
Litter, Marta I; Morgada, Maria E; Bundschuh, Jochen
2010-05-01
Considering the toxic effects of arsenic, the World Health Organization recommends a maximum concentration of 10 microg L(-1) of arsenic in drinking water. Latin American populations present severe health problems due to the consumption of waters with high arsenic contents. The physicochemical properties of the surface waters and groundwaters are different from those of other, more studied regions of the planet, and the problem is still largely unknown to the public. Methods for arsenic removal suitable for application to Latin American waters are summarized and discussed here. Conventional technologies (oxidation, coagulation-coprecipitation, adsorption, reverse osmosis, use of ion exchangers) are described, but emphasis is placed on emerging, decentralized, economical methods, such as the use of inexpensive natural adsorbents, solar-light technologies, or biological treatments, as these are essential to palliate the situation in poor, isolated, and dispersed populations of Latin American regions. Copyright 2010 Elsevier Ltd. All rights reserved.
[Recommendations for the diagnosis of candidemia in Latin America. Grupo Proyecto Épico].
Colombo, Arnaldo Lopes; Cortes, Jorge Alberto; Zurita, Jeannete; Guzman-Blanco, Manuel; Alvarado Matute, Tito; de Queiroz Telles, Flavio; Santolaya, María E; Tiraboschi, Iris Nora; Echevarría, Juan; Sifuentes, Jose; Thompson-Moya, Luis; Nucci, Marcio
2013-01-01
Candidemia is one of the most frequent opportunistic mycoses worldwide. Limited epidemiological studies in Latin America indicate that incidence rates are higher in this region than in the Northern Hemisphere. Diagnosis is often made late in the infection, affecting the initiation of antifungal therapy. A more scientific approach, based on specific parameters, for diagnosis and management of candidemia in Latin America is warranted. 'Recommendations for the diagnosis and management of candidemia' are a series of manuscripts that have been developed by members of the Latin America Invasive Mycosis Network. They aim to provide a set of best-evidence recommendations for the diagnosis and management of candidemia. This publication, 'Recommendations for the diagnosis of candidemia in Latin America', was written to provide guidance to healthcare professionals on the diagnosis of candidemia, as well as on the usefulness and application of susceptibility testing in patients who have a confirmed diagnosis of candidemia. Computerized searches of the existing literature were performed using PubMed. The data were extensively reviewed and analyzed by members of the group. The group also met on two occasions to pose questions, discuss conflicting views, and deliberate on a series of management recommendations. 'Recommendations for the diagnosis of candidemia in Latin America' includes diagnostic methods used to detect candidemia, Candida species identification, and susceptibility testing. The availability of methods, their costs, and treatment settings are considered. This manuscript is the first of the series dealing with the diagnosis and treatment of invasive candidiasis. Other publications in this series include: 'Recommendations for the management of candidemia in adults in Latin America', 'Recommendations for the management of candidemia in children in Latin America', and 'Recommendations for the management of candidemia in neonates in Latin America'. Copyright © 2013 Revista Iberoamericana de Micología. Published by Elsevier España. All rights reserved.
SPOTting model parameters using a ready-made Python package
NASA Astrophysics Data System (ADS)
Houska, Tobias; Kraft, Philipp; Breuer, Lutz
2015-04-01
The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, such as the algorithm used, the likelihood function selected, and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method, and further enables testing and comparing different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood Estimation, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for analyzing a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for optimization methods. Here we see simple algorithms like MCMC struggling to find the global optimum of the function, while algorithms like SCE-UA and DE-MCZ show their strengths. Thirdly, we apply an uncertainty analysis to a one-dimensional, physically based hydrological model built with the Catchment Modelling Framework (CMF). The model is driven by meteorological and groundwater data from a Free Air Carbon Enrichment (FACE) experiment in Linden (Hesse, Germany). Simulation results are evaluated against measured soil moisture data. We search for optimal parameter sets of the van Genuchten-Mualem function and find different, equally optimal solutions with some of the algorithms. The case studies reveal that the implemented SPOT methods work sufficiently well. They further show the benefit of having one tool at hand that includes a number of parameter search methods, likelihood functions, and a priori parameter distributions within one platform-independent package.
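The snippet below is not the SPOT API itself; it is a minimal sketch of the kind of sampler comparison the package automates, drawing plain Monte Carlo and LHS candidate sets for the Rosenbrock function mentioned in the first case study. Bounds and sample counts are arbitrary.

```python
import numpy as np
from scipy.stats import qmc

def rosenbrock(x, y):
    # Classic banana-shaped test function; global minimum 0 at (1, 1).
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

rng = np.random.default_rng(42)
n = 2000
lo, hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])

mc = lo + (hi - lo) * rng.random((n, 2))                       # plain Monte Carlo
lhs = qmc.scale(qmc.LatinHypercube(d=2, seed=42).random(n), lo, hi)

for name, pts in [("MC ", mc), ("LHS", lhs)]:
    print(name, "best objective found:", rosenbrock(pts[:, 0], pts[:, 1]).min())
```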
Genetic code, hamming distance and stochastic matrices.
He, Matthew X; Petoukhov, Sergei V; Ricci, Paolo E
2004-09-01
In this paper we use the Gray code representation of the genetic code C=00, U=10, G=11 and A=01 (C pairs with G, A pairs with U) to generate a sequence of genetic code-based matrices. In connection with these code-based matrices, we use the Hamming distance to generate a sequence of numerical matrices. We then further investigate the properties of the numerical matrices and show that they are doubly stochastic and symmetric. We determine the frequency distributions of the Hamming distances, building blocks of the matrices, decomposition and iterations of matrices. We present an explicit decomposition formula for the genetic code-based matrix in terms of permutation matrices, which provides a hypercube representation of the genetic code. It is also observed that there is a Hamiltonian cycle in a genetic code-based hypercube.
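The core construction is easy to reproduce. A minimal sketch, using the stated Gray code and plain NumPy, builds the 64 x 64 codon Hamming-distance matrix and checks the constant row and column sums that make its normalization doubly stochastic and symmetric.

```python
import itertools
import numpy as np

# Gray code representation from the abstract: C=00, U=10, G=11, A=01.
code = {"C": "00", "U": "10", "G": "11", "A": "01"}
codons = ["".join(c) for c in itertools.product("CUGA", repeat=3)]
bits = [code[c[0]] + code[c[1]] + code[c[2]] for c in codons]   # 6-bit strings

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

H = np.array([[hamming(a, b) for b in bits] for a in bits])

print(H.shape, bool(np.allclose(H, H.T)))        # (64, 64) True: symmetric
print(set(H.sum(axis=0)), set(H.sum(axis=1)))    # {192} {192}: constant sums,
# so H / 192 is doubly stochastic, matching the property proved in the paper.
```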
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, J.; Derrida, B.
The problem of directed polymers on disordered hierarchical and hypercubic lattices is considered. For the hierarchical lattices the problem can be reduced to the study of the stable laws for combining random variables in a nonlinear way. The authors present the results of numerical simulations of two hierarchical lattices, finding evidence of a phase transition in one case. For a limiting case they extend the perturbation theory developed by Derrida and Griffiths to nonzero temperature and to higher order and use this approach to calculate thermal and geometrical properties (overlaps) of the model. In this limit they obtain an interpolation formula, allowing one to obtain the noninteger moments of the partition function from the integer moments. They obtain bounds for the transition temperature for hierarchical and hypercubic lattices, and some similarities between the problem on the two different types of lattice are discussed.
Lysanets, Yuliia V; Bieliaieva, Olena M
2018-02-23
This paper focuses on the prevalence of Latin terms and terminological collocations in the issues of Journal of Medical Case Reports (February 2007-August 2017) and discusses the role of Latin terminology in the contemporary process of writing medical case reports. The objective of the research is to study the frequency of using Latin terminology in English-language medical case reports, thus providing relevant guidelines for medical professionals who deal with this genre and drawing their attention to the peculiarities of using Latin in case reports. The selected medical case reports are considered, using methods of quantitative examination and structural, narrative, and contextual analyses. We developed structural and thematic typologies of Latin terms and expressions, and we conducted a quantitative analysis that enabled us to observe the tendencies in using these lexical units in medical case reports. The research revealed that the use of Latin fully complies with the communicative strategies of medical case reports as a genre. Owing to the fact that Latin medical lexis is internationally adopted and understood worldwide, it promotes the conciseness of medical case reports, as well as contributes to their narrative style and educational intentions. The adequate use of Latin terms in medical case reports is an essential prerequisite of effective sharing of one's clinical findings with fellow researchers from all over the world. Therefore, it is highly important to draw students' attention to Latin terms and expressions that are used in medical case reports most frequently. Hence, the analysis of structural, thematic, and contextual features of Latin terms in case reports should be an integral part of curricula at medical universities.
Probabilistic Analysis and Density Parameter Estimation Within Nessus
NASA Astrophysics Data System (ADS)
Godines, Cody R.; Manteufel, Randall D.
2002-12-01
This NASA educational grant has the goal of promoting probabilistic analysis methods to undergraduate and graduate UTSA engineering students. Two undergraduate-level and one graduate-level course were offered at UTSA providing a large number of students exposure to and experience in probabilistic techniques. The grant provided two research engineers from Southwest Research Institute the opportunity to teach these courses at UTSA, thereby exposing a large number of students to practical applications of probabilistic methods and state-of-the-art computational methods. In classroom activities, students were introduced to the NESSUS computer program, which embodies many algorithms in probabilistic simulation and reliability analysis. Because the NESSUS program is used at UTSA in both student research projects and selected courses, a student version of a NESSUS manual has been revised and improved, with additional example problems being added to expand the scope of the example application problems. This report documents two research accomplishments in the integration of a new sampling algorithm into NESSUS and in the testing of the new algorithm. The new Latin Hypercube Sampling (LHS) subroutines use the latest NESSUS input file format and specific files for writing output. The LHS subroutines are called out early in the program so that no unnecessary calculations are performed. Proper correlation between sets of multidimensional coordinates can be obtained by using NESSUS' LHS capabilities. Finally, two types of correlation are written to the appropriate output file. The program enhancement was tested by repeatedly estimating the mean, standard deviation, and 99th percentile of four different responses using Monte Carlo (MC) and LHS. These test cases, put forth by the Society of Automotive Engineers, are used to compare probabilistic methods. For all test cases, it is shown that LHS has a lower estimation error than MC when used to estimate the mean, standard deviation, and 99th percentile of the four responses at the 50 percent confidence level and using the same number of response evaluations for each method. In addition, LHS requires fewer calculations than MC in order to be 99.7 percent confident that a single mean, standard deviation, or 99th percentile estimate will be within at most 3 percent of the true value of each parameter. Again, this is shown for all of the test cases studied. For that reason it can be said that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ; furthermore, the newest LHS module is a valuable new enhancement of the program.
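A minimal sketch of the kind of MC-versus-LHS comparison reported here, with a toy analytic response standing in for a NESSUS model run: both methods repeatedly estimate the mean of the response, and the scatter of the LHS estimates should be visibly smaller.

```python
import numpy as np
from scipy.stats import qmc, norm

def response(u):
    # Toy nonlinear response of two standard-normal inputs; the uniforms
    # are clipped before the inverse-CDF transform to avoid infinities.
    z = norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))
    return z[:, 0] ** 2 + 0.5 * np.exp(0.3 * z[:, 1])

n, reps = 100, 500
rng = np.random.default_rng(0)
mc_means = [response(rng.random((n, 2))).mean() for _ in range(reps)]
lhs_means = [response(qmc.LatinHypercube(d=2, seed=s).random(n)).mean()
             for s in range(reps)]
print("MC  std of mean estimate:", np.std(mc_means))
print("LHS std of mean estimate:", np.std(lhs_means))   # noticeably smaller
```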
A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing
2015-09-01
The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
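A minimal sketch of the CTP construction described above: compute the zeros of a one-dimensional Chebyshev polynomial, map them to the design range, and combine them by tensor product. The degree and dimension are illustrative.

```python
import itertools
import numpy as np

def chebyshev_zeros(n, lo=-1.0, hi=1.0):
    # Zeros of the degree-n Chebyshev polynomial of the first kind,
    # x_k = cos((2k - 1) * pi / (2n)), affinely mapped to [lo, hi].
    k = np.arange(1, n + 1)
    z = np.cos((2 * k - 1) * np.pi / (2 * n))
    return 0.5 * (lo + hi) + 0.5 * (hi - lo) * z

nodes_1d = chebyshev_zeros(5, lo=0.0, hi=1.0)
ctp = np.array(list(itertools.product(nodes_1d, repeat=2)))   # tensor product
print(ctp.shape)   # (25, 2): a 2-D CTP design from 5 one-dimensional zeros
```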
NASA Astrophysics Data System (ADS)
Ausloos, M.; Ivanova, K.
2004-06-01
The classical technical analysis methods for financial time series, based on moving averages and momentum, are recalled. Illustrations use the IBM share price and Latin American (Argentinian MerVal, Brazilian Bovespa, and Mexican IPC) market indices. We have also searched for scaling ranges and exponents in exchange rates between Latin American currencies (ARS, CLP, MXP) and other major currencies (DEM, GBP, JPY, USD) and SDRs. We have sorted out correlations and anticorrelations of such exchange rates with respect to the DEM, GBP, JPY, and USD. They indicate a very complex or speculative behavior.
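The two indicators recalled above are a few lines of NumPy. The sketch below applies a moving average and a fixed-lag momentum to a synthetic log-random-walk price series standing in for the MerVal/Bovespa/IPC data; window and lag lengths are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic price path: geometric random walk with slight drift.
price = 100 * np.exp(np.cumsum(0.0005 + 0.01 * rng.standard_normal(500)))

def moving_average(p, w):
    return np.convolve(p, np.ones(w) / w, mode="valid")

def momentum(p, lag):
    return p[lag:] - p[:-lag]          # price change over a fixed lag

ma20 = moving_average(price, 20)
mom10 = momentum(price, 10)
# A naive crossing rule: "long" whenever price sits above its 20-day average.
signal = price[19:] > ma20
print(signal.mean(), mom10[-1])
```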
A LATIN-based model reduction approach for the simulation of cycling damage
NASA Astrophysics Data System (ADS)
Bhattacharyya, Mainak; Fau, Amelie; Nackenhorst, Udo; Néron, David; Ladevèze, Pierre
2017-11-01
The objective of this article is to introduce a new method, including model order reduction, for the life prediction of structures subjected to cycling damage. Contrary to classical incremental schemes for damage computation, a non-incremental technique, the LATIN method, is used herein as the solution framework. This approach allows the introduction of a PGD model reduction technique, which leads to a drastic reduction of the computational cost. The proposed framework is exemplified for structures subjected to cyclic loading, where damage is considered to be isotropic and micro-defect closure effects are taken into account. A difficulty herein for the use of the LATIN method comes from the state laws, which cannot be transformed into linear relations through an internal variable transformation. A specific treatment of this issue is introduced in this work.
National working conditions surveys in Latin America: comparison of methodological characteristics
Merino-Salazar, Pamela; Artazcoz, Lucía; Campos-Serna, Javier; Gimeno, David; Benavides, Fernando G.
2015-01-01
Background: High-quality and comparable data to monitor working conditions and health in Latin America are not currently available. In 2007, multiple Latin American countries started implementing national working conditions surveys. However, little is known about their methodological characteristics. Objective: To identify commonalities and differences in the methodologies of working conditions surveys (WCSs) conducted in Latin America through 2013. Methods: The study critically examined WCSs in Latin America between 2007 and 2013. Sampling design, data collection, and questionnaire content were compared. Results: Two types of surveys were identified: (1) surveys covering the entire working population and administered at the respondent's home and (2) surveys administered at the workplace. There was considerable overlap in the topics covered by the dimensions of employment and working conditions measured, but less overlap in terms of health outcomes, prevention resources, and activities. Conclusions: Although WCSs from Latin America are similar, there was heterogeneity across surveyed populations and location of the interview. Reducing differences in surveys between countries will increase comparability and allow for a more comprehensive understanding of occupational health in the region. PMID:26079314
Puig, Susana; Potrony, Miriam; Cuellar, Francisco; Puig-Butille, Joan Anton; Carrera, Cristina; Aguilera, Paula; Nagore, Eduardo; Garcia-Casado, Zaida; Requena, Celia; Kumar, Rajiv; Landman, Gilles; Costa Soares de Sá, Bianca; Gargantini Rezze, Gisele; Facure, Luciana; de Avila, Alexandre Leon Ribeiro; Achatz, Maria Isabel; Carraro, Dirce Maria; Duprat Neto, João Pedreira; Grazziotin, Thais C.; Bonamigo, Renan R.; Rey, Maria Carolina W.; Balestrini, Claudia; Morales, Enrique; Molgo, Montserrat; Bakos, Renato Marchiori; Ashton-Prolla, Patricia; Giugliani, Roberto; Larre Borges, Alejandra; Barquet, Virginia; Pérez, Javiera; Martínez, Miguel; Cabo, Horacio; Cohen Sabban, Emilia; Latorre, Clara; Carlos-Ortega, Blanca; Salas-Alanis, Julio C; Gonzalez, Roger; Olazaran, Zulema; Malvehy, Josep; Badenas, Celia
2016-01-01
Purpose: CDKN2A is the main high-risk melanoma-susceptibility gene, but it has been poorly assessed in Latin America. We sought to analyze CDKN2A and MC1R in patients from Latin America with familial and sporadic multiple primary melanoma (SMP) and compare the data with those for patients from Spain to establish bases for melanoma genetic counseling in Latin America. Methods: CDKN2A and MC1R were sequenced in 186 Latin American patients from Argentina, Brazil, Chile, Mexico, and Uruguay, and in 904 Spanish patients. Clinical and phenotypic data were obtained. Results: Overall, 24 and 14% of melanoma-prone families in Latin America and Spain, respectively, had mutations in CDKN2A. Latin American families had CDKN2A mutations more frequently (P = 0.014) than Spanish ones. Of patients with SMP, 10% of those from Latin America and 8.5% of those from Spain had mutations in CDKN2A (P = 0.623). The most recurrent CDKN2A mutations were c.-34G>T and p.G101W. Latin American patients had fairer hair (P = 0.016) and skin (P < 0.001) and a higher prevalence of MC1R variants (P = 0.003) compared with Spanish patients. Conclusion: The inclusion criteria for genetic counseling of melanoma in Latin America may be the same criteria used in Spain, as suggested in areas with low to medium incidence, SMP with at least two melanomas, or families with at least two cases among first- or second-degree relatives. PMID:26681309
1986-12-01
[Recovered table-of-contents fragment] III. Analysis of Parallel Design: Parallel Abstract Data Types; Abstract Data Type; Parallel ADT; Data-Structure Design; Object-Oriented Design
Discrete sensitivity derivatives of the Navier-Stokes equations with a parallel Krylov solver
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Taylor, Arthur C., III
1994-01-01
This paper solves an 'incremental' form of the sensitivity equations derived by differentiating the discretized thin-layer Navier Stokes equations with respect to certain design variables of interest. The equations are solved with a parallel, preconditioned Generalized Minimal RESidual (GMRES) solver on a distributed-memory architecture. The 'serial' sensitivity analysis code is parallelized by using the Single Program Multiple Data (SPMD) programming model, domain decomposition techniques, and message-passing tools. Sensitivity derivatives are computed for low and high Reynolds number flows over a NACA 1406 airfoil on a 32-processor Intel Hypercube, and found to be identical to those computed on a single-processor Cray Y-MP. It is estimated that the parallel sensitivity analysis code has to be run on 40-50 processors of the Intel Hypercube in order to match the single-processor processing time of a Cray Y-MP.
NASA Astrophysics Data System (ADS)
Bennett, Katrina E.; Urrego Blanco, Jorge R.; Jonko, Alexandra; Bohn, Theodore J.; Atchley, Adam L.; Urban, Nathan M.; Middleton, Richard S.
2018-01-01
The Colorado River Basin is a fundamentally important river for society, ecology, and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent, and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. We combine global sensitivity analysis with a space-filling Latin Hypercube Sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach. We find that snow-dominated regions are much more sensitive to uncertainties in VIC parameters. Although baseflow and runoff changes respond to parameters used in previous sensitivity studies, we discover new key parameter sensitivities. For instance, changes in runoff and evapotranspiration are sensitive to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI) in the VIC model. It is critical for improved modeling to narrow uncertainty in these parameters through improved observations and field studies. This is important because LAI and albedo are anticipated to change under future climate and narrowing uncertainty is paramount to advance our application of models such as VIC for water resource management.
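A minimal sketch of the emulation-plus-variance-decomposition workflow described above, with an analytic three-parameter function standing in for VIC and a random forest standing in for the statistical emulator; first-order indices are estimated by binning cheap emulator predictions on each input in turn. All functions and sizes are illustrative.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import RandomForestRegressor

def model(X):
    # Illustrative stand-in for an expensive hydrologic model run.
    return np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]

# Space-filling LHS design of "expensive" runs, then train the emulator.
X = qmc.LatinHypercube(d=3, seed=3).random(400)
em = RandomForestRegressor(n_estimators=200, random_state=3).fit(X, model(X))

# First-order index S_i ~ Var(E[y | x_i]) / Var(y), estimated by binning
# a large emulator sample on each input.
Xb = np.random.default_rng(3).random((100_000, 3))
yb = em.predict(Xb)
for i in range(3):
    bins = np.digitize(Xb[:, i], np.linspace(0, 1, 21))
    cond_means = [yb[bins == b].mean() for b in np.unique(bins)]
    print(f"S_{i} ~", np.var(cond_means) / np.var(yb))
```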
An Elitist Multiobjective Tabu Search for Optimal Design of Groundwater Remediation Systems.
Yang, Yun; Wu, Jianfeng; Wang, Jinguo; Zhou, Zhifang
2017-11-01
This study presents a new multiobjective evolutionary algorithm (MOEA), the elitist multiobjective tabu search (EMOTS), and couples it with MODFLOW/MT3DMS to develop a groundwater simulation-optimization (SO) framework, based on modular design, for optimal design of groundwater remediation systems using the pump-and-treat (PAT) technique. The most notable improvements of EMOTS over the original multiple objective tabu search (MOTS) lie in the elitist strategy, the selection strategy, and the neighborhood move rule. The elitist strategy maintains all nondominated solutions through the later search process for better convergence to the true Pareto front. The elitism-based selection operator is modified to choose the two most remote solutions from the current candidate list as seed solutions, to increase the diversity of the search space. Moreover, neighborhood solutions are uniformly generated using Latin hypercube sampling (LHS) in the bounded neighborhood space around each seed solution. To demonstrate the performance of the EMOTS, we consider a synthetic groundwater remediation example. Problem formulations consist of two objective functions with continuous decision variables (pumping rates) while meeting water quality requirements. In particular, a sensitivity analysis is performed on the synthetic case to determine the optimal combination of the heuristic parameters. Furthermore, the EMOTS is successfully applied to evaluate remediation options at the field site of the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. With both the hypothetical and the large-scale field remediation sites, the EMOTS-based SO framework is demonstrated to outperform the original MOTS in achieving the performance metrics of optimality and diversity of nondominated frontiers, with desirable stability and robustness. © 2017, National Ground Water Association.
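A minimal sketch of the LHS neighborhood move described above: candidates are drawn by Latin hypercube sampling in a box around a seed solution and clipped to the global decision-variable bounds. Variable names, bounds, and sizes are illustrative, not the MMR case values.

```python
import numpy as np
from scipy.stats import qmc

def lhs_neighbourhood(seed_solution, radius, lo, hi, n):
    # LHS sample of n neighbours inside [seed - radius, seed + radius],
    # intersected with the global bounds [lo, hi].
    l = np.maximum(lo, seed_solution - radius)
    u = np.minimum(hi, seed_solution + radius)
    unit = qmc.LatinHypercube(d=len(seed_solution), seed=0).random(n)
    return qmc.scale(unit, l, u)

seed = np.array([40.0, 55.0, 30.0])      # hypothetical pumping rates
nbrs = lhs_neighbourhood(seed, radius=5.0,
                         lo=np.zeros(3), hi=np.full(3, 100.0), n=20)
print(nbrs.min(axis=0), nbrs.max(axis=0))   # all within the bounded box
```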
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.
2012-07-01
In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code in order to perform uncertainty analysis on k-infinity and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for such purposes, where cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest amount of isotopic covariance matrices among the different major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance-limits concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonant self-shielding calculations such as DRAGONv4.
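The 95%/95% tolerance-limit criterion mentioned above is usually traced to Wilks' order-statistics formula; a short sketch computes the minimum number of code runs it requires (59 one-sided, 93 two-sided), which a 500-member LHS ensemble exceeds comfortably.

```python
def wilks_one_sided(beta=0.95, gamma=0.95):
    # Smallest n with confidence 1 - beta**n >= gamma that the sample
    # maximum bounds the beta-quantile of the output population.
    n = 1
    while beta ** n > 1 - gamma:
        n += 1
    return n

def wilks_two_sided(beta=0.95, gamma=0.95):
    # Smallest n such that (min, max) covers a beta fraction of the
    # population with confidence 1 - n*b**(n-1) + (n-1)*b**n >= gamma.
    n = 2
    while n * beta ** (n - 1) - (n - 1) * beta ** n > 1 - gamma:
        n += 1
    return n

print(wilks_one_sided())   # 59
print(wilks_two_sided())   # 93
```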
Analysis of the influence of handset phone position on RF exposure of brain tissue.
Ghanmi, Amal; Varsier, Nadège; Hadjem, Abdelhamid; Conil, Emmanuelle; Picon, Odile; Wiart, Joe
2014-12-01
Exposure to mobile phone radio frequency (RF) electromagnetic fields depends on many different parameters. For epidemiological studies investigating the risk of brain cancer linked to RF exposure from mobile phones, it is of great interest to characterize brain tissue exposure and to know which parameters this exposure is sensitive to. One such parameter is the position of the phone during communication. In this article, we analyze the influence of the phone position on brain exposure by comparing the specific absorption rate (SAR) induced in the head by two different mobile phone models operating in Global System for Mobile Communications (GSM) frequency bands. To achieve this objective, 80 different phone positions were chosen using an experimental design based on Latin hypercube sampling (LHS) to select a representative set of positions. The SAR averaged over 10 g (SAR10g) in the head, the SAR averaged over 1 g (SAR1g) in the brain, and the averaged SAR in different anatomical brain structures were estimated at 900 and 1800 MHz for the 80 positions. The results illustrate that SAR distributions inside the brain area are sensitive to the position of the mobile phone relative to the head. The results also show that for 5-10% of the studied positions the SAR10g in the head and the SAR1g in the brain can be 20% higher than the SAR estimated for the standard cheek position, and that the Specific Anthropomorphic Mannequin (SAM) model is conservative for 95% of all the studied positions. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Alexandre Wang, Qiuping; Li, Wei; Cai, Xu
2017-09-01
The formation of continuous opinion dynamics is investigated based on a virtual gambling mechanism in which agents fight for a limited resource. We propose a model with agents holding opinions between -1 and 1. Agents are segregated into two cliques according to the sign of their opinions. Local communication happens only when the opinion distance between the corresponding agents is no larger than a pre-defined confidence threshold. Theoretical analysis of special cases provides a deep understanding of the roles of both the resource allocation parameter and the confidence threshold in the formation of opinion dynamics. For a sparse network, the evolution of opinion dynamics is negligible in the region of low confidence threshold when mindless agents are absent. Numerical results also imply that, in the presence of economic agents, a high confidence threshold is required for apparent clustering of agents in opinion. Moreover, a consensus state is generated only when the following three conditions are satisfied simultaneously: mindless agents are absent, the resource is concentrated in one clique, and the confidence threshold tends to a critical value 1.25 + 2/k_a (for k_a > 8/3, where k_a is the average number of friends of individual agents). For a fixed confidence threshold and resource allocation parameter, the most chaotic steady state of the dynamics occurs when the fraction of mindless agents is about 0.7. It is also demonstrated that economic agents are more likely to win at gambling than mindless ones. Finally, the importance of the three involved parameters in establishing the uncertainty of the model response is quantified in terms of Latin hypercube sampling-based sensitivity analysis.
Qian, Yun; Yan, Huiping; Hou, Zhangshuan; ...
2015-04-10
We investigate the sensitivity of precipitation characteristics (mean, extreme, and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that, of the 22 parameters perturbed in the cloud ensemble, the six having the greatest influence on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter change. The influence of individual parameters does not depend on the sampling approach or on the concomitant parameters selected. Generally, the GLM is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows significant seasonal variability in the mid-latitude continental regions, but is very small in tropical continental regions.
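A minimal sketch of the two design types contrasted above, at the aerosol ensemble's size (16 parameters, 256 members), comparing their space-filling quality with scipy's centred-discrepancy measure; this is a generic illustration, not the CAM5 setup.

```python
from scipy.stats import qmc

d, n = 16, 256                             # cf. the 16-parameter, 256-member ensemble
lhs = qmc.LatinHypercube(d=d, seed=8).random(n)
sob = qmc.Sobol(d=d, seed=8).random(n)     # scrambled quasi-Monte Carlo design

for name, pts in (("LHS  ", lhs), ("Sobol", sob)):
    # Lower centred discrepancy means a more uniform covering of [0,1]^d.
    print(name, qmc.discrepancy(pts, method="CD"))
```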
NASA Astrophysics Data System (ADS)
Bertin, Daniel
2017-02-01
An innovative 3-D numerical model for the dynamics of volcanic ballistic projectiles is presented here. The model focuses on ellipsoidal particles and improves previous approaches by considering a horizontal wind field, virtual mass forces, and drag forces subject to variable, shape-dependent drag coefficients. Modeling suggests that the projectile's launch velocity and ejection angle are first-order parameters influencing ballistic trajectories. The projectile's density and minor radius are second-order factors, whereas both the intermediate and major radii of the projectile are of third order. Comparing output parameters under different input data highlights the importance of considering a horizontal wind field and variable shape-dependent drag coefficients in ballistic modeling, which suggests that they should be included in every ballistic model. On the other hand, virtual mass forces should be discarded, since they contribute almost nothing to ballistic trajectories. Simulation results were used to constrain some crucial input parameters (launch velocity, ejection angle, wind speed, and wind azimuth) of the block that formed the biggest and most distal ballistic impact crater during the 1984-1993 eruptive cycle of Lascar volcano, Northern Chile. Subsequently, up to 10^6 simulations were performed, with nine ejection parameters defined by a Latin-hypercube sampling approach. Simulation results were summarized as a quantitative probabilistic hazard map for ballistic projectiles. Transects were also produced in order to depict aerial hazard zones based on the same probabilistic procedure. Both maps combined can be used as a hazard-prevention tool for ground and aerial transit near unresting volcanoes.
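For orientation, a minimal point-mass sketch of ballistic integration with quadratic drag and a horizontal wind follows. It omits the paper's ellipsoidal, shape-dependent drag coefficients and virtual mass terms, and all values are illustrative rather than Lascar parameters.

```python
import numpy as np

g = np.array([0.0, -9.81])                   # gravity, m/s^2
rho_air, Cd, radius, rho_rock = 1.0, 1.0, 0.35, 2500.0   # illustrative values
area = np.pi * radius ** 2                   # frontal area of a sphere
mass = rho_rock * (4.0 / 3.0) * np.pi * radius ** 3
wind = np.array([15.0, 0.0])                 # horizontal wind, m/s

v0, angle = 180.0, np.deg2rad(45)            # launch speed and ejection angle
x = np.array([0.0, 0.0])
v = v0 * np.array([np.cos(angle), np.sin(angle)])

dt = 0.01
while x[1] >= 0.0:                           # forward-Euler time stepping
    v_rel = v - wind                         # drag acts on wind-relative motion
    drag = -0.5 * rho_air * Cd * area * np.linalg.norm(v_rel) * v_rel / mass
    v = v + (g + drag) * dt
    x = x + v * dt
print("range ~ %.0f m" % x[0])
```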
A Novel Approach for Determining Source-Receptor Relationships of Aerosols in Model Simulations
NASA Astrophysics Data System (ADS)
Ma, P.; Gattiker, J.; Liu, X.; Rasch, P. J.
2013-12-01
The climate modeling community usually performs sensitivity studies in the 'one-factor-at-a-time' fashion. However, owing to the a priori unknown complexity and nonlinearity of the climate system and simulation response, it is computationally expensive to systematically identify the cause and effect of multiple factors in climate models. In this study, we use a Gaussian process emulator, based on a small number of Community Atmosphere Model Version 5.1 (CAM5) simulations (constrained by meteorological reanalyses) using a Latin hypercube experimental design, to demonstrate that it is possible to characterize model behavior accurately and very efficiently without any modifications to the model itself. We use the emulator to characterize the source-receptor relationships of black carbon (BC), focusing specifically on describing the constituent burden and surface deposition rates from emissions in various regions. Our results show that the emulator is capable of quantifying the contributions to aerosol burden and surface deposition from different source regions, finding that most current Arctic BC comes from remote sources. We also demonstrate that the sensitivity of the BC burdens to emission perturbations differs for various source regions. For example, emission growth in Africa, where dry convection is strong, results in a moderate increase of the BC burden over the globe, while the same emission growth in the Arctic leads to a significant increase of local BC burdens and surface deposition rates. These results provide insights into the dynamical, physical, and chemical processes of the climate model, and the conclusions may have policy implications for making cost-effective global and regional pollution management strategies.
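A minimal sketch of the emulation step described above: fit a Gaussian process to a small Latin hypercube design of simulator runs and query it cheaply. The two-input "simulator" is an analytic placeholder for CAM5, not the study's configuration.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(X):
    # Placeholder: BC burden response to two regional emission scalings.
    return 1.5 * X[:, 0] + 0.4 * np.sqrt(X[:, 1]) + 0.05 * X[:, 0] * X[:, 1]

X = qmc.LatinHypercube(d=2, seed=5).random(30)     # 30 "expensive" runs
y = simulator(X)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.5, 0.5]),
                              normalize_y=True).fit(X, y)

Xq = np.random.default_rng(5).random((5, 2))       # cheap emulator queries
mean, sd = gp.predict(Xq, return_std=True)
print(np.c_[simulator(Xq), mean, sd])              # truth vs emulator (+/- sd)
```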
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovchinnikov, Mikhail; Lim, Kyo-Sun Sunny; Larson, Vincent E.
Coarse-resolution climate models increasingly rely on probability density functions (PDFs) to represent subgrid-scale variability of prognostic variables. While PDFs characterize the horizontal variability, a separate treatment is needed to account for the vertical structure of clouds and precipitation. When sub-columns are drawn from these PDFs for microphysics or radiation parameterizations, appropriate vertical correlations must be enforced via PDF overlap specifications. This study evaluates the representation of PDF overlap in the Subgrid Importance Latin Hypercube Sampler (SILHS) employed in the assumed-PDF turbulence and cloud scheme called Cloud Layers Unified By Binormals (CLUBB). PDF overlap in CLUBB-SILHS simulations of continental and tropical oceanic deep convection is compared with the overlap of PDFs of various microphysics variables in cloud-resolving model (CRM) simulations of the same cases that explicitly predict the 3D structure of cloud and precipitation fields. CRM results show that PDF overlap varies significantly between different hydrometeor types, as well as between PDFs of mass and number mixing ratios for each species, a distinction that the current SILHS implementation does not make. In CRM simulations that explicitly resolve cloud and precipitation structures, faster-falling species, such as rain and graupel, exhibit significantly higher coherence in their vertical distributions than slow-falling cloud liquid and ice. These results suggest that, to improve the overlap treatment in the sub-column generator, the PDF correlations need to depend on hydrometeor properties, such as fall speeds, in addition to the currently implemented dependency on the turbulent convective length scale.
Simulating maize yield and biomass with spatial variability of soil field capacity
Ma, Liwang; Ahuja, Lajpat; Trout, Thomas; Nolan, Bernard T.; Malone, Robert W.
2015-01-01
Spatial variability in field soil properties is a challenge for system modelers, who use single representative values, such as means, for model inputs rather than their distributions. In this study, the Root Zone Water Quality Model (RZWQM2) was first calibrated for 4 yr of maize (Zea mays L.) data at six irrigation levels in northern Colorado and then used to study the effect of spatial variability in soil field capacity (FC), estimated in 96 plots, on maize yield and biomass. The best results were obtained when the crop parameters were fitted along with FCs, with a root mean squared error (RMSE) of 354 kg ha^-1 for yield and 1202 kg ha^-1 for biomass. When running the model using each of the 96 sets of field-estimated FC values, instead of calibrating FCs, the average simulated yield and biomass from the 96 runs were close to measured values, with an RMSE of 376 kg ha^-1 for yield and 1504 kg ha^-1 for biomass. When an average of the 96 FC values for each soil layer was used, simulated yield and biomass were also acceptable, with an RMSE of 438 kg ha^-1 for yield and 1627 kg ha^-1 for biomass. Therefore, when there are large numbers of FC measurements, an average value might be sufficient for model inputs. However, when the ranges of FC measurements are known for each soil layer, a sampled distribution of FCs obtained using Latin hypercube sampling (LHS) might be used for model inputs.
Significant uncertainty in global scale hydrological modeling from precipitation data errors
NASA Astrophysics Data System (ADS)
Sperna Weiland, Frederiek C.; Vrugt, Jasper A.; van Beek, Rens (L.) P. H.; Weerts, Albrecht H.; Bierkens, Marc F. P.
2015-10-01
In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we focus on large-scale hydrologic modeling and analyze the effect of parameter and rainfall data uncertainty on simulated discharge dynamics with the global hydrologic model PCR-GLOBWB. We use three rainfall data products: the CFSR reanalysis, the ERA-Interim reanalysis, and a combined ERA-40 reanalysis and CRU dataset. Parameter uncertainty is derived from Latin Hypercube Sampling (LHS) using monthly discharge data from five of the largest river systems in the world. Our results demonstrate that the default parameterization of PCR-GLOBWB, derived from global datasets, can be improved by calibrating the model against monthly discharge observations. Yet, it is difficult to find a single parameterization of PCR-GLOBWB that works well for all five river basins considered herein and shows consistent performance during both the calibration and evaluation periods. Still, there may be possibilities for regionalization based on catchment similarities. Our simulations illustrate that parameter uncertainty constitutes only a minor part of predictive uncertainty. Thus, the apparent dichotomy between simulations of global-scale hydrologic behavior and actual data cannot be resolved by simply increasing the model complexity of PCR-GLOBWB and resolving sub-grid processes. Instead, it would be more productive to improve the characterization of global rainfall amounts at spatial resolutions of 0.5° and smaller.
Language and style: A barrier to neurosurgical research and advancement in Latin America
Ashfaq, Adeel; Lazareff, Jorge
2017-01-01
Background: The neurosurgical burden in Latin America is understudied and likely underestimated, thus it is imperative to improve quality, training, and delivery of neurosurgical care. A significant aspect of this endeavor is for Latin America to become an integral aspect of the global neurosurgical community, however, there is a paucity of ideology and literature coming from Central and South America. We sought to explore neurosurgical dialogue originating from Latin America as well as barriers to the advancement of neurosurgery in this region. Methods: We conducted a systematic literature review exploring research originating in Latin America in three international neurosurgical journals – Journal of Neurosurgery, Surgical Neurology International, and World Neurosurgery. We utilized PubMed search algorithms to identify articles. Inclusion criteria included publication within the three aforementioned journals, author affiliation with Latin American institutions, and publication within the specified time frame of January 2014 to July 2017. Results: There were 7469 articles identified that met the search criteria. Of these 7469 articles, 326 (4.4%) were from Latin American nations. Conclusion: Our data suggests a relatively low percentage of neurosurgical research originating from Latin America, suggesting a significant lack of participation in the global neurosurgical community. Barriers to global scientific communication include language, rhetorical style, culture, history, biases, funding, and governmental support. Despite challenges, Latin America is making strides towards improvement including the development of neurosurgical societies, as well as international collaborative training and research programs. We consider our report to be a valid initiation of discussion of the broader issue of neurosurgical communication. PMID:29404195
[The need to develop demographic census systems for Latin America].
Silva, A
1987-01-01
The author presents the case for developing new software packages specifically designed to process population census information for Latin America. The focus is on the problems faced by developing countries in handling vast amounts of data in an efficient way. First, the basic methods of census data processing are discussed, then brief descriptions of some of the available software are included. Finally, ways in which data processing programs could be geared toward and utilized for improving the accuracy of Latin American censuses in the 1990s are proposed.
A framework for building hypercubes using MapReduce
NASA Astrophysics Data System (ADS)
Tapiador, D.; O'Mullane, W.; Brown, A. G. A.; Luri, X.; Huedo, E.; Osuna, P.
2014-05-01
The European Space Agency's Gaia mission will create the largest and most precise three dimensional chart of our galaxy (the Milky Way), by providing unprecedented position, parallax, proper motion, and radial velocity measurements for about one billion stars. The resulting catalog will be made available to the scientific community and will be analyzed in many different ways, including the production of a variety of statistics. The latter will often entail the generation of multidimensional histograms and hypercubes as part of the precomputed statistics for each data release, or for scientific analysis involving either the final data products or the raw data coming from the satellite instruments. In this paper we present and analyze a generic framework that allows the hypercube generation to be easily done within a MapReduce infrastructure, providing all the advantages of the new Big Data analysis paradigm but without dealing with any specific interface to the lower level distributed system implementation (Hadoop). Furthermore, we show how executing the framework for different data storage model configurations (i.e. row or column oriented) and compression techniques can considerably improve the response time of this type of workload for the currently available simulated data of the mission. In addition, we put forward the advantages and shortcomings of the deployment of the framework on a public cloud provider, benchmark against other popular solutions available (that are not always the best for such ad-hoc applications), and describe some user experiences with the framework, which was employed for a number of dedicated astronomical data analysis techniques workshops.
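The map/reduce decomposition such a framework builds on can be condensed to a few lines of plain Python: mappers emit (cell, 1) pairs for the hypercube cell each catalog row falls into, and a reducer sums counts per cell; Hadoop distributes exactly this pattern. Record values and bin widths below are illustrative.

```python
from collections import Counter
from itertools import chain

records = [(0.12, 3.4), (0.55, 1.1), (0.58, 1.0), (0.91, 3.9)]  # toy catalog rows

def mapper(rec, widths=(0.1, 0.5)):
    # Emit the multi-dimensional bin index ("cell") of one record.
    yield tuple(int(v // w) for v, w in zip(rec, widths)), 1

def reducer(pairs):
    # Sum the emitted counts per cell, as the shuffle/reduce stage would.
    counts = Counter()
    for cell, n in pairs:
        counts[cell] += n
    return counts

hypercube = reducer(chain.from_iterable(mapper(r) for r in records))
print(hypercube)    # cell index -> object count
```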
RTNN: The New Parallel Machine in Zaragoza
NASA Astrophysics Data System (ADS)
Sijs, A. J. V. D.
I report on the development of RTNN, a parallel computer designed as a 4^4 hypercube of 256 T9000 transputer nodes, each with 8 MB memory. The peak performance of the machine is expected to be 2.5 Gflops.
Fault tolerant hypercube computer system architecture
NASA Technical Reports Server (NTRS)
Madan, Herb S. (Inventor); Chow, Edward (Inventor)
1989-01-01
A fault-tolerant multiprocessor computer system of the hypercube type comprising a hierarchy of computers of like kind which can be functionally substituted for one another as necessary is disclosed. Communication between the working nodes is via one communications network, while communication between the working nodes and watchdog nodes and load-balancing nodes higher in the structure is via another communications network separate from the first. A typical branch of the hierarchy reporting to a master node or host computer comprises: a plurality of first computing nodes; a first network of message-conducting paths for interconnecting the first computing nodes as a hypercube, the first network providing a path for message transfer between the first computing nodes; a first watchdog node; and a second network of message-conducting paths for connecting the first computing nodes to the first watchdog node independently of the first network, the second network providing an independent path for test-message and reconfiguration-affecting transfers between the first computing nodes and the first watchdog node. There is additionally a plurality of second computing nodes; a third network of message-conducting paths for interconnecting the second computing nodes as a hypercube, the third network providing a path for message transfer between the second computing nodes; a fourth network of message-conducting paths for connecting the second computing nodes to the first watchdog node independently of the third network, the fourth network providing an independent path for test-message and reconfiguration-affecting transfers between the second computing nodes and the first watchdog node; and a first multiplexer disposed between the first watchdog node and the second and fourth networks, allowing the first watchdog node to selectively communicate with individual ones of the computing nodes through the second and fourth networks; as well as a second watchdog node operably connected to the first multiplexer, whereby the second watchdog node can selectively communicate with individual ones of the computing nodes through the second and fourth networks. The branch is completed by a first load-balancing node and a second multiplexer connected between the first load-balancing node and the first and second watchdog nodes, allowing the first load-balancing node to selectively communicate with the first and second watchdog nodes.
The software and algorithms for hyperspectral data processing
NASA Astrophysics Data System (ADS)
Shyrayeva, Anhelina; Martinov, Anton; Ivanov, Victor; Katkovsky, Leonid
2017-04-01
Hyperspectral remote sensing techniques are widely used for collecting and processing information about the Earth's surface objects. Hyperspectral data are combined to form a three-dimensional (x, y, λ) data cube. The Department of Aerospace Research of the Institute of Applied Physical Problems of the Belarusian State University presents a general model of the software for hyperspectral image data analysis and processing. The software runs in the Windows XP/7/8/8.1/10 environment on any personal computer. The complex has been written in C++ using the Qt framework and OpenGL for graphical data visualization. The software has a flexible structure that consists of a set of independent plugins. Each plugin is compiled as a Qt plugin and is a Windows dynamic-link library (DLL). Plugins can be categorized in terms of data reading types, data visualization (3D, 2D, 1D), and data processing. The software has various built-in functions for statistical and mathematical analysis, and signal-processing functions such as moving-average smoothing, the Savitzky-Golay smoothing technique, RGB correction, histogram transformation, and atmospheric correction. The software provides two of the authors' engineering techniques for the solution of the atmospheric correction problem: an iterative method for refining spectral albedo parameters using libRadtran, and an analytical least-squares method. The main advantages of these methods are a high processing rate (several minutes for 1 GB of data) and a low relative error in albedo retrieval (less than 15%). The software also supports work with spectral libraries, region-of-interest (ROI) selection, and spectral analysis such as cluster-type image classification and automatic comparison of hypercube spectra, by a similarity criterion, with similar ones from spectral libraries, and vice versa. The software deals with different kinds of spectral information in order to identify and distinguish spectrally unique materials. The following advantages should also be noted: fast and low-memory hypercube manipulation features, a user-friendly interface, modularity, and expandability.
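Two of the smoothing functions listed above are standard operations; a minimal sketch applies a moving average and scipy's Savitzky-Golay filter to a synthetic noisy spectrum (the spectrum shape and noise level are illustrative, not data from this software).

```python
import numpy as np
from scipy.signal import savgol_filter

wavelength = np.linspace(400, 1000, 601)             # nm
spectrum = np.exp(-((wavelength - 680) / 60.0) ** 2)  # synthetic reflectance peak
noisy = spectrum + 0.05 * np.random.default_rng(2).standard_normal(601)

# Moving-average smoothing (11-point window) and Savitzky-Golay smoothing
# (11-point window, cubic local fit), as named in the plugin description.
smooth_ma = np.convolve(noisy, np.ones(11) / 11, mode="same")
smooth_sg = savgol_filter(noisy, window_length=11, polyorder=3)

print(np.abs(smooth_sg - spectrum).mean() < np.abs(noisy - spectrum).mean())
```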
Three Dimensional Speckle Imaging Employing a Frequency-Locked Tunable Diode Laser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cannon, Bret D.; Bernacki, Bruce E.; Schiffern, John T.
2015-09-01
We describe a high-accuracy frequency stepping method for a tunable diode laser to improve a three-dimensional (3D) imaging approach based upon interferometric speckle imaging. The approach, modeled after Takeda, exploits tuning an illumination laser in frequency as speckle interferograms of the object (specklegrams) are acquired at each frequency in a Michelson interferometer. The resulting 3D hypercube of specklegrams encodes spatial information in the x-y plane of each image, with laser tuning arrayed along its z-axis. We present before-and-after laboratory results showing the enhanced 3D imaging that results from precise laser frequency control.
A partitioning strategy for nonuniform problems on multiprocessors
NASA Technical Reports Server (NTRS)
Berger, M. J.; Bokhari, S.
1985-01-01
The partitioning of a problem on a domain with unequal work estimates in different subdomains is considered, in a way that balances the work load across multiple processors. Such a problem arises, for example, in solving partial differential equations using an adaptive method that places extra grid points in certain subregions of the domain. A binary decomposition of the domain is used to partition it into rectangles requiring equal computational effort. The communication cost of mapping this partitioning onto different multiprocessors (a mesh-connected array, a tree machine and a hypercube) is then studied. The communication cost expressions can be used to determine the optimal depth of the above partitioning.
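A hedged Python sketch of the binary decomposition idea: recursively split a 2-D array of per-cell work estimates along alternating axes so that each half carries roughly equal work. The tie-breaking and stopping depth here are illustrative assumptions, not the paper's exact algorithm:

    import numpy as np

    def binary_decompose(work, depth, axis=0):
        """Recursively split `work` (2-D array of work estimates) into 2**depth
        rectangles of roughly equal total work, alternating split axes."""
        if depth == 0:
            return [work]
        totals = work.sum(axis=1 - axis)        # marginal work along split axis
        cut = int(np.searchsorted(np.cumsum(totals), totals.sum() / 2.0)) + 1
        first, second = np.split(work, [cut], axis=axis)
        return (binary_decompose(first, depth - 1, 1 - axis)
                + binary_decompose(second, depth - 1, 1 - axis))

    # Example: a refined corner region (heavier work) shifts the cut lines.
    work = np.ones((16, 16)); work[:4, :4] = 10.0
    parts = binary_decompose(work, depth=2)
    print([p.sum() for p in parts])   # roughly balanced work per partition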
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.
Income and beyond: Multidimensional Poverty in Six Latin American Countries
ERIC Educational Resources Information Center
Battiston, Diego; Cruces, Guillermo; Lopez-Calva, Luis Felipe; Lugo, Maria Ana; Santos, Maria Emma
2013-01-01
This paper studies multidimensional poverty for Argentina, Brazil, Chile, El Salvador, Mexico and Uruguay for the period 1992-2006. The approach overcomes the limitations of the two traditional methods of poverty analysis in Latin America (income-based and unmet basic needs) by combining income with five other dimensions: school attendance for…
Stereotypes of Latin Americans Perpetuated in Secondary School History Textbooks.
ERIC Educational Resources Information Center
Cruz, Barbara C.
1994-01-01
This study reviewed six history textbooks widely used in grades 7-12 across the U.S. Using a story-line analysis, the findings of this study suggest: (1) textbooks reinforce negative stereotypes of Latin Americans as lazy, passive, irresponsible, and, somewhat paradoxically, lustful, animalistic and violent; (2) the method of description employed…
Experiences and results multitasking a hydrodynamics code on global and local memory machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandell, D.
1987-01-01
A one-dimensional, time-dependent Lagrangian hydrodynamics code using a Godunov solution method has been multitasked for the Cray X-MP/48, the Intel iPSC hypercube, the Alliant FX series and the IBM RP3 computers. Actual multitasking results have been obtained for the Cray, Intel and Alliant computers, and simulated results were obtained for the Cray and RP3 machines. The differences in the methods required to multitask on each of the machines are discussed. Results are presented for a sample problem involving a shock wave moving down a channel. Comparisons are made between the theoretical speedups predicted by Amdahl's law and the actual speedups obtained. The problems of debugging on the different machines are also described.
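The theoretical speedups referred to above follow Amdahl's law, S(p) = 1 / (s + (1 - s)/p), for a code with serial fraction s on p processors. A small Python sketch (the serial fractions are illustrative, not measurements from the report):

    # Amdahl's law: speedup of a code with serial fraction s on p processors.
    def amdahl_speedup(serial_fraction: float, processors: int) -> float:
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

    for s in (0.01, 0.05, 0.20):
        print(f"s={s:.2f}:",
              [round(amdahl_speedup(s, p), 2) for p in (2, 4, 8, 48)])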
Scalable and fault tolerant orthogonalization based on randomized distributed data aggregation
Gansterer, Wilfried N.; Niederbrucker, Gerhard; Straková, Hana; Schulze Grotthoff, Stefan
2013-01-01
The construction of distributed algorithms for matrix computations built on top of distributed data aggregation algorithms with randomized communication schedules is investigated. For this purpose, a new aggregation algorithm for summing or averaging distributed values, the push-flow algorithm, is developed, which achieves superior resilience properties with respect to failures compared to existing aggregation methods. It is illustrated that on a hypercube topology it asymptotically requires the same number of iterations as the optimal all-to-all reduction operation and that it scales well with the number of nodes. Orthogonalization is studied as a prototypical matrix computation task. A new fault tolerant distributed orthogonalization method rdmGS, which can produce accurate results even in the presence of node failures, is built on top of distributed data aggregation algorithms. PMID:24748902
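The push-flow algorithm itself is not reproduced here, but the following Python sketch of generic randomized gossip averaging on a hypercube topology illustrates the kind of distributed aggregation it builds on (without the push-flow algorithm's fault-tolerance machinery):

    # Generic randomized gossip averaging on a hypercube -- a simplified
    # stand-in for the paper's push-flow algorithm, for illustration only.
    import random

    def gossip_average(values, d, rounds=200, seed=0):
        """Each round, a random node averages its value with a random hypercube
        neighbor; all values converge toward the global mean."""
        rng = random.Random(seed)
        x = list(values)
        for _ in range(rounds):
            i = rng.randrange(len(x))
            j = i ^ (1 << rng.randrange(d))   # neighbor differs in one bit
            x[i] = x[j] = (x[i] + x[j]) / 2.0
        return x

    vals = gossip_average([float(v) for v in range(8)], d=3)
    print(vals, "target mean:", 3.5)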
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as (1) measurement and computational errors, (2) uncertainties in the conceptual model and model-parameter estimates, and (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly developed optimization technique, based on coupling the Particle Swarm and Levenberg-Marquardt optimization methods, which has proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
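For readers unfamiliar with the Latin hypercube sampling step mentioned above, a minimal sketch using SciPy's qmc module (not the MADS implementation itself; the parameter names and bounds are invented for illustration):

    # Hedged sketch: Latin hypercube sample of a 3-parameter space, then scaled
    # to physical bounds. Names/bounds are illustrative assumptions.
    import numpy as np
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=3, seed=42)
    unit_samples = sampler.random(n=100)             # 100 points in [0, 1)^3
    lower = np.array([1e-6, 0.1, 10.0])              # e.g. recharge, porosity, K
    upper = np.array([1e-4, 0.4, 500.0])
    samples = qmc.scale(unit_samples, lower, upper)  # map to parameter bounds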
Hypercube Expert System Shell - Applying Production Parallelism.
1989-12-01
possible processor organizations, or interconnection methods, for parallel architectures. The following are examples of commonly used interconnection...this timing analysis because match speed-up available from production parallelism is proportional to the average number of affected productions
ERIC Educational Resources Information Center
Chung, Grace H.; Flook, Lisa; Fuligni, Andrew J.
2009-01-01
The authors employed a daily diary method to assess daily frequencies of interparental and parent-adolescent conflict over a 2-week period and their implications for emotional distress across the high school years in a longitudinal sample of 415 adolescents from Latin American, Asian, and European backgrounds. Although family conflict remained…
Matched-filter algorithm for subpixel spectral detection in hyperspectral image data
NASA Astrophysics Data System (ADS)
Borough, Howard C.
1991-11-01
Hyperspectral imagery, spatial imagery with associated wavelength data for every pixel, offers significant potential for improved detection and identification of certain classes of targets. The ability to make spectral identifications of objects that only partially fill a single pixel (due to range or small size) is of considerable interest. Multiband imagery such as Landsat's 5- and 7-band imagery has demonstrated significant utility in the past. Hyperspectral imaging systems with hundreds of spectral bands offer improved performance. To explore the application of different subpixel spectral detection algorithms, a synthesized set of hyperspectral image data (hypercubes) was generated utilizing NASA earth resources and other spectral data. The data were modified using LOWTRAN 7 to model the illumination, atmospheric contributions, attenuations and viewing geometry, representing a nadir view from 10,000 ft altitude. The base hypercube (HC) represented 16 by 21 spatial pixels with 101 wavelength samples from 0.5 to 2.5 micrometers for each pixel. Insertions were made into the base data with random location, random pixel percentage, and random material. Fifteen different hypercubes were generated for blind testing of candidate algorithms. An algorithm utilizing a matched filter in the spectral dimension proved surprisingly good, yielding 100% detection for pixels filled greater than 40% with a standard camouflage paint, and a 50% probability of detection for pixels filled 20% with the paint, with no false alarms. The false alarm rate as a function of the number of spectral bands, over the range from 101 to 12 bands, was measured and found to increase from zero to 50%, illustrating the value of a large number of spectral bands. This test was on imagery without system noise; the next step is to incorporate typical system noise sources.
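A hedged Python sketch of a spectral matched filter of the kind described: each pixel spectrum is projected onto a background-whitened target signature, so background pixels score near zero and target-bearing pixels score roughly in proportion to their fill fraction. The synthetic data and dimensions are placeholders, not the paper's hypercubes:

    # Hedged sketch of a spectral matched filter for subpixel detection.
    import numpy as np

    rng = np.random.default_rng(1)
    n_pix, n_bands = 16 * 21, 101
    background = rng.normal(0.3, 0.02, size=(n_pix, n_bands))
    target = np.linspace(0.2, 0.6, n_bands)    # stand-in paint signature

    mu = background.mean(axis=0)
    cov = np.cov(background, rowvar=False) + 1e-6 * np.eye(n_bands)
    w = np.linalg.solve(cov, target - mu)      # matched-filter weights
    # Scores: ~0 for background pixels, ~fill fraction for target pixels.
    scores = (background - mu) @ w / ((target - mu) @ w)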
NASA Astrophysics Data System (ADS)
Allphin, Devin
Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by the underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying the issues inherent to CFD, one traditionally overlooked avenue of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning the proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing computational resources for other uses. To validate these inferences, a design optimization problem was presented, requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. These forces were determined using parallel surrogate and exact approximation methods, thereby demonstrating the comparative benefits of this technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design-space filling across four independent design-variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+, with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by applying the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
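A hedged sketch of the surrogate workflow described above, assuming scikit-learn's Gaussian process regressor as a stand-in for the Kriging step, and a cheap analytic placeholder for the CFD-derived aerodynamic force:

    # Hedged sketch: LHS over four design DOF, then a Kriging-type surrogate.
    # The "force" below is a placeholder, not a CFD result.
    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    X = qmc.LatinHypercube(d=4, seed=0).random(n=40)   # 40 design sites, 4 DOF
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 - X[:, 2] * X[:, 3]

    surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
    X_new = qmc.LatinHypercube(d=4, seed=1).random(n=5)
    y_hat, sigma = surrogate.predict(X_new, return_std=True)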
NASA Astrophysics Data System (ADS)
Fitzenz, D. D.; Nyst, M.; Apel, E. V.; Muir-Wood, R.
2014-12-01
The recent Canterbury earthquake sequence (CES) renewed public and academic awareness concerning the clustered nature of seismicity. Multiple event occurrence in short time and space intervals is reminiscent of aftershock sequences, but "aftershock" is a statistical definition, not a label one can give an earthquake in real time. Aftershocks are defined collectively as what creates the Omori event-rate decay after a large event, or as what is taken away as "dependent events" by a declustering method. It is noteworthy that, depending on the declustering method applied to the Canterbury earthquake sequence, the number of independent events varies widely. This lack of an unambiguous definition of aftershocks leads to the need to investigate the amount of clustering inherent in "declustered" risk models. This is the task we concentrate on in this contribution. We start from a background source model for the Canterbury region, in which (1) centroids of events of given magnitude are distributed using a Latin hypercube lattice, (2) orientations follow the range of preferential orientations determined from stress maps and focal mechanisms, (3) lengths are determined using the local scaling relationship, and (4) rates come from a and b values derived from the declustered pre-2010 catalog. We then proceed to create tens of thousands of realizations of 6- to 20-year periods, and we define criteria to identify which successions of events in the region would be perceived as a sequence. Note that the spatial clustering expected is a lower bound compared to a fully uniform distribution of events. Then we perform the same exercise with rates and b-values determined from the catalog including the CES. If the pre-2010 catalog were long (or rich) enough, then the "stationary" rates calculated from it would include the CES declustered events (by construction, regardless of the physical meaning of, or relationship between, those events). In regions of low seismicity rate (e.g., Canterbury before 2010), it is hard to know how long is long enough. Using simulations, we can look at the apparent activity rate for the region over a few years (for example, for events above M5.8), see how often it exceeds some level, and also how long those high-activity periods last on average.
NASA Technical Reports Server (NTRS)
Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan
1994-01-01
A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memories. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and the implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge-base partitions indicate that significant speed increases, including superlinear in some cases, are possible.
Optimisation and evaluation of hyperspectral imaging system using machine learning algorithm
NASA Astrophysics Data System (ADS)
Suthar, Gajendra; Huang, Jung Y.; Chidangil, Santhosh
2017-10-01
Hyperspectral imaging (HSI), also called imaging spectrometry, originated in remote sensing. Hyperspectral imaging is an emerging imaging modality for medical applications, especially in disease diagnosis and image-guided surgery. HSI acquires a three-dimensional dataset called a hypercube, with two spatial dimensions and one spectral dimension. The spatially resolved spectral imaging obtained by HSI provides diagnostic information about an object's physiology, morphology, and composition. The present work involves testing and evaluating the performance of a hyperspectral imaging system. The methodology involved manually measuring the reflectance of the object across many images or scans. The objects used for the evaluation of the system were cabbage and tomato. The data were then converted to the required format, and the analysis was done using machine learning algorithms. The machine learning algorithms applied were able to distinguish between the objects present in the hypercube obtained by the scan. It was concluded from the results that the system was working as expected, as observed from the different spectra obtained by the machine learning algorithms.
Leading countries in mental health research in Latin America and the Caribbean.
Razzouk, Denise; Zorzetto, Ricardo; Dubugras, Maria Thereza; Gerolin, Jerônimo; Mari, Jair de Jesus
2007-06-01
The prevalence and burden of mental disorders have been growing in Latin American and Caribbean countries, and research is an important tool for changing this scenario. The objective of this paper is to describe the development of mental health research in Latin American and Caribbean countries from 1995 to 2005. The indicators of productivity were based on the ISI Essential Science Indicators database. We compared the number of papers and citations, as well as the number of citations per paper, between 1995 and 2005 for each country ranked in the Essential Science Indicators. Eleven Latin American countries were ranked in the ISI database, and six of them demonstrated a higher level of development in mental health research: Argentina, Brazil, Chile, Colombia, Mexico, and Venezuela. Mexico produced the largest number of papers, while Brazil showed a larger number of citations per paper. Mental health research is still incipient in Latin American and Caribbean countries, and many challenges remain to be overcome. It is also necessary to establish research priorities, to allocate more funding, and to improve researchers' training in research methods and design.
NASA Astrophysics Data System (ADS)
Xiao, D.; Shi, Y.; Hoagland, B.; Del Vecchio, J.; Russo, T. A.; DiBiase, R. A.; Li, L.
2017-12-01
How do watershed hydrologic processes differ in catchments derived from different lithologies? This study compares two first-order, deciduous-forest watersheds in Pennsylvania: a sandstone watershed, Garner Run (GR, 1.34 km2), and a shale-derived watershed, Shale Hills (SH, 0.08 km2). Both watersheds are simulated using a combination of national datasets, field measurements, and a physics-based land surface hydrologic model, Flux-PIHM. We aim to evaluate the effects of lithology on watershed hydrology and to assess whether we can simulate a new watershed without intensive measurements, i.e., directly use calibration information from one watershed (SH) to reproduce the hydrologic dynamics of another (GR). Without any calibration, the model at GR based on national datasets and calibration information from SH cannot capture some discharge peaks or the baseflow during dry periods. The model prediction agrees well with the GR field discharge and soil moisture after calibrating the soil hydraulic parameters using the uncertainty-based Hornberger-Spear-Young algorithm and the Latin hypercube sampling method. In agreement with the field observations and national datasets, the difference in parameter values shows that the sandstone watershed has a larger average soil pore diameter, greater water storage created by porosity, lower water retention ability, and greater preferential flow. The water budget calculation shows that the riparian zone and the colluvial valley serve as buffer zones that store water at GR. Using the same procedure, we compared Flux-PIHM simulations with and without a field-measured surface boulder map at GR. When the boulder map is used, the prediction of areally averaged soil moisture is improved without performing extra calibration. When calibrated separately, the cases with and without the boulder map yield different calibration values, but their hydrologic predictions are similar, showing equifinality. The calibrated soil hydraulic parameter values in the with-boulder-map case are more physically plausible than those in the without-boulder-map case. We switched the topography and soil properties between GR and SH, and the results indicate that the hydrologic processes are more sensitive to changes in domain topography than to changes in soil properties.
Uncertainty Analysis of Simulated Hydraulic Fracturing
NASA Astrophysics Data System (ADS)
Chen, M.; Sun, Y.; Fu, P.; Carrigan, C. R.; Lu, Z.
2012-12-01
Artificial hydraulic fracturing is being used widely to stimulate production of oil, natural gas, and geothermal reservoirs with low natural permeability. Optimization of field design and operation is limited by the incomplete characterization of the reservoir, as well as the complexity of hydrological and geomechanical processes that control the fracturing. Thus, there are a variety of uncertainties associated with the pre-existing fracture distribution, rock mechanics, and hydraulic-fracture engineering that require evaluation of their impact on the optimized design. In this study, a multiple-stage scheme was employed to evaluate the uncertainty. We first define the ranges and distributions of 11 input parameters that characterize the natural fracture topology, in situ stress, geomechanical behavior of the rock matrix and joint interfaces, and pumping operation, to cover a wide spectrum of potential conditions expected for a natural reservoir. These parameters were then sampled 1,000 times in an 11-dimensional parameter space constrained by the specified ranges using the Latin-hypercube method. These 1,000 parameter sets were fed into the fracture simulators, and the outputs were used to construct three designed objective functions, i.e. fracture density, opened fracture length and area density. Using PSUADE, three response surfaces (11-dimensional) of the objective functions were developed and global sensitivity was analyzed to identify the most sensitive parameters for the objective functions representing fracture connectivity, which are critical for sweep efficiency of the recovery process. The second-stage high resolution response surfaces were constructed with dimension reduced to the number of the most sensitive parameters. An additional response surface with respect to the objective function of the fractal dimension for fracture distributions was constructed in this stage. Based on these response surfaces, comprehensive uncertainty analyses were conducted among input parameters and objective functions. In addition, reduced-order emulation models resulting from this analysis can be used for optimal control of hydraulic fracturing. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Male Circumcision and the Epidemic Emergence of HIV-2 in West Africa
Hewlett, Barry Stephen; Camacho, Ricardo Jorge
2016-01-01
Background Epidemic HIV-2 (groups A and B) emerged in humans circa 1930–40. Its closest ancestors are SIVsmm infecting sooty mangabeys from southwestern Côte d'Ivoire. The earliest large-scale serological surveys of HIV-2 in West Africa (1985–91) show a patchy spread. Côte d'Ivoire and Guinea-Bissau had the highest prevalence rates by then, and phylogeographical analysis suggests they were the earliest epicenters. Wars and parenteral transmission have been hypothesized to have promoted HIV-2 spread. Male circumcision (MC) is known to correlate negatively with HIV-1 prevalence in Africa, but studies examining this issue for HIV-2 are lacking. Methods We reviewed published HIV-2 serosurveys for 30 cities of all West African countries and obtained credible estimates of real prevalence through Bayesian estimation. We estimated past MC rates of 218 West African ethnic groups, based on ethnographic literature and fieldwork. We collected demographic tables specifying the ethnic partition in cities. Uncertainty was incorporated by defining plausible ranges of parameters (e.g. timing of introduction, proportion circumcised). We generated 1,000 sets of past MC rates per city using Latin Hypercube Sampling with different parameter combinations, and explored the correlation between HIV-2 prevalence and estimated MC rate (both logit-transformed) in the 1,000 replicates. Results and Conclusions Our survey reveals that, in the early 20th century, MC was far less common and geographically more variable than nowadays. HIV-2 prevalence in 1985–91 and MC rates in 1950 were negatively correlated (Spearman rho = -0.546, IQR: -0.553–-0.546, p≤0.0021). Guinea-Bissau and Côte d'Ivoire cities had markedly lower MC rates. In addition, MC was uncommon in rural southwestern Côte d'Ivoire in 1930. The differential HIV-2 spread in West Africa correlates with different historical MC rates. We suggest HIV-2 only formed early substantial foci in cities with substantial uncircumcised populations. Lack of MC in rural areas exposed to bushmeat may have had a role in successful HIV-2 emergence. PMID:27926927
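A hedged sketch of the reported correlation computation, with fabricated placeholder values purely to show the mechanics (note that the logit transform is monotone, so it does not change Spearman's rho; it is applied here only to mirror the paper's description):

    # Spearman correlation between logit-transformed MC rates and prevalences.
    # All values are fabricated placeholders, not the study's data.
    import numpy as np
    from scipy.stats import spearmanr
    from scipy.special import logit

    mc_rate = np.array([0.15, 0.30, 0.55, 0.70, 0.85, 0.95])   # circa-1950 rates
    prevalence = np.array([0.08, 0.05, 0.03, 0.02, 0.01, 0.005])

    rho, p_value = spearmanr(logit(mc_rate), logit(prevalence))
    print(rho, p_value)   # a negative rho, as the study reports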
ERIC Educational Resources Information Center
Holck, Lasse; Saiz, Monika Contreras
2010-01-01
This article compares the methods and means employed by the state to enforce the education of (semi-)autonomous indigenous groups in southern Chile and northwestern Mexico (Sonora), border regions in the Latin American periphery, covering the transition from colonial times to the consolidation of independent republics until the middle of the…
Sperandio, Naiara; Morais, Dayane de Castro; Priore, Silvia Eloiza
2018-02-01
The scope of this systematic review was to compare the food insecurity scales validated and used in the countries of Latin America and the Caribbean, and to analyze the methods used in the validation studies. A search was conducted in the Lilacs, SciELO and Medline electronic databases. The publications were pre-selected by titles and abstracts, and subsequently by a full reading. Of the 16,325 studies reviewed, 14 were selected. Twelve validated scales were identified, for the following countries: Venezuela, Brazil, Colombia, Bolivia, Ecuador, Costa Rica, Mexico, Haiti, the Dominican Republic, Argentina and Guatemala. Besides these, there is the Latin American and Caribbean scale, whose scope is regional. The scales differed in the standard of reference used, the number of questions, and the diagnosis of insecurity. The methods used by the studies for internal validation were the calculation of Cronbach's alpha and the Rasch model; for external validation, the authors calculated association and/or correlation with socioeconomic and food consumption variables. The successful experience of Latin America and the Caribbean in the development of national and regional scales can be an example for other countries that do not have this important indicator capable of measuring the phenomenon of food insecurity.
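For reference, the Cronbach's alpha statistic named above can be computed as follows; the item responses are fabricated placeholders, and the Rasch-model step is not reproduced:

    # Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/total variance).
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: (n_respondents, n_items) matrix of item scores."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))                         # shared trait
    responses = latent + rng.normal(scale=0.8, size=(200, 8))  # 8 correlated items
    print(round(cronbach_alpha(responses), 3))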
NASA Astrophysics Data System (ADS)
Marchand, Pierre; Brisebois, Alexandre; Bédard, Yvan; Edwards, Geoffrey
This paper presents the results obtained with a new type of spatiotemporal topological dimension implemented within a hypercube, i.e., within a multidimensional database (MDDB) structure formed by the conjunction of several thematic, spatial and temporal dimensions. Our goal is to support efficient SpatioTemporal Exploration and Analysis (STEA) in the context of the Automatic Position Reporting System (APRS), the worldwide amateur radio system for position report transmission. Mobile APRS stations are equipped with GPS navigation systems to provide real-time positioning reports. Previous research on the multidimensional approach has shown good potential for spatiotemporal exploration and analysis despite a lack of explicit topological operators (spatial, temporal and spatiotemporal). Our project implemented such operators through a hierarchy of operators that are applied to pairs of instances of objects. At the top of the hierarchy, users can use simple operators such as "same place", "same time" or "same time, same place". As they drill down into the hierarchy, more detailed topological operators become available, such as "adjacent immediately after" or "touch during". This hierarchy is structured according to four levels of granularity based on cognitive models, generalized relationships and formal models of topological relationships. In this paper, we also describe the generic approach that allows efficient STEA within the multidimensional approach. Finally, we demonstrate that such an implementation offers query run times that make it possible to maintain a "train of thought" during exploration and analysis operations, as they are compatible with Newell's cognitive band (query runtime < 10 s) (Newell, A., 1990. Unified theories of cognition. Harvard University Press, Cambridge MA, 549 p.).
Urquia, Marcelo L.
2015-01-01
Background We delved into the selective migration hypothesis on health by comparing birth outcomes of Latin American immigrants giving birth in two receiving countries with dissimilar immigration admission policies: Canada and Spain. We hypothesized that a stronger immigrant selection in Canada will reflect more favourable outcomes among Latin Americans giving birth in Canada than among their counterparts giving birth in Spain. Materials and Methods We conducted a cross-sectional bi-national comparative study. We analyzed birth data of singleton infants born in Canada (2000–2005) (N = 31,767) and Spain (1998–2007) (N = 150,405) to mothers born in Spanish-speaking Latin American countries. We compared mean birthweight at 37–41 weeks gestation, and low birthweight and preterm birth rates between Latin American immigrants to Canada vs. Spain. Regression analysis for aggregate data was used to obtain Odds Ratios and Mean birthweight differences adjusted for infant sex, maternal age, parity, marital status, and father born in same source country. Results Latin American women in Canada had heavier newborns than their same-country counterparts giving birth in Spain, overall [adjusted mean birthweight difference: 101 grams; 95% confidence interval (CI): 98, 104], and within each maternal country of origin. Latin American women in Canada had fewer low birthweight and preterm infants than those giving birth in Spain [adjusted Odds Ratio: 0.88; 95% CI: 0.82, 0.94 for low birthweight, and 0.88; 95% CI: 0.84, 0.93 for preterm birth, respectively]. Conclusion Latin American immigrant women had better birth outcomes in Canada than in Spain, suggesting a more selective migration in Canada than in Spain. PMID:26308857
Breast Cancer in Young Women in Latin America: An Unmet, Growing Burden
Aguila, Christian; Magallanes-Hoyos, Maria C.; Mohar, Alejandro; Bargalló, Enrique; Meneses, Abelardo; Cazap, Eduardo; Gomez, Henry; López-Carrillo, Lizbeth; Chávarri-Guerra, Yanin; Murillo, Raúl; Barrios, Carlos
2013-01-01
Background. Breast cancer (BC) is the leading cause of malignancy-related deaths among women aged ≤45 years. There are unexplored and uncertain issues for BC in this particular group in Latin America. The aim of this study is to evaluate BC incidence and mortality among young women and related clinicopathological and survivorship aspects in this region. Materials and Methods. Data were obtained from Globocan 2008 and the International Agency for Research on Cancer's Cancer Incidence in Five Continents series plus databases. We requested collaboration from the 12 different national cancer institutes in Latin America through SLACOM, the Latin American and Caribbean Society of Medical Oncology, and conducted a systematic literature review to obtain local data regarding the prevalence of BC among young women and their characteristics, outcomes, and survivorship-related issues. Results. BC incidence and mortality proportions for Latin American women aged <44 years were higher when compared with those of developed countries (20% vs. 12% and 14% vs. 7%, respectively). We found only a few Latin American series addressing this topic, and prevalence varied between 8% and 14%. Stage II and III disease, high histological grade, and triple-negative and HER2 BC were features frequently observed among young Latin American BC patients. Conclusion. The rising incidence and mortality of BC in young Latin American women is a call to action in the region. It is necessary to monitor the epidemiological and clinical data through reliable cancer registries and to consider the implementation of protocols for education of patients and health professionals. This unmet, growing burden must be considered as a top priority of the national programs in the fight against BC, and models of specialized units should be implemented for this particular group of patients to provide better care for this emergent challenge. PMID:24277771
Predictive Array Design. A method for sampling combinatorial chemistry library space.
Lipkin, M J; Rose, V S; Wood, J
2002-01-01
A method, Predictive Array Design, is presented for sampling combinatorial chemistry space and selecting a sub-array for synthesis, based on the experimental design method of Latin squares. The method is appropriate for libraries with three sites of variation. Libraries with four sites of variation can be designed using the Graeco-Latin square. Simulated annealing is used to optimise the physicochemical property profile of the sub-array. The sub-array can be used to predict the activity of compounds in the all-combinations array, if we assume that each monomer makes a relatively constant contribution to activity and that the activity of a compound is the sum of the activities of its constituent monomers.
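A hedged Python sketch of the combinatorial backbone of the method: a cyclic Latin square indexing which monomer combinations to synthesize for a library with three sites of variation (the annealing-based property optimisation is not reproduced):

    # Cyclic n x n Latin square: entry (r, c) selects the monomer at site 3,
    # given monomer r at site 1 and monomer c at site 2.
    def latin_square(n: int) -> list[list[int]]:
        return [[(r + c) % n for c in range(n)] for r in range(n)]

    # With n monomers per site, the square picks n**2 of the n**3 possible
    # combinations, each site-3 monomer appearing once per row and column.
    for row in latin_square(4):
        print(row)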
NASA Astrophysics Data System (ADS)
Wang, H.; Asefa, T.
2017-12-01
A real-time decision support tool (DST) for a water supply system would consider system uncertainties, e.g., uncertain streamflow and demand, as well as operational constraints and infrastructure outages (e.g., a pump station shutdown, or an offline reservoir due to maintenance). Such a DST is often used by water managers for resource allocation and delivery to customers. Although most seasonal DSTs used by water managers recognize these system uncertainties and operational constraints, most use only historical information or assume a deterministic outlook of the water supply system. This study presents a seasonal DST that incorporates rainfall/streamflow uncertainties, a seasonal demand outlook, and system operational constraints. Large-scale climate information is captured through a rainfall simulator driven by a Bayesian non-homogeneous Markov chain Monte Carlo model that allows non-stationary transition probabilities contingent on the Nino 3.4 index. An ad hoc seasonal demand forecasting model considers weather conditions explicitly and socio-economic factors implicitly. Latin hypercube sampling is employed to effectively sample the probability density functions of flow and demand. Seasonal system operation is modelled as a mixed-integer optimization problem that aims at minimizing operational costs. It embeds the flexibility of modifying operational rules at different components, e.g., surface water treatment plants, desalination facilities, and groundwater pumping stations. The proposed framework is illustrated at a wholesale water supplier in the southeastern United States, Tampa Bay Water. The use of the tool is demonstrated in providing operational guidance in a typical drawdown and refill cycle of a regional reservoir. The DST provided (1) a probabilistic outlook of reservoir storage and the chance of a successful refill by the end of the rainy season, and (2) operational expectations for large infrastructure (e.g., high-service pumps and booster stations) throughout the season. Other potential uses of such a DST are also discussed.
NASA Astrophysics Data System (ADS)
Reisner, J. M.; Dubey, M. K.
2010-12-01
To both quantify and reduce uncertainty in ice activation parameterizations for stratus clouds occurring in the temperature range between -5 to -10 C ensemble simulations of an ISDAC golden case have been conducted. To formulate the ensemble, three parameters found within an ice activation model have been sampled using a Latin hypercube technique over a parameter range that induces large variability in both number and mass of ice. The ice activation model is contained within a Lagrangian cloud model that simulates particle number as a function of radius for cloud ice, snow, graupel, cloud, and rain particles. A unique aspect of this model is that it produces very low levels of numerical diffusion that enable the model to accurately resolve the sharp cloud edges associated with the ISDAC stratus deck. Another important aspect of the model is that near the cloud edges the number of particles can be significantly increased to reduce sampling errors and accurately resolve physical processes such as collision-coalescence that occur in this region. Thus, given these relatively low numerical errors, as compared to traditional bin models, the sensitivity of a stratus deck to changes in parameters found within the activation model can be examined without fear of numerical contamination. Likewise, once the ensemble has been completed, ISDAC observations can be incorporated into a Kalman filter to optimally estimate the ice activation parameters and reduce overall model uncertainty. Hence, this work will highlight the ability of an ensemble Kalman filter system coupled to a highly accurate numerical model to estimate important parameters found within microphysical parameterizations containing high uncertainty.
Landscape patterns and soil organic carbon stocks in agricultural bocage landscapes
NASA Astrophysics Data System (ADS)
Viaud, Valérie; Lacoste, Marine; Michot, Didier; Walter, Christian
2014-05-01
Soil organic carbon (SOC) has a crucial impact on global carbon storage at the world scale. SOC spatial variability is controlled by the landscape patterns resulting from the continuous interactions between the physical environment and society. Natural and anthropogenic processes occurring and interplaying at the landscape scale, such as soil redistribution in the lateral and vertical dimensions by tillage and water erosion, or the spatial differentiation of land use and land management practices, strongly affect SOC dynamics. Inventories of SOC stocks, reflecting their spatial distribution, are thus key elements for developing relevant management strategies to improve carbon sequestration and mitigate climate change and soil degradation. This study aims to quantify SOC stocks and their spatial distribution in a 1,000-ha agricultural bocage landscape with dairy production as the dominant farming system (Zone Atelier Armorique, LTER Europe, NW France). The site is characterized by high heterogeneity over short distances due to a high diversity of soils with varying waterlogging, soil parent material, topography, land use and hedgerow density. SOC content and stocks were measured down to 105-cm depth at 200 sampling locations selected using conditioned Latin hypercube sampling. Additional sampling was designed specifically to explore the SOC distribution near hedges: 112 points were sampled at fixed distances on 14 transects perpendicular to hedges. We illustrate the heterogeneity of the spatial and vertical distribution of SOC stocks at the landscape scale, and quantify SOC stocks in the various landscape components. Using multivariate statistics, we discuss the variability and co-variability of the existing spatial organization of cropping systems, environmental factors, and SOC stocks across the landscape. Ultimately, our results may contribute to improving regional or national digital soil mapping approaches, by considering the distribution of SOC stocks within each modeling unit and by accounting for the impact of sensitive ecosystems.
Abdulhameed, Mohanad F; Habib, Ihab; Al-Azizz, Suzan A; Robertson, Ian
2018-02-01
Cystic echinococcosis (CE) is a highly endemic parasitic zoonosis in Iraq, with substantial impacts on livestock productivity and human health. The objectives of this study were to document the abattoir-based occurrence of CE in the marketed offal of sheep in Basrah province, Iraq, and to estimate, using a probabilistic modelling approach, the direct economic losses due to hydatid cysts. Based on detailed visual meat inspection, results from an active abattoir survey in this study revealed the detection of hydatid cysts in 7.3% (95% CI: 5.4; 9.6) of 631 examined sheep carcasses. Post-mortem lesions of hydatid cysts were concurrently present in the livers and lungs of more than half (54.3%; 25/46) of the positive sheep. Direct economic losses due to hydatid cysts in marketed offal were estimated using data from government reports, the abattoir survey completed in this study, and the expert opinions of local veterinarians and butchers. A Monte Carlo simulation model was developed in a spreadsheet utilizing Latin hypercube sampling to account for uncertainty in the input parameters. The model estimated the average annual economic losses associated with hydatid cysts in the liver and lungs of sheep marketed for human consumption in Basrah at US$72,470 (90% Confidence Interval (CI): ±11,302). The mean proportion of annual losses in meat product value (carcasses and offal) due to hydatid cysts in the liver and lungs of sheep marketed in Basrah province was estimated as 0.42% (90% CI: ±0.21). These estimates suggest that CE is responsible for considerable livestock-associated monetary losses in the south of Iraq. These findings can be used to inform different regional CE control program options in Iraq.
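A hedged Python sketch of the structure of such a spreadsheet loss model: Latin hypercube draws over uncertain inputs propagated through a simple loss equation. The distributions and values below are invented placeholders, not the study's calibrated inputs:

    # Latin hypercube Monte Carlo over three uncertain inputs; all distributions
    # and numbers are illustrative assumptions, not the study's data.
    import numpy as np
    from scipy.stats import qmc, norm, uniform

    n = 10_000
    u = qmc.LatinHypercube(d=3, seed=7).random(n)
    n_sheep = uniform(150_000, 100_000).ppf(u[:, 0])   # sheep marketed per year
    prevalence = norm(0.073, 0.011).ppf(u[:, 1])       # CE prevalence
    loss_per_case = uniform(3.0, 4.0).ppf(u[:, 2])     # US$ offal loss per case

    annual_loss = n_sheep * np.clip(prevalence, 0, 1) * loss_per_case
    print(f"mean US${annual_loss.mean():,.0f}, 90% CI half-width "
          f"US${1.645 * annual_loss.std():,.0f}")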
NASA Astrophysics Data System (ADS)
Houska, Tobias; Kraus, David; Kiese, Ralf; Breuer, Lutz
2017-07-01
This study presents the results of a combined measurement and modelling strategy to analyse N2O and CO2 emissions from adjacent arable land, forest and grassland sites in Hesse, Germany. The measured emissions reveal seasonal patterns and management effects, including fertilizer application, tillage, harvest and grazing. The measured annual N2O fluxes are 4.5, 0.4 and 0.1 kg N ha-1 a-1, and the CO2 fluxes are 20.0, 12.2 and 3.0 t C ha-1 a-1 for the arable land, grassland and forest sites, respectively. An innovative model-data fusion concept based on a multicriteria evaluation (soil moisture at different depths, yield, CO2 and N2O emissions) is used to rigorously test the LandscapeDNDC biogeochemical model. The model is run in a Latin-hypercube-based uncertainty analysis framework to constrain model parameter uncertainty and derive behavioural model runs. The results indicate that the model is generally capable of predicting trace gas emissions, as evaluated with RMSE as the objective function. The model shows a reasonable performance in simulating the ecosystem C and N balances. The model-data fusion concept helps to detect remaining model errors, such as missing (e.g. freeze-thaw cycling) or incomplete model processes (e.g. respiration rates after harvest). This concept further elucidates the identification of missing model input sources (e.g. the uptake of N through shallow groundwater on grassland during the vegetation period) and uncertainty in the measured validation data (e.g. forest N2O emissions in winter months). Guidance is provided to improve the model structure and field measurements to further advance landscape-scale model predictions.
Park, Daeryong; Roesner, Larry A
2013-09-01
The performance of stormwater best management practices (BMPs) is affected by BMP geometric and hydrologic factors. The objective of this study was to investigate the effect of BMP surface area and inflow on BMP performance using the k-C* model with uncertainty analysis. Observed total suspended solids (TSS) data from the detention basin and retention pond data sets in the International Stormwater BMP Database were used to build and evaluate the model. Detention basins are regarded as dry ponds because they do not always hold water, whereas retention ponds have a permanent pool and are considered wet ponds. In this study, Latin hypercube sampling (LHS) was applied to consider uncertainty in both the influent event mean concentration (EMC), C(in), and the areal removal constant, k. The latter was estimated from the hydraulic loading rate, q, through a power-function relationship. Results show that the effluent EMC, C(out), decreased as inflow decreased and as BMP surface area increased, in both detention basins and retention ponds. However, the way C(out) changed with inflow and BMP surface area for detention basins differed from that for retention ponds. Specifically, C(in) was more dominantly associated with the performance of the k-C* model for detention basins than were BMP surface area and inflow. For retention ponds, however, the results suggest that BMP surface area and inflow, as well as C(in), influenced changes in C(out). These results suggest that the sensitive factors in the performance of the k-C* model are limited to C(in) for detention basins, whereas BMP surface area, inflow, and C(in) are all important for retention ponds.
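For context, the k-C* model is commonly written C_out = C* + (C_in - C*) exp(-k/q). A minimal Python sketch, in which the power-function relationship k = a * q**b and all numbers are illustrative assumptions rather than the study's fitted values:

    # k-C* first-order removal model with an assumed power law k = a * q**b.
    import numpy as np

    def k_c_star(c_in, q, a=15.0, b=0.5, c_star=10.0):
        """TSS effluent EMC (mg/L) given influent EMC c_in and hydraulic
        loading rate q (inflow per unit BMP surface area)."""
        k = a * q ** b
        return c_star + (c_in - c_star) * np.exp(-k / q)

    q = np.array([10.0, 50.0, 200.0])    # inflow / surface area
    print(k_c_star(c_in=120.0, q=q))     # C_out rises as loading increases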
Using palaeoclimate data to improve models of the Antarctic Ice Sheet
NASA Astrophysics Data System (ADS)
Phipps, Steven; King, Matt; Roberts, Jason; White, Duanne
2017-04-01
Ice sheet models are the most descriptive tools available to simulate the future evolution of the Antarctic Ice Sheet (AIS), including its contribution towards changes in global sea level. However, our knowledge of the dynamics of the coupled ice-ocean-lithosphere system is inevitably limited, in part due to a lack of observations. Furthermore, to build computationally efficient models that can be run for multiple millennia, it is necessary to use simplified descriptions of ice dynamics. Ice sheet modelling is therefore an inherently uncertain exercise. The past evolution of the AIS provides an opportunity to constrain the description of physical processes within ice sheet models and, therefore, to constrain our understanding of the role of the AIS in driving changes in global sea level. We use the Parallel Ice Sheet Model (PISM) to demonstrate how palaeoclimate data can improve our ability to predict the future evolution of the AIS. A 50-member perturbed-physics ensemble is generated, spanning uncertainty in the parameterisations of three key physical processes within the model: (i) the stress balance within the ice sheet, (ii) basal sliding and (iii) calving of ice shelves. A Latin hypercube approach is used to optimally sample the range of uncertainty in parameter values. This perturbed-physics ensemble is used to simulate the evolution of the AIS from the Last Glacial Maximum (approximately 21,000 years ago) to present. Palaeoclimate records are then used to determine which ensemble members are the most realistic. This allows us to use data on past climates to directly constrain our understanding of the past contribution of the AIS towards changes in global sea level. Critically, it also allows us to determine which ensemble members are likely to generate the most realistic projections of the future evolution of the AIS.
Using paleoclimate data to improve models of the Antarctic Ice Sheet
NASA Astrophysics Data System (ADS)
King, M. A.; Phipps, S. J.; Roberts, J. L.; White, D.
2016-12-01
Ice sheet models are the most descriptive tools available to simulate the future evolution of the Antarctic Ice Sheet (AIS), including its contribution towards changes in global sea level. However, our knowledge of the dynamics of the coupled ice-ocean-lithosphere system is inevitably limited, in part due to a lack of observations. Furthermore, to build computationally efficient models that can be run for multiple millennia, it is necessary to use simplified descriptions of ice dynamics. Ice sheet modeling is therefore an inherently uncertain exercise. The past evolution of the AIS provides an opportunity to constrain the description of physical processes within ice sheet models and, therefore, to constrain our understanding of the role of the AIS in driving changes in global sea level. We use the Parallel Ice Sheet Model (PISM) to demonstrate how paleoclimate data can improve our ability to predict the future evolution of the AIS. A large, perturbed-physics ensemble is generated, spanning uncertainty in the parameterizations of four key physical processes within ice sheet models: ice rheology, ice shelf calving, and the stress balances within ice sheets and ice shelves. A Latin hypercube approach is used to optimally sample the range of uncertainty in parameter values. This perturbed-physics ensemble is used to simulate the evolution of the AIS from the Last Glacial Maximum (approximately 21,000 years ago) to present. Paleoclimate records are then used to determine which ensemble members are the most realistic. This allows us to use data on past climates to directly constrain our understanding of the past contribution of the AIS towards changes in global sea level. Critically, it also allows us to determine which ensemble members are likely to generate the most realistic projections of the future evolution of the AIS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bixler, Nathan E.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie
2014-02-01
This paper describes the convergence of MELCOR Accident Consequence Code System, Version 2 (MACCS2) probabilistic results of offsite consequences for the uncertainty analysis of the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout scenario at the Peach Bottom Atomic Power Station. The consequence metrics evaluated are individual latent cancer fatality (LCF) risk and individual early fatality risk. Consequence results are presented as conditional risk (i.e., assuming the accident occurs, risk per event) to individuals of the public as a result of the accident. In order to verify convergence for this uncertainty analysis, as recommended by the Nuclear Regulatory Commission's Advisory Committee on Reactor Safeguards, a 'high' source term from the original population of Monte Carlo runs was selected to be used for: (1) a study of the distribution of consequence results stemming solely from epistemic uncertainty in the MACCS2 parameters (i.e., separating out the effect of source term uncertainty), and (2) a comparison between Simple Random Sampling (SRS) and Latin Hypercube Sampling (LHS) in order to validate the original results obtained with LHS. Three replicates (each using a different random seed) of size 1,000 each using LHS, and another set of three replicates of size 1,000 using SRS, were analyzed. The results show that the LCF risk results are well converged with either LHS or SRS sampling. The early fatality risk results are less well converged at radial distances beyond 2 miles, and this is expected due to the sparse data (predominance of "zero" results).
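The LHS-versus-SRS comparison can be illustrated with a toy experiment: estimate the mean of a cheap stand-in "consequence" function with three replicates of each sampling scheme and compare the replicate-to-replicate spread. A hedged Python sketch (the integrand is not the MACCS2 model):

    # Replicate-to-replicate spread of a mean estimate under SRS vs LHS.
    import numpy as np
    from scipy.stats import qmc

    def model(u):                      # toy stand-in for the consequence model
        return np.exp(-u.sum(axis=1)) * (1 + 0.5 * u[:, 0])

    def replicate_means(sampler, reps=3, n=1000):
        return [model(sampler(r, n)).mean() for r in range(reps)]

    srs = replicate_means(lambda r, n: np.random.default_rng(r).random((n, 4)))
    lhs = replicate_means(lambda r, n: qmc.LatinHypercube(d=4, seed=r).random(n))
    print("SRS spread:", np.ptp(srs), " LHS spread:", np.ptp(lhs))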
Constructing Rigorous and Broad Biosurveillance Networks for Detecting Emerging Zoonotic Outbreaks
Brown, Mac; Moore, Leslie; McMahon, Benjamin; Powell, Dennis; LaBute, Montiago; Hyman, James M.; Rivas, Ariel; Jankowski, Mark; Berendzen, Joel; Loeppky, Jason; Manore, Carrie; Fair, Jeanne
2015-01-01
Determining optimal surveillance networks for an emerging pathogen is difficult, since it is not known beforehand what the characteristics of the pathogen will be or where it will emerge. The resources for surveillance of infectious diseases in animals and wildlife are often limited, and mathematical modeling can play a supporting role in examining a wide range of scenarios of pathogen spread. We demonstrate how a hierarchy of mathematical and statistical tools can be used in surveillance planning to help guide successful surveillance and mitigation policies for a wide range of zoonotic pathogens. The model forecasts can help clarify the complexities of potential scenarios and optimize biosurveillance programs for rapidly detecting infectious diseases. Using the highly pathogenic zoonotic H5N1 avian influenza 2006-2007 epidemic in Nigeria as an example, we determined the risk of infection for localized areas in an outbreak and designed biosurveillance stations that are effective for different pathogen strains and a range of possible outbreak locations. We created a general multi-scale, multi-host stochastic SEIR epidemiological network model, with both short- and long-range movement, to simulate the spread of an infectious disease through Nigerian human, poultry, backyard duck, and wild bird populations. We chose parameter ranges specific to avian influenza (but not to a particular strain) and used a Latin hypercube sample experimental design to investigate epidemic predictions in a thousand simulations. We ranked the risk of local regions by the number of times they became infected in the ensemble of simulations. These spatial statistics were then compiled into a potential risk map of infection. Finally, we validated the results against a known outbreak, using spatial analysis of all the simulation runs to show that the progression matched closely the observed locations of the farms infected in the 2006-2007 epidemic. PMID:25946164
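A hedged sketch of the epidemiological core named above: a single-population deterministic SEIR model integrated with SciPy. The multi-host, spatial, stochastic structure of the paper's network model is not reproduced, and the rates are illustrative, not avian-influenza estimates:

    # Deterministic SEIR model (fractions of the population in each state).
    import numpy as np
    from scipy.integrate import solve_ivp

    def seir(t, y, beta=0.6, sigma=1/4, gamma=1/7):
        s, e, i, r = y
        return [-beta * s * i,                 # susceptible -> exposed
                beta * s * i - sigma * e,      # exposed -> infectious
                sigma * e - gamma * i,         # infectious -> recovered
                gamma * i]

    sol = solve_ivp(seir, (0, 120), [0.999, 0.0, 0.001, 0.0],
                    t_eval=np.linspace(0, 120, 121))
    print("peak infectious fraction:", sol.y[2].max())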
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it simultaneously generates three philosophically different families of global sensitivity metrics: (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales, the VARS approach), (2) variance-based total-order effects (the Sobol approach), and (3) derivative-based elementary effects (the Morris approach). VARS-TOOL also offers two novel features. The first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA as the sample size increases for any given case study. VARS-TOOL has been shown to achieve robust and stable results with sample sizes (numbers of model runs) 1-2 orders of magnitude smaller than those of alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development, and new capabilities and features are forthcoming.
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII-based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty, and flood forecasting uncertainty in a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII-based sampling approach over LHS: (1) it is more effective and efficient; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter. (2) The Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII-based sampling, and their Pareto-optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets. (3) The parameter posterior distributions from ɛ-NSGAII-based sampling are concentrated in the appropriate ranges rather than uniform, in keeping with their physical significance, and parameter uncertainties are reduced significantly. (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB), and average deviation amplitude (D). Flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII-based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the GLUE framework, and it could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
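A hedged sketch of the GLUE side of the comparison, assuming numpy: LHS draws parameter sets, a likelihood measure (here Nash-Sutcliffe efficiency) screens them against observations, and the behavioral sets yield prediction bounds. The two-parameter recession model and the 0.8 threshold are stand-ins, not the paper's Xinanjiang setup:

    import numpy as np

    rng = np.random.default_rng(2)

    def model(theta, t):
        """Toy hydrologic stand-in: two-parameter recession curve."""
        a, b = theta
        return a * np.exp(-b * t)

    t = np.linspace(0, 10, 50)
    obs = model((5.0, 0.3), t) + rng.normal(0, 0.2, t.size)

    # LHS prior sample over (a, b)
    n = 5000
    u = np.empty((n, 2))
    for j in range(2):
        u[:, j] = (rng.permutation(n) + rng.uniform(size=n)) / n
    theta = np.column_stack([1 + 9 * u[:, 0], 0.05 + 0.95 * u[:, 1]])

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency as an informal likelihood measure."""
        return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    L = np.array([nse(model(th, t), obs) for th in theta])
    behavioral = theta[L > 0.8]                     # GLUE behavioral threshold
    sims = np.array([model(th, t) for th in behavioral])
    lo, hi = np.percentile(sims, [5, 95], axis=0)   # 90% prediction band
    print(len(behavioral), "behavioral sets; mean band width:", (hi - lo).mean())

The paper's point is that replacing the uniform LHS draw with ɛ-NSGAII concentrates sampling where multiple metrics are jointly good, so fewer model runs are wasted on non-behavioral sets.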
Effective implementation of wavelet Galerkin method
NASA Astrophysics Data System (ADS)
Finěk, Václav; Šimunková, Martina
2012-11-01
It was proved by W. Dahmen et al. that an adaptive wavelet scheme is asymptotically optimal for a wide class of elliptic equations. This scheme approximates the solution u by a linear combination of N wavelets, and a benchmark for its performance is the best N-term approximation, which is obtained by retaining the N largest wavelet coefficients of the unknown solution. Moreover, the number of arithmetic operations needed to compute the approximate solution is proportional to N. The most time-consuming part of this scheme is the approximate matrix-vector multiplication. In this contribution, we introduce our implementation of the wavelet Galerkin method for the Poisson equation -Δu = f on the hypercube with homogeneous Dirichlet boundary conditions. In our implementation, we identified the nonzero elements of the stiffness matrix corresponding to the above problem, and we perform the matrix-vector multiplication only with these nonzero elements.
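The best N-term benchmark mentioned above can be illustrated with a plain-numpy orthonormal Haar transform: decompose, keep only the N largest coefficients, reconstruct, and measure the error. This illustrates only the benchmark, not the adaptive scheme or the Galerkin solver of the contribution:

    import numpy as np

    def haar_decompose(x):
        """Full 1-D orthonormal Haar transform; length must be a power of 2."""
        x = x.astype(float).copy()
        coeffs, n = [], x.size
        while n > 1:
            n //= 2
            s = (x[:2 * n:2] + x[1:2 * n:2]) / np.sqrt(2)  # smooth part
            d = (x[:2 * n:2] - x[1:2 * n:2]) / np.sqrt(2)  # detail part
            coeffs.append(d)
            x[:n] = s
        coeffs.append(x[:1])
        return coeffs

    def haar_reconstruct(coeffs):
        x = coeffs[-1].copy()
        for d in reversed(coeffs[:-1]):
            out = np.empty(2 * d.size)
            out[0::2] = (x + d) / np.sqrt(2)
            out[1::2] = (x - d) / np.sqrt(2)
            x = out
        return x

    f = np.sin(np.linspace(0, 4 * np.pi, 1024)) ** 3
    c = haar_decompose(f)
    flat = np.concatenate(c)
    N = 40
    thresh = np.sort(np.abs(flat))[-N]      # keep the N largest coefficients
    c_N = [np.where(np.abs(a) >= thresh, a, 0.0) for a in c]
    err = np.linalg.norm(f - haar_reconstruct(c_N)) / np.linalg.norm(f)
    print(f"relative error with {N} of {flat.size} coefficients: {err:.3e}")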
NASA Tech Briefs, November/December 1986, Special Edition
NASA Technical Reports Server (NTRS)
1986-01-01
Topics: Computing: The View from NASA Headquarters; Earth Resources Laboratory Applications Software: Versatile Tool for Data Analysis; The Hypercube: Cost-Effective Supercomputing; Artificial Intelligence: Rendezvous with NASA; NASA's Ada Connection; COSMIC: NASA's Software Treasurehouse; Golden Oldies: Tried and True NASA Software; Computer Technical Briefs; NASA TU Services; Digital Fly-by-Wire.
NASA Astrophysics Data System (ADS)
Gunduz, Mustafa Emre
Many government agencies and corporations around the world have found the unique capabilities of rotorcraft indispensable. Incorporating such capabilities into rotorcraft design poses extra challenges because it is a complicated multidisciplinary process. The concept of applying several disciplines to the design and optimization processes may not be new, but it does not currently seem to be widely accepted in industry. The reason for this might be the lack of well-known tools for realizing a complete multidisciplinary design and analysis of a product. This study aims to propose a method that enables engineers in some design disciplines to perform a fairly detailed analysis and optimization of a design using commercially available software as well as codes developed at Georgia Tech. The ultimate goal is that, when the system is set up properly, the CAD model of the design, including all subsystems, will be automatically updated as soon as a new part or assembly is added to the design, or when an analysis and/or an optimization is performed and the geometry needs to be modified. Designers and engineers will be involved only in checking the latest design for errors or adding/removing features. Such a design process will take dramatically less time to complete; therefore, it should reduce development time and costs. The optimization method is demonstrated on an existing helicopter rotor originally designed in the 1960s. The rotor is already an effective design with novel features. However, application of the optimization principles together with high-speed computing resulted in an even better design. The objective function to be minimized is related to the vibrations of the rotor system under gusty wind conditions. The design parameters are all continuous variables. Optimization is performed in a number of steps. First, the design variables most crucial to the objective function are identified. With these variables, the Latin Hypercube Sampling method is used to probe the design space, which contains several local minima and maxima. After analysis of numerous samples, an optimum configuration of the design that is more stable than the initial design is reached. The above process requires several software tools: CATIA as the CAD tool, ANSYS as the FEA tool, VABS for obtaining the cross-sectional structural properties, and DYMORE for the frequency and dynamic analysis of the rotor. MATLAB codes are also employed to generate input files and read output files of DYMORE. All these tools are connected using ModelCenter.
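A minimal sketch of the design-space probing step, assuming scipy's qmc module is available; the four design variables, their bounds, and the placeholder objective are hypothetical, standing in for the CATIA/ANSYS/VABS/DYMORE tool chain driven through ModelCenter:

    import numpy as np
    from scipy.stats import qmc

    # Hypothetical design variables for a rotor blade section:
    # spar thickness [mm], ply angle [deg], ballast mass [kg], chordwise CG [%]
    l_bounds = [2.0, -15.0, 0.0, 20.0]
    u_bounds = [8.0,  15.0, 2.0, 35.0]

    sampler = qmc.LatinHypercube(d=4, seed=42)
    unit = sampler.random(n=200)                    # 200 points in [0, 1)^4
    designs = qmc.scale(unit, l_bounds, u_bounds)   # map to physical ranges

    def vibration_objective(x):
        """Placeholder for the full analysis tool chain (invented response)."""
        t, theta, m, cg = x
        return np.sin(theta / 5) ** 2 + (cg - 27) ** 2 / 50 + m / (t + 1)

    scores = np.apply_along_axis(vibration_objective, 1, designs)
    best = designs[np.argmin(scores)]
    print("best sampled design:", np.round(best, 2))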
Neural tube defects in Latin America and the impact of fortification: a literature review
Rosenthal, Jorge; Casas, Jessica; Taren, Douglas; Alverson, Clinton J; Flores, Alina; Frias, Jaime
2015-01-01
Objective Data on the prevalence of birth defects and neural tube defects (NTD) in Latin America are limited. The present review summarizes NTD prevalence and time trends in Latin American countries and compares pre- and post-fortification periods to assess the impact of folic acid fortification in these countries. Design We carried out a literature review of studies and institutional reports published between 1990 and 2010 that contained information on NTD prevalence in Latin America. Results NTD prevalence in Latin American countries varied from 0.2 to 9.6 per 1000 live births and was influenced by methods of ascertainment. Time trends from Bogota, Costa Rica, Dominican Republic, Guatemala City, México and Puerto Rico showed average annual declines of 2.5% to 21.8%. Pre- and post-fortification comparisons were available for Argentina, Brazil, Chile, Costa Rica, Puerto Rico and México. The aggregate percentage decline in NTD prevalence ranged from 33% to 59%. Conclusions The present publication is the first to review data on time trends and the impact of folic acid fortification on NTD prevalence in Latin America. Reported NTD prevalence varied markedly by geographic region and in some areas of Latin America was among the lowest in the world, while in other areas it was among the highest. For countries with available information, time trends showed significant declines in NTD prevalence and these declines were greater in countries where folic acid fortification of staples reached the majority of the population at risk, such as Chile and Costa Rica. PMID:23464652
ERIC Educational Resources Information Center
Zavalo, Lauro
This paper explains that materials on the teaching of Latin-American literature are sparse, even though most researchers in the field will dedicate much of their time to teaching. The paper adds that, in scholarly journals, little attention is given to teaching literature, and the topic is also absent from most academic congresses. The paper then…
Redundancy management for efficient fault recovery in NASA's distributed computing system
NASA Technical Reports Server (NTRS)
Malek, Miroslaw; Pandya, Mihir; Yau, Kitty
1991-01-01
The management of redundancy in computer systems was studied and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management by efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for embedding computational graphs of tasks in the system architecture and for reconfiguring these tasks after a failure has occurred. Computational structures represented by a path and by a complete binary tree were considered, and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of the Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
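For the path structure mentioned above, the standard embedding device is a binary-reflected Gray code, which maps a path of 2^d tasks onto a d-cube with dilation 1 (neighboring tasks sit one link apart). A small sketch of that device only, not the paper's more general embedding and reconfiguration algorithms:

    def gray(i):
        """Binary-reflected Gray code: consecutive values differ in one bit."""
        return i ^ (i >> 1)

    def embed_path(dim):
        """Map a path of 2**dim tasks onto a dim-cube so that neighboring
        tasks land on hypercube nodes one link apart (dilation 1)."""
        return [gray(i) for i in range(2 ** dim)]

    nodes = embed_path(3)
    for a, b in zip(nodes, nodes[1:]):
        assert bin(a ^ b).count("1") == 1   # exactly one hop per path edge
    print([format(v, "03b") for v in nodes])
    # ['000', '001', '011', '010', '110', '111', '101', '100']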
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osses de Eicker, Margarita, E-mail: Margarita.Osses@empa.c; Hischier, Roland, E-mail: Roland.Hischier@empa.c; Hurni, Hans, E-mail: Hans.Hurni@cde.unibe.c
2010-04-15
Nine non-local databases were evaluated with respect to their suitability for the environmental assessment of industrial activities in Latin America. Three assessment methods were considered, namely Life Cycle Assessment (LCA), Environmental Impact Assessment (EIA) and air emission inventories. The analysis focused on data availability in the databases and the applicability of their international data to Latin American industry. The study showed that the European EMEP/EEA Guidebook and the U.S. EPA AP-42 database are the most suitable ones for air emission inventories, whereas the LCI database Ecoinvent is the most suitable one for LCA and EIA. Due to the data coverage in the databases, air emission inventories are easier to develop than LCA or EIA, which require more comprehensive information. One strategy to overcome the limitations of non-local databases for Latin American industry is the combination of validated data from international databases with newly developed local datasets.
[Efficacy of the keyword mnemonic method in adults].
Campos, Alfredo; Pérez-Fabello, María José; Camino, Estefanía
2010-11-01
Two experiments were used to assess the efficacy of the keyword mnemonic method in adults. In Experiment 1, immediate and delayed recall (at a one-day interval) were assessed by comparing the results obtained by a group of adults using the keyword mnemonic method in contrast to a group using the repetition method. The mean age of the sample under study was 59.35 years. Subjects were required to learn a list of 16 words translated from Latin into Spanish. Participants who used keyword mnemonics that had been devised by other experimental participants of the same characteristics, obtained significantly higher immediate and delayed recall scores than participants in the repetition method. In Experiment 2, other participants had to learn a list of 24 Latin words translated into Spanish by using the keyword mnemonic method reinforced with pictures. Immediate and delayed recall were significantly greater in the keyword mnemonic method group than in the repetition method group.
Super and parallel computers and their impact on civil engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamat, M.P.
1986-01-01
This book presents the papers given at a conference on the use of supercomputers in civil engineering. Topics considered at the conference included solving nonlinear equations on a hypercube, a custom architectured parallel processing system, distributed data processing, algorithms, computer architecture, parallel processing, vector processing, computerized simulation, and cost benefit analysis.
Assignment Of Finite Elements To Parallel Processors
NASA Technical Reports Server (NTRS)
Salama, Moktar A.; Flower, Jon W.; Otto, Steve W.
1990-01-01
Elements assigned approximately optimally to subdomains. Mapping algorithm based on simulated-annealing concept used to minimize approximate time required to perform finite-element computation on hypercube computer or other network of parallel data processors. Mapping algorithm needed when shape of domain complicated or otherwise not obvious what allocation of elements to subdomains minimizes cost of computation.
3D Navier-Stokes Flow Analysis for a Large-Array Multiprocessor
1989-04-17
computer, Alliant's FX/8, Intel's Hypercube, and Encore's Multimax. Unfortunately, the current algorithms have been developed primarily for SISD machines...Reversing and Thrust-Vectoring Nozzle Flows," Ph.D. Dissertation in the Dept. of Aero. and Astro., Univ. of Wash., Washington, 1986. [11] Anderson
Taste symmetry breaking with hypercubic-smeared staggered fermions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bae, Taegil; Adams, David H.; Kim, Hyung-Jin
2008-05-01
We study the impact of hypercubic (HYP) smearing on the size of taste-breaking for staggered fermions, comparing to unimproved and to asqtad-improved staggered fermions. As in previous studies, we find a substantial reduction in taste-breaking compared to unimproved staggered fermions (by a factor of 4-7 on lattices with spacing a ≈ 0.1 fm). In addition, we observe that discretization effects of next-to-leading order in the chiral expansion (O(a²p²)) are markedly reduced by HYP smearing. Compared to asqtad valence fermions, we find that taste-breaking in the pion spectrum is reduced by a factor of 2.5-3, down to a level comparable to the expected size of generic O(a²) effects. Our results suggest that, once one reaches a lattice spacing of a ≈ 0.09 fm, taste-breaking will be small enough after HYP smearing that one can use a modified power counting in which O(a²) < …
A parallel expert system for the control of a robotic air vehicle
NASA Technical Reports Server (NTRS)
Shakley, Donald; Lamont, Gary B.
1988-01-01
Expert systems can be used to govern the intelligent control of vehicles, for example the Robotic Air Vehicle (RAV). Due to the nature of the RAV system, the associated expert system needs to perform in a demanding real-time environment. The use of a parallel processing capability to support the associated expert system's computational requirements is critical in this application. Thus, algorithms for parallel real-time expert systems must be designed, analyzed, and synthesized. The design process incorporates a consideration of the rule-set/fact-set size along with representation issues. These issues are examined with reference to information movement and various inference mechanisms. Also examined is the process of porting the RAV expert system functions from the TI Explorer, where they are implemented in the Automated Reasoning Tool (ART), to the iPSC Hypercube, where the system is synthesized using Concurrent Common LISP (CCLISP). The transformation process for the ART to CCLISP conversion is described. The performance characteristics of the parallel implementation of these expert systems on the iPSC Hypercube are compared to the TI Explorer implementation.
High-throughput Raman chemical imaging for evaluating food safety and quality
NASA Astrophysics Data System (ADS)
Qin, Jianwei; Chao, Kuanglin; Kim, Moon S.
2014-05-01
A line-scan hyperspectral system was developed to enable Raman chemical imaging for large sample areas. A custom-designed 785 nm line-laser based on a scanning mirror serves as an excitation source. A 45° dichroic beamsplitter reflects the laser light to form a 24 cm x 1 mm excitation line normally incident on the sample surface. Raman signals along the laser line are collected by a detection module consisting of a dispersive imaging spectrograph and a CCD camera. A hypercube is accumulated line by line as a motorized table moves the samples transversely through the laser line. The system covers a Raman shift range of -648.7 to 2889.0 cm-1 and a 23 cm wide area. An example application, for authenticating milk powder, was presented to demonstrate the system performance. In four minutes, the system acquired a 512x110x1024 hypercube (56,320 spectra) from four 47-mm-diameter Petri dishes containing four powder samples. Chemical images were created for detecting two adulterants (melamine and dicyandiamide) that had been mixed into the milk powder.
Hypercat - Hypercube of AGN tori
NASA Astrophysics Data System (ADS)
Nikutta, Robert; Lopez-Rodriguez, Enrique; Ichikawa, Kohei; Levenson, Nancy A.; Packham, Christopher C.
2018-06-01
AGN unification and observations hold that a dusty torus obscures the central accretion engine along some lines of sight. SEDs of dust tori have been modeled for a long time, but resolved emission morphologies have not been studied in much detail, because resolved observations have become possible only recently (VLTI, ALMA) and in the near future (TMT, ELT, GMT). Some observations challenge a simple torus model, because in several objects most of the MIR emission appears to emanate from polar regions high above the equatorial plane, i.e. not where the dust supposedly resides. We introduce our software framework and hypercube of AGN tori (Hypercat) made with CLUMPY (www.clumpy.org), a large set of images (6 model parameters + wavelength) to facilitate studies of emission and dust morphologies. We use Hypercat to study the morphological properties of the emission and dust distributions as a function of the model parameters. We find that a simple clumpy torus can indeed produce 10-micron emission patterns extended in polar directions, with extension ratios compatible with those found in observations. We are able to constrain the range of parameters that produce such morphologies.
The Wavelet Element Method. Part 2; Realization and Additional Features in 2D and 3D
NASA Technical Reports Server (NTRS)
Canuto, Claudio; Tabacco, Anita; Urban, Karsten
1998-01-01
The Wavelet Element Method (WEM) provides a construction of multiresolution systems and biorthogonal wavelets on fairly general domains. These are split into subdomains that are mapped to a single reference hypercube. Tensor products of scaling functions and wavelets defined on the unit interval are used on the reference domain. By introducing appropriate matching conditions across the interelement boundaries, a globally continuous biorthogonal wavelet basis on the general domain is obtained. This construction does not uniquely define the basis functions but rather leaves some freedom for fulfilling additional features. In this paper we apply the general construction principle of the WEM to the 1D, 2D and 3D cases. We address additional features such as symmetry, vanishing moments and minimal support of the wavelet functions in each particular dimension. The construction is illustrated by using biorthogonal spline wavelets on the interval.
Cardiovascular Research Publications from Latin America between 1999 and 2008. A Bibliometric Study
Colantonio, Lisandro D.; Baldridge, Abigail S.; Huffman, Mark D.; Bloomfield, Gerald S.; Prabhakaran, Dorairaj
2015-01-01
Background Cardiovascular research publications seem to be increasing in Latin America overall. Objective To analyze trends in cardiovascular publications and their citations from countries in Latin America between 1999 and 2008, and to compare them with those from the rest of the countries. Methods We retrieved references of cardiovascular publications between 1999 and 2008 and their five-year post-publication citations from the Web of Knowledge database. For countries in Latin America, we calculated the total number of publications and their citation indices (total citations divided by number of publications) by year. We analyzed trends in publications and citation indices over time using Poisson regression models. The analysis was repeated for Latin America as a region and compared with that for the rest of the countries grouped according to economic development. Results Brazil (n = 6,132) had the highest number of publications in 1999-2008, followed by Argentina (n = 1,686), Mexico (n = 1,368) and Chile (n = 874). Most countries showed an increase in publications over time, led by Guatemala (36.5% annually [95%CI: 16.7%-59.7%]), Colombia (22.1% [16.3%-28.2%]), Costa Rica (18.1% [8.1%-28.9%]) and Brazil (17.9% [16.9%-19.1%]). However, trends in citation indices varied widely (from -33.8% to 28.4%). From 1999 to 2008, cardiovascular publications from Latin America increased by 12.9% (12.1%-13.5%) annually. However, the citation indices of Latin America increased by 1.5% (1.3%-1.7%) annually, a lower increase than those of all other country groups analyzed. Conclusions Although the number of cardiovascular publications from Latin America increased from 1999 to 2008, trends in citation indices suggest they may have had a relatively low impact on the research field, stressing the importance of considering quality and dissemination in local research policies. PMID:25714407
Children’s Health in Latin America: The Influence of Environmental Exposures
Laborde, Amalia; Tomasina, Fernando; Bianchi, Fabrizio; Bruné, Marie-Noel; Buka, Irena; Comba, Pietro; Corra, Lilian; Cori, Liliana; Duffert, Christin Maria; Harari, Raul; Iavarone, Ivano; McDiarmid, Melissa A.; Gray, Kimberly A.; Sly, Peter D.; Soares, Agnes; Suk, William A.
2014-01-01
Background Chronic diseases are increasing among children in Latin America. Objective and Methods To examine environmental risk factors for chronic disease in Latin American children and to develop a strategic initiative for control of these exposures, the World Health Organization (WHO) including the Pan American Health Organization (PAHO), the Collegium Ramazzini, and Latin American scientists reviewed regional and relevant global data. Results Industrial development and urbanization are proceeding rapidly in Latin America, and environmental pollution has become widespread. Environmental threats to children’s health include traditional hazards such as indoor air pollution and drinking-water contamination; the newer hazards of urban air pollution; toxic chemicals such as lead, asbestos, mercury, arsenic, and pesticides; hazardous and electronic waste; and climate change. The mix of traditional and modern hazards varies greatly across and within countries reflecting industrialization, urbanization, and socioeconomic forces. Conclusions To control environmental threats to children’s health in Latin America, WHO, including PAHO, will focus on the most highly prevalent and serious hazards—indoor and outdoor air pollution, water pollution, and toxic chemicals. Strategies for controlling these hazards include developing tracking data on regional trends in children’s environmental health (CEH), building a network of Collaborating Centres, promoting biomedical research in CEH, building regional capacity, supporting development of evidence-based prevention policies, studying the economic costs of chronic diseases in children, and developing platforms for dialogue with relevant stakeholders. Citation Laborde A, Tomasina F, Bianchi F, Bruné MN, Buka I, Comba P, Corra L, Cori L, Duffert CM, Harari R, Iavarone I, McDiarmid MA, Gray KA, Sly PD, Soares A, Suk WA, Landrigan PJ. 2015. Children’s health in Latin America: the influence of environmental exposures. Environ Health Perspect 123:201–209; http://dx.doi.org/10.1289/ehp.1408292 PMID:25499717
ERIC Educational Resources Information Center
Contreras Flores, Jenniffer
2016-01-01
This study explored the preparation and career paths of academic deans in Church of God (COG) theological institutions located in Latin American and Caribbean. This study used a qualitative research approach and the in-depth interview method for data collection. A group of 14 academic deans that serve in COG theological schools and that…
ERIC Educational Resources Information Center
Mohor, S.
This paper, one of a series of Unesco technical reports, discusses agricultural production in Latin America and the Caribbean and examines the role played by education in the region. Written in Spanish, the paper consists of four parts. Part I deals with the different types of agricultural production and examines the historic origins and…
[A framework for evaluating primary health care in Latin America].
Haggerty, Jeannie L; Yavich, Natalia; Báscolo, Ernesto Pablo
2009-11-01
To determine the relevance of applying the Canadian primary health care (PHC) assessment strategy to Latin America and to propose any modifications needed for reaching a consensus in Latin America. The Delphi method was used to reach a consensus among 29 experts engaged in PHC development or evaluation in Latin America. Four virtual sessions and a face-to-face meeting were held to discuss the PHC evaluation logic model and the seven goals and six conditioning factors that make up the Canadian strategy, as well as any questions regarding the evaluation and indicators. The relevance of each concept was ranked according to the perspective of the Latin American countries. The experts considered the Canadian strategy's objectives and conditioning factors to be highly relevant to assessing PHC in Latin America, though they acknowledged that additional modification would increase relevance. The chief suggestions were to create a PHC vision and mission, to include additional objectives and conditioning factors, and to rework the original set. The objectives concerning coordination and integrated comprehensive care did not achieve a high degree of consensus because of ambiguities in the original text and multiple interpretations of statements regarding certain aspects of the evaluation. Considerable progress was made on the road to building a PHC evaluation framework for the Region of the Americas. Indicators and information-gathering tools that can be appropriately and practically applied in diverse contexts need to be developed.
A Shared Heritage: Afro-Latin@s and Black History
ERIC Educational Resources Information Center
Busey, Christopher L.; Cruz, Bárbara C.
2015-01-01
As the Latin@ population continues to grow in the United States, it is imperative that social studies teachers are aware of the rich history and sociocultural complexities of Latin@ identity. In particular, there is a large population of Latin@s of African descent throughout Latin America, the Caribbean, and North America. However, Afro-Latin@s…
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
NASA Technical Reports Server (NTRS)
Chamis, Christos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system, taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which results in one value of the response out of the many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response depends on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both are among the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of the analyses possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS, and part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile are repeatedly estimated, which allows confidence statements to be made for each parameter estimated and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods performed quite well, the new LHS module within NESSUS was found to have a lower estimation error than MC when estimating the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. Also, LHS required fewer calculations than MC to obtain low-error answers with a high degree of confidence. It can therefore be stated that NESSUS is an important reliability tool with a variety of sound probabilistic methods a user can employ, and the new LHS module is a valuable enhancement of the program.
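A hedged illustration of why LHS can beat MC for estimating density parameters, assuming numpy and scipy: both methods estimate the mean, standard deviation, and 99th percentile of a toy nonlinear response, and the spread of the estimates over repeated trials is compared. The response function is invented for the example and is unrelated to the NESSUS/SAE test cases:

    import numpy as np
    from scipy.stats import qmc, norm

    def response(x):
        """Nonlinear stand-in for an FEA response with 4 random inputs."""
        return x[:, 0] ** 2 + 3 * x[:, 1] * x[:, 2] + np.exp(0.3 * x[:, 3])

    def estimate(n, method, rng):
        if method == "mc":
            u = rng.uniform(size=(n, 4))
        else:
            u = qmc.LatinHypercube(d=4, seed=int(rng.integers(1 << 31))).random(n)
        x = norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))   # standard normal inputs
        y = response(x)
        return y.mean(), y.std(ddof=1), np.percentile(y, 99)

    rng = np.random.default_rng(3)
    for method in ("mc", "lhs"):
        est = np.array([estimate(500, method, rng) for _ in range(200)])
        print(method, "spread of mean/sd/p99 estimates:", est.std(axis=0).round(4))

With stratified inputs, the LHS estimates of the mean typically scatter noticeably less than the MC estimates at the same sample size, which is the behavior the abstract reports for the NESSUS module.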
Parathyroid cysts: the Latin-American experience
Aristizábal, Natalia; Aguilar, Carolina; Palacios, Karen; Pérez, Juan Camilo; Vélez-Hoyos, Alejandro; Duque, Carlos Simon; Sanabria, Alvaro
2016-01-01
Background Parathyroid cyst is an infrequent and unsuspected disease. There are more than 300 cases reported in the world literature, a few of them from Latin America. The experience of our centers and a review of the cases are presented. Methods Case report of a series of patients with parathyroid cyst from our institutions according to the CARE (Case Report) guidelines. A search of the Medline, Embase, BIREME (Biblioteca Regional de Medicina), LILACS (Literatura Latinoamericana y del Caribe en Ciencias de la Salud), Google Scholar and SciELO (Scientific Electronic Library Online) databases and telephone or email communication with other experts from Latin America was performed. Results Six patients with parathyroid cyst were found in our centers in Colombia. Most of them were managed with aspiration of the cyst. Two of them required surgery. Only one case was functional. Twelve reports from Latin America were found, for a total of 18 cases in our region including ours. Conclusions Parathyroid cysts are uncommonly reported in Latin America. Most of them are diagnosed postoperatively. Suspicion for parathyroid cyst should be raised when a crystal-clear fluid is aspirated from a cyst. The diagnosis may be easily confirmed if the parathyroid hormone (PTH) level is measured in the cyst fluid. PMID:28149800
Increasing access to Latin American social medicine resources: a preliminary report*
Buchanan, Holly Shipp; Waitzkin, Howard; Eldredge, Jonathan; Davidson, Russ; Iriart, Celia; Teal, Janis
2003-01-01
Purpose: This preliminary report describes the development and implementation of a project to improve access to literature in Latin American social medicine (LASM). Methods: The University of New Mexico project team collaborated with participants from Argentina, Brazil, Chile, and Ecuador to identify approximately 400 articles and books in Latin American social medicine. Structured abstracts were prepared, translated into English, Spanish, and Portuguese, assigned Medical Subject Headings (MeSH), and loaded into a Web-based database for public searching. The project has initiated Web-based publication for two LASM journals. Evaluation included measures of use and content. Results: The LASM Website (http://hsc.unm.edu/lasm) and database create access to formerly little-known literature that addresses problems relevant to current medicine and public health. This Website offers a unique resource for researchers, practitioners, and teachers who seek to understand the links between socioeconomic conditions and health. The project provides a model for collaboration between librarians and health care providers. Challenges included procurement of primary material; preparation of concise abstracts; working with trilingual translations of abstracts, metadata, and indexing; and the work processes of the multidisciplinary team. Conclusions: The literature of Latin American social medicine has become more readily available to researchers worldwide. The LASM project serves as a collaborative model for the creation of sustainable solutions for disseminating information that is difficult to access through traditional methods. PMID:14566372
Prevalence of Celiac Disease in Latin America: A Systematic Review and Meta-Regression
Parra-Medina, Rafael; Molano-Gonzalez, Nicolás; Rojas-Villarraga, Adriana; Agmon-Levin, Nancy; Arango, Maria-Teresa; Shoenfeld, Yehuda; Anaya, Juan-Manuel
2015-01-01
Background Celiac disease (CD) is an immune-mediated enteropathy triggered by the ingestion of gluten in susceptible individuals, and its prevalence varies depending on the studied population. Given that information on CD in Latin America is scarce, we aimed to investigate the prevalence of CD in this region of the world through a systematic review and meta-analysis. Methods and Findings This was a two-phase study. First, a cross-sectional analysis of 981 individuals of the Colombian population was made. Second, a systematic review and meta-regression analysis were performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Our results disclosed a lack of celiac autoimmunity in the studied Colombian population (i.e., anti-tissue transglutaminase (tTG) and IgA anti-endomysium (EMA)). In the systematic review, 72 studies were considered. The estimated prevalence of CD in Latin Americans ranged between 0.46% and 0.64%. The prevalence of CD in first-degree relatives of CD probands was 5.5%. The coexistence of CD and type 1 diabetes mellitus varied from 4.6% to 8.7%, depending on the diagnostic methods (i.e., autoantibodies and/or biopsies). Conclusions Although CD seems to be a rare condition in Colombians, the general prevalence of the disease in Latin Americans seemingly corresponds to a similar scenario observed in Europeans. PMID:25942408
Research into the Architecture of CAD Based Robot Vision Systems
1988-02-09
Vision and "Automatic Generation of Recognition Features for Com- puter Vision," Mudge, Turney and Volz, published in Robotica (1987). All of the...Occluded Parts," (T.N. Mudge, J.L. Turney, and R.A. Volz), Robotica , vol. 5, 1987, pp. 117-127. 5. "Vision Algorithms for Hypercube Machines," (T.N. Mudge
A graph with fractional revival
NASA Astrophysics Data System (ADS)
Bernard, Pierre-Antoine; Chan, Ada; Loranger, Érika; Tamon, Christino; Vinet, Luc
2018-02-01
An example of a graph that admits balanced fractional revival between antipodes is presented. It is obtained by establishing the correspondence between the quantum walk on a hypercube in which the opposite vertices across the diagonals of each face are connected, and the coherent transport of single excitations in the extension of the Krawtchouk spin chain with next-to-nearest-neighbour interactions.
Santos Garcia, Joäo Batista; Lech, Osvandré; Campos Kraychete, Durval; Rico, María Antonieta; Hernández-Castro, John Jairo; Colimon, Frantz; Guerrero, Carlos; Sempértegui Gallegos, Manuel; Lara-Solares, Argelia; Flores Cantisani, José Alberto; Amescua-Garcia, César; Guillén Núñez, María Del Rocío; Berenguel Cook, María Del Rosario; Jreige Iskandar, Aziza; Bonilla Sierra, Patricia
2017-09-01
Change Pain Latin America (CPLA) was created to enhance chronic pain understanding and develop pain management improving strategies in this region. During its seventh meeting (August 2016), the main objective was to discuss tramadol's role in treating pain in Latin America. Furthermore, potential pain management consequences were considered, if tramadol was to become more stringently controlled. Key topics discussed were: main indications for prescribing tramadol, its pharmacological characteristics, safety and tolerability, effects of restrictions on its availability and use, and consequent impact on pain care quality. The experts agreed that tramadol is used to treat a wide spectrum of non-oncological pain conditions (e.g. post-surgical, musculoskeletal, post-traumatic, neuropathic, fibromyalgia), as well as cancer pain. Its relevance when treating special patient groups (e.g. the elderly) is recognized. The main reasons for tramadol's high significance as a treatment option are: its broad efficacy, an inconspicuous safety profile and its availability, considering that access to strong analgesics - mainly controlled drugs (classical opioids) - is highly restricted in some countries. The CPLA also agreed that tramadol is well tolerated, without the safety issues associated with long-term nonsteroidal anti-inflammatory drug (NSAID) use, with fewer opioid-like side effects than classical opioids and lower abuse risk. In Latin America, tramadol is a valuable and frequently used medication for treating moderate to severe pain. More stringent regulations would have significant impact on its availability, especially for outpatients. This could cause regression to older and frequently inadequate pain management methods, resulting in unnecessary suffering for many Latin American patients.
The current status of ethnobiological research in Latin America: gaps and perspectives
2013-01-01
Background Recent reviews have demonstrated an increase in the number of papers on ethnobiology in Latin America. Among factors that have influenced this increase are the biological and cultural diversity of these countries and the general scientific situation in some countries. This study aims to assess the panorama of ethnobiological research in Latin America by analyzing its evolution, trends, and future prospects. Methods To conduct this study, we searched for papers in the Scopus (http://www.scopus.com) and Web of Science (http://www.isiknowledge.com) databases. The search was performed using combinations of keywords and the name of each Latin American country. The following countries were included in this study: Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Cuba, Ecuador, Guatemala, Haiti, Honduras, Mexico, Panama, Paraguay, Peru, Venezuela, and Uruguay. Results and conclusions According to our inclusion criteria, 679 ethnobiological studies conducted in Latin America were found for the period between 1963 and 2012. Of these studies, 289 (41%) were conducted in Brazil, 153 in Mexico (22%), 61 in Peru (9%), 58 in Argentina (8%), 45 in Bolivia (6%), and 97 (14%) in other Latin American countries. The increased number of publications related to this area of knowledge in recent years demonstrates the remarkable growth of ethnobiology as a science. Ethnobiological research may be stimulated by an increase in the number of scientific events and journals for study dissemination and by the creation of undergraduate courses and graduate programs to train ethnoscientists who will produce high-quality studies, especially in certain countries. PMID:24131758
Lobo, Clarisse; Angulo, Ivan L; Aparicio, Lidia R; Drelichman, Guillermo I; Zanichelli, Maria A; Cancado, Rodolfo
2011-09-01
The retrospective epidemiological study of Latin Americans with transfusional hemosiderosis is the first regional patient registry to gather data regarding the burden of transfusional hemosiderosis and patterns of care in these patients. Retrospective and cross-sectional data were collected on patients ≥2 years with selected chronic anemias and minimum 20 transfusions. In the 960 patients analyzed, sickle-cell disease (48·3%) and thalassemias (24·0%) were the most frequent underlying diagnoses. The registry enrolled 355 pediatric patients (187 with sickle-cell disease/94 with thalassemia). Serum ferritin was the most frequent method used to detect iron overload. Complications from transfusional hemosiderosis were reported in ~80% of patients; hepatic (65·3%), endocrine (27·5%), and cardiac (18·2%) being the most frequent. These data indicate that hemoglobinopathies and complications due to transfusional hemosiderosis are a significant clinical problem in the Latin American population with iron overload. Chelation therapy is used insufficiently and has a high rate of discontinuation.
Social anxiety and social norms in individualistic and collectivistic countries
Schreier, Sina-Simone; Heinrichs, Nina; Alden, Lynn; Rapee, Ronald M.; Hofmann, Stefan G.; Chen, Junwen; Ja Oh, Kyung; Bögels, Susan
2010-01-01
Background Social anxiety is assumed to be related to cultural norms across countries. Heinrichs and colleagues [1] compared individualistic and collectivistic countries and found higher social anxiety and more positive attitudes toward socially avoidant behaviors in collectivistic than in individualistic countries. However, the authors failed to include Latin American countries in the collectivistic group. Methods To provide support for these earlier results within an extended sample of collectivistic countries, 478 undergraduate students from individualistic countries were compared with 388 undergraduate students from collectivistic countries (including East Asian and Latin American) via self report of social anxiety and social vignettes assessing social norms. Results As expected, the results of Heinrichs and colleagues [1] were replicated for the individualistic and Asian countries but not for Latin American countries. Latin American countries displayed the lowest social anxiety levels, whereas the collectivistic East Asian group displayed the highest. Conclusions These findings indicate that while culture-mediated social norms affect social anxiety and might help to shed light on the etiology of social anxiety disorder, the dimension of individualism-collectivism may not fully capture the relevant norms. PMID:21049538
Optimal mapping of irregular finite element domains to parallel processors
NASA Technical Reports Server (NTRS)
Flower, J.; Otto, S.; Salama, M.
1987-01-01
Mapping the solution domain of n finite elements into N subdomains that may be processed in parallel by N processors is optimal if the subdomain decomposition results in a well-balanced workload distribution among the processors. The problem is discussed in the context of irregular finite element domains as an important aspect of the efficient utilization of the capabilities of emerging multiprocessor computers. Finding the optimal mapping is an intractable combinatorial optimization problem, for which a satisfactory approximate solution is obtained here by analogy to a method used in statistical mechanics for simulating the annealing process in solids. The simulated annealing analogy and algorithm are described, and numerical results are given for mapping an irregular two-dimensional finite element domain containing a singularity onto the Hypercube computer.
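A minimal sketch of the simulated-annealing mapping idea, assuming numpy: element-to-processor assignments are perturbed and accepted by the Metropolis rule, trading off load imbalance against cut (communication) edges. The ring mesh, cost weights, and cooling schedule are toy stand-ins for the paper's irregular 2-D domain and cost model:

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy mesh: n elements in a ring, communication along shared edges.
    n_elem, n_proc = 64, 8
    edges = [(i, (i + 1) % n_elem) for i in range(n_elem)]

    def cost(assign):
        load = np.bincount(assign, minlength=n_proc)
        imbalance = ((load - n_elem / n_proc) ** 2).sum()
        cut = sum(assign[a] != assign[b] for a, b in edges)
        return imbalance + 2.0 * cut          # relative weights are tunable

    assign = rng.integers(0, n_proc, n_elem)
    T, c = 50.0, cost(assign)
    while T > 1e-3:
        e = rng.integers(n_elem)
        old = assign[e]
        assign[e] = rng.integers(n_proc)      # propose moving one element
        c_new = cost(assign)
        if c_new > c and rng.uniform() >= np.exp((c - c_new) / T):
            assign[e] = old                   # reject uphill move (Metropolis)
        else:
            c = c_new
        T *= 0.999                            # geometric cooling schedule
    print("final cost:", c, "loads:", np.bincount(assign, minlength=n_proc))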
On multigrid methods for the Navier-Stokes Computer
NASA Technical Reports Server (NTRS)
Nosenchuck, D. M.; Krist, S. E.; Zang, T. A.
1988-01-01
The overall architecture of the multipurpose parallel-processing Navier-Stokes Computer (NSC) being developed by Princeton and NASA Langley (Nosenchuck et al., 1986) is described and illustrated with extensive diagrams, and the NSC implementation of an elementary multigrid algorithm for simulating isotropic turbulence (based on solution of the incompressible time-dependent Navier-Stokes equations with constant viscosity) is characterized in detail. The present NSC design concept calls for 64 nodes, each with the performance of a class VI supercomputer, linked together by a fiber-optic hypercube network and joined to a front-end computer by a global bus. In this configuration, the NSC would have a storage capacity of over 32 Gword and a peak speed of over 40 Gflops. The multigrid Navier-Stokes code discussed would give sustained operation rates of about 25 Gflops.
Parallel grid generation algorithm for distributed memory computers
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Moitra, Anutosh
1994-01-01
A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communication. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.
NASA Astrophysics Data System (ADS)
Chlebda, Damian K.; Majda, Alicja; Łojewski, Tomasz; Łojewska, Joanna
2016-11-01
Differentiation of written text can be performed with a non-invasive and non-contact tool that combines conventional imaging methods with spectroscopy. Hyperspectral imaging (HSI) is a relatively new and rapid analytical technique that can be applied in the forensic sciences. It allows an image of the sample to be acquired with full spectral information within every pixel. For this paper, HSI and three statistical methods (hierarchical cluster analysis, principal component analysis, and spectral angle mapper) were used to distinguish between traces of modern black gel pen inks. Non-invasiveness and high efficiency are among the unquestionable advantages of ink differentiation using HSI. It is also less time-consuming than traditional methods such as chromatography. In this study, a set of 45 modern gel pen ink marks deposited on a paper sheet was registered. The spectral characteristics embodied in every pixel were extracted from an image and analysed using statistical methods, externally and directly on the hypercube. As a result, different black gel inks deposited on paper can be distinguished and classified into several groups in a non-invasive manner.
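Of the three statistical methods named, the spectral angle mapper is simple enough to sketch directly (assuming numpy); the synthetic two-ink hypercube and the 0.3 rad threshold are invented for the example:

    import numpy as np

    def spectral_angle(cube, reference):
        """Angle (radians) between each pixel spectrum and a reference spectrum.
        cube: (rows, cols, bands) hypercube; reference: (bands,)."""
        dot = np.tensordot(cube, reference, axes=([2], [0]))
        norms = np.linalg.norm(cube, axis=2) * np.linalg.norm(reference)
        return np.arccos(np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0))

    # Synthetic 20x20x100 hypercube: two ink classes plus noise
    rng = np.random.default_rng(5)
    bands = np.linspace(0, 1, 100)
    ink_a = np.exp(-(bands - 0.3) ** 2 / 0.01)
    ink_b = np.exp(-(bands - 0.6) ** 2 / 0.02)
    cube = np.empty((20, 20, 100))
    cube[:, :10] = ink_a + rng.normal(0, 0.05, (20, 10, 100))
    cube[:, 10:] = ink_b + rng.normal(0, 0.05, (20, 10, 100))

    angles = spectral_angle(cube, ink_a)
    mask = angles < 0.3     # pixels classified as ink A (threshold in radians)
    print("ink-A pixels detected:", mask.sum(), "of", 20 * 10)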
Guyonvarch, Estelle; Ramin, Elham; Kulahci, Murat; Plósz, Benedek Gy
2015-10-15
The present study aims at using statistically designed computational fluid dynamics (CFD) simulations as numerical experiments for the identification of one-dimensional (1-D) advection-dispersion models - computationally light tools, used e.g., as sub-models in systems analysis. The objective is to develop a new 1-D framework, referred to as interpreted CFD (iCFD) models, in which statistical meta-models are used to calculate the pseudo-dispersion coefficient (D) as a function of design and flow boundary conditions. The method - presented in a straightforward and transparent way - is illustrated using the example of a circular secondary settling tank (SST). First, the significant design and flow factors are screened out by applying the statistical method of two-level fractional factorial design of experiments. Second, based on the number of significant factors identified through the factor screening study and on system understanding, 50 different sets of design and flow conditions are selected using Latin Hypercube Sampling (LHS). The boundary condition sets are imposed on a 2-D axisymmetric CFD simulation model of the SST. In the framework, to degenerate the 2-D model structure, CFD model outputs are approximated by the 1-D model through the calibration of three different model structures for D. Correlation equations for the D parameter are then identified as a function of the selected design and flow boundary conditions (meta-models), and their accuracy is evaluated against D values estimated in each numerical experiment. The evaluation and validation of the iCFD model structure are carried out using scenario simulation results obtained with parameters sampled from the corners of the LHS experimental region. For the studied SST, additional iCFD model development was carried out in terms of (i) assessing different density current sub-models; (ii) implementing a combined flocculation, hindered, transient and compression settling velocity function; and (iii) assessing the modelling of the onset of transient and compression settling. Furthermore, the optimal level of model discretization in both 2-D and 1-D was determined. Results suggest that the iCFD model developed for the SST through the proposed methodology is able to predict solids distribution with high accuracy - at a reasonable computational cost - when compared to multi-dimensional numerical experiments, under a wide range of flow and design conditions. iCFD tools could play a crucial role in reliably predicting systems' performance under normal and shock events. Copyright © 2015 Elsevier Ltd. All rights reserved.
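A hedged sketch of the iCFD meta-modeling step, assuming numpy and scipy: LHS generates boundary-condition sets, a stand-in function plays the role of the CFD-calibrated dispersion coefficient, and a least-squares meta-model for D is fitted and scored. Factor names, bounds, and the linear model form are illustrative, not the paper's:

    import numpy as np
    from scipy.stats import qmc

    rng = np.random.default_rng(6)

    # Hypothetical boundary-condition factors (names illustrative only):
    # inlet flow [m3/h], solids load [kg/m3], tank radius [m]
    lb, ub = [50, 1.0, 8.0], [400, 6.0, 20.0]
    X = qmc.scale(qmc.LatinHypercube(d=3, seed=7).random(50), lb, ub)

    def cfd_dispersion(x):
        """Stand-in for the pseudo-dispersion coefficient calibrated from CFD."""
        q, s, r = x
        return 1e-4 * q / r + 2e-3 * s + rng.normal(0, 1e-3)

    D = np.array([cfd_dispersion(x) for x in X])

    # Linear meta-model D ~ b0 + b1*q + b2*s + b3*r, fitted by least squares
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, D, rcond=None)
    pred = A @ beta
    r2 = 1 - ((D - pred) ** 2).sum() / ((D - D.mean()) ** 2).sum()
    print("meta-model coefficients:", beta.round(5), "R^2:", round(r2, 3))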
Assessing Degree of Susceptibility to Landslide Hazard
NASA Astrophysics Data System (ADS)
Sheridan, M. F.; Cordoba, G. A.; Delgado, H.; Stefanescu, R.
2013-05-01
The modeling of hazardous mass flows, both dry and water-saturated, is currently an area of active research, and several stable models have now emerged that offer differing degrees of physical and mathematical fidelity. Models based on the early work of Savage and Hutter (1989) assume that very large dense granular flows can be modeled as incompressible continua governed by a Coulomb failure criterion. Based on this concept, Patra et al. (2005) developed a code for dry avalanches that proposes a thin-layer mathematical model similar to the shallow-water equations. This concept was implemented in the widely used TITAN2D program, which integrates the shock-capturing Godunov solution methodology for the equation system. We propose a method to assess the susceptibility of specific locations to landslides following heavy tephra fall using the TITAN2D code. Successful application requires that several uncertainties be framed in the selection of model input data: 1) initial conditions, such as the volume and location of origin of the landslide, 2) bed and internal friction parameters, and 3) digital elevation model (DEM) uncertainties. Among the possible ways of coping with these uncertainties, we chose Latin Hypercube Sampling (LHS). This statistical technique reduces a computationally intractable problem to such an extent that it is possible to apply it even with current personal computers. LHS requires that there be only one sample in each row and each column of the sampling matrix, where each (multi-dimensional) row corresponds to one uncertainty. LHS requires less than 10% of the sample runs needed by Monte Carlo approaches to achieve a stable solution. In our application, the LHS output provides model sampling for 4 input parameters: initial random volumes, UTM location (x and y), and bed friction. We developed a simple Octave script to link the output of LHS with TITAN2D; in this way, TITAN2D can run several times with successively different initial conditions provided by the LHS routine. Finally, the set of results from TITAN2D is processed to obtain the distribution of maximum exceedance probability given that a landslide occurs at a place of interest. We apply this method to find the sectors least prone to be affected by landslides in a region along the Panamerican Highway in the southern part of Colombia. The goal of such a study is to help decision makers improve their assessments regarding permissions for development along the highway.
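A minimal sketch of the ensemble post-processing, assuming numpy and scipy in place of the authors' Octave script: LHS samples the four inputs (volume, UTM x and y, bed friction), a stand-in function substitutes for a TITAN2D run, and the conditional exceedance probability at a site is the fraction of runs whose flow depth passes a threshold. All bounds and the depth model are invented for illustration:

    import numpy as np
    from scipy.stats import qmc

    # LHS over TITAN2D-style inputs: volume [m3], source x, y [UTM], bed friction [deg]
    lb = [1e4, 200000.0, 100000.0, 15.0]   # illustrative bounds, not real UTM data
    ub = [1e6, 205000.0, 105000.0, 35.0]
    runs = qmc.scale(qmc.LatinHypercube(d=4, seed=8).random(64), lb, ub)

    def flow_depth_at_site(params, site=np.array([202500.0, 102500.0])):
        """Stand-in for a TITAN2D run: depth decays with distance from the
        source and grows with volume; lower bed friction spreads flows farther."""
        vol, x, y, phi = params
        dist = np.hypot(x - site[0], y - site[1])
        return (vol ** (1 / 3)) * np.exp(-dist / 1500) / np.tan(np.radians(phi))

    depths = np.array([flow_depth_at_site(p) for p in runs])
    p_exceed = np.mean(depths > 0.5)   # P(depth > 0.5 m | a landslide occurs)
    print(f"estimated exceedance probability at the site: {p_exceed:.2f}")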
The linguistic roots of Modern English anatomical terminology.
Turmezei, Tom D
2012-11-01
Previous research focusing on Classical Latin and Greek roots has shown that understanding the etymology of English anatomical terms may be beneficial for students of human anatomy. However, not all anatomical terms are derived from Classical origins. This study aims to explore the linguistic roots of the Modern English terminology used in human gross anatomy. By reference to the Oxford English Dictionary, etymologies were determined for a lexicon of 798 Modern English gross anatomical terms from the 40th edition of Gray's Anatomy. Earliest traceable language of origin was determined for all 798 terms; language of acquisition was determined for 747 terms. Earliest traceable languages of origin were: Classical Latin (62%), Classical Greek (24%), Old English (7%), Post-Classical Latin (3%), and other (4%). Languages of acquisition were: Classical Latin (42%), Post-Classical Latin (29%), Old English (8%), Modern French (6%), Classical Greek (5%), Middle English (3%), and other (7%). While the roots of Modern English anatomical terminology mostly lie in Classical languages (accounting for the origin of 86% of terms), the anatomical lexicon of Modern English is actually much more diverse. Interesting and perhaps less familiar examples from these languages and the methods by which such terms have been created and absorbed are discussed. The author suggests that awareness of anatomical etymologies may enhance the enjoyment and understanding of human anatomy for students and teachers alike. Copyright © 2012 Wiley Periodicals, Inc.
Comparative Analysis of Reconstructed Image Quality in a Simulated Chromotomographic Imager
2014-03-01
Reconstructed image quality is highly dependent on the initial target hypercube, so a total of 54 initial target hypercubes were compared for a variety of scenes. One example uses five basic images of a backlit bar chart with random intensity and 100 nm separation.
Different Formulations of the Orthogonal Array Problem and Their Symmetries
2014-06-19
t). Note that OAP(k, s, t, λ) is a polytope, not an unbounded polyhedron, because it can be embedded inside the hypercube [0, λ]^m. Theorem 5. The OAP(k...3.1) is a facet, since otherwise OAP(k, s, t, λ) would be an unbounded polyhedron. Then there exists N_y ∈ R^m satisfying all but the 28 facet-defining
Hypercube Solutions for Conjugate Directions
1991-12-01
1800s, Charles Babbage had designed his Difference Engine and proceeded to the more advanced Analytical Engine. These machines were never completed...Consider his motivation. The following example was frequently cited by Charles Babbage (1792-1871) to justify the construction of his first computing...
Diaz, Andres
2017-01-01
Avian influenza or bird flu is a highly contagious acute viral disease that can occur in epidemic and cross-border forms in poultry and wild birds. The characteristics of avian influenza viruses (AIVs) allow the emergence of new viral variants, some with zoonotic and pandemic potential. AIVs have been identified in Latin America; however, there is a lack of understanding of these viruses at the regional level. We performed a systematic literature review of serological and molecular evidence of AIV circulation in Latin America. Methods were designed based on the PRISMA and STROME guidelines. Only peer-reviewed studies published between 2000 and 2015 were included, and data were analysed by country, viral subtype, avian species, and phylogenetic origin. Of the 271 studies initially found, only 26 met our inclusion criteria. Evidence of AIV infection was found in most Latin American countries, with Mexico as the country with the largest number of conducted studies and reported cases during the period analysed, followed by Chile and Argentina. Most of the AIVs were first reported through surveillance systems, at least 14 different subtypes of influenza viruses were reported in birds, and the presence of both low-pathogenic (92.9%) and highly pathogenic (7.1%) AIVs was shown in Latin America. Of the AIVs reported in Latin America, 43.7% belong to migratory birds, 28.1% to local wild birds, and 28.1% to poultry. The migratory bird population mainly comprises families belonging to the orders Anseriformes and Charadriiformes. We highlight the importance of epidemiological surveillance systems and the possible role of different migratory birds in the transmission of AIVs within the Americas. Our findings demonstrate the limited information on AIVs in Latin America and highlight the need for more studies on AIVs at the regional level, particularly those focused on identifying the endemic subtypes in regional wild birds. PMID:28632771
HERMIES-I: a mobile robot for navigation and manipulation experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisbin, C.R.; Barhen, J.; de Saussure, G.
1985-01-01
The purpose of this paper is to report the current status of investigations ongoing at the Center for Engineering Systems Advanced Research (CESAR) in the areas of navigation and manipulation in unstructured environments. The HERMIES-I mobile robot, a prototype of a series which contains many of the major features needed for remote work in hazardous environments, is discussed. Initial experimental work at CESAR has begun in the area of navigation. The paper briefly reviews some of the ongoing research in autonomous navigation and describes initial research with HERMIES-I and associated graphic simulation. Since the HERMIES robots will generally be composed of a variety of asynchronously controlled hardware components (such as manipulator arms, digital image sensors, sonars, etc.), it seems appropriate to consider future development of the HERMIES brain as a hypercube ensemble machine with concurrent computation and associated message passing. The basic properties of such a hypercube architecture are presented. Decision-making under uncertainty eventually permeates all of our work. Following a survey of existing analytical approaches, it was decided that a stronger theoretical basis is required. As such, this paper presents the framework for a recently developed hybrid uncertainty theory. 21 refs., 2 figs.
Singh, R R P; Young, A P
2017-08-01
We study the ±J transverse-field Ising spin-glass model at zero temperature on d-dimensional hypercubic lattices and in the Sherrington-Kirkpatrick (SK) model, by series expansions around the strong-field limit. In the SK model and in high dimensions our calculated critical properties are in excellent agreement with the exact mean-field results, surprisingly even down to dimension d=6, which is below the upper critical dimension of d=8. In contrast, at lower dimensions we find a rich singular behavior consisting of critical and Griffiths-McCoy singularities. The divergence of the equal-time structure factor allows us to locate the critical coupling where the correlation length diverges, implying the onset of a thermodynamic phase transition. We find that the spin-glass susceptibility as well as various power moments of the local susceptibility become singular in the paramagnetic phase before the critical point. Griffiths-McCoy singularities are very strong in two dimensions but decrease rapidly as the dimension increases. We present evidence that high enough powers of the local susceptibility may become singular at the pure-system critical point.
A parallel strategy for implementing real-time expert systems using CLIPS
NASA Technical Reports Server (NTRS)
Ilyes, Laszlo A.; Villaseca, F. Eugenio; Delaat, John
1994-01-01
As evidenced by current literature, there appears to be a continued interest in the study of real-time expert systems. It is generally recognized that speed of execution is only one consideration when designing an effective real-time expert system. Some other features one must consider are the expert system's ability to perform temporal reasoning, handle interrupts, prioritize data, contend with data uncertainty, and perform context focusing as dictated by the incoming data to the expert system. This paper presents a strategy for implementing a real-time expert system on the iPSC/860 hypercube parallel computer using CLIPS. The strategy takes into consideration not only the execution time of the software, but also those features which define a true real-time expert system. The methodology is then demonstrated using a practical implementation of an expert system which performs diagnostics on the Space Shuttle Main Engine (SSME). This particular implementation uses an eight-node hypercube to process ten sensor measurements in order to simultaneously diagnose five different failure modes within the SSME. The main program is written in ANSI C and embeds CLIPS to better facilitate and debug the rule-based expert system.
Spacecraft On-Board Information Extraction Computer (SOBIEC)
NASA Technical Reports Server (NTRS)
Eisenman, David; Decaro, Robert E.; Jurasek, David W.
1994-01-01
The Jet Propulsion Laboratory is the Technical Monitor on an SBIR Program issued for Irvine Sensors Corporation to develop a highly compact, dual-use massively parallel processing node known as SOBIEC. SOBIEC couples 3D memory-stacking technology with processor technology provided by nCUBE. The node contains sufficient network input/output to implement up to an order-13 binary hypercube. The benefit of this network is that it scales linearly as more processors are added, and it is a superset of other commonly used interconnect topologies such as meshes, rings, toroids, and trees. In this manner, a distributed processing network can be easily devised and supported. The SOBIEC node has sufficient memory for most multi-computer applications, and also supports external memory expansion and DMA interfaces. The SOBIEC node is supported by a mature set of software development tools from nCUBE. The nCUBE operating system (OS) provides configuration and operational support for up to 8000 SOBIEC processors in an order-13 binary hypercube or any subset or partition(s) thereof. The OS is UNIX (USL SVR4) compatible, with C, C++, and FORTRAN compilers readily available. A stand-alone development system is also available to support SOBIEC test and integration.
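For reference, the size figures implied by an order-13 binary hypercube follow from standard topology arithmetic (a generic property of hypercube networks, not taken from SOBIEC documentation); a minimal sketch:

```python
# Standard size/diameter figures for an order-n binary hypercube network.
def hypercube_properties(n: int):
    nodes = 2 ** n            # each node is addressed by an n-bit label
    links = n * nodes // 2    # n links per node, each shared by two nodes
    diameter = n              # worst case: flip all n address bits
    return nodes, links, diameter

# An order-13 hypercube has 2**13 = 8192 nodes, consistent with the
# "up to 8000 SOBIEC processors" figure quoted above.
print(hypercube_properties(13))  # -> (8192, 53248, 13)
```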
Methods and challenges for the health impact assessment of vaccination programs in Latin America.
Sartori, Ana Marli Christovam; Nascimento, Andréia de Fátima; Yuba, Tânia Yuka; Soárez, Patrícia Coelho de; Novaes, Hillegonda Maria Dutilh
2015-01-01
To describe methods and challenges faced in the health impact assessment of vaccination programs, focusing on the pneumococcal conjugate and rotavirus vaccines in Latin America and the Caribbean. For this narrative review, we searched for the terms "rotavirus", "pneumococcal", "conjugate vaccine", "vaccination", "program", and "impact" in the databases Medline and LILACS. The search was extended to the grey literature in Google Scholar. No limits were defined for publication year. Original articles on the health impact assessment of pneumococcal and rotavirus vaccination programs in Latin America and the Caribbean in English, Spanish or Portuguese were included. We identified 207 articles. After removing duplicates and assessing eligibility, we reviewed 33 studies, 25 focusing on rotavirus and eight on pneumococcal vaccination programs. The most frequent study designs were ecological, using time-series analysis or comparing pre- and post-vaccination periods. The main data sources were: health information systems; population-, sentinel- or laboratory-based surveillance systems; statistics reports; and medical records from one or a few health care services. Few studies used primary data. Hospitalization and death were the main outcomes assessed. Over the last years, a significant number of health impact assessments of pneumococcal and rotavirus vaccination programs have been conducted in Latin America and the Caribbean. These studies were carried out a few years after the programs were implemented, meet the basic methodological requirements, and suggest a positive health impact. Future assessments should consider the methodological issues and challenges that arose in these first studies conducted in the region.
Factors and outcomes associated with the induction of labour in Latin America.
Guerra, G V; Cecatti, J G; Souza, J P; Faúndes, A; Morais, S S; Gülmezoglu, A M; Parpinelli, M A; Passini, R; Carroli, G
2009-12-01
To describe the prevalence of labour induction, together with its risk factors and outcomes in Latin America. Analysis of the 2005 WHO global survey database. Eight selected Latin American countries. All women who gave birth during the study period in 120 participating institutions. Bivariate and multivariate analyses. Indications for labour induction per country, success rate per method, risk factors for induction, and maternal and perinatal outcomes. Of the 97,095 deliveries included in the survey, 11,077 (11.4%) were induced, with 74.2% occurring in public institutions, 20.9% in social security hospitals and 4.9% in private institutions. Induction rates ranged from 5.1% in Peru to 20.1% in Cuba. The main indications were premature rupture of membranes (25.3%) and elective induction (28.9%). The success rate of vaginal delivery was very similar for oxytocin (69.9%) and misoprostol (74.8%), with an overall success rate of 70.4%. Induced labour was more common in women over 35 years of age. Maternal complications included higher rates of perineal laceration, need for uterotonic agents, hysterectomy, ICU admission, hospital stay >7 days and increased need for anaesthetic/analgesic procedures. Some adverse perinatal outcomes were also higher: low 5-minute Apgar score, very low birthweight, admission to neonatal ICU and delayed initiation of breastfeeding. In Latin America, labour was induced in slightly more than 10% of deliveries; success rates were high irrespective of the method used. Induced labour is, however, associated with poorer maternal and perinatal outcomes than spontaneous labour.
Jones, Derek A; Gaewsky, James P; Kelley, Mireille E; Weaver, Ashley A; Miller, Anna N; Stitzel, Joel D
2016-09-01
The objective of this study was to reconstruct 4 real-world motor vehicle crashes (MVCs), 2 with lumbar vertebral fractures and 2 without vertebral fractures in order to elucidate the MVC and/or restraint variables that increase this injury risk. A finite element (FE) simplified vehicle model (SVM) was used in conjunction with a previously developed semi-automated tuning method to arrive at 4 SVMs that were tuned to mimic frontal crash responses of a 2006 Chevrolet Cobalt, 2012 Ford Escape, 2007 Hummer H3, and 2002 Chevrolet Cavalier. Real-world crashes in the first 2 vehicles resulted in lumbar vertebrae fractures, whereas the latter 2 did not. Once each SVM was tuned to its corresponding vehicle, the Total HUman Model for Safety (THUMS) v4.01 was positioned in 120 precrash configurations in each SVM by varying 5 parameters using a Latin hypercube design (LHD) of experiments: seat track position, seatback angle, steering column angle, steering column telescoping position, and d-ring height. For each case, the event data recorder (EDR) crash pulse was used to apply kinematic boundary conditions to the model. By analyzing cross-sectional vertebral loads, vertebral bending moments, and maximum principal strain and stress in both cortical and trabecular bone, injury metric response as a function of posture and restraint parameters was computed. Tuning the SVM to specific vehicle models produced close matches between the simulated and experimental crash test responses for head, T6, and pelvis resultant acceleration; left and right femur loads; and shoulder and lap belt loads. Though vertebral load in the THUMS simulations was highly similar between injury cases and noninjury cases, the amount of bending moment was much higher for the injury cases. Seatback angle had a large effect on the maximum compressive load and bending moment in the lumbar spine, indicating the upward tilt of the seat pan in conjunction with precrash positioning may increase the likelihood of suffering lumbar injury even in frontal, planar MVCs. In conclusion, precrash positioning has a large effect on lumbar injury metrics. The lack of lumbar injury criteria in regulatory crash tests may have led to inadvertent design of seat pans that work to apply axial force to the spinal column during frontal crashes.
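To make the design step concrete, here is a minimal sketch of drawing 120 Latin hypercube configurations over the five positioning parameters with SciPy; the bounds are illustrative placeholders, not the study's actual ranges.

```python
# A sketch of a 120-case Latin hypercube design over five occupant/restraint
# parameters; parameter bounds below are hypothetical assumptions.
import numpy as np
from scipy.stats import qmc

params = ["seat_track_mm", "seatback_angle_deg", "steering_column_angle_deg",
          "steering_telescope_mm", "d_ring_height_mm"]
lo = [0.0, 15.0, 20.0, 0.0, -50.0]      # hypothetical lower bounds
hi = [200.0, 35.0, 40.0, 50.0, 50.0]    # hypothetical upper bounds

design = qmc.scale(qmc.LatinHypercube(d=5, seed=42).random(120), lo, hi)
for row in design[:3]:                  # first three precrash configurations
    print(dict(zip(params, np.round(row, 1))))
```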
2014-01-01
Background In the greater framework of the essential functions of Public Health, our focus is on a systematic, objective, external evaluation of Latin American scientific output, to compare its publications in the area of Public Health with those of other major geographic zones. We aim to describe the regional distribution of output in Public Health, and the level of visibility and specialization, for Latin America; it can then be characterized and compared in the international context. Methods The primary source of information was the Scopus database, using the category “Public Health, Environmental and Occupational Health”, in the period 1996–2011. Data were obtained through the portal of SCImago Journal and Country Rank. Using a set of qualitative (citation-based), quantitative (document recount) and collaborative (authors from more than one country) indicators, we derived complementary data. The methodology serves as an analytical tool for researchers and scientific policy-makers. Results The contribution of Latin America to the arsenal of world science lies more or less midway on the international scale in terms of its output and visibility. Revealed as its greatest strengths are the high level of specialization in Public Health and the sustained growth of output. The main limitations identified were a relative decrease in collaboration and low visibility. Conclusions Collaboration is a key factor behind the development of scientific activity in Latin America. Although this finding can be useful for formulating research policy in Latin American countries, it also underlines the need for further research into patterns of scientific communication in this region, to arrive at more specific recommendations. PMID:24950735
ERIC Educational Resources Information Center
Parker, Henry H.; Sturdivant, Ann
1966-01-01
Several Latin textbooks are described and evaluated in terms of their effectiveness in teaching language mastery. These are: (1) "Using Latin," Scott, Foresman, (2) "Latin for Americans," Macmillan, (3) "Lingua Latina," by Burns, Medicus, and Sherburne, and (4) "A Basic Course in Latin," "An Intermediate Course in Latin--Reading," and "An…
Latin American scientific contribution to ecology.
Wojciechowski, Juliana; Ceschin, Fernanda; Pereto, Suelen C A S; Ribas, Luiz G S; Bezerra, Luis A V; Dittrich, Jaqueline; Siqueira, Tadeu; Padial, André A
2017-01-01
Latin America embodies countries of special interest for ecological studies, given that areas of great value for biodiversity are located within their territories. This highlights the importance of an evaluation of ecological research in the Latin American region. We assessed the scientific participation of Latin American researchers in ecological journals, patterns of international collaboration, and the main characteristics of the articles. Although Latin American publications have increased over fourteen years, they accounted for up to 9% of publications in Ecology. Brazil led the scientific production in Latin America, followed by Argentina and Mexico. In general, Latin American articles represented a low percentage of most journals' total publications, with particularly low representation in high-impact-factor journals. Half of the Latin American publications had international collaboration. Articles with more than five authors and with international collaboration were the most cited. Descriptive studies, mainly based on older theories, are still the majority, suggesting that Ecology is in a developing stage in Latin America.
Childhood and adolescent overweight and obesity in Latin America: a systematic review.
Rivera, Juan Ángel; de Cossío, Teresita González; Pedraza, Lilia Susana; Aburto, Tania Cony; Sánchez, Tania Georgina; Martorell, Reynaldo
2014-04-01
The number of children and adolescents who are overweight or obese worldwide is alarming. We did a systematic review to estimate the prevalence of overweight and obesity in children aged 0-19 years in Latin America. We searched specialised databases and seven books for relevant studies that were done in Spanish-speaking and Portuguese-speaking Latin American and Caribbean countries and published in peer-reviewed journals between January 2008, and April 2013. Indicators used were BMI (kg/m(2)) in all age groups and weight-for-height in children younger than 5 years. We identified 692 publications and included 42. Estimated prevalence of overweight in children younger than 5 years in Latin America was 7·1% with the weight-for-height WHO 2006 classification method. National combined prevalences of overweight and obesity with the WHO 2007 classification method ranged from 18·9% to 36·9% in school-age children (5-11 years) and from 16·6% to 35·8% in adolescents (12-19 years). We estimated that 3·8 million children younger than 5 years, 22·2-25·9 million school-age children, and 16·5-21·1 million adolescents were overweight or obese. Overall, between 42·5 and 51·8 million children aged 0-19 years were affected, ie, about 20-25% of the population. Although undernutrition and obesity coexist in the region, policies in most countries favour prevention of undernutrition, and only a few countries have implemented national policies to prevent obesity. In view of the number of children who are overweight or obese, the associated detrimental effects on health, and the cost to health-care systems, implementation of programmes to monitor and prevent unhealthy weight gain in children and adolescents are urgently needed throughout Latin America. Copyright © 2014 Elsevier Ltd. All rights reserved.
Advances in volcano monitoring and risk reduction in Latin America
NASA Astrophysics Data System (ADS)
McCausland, W. A.; White, R. A.; Lockhart, A. B.; Marso, J. N.; Volcano Disaster Assistance Program; Latin American Volcano Observatories
2014-12-01
We describe results of cooperative work that advanced volcanic monitoring and risk reduction. The USGS-USAID Volcano Disaster Assistance Program (VDAP) was initiated in 1986 after disastrous lahars during the 1985 eruption of Nevado del Ruiz dramatized the need to advance international capabilities in volcanic monitoring, eruption forecasting and hazard communication. For the past 28 years, VDAP has worked with our partners to improve observatories, strengthen monitoring networks, and train observatory personnel. We highlight a few of the many accomplishments by Latin American volcano observatories. Advances in monitoring, assessment and communication, and lessons learned from the lahars of the 1985 Nevado del Ruiz eruption and the 1994 Paez earthquake enabled the Servicio Geológico Colombiano to issue timely, life-saving warnings for 3 large syn-eruptive lahars at Nevado del Huila in 2007 and 2008. In Chile, the 2008 eruption of Chaitén prompted SERNAGEOMIN to complete a national volcanic vulnerability assessment that led to a major increase in volcano monitoring. Throughout Latin America improved seismic networks now telemeter data to observatories where the decades-long background rates and types of seismicity have been characterized at over 50 volcanoes. Standardization of the Earthworm data acquisition system has enabled data sharing across international boundaries, of paramount importance during both regional tectonic earthquakes and during volcanic crises when vulnerabilities cross international borders. Sharing of seismic forecasting methods led to the formation of the international organization of Latin American Volcano Seismologists (LAVAS). LAVAS courses and other VDAP training sessions have led to international sharing of methods to forecast eruptions through recognition of precursors and to reduce vulnerabilities from all volcano hazards (flows, falls, surges, gas) through hazard assessment, mapping and modeling. Satellite remote sensing data-sharing facilitates cross-border identification and warnings of ash plumes for aviation. Overall, long-term strategies of data collection and experience-sharing have helped Latin American observatories improve their monitoring and create informed communities cognizant of vulnerabilities inherent in living near volcanoes.
Social Medicine Then and Now: Lessons From Latin America
Waitzkin, Howard; Iriart, Celia; Estrada, Alfredo; Lamadrid, Silvia
2001-01-01
The accomplishments of Latin American social medicine remain little known in the English-speaking world. In Latin America, social medicine differs from public health in its definitions of populations and social institutions, its dialectic vision of “health–illness,” and its stance on causal inference. A “golden age” occurred during the 1930s, when Salvador Allende, a pathologist and future president of Chile, played a key role. Later influences included the Cuban revolution, the failed peaceful transition to socialism in Chile, the Nicaraguan revolution, liberation theology, and empowerment strategies in education. Most of the leaders of Latin American social medicine have experienced political repression, partly because they have tried to combine theory and political practice—a combination known as “praxis.” Theoretic debates in social medicine take their bearings from historical materialism and recent trends in European philosophy. Methodologically, differing historical, quantitative, and qualitative approaches aim to avoid perceived problems of positivism and reductionism in traditional public health and clinical methods. Key themes emphasize the effects of broad social policies on health and health care; the social determinants of illness and death; the relationships between work, reproduction, and the environment; and the impact of violence and trauma. PMID:11574316
[Infant and child mortality in Latin America].
Behm, H; Primante, D A
1978-04-01
High mortality rates persist in Latin America, and data collection is made very difficult by the lack of reliable statistics. A study was initiated in 1976 to measure the probability of mortality from birth to 2 years of age in 12 Latin American countries. The Brass method was used and applied to population censuses. The probability of mortality is extremely heterogeneous and generally very high, varying from a maximum of 202/1000 in Bolivia to a minimum of 112/1000 in Uruguay; by comparison, the same probability is 21/1000 in the U.S. and 11/1000 in Sweden. Mortality in rural areas is much higher than in urban ones and varies with the mother's level of education, children born to mothers with 10 years of formal education having the lowest risk of death. Children born to the indigenous population, largely illiterate and living in the poorest conditions, have the highest probability of death, a probability reaching 67% of all deaths under 2 years. National health services in Latin America, although vastly improved and improving, still do not meet the needs of the population, especially in rural areas, and structural and historical conditions hamper a wider application of existing medical knowledge.
Human rhinoviruses and enteroviruses in influenza-like illness in Latin America
2013-01-01
Background Human rhinoviruses (HRVs) belong to the Picornaviridae family, with high similarity to human enteroviruses (HEVs). Limited data are available from Latin America regarding the clinical presentation and strains of these viruses in respiratory disease. Methods We collected nasopharyngeal swabs at clinics located in eight Latin American countries from 3,375 subjects aged 25 years or younger who presented with influenza-like illness. Results Our subjects had a median age of 3 years and a 1.2:1.0 male:female ratio. HRV was identified in 16% and HEV was identified in 3%. HRVs accounted for a higher frequency of isolates in those of younger age, in particular children under 1 year old. HRV-C accounted for 38% of all HRVs detected. Phylogenetic analysis revealed a high proportion of recombinant strains between HRV-A/HRV-C and between HEV-A/HEV-B. In addition, both EV-D68 and EV-A71 were identified. Conclusions In Latin America, as in other regions, HRVs and HEVs account for a substantial proportion of respiratory viruses identified in young people with ILI, a finding that provides additional support for the development of pharmaceuticals and vaccines targeting these pathogens. PMID:24119298
Vision-Based Navigation and Parallel Computing
1990-08-01
Second, the hypercube connections support logarithmic implementations of fundamental parallel algorithms, such as grid permutations and scans. A set of virtual processors is used to represent an orthogonal projection grid, onto which the six-dimensional pose space is projected.
Hypercat - Hypercube of Clumpy AGN Tori
NASA Astrophysics Data System (ADS)
Nikutta, Robert; Lopez-Rodriguez, Enrique; Ichikawa, Kohei; Levenson, Nancy; Packham, Christopher C.
2017-06-01
Dusty tori surrounding the central engines of Active Galactic Nuclei (AGN) are required by the Unification Paradigm, and are supported by many observations, e.g. variable nuclear absorbers (sometimes Compton-thick) in X-rays, reverberation mapping in optical/UV, hot dust emission and SED shapes in NIR/MIR, and molecular and cool-dust tori observed with ALMA in the sub-mm. While models of AGN torus SEDs have been developed and utilized for a long time, the study of the resolved emission morphology (brightness maps) has so far been under-appreciated, presumably because resolved observations of the central parsec in AGN have only become possible very recently. Currently, only NIR+MIR interferometry is capable of resolving the nuclear dust emission (but not of producing images, until MATISSE comes online). Furthermore, MIR interferometry has also delivered puzzling results, e.g. that in some resolved sources the light emanates preferentially from polar directions above the "torus" system, and not from the equatorial plane, where most of the dust is located. We are preparing the release of a panchromatic, fully interpolable hypercube of brightness maps and projected dust images for a large number of CLUMPY torus models (Nenkova+2008), which will help facilitate studies of resolved AGN emission and dust morphologies. Together with the cube we will release a comprehensive set of open-source tools (Python) that will enable researchers to work efficiently with this large hypercube:
* easy sub-cube selection + memory-mapping (mitigating the too-big-for-RAM problem)
* multi-dim image interpolation (get an image at any wavelength & model parameter combination)
* simulation of observations with telescopes (compute/provide + apply a PSF) and interferometers (get visibilities)
* analysis of images with respect to the power contained at all scales and orientations (via 2D steerable wavelets), addressing the seemingly puzzling results mentioned above
A series of papers is in preparation, aiming at solving the puzzles, and at making predictions about the resolvability of all nearby AGN tori with any combination of current and future instruments (e.g. VLTI+MATISSE, TMT+MICHI, GMT, ELT, JWST, ALMA).
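The access patterns above can be illustrated with generic NumPy/SciPy building blocks; the file name, axis layout, and parameter grids in this sketch are hypothetical stand-ins, not the actual Hypercat release format.

```python
# A generic sketch of memory-mapped sub-cube selection and multi-dimensional
# image interpolation on a model hypercube; everything here is illustrative.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical 4-axis hypercube: (inclination, optical depth, wavelength, pixel)
shape = (10, 8, 25, 64 * 64)
cube = np.lib.format.open_memmap("toy_cube.npy", mode="w+",
                                 dtype=np.float32, shape=shape)
cube[:] = np.random.default_rng(0).random(shape, dtype=np.float32)
# (In practice one would open an existing released file with mode="r".)

inc = np.linspace(0.0, 90.0, shape[0])   # hypothetical parameter grids
tau = np.linspace(10.0, 150.0, shape[1])
lam = np.geomspace(1.0, 30.0, shape[2])

# Sub-cube selection via the memmap touches only the needed pages on disk
sub = cube[2:5, :, 10, :]

# Interpolate a brightness map at an off-grid (inclination, tau, wavelength)
interp = RegularGridInterpolator((inc, tau, lam), cube)
image = interp([[47.0, 75.0, 9.7]]).reshape(64, 64)
print(sub.shape, image.shape)
```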
Latin and the World of Work: Career Education and Foreign Languages.
ERIC Educational Resources Information Center
Kennedy, Dora F.; And Others
This curriculum guide for high school Latin courses emphasizes the usefulness of a knowledge of Latin for career preparation. As a supplement to standard Latin textbooks, a variety of classroom material is offered. The instructional approach revolves around the relation of English and Latin vocabulary acquisition and etymological knowledge on the…
How Do You Say "MOO" in Latin? Assessing Student Learning and Motivation in Beginning Latin.
ERIC Educational Resources Information Center
Gruber-Miller, John; Benton, Cindy
2001-01-01
Assesses the value of VRoma for Latin language learning. In particular, discusses three exercises that were developed that combine Latin language and Roman culture in order to help students reinforce their Latin skills and gain a more in-depth understanding of ancient Roman society. (Author/VWL)
McCook, Stuart
2013-12-01
The "global turn" in the history of science offers new ways to think about how to do national and regional histories of science, in this case the history of science in Latin America. For example, it questions structuralist and diffusionist models of the spread of science and shows the often active role that people in Latin America (and the rest of the Global South) played in the construction of "universal" scientific knowledge. It suggests that even national or regional histories of science must be situated in a global context; all too often, such histories have treated global processes as a distant backdrop. At the same time, historians need to pay constant attention to the role of power in the construction of scientific knowledge. Finally, this essay highlights a methodological tool for writing globally inflected histories of science: the method of "following".
[Managed care in Latin America: transnationalization of the health sector in the context of reform].
Iriart, C; Merhy, E E; Waitzkin, H
2000-01-01
This article presents the results of the comparative research project "Managed Care in Latin America: Its Role in Health Reform". The project was conducted by teams in Argentina, Brazil, Chile, Ecuador, and the United States. The study's objective was to analyze the process by which managed care is exported, especially from the United States, and how managed care is adopted in Latin American countries. Our research methods included qualitative and quantitative techniques. Adoption of managed care reflects transnationalization of the health sector. Our findings demonstrate the entrance of large multinational financial capital into the private insurance and health services sectors and their intention of participating in the administration of government institutions and medical/social security funds. We conclude that this basic change involving the slow adoption of managed care is facilitated by ideological changes with discourses accepting the inexorable nature of public sector reform.
Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing
NASA Technical Reports Server (NTRS)
Ozguner, Fusun
1996-01-01
Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time, T_par, of the application depends on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
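The dependence of T_par on the sequential fraction described here is Amdahl's law; as a worked reminder (the fraction f below is a generic symbol, not a value reported for CSTEM or METCAN):

$$ S(N) \;=\; \frac{T_{\mathrm{seq}}}{T_{\mathrm{par}}(N)} \;=\; \frac{1}{(1-f) + f/N}, $$

where f is the parallelizable fraction of the code and N is the number of processors. Even with f = 0.5, the speedup is bounded by 2 no matter how many hypercube nodes are used, consistent with the low speedups reported.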
Dynamical analysis of continuous higher-order hopfield networks for combinatorial optimization.
Atencia, Miguel; Joya, Gonzalo; Sandoval, Francisco
2005-08-01
In this letter, the ability of higher-order Hopfield networks to solve combinatorial optimization problems is assessed by means of a rigorous analysis of their properties. The stability of the continuous network is almost completely clarified: (1) hyperbolic interior equilibria, which are unfeasible, are unstable; (2) the state cannot escape from the unitary hypercube; and (3) a Lyapunov function exists. Numerical methods used to implement the continuous equation on a computer should be designed with the aim of preserving these favorable properties. The case of nonhyperbolic fixed points, which occur when the Hessian of the target function is the null matrix, requires further study. We prove that these nonhyperbolic interior fixed points are unstable in networks with three neurons and order two. The conjecture that interior equilibria are unstable in the general case is left open.
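For orientation, a continuous higher-order Hopfield network of order two descends an energy of the generic form below (a standard textbook formulation, not necessarily the letter's exact notation):

$$ E(v) \;=\; -\tfrac{1}{3}\sum_{i,j,k} w_{ijk}\, v_i v_j v_k \;-\; \tfrac{1}{2}\sum_{i,j} w_{ij}\, v_i v_j \;-\; \sum_i \theta_i v_i, \qquad \frac{du_i}{dt} = -\frac{\partial E}{\partial v_i}, \quad v_i = g(u_i), $$

where the sigmoid g confines each component v_i to [0, 1]; this is the mechanism by which the state cannot escape the unitary hypercube.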
Running coupling constant from lattice studies of gluon and ghost propagators
NASA Astrophysics Data System (ADS)
Cucchieri, A.; Mendes, T.
2004-12-01
We present a numerical study of the running coupling constant in four-dimensional pure-SU(2) lattice gauge theory. The running coupling is evaluated by fitting data for the gluon and ghost propagators in minimal Landau gauge. Following Refs. [1, 2], the fitting formulae are obtained by a simultaneous integration of the β function and of a function coinciding with the anomalous dimension of the propagator in the momentum-subtraction scheme. We consider these formulae at three and four loops. The fitting method works well, especially for the ghost case, for which statistical errors and hypercubic effects are very small. Our present result is $\Lambda_{\overline{\mathrm{MS}}} = 200^{+60}_{-40}$ MeV, where the error is purely systematic. We are currently extending this analysis to five loops in order to reduce this systematic error.
Monte Carlo methods for multidimensional integration for European option pricing
NASA Astrophysics Data System (ADS)
Todorov, V.; Dimov, I. T.
2016-10-01
In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European-style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; then the average of independent samples of this random variable is used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk-neutral valuation formula. Then, with an appropriate change of variables, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule with one of the best low-discrepancy sequences, that of Sobol', for numerical integration. Quasi-Monte Carlo methods are compared with adaptive and crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is a question of interest which of them outperforms the others for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
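As an illustration of the comparison described above, here is a minimal sketch assuming Black-Scholes dynamics and a one-dimensional integral (all parameter values are illustrative; the paper's lattice rules and adaptive method are not reproduced here):

```python
# Crude Monte Carlo vs scrambled Sobol' quasi-Monte Carlo for a European call.
import numpy as np
from scipy.stats import norm, qmc

S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0   # illustrative parameters
n = 2 ** 14                                          # number of sample points

def payoff_from_uniforms(u):
    """Map uniforms on the unit hypercube (d=1 here) to discounted payoffs."""
    z = norm.ppf(u)                                  # inverse-CDF transform
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

rng = np.random.default_rng(1)
crude = payoff_from_uniforms(rng.random((n, 1))).mean()
sobol = payoff_from_uniforms(qmc.Sobol(d=1, scramble=True, seed=1).random(n)).mean()

# Closed-form Black-Scholes value for reference
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
exact = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))
print(crude, sobol, exact)
```

With a smooth integrand like this one, the scrambled Sobol' estimate typically lands closer to the closed-form value than crude Monte Carlo at the same sample size.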
Star Trek with Latin. Teacher's Guide. Tentative Edition.
ERIC Educational Resources Information Center
Masciantonio, Rudolph; And Others
The purpose of this guide is to assist Latin and English teachers with some background in Latin to expand the English vocabulary and reading skills of students through the study of Latin roots, prefixes, and suffixes. The introductory material in the guide provides general notes on the teaching of Latin in the Philadelphia School District,…
ON THE CONSTRUCTION OF LATIN SQUARES COUNTERBALANCED FOR IMMEDIATE SEQUENTIAL EFFECTS.
ERIC Educational Resources Information Center
HOUSTON, TOM R., JR.
This report is one of a series describing new developments in the area of research methodology. It deals with Latin squares as a control for progressive and adjacency effects in experimental designs. The history of Latin squares is also reviewed, and several algorithms for the construction of Latin and Greco-Latin squares are proposed. The report…
NASA Astrophysics Data System (ADS)
Chaney, N.; Wood, E. F.
2014-12-01
The increasing accessibility of high-resolution land data (< 100 m) and high performance computing allows improved parameterizations of subgrid hydrologic processes in macroscale land surface models. Continental-scale fully distributed modeling at these spatial scales is possible; however, its practicality for operational use is still unknown due to uncertainties in input data, model parameters, and storage requirements. To address these concerns, we propose a modeling framework that provides the spatial detail of a fully distributed model yet maintains the benefits of a semi-distributed model. In this presentation we will introduce DTOPLATS-MP, a coupling between the NOAH-MP land surface model and the Dynamic TOPMODEL hydrologic model. This new model captures a catchment's spatial heterogeneity by clustering high-resolution land datasets (soil, topography, and land cover) into hundreds of hydrologically similar units (HSUs). A prior DEM analysis defines the connections between each HSU. At each time step, the 1D land surface model updates each HSU; the HSUs then interact laterally via the subsurface and surface. When compared to the fully distributed form of the model, this framework allows a significant decrease in computation and storage while providing most of the same information and enabling parameter transferability. As a proof of concept, we will show how this new modeling framework can be run over CONUS at a 30-meter spatial resolution. For each catchment in the WBD HUC-12 dataset, the model is run between 2002 and 2012 using available high-resolution continental-scale land and meteorological datasets over CONUS (dSSURGO, NLCD, NED, and NCEP Stage IV). For each catchment, the model is run with 1000 model parameter sets obtained from a Latin hypercube sample. This exercise will illustrate the feasibility of running the model operationally at continental scales while accounting for model parameter uncertainty.
NASA Astrophysics Data System (ADS)
Flores, A. N.; Entekhabi, D.; Bras, R. L.
2007-12-01
Soil hydraulic and thermal properties (SHTPs) affect both the rate of moisture redistribution in the soil column and the volumetric soil water capacity. Adequately constraining these properties through field and lab analysis to parameterize spatially-distributed hydrology models is often prohibitively expensive. Because SHTPs vary significantly at small spatial scales, individual soil samples are also only reliably indicative of local conditions, and these properties remain a significant source of uncertainty in soil moisture and temperature estimation. In ensemble-based soil moisture data assimilation, uncertainty in the model-produced prior estimate due to associated uncertainty in SHTPs must be taken into account to avoid under-dispersive ensembles. To treat SHTP uncertainty for purposes of supplying inputs to a distributed watershed model we use the restricted pairing (RP) algorithm, an extension of Latin Hypercube (LH) sampling. The RP algorithm generates an arbitrary number of SHTP combinations by sampling the appropriate marginal distributions of the individual soil properties using the LH approach, while imposing a target rank correlation among the properties. A previously published meta-database of 1309 soils representing 12 textural classes is used to fit appropriate marginal distributions to the properties and compute the target rank correlation structure, conditioned on soil texture. Given categorical soil textures, our implementation of the RP algorithm generates an arbitrarily-sized ensemble of realizations of the SHTPs required as input to the TIN-based Realtime Integrated Basin Simulator with vegetation dynamics (tRIBS+VEGGIE) distributed parameter ecohydrology model. Soil moisture ensembles simulated with RP-generated SHTPs exhibit less variance than ensembles simulated with SHTPs generated by a scheme that neglects correlation among properties. Neglecting correlation among SHTPs can lead to physically unrealistic combinations of parameters that exhibit implausible hydrologic behavior when input to the tRIBS+VEGGIE model.
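The reordering idea behind restricted pairing can be sketched with the closely related Iman-Conover procedure; the marginals and the -0.7 target rank correlation below are illustrative assumptions, not values from the soils meta-database.

```python
# Imposing a target rank correlation on independent LHS columns by reordering,
# in the spirit of Iman & Conover (1982); all distributions are hypothetical.
import numpy as np
from scipy.stats import qmc, norm, rankdata

rng = np.random.default_rng(0)
n, d = 200, 2                        # ensemble size, number of soil properties
target = np.array([[1.0, -0.7],      # e.g., sat. conductivity vs porosity
                   [-0.7, 1.0]])

# 1) Independent LHS columns mapped through illustrative marginals
u = qmc.LatinHypercube(d=d, seed=0).random(n)
x = np.column_stack([np.exp(norm.ppf(u[:, 0])),   # lognormal K_sat (hypothetical)
                     0.3 + 0.2 * u[:, 1]])        # uniform porosity (hypothetical)

# 2) Correlated Gaussian "scores" with the target correlation structure
z = rng.standard_normal((n, d)) @ np.linalg.cholesky(target).T

# 3) Reorder each LHS column so its ranks match the ranks of the scores
y = np.empty_like(x)
for j in range(d):
    y[np.argsort(z[:, j]), j] = np.sort(x[:, j])

rho = np.corrcoef(rankdata(y[:, 0]), rankdata(y[:, 1]))[0, 1]
print(round(rho, 2))  # close to the -0.7 target
```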
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Andrew; Lawrence, Earl
The Response Surface Modeling (RSM) Tool Suite is a collection of three codes used to generate an empirical interpolation function for a collection of drag coefficient calculations computed with Test Particle Monte Carlo (TPMC) simulations. The first code, "Automated RSM", automates the generation of a drag coefficient RSM for a particular object to a single command. "Automated RSM" first creates a Latin Hypercube Sample (LHS) of 1,000 ensemble members to explore the global parameter space. For each ensemble member, a TPMC simulation is performed and the object drag coefficient is computed. In the next step of the "Automated RSM" code, a Gaussian process is used to fit the TPMC simulations. In the final step, Markov Chain Monte Carlo (MCMC) is used to evaluate the non-analytic probability distribution function from the Gaussian process. The second code, "RSM Area", creates a look-up table for the projected area of the object based on input limits on the minimum and maximum allowed pitch and yaw angles and pitch and yaw angle intervals. The projected area from the look-up table is used to compute the ballistic coefficient of the object based on its pitch and yaw angle. An accurate ballistic coefficient is crucial in accurately computing the drag on an object. The third code, "RSM Cd", uses the RSM generated by the "Automated RSM" code and the projected area look-up table generated by the "RSM Area" code to accurately compute the drag coefficient and ballistic coefficient of the object. The user can modify the object velocity, object surface temperature, the translational temperature of the gas, the species concentrations of the gas, and the pitch and yaw angles of the object. Together, these codes allow for the accurate derivation of an object's drag coefficient and ballistic coefficient under any conditions with only knowledge of the object's geometry and mass.
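A minimal surrogate-modeling sketch in the spirit of the pipeline above, stubbing out the TPMC simulation with a placeholder function (inputs, bounds, and kernel settings are assumptions, and scikit-learn stands in for whatever GP library the tool suite uses):

```python
# LHS design + Gaussian process fit as an empirical response surface.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Illustrative inputs: object speed (km/s) and surface temperature (K)
lo, hi = [7.0, 200.0], [8.0, 400.0]
X = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(256), lo, hi)

def expensive_drag_model(x):
    """Placeholder standing in for a TPMC drag-coefficient simulation."""
    v, T = x[:, 0], x[:, 1]
    return 2.2 + 0.1 * np.sin(v) + 1e-4 * (T - 300.0)

y = expensive_drag_model(X)
gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.5, 100.0]),
                              normalize_y=True).fit(X, y)
cd_pred, cd_std = gp.predict(np.array([[7.5, 300.0]]), return_std=True)
print(cd_pred, cd_std)  # interpolated drag coefficient with uncertainty
```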
Constructing rigorous and broad biosurveillance networks for detecting emerging zoonotic outbreaks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Mac; Moore, Leslie; McMahon, Benjamin
Determining optimal surveillance networks for an emerging pathogen is difficult since it is not known beforehand what the characteristics of a pathogen will be or where it will emerge. The resources for surveillance of infectious diseases in animals and wildlife are often limited, and mathematical modeling can play a supporting role in examining a wide range of scenarios of pathogen spread. We demonstrate how a hierarchy of mathematical and statistical tools can be used in surveillance planning to help guide successful surveillance and mitigation policies for a wide range of zoonotic pathogens. The model forecasts can help clarify the complexities of potential scenarios, and optimize biosurveillance programs for rapidly detecting infectious diseases. Using the highly pathogenic zoonotic H5N1 avian influenza 2006-2007 epidemic in Nigeria as an example, we determined the risk of infection for localized areas in an outbreak and designed biosurveillance stations that are effective for different pathogen strains and a range of possible outbreak locations. We created a general multi-scale, multi-host stochastic SEIR epidemiological network model, with both short- and long-range movement, to simulate the spread of an infectious disease through Nigerian human, poultry, backyard duck, and wild bird populations. We chose parameter ranges specific to avian influenza (but not to a particular strain) and used a Latin hypercube sample experimental design to investigate epidemic predictions in a thousand simulations. We ranked the risk of local regions by the number of times they became infected in the ensemble of simulations. These spatial statistics were then compiled into a potential risk map of infection. Finally, we validated the results with a known outbreak, using spatial analysis of all the simulation runs to show that the progression matched closely with the observed locations of the farms infected in the 2006-2007 epidemic.
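A toy version of the sampling-plus-simulation loop: Latin hypercube draws over hypothetical SEIR parameter ranges fed through a single-population deterministic SEIR model (the authors' model is a multi-host stochastic network; this sketch only illustrates the experimental design):

```python
# LHS over SEIR parameters, each draw run through a simple SEIR model.
import numpy as np
from scipy.stats import qmc

# Hypothetical ranges: transmission rate (1/day), incubation and
# infectious periods (days)
lo, hi = [0.2, 1.0, 2.0], [1.0, 4.0, 10.0]
samples = qmc.scale(qmc.LatinHypercube(d=3, seed=0).random(1000), lo, hi)

def seir_attack_rate(beta, t_inc, t_inf, n=1e6, i0=10, days=365, dt=0.1):
    """Fraction ever infected in a deterministic SEIR run (Euler stepping)."""
    s, e, i, r = n - i0, 0.0, float(i0), 0.0
    for _ in range(int(days / dt)):
        new_e = beta * s * i / n * dt
        new_i = e / t_inc * dt
        new_r = i / t_inf * dt
        s, e, i, r = s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
    return r / n

attack = np.array([seir_attack_rate(*p) for p in samples])
print(attack.mean(), np.percentile(attack, [5, 95]))
```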
Drivers And Uncertainties Of Increasing Global Water Scarcity
NASA Astrophysics Data System (ADS)
Scherer, L.; Pfister, S.
2015-12-01
Water scarcity threatens ecosystems and human health and hampers economic development. It generally depends on the ratio of water consumption to availability. We calculated global, spatially explicit water stress indices (WSIs), which describe the vulnerability to additional water consumption on a scale from 0 (low) to 1 (high), and compared them for the decades 1981-1990 and 2001-2010. Input data are obtained from a multi-model ensemble at a resolution of 0.5 degrees. The variability among the models was used to run 1000 Monte Carlo simulations (Latin hypercube sampling) and subsequently to estimate the uncertainties of the WSIs. Globally, a trend of increasing water scarcity can be observed; however, the uncertainties are large. The probability that this trend is actually occurring is as low as 53%. The increase in WSIs is driven more by higher water use than by lower water availability: water availability is only 40% likely to decrease, whereas water consumption is 67% likely to increase. Independent of the trend, we are already living under water-scarce conditions, which is reflected in a consumption-weighted average of monthly WSIs of 0.51 in the recent decade. Its coefficient of variation of 0.8 points to the high uncertainties entailed, which might still hide poor model performance where all models consistently over- or underestimate water availability or use. Especially in arid areas, models generally overestimate availability. Although we do not traverse the planetary boundary of freshwater use, as global water availability is sufficient, local water scarcity might be high. Therefore the regionalized assessment of WSIs under uncertainty helps to focus on specific regions to optimise water consumption. These global results can also help to raise awareness of water scarcity, and to suggest relevant measures, such as more water-efficient technologies, to international companies, which have to deal with complex and distributed supply chains (e.g. in food production).
Winskill, Peter; Harrison, Wendy E; French, Michael D; Dixon, Matthew A; Abela-Ridder, Bernadette; Basáñez, María-Gloria
2017-02-09
The pork tapeworm, Taenia solium, and its associated human infections, taeniasis, cysticercosis and neurocysticercosis, are serious public health problems, especially in developing countries. The World Health Organization (WHO) has set goals for having a validated strategy for the control and elimination of T. solium taeniasis/cysticercosis by 2015 and for interventions to be scaled up in selected countries by 2020. Timely achievement of these internationally endorsed targets requires that the relative benefits and effectiveness of potential interventions be explored rigorously within a quantitative framework. A deterministic, compartmental transmission model (EPICYST) was developed to capture the dynamics of the taeniasis/cysticercosis disease system in the human and pig hosts. Cysticercosis prevalence in humans, an outcome of high epidemiological and clinical importance, was explicitly modelled. A next-generation matrix approach was used to derive an expression for the basic reproduction number, R0. A full sensitivity analysis was performed using a methodology based on the Latin hypercube sampling/partial rank correlation coefficient (LHS-PRCC) index. EPICYST outputs indicate that chemotherapeutic intervention targeted at humans or pigs would be highly effective at reducing taeniasis and cysticercosis prevalence when applied singly, with annual chemotherapy of humans and pigs resulting, respectively, in 94 and 74% of human cysticercosis cases averted. Improved sanitation, meat inspection and animal husbandry are less effective but are still able to reduce prevalence singly or in combination. The value of R0 for taeniasis was estimated at 1.4 (95% credible interval: 0.5-3.6). Human- and pig-targeted drug-focussed interventions appear to be the most efficacious approach among the options currently available. The model presented is a forward step towards developing an informed control and elimination strategy for cysticercosis. Together with its validation against field data, EPICYST will be a valuable tool to help reach the WHO goals and to conduct economic evaluations of interventions in varying epidemiological settings.
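The LHS-PRCC machinery can be sketched compactly; the three-input toy model below is an illustrative stand-in for EPICYST, not the actual transmission model:

```python
# Partial rank correlation coefficients from a Latin hypercube design:
# rank-transform inputs and output, regress out the other inputs, and
# correlate the residuals.
import numpy as np
from scipy.stats import qmc, rankdata

def model(x):  # stand-in for a model output such as cysticercosis prevalence
    noise = np.random.default_rng(1).standard_normal(len(x))
    return 0.8 * x[:, 0] - 0.5 * x[:, 1] + 0.1 * x[:, 2] + 0.05 * noise

X = qmc.LatinHypercube(d=3, seed=0).random(500)
y = model(X)

def prcc(X, y, j):
    """PRCC of input j with output y, controlling for the other inputs."""
    R = np.apply_along_axis(rankdata, 0, X)
    ry = rankdata(y)
    A = np.column_stack([np.ones(len(y)), np.delete(R, j, axis=1)])
    res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
    res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

print([round(prcc(X, y, j), 2) for j in range(3)])
```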
NASA Astrophysics Data System (ADS)
Gavilan, C.; Grunwald, S.; Quiroz, R.; Zhu, L.
2015-12-01
The Andes represent the largest and highest mountain range in the tropics. Geological and climatic differentiation favored landscape and soil diversity, resulting in ecosystems adapted to very different climatic patterns. Although several studies support the fact that the Andes are a vast sink of soil organic carbon (SOC), only a few have quantified this variable in situ. Estimating the spatial distribution of SOC stocks in data-poor and/or poorly accessible areas, like the Andean region, is challenging due to the lack of recent soil data at high spatial resolution and the wide range of coexisting ecosystems. Thus, the sampling strategy is vital to ensure that the whole range of environmental covariates (ECs) controlling SOC dynamics is represented. This approach allows grasping the variability of the area, which leads to more efficient statistical estimates and improves the modeling process. The objectives of this study were to (i) characterize and model the spatial distribution of SOC stocks in the Central Andean region using soil-landscape modeling techniques, and (ii) validate and evaluate the model for predicting SOC content in the area. For that purpose, three representative study areas were identified and a suite of variables including elevation, mean annual temperature, annual precipitation and Normalized Difference Vegetation Index (NDVI), among others, was selected as ECs. A stratified random sampling design (namely, conditioned Latin hypercube sampling) was implemented and a total of 400 sampling locations were identified. At all sites, four composite topsoil samples (0-30 cm) were collected within a 2 m radius. SOC content was measured using dry combustion and SOC stocks were estimated using bulk density measurements. Regression kriging was used to map the spatial variation of SOC stocks. The accuracy, fit and bias of the SOC models were assessed using a rigorous validation assessment. This study produced the first comprehensive, geospatial SOC stock assessment in this undersampled region, which serves as a baseline reference to assess potential impacts of climate and land use change.
Capturing spatial heterogeneity of soil organic carbon under changing climate
NASA Astrophysics Data System (ADS)
Mishra, U.; Fan, Z.; Jastrow, J. D.; Matamala, R.; Vitharana, U.
2015-12-01
The spatial heterogeneity of the land surface affects water, energy, and greenhouse gas exchanges with the atmosphere. Designing observation networks that capture land surface spatial heterogeneity is a critical scientific challenge. Here, we present a geospatial approach to capture the existing spatial heterogeneity of soil organic carbon (SOC) stocks across Alaska, USA. We used the standard deviation of 556 georeferenced SOC profiles previously compiled in Mishra and Riley (2015, Biogeosciences, 12:3993-4004) to calculate the number of observations that would be needed to reliably estimate Alaskan SOC stocks. This analysis indicated that 906 randomly distributed observation sites would be needed to quantify the mean value of SOC stocks across Alaska at a confidence interval of ±5 kg m^-2. We then used soil-forming factors (climate, topography, land cover types, surficial geology) to identify the locations of appropriately distributed observation sites by using the conditioned Latin hypercube sampling approach. Spatial correlation and variogram analyses demonstrated that the spatial structures of soil-forming factors were adequately represented by these 906 sites. Using the spatial correlation length of existing SOC observations, we identified that 484 new observation sites would be needed to provide the best estimate of the present status of SOC stocks in Alaska. We then used average decadal projections (2020-2099) of precipitation, temperature, and length of growing season for three representative concentration pathway (RCP 4.5, 6.0, and 8.5) scenarios of the Intergovernmental Panel on Climate Change to investigate whether the locations of the identified observation sites will shift under future climate. Our results showed that 12-41 additional observation sites (depending on emission scenario) will be required to capture the impact of projected climatic conditions by 2100 on the spatial heterogeneity of Alaskan SOC stocks. Our results represent an ideal distribution of observation sites across Alaska that captures the land surface spatial heterogeneity and can be used in efforts to quantify SOC stocks, monitor greenhouse gas emissions, and benchmark Earth System Model results.
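The 906-site figure is consistent with the standard sample-size formula for estimating a mean (assuming independent random sampling; σ denotes the standard deviation of the 556 compiled profiles, which is not restated in the abstract):

$$ n = \left(\frac{z_{0.975}\,\sigma}{E}\right)^{2}, \qquad E = 5\ \mathrm{kg\,m^{-2}},\; n = 906 \;\Rightarrow\; \sigma \approx \frac{E\sqrt{n}}{1.96} \approx 77\ \mathrm{kg\,m^{-2}}. $$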
A clinically parameterized mathematical model of Shigella immunity to inform vaccine design.
Davis, Courtney L; Wahid, Rezwanul; Toapanta, Franklin R; Simon, Jakub K; Sztein, Marcelo B
2018-01-01
We refine and clinically parameterize a mathematical model of the humoral immune response against Shigella, a diarrheal bacterium that infects 80-165 million people and kills an estimated 600,000 people worldwide each year. Using Latin hypercube sampling and Monte Carlo simulations for parameter estimation, we fit our model to human immune data from two Shigella EcSf2a-2 vaccine trials and a rechallenge study in which antibody and B-cell responses against Shigella's lipopolysaccharide (LPS) and O-membrane proteins (OMP) were recorded. The clinically grounded model is used to mathematically investigate which key immune mechanisms and bacterial targets confer immunity against Shigella and to predict which humoral immune components should be elicited to create a protective vaccine against Shigella. The model offers insight into why the EcSf2a-2 vaccine had low efficacy and demonstrates that at a group level a humoral immune response induced by EcSf2a-2 vaccine or wild-type challenge against Shigella's LPS or OMP does not appear sufficient for protection. That is, the model predicts an uncontrolled infection of gut epithelial cells that is present across all best-fit model parameterizations when fit to EcSf2a-2 vaccine or wild-type challenge data. Using sensitivity analysis, we explore which model parameter values must be altered to prevent the destructive epithelial invasion by Shigella bacteria and identify four key parameter groups as potential vaccine targets or immune correlates: 1) the rate at which Shigella migrates into the lamina propria or epithelium, 2) the rate at which memory B cells (BM) differentiate into antibody-secreting cells (ASC), 3) the rate at which antibodies are produced by activated ASC, and 4) the Shigella-specific BM carrying capacity. This paper underscores the need for a multifaceted approach in ongoing efforts to design an effective Shigella vaccine.
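In practice, LHS-based parameter estimation of this kind means drawing stratified parameter sets, simulating the model for each set, and ranking the sets by goodness of fit. A minimal sketch on a toy two-compartment stand-in; the equations, parameter ranges, and data below are invented for illustration and are not the published Shigella model:

```python
# LHS parameter estimation sketch: sample parameter sets with a Latin
# hypercube, simulate a toy pathogen/antibody ODE for each set, and rank
# sets by RMSE against (synthetic) observations.
import numpy as np
from scipy.integrate import odeint
from scipy.stats import qmc

def toy_model(y, t, growth, kill, secrete):
    pathogen, antibody = y
    return [growth * pathogen - kill * antibody * pathogen,
            secrete * pathogen - 0.1 * antibody]

t = np.linspace(0.0, 10.0, 25)
observed = odeint(toy_model, [1.0, 0.0], t, args=(0.9, 0.5, 0.3))[:, 0]

lo = np.array([0.1, 0.01, 0.01])   # lower bounds: growth, kill, secrete
hi = np.array([2.0, 1.0, 1.0])     # upper bounds (all illustrative)
params = qmc.scale(qmc.LatinHypercube(d=3, seed=1).random(n=500), lo, hi)

def rmse(p):
    sim = odeint(toy_model, [1.0, 0.0], t, args=tuple(p))[:, 0]
    return np.sqrt(np.mean((sim - observed) ** 2))

errors = np.array([rmse(p) for p in params])
print(params[np.argsort(errors)[:5]])   # the five best-fit parameter sets
```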
Ebacher, G; Besner, M C; Clément, B; Prévost, M
2012-09-01
Intrusion events caused by transient low pressures may result in the contamination of a water distribution system (DS). This work aims at estimating the range of potential intrusion volumes that could result from a real downsurge event caused by a momentary pump shutdown. A model calibrated with transient low-pressure recordings was used to simulate total intrusion volumes through leakage orifices and submerged air vacuum valves (AVVs). Four critical factors influencing intrusion volumes were varied: the external head of (untreated) water on leakage orifices, the external head of (untreated) water on submerged air vacuum valves, the leakage rate, and the diameter of the AVVs' outlet orifice (represented by a multiplicative factor). Leakage orifices' head and AVVs' orifice head levels were assessed through fieldwork. Two sets of runs were generated as part of two statistically designed experiments. A first set of 81 runs was based on a complete factorial design in which each factor was varied over 3 levels. A second set of 40 runs was based on a Latin hypercube design, better suited for experimental runs on a computer model. The simulations were conducted using commercially available transient analysis software. Responses, measured by total intrusion volumes, ranged from 10 to 366 L. A second-degree polynomial was used to analyze the total intrusion volumes. Sensitivity analyses of both designs revealed that the relationship between the total intrusion volume and the four contributing factors is not monotonic, with the AVVs' orifice head being the most influential factor. When intrusion through both pathways occurs concurrently, interactions between the intrusion flows through leakage orifices and submerged AVVs influence intrusion volumes. When only intrusion through leakage orifices is considered, the total intrusion volume is influenced more by the leakage rate than by the leakage orifices' head. The latter mainly impacts the extent of the area affected by intrusion. Copyright © 2012 Elsevier Ltd. All rights reserved.
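The two experimental designs described above, a 3-level full factorial over four factors (3^4 = 81 runs) and a 40-run Latin hypercube, can be generated in a few lines. A sketch with placeholder factor bounds; the actual heads, leakage rates, and orifice factors are calibrated values from the study, not these:

```python
# Generate the two designs described above: an 81-run full factorial
# (4 factors x 3 levels) and a 40-run Latin hypercube. Factor bounds
# are placeholders, not the study's calibrated values.
import itertools

import numpy as np
from scipy.stats import qmc

# factor bounds: leakage-orifice head (m), AVV head (m), leakage rate (%),
# AVV outlet-orifice multiplicative factor; all illustrative
lo = np.array([0.1, 0.1, 10.0, 0.5])
hi = np.array([1.0, 1.0, 40.0, 2.0])

# full factorial: every combination of low/mid/high for each factor
levels = [np.linspace(l, h, 3) for l, h in zip(lo, hi)]
factorial = np.array(list(itertools.product(*levels)))   # shape (81, 4)

# Latin hypercube: 40 runs, one stratum per run in each dimension
lhs = qmc.scale(qmc.LatinHypercube(d=4, seed=7).random(n=40), lo, hi)

print(factorial.shape, lhs.shape)   # (81, 4) (40, 4)
```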
NASA Astrophysics Data System (ADS)
Reyes, J. J.; Adam, J. C.; Tague, C.
2016-12-01
Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services, including soil carbon (C) storage. The partitioning of C between above- and belowground plant compartments (i.e., allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above- and belowground biomass. We include allocation as an important process in the analysis of model parameter uncertainty, which identifies the most influential parameters and indicates which processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as for parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluated simultaneously against above- and belowground biomass, the new strategy produced a larger number of less biased parameter sets: 16% more than the resource-limited strategy and 9% more than the growth-based strategy. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (e.g., water, nutrients) and temperature ranges (i.e., too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.
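A common way to turn LHS runs into the parameter-importance scores discussed above is to correlate each sampled parameter with the simulated output, for example by rank correlation. A sketch on a synthetic response; the parameter names and the response function are invented, not taken from RHESSys:

```python
# Rank-correlation importance from LHS runs: sample parameters, evaluate a
# synthetic model response, and rank parameters by |Spearman rho| with the
# output. A generic companion analysis for LHS designs, not the paper's.
import numpy as np
from scipy.stats import qmc, spearmanr

names = ["photosynthesis", "allocation", "decomposition", "drainage"]
X = qmc.LatinHypercube(d=4, seed=3).random(n=200)   # samples in [0, 1)

# synthetic biomass response, deliberately driven by the first two parameters
noise = np.random.default_rng(3).normal(0.0, 0.05, 200)
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] ** 2 + 0.1 * X[:, 2] + noise

rho = [abs(spearmanr(X[:, j], y)[0]) for j in range(4)]
for name, r in sorted(zip(names, rho), key=lambda t: -t[1]):
    print(f"{name:15s} |rho| = {r:.2f}")
```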
NASA Astrophysics Data System (ADS)
Michot, Didier; Fouad, Youssef; Pichelin, Pascal; Viaud, Valérie; Soltani, Inès; Walter, Christian
2017-04-01
The aims of this study are: i) to assess the distribution of SOC content according to the global soil map (GSM) project recommendations in a heterogeneous landscape; ii) to compare the prediction performance of digital soil mapping (DSM) and visible-near infrared (Vis-NIR) spectroscopy approaches. The study area of 140 ha, located at Plancoët, surrounds the only mineral spring water source of Brittany (western France). It is a hillock characterized by a heterogeneous landscape mosaic with different types of forest, permanent pastures and wetlands along a small coastal river. We acquired two independent datasets: j) 50 points selected using conditioned Latin hypercube sampling (cLHS); jj) 254 points corresponding to the GSM grid. Soil samples were collected in three layers (0-5, 20-25 and 40-50 cm) for both sampling strategies. SOC content was measured only in the cLHS soil samples, while Vis-NIR spectra were measured on all the collected samples. For the DSM approach, a machine-learning algorithm (Cubist) was applied to the cLHS calibration data to build rule-based models linking soil carbon content in the different layers with environmental covariates derived from a digital elevation model, geological variables, land use data and existing large-scale soil maps. For the spectroscopy approach, we used two calibration datasets: k) the local cLHS; kk) a subset selected from the regional spectral database of Brittany after a PCA with hierarchical clustering analysis, spiked with the local cLHS spectra. The PLS regression algorithm with "leave-one-out" cross-validation was performed for both calibration datasets. SOC contents for the 3 layers of the GSM grid were predicted using the different approaches and compared with each other. Prediction performance was evaluated by the following parameters: R2, RMSE and RPD. Both approaches led to satisfactory predictions of SOC content, with an advantage for the spectral approach, particularly with regard to the pertinence of the variation range.
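The spectroscopy arm, PLS regression with leave-one-out cross-validation scored by R2, RMSE and RPD, can be sketched as follows; the spectra, SOC values, and component count are synthetic stand-ins rather than the study's data:

```python
# PLS regression with leave-one-out cross-validation, evaluated with the
# same three statistics used above (R2, RMSE, RPD). Data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 300))     # 50 cLHS samples x 300 wavelengths
y = X[:, :10].sum(axis=1) + rng.normal(0.0, 0.5, 50)   # synthetic SOC

pls = PLSRegression(n_components=5)        # component count is a guess
y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()

rmse = np.sqrt(np.mean((y - y_cv) ** 2))
r2 = 1.0 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
rpd = y.std(ddof=1) / rmse                 # residual predictive deviation
print(f"R2={r2:.2f}  RMSE={rmse:.2f}  RPD={rpd:.2f}")
```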
Constructing rigorous and broad biosurveillance networks for detecting emerging zoonotic outbreaks
Brown, Mac; Moore, Leslie; McMahon, Benjamin; ...
2015-05-06
Determining optimal surveillance networks for an emerging pathogen is difficult since it is not known beforehand what the characteristics of a pathogen will be or where it will emerge. The resources for surveillance of infectious diseases in animals and wildlife are often limited, and mathematical modeling can play a supporting role in examining a wide range of scenarios of pathogen spread. We demonstrate how a hierarchy of mathematical and statistical tools can be used in surveillance planning to help guide successful surveillance and mitigation policies for a wide range of zoonotic pathogens. The model forecasts can help clarify the complexities of potential scenarios and optimize biosurveillance programs for rapidly detecting infectious diseases. Using the highly pathogenic zoonotic H5N1 avian influenza 2006-2007 epidemic in Nigeria as an example, we determined the risk of infection for localized areas in an outbreak and designed biosurveillance stations that are effective for different pathogen strains and a range of possible outbreak locations. We created a general multi-scale, multi-host stochastic SEIR epidemiological network model, with both short- and long-range movement, to simulate the spread of an infectious disease through Nigerian human, poultry, backyard duck, and wild bird populations. We chose parameter ranges specific to avian influenza (but not to a particular strain) and used a Latin hypercube sample experimental design to investigate epidemic predictions in a thousand simulations. We ranked the risk of local regions by the number of times they became infected in the ensemble of simulations. These spatial statistics were then compiled into a potential risk map of infection. Finally, we validated the results with a known outbreak, using spatial analysis of all the simulation runs to show that the progression matched closely the observed locations of the farms infected in the 2006-2007 epidemic.
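At its core, the epidemiological machinery is an SEIR compartment model run many times over sampled parameters. A single-population, discrete-time chain-binomial sketch; the rates and population size are invented, and the published model adds multiple host species, network structure, and long-range movement on top of this:

```python
# Minimal stochastic SEIR sketch (single population, discrete time).
# Parameters are illustrative, not fitted avian-influenza values.
import numpy as np

def seir_run(beta, sigma, gamma, n=10_000, i0=5, days=120, seed=0):
    rng = np.random.default_rng(seed)
    s, e, i, r = n - i0, 0, i0, 0
    history = []
    for _ in range(days):
        p_inf = 1.0 - np.exp(-beta * i / n)            # infection probability
        new_e = rng.binomial(s, p_inf)                 # S -> E
        new_i = rng.binomial(e, 1.0 - np.exp(-sigma))  # E -> I
        new_r = rng.binomial(i, 1.0 - np.exp(-gamma))  # I -> R
        s, e, i, r = s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
        history.append((s, e, i, r))
    return np.array(history)

out = seir_run(beta=0.6, sigma=1 / 2.0, gamma=1 / 4.0)
print("peak infectious:", out[:, 2].max(), "final recovered:", out[-1, 3])
```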
Duarte, Regina Horta
2013-12-01
This essay examines contemporary Latin American historical writing about natural history from the nineteenth through the twentieth centuries. Natural history is a "network science," woven out of connections and communications between diverse people and centers of scholarship, all against a backdrop of complex political and economic changes. Latin American naturalists navigated a tension between promoting national science and participating in "universal" science. These tensions between the national and the universal have also been reflected in historical writing on Latin America. Since the 1980s, narratives that recognize Latin Americans' active role have become more notable within the renewal of the history of Latin American science. However, the nationalist slant of these approaches has kept Latin American historiography on the margins. The networked nature of natural history and Latin America's active role in it afford an opportunity to end the historiographic isolation of Latin America and situate it within world history.
Preferences of orthopedic surgeons for treating midshaft clavicle fracture in adults
de Oliveira, Adilson Sanches; Roberto, Bruno Braga; Lenza, Mario; Pintan, Guilherme Figueiredo; Ejnisman, Benno; Schor, Breno; Carrera, Eduardo da Frota; Murachovsky, Joel
2017-01-01
Objective To determine current clinical practice in Latin America for treating midshaft clavicle fractures, including surgical and non-surgical approaches. Methods A cross-sectional study using a descriptive questionnaire. Shoulder and elbow surgeons from the Brazilian Society of Shoulder and Elbow Surgery and from the Latin American Society of Shoulder and Elbow were contacted and asked to complete a short questionnaire (SurveyMonkey®) on the management of midshaft fractures of the clavicle. Incomplete or inconsistent answers were excluded. Results The radiographic classification most often used was based on the description of fracture morphology, according to 41% of participants. The Allman classification ranked second and was used by 24.1% of participants. Among indications for surgical treatment, only shortening and imminent skin exposure were statistically significant. Conservative treatment was chosen when cortical contact was present. Regarding immobilization method, the simple sling was preferred, and treatment lasted from 4 to 6 weeks. Although the result was not statistically significant, the locked plate was the preferred option in surgical cases. Conclusion The treatment of midshaft clavicle fractures in Latin America is in accordance with the current literature. PMID:29091151
Jayawardena, Asitha; Boardman, Allison; Cook, Thomas; Oprescu, Florin; Morcuende, Jose A
2011-01-01
This ethnographic study evaluated the use of low-bandwidth web-conferencing to enhance diffusion of a specific best practice, the Ponseti method to treat clubfoot, in three economically diverse countries in Latin America. A "Ponseti Virtual Forum" (PVF) was organized in Guatemala, Peru and Chile to examine the influences of economic level and telecommunication infrastructure on the effectiveness of this approach. Across the three countries, a total of 14 different sites participated in the PVFs. Thirty-three Ponseti-trained practitioners were interviewed before and after each PVF, which included interactions with a Spanish-speaking Ponseti method expert. Semi-structured interviews, observations, and IP address data were triangulated and analyzed. The results demonstrated that 100% of the practitioners rated the sessions as very useful and said that they would use this approach again. The largest obstacles to using PVFs were financial (7 out of 9 practitioners) in Guatemala; a lack of equipment and network access (6 out of 11) in Peru; and the organization and implementation of the conferences themselves (7 out of 9) in Chile. This study illustrates the usefulness of Ponseti Virtual Forums in Latin America. Health officials in Peru are currently developing a large-scale information session for traumatologists about the Ponseti method, while practitioners in Guatemala and Chile are organizing monthly scholarly meetings for physicians in remote areas. This initial feedback suggests that low-bandwidth web-conferencing can be an important vehicle for the dissemination of best practices, such as the Ponseti method, in developing countries.
"The South American Way": Hollywood Looks at Latins and at Latin America.
ERIC Educational Resources Information Center
Aiex, Nola Kortner
American films made for the North American market have long used Latin elements or themes, yet these same films have also played in the Latin American market, making it useful to examine how Latin America has been portrayed in them. The taste for exotic locales and themes is an element that has been present since the…
NASA Technical Reports Server (NTRS)
Holms, A. G.
1982-01-01
A previous report described a backward deletion procedure of model selection that was optimized for minimum prediction error and that used a multiparameter combination of the F-distribution and an order-statistics distribution due to Cochran. A computer program that applies the previously optimized procedure to real data is described. The use of the program is illustrated by examples.
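Backward deletion works by repeatedly dropping the regression term whose removal least degrades the fit, judged by a partial F-statistic against a threshold. A bare-bones sketch; the report's optimized multiparameter thresholds are replaced here by a single plain F cutoff, so this is the generic procedure rather than the optimized one:

```python
# Generic backward deletion for linear models: drop the weakest term while
# its partial F-statistic stays below the critical value.
import numpy as np
from scipy.stats import f as f_dist

def backward_delete(X, y, alpha=0.05):
    cols = list(range(X.shape[1]))
    while len(cols) > 1:
        rss = lambda c: np.sum(
            (y - X[:, c] @ np.linalg.lstsq(X[:, c], y, rcond=None)[0]) ** 2)
        rss_full, dof = rss(cols), len(y) - len(cols)
        f_stats = [(rss([c for c in cols if c != j]) - rss_full) / (rss_full / dof)
                   for j in cols]
        if min(f_stats) > f_dist.ppf(1.0 - alpha, 1, dof):
            break                     # weakest remaining term is significant
        cols.pop(int(np.argmin(f_stats)))
    return cols

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
y = 2.0 * X[:, 0] - X[:, 3] + rng.normal(0.0, 1.0, 100)
print("retained columns:", backward_delete(X, y))   # typically [0, 3]
```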
Software Techniques for Non-Von Neumann Architectures
1990-01-01
Communication topology: programmable Benes network; hypercubic lattice for QCD. Control: centralized. Assignment: static. Memory: shared. Synchronization: universal. Max CPUs: 566. Processor boards: each = 4 floating-point units, 2 multipliers. CPU size: 32-bit floating-point chips. Performance: 11.4 Gflops. Market: quantum chromodynamics (QCD). For functions there should exist a capability to define hierarchies and lattices of complex objects; a complex object can be made up of a set of simple objects.
NASA Astrophysics Data System (ADS)
Tasaki, Hal
2018-06-01
We study a quantum spin system on the d-dimensional hypercubic lattice Λ with N = L^d sites with periodic boundary conditions. We take an arbitrary translation-invariant short-ranged Hamiltonian. For this system, we consider both the canonical ensemble with inverse temperature β₀ and the microcanonical ensemble with the corresponding energy U_N(β₀). For an arbitrary self-adjoint operator Â whose support is contained in a hypercubic block B inside Λ, we prove that the expectation values of Â with respect to these two ensembles are close to each other for large N provided that β₀ is sufficiently small and the number of sites in B is o(N^{1/2}). This establishes the equivalence of ensembles on the level of local states in a large but finite system. The result is essentially that of Brandao and Cramer (here restricted to the case of the canonical and the microcanonical ensembles), but we prove improved estimates in an elementary manner. We also review and prove standard results on the thermodynamic limits of thermodynamic functions and the equivalence of ensembles in terms of thermodynamic functions. The present paper assumes only elementary knowledge of quantum statistical mechanics and quantum spin systems.
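Schematically, the local equivalence result being proved can be displayed as follows (notation as in the abstract; the explicit error bound and constants are in the paper itself):

```latex
% Equivalence of ensembles at the level of local states, schematically:
% canonical and microcanonical expectations of a local observable agree
% for large N when the supporting block B is small enough.
\left| \langle \hat{A} \rangle^{\mathrm{can}}_{\beta_0,\,N}
     - \langle \hat{A} \rangle^{\mathrm{mc}}_{U_N(\beta_0),\,N} \right|
\;\longrightarrow\; 0 \quad (N \to \infty),
\qquad \text{provided } |B| = o(N^{1/2})
\text{ and } \beta_0 \text{ is sufficiently small.}
```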
Global sensitivity analysis in wind energy assessment
NASA Astrophysics Data System (ADS)
Tsvetkova, O.; Ouarda, T. B.
2012-12-01
Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of the importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining these indices were applied and compared: the brute force method and the best practice estimation procedure. In this study, a methodology for conducting global SA of wind energy assessment at the planning stage is proposed. Three sampling strategies that form part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study, the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified from the ranking of the total-effect sensitivity indices. The results of the present research show that the brute force method is best for wind assessment purposes and that SBSS outperforms the other sampling strategies in the majority of cases. The results indicate that the Weibull scale parameter, turbine lifetime and Weibull shape parameter are the three most influential variables in the case study setting. The following conclusions can be drawn from these results: 1) SBSS should be recommended for use in Monte Carlo experiments, 2) the brute force method should be recommended for conducting sensitivity analysis in wind resource assessment, and 3) little variation in the Weibull scale causes significant variation in energy production. The presence of the two distribution parameters among the top three influential variables (the Weibull shape and scale) emphasizes the importance of accuracy in (a) choosing the distribution to model the wind regime at a site and (b) estimating the probability distribution parameters. This can be labeled as the most important conclusion of this research because it opens a field for further research which the authors believe could change the wind energy field tremendously.
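The brute force estimate of a first-order Sobol' index follows the definition S_i = Var(E[Y|X_i]) / Var(Y) directly: fix X_i at sampled values, average the output over the other inputs, then take the variance of those conditional means. A sketch on a toy response; the surrogate function and input ranges are invented, not the study's Weibull-based production model:

```python
# Brute-force first-order Sobol index: S_i = Var( E[Y | X_i] ) / Var(Y).
# The toy response stands in for lifetime energy production; inputs mimic
# a Weibull scale/shape pair and a turbine lifetime (illustrative only).
import numpy as np

rng = np.random.default_rng(42)

def energy(scale, shape, lifetime):
    # toy surrogate: production grows with the Weibull scale and lifetime
    return lifetime * scale ** 3 / (1.0 + shape)

samplers = [lambda n: rng.uniform(6.0, 10.0, n),   # Weibull scale (m/s)
            lambda n: rng.uniform(1.5, 2.5, n),    # Weibull shape
            lambda n: rng.uniform(15.0, 25.0, n)]  # lifetime (years)

def first_order(i, n_outer=200, n_inner=500):
    cond_means = []
    for x in samplers[i](n_outer):
        args = [s(n_inner) for s in samplers]
        args[i] = np.full(n_inner, x)              # freeze X_i, vary the rest
        cond_means.append(energy(*args).mean())
    total = energy(*[s(20_000) for s in samplers])
    return np.var(cond_means) / np.var(total)

for i, name in enumerate(["scale", "shape", "lifetime"]):
    print(f"S_{name} = {first_order(i):.2f}")
```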
Charpak, Nathalie; Ruiz, Juan Gabriel
2017-06-01
Kangaroo Mother Care (KMC) is a human-based care intervention devised to complement neonatal care for low birth weight and premature infants. The kangaroo position (skin-to-skin contact on the mother's chest) offers thermal regulation, physiological stability, and appropriate stimulation, and enhances bonding and breastfeeding. Kangaroo nutrition is based on breastfeeding, and the kangaroo discharge policy relies on family empowerment and early discharge in kangaroo position with close ambulatory follow-up. We describe how the evidence has been developed and how it has been put into practice by means of direct care of preterm infants and dissemination of the method, including the training of KMC excellence centers in many countries, not only in Latin America but worldwide. Copyright © 2016 Elsevier Inc. All rights reserved.
Dietary Interventions and Blood Pressure in Latin America - Systematic Review and Meta-Analysis
Mazzaro, Caroline Cantalejo; Klostermann, Flávia Caroline; Erbano, Bruna Olandoski; Schio, Nicolle Amboni; Guarita-Souza, Luiz César; Olandoski, Marcia; Faria-Neto, José Rocha; Baena, Cristina Pellegrino
2014-01-01
Background High blood pressure is the major risk factor for cardiovascular disease. Low blood pressure control rates in Latin American populations emphasize the need for gathering evidence on effective therapies. Objective To evaluate the effects of dietary interventions on blood pressure in Latin American populations. Methods Systematic review. Electronic databases (MEDLINE/PubMed, Embase, Cochrane Library, CINAHL, Web of Science, Scopus, SciELO, LILACS and VHL) were searched and a manual search for studies published up to April 2013 was performed. Parallel studies about dietary interventions in Latin American adult populations assessing arterial blood pressure (mm Hg) before and after intervention were included. Results Of the 405 studies identified, 10 randomized controlled trials were included and divided into 3 subgroups according to the proposed dietary intervention. There was a non-significant reduction in systolic blood pressure in the subgroups of mineral replacement (-4.82; 95% CI: -11.36 to 1.73) and complex pattern diets (-3.17; 95% CI: -7.62 to 1.28). Regarding diastolic blood pressure, except for the high-protein diet subgroup, all subgroups showed a significant reduction in blood pressure: -4.66 mmHg (95% CI: -9.21 to -0.12) and -4.55 mmHg (95% CI: -7.04 to -2.06) for mineral replacement and complex pattern diets, respectively. Conclusion Available evidence on the effects of dietary changes on blood pressure in Latin American populations indicates a homogeneous effect of those interventions, although not significant for systolic blood pressure. Samples were small and the quality of the studies was generally low. Larger studies are required to build robust evidence. PMID:24676220
Differences on Primary Care Labor Perceptions in Medical Students from 11 Latin American Countries
Mayta-Tristán, Percy; Montenegro-Idrogo, Juan José; Mejia, Christian R.; Abudinén A., Gabriel; Azucas-Peralta, Rita; Barrezueta-Fernandez, Jorge; Cerna-Urrutia, Luis; DaSilva-DeAbreu, Adrián; Mondragón-Cardona, Alvaro; Moya, Geovanna; Valverde-Solano, Christian D.; Theodorus-Villar, Rhanniel; Vizárraga-León, Maribel
2016-01-01
Background The shortage in the Latin American primary care (PC) workforce may be due to negative perceptions about it. These perceptions are probably influenced by particular features of health systems and academic environments, and thus vary between countries. Methods Observational, analytic and cross-sectional multicountry study that evaluated 9,561 first- and fifth-year medical students from 63 medical schools in 11 Latin American countries through a survey. Perception of PC work was evaluated through a previously validated scale. Tertiles of the scores were created in order to compare the different countries. Crude and adjusted prevalence ratios were calculated using simple and multiple Poisson regression with robust variance. Results Approximately 53% of subjects were female; mean age was 20.4±2.9 years; 35.5% were fifth-year students. Statistically significant differences were found across the study subjects' countries, with Peru as the reference. Students from Chile, Colombia, Mexico and Paraguay perceived PC work more positively, while those from Ecuador showed a less favorable position. No differences were found among perceptions of Bolivian, Salvadoran, Honduran and Venezuelan students when compared to their Peruvian peers. Conclusions Perceptions of PC among medical students from Latin America vary according to country. Considering such differences can be of major importance for potential local specific interventions. PMID:27414643
Bantar, C; Curcio, D; Fernandez Canigia, L; García, P; Guzmán Blanco, M; Leal, A L
2009-04-01
The present study was performed to evaluate the in vitro activity of tigecycline in comparison to other agents against isolates recovered from patients hospitalized in Latin America. Organisms were collected in 47 clinical laboratories from 4 countries of Latin America between November 2005 and October 2006 and were tested using the disk diffusion method as described by the CLSI. A total of 7966 isolates were assessed. Tigecycline proved highly active against staphylococci and enterococci (>99% susceptibility). Imipenem was the most active agent against Escherichia coli (100% susceptibility), followed by tigecycline (98.6% susceptibility). Resistance to cefotaxime in this species was 15.3%. Global tigecycline susceptibility of Klebsiella species was 90.2%, but the susceptibility rate was significantly lower in Venezuela (82%) than in Argentina, Colombia and Chile (93%) (p<0.01). Global cefotaxime resistance in Klebsiella spp. was 32.2%, and carbapenem resistance was detected in all countries. By adopting a susceptible breakpoint of ≥16 mm, 91.3% of the Acinetobacter isolates proved susceptible to tigecycline. Results from the present study suggest that tigecycline may be a suitable option in Latin America, a region where multidrug resistance seems to be a dramatic, increasing problem and new antimicrobial choices are urgently needed.
Perrin, Paul B.; Panyavin, Ivan; Morlett Paredes, Alejandra; Aguayo, Adriana; Macias, Miguel Angel; Rabago, Brenda; Picot, Sandra J. Fulton; Arango-Lasprilla, Juan Carlos
2015-01-01
Background. Multiple sclerosis (MS) rates in Latin America are increasing, and caregivers there experience reduced mental and physical health. Based on rigid gender roles in Latin America, women more often assume caregiving duties, yet the differential impact on women of these duties is unknown. Methods. This study examined gender differences in mental health (Patient Health Questionnaire-9, Satisfaction with Life Scale, Rosenberg Self-Esteem Scale, State-Trait Anxiety Inventory, and Zarit Burden Inventory), health-related quality of life (HRQOL; Short Form-36), and social support (Interpersonal Support Evaluation List-12) in 81 (66.7% women) Mexican MS caregivers. Results. As compared to men caregivers, women had lower mental health (p = 0.006), HRQOL (p < 0.001), and social support (p < 0.001). This was partially explained by women caregivers providing care for nearly twice as many hours/week as men (79.28 versus 48.48, p = 0.018) and for nearly three times as many months (66.31 versus 24.30, p = 0.002). Conclusions. Because gender roles in Latin America influence women to assume more substantial caregiving duties, MS caregiver interventions in Latin America—particularly for women caregivers—should address the influence of gender-role conformity on care and psychosocial functioning. PMID:26538818
Sapag, Jaime C; Rush, Brian; Ferris, Lorraine E
2016-02-01
This study examined Latin American evaluation needs regarding the development of a collaborative mental health care (CMHC) evaluation framework as seen by local key health-care leaders and professionals. Potential implementation challenges and opportunities were also identified. This multisite research study used an embedded mixed methods approach in three public health networks in Mexico, Nicaragua and Chile. Local stakeholders participated: decision-makers in key informant interviews, front-line clinicians in focus groups and other stakeholders through a survey. The analysis was conducted within site and then across sites. A total of 22 semi-structured interviews, three focus groups and 27 questionnaires (52% response rate) were conducted. Participants recognized a strong need to evaluate different areas of CMHC in Latin America, including access, types and quality of services, human resources and outcomes related to mental disorders, including addiction. A priority was to evaluate collaboration within the health system, including the referral system. Issues of feasibility, including the weaknesses of information systems, were also identified. Local stakeholders strongly supported the development of a comprehensive evaluation framework for CMHC in Latin America and cited several dimensions and contextual factors critical for inclusion. Implementation must allow flexibility and adaptation to the local context. © 2015 John Wiley & Sons Ltd.
Santos-Moreno, Pedro; Galarza-Maldonado, Claudio; Caballero-Uribe, Carlo V.; Cardiel, Mario H.; Massardo, Loreto; Soriano, Enrique R.; Olano, José Aguilar; Díaz Coto, José F.; Durán Pozo, Gabriel R.; da Silveira, Inês Guimarães; de Castrejón, Vianna J. Khoury; Pérez, Leticia Lino; Méndez Justo, Carlos A.; Montufar Guardado, Rubén A.; Muños, Rafael; Elvir, Sergio Murillo; Paredes Domínguez, Ernesto R.; Pons-Estel, Bernardo; Ríos Acosta, Carlos R.; Sandino, Sayonara; Toro Gutiérrez, Carlos E.; Villegas de Morales, Sol María; Pineda, Carlos
2015-01-01
Objective A consensus meeting of representatives of 16 Latin American and Caribbean countries and the REAL-PANLAR group met in the city of Bogota to provide recommendations for improving quality of care of patients with rheumatoid arthritis (RA) in Latin America, defining minimum standards of care and the concept of a center of excellence in RA. Methods Twenty-two rheumatologists from 16 Latin American countries with a special interest in quality of care in RA participated in the consensus meeting. Two Colombian RA patients and 2 health care excellence advisors were also invited to the meeting. A RAND-modified Delphi procedure of 5 steps was applied to define categories of centers of excellence. During a 1-day meeting, working groups were created in order to discuss and validate the minimum quality-of-care standards for the 3 proposed types of centers of excellence in RA. Positive votes from at least 60% of the attending leaders were required for the approval of each standard. Results Twenty-two opinion leaders from the PANLAR countries and the REAL-PANLAR group participated in the discussion and definition of the standards. One hundred percent of the participants agreed with setting up centers of excellence in RA throughout Latin America. Three types of centers of excellence and their criteria were defined, according to indicators of structure, processes, and outcomes: standard, optimal, and model. The standard level should have basic structure and process indicators, the intermediate or optimal level should meet additional structure and process indicators, and the model level should also fulfill outcome indicators and patient-experience measures. Conclusions This is the first Latin American effort to standardize and harmonize the treatment provided to RA patients and to establish centers of excellence that would offer RA patients acceptable clinical results and high levels of safety. PMID:26010179
"In our own words": Defining medical professionalism from a Latin American perspective.
Puschel, Klaus; Repetto, Paula; Bernales, Margarita; Barros, Jorge; Perez, Ivan; Snell, Linda
2017-01-01
Latin America has experienced tremendous growth in the number of medical schools, and there are concerns about their quality of training in critical areas such as professionalism. Medical professionalism is a cultural construct. The aim of the study was to compare published definitions of medical professionalism from Latin American and non-Latin American regions and to design an original and culturally sound definition. A mixed-methods approach was used, with three phases. First, a systematic search and thematic analysis of the literature were conducted. Second, a Delphi methodology was used to design a local definition of medical professionalism. Third, we used a qualitative approach that combined focus groups and personal interviews with students and deans from four medical schools in Chile to understand various aspects of professionalism education. The data were analyzed using NVivo software. A total of 115 nonrepeated articles were identified in the three databases searched. No original definitions of medical professionalism from Latin America were found. Twenty-six articles met at least one of the three decisional criteria defined and were fully reviewed. Three theoretical perspectives were identified: contractualism, personalism, and deontology. Attributes of medical professionalism were classified in five dimensions: personal, interpersonal, societal, formative, and practical. Participants in the Delphi panel, focus groups, and personal interviews included 36 medical students, 12 faculty members, and four deans. They took a personalistic approach to designing an original definition of medical professionalism and highlighted the relevance of respecting life, human dignity, and the virtue of prudence in medical practice. Students and scholars differed on the value given to empathy and compassion. This study provides an original and culturally sound definition of medical professionalism that could be useful in Latin American medical schools. The methodology used in the study could be applied in other regions as a basis to develop culturally appropriate definitions of medical professionalism.
French Latin Association Demands Full Restoration of Secondary Classics Curriculum
ERIC Educational Resources Information Center
Denegri, Raymond
1978-01-01
Discusses the demands of the Association pour la Defense du Latin that Latin be returned to the French secondary school curriculum and that the Classics-Science baccalaureat be restored as one of the French secondary programs. Benefits of studying Latin are enumerated. (KM)
Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.
Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao
2017-11-01
Hashing has proved an attractive technique for fast nearest neighbor search over big data. Compared with projection-based hashing methods, prototype-based ones have stronger power to generate discriminative binary codes for data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from ineffective coding that utilizes the complete set of binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small unique binary codes. Our alternating optimization adaptively discovers the prototype set and the code set of a varying size in an efficient way, and together they robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys fast training that is linear in the number of training data. We further devise a distributed framework for large-scale learning, which can significantly speed up the training of ABQ in the distributed environments that are now widely deployed in many areas. The extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with relative performance gains of up to 58.84%.
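The prototype-based idea can be caricatured in a few lines: learn prototypes, attach a short binary code to each, encode every point by its prototype's code, and answer queries by Hamming distance. A toy sketch only; ABQ jointly learns the prototype set and the code set, whereas here both are fixed naively:

```python
# Toy prototype-based binary quantization: k-means prototypes with fixed
# random short codes; points inherit their prototype's code and search
# proceeds by Hamming distance. Not the paper's ABQ optimization.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 32))

k, bits = 16, 8
prototypes, labels = kmeans2(data, k, minit="++", seed=0)
codes = rng.integers(0, 2, size=(k, bits), dtype=np.uint8)  # naive code set
db_codes = codes[labels]                          # one short code per point

def search(query, top=5):
    nearest = np.argmin(np.linalg.norm(prototypes - query, axis=1))
    hamming = (db_codes != codes[nearest]).sum(axis=1)
    return np.argsort(hamming)[:top]              # smallest Hamming distance

print(search(data[0]))
```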
Parallel solution of the symmetric tridiagonal eigenproblem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessup, E.R.
1989-01-01
This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory MIMD multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effects of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
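The accuracy trade-off discussed in the thesis is easy to probe with LAPACK's tridiagonal drivers, where 'stebz' performs bisection with inverse iteration for the eigenvectors. A sketch measuring the two quality criteria named above, residual error and eigenvector orthogonality; the comparison driver 'stemr' (MRRR) is a modern stand-in, not a method from the thesis:

```python
# Compare tridiagonal eigensolvers on residual ||T v - lambda v|| and
# orthogonality ||V^T V - I||; 'stebz' = bisection + inverse iteration.
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(0)
n = 500
d = rng.normal(size=n)        # diagonal entries
e = rng.normal(size=n - 1)    # off-diagonal entries

T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
for driver in ("stebz", "stemr"):
    w, v = eigh_tridiagonal(d, e, lapack_driver=driver)
    residual = np.linalg.norm(T @ v - v * w)   # columns of v are eigenvectors
    ortho = np.linalg.norm(v.T @ v - np.eye(n))
    print(f"{driver}: residual={residual:.2e}  orthogonality={ortho:.2e}")
```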
Latin Curriculum Standards. Revised.
ERIC Educational Resources Information Center
Delaware State Dept. of Public Instruction, Dover.
Delaware's state standards for the Latin curriculum in the public schools are presented. An introductory section outlines the goals of the Latin program for reading, cultural awareness, grammar, writing, and oral language and briefly discusses the philosophy of and approaches to Latin instruction in elementary and middle schools. Three subsequent…
Latin American immigrants have limited access to health insurance in Japan: a cross sectional study
2012-01-01
Background Japan provides universal health insurance to all legal residents. Prior research has suggested that immigrants to Japan disproportionately lack health insurance coverage, but no prior study has used rigorous methodology to examine this issue among Latin American immigrants in Japan. The aim of our study, therefore, was to assess the pattern of health insurance coverage and predictors of uninsurance among documented Latin American immigrants in Japan. Methods We used a cross-sectional, mixed-methods approach using a probability proportional to estimated size sampling procedure. Of 1052 eligible Latin American residents mapped through extensive fieldwork in selected clusters, 400 immigrant residents living in Nagahama City, Japan were randomly selected for our study. Data were collected through face-to-face interviews using a structured questionnaire developed from qualitative interviews. Results Our response rate was 70.5% (n = 282). Respondents were mainly from Brazil (69.9%), under 40 years of age (64.5%), and had lived in Japan for a mean of 9.45 years (SE 0.44; median, 8.00). We found a high prevalence of uninsurance (19.8%) in our sample compared with the estimated national average of 1.3% in the general population. Among insured full-time workers (n = 209), 55.5% were not covered by the Employees' Health Insurance. Many immigrants cited financial trade-offs as the main reasons for uninsurance. Lack of knowledge that health insurance is mandatory in Japan, not having a chronic disease, and having one or no children were strong predictors of uninsurance. Conclusions Lack of health insurance for immigrants in Japan is a serious concern for this population as well as for the Japanese health care system. Appropriate measures should be taken to facilitate access to health insurance for this vulnerable population. PMID:22443284
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping; Nadler, Walter; Grassberger, Peter
2005-07-01
The scaling behavior of randomly branched polymers in a good solvent is studied in two to nine dimensions, modeled by lattice animals on simple hypercubic lattices. For the simulations, we use a biased sequential sampling algorithm with re-sampling, similar to the pruned-enriched Rosenbluth method (PERM) used extensively for linear polymers. We obtain high statistics of animals with up to several thousand sites in all dimensions 2 ⩽ d ⩽ 9. The partition sum (number of different animals) and gyration radii are estimated. In all dimensions we verify the Parisi-Sourlas prediction, and we verify all exactly known critical exponents in dimensions 2, 3, 4, and ⩾ 8. In addition, we present the hitherto most precise estimates for growth constants in d ⩾ 3. For clusters with one site attached to an attractive surface, we verify the superuniversality of the cross-over exponent at the adsorption transition predicted by Janssen and Lyssy.
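For intuition, the partition sum (the number of distinct animals of a given size) can be checked exactly for small sizes in d = 2 by brute-force growth: extend each animal by one neighboring site and deduplicate up to translation. A naive sketch; sampling methods such as the one above take over where enumeration explodes:

```python
# Exact count of fixed lattice animals (polyominoes) on the square lattice:
# grow each animal by one neighboring cell, normalize by translation,
# deduplicate. Feasible only for small sizes.
def normalize(cells):
    mx = min(x for x, _ in cells)
    my = min(y for _, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def grow(animals):
    bigger = set()
    for a in animals:
        for x, y in a:
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb not in a:
                    bigger.add(normalize(a | {nb}))
    return bigger

animals = {frozenset({(0, 0)})}
for n in range(1, 8):
    print(n, len(animals))    # 1, 2, 6, 19, 63, 216, 760 for n = 1..7
    animals = grow(animals)
```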
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.
Feasibility of a Latin Dance Program for Older Latinos With Mild Cognitive Impairment.
Aguiñaga, Susan; Marquez, David X
2017-12-01
This study investigates the feasibility of a Latin dance program in older Latinos with mild cognitive impairment (MCI) via a mixed-methods randomized controlled feasibility design. Spanish-speaking older Latinos (N = 21, 75.4 [6.3] years old, 16 females/5 males, 22.4 [2.8] Mini-Mental State Examination [MMSE] score) were randomized into a 16-week dance intervention (BAILAMOS) or a wait-list control; the control group crossed over at week 17 and received the dance intervention. Feasibility was determined by assessing reach, retention, attendance, dance logs, and postintervention focus groups. Reach was 91.3% of people who were screened and eligible. Program retention was 95.2%. The dropout rate was 42.8% (n = 9), and attendance for all participants was 55.76%. The focus group data revealed 4 themes: enthusiasm for dance, positive aspects of BAILAMOS, unfavorable aspects of BAILAMOS, and physical well-being after BAILAMOS. In conclusion, older Latinos with MCI find Latin dance to be an enjoyable and safe mode of physical activity.
2011-01-01
Background To demonstrate the tobacco industry rationale behind the "Spanish model" of non-smokers' protection in hospitality venues and the impact it had on some European and Latin American countries between 2006 and 2011. Methods Tobacco industry documents research triangulated against news and media reports. Results As an alternative to the successful implementation of 100% smoke-free policies, several European and Latin American countries introduced partial smoking bans based on the so-called "Spanish model", a legal framework widely advocated by parts of the hospitality industry with striking similarities to "accommodation programmes" promoted by the tobacco industry in the late 1990s. These developments started with the implementation of the Spanish tobacco control law (Ley 28/2005) in 2006 and have increased since then. Conclusion The Spanish experience demonstrates that partial smoking bans often resemble tobacco industry strategies and are used to spread a failed approach at the international level. Researchers, advocates and policy makers should be aware of this ineffective policy. PMID:22151884
The Bologna Process from a Latin American Perspective
ERIC Educational Resources Information Center
Brunner, Jose Joaquin
2009-01-01
Although Latin America's geography, history, and languages might seem a suitable foundation for a Bologna-type process, the development of a common Latin American higher education and research area faces predictable difficulties. The reasons are to be found in the continent's historic and modern institutional patterns. Latin American governments…
Latin America and the United States: What Do United States History Textbooks Tell Us?
ERIC Educational Resources Information Center
Fleming, Dan B.
1982-01-01
Evaluates how U.S.-Latin American relations are presented in high school U.S. history textbooks. An examination of 10 textbooks published between 1977 and 1981 revealed inadequate coverage of Latin American cultural diversity and of United States foreign policy from the Latin American perspective. (AM)
HRD in Latin America. Symposium 5. [AHRD Conference, 2001].
ERIC Educational Resources Information Center
2001
This document contains three papers on human resource development (HRD) in Latin America. "Looking at the Literature on Workplace Democracy in Latin America: Factors in Favor and Against It" (Max U. Montesino) discusses selected issues related to workplace democracy in Latin America and identifies salient issues for further research,…
Apuntes sobre Latinismos en Espanol (Notes on Latinisms in Spanish)
ERIC Educational Resources Information Center
Perez B., L. A.
1977-01-01
Several Latinisms appear in Latin American Spanish, which would logically be farther from its Latin roots than the Spanish of Spain. The existence of these elements and their importance as linguistic facts are analyzed here. Four words are treated: "Cliente," "cuadrar," "cuarto" and "rabula." (Text is in Spanish.) (CHK)
Textbooks in Greek and Latin: 1975 List
ERIC Educational Resources Information Center
McCarty, Thomas G.
1975-01-01
List of textbooks in Greek and Latin for 1975. Subject, title, publisher and price are noted. Greek and Latin works are listed separately under the eight categories of texts, beginner's books, grammars, books about the language, readers and anthologies, composition, dictionaries, and New Testament Greek and Later Latin. (RM)
Internationalizing Business Education in Latin America: Issues and Challenges
ERIC Educational Resources Information Center
Elahee, Mohammad; Norbis, Mario
2009-01-01
This article examines the extent of internationalization of business education in Latin America and identifies the key challenges facing the Latin American business schools. Based on a survey of the business schools that are members of CLADEA (Consejo Latinoamericano de Escuelas de Administracion--Latin American Council of Management Schools), and…
Parallel Processing and Scientific Applications
1992-11-30
Topics include lattice QCD calculations on the Connection Machine (SIAM News 24, May 1991) and the crumpling of dynamically triangulated random surfaces (C. F. Baillie and D. A. Johnston). Three surface discretizations are compared: in the first, the surface lives on a hypercubic lattice; in the second, the surface is randomly triangulated once at the beginning of the simulation; and in the third, the triangulation itself is dynamical. Related work includes QCD with dynamical Wilson fermions (Phys. Rev. D44, 3272 (1991)) and the critical behavior of the 2D XY model (R. Gupta and C. F. Baillie).
Mapping Flows onto Networks to Optimize Organizational Processes
2005-01-01
NASA Technical Reports Server (NTRS)
Collins, Earl R., Jr.
1990-01-01
Authorized users respond to changing challenges with changing passwords. This scheme for controlling access to computers defeats eavesdroppers and "hackers". It is based on the challenge-and-password (or sign, challenge, and countersign) system, with responses correlated with random alphanumeric codes in matrices of two or more dimensions. Codes are stored on a floppy disk or plug-in card and changed frequently. For even higher security, matrices of four or more dimensions are used, just as cubes are compounded into hypercubes in concurrent processing.
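The scheme reduces to indexing a shared secret array by a multi-part challenge. A toy two-dimensional sketch; a real deployment would use the higher-dimensional matrices, frequent rotation, and removable storage described above:

```python
# Toy challenge/countersign scheme: both sides hold the same random matrix
# of codes; the challenger sends coordinates, the responder returns the
# code stored at those coordinates. Illustrative only.
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits
SIZE = 16

def make_matrix():
    return [[''.join(secrets.choice(ALPHABET) for _ in range(4))
             for _ in range(SIZE)] for _ in range(SIZE)]

shared = make_matrix()          # distributed to users on disk or card

row, col = secrets.randbelow(SIZE), secrets.randbelow(SIZE)   # challenge
countersign = shared[row][col]                                # response

# verifier side: look up its own copy and compare
assert countersign == shared[row][col]
print("challenge", (row, col), "->", countersign)
```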
Silva, Gisele Sampaio; Ameriso, Sebastián F.
2017-01-01
Atrial fibrillation (AF) is a prominent risk factor for stroke and a leading cause of death and disability throughout Latin America. Contemporary evidence-based guidelines for the management of AF and stroke incorporate the use of practical and relatively simple scoring methods to estimate both stroke and bleeding risk, in order to assist in matching patients with appropriate interventions. This review examines consistencies and differences among guidelines for reducing stroke risk in patients with AF, assessing the role of user-friendly scoring methods to determine appropriate patients for anticoagulation and other treatment options. Current options include warfarin and direct oral anticoagulants such as dabigatran, rivaroxaban, apixaban, and edoxaban. These agents have been found to be superior or noninferior to standard vitamin K antagonist anticoagulation in large randomized trials. Potential benefits of these agents mainly include lower ischemic stroke rates, reduced intracranial bleeding, no need for regular monitoring, and fewer drug–drug and drug–food interactions. Expert opinions regarding clinical situations for which data are presently lacking, such as emergency bleeding and stroke in anticoagulated patients, are also provided. Enhanced attention and adherence to evidence-based guidelines are essential components for a strategy to reduce stroke morbidity and mortality across Latin America. PMID:28992764
ERIC Educational Resources Information Center
Masciantonio, Rudolph; And Others
This gamebook is intended to assist the elementary school Latin teacher in introducing the reading and writing of English derivatives and cognates after these have been mastered audiolingually in the Latin course "How the Romans Lived and Spoke (Romani Viventes et Dicentes): A Humanistic Approach to Latin for Children in the Fifth Grade." (For the…
[Scientific journals of medical students in Latin-America].
Cabrera-Samith, Ignacio; Oróstegui-Pinilla, Diana; Angulo-Bazán, Yolanda; Mayta-Tristán, Percy; Rodríguez-Morales, Alfonso J
2010-11-01
This article deals with the history and evolution of students' scientific journals in Latin America: their beginnings, how many still exist, and their future projection. Relevant events show the growth of students' scientific journals in Latin America and how they are working together to improve their quality. This article is addressed not only to Latin American readers but also to readers worldwide. Latin American medical students are consistently working together to publish scientific research whose quality is constantly improving.
Sickle cell in Latin America and the United States [corrected].
Huttle, Alexandra; Maestre, Gladys E; Lantigua, Rafael; Green, Nancy S
2015-07-01
Latin Americans are an underappreciated population affected by sickle cell disease (SCD). Sickle trait and SCD exist throughout Latin America and U.S. Latino communities. We describe the epidemiology and genetic heterogeneity of SCD among Latin Americans, and fetal hemoglobin expression. National population-based newborn screening for SCD is limited to Brazil, Costa Rica, and the U.S. Available and extrapolated data suggest that over 6,000 annual births and 100,000-150,000 Latin Americans are affected by SCD. This comprehensive review highlights the substantial numbers and population distribution of SCD and sickle trait in Latin America, and where national newborn screening programs for SCD exist. © 2015 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Aboites, Hugo
2010-01-01
Through the "Tuning-Latin America" competencies project, Latin American universities have been incorporated into the Bologna Process. In 2003 the European Commission approved an initiative of this project for Latin America and began to promote it among ministries, university presidents' organisations and other institutions in Latin…
Considerations for Integrating Technology in Developing Communities in Latin America
ERIC Educational Resources Information Center
Ponte, Daniela Núñez; Cullen, Theresa A.
2013-01-01
This article discusses issues related to introducing new information and communication technologies (ICT) into Latin American countries. Latin American countries are gaining world focus with political changes such as the death of Hugo Chavez in Venezuela and the election of the first Latin American Pope. This region will host the World Cup,…
Latin American Immigrant Women and Intergenerational Sex Education
ERIC Educational Resources Information Center
Alcalde, Maria Cristina; Quelopana, Ana Maria
2013-01-01
People of Latin American descent make up the largest and fastest-growing minority group in the USA. Rates of pregnancy, childbirth, and sexually transmitted infections among people of Latin American descent are higher than among other ethnic groups. This paper builds on research that suggests that among families of Latin American descent, mothers…
ERIC Educational Resources Information Center
Emanouilidis, Emanuel
2008-01-01
Latin squares were first introduced and studied by the famous mathematician Leonhard Euler in the 1700s. Through the years, Latin squares have been used in areas such as statistics, graph theory, coding theory, the generation of random numbers as well as in the design and analysis of experiments. Recently, with the international popularity of…
Colloquamur Latine cum Pueris Puellesque: Latin in the Middle School.
ERIC Educational Resources Information Center
Graber, Charles F.; Norton, Harriet S.
Guidelines for the development of a Latin curriculum for the middle school, with specific suggestions as to content and methodology, are presented in this manual. The material, oriented toward new approaches in the teaching of the Latin and Graeco-Roman cultures, strives to develop proficiency in the skills of listening, speaking, reading, and…
Reis, Rodrigo S.; Kelly, Cheryl M.; Parra, Diana C.; Barros, Mauro; Gomes, Grace; Malta, Deborah; Schmid, Thomas; Brownson, Ross C.
2016-01-01
Objective To identify the highest priorities for research on environmental and policy changes for promoting physical activity (PA) in Brazil; to uncover any gaps between researchers' and practitioners' priorities; and to consider which tools, methods, collaborative strategies, and actions could be useful to moving a research agenda forward. Methods This was a mixed-methods study (qualitative and quantitative) conducted by Project GUIA (Guide for Useful Interventions for Activity in Brazil and Latin America) in February 2010–January 2011. A total of 240 individuals in the PA field (186 practitioners and 54 researchers) were asked to generate research ideas; 82 participants provided 266 original statements from which 52 topics emerged. Participants rated topics by “importance” and “feasibility;” a separate convenience sample of 21 individuals categorized them. Cluster analysis and multidimensional scaling were used to create concept maps and pattern matches. Results Five distinct clusters emerged from the concept mapping, of which “effectiveness and innovation in PA interventions” was rated most important by both practitioners and researchers. Pattern matching showed a divergence between the groups, especially regarding feasibility, where there was no consensus. Conclusions The study results provided the basis for a research agenda to advance the understanding of environmental and policy influences on PA promotion in Brazil and Latin America. These results should stimulate future research and, ultimately, contribute to the evidence-base of successful PA strategies in Latin America. PMID:23099869
Reveiz, Ludovic; Elias, Vanessa; Terry, Robert F; Alger, Jackeline; Becerra-Posada, Francisco
2013-07-01
To compare health research priority-setting methods and characteristics among countries in Latin America and the Caribbean during 2002 - 2012. This was a systematic review that identified national health research policies and priority agendas through a search of ministry and government databases related to health care institutions. PubMed, LILACS, the Health Research Web, and others were searched for the period from January 2002 - February 2012. The study excluded research organized by governmental institutions and specific national strategies on particular disease areas. Priority-setting methods were compared to the "nine common themes for good practice in health research priorities." National health research priorities were compared to those of the World Health Organization's Millennium Development Goals (MDG). Of the 18 Latin American countries assessed, 13 had documents that established national health research priorities; plus the Caribbean Health Research Council had a research agenda for its 19 constituents. These 14 total reports varied widely in terms of objectives, content, dissemination, and implementation; most provided a list of strategic areas, suggestions, and/or sub-priorities for each country; however, few proposed specific research topics and questions. Future reports could be improved by including more details on the comprehensive approach employed to identify priorities, on the information gathering process, and on practices to be undertaken after priorities are set. There is a need for improving the quality of the methodologies utilized and coordinating Regional efforts as countries strive to meet the MDG.
From magic to science: a journey throughout Latin American medical mycology.
San-Blas, G
2000-01-01
The start of Latin America's love story with fungi may be placed in pre-Hispanic times, when the use of fungi in both ritual ceremonies and daily life was common among the native civilizations. The discipline of medical mycology in Latin America, however, started at the end of the 19th century. At that time, scholars such as A. Posadas, R. Seeber, A. Lutz and P. Almeida discovered agents of fungal diseases, the study of which has influenced regional research ever since. Their heirs are the researchers who today thrive in regional universities and research institutes. Two current initiatives improve cooperation among Latin American medical mycologists: first, the periodic organization of International Paracoccidioidomycosis Meetings (seven so far, from 1979 to 1999); second, the creation of the Latin American Association for Mycology in 1991 (three Congresses, from 1993 to 1999). Latin American publications have increased in international specialized journals such as that of our Society (ISHAM) (from 8% in 1967 to 19% in 1999) and the Iberoamerican Journal of Mycology (Revista Iberoamericana de Micologia; >40% from 1997 to 1999). In addition, Latin American participation at ISHAM International Congresses has risen from 6.9% in 1975 to 21.3% in 1997, and to 43.2% at the 14th ISHAM Congress, held for the first time in a Latin American country, Argentina. A significant contribution of women to the scientific establishment of Latin American medical mycology (e.g., in 1987, 45% of Latin American papers published in the Journal of Medical and Veterinary Mycology had women as authors or coauthors, vs. 18% of papers from other regions) suggests better academic standing for Latin American women than for their counterparts in the developed world. Taken together, all these figures reflect the enthusiasm of our Latin American colleagues in the field, despite the difficulties that afflict our region and affect our work.
[External evaluation of population-based cancer registries: the REDEPICAN Guide for Latin America].
Navarro, Carmen; Molina, José Antonio; Barrios, Enrique; Izarzugaza, Isabel; Loria, Dora; Cueva, Patricia; Sánchez, María José; Chirlaque, María Dolores; Fernández, Leticia
2013-11-01
To evaluate the feasibility of the REDEPICAN Guide (Red Iberoamericana de Epidemiología y Sistemas de Información en Cáncer) and its adaptation to the current situation of population-based cancer registries (PBCRs) in Latin America and the Caribbean as a useful tool to improve these registries. Experts in cancer registries and health audits designed the guide and developed seven domains for evaluating PBCRs. Several criteria were selected for each domain, with corresponding standards, scored according to three levels of compliance. Two training courses for external evaluators and three expert discussion panels were organized. The guide was tested in six PBCRs in Latin America and Spain. The guide contains 68 criteria, 10 of which are considered essential for a PBCR. Based on its score, a registry is rated as acceptable (41-199), good (200-299), or excellent (300-350). The registry methods domain accounts for 25% of the score, followed by completeness and validity (19%), dissemination of outcomes (19%), structure (13%), confidentiality and ethical aspects (11%), comparability (9%), and the procedures manual (3%). The pilot project enabled (1) enhancement of the criteria and standards, (2) expansion of the quality concept to include client needs, and (3) strengthening of the dissemination-of-outcomes section. Two of the evaluated Latin American registries improved their quality, meeting the standards of the International Agency for Research on Cancer. Development of the REDEPICAN Guide has taken the context of registries in Latin America into account; the guide is a useful and innovative tool for improving the quality of PBCRs and is ready for use in other countries and registries.
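The scoring bands and domain weights quoted above are simple enough to capture in a few lines. A minimal sketch follows, assuming the percentages refer to the 350-point total; the function and names are illustrative, not part of the guide itself:

```python
# Sketch of the REDEPICAN scoring bands described above. The per-domain
# weights are the percentages quoted in the abstract (rounded, so they sum
# to ~0.99 rather than exactly 1.0).
DOMAIN_WEIGHTS = {
    "registry methods": 0.25,
    "completeness and validity": 0.19,
    "dissemination of outcomes": 0.19,
    "structure": 0.13,
    "confidentiality and ethical aspects": 0.11,
    "comparability": 0.09,
    "procedures manual": 0.03,
}

def classify(score: int) -> str:
    """Map a total score (out of 350) to the band used by the guide."""
    if 300 <= score <= 350:
        return "excellent"
    if 200 <= score <= 299:
        return "good"
    if 41 <= score <= 199:
        return "acceptable"
    return "below the acceptable threshold"

print(classify(245))                 # -> good
print(sum(DOMAIN_WEIGHTS.values()))  # -> ~0.99 (rounded percentages)
```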
Toro, Carlos; Trevisi, Patricia; López-Quintana, Beatriz; Amor, Aránzazu; Iglesias, Nuria; Subirats, Mercedes; de Guevara, Concepción Ladrón; Lago, Mar; Arsuaga, Marta; de la Calle-Prieto, Fernando; Herrero, Dolores; Rubio, Margarita; Puente, Sabino; Baquero, Margarita
2017-03-01
Epidemiological data on dengue in Africa are still scarce. We investigated imported dengue infection among travelers, with a high proportion of subjects from Africa, over a 9-year period. From January 2005 to December 2013, blood samples from travelers with clinical suspicion of dengue were analyzed. Dengue was diagnosed using serological, antigen detection, and molecular methods. Subjects were classified according to birthplace (Europeans versus non-Europeans) and last country visited. Overall, 10,307 serum samples corresponding to 8,295 patients were studied; 62% were European travelers, most of them from Spain, and 35.9% were non-Europeans, the majority of whom were born in Africa (mainly Equatorial Guinea) and Latin America (mainly Bolivia, Ecuador, and Colombia). A total of 492 cases of dengue were identified, the highest number corresponding to subjects who had traveled from Africa (N = 189), followed by Latin America (N = 174) and Asia (N = 113). The rate of cases for Africa (4.5%) was lower than for Asia (9%) and Latin America (6.1%). Three peaks of dengue were found (2007, 2010, and 2013), which correlated with African cases. A total of 2,157 past dengue infections were diagnosed. Non-Europeans who had traveled from Africa had the highest rate of past infection (67.8%), compared with non-Europeans traveling from Latin America (38.7%) or Asia (35%). Dengue infection in certain regions of Africa is underreported, and the burden of the disease may have a magnitude similar to that in endemic countries of Latin America. It is necessary to consider dengue in the differential diagnosis of other febrile diseases in Africa.
A soil-specific agro-ecological strategy for sustainable production in Argentina farm fields
NASA Astrophysics Data System (ADS)
Zamora, Martin; Barbera, Agustin; Castro-Franco, Mauricio; Hansson, Alejandro; Domenech, Marisa
2017-04-01
Continuously increasing application frequencies and doses of pesticides, glyphosate, and fertilizers, deterioration of soil structure, biotic balance, and fertility, and groundwater pollution characterize the current Argentinian agricultural model. In this context, agro-ecological innovations are needed to develop a truly sustainable agriculture while enhancing the food supply. Precision agriculture technologies can strengthen the expansion of agro-ecological farming in experimental farm fields. The aim of this study was to propose a soil-specific agro-ecological strategy for sustainable production at field scale, focused on the use of soil sensors and digital soil mapping techniques. The strategy was developed on a 15-hectare transitional agro-ecological farm field located at the Barrow Experimental Station (Lat: -38.322844, Lon: -60.25572), Argentina. The strategy included five steps: (i) to measure apparent electrical conductivity (ECa) and elevation within the farm field; (ii) to apply a clustering method using the MULTISPATI-PCA algorithm to delimit three soil-specific zones (Z1, Z2 and Z3); (iii) to select three soil sampling points per zone using the conditioned Latin hypercube method, with elevation and ECa as auxiliary information; (iv) to collect soil samples at 2-10 cm depth at each point and to determine, in the laboratory, total organic carbon content (TOC), cation-exchange capacity (CEC), pH, and phosphorus availability (P-Bray); in addition, soil bulk density (SBD) was measured at 0-20 cm depth. Finally, (v) a management strategy was recommended for each soil-specific zone. Clear differences in soil properties among zones suggest that the strategy developed can support soil-specific agro-ecological practice management. pH and P-Bray were significantly (p<0.05) higher in Z1 than in Z2 and Z3. TOC did not differ significantly among zones, although it was highest in Z2. CEC was significantly (p<0.05) lower in Z3 than in the other zones. SBD did not differ significantly among zones either, although it was highest in Z1. From these results, we propose an agro-ecological strategy based on continuous nutrient cycling. During the first two years, P-Bray levels will be equalized among zones by using different external phosphorus sources. Only in Z3 will this be achieved by adding P fertilizer and rotating plots at a high stocking rate; the aim is to increase soil organic matter content and CEC. Furthermore, P will be supplied through manure, because the animal diet will include wheat husk, in order to achieve similar P levels among zones. The proposed approach demonstrates that soil-specific agro-ecological management enables a sustainable scheme for Argentinian agro-productive systems.
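Step (iii), the conditioned Latin hypercube selection of sampling points, can be sketched in Python. The following is a minimal illustration in the spirit of conditioned LHS, selecting points from a set of candidate field locations so that each auxiliary variable (ECa, elevation) is stratified into one-sample quantile bins; the candidate data, annealing schedule, and all names are illustrative assumptions, not values from the study:

```python
import numpy as np

def clhs(candidates, n, n_iter=5000, seed=0):
    """Minimal conditioned Latin hypercube sampling over candidate points.

    candidates : (N, k) array of auxiliary variables (e.g., ECa, elevation)
    n          : number of sampling points to select
    Returns the indices of the selected candidate points.
    """
    rng = np.random.default_rng(seed)
    N, k = candidates.shape
    # Quantile bin edges defining n strata per auxiliary variable
    edges = np.quantile(candidates, np.linspace(0, 1, n + 1), axis=0)

    def objective(idx):
        # Penalize strata that do not hold exactly one selected sample
        cost = 0.0
        for j in range(k):
            counts, _ = np.histogram(candidates[idx, j], bins=edges[:, j])
            cost += np.abs(counts - 1).sum()
        return cost

    idx = rng.choice(N, size=n, replace=False)
    cost = objective(idx)
    temp = 1.0
    for _ in range(n_iter):
        new = idx.copy()
        new[rng.integers(n)] = rng.choice(np.setdiff1d(np.arange(N), new))
        new_cost = objective(new)
        # Metropolis rule: take improvements, occasionally accept worse moves
        if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temp):
            idx, cost = new, new_cost
        temp *= 0.999  # geometric cooling
    return idx

# Toy field: 500 candidate locations with ECa (mS/m) and elevation (m)
rng = np.random.default_rng(1)
field = np.column_stack([rng.gamma(5, 4, 500), 120 + rng.normal(0, 3, 500)])
print(clhs(field, n=9))  # e.g., three points for each of three zones
```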
Gaewsky, James P; Weaver, Ashley A; Koya, Bharath; Stitzel, Joel D
2015-01-01
A 3-phase real-world motor vehicle crash (MVC) reconstruction method was developed to analyze injury variability as a function of precrash occupant position for 2 full-frontal Crash Injury Research and Engineering Network (CIREN) cases. Phase I: A finite element (FE) simplified vehicle model (SVM) was developed and tuned to mimic the frontal crash characteristics of the CIREN case vehicle (Camry or Cobalt) using frontal New Car Assessment Program (NCAP) crash test data. Phase II: The Toyota HUman Model for Safety (THUMS) v4.01 was positioned in 120 precrash configurations per case within the SVM. Five occupant positioning variables were varied using a Latin hypercube design of experiments: seat track position, seat back angle, D-ring height, steering column angle, and steering column telescoping position. An additional baseline simulation was performed that aimed to match the precrash occupant position documented in CIREN for each case. Phase III: FE simulations were then performed using kinematic boundary conditions from each vehicle's event data recorder (EDR). HIC15, combined thoracic index (CTI), femur forces, and strain-based injury metrics in the lung and lumbar vertebrae were evaluated to predict injury. Tuning the SVM to specific vehicle models resulted in close matches between simulated and test injury metric data, allowing the tuned SVM to be used in each case reconstruction with EDR-derived boundary conditions. Simulations with the most rearward seats and reclined seat backs had the greatest HIC15, head injury risk, CTI, and chest injury risk. Calculated injury risks for the head, chest, and femur closely correlated to the CIREN occupant injury patterns. CTI in the Camry case yielded a 54% probability of Abbreviated Injury Scale (AIS) 2+ chest injury in the baseline case simulation and ranged from 34 to 88% (mean = 61%) risk in the least and most dangerous occupant positions. The greater than 50% probability was consistent with the case occupant's AIS 2 hemomediastinum. Stress-based metrics were used to predict injury to the lower leg of the Camry case occupant. The regional-level injury metrics evaluated for the Cobalt case occupant indicated a low risk of injury; however, strain-based injury metrics better predicted pulmonary contusion. Approximately 49% of the Cobalt occupant's left lung was contused, though the baseline simulation predicted 40.5% of the lung to be injured. A method to compute injury metrics and risks as functions of precrash occupant position was developed and applied to 2 CIREN MVC FE reconstructions. The reconstruction process allows for quantification of the sensitivity and uncertainty of the injury risk predictions based on occupant position to further understand important factors that lead to more severe MVC injuries.
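Phase II's Latin hypercube design over the five occupant-position variables can be reproduced in form with scipy's quasi-Monte Carlo module. A minimal sketch follows; the variable bounds are placeholders, not the ranges used in the CIREN reconstructions:

```python
import numpy as np
from scipy.stats import qmc

# The five positioning variables varied in the study; the bounds below are
# illustrative placeholders, not the values used for the Camry/Cobalt cases.
variables = ["seat_track_pos_mm", "seat_back_angle_deg", "d_ring_height_mm",
             "steering_column_angle_deg", "steering_column_telescope_mm"]
l_bounds = [0.0, 20.0, 950.0, 15.0, 0.0]
u_bounds = [240.0, 35.0, 1100.0, 25.0, 50.0]

sampler = qmc.LatinHypercube(d=len(variables), seed=42)
unit_design = sampler.sample(n=120)                  # 120 runs in [0, 1]^5
design = qmc.scale(unit_design, l_bounds, u_bounds)  # scale to physical units

print(design.shape)  # (120, 5): one precrash configuration per row
print(dict(zip(variables, design[0].round(1))))
```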
Education, Adjustment, and Democracy in Latin America. Development Discussion Paper No. 363.
ERIC Educational Resources Information Center
Reimers, Fernando M.
This document examines changes in Latin American economies and educational systems during the 1980s and the responses of the Latin American democracies. Following a description of changes in Latin American public expenditures in the 1980s and subsequent adjustments in education expenditures, the dynamics of the adjustment in Costa Rica and…
"Dance for Your Health": Exploring Social Latin Dancing for Community Health Promotion
ERIC Educational Resources Information Center
Iuliano, Joseph E.; Lutrick, Karen; Maez, Paula; Nacim, Erika; Reinschmidt, Kerstin
2017-01-01
The goal of "Dance for Your Health" was to explore the relationship between social Latin dance and health as described by members of the Tucson social Latin dance community. Social Latin dance was selected because of the variety of dances, cultural relevance and popularity in Tucson, and the low-key, relaxed atmosphere. Dance has been…
Latin America: A Filmic Approach. Latin American Studies Program, Film Series No. 1.
ERIC Educational Resources Information Center
Campbell, Leon G.; And Others
This document describes a university course designed to provide an historical understanding of Latin America through feature films. The booklet contains an introductory essay on the teaching of a film course on Latin America, a general discussion of strengths and weaknesses of student analyses of films, and nine analyses written by students during…
She Doesn't Even Act Mexican: Smartness Trespassing in the New South
ERIC Educational Resources Information Center
Carrillo, Juan F.; Rodriguez, Esmeralda
2016-01-01
Historically, Latin@s have experienced deficit thinking as it relates to their perceived academic abilities. In North Carolina's emerging Latin@ population, the first waves of K-12 students are writing a new chapter in the state's history. Yet, there is a dearth of research that focuses on the identities of academically successful Latin@ students…
LATIN FOR SECONDARY SCHOOLS (A GUIDE TO MINIMUM ESSENTIALS).
ERIC Educational Resources Information Center
LEAMON, M. PHILLIP; AND OTHERS
A set of minimum essentials for each level of a 4-year sequence of Latin in secondary schools is presented in this curriculum guide. Following statements of the objectives of Latin study (reading the Latin of the great Roman authors, attaining linguistic proficiency, and acquiring a wider historical and cultural awareness), the guide outlines for…
Word Power through Latin; A Curriculum Resource.
ERIC Educational Resources Information Center
Masciantonio, Rudolph
This curriculum guide is intended to assist Latin teachers in the School District of Philadelphia in achieving one of the goals of Latin instruction: the development of word power in English through a structured study of Latin roots and affixes. The guide may be used in two different ways. First, it may form the basis for a separate…
Machismo and Virginidad: Sex Roles in Latin America. Discussion Paper 79-10.
ERIC Educational Resources Information Center
Quinones, Julio
The purpose of this paper is to present a view of Latin American males and females that describes the situation in Latin America more accurately than the current stereotypical view accepted in the United States. The author discusses the roots of the North American misconception, citing differences between Latin American and North American cultures…
Denman-Vitale, S; Murillo, E K
1999-07-01
Across the United States, advanced practice nurses (APNs) are increasingly encountering recently immigrated Latin American populations. This article provides an overview of the situation of Latin Americans in the United States and discusses aspects of Latin American culture such as respeto (respect), confianza (confidence), the importance of family, and the value of a personal connection. Strategies are described that will assist practitioners to incorporate culturally holistic principles in the promotion of breastfeeding among Latin American women who are new arrivals in the United States. If practitioners are to respond to the increasing numbers of Latin American women who need health care services and to provide thorough, holistic care, then health care activities must be integrated with cultural competence.
NASA Astrophysics Data System (ADS)
Gassmann, Matthias; Olsson, Oliver; Höper, Heinrich; Hamscher, Gerd; Kümmerer, Klaus
2016-04-01
The simulation of reactive transport in the aquatic environment is hampered by the ambiguity of environmental fate process conceptualizations for a specific substance in the literature. Concepts are usually identified by experimental studies and inverse modelling under controlled lab conditions in order to reduce environmental uncertainties such as uncertain boundary conditions and input data. However, since environmental conditions affect substance behaviour, a re-evaluation might be necessary under environmental conditions, which might, in turn, be affected by uncertainties. Using a combination of experimental data and simulations of the leaching behaviour of the veterinary antibiotic Sulfamethazine (SMZ; synonym: sulfadimidine) and the hydrological tracer Bromide (Br) in a field lysimeter, we re-evaluated the sorption concepts of both substances under uncertain field conditions. Sampling data from a field lysimeter experiment, in which both substances were applied twice a year with manure and leachate was sampled at the bottom of two lysimeters during three consecutive years, were used for model set-up and evaluation. The total amounts of leached SMZ and Br were 22 μg and 129 mg, respectively. A reactive transport model was parameterized to the conditions of the two lysimeters, which were filled with monoliths (depth 2 m, area 1 m²) of a sandy soil with a low pH value under which Bromide is sorptive. We used different sorption concepts such as constant and organic-carbon-dependent sorption coefficients and instantaneous and kinetic sorption equilibrium. Combining the sorption concepts resulted in four scenarios per substance with different equations for sorption equilibrium and sorption kinetics. The GLUE (Generalized Likelihood Uncertainty Estimation) method was applied to each scenario using parameter ranges found in experimental and modelling studies. The parameter space for each scenario was sampled using a Latin hypercube method, which was refined around local model efficiency maxima. Results of the cumulative SMZ leaching simulations suggest that the best concept combination is instantaneous sorption to organic carbon, which is consistent with the literature. The best Nash-Sutcliffe efficiency (Neff) was 0.96, and the 5th and 95th percentiles of the uncertainty estimation were 18 and 27 μg. In contrast, both scenarios of kinetic Br sorption had similar results (Neff = 0.99; uncertainty bounds 110-176 mg and 112-176 mg) but were clearly better than the instantaneous sorption scenarios. Therefore, only the concept of sorption kinetics could be identified for Br modelling, whereas both tested sorption equilibrium coefficient concepts performed equally well. The reasons for this specific case of equifinality may be uncertainties of model input data under field conditions or an insensitivity of the sorption equilibrium method due to the relatively low adsorption of Br. Our results show that it may be possible to identify, or at least falsify, specific sorption concepts under uncertain field conditions using a long-term leaching experiment and modelling methods. Cases of environmental fate concept equifinality raise the possibility of future model structure uncertainty analysis using an ensemble of models with different environmental fate concepts.
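The GLUE procedure described above can be sketched with a toy leaching model standing in for the reactive transport model; the parameter names, ranges, behavioural threshold, and synthetic observations are all illustrative assumptions:

```python
import numpy as np
from scipy.stats import qmc

def nse(obs, sim):
    """Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Toy stand-in for the leaching model: cumulative leached mass over time as
# a function of a sorption coefficient kd and a kinetic rate k (assumed form).
t = np.linspace(0, 36, 37)  # months
def model(kd, k):
    return 25.0 / (1.0 + kd) * (1.0 - np.exp(-k * t))

obs = model(0.12, 0.15) + np.random.default_rng(0).normal(0, 0.3, t.size)

# Latin hypercube sample of the parameter space (ranges assumed)
sample = qmc.scale(qmc.LatinHypercube(d=2, seed=1).sample(5000),
                   [0.01, 0.01], [1.0, 0.5])
sims = np.array([model(kd, k) for kd, k in sample])
eff = np.array([nse(obs, s) for s in sims])

# GLUE: keep 'behavioural' runs above a threshold, weight them by efficiency
beh = sims[eff > 0.9]
w = eff[eff > 0.9] / eff[eff > 0.9].sum()
order = np.argsort(beh, axis=0)
lo, hi = [], []
for j in range(t.size):  # likelihood-weighted 5th/95th percentile bounds
    s, cw = beh[order[:, j], j], np.cumsum(w[order[:, j]])
    lo.append(np.interp(0.05, cw, s))
    hi.append(np.interp(0.95, cw, s))
print(f"behavioural runs: {len(beh)}, best NSE: {eff.max():.3f}")
print(f"final-month uncertainty bounds: {lo[-1]:.1f}-{hi[-1]:.1f}")
```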
NASA Technical Reports Server (NTRS)
Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.
1992-01-01
Systems for Strategic Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of the methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures, using candidate SDI weapons-to-target assignment algorithms as workloads, were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed, and the capabilities that will be required for both individual tools and an integrated toolset were identified.
The Zika epidemic and abortion in Latin America: a scoping review.
Carabali, Mabel; Austin, Nichole; King, Nicholas B; Kaufman, Jay S
2018-01-01
Latin America presently has the world's highest burden of Zika virus, but there are unexplained differences in national rates of congenital malformations collectively referred to as Congenital Zika Syndrome (CZS) in the region. While Zika virulence and case detection likely contribute to these differences, policy-related factors, including access to abortion, may play important roles. Our goal was to assess perspectives on, and access to, abortion in Latin America in the context of the Zika epidemic. We conducted a scoping review of peer-reviewed and gray literature published between January 2015 and December 2016, written in English, Spanish, Portuguese, or French. We searched PubMed, Scielo, and Google Scholar for literature on Zika and/or CZS and abortion, and used automated and manual review methods to synthesize the existing information. 36 publications met our inclusion criteria, the majority of which were qualitative. Publications were generally in favor of increased access to safe abortion as a policy-level response for mitigating the impact of CZS, but issues with implementation were cited as the main challenge. Aside from the reform of abortion regulation in Colombia, we did not find evidence that the Zika epidemic had triggered shifts in abortion policy in other countries. Abortion policy in the region remained largely unchanged following the Zika epidemic. Further empirical research on abortion access and differential rates of CZS across Latin American countries is required.
Increasing access to Latin American social medicine resources: a preliminary report.
Buchanan, Holly Shipp; Waitzkin, Howard; Eldredge, Jonathan; Davidson, Russ; Iriart, Celia; Teal, Janis
2003-10-01
This preliminary report describes the development and implementation of a project to improve access to literature in Latin American social medicine (LASM). The University of New Mexico project team collaborated with participants from Argentina, Brazil, Chile, and Ecuador to identify approximately 400 articles and books in Latin American social medicine. Structured abstracts were prepared, translated into English, Spanish, and Portuguese, assigned Medical Subject Headings (MeSH), and loaded into a Web-based database for public searching. The project has initiated Web-based publication for two LASM journals. Evaluation included measures of use and content. The LASM Website (http://hsc.unm.edu/lasm) and database create access to formerly little-known literature that addresses problems relevant to current medicine and public health. This Website offers a unique resource for researchers, practitioners, and teachers who seek to understand the links between socioeconomic conditions and health. The project provides a model for collaboration between librarians and health care providers. Challenges included procurement of primary material; preparation of concise abstracts; working with trilingual translations of abstracts, metadata, and indexing; and the work processes of the multidisciplinary team. The literature of Latin American social medicine has become more readily available to researchers worldwide. The LASM project serves as a collaborative model for the creation of sustainable solutions for disseminating information that is difficult to access through traditional methods.
Kovalskys, Irina; Fisberg, Mauro; Gómez, Georgina; Rigotti, Attilio; Cortés, Lilia Yadira; Yépez, Martha Cecilia; Pareja, Rossina G.; Herrera-Cuenca, Marianella; Zimberg, Ioná Z.; Tucker, Katherine L.; Koletzko, Berthold; Pratt, Michael
2015-01-01
Between-country comparisons of estimated dietary intake are particularly prone to error when different food composition tables are used. The objective of this study was to describe our procedures and rationale for the selection and adaptation of available food composition to a single database to enable cross-country nutritional intake comparisons. Latin American Study of Nutrition and Health (ELANS) is a multicenter cross-sectional study of representative samples from eight Latin American countries. A standard study protocol was designed to investigate dietary intake of 9000 participants enrolled. Two 24-h recalls using the Multiple Pass Method were applied among the individuals of all countries. Data from 24-h dietary recalls were entered into the Nutrition Data System for Research (NDS-R) program after a harmonization process between countries to include local foods and appropriately adapt the NDS-R database. A food matching standardized procedure involving nutritional equivalency of local food reported by the study participants with foods available in the NDS-R database was strictly conducted by each country. Standardization of food and nutrient assessments has the potential to minimize systematic and random errors in nutrient intake estimations in the ELANS project. This study is expected to result in a unique dataset for Latin America, enabling cross-country comparisons of energy, macro- and micro-nutrient intake within this region. PMID:26389952
A unified model of the standard genetic code.
José, Marco V; Zamudio, Gabriel S; Morgado, Eberto R
2017-03-01
The Rodin-Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II), with recognition from the minor or major groove side of the tRNA acceptor stem, respectively. These models are asymmetric, but they are biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines, and N for either). In this work, the RO-model is derived by means of group actions, namely symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO-model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO-model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. These results cannot be attained in either two or three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO-model.
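The six-dimensional hypercube arises from encoding each of the three nucleotides of a codon in two bits, so every codon is a vertex of the 6-cube. A minimal sketch follows; the particular bit assignment is an illustrative assumption (the RO-model literature fixes its own encoding):

```python
from itertools import product

# One possible 2-bit encoding of the four nucleotides; the exact assignment
# used in the RO-model papers may differ. Chosen here only for illustration.
BITS = {"C": (0, 0), "U": (0, 1), "G": (1, 0), "A": (1, 1)}

def codon_to_vertex(codon):
    """Map a codon (3 nucleotides) to a vertex of the 6D hypercube."""
    return tuple(b for nt in codon for b in BITS[nt])

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def translate(v, t):
    """XOR-translation by a fixed vector: an automorphism of the hypercube."""
    return tuple(a ^ b for a, b in zip(v, t))

# The 64 codons occupy the 64 vertices of the 6-cube exactly once
codons = ["".join(c) for c in product("CUGA", repeat=3)]
assert len({codon_to_vertex(c) for c in codons}) == 64

# Edges of the hypercube join codons differing in a single bit:
print(hamming(codon_to_vertex("AUG"), codon_to_vertex("GUG")))  # -> 1
```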
NASA Astrophysics Data System (ADS)
Näsi, R.; Viljanen, N.; Oliveira, R.; Kaivosoja, J.; Niemeläinen, O.; Hakala, T.; Markelin, L.; Nezami, S.; Suomalainen, J.; Honkavaara, E.
2018-04-01
Lightweight 2D-format hyperspectral imagers operable from unmanned aerial vehicles (UAVs) have become common in various remote sensing tasks in recent years. With these technologies, the area of interest is covered by multiple overlapping hypercubes (in other words, multiview hyperspectral photogrammetric imagery), and each object point appears in many, sometimes tens of, individual hypercubes. The common practice is to calculate hyperspectral orthomosaics utilizing only the most nadir areas of the images. However, the redundancy of the data gives potential for much more versatile and thorough feature extraction. We investigated various options for extracting spectral features in a grass sward quantity evaluation task. In addition to the various sets of spectral features, we used photogrammetry-based ultra-high-density point clouds to extract features describing the canopy 3D structure. A machine learning technique based on the Random Forest algorithm was used to estimate the fresh biomass. Results showed high accuracies for all investigated feature sets. Estimates based on multiview data were approximately 10 % better than those based on the most nadir orthophotos. Utilizing the photogrammetric 3D features improved estimation accuracy by approximately 40 % compared to approaches where only spectral features were applied. The best estimation RMSE of 239 kg/ha (6.0 %) was obtained with the multiview anisotropy-corrected data set combined with the 3D features.
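The biomass estimation step can be illustrated with a minimal Random Forest regression sketch in the spirit of the study, using synthetic spectral and 3D canopy features in place of the real multiview UAV data (all feature values and dimensions are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 200                               # sample plots (synthetic stand-in)
spectral = rng.normal(0, 1, (n, 36))  # multiview spectral features
structure = rng.normal(0, 1, (n, 4))  # photogrammetric 3D canopy features

# Synthetic fresh biomass (kg/ha) driven by both feature groups plus noise
biomass = 4000 + 400 * (2.0 * structure[:, 0]
                        + spectral[:, :5].sum(axis=1)
                        + rng.normal(0, 0.5, n))

X = np.hstack([spectral, structure])  # combined spectral + 3D feature set
rf = RandomForestRegressor(n_estimators=500, random_state=0)
pred = cross_val_predict(rf, X, biomass, cv=5)

rmse = np.sqrt(np.mean((pred - biomass) ** 2))
print(f"RMSE: {rmse:.0f} kg/ha ({100 * rmse / biomass.mean():.1f} %)")
```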
A multi-satellite orbit determination problem in a parallel processing environment
NASA Technical Reports Server (NTRS)
Deakyne, M. S.; Anderle, R. J.
1988-01-01
The Engineering Orbit Analysis Unit at GE Valley Forge used an Intel Hypercube parallel processor to investigate the performance of, and gain experience with, parallel processors on a multi-satellite orbit determination problem. A general study was selected in which major blocks of computation for the multi-satellite orbit computations were used as units to be assigned to the various processors on the Hypercube. Problems encountered or successes achieved in addressing the orbit determination problem would then be more likely to be transferable to other parallel processors. The prime objective was to study the algorithm to allow processing of observations later in time than those employed in the state update. Expertise in ephemeris determination was exploited in addressing these problems, and the facility was used to bring a realism to the study that would highlight problems that might not otherwise be anticipated. Secondary objectives were to gain experience with a non-trivial problem in a parallel-processor environment; to explore, through timing studies, the necessary interplay of serial and parallel sections of the algorithm; and to explore granularity (coarse vs. fine grain): above a certain granularity limit there is a risk of starvation, with the majority of nodes idle, while below that limit the overhead associated with splitting the problem may require more work and communication time than is useful. A toy timing model illustrating this trade-off is sketched below.
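The granularity trade-off can be illustrated with a toy timing model; all constants below are invented for illustration and do not reflect measurements from the study:

```python
# Toy timing model for task granularity on a message-passing machine such as
# the hypercube described above. Constants are illustrative, not measured.
def runtime(total_work, n_tasks, n_nodes, overhead_per_task=0.0001):
    task = total_work / n_tasks        # work per task (seconds)
    rounds = -(-n_tasks // n_nodes)    # ceil division: scheduling rounds
    starved = n_tasks < n_nodes        # too coarse: some nodes stay idle
    return rounds * task + n_tasks * overhead_per_task, starved

for n_tasks in (4, 16, 64, 256, 4096, 65536):
    t, starved = runtime(total_work=1.0, n_tasks=n_tasks, n_nodes=64)
    print(f"{n_tasks:6d} tasks: {t:.4f} s{'  <- nodes idle' if starved else ''}")
```

With 64 nodes, the minimum runtime sits near 64 tasks: below that, nodes idle; far above it, per-task splitting and communication overhead dominates.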
Public Policies and Interventions for Diabetes in Latin America: a Scoping Review.
Kaselitz, Elizabeth; Rana, Gurpreet K; Heisler, Michele
2017-08-01
Successful interventions are needed to diagnose and manage type 2 diabetes (T2DM) in Latin America, a region that is experiencing a significant rise in rates of T2DM. Complementing an earlier review exploring diabetes prevention efforts in Latin America, this scoping review examines the literature on (1) policies and governmental programs intended to improve diabetes diagnosis and treatment in Latin America and (2) interventions to improve diabetes management in Latin America. It concludes with a brief discussion of promising directions for future research. Governmental policies and programs for the diagnosis and treatment of diabetes in different Latin American countries have been implemented, but their efficacy to date has not been rigorously evaluated. There are some promising intervention approaches in Latin America to manage diabetes that have been evaluated. Some of these utilize multidisciplinary teams, a relatively resource-intensive approach difficult to replicate in low-resource settings. Other evaluated interventions in Latin America have successfully leveraged mobile health tools, trained peer volunteers, and community health workers (CHWs) to improve diabetes management and outcomes. There are some promising approaches and large-scale governmental efforts underway to curb the growing burden of type 2 diabetes in Latin America. While some of these interventions have been rigorously evaluated, further research is warranted to determine their effectiveness, cost, and scalability in this region.
ERIC Educational Resources Information Center
Bondy, Jennifer M.
2017-01-01
In this article, Jennifer Bondy argues that educators should teach students about the emotions that drive racialized violence. In order to do this, she focuses on political rhetoric that locates Latin@s as illegal aliens, criminals, and questionably American. She begins with her professional experiences as a social studies teacher who has worked…
From Latin Americans to Latinos: Latin American Immigration in US: The Unwanted Children
ERIC Educational Resources Information Center
Moraña, Ana
2007-01-01
It is my understanding that Latin American immigrants in the United States, during the contested process of becoming Latinos (US citizens or the offspring of Latin Americans born in the US), are for the most part socially portrayed as unwanted, messy children who need to be educated before they can become American citizens. Whether they can be called…
ERIC Educational Resources Information Center
Glab, Edward, Jr., Ed.
This resource manual provides ideas, lesson plans, course outlines, arts and crafts projects, games, and other materials for teaching K-12 students about Latin America. A major objective is to help students understand and appreciate the diverse Latin American culture. There are six chapters in this volume. Chapter one discusses key ideas that can…
History of primary vasculitis in Latin America.
Iglesias Gammara, Antonio; Coral, Paola; Quintana, Gerardo; Toro, Carlos E; Flores, Luis Felipe; Matteson, Eric L; Restrepo, José Félix
2010-03-01
A literature review utilizing Fepafem, Bireme, LiLacs, Scielo Colombia, Scielo Internacional, former MedLine, Pubmed, and BVS Colombia as well as manual searches in the libraries of major Latin American universities was performed to study vasculitis in Latin America. Since 1945, a total of 752 articles have been published by Latin American authors. However, only a minority are devoted to primary vasculitides, and even fewer have been published in indexed journals. Approximately 126 are in OLD, Medline, Pubmed, Bireme, and Scielo. Most publications are from Mexico, followed by Brazil and Colombia. Systematic studies of the epidemiology of primary idiopathic vasculitis are available for a few countries, i.e. Brazil, Mexico, Colombia, Chile, and Peru. Takayasu arteritis and ANCA-associated vasculitis are the best studied forms of vasculitis in Latin America. Interest and expertise in vasculitis is growing in Latin America, as reflected in the increased number of published articles from this region of the world in the last decade. Racial and environmental factors are possibly responsible for the differential expression of various types of primary vasculitis observed in Latin America. With time, the unique features, epidemiology, and better treatment strategies for idiopathic vasculitides in Latin America will emerge.
The history of Latin teeth names.
Šimon, František
2015-01-01
This paper aims to give an account of the Latin naming of the different types of teeth by reviewing relevant historical and contemporary literature. The paper presents etymologies of the Latin or Greek teeth names, their development, variants, and synonyms, and sometimes the names of their authors. The Greek names did not have the status of official terms, but the Latin terms for the particular types of teeth gradually established themselves. The names for the incisors, canines, and molars are Latin calques of the Greek ones (tomeis, kynodontes, mylai); dens serotinus is an indirect calque of the Greek name (odús) opsigonos; and the term premolar was created in the way that is now common in modern anatomical terminology, using the prefix prae- (= pre) and the adjective molaris. The Latin terms dentes canini and dentes molares occur in Classical Latin literature, the term (dentes) incisivi is found for the first time in medieval literature, and the terms dentes premolares and dens serotinus are modern-age coinages.
ERIC Educational Resources Information Center
Suto, Irenka; Novakovic, Nadezda
2012-01-01
Some methods of determining grade boundaries within examinations, such as awarding, paired comparisons, and rank ordering, entail expert judgements of script quality. We aimed to identify the features of examinees' scripts that most influence judgements in the three methods. For contrasting examinations in biology and English, a Latin square…
Sarvin, Boris; Fedorova, Elizaveta; Shpigun, Oleg; Titova, Maria; Nikitin, Mikhail; Kochkin, Dmitry; Rodin, Igor; Stavrianidi, Andrey
2018-03-30
In this paper, an ultrasound-assisted extraction method for the isolation of steroidal glycosides from D. deltoidea plant cell suspension culture, with subsequent HPLC-MS determination, was developed. After the organic solvent was selected via a two-factor experiment, optimization via a 4 × 4 Latin square experimental design was carried out for the following parameters: extraction time, organic solvent concentration in the extraction solution, and the solvent-to-sample ratio. It was also shown that the ultrasound-assisted extraction method is not suitable for isolation of steroidal glycosides from D. deltoidea plant material. The results were double-checked using the multiple successive extraction method and refluxing extraction. Optimal conditions for the extraction of steroidal glycosides by the ultrasound-assisted extraction method were: extraction time, 60 min; acetonitrile concentration in the aqueous extraction solution, 50%; solvent-to-sample ratio, 400 mL/g. The developed method was also tested on D. deltoidea cell suspension cultures grown for different periods and under different cultivation conditions. The completeness of the extraction was confirmed using the multiple successive extraction method. Copyright © 2018 Elsevier B.V. All rights reserved.
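A 4 × 4 Latin square design of the kind described can be sketched as follows; the assignment of factors to rows, columns, and symbols, and the level values, are illustrative assumptions rather than the study's actual layout:

```python
import numpy as np

# Cyclic 4x4 Latin square: entry (i, j) = (i + j) mod 4. Rows and columns
# block two factors while the square's symbols carry the third, so each
# level of the third factor appears once per row and once per column.
n = 4
square = (np.arange(n)[:, None] + np.arange(n)) % n

times = [15, 30, 45, 60]       # extraction time, min (illustrative levels)
concs = [30, 40, 50, 60]       # % acetonitrile in water (illustrative)
ratios = [100, 200, 300, 400]  # solvent-to-sample ratio, mL/g (illustrative)

for i in range(n):
    for j in range(n):
        print(dict(conc=concs[i], ratio=ratios[j], time=times[square[i, j]]))

# Latin square property: each symbol once per row and once per column
assert all(sorted(row) == list(range(n)) for row in square)
assert all(sorted(col) == list(range(n)) for col in square.T)
```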
Paraíso Torras, B; Maldonado Del Valle, M D; López Muñoz, A; Cañete Palomo, M L
2013-01-01
There are currently 6 million immigrants living in Spain. Half of them are women, the majority of whom are of childbearing age. These women, who have high rates of induced abortion, form a special group requiring a particular approach to their reproductive health. In order to study the use of contraceptive methods in this population, a review was made of 1100 clinical histories from our Sexual Health and Reproduction Clinic. Latin American women were the largest group seeking information about contraception, followed by Eastern Europeans and Moroccans. Fewer Asian and Sub-Saharan women sought these services. The contraceptives most frequently used were the intrauterine device (used mostly by Latin American and Eastern European women) and combined oral contraception (used mostly by Moroccan women). It is important to advise immigrant women about contraceptive methods, taking their preferences into account, in order to improve adherence to the method. Copyright © 2012 Sociedad Española de Médicos de Atención Primaria (SEMERGEN). Published by Elsevier España. All rights reserved.
"Manana Is Soon Enough for Me": Latin America through Tin Pan Alley's Prism.
ERIC Educational Resources Information Center
Aiex, Nola Kortner
In order to examine the vision of Latin America transmitted to the American public in Tin Pan Alley's popular songs in the first half of the twentieth century, a study analyzed nearly 50 songs. The songs were grouped into five categories: (1) songs which describe Latin locales; (2) songs which are constructed around a Latin woman's name; (3) songs…
China and Latin America: The Other Option
2017-06-09
…environment, should the U.S. government adjust its priorities to mitigate the growing Chinese influence in Latin America. Subject terms: China, Latin America, national…
ERIC Educational Resources Information Center
Suriel, Regina L.; Martinez, James; Evans-Winters, Venus
2018-01-01
Multicultural mentoring has been suggested to support Latin@ faculty success in their careers, yet current literature on effective mentorships of Latin@ faculty is limited. This critical co-constructed autoethnography draws on critical race theory (CRT) and latin@ critical race theory (LatCrit) frameworks to highlight the lived experiences and key…
Sets of Mutually Orthogonal Sudoku Latin Squares
ERIC Educational Resources Information Center
Vis, Timothy; Petersen, Ryan M.
2009-01-01
A Latin square of order "n" is an "n" x "n" array using n symbols, such that each symbol appears exactly once in each row and column. A set of Latin squares is mutually orthogonal if, when any two of the squares are superimposed, the ordered pairs of symbols appearing in the cells of the array are distinct. The popular puzzle Sudoku involves Latin squares with n = 9, along with the added condition that each of the 9…
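The orthogonality condition is straightforward to check computationally. A minimal sketch, using a classic orthogonal pair of order 3:

```python
from itertools import product

def is_latin(sq):
    """Each symbol appears exactly once in every row and every column."""
    n, syms = len(sq), set(range(len(sq)))
    return (all(set(row) == syms for row in sq)
            and all({sq[i][j] for i in range(n)} == syms for j in range(n)))

def orthogonal(a, b):
    """Superimposing two order-n Latin squares must yield n^2 distinct pairs."""
    n = len(a)
    pairs = {(a[i][j], b[i][j]) for i, j in product(range(n), repeat=2)}
    return len(pairs) == n * n

L1 = [[(i + j) % 3 for j in range(3)] for i in range(3)]
L2 = [[(i + 2 * j) % 3 for j in range(3)] for i in range(3)]
print(is_latin(L1), is_latin(L2), orthogonal(L1, L2))  # True True True
```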
ERIC Educational Resources Information Center
Masciantonio, Rudolph
This gamebook is intended to assist the grade school (FLES) Latin teacher to introduce the reading and writing of English derivatives and cognates after these have been mastered audiolingually in Philadelphia's Latin course "Voces de Olympo (Echoes from Mt. Olympus): A Humanistic Approach to Latin for Children in the Sixth Grade." Games are…
Salinero-Fort, Miguel Ángel; Gómez-Campelo, Paloma; Bragado-Alvárez, Carmen; Abánades-Herranz, Juan Carlos; Jiménez-García, Rodrigo; de Burgos-Lunar, Carmen
2015-01-01
Background This study compares the health-related quality of life of Spanish-born and Latin American-born individuals settled in Spain. Socio-demographic and psychosocial factors associated with health-related quality of life are analyzed. Methods A cross-sectional Primary Health Care multi center-based study of Latin American-born (n = 691) and Spanish-born (n = 903) outpatients from 15 Primary Health Care Centers (Madrid, Spain). The Medical Outcomes Study 36-Item Short Form Health Survey (SF-36) was used to assess health-related quality of life. Socio-demographic, psychosocial, and specific migration data were also collected. Results Compared to Spanish-born participants, Latin American-born participants reported higher health-related quality of life in the physical functioning and vitality dimensions. Across the entire sample, Latin American-born participants, younger participants, men and those with high social support reported significantly higher levels of physical health. Men with higher social support and a higher income reported significantly higher mental health. When stratified by gender, data show that for men physical health was only positively associated with younger age. For women, in addition to age, social support and marital status were significantly related. Both men and women with higher social support and income had significantly better mental health. Finally, for immigrants, the physical and mental health components of health-related quality of life were not found to be significantly associated with any of the pre-migration factors or conditions of migration. Only the variable “exposure to political violence” was significantly associated with the mental health component (p = 0.014). Conclusions The key factors to understanding HRQoL among Latin American-born immigrants settled in Spain are age, sex and social support. Therefore, strategies to maintain optimal health outcomes in these immigrant communities should include public policies on social inclusion in the host society and focus on improving social support networks in order to foster and maintain the health and HRQoL of this group. PMID:25835714
Kuznik, Andreas; Muhumuza, Christine; Komakech, Henry; Marques, Elsa M. R.; Lamorde, Mohammed
2015-01-01
Background Untreated syphilis in pregnancy is associated with adverse clinical outcomes for the infant. In low- and middle-income countries in Asia and Latin America, 20%-30% of women are not tested for syphilis during pregnancy. We evaluated the cost-effectiveness of increasing the coverage of antenatal syphilis screening in 11 Asian and 20 Latin American countries, using a point-of-care immunochromatographic strip (ICS) test. Methods The decision-analytic cost-effectiveness models reported incremental costs per disability-adjusted life year (DALY) averted from the perspective of the national health care payer. Clinical outcomes were stillbirths, neonatal deaths, and congenital syphilis. DALYs were computed using WHO disability weights. Costs included the ICS test, three injections of benzathine penicillin, and nurse wages. Country-specific inputs included the antenatal prevalence of syphilis and the proportion of women in the antenatal care setting who are screened for syphilis infection, as reported in the 2014 WHO baseline report on global sexually transmitted infection surveillance. Country-specific data on the annual number of live births, the proportion of women with at least one antenatal care visit, and per capita gross national income were also included in the model. Results The incremental cost per DALY averted of syphilis screening is US$53 (range: US$10-US$332; probability of falling below 1× per capita GDP: 99.71%) in Asia and US$60 (range: US$5-US$225; probability of falling below 1× per capita GDP: 99.77%) in Latin America. Universal screening may reduce the annual number of stillbirths by 20,344 and 4,270, neonatal deaths by 8,201 and 1,721, and cases of congenital syphilis by 10,952 and 2,298, and avert 925,039 and 197,454 DALYs in the aggregate Asian and Latin American panels, respectively. Conclusion Antenatal syphilis screening is highly cost-effective in all of the 11 Asian and 20 Latin American countries assessed. Our findings support the decision to expand syphilis screening in countries with currently low screening rates and to continue national syphilis screening programs in countries with high rates. PMID:26010366
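The headline metric is a plain incremental cost-effectiveness ratio: incremental cost divided by DALYs averted, judged against a per-capita-GDP threshold. A minimal sketch with placeholder inputs, not the study's country-level values:

```python
# Minimal ICER sketch. All inputs below are illustrative placeholders,
# not the country-level values used in the study.
test_cost = 1.00           # ICS point-of-care test, US$
treatment_cost = 3 * 0.50  # three benzathine penicillin injections, US$
nurse_time_cost = 0.75     # nurse wages per woman screened, US$

women_screened = 100_000
positivity = 0.01          # assumed antenatal syphilis prevalence
extra_cost = (women_screened * (test_cost + nurse_time_cost)
              + positivity * women_screened * treatment_cost)
dalys_averted = 4_000      # vs. a no-screening comparator (assumed)

icer = extra_cost / dalys_averted
print(f"ICER: US${icer:.0f} per DALY averted")

# WHO-style benchmark: 'highly cost-effective' if ICER < 1x per-capita GDP
gdp_per_capita = 3_000
print("highly cost-effective:", icer < gdp_per_capita)
```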
NASA Astrophysics Data System (ADS)
Aleksankina, Ksenia; Heal, Mathew R.; Dore, Anthony J.; Van Oijen, Marcel; Reis, Stefan
2018-04-01
Atmospheric chemistry transport models (ACTMs) are widely used to underpin policy decisions associated with the impact of potential changes in emissions on future pollutant concentrations and deposition. It is therefore essential to have a quantitative understanding of the uncertainty in model output arising from uncertainties in the input pollutant emissions. ACTMs incorporate complex and non-linear descriptions of chemical and physical processes, which means that interactions and non-linearities in input-output relationships may not be revealed through the local one-at-a-time sensitivity analysis typically used. The aim of this work is to demonstrate a global sensitivity and uncertainty analysis approach for an ACTM, using as an example the FRAME model, which is extensively employed in the UK to generate source-receptor matrices for the UK Integrated Assessment Model and to estimate critical load exceedances. An optimised Latin hypercube sampling design was used to construct model runs within a ±40 % variation range for the UK emissions of SO2, NOx, and NH3, from which regression coefficients for each input-output combination and each model grid cell (>10 000 across the UK) were calculated. Surface concentrations of SO2, NOx, and NH3 (and deposition of S and N) were found to be predominantly sensitive to the emissions of the respective pollutant, while the sensitivities of secondary species such as HNO3 and particulate SO4^2-, NO3^-, and NH4^+ to pollutant emissions were more complex and geographically variable. The uncertainties in model output variables were propagated from the uncertainty ranges reported by the UK National Atmospheric Emissions Inventory for the emissions of SO2, NOx, and NH3 (±4, ±10, and ±20 %, respectively). The uncertainties in the surface concentrations of NH3 and NOx and the depositions of NHx and NOy were dominated by the uncertainties in emissions of NH3 and NOx, respectively, whilst concentrations of SO2 and deposition of SOy were affected by the uncertainties in both SO2 and NH3 emissions. Likewise, the relative uncertainties in the modelled surface concentrations of each of the secondary pollutant variables (NH4^+, NO3^-, SO4^2-, and HNO3) were due to uncertainties in at least two input variables. In all cases the spatial distribution of relative uncertainty was found to be geographically heterogeneous. The global methods used here can be applied to conduct sensitivity and uncertainty analyses of other ACTMs.
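The workflow, Latin hypercube sampling over emission perturbations followed by per-output regression, can be sketched with a toy response function standing in for FRAME (the function, sample size, and ranges are illustrative assumptions):

```python
import numpy as np
from scipy.stats import qmc

# Toy stand-in for one FRAME output (say, NH4+ concentration in one grid
# square) as a nonlinear function of three emission scalings. Illustrative.
def model(so2, nox, nh3):
    return 0.2 * so2 * nh3 + 0.5 * np.minimum(nox, nh3) + 0.1 * nh3

# Latin hypercube over +/-40 % scaling of SO2, NOx, and NH3 emissions
n = 512
X = qmc.scale(qmc.LatinHypercube(d=3, seed=1).sample(n), [0.6] * 3, [1.4] * 3)
y = model(*X.T)

# Standardized regression coefficients as global sensitivity measures,
# fitted once per model output (here, a single toy output)
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, c in zip(["SO2", "NOx", "NH3"], src):
    print(f"SRC({name}) = {c:+.2f}")
print(f"R^2 of linear fit: {1 - np.var(ys - Xs @ src) / np.var(ys):.3f}")
```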
ERIC Educational Resources Information Center
Gardner, Mary A.
1980-01-01
Reviews the historical development of the press in Latin America from the sixteenth century to the present. Discusses the various pressures that Latin American newspapers are subject to, including political censorship, economic restrictions, and cultural conflicts. (AEA)
Reis, Rodrigo S; Kelly, Cheryl M; Parra, Diana C; Barros, Mauro; Gomes, Grace; Malta, Deborah; Schmid, Thomas; Brownson, Ross C
2012-08-01
To identify the highest priorities for research on environmental and policy changes for promoting physical activity (PA) in Brazil; to uncover any gaps between researchers' and practitioners' priorities; and to consider which tools, methods, collaborative strategies, and actions could be useful for moving a research agenda forward. This was a mixed-methods study (qualitative and quantitative) conducted by Project GUIA (Guide for Useful Interventions for Activity in Brazil and Latin America) from February 2010 to January 2011. A total of 240 individuals in the PA field (186 practitioners and 54 researchers) were asked to generate research ideas; 82 participants provided 266 original statements, from which 52 topics emerged. Participants rated topics by "importance" and "feasibility"; a separate convenience sample of 21 individuals categorized them. Cluster analysis and multidimensional scaling were used to create concept maps and pattern matches. Five distinct clusters emerged from the concept mapping, of which "effectiveness and innovation in PA interventions" was rated most important by both practitioners and researchers. Pattern matching showed a divergence between the groups, especially regarding feasibility, where there was no consensus. The study results provided the basis for a research agenda to advance the understanding of environmental and policy influences on PA promotion in Brazil and Latin America. These results should stimulate future research and, ultimately, contribute to the evidence base of successful PA strategies in Latin America.
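For readers unfamiliar with the concept-mapping workflow named above (multidimensional scaling plus cluster analysis of sorted statements), the sketch below shows one conventional Python implementation. The co-sorting similarity matrix here is random placeholder data, not the Project GUIA ratings, and the five-cluster cut simply mirrors the number of clusters the study reports.

    import numpy as np
    from sklearn.manifold import MDS
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(0)
    n_topics = 52
    # Hypothetical co-sorting matrix: entry (i, j) = fraction of sorters who
    # placed topics i and j in the same pile (1 on the diagonal).
    sim = rng.uniform(0.0, 1.0, size=(n_topics, n_topics))
    sim = (sim + sim.T) / 2
    np.fill_diagonal(sim, 1.0)
    dist = 1.0 - sim                       # similarity -> dissimilarity

    # Two-dimensional point map of topics via multidimensional scaling
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dist)

    # Hierarchical clustering on the same dissimilarities, cut into 5 clusters
    Z = linkage(squareform(dist, checks=False), method="average")
    clusters = fcluster(Z, t=5, criterion="maxclust")
    print(coords.shape, np.bincount(clusters)[1:])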
Marganne, Marie-Hélène; de Haro Sanchez, Magali
2014-01-01
1. Far fewer Latin medical papyri, whether paraliterary, documentary, or magical, have survived compared to Greek medical papyri, but they nonetheless provide interesting information about medical practice in the Graeco-Roman world, the relationship between the Greek and Latin medical languages, and the choices made to use one rather than the other, a subject that has never been exhaustively studied. As part of the update undertaken by CEDOPAL since 2008 of the Corpus papyrorum Latinarum, published fifty years ago by the late Robert Cavenaile, we have inventoried the Latin papyri containing medical references, classifying them by type or nature of content, provenance, form, layout, and writing. Finally, we analyse their content and what it reveals about the reception of Greek medicine by Latin or Latin-speaking writers. 2. The second section presents the only iatromagical papyrus in Latin known at the present time, P. Heid. inv. lat. 5 (Suppl. Mag. 1.36, ca. fifth/sixth centuries, Fustat [?]), and compares its content, on the one hand, with that of the Greek iatromagical papyri (dating from the first century B.C. to the seventh century A.D.) and, on the other, with the iatromagical formulae in Latin preserved on metal leaves from Italy, Hungary, France, and England.
Immigration experience of Latin American working women in Alicante, Spain: an ethnographic study
González-Juárez, Liliana; Noreña-Peña, Ana Lucía
2014-01-01
OBJECTIVE: to describe the experience of Latin American working women regarding immigration, taking into account the expectations and conditions under which this process takes place. METHOD: ethnographic qualitative study. Data were collected by means of semi-structured interviews with 24 Latin American immigrant women in Spain. The information collected was triangulated through two focus groups. RESULTS: the expectations of migrant women centre on improving their families' living conditions. Social support is essential for settling in and for carrying out the activities of daily life. The women report having adapted to the host country, although they live with stress. They perceive greater sexual freedom and more power in their relationships with partners, yet retain the greater share of responsibility for childcare, combining it with their role as working women. CONCLUSIONS: migrant women play a key role in the survival of households; they build and create new meanings about being a woman, their understanding of life, their social relationships, and their relationships as couples. This role is shaped by their expectations, the conditions under which the migration process takes place, and their integration into work. PMID:25493683
Culquichicón, Carlos; Helguero-Santin, Luis M; Labán-Seminario, L Max; Cardona-Ospina, Jaime A; Aboshady, Omar A; Correa, Ricardo
2017-01-01
Background: Massive open online courses (MOOCs) have undergone exponential growth over the past few years, offering free, worldwide access to high-quality education. We identified the characteristics of MOOCs in the health sciences offered by Latin American institutions (LAIs). Methods: We screened the eight leading MOOC platforms to gather their lists of offerings. The MOOCs were classified by region and subject. We then obtained the following information: the Scopus H-index of each institution and course instructor, the QS World University Rankings® 2015/16 position of each LAI, and the official language of each course. Results: Our search identified 4170 MOOCs worldwide. Of these, 205 were offered by LAIs, and six were related to the health sciences. Most of the LAI courses (n = 115) were offered through Coursera. One health science MOOC was taught by three instructors, of whom only one was registered in Scopus (H-index = 0). The remaining five health science MOOCs each had a single instructor (H-index = 4 [0-17]). The Latin American country with the highest participation was Brazil (n = 11). Conclusion: The contribution of LAIs to MOOCs in the health sciences is low.
Aristizábal, Luis F; Bustillo, Alex E; Arthurs, Steven P
2016-02-03
The coffee berry borer (CBB), Hypothenemus hampei Ferrari (Coleoptera: Curculionidae: Scolytinae), is the primary arthropod pest of coffee plantations worldwide. Since its detection in Hawaii (September 2010), coffee growers have faced financial losses due to the reduced quality of coffee yields. Control strategies that include cultural practices, biological control agents (parasitoids), chemical and microbial insecticides (entomopathogenic fungi), and a range of post-harvest sanitation practices have been used to manage CBB around the world. In addition, sampling methods, including the use of alcohol-based traps for monitoring CBB populations, have been implemented in some coffee-producing countries in Latin America. It is currently unclear which combination of CBB control strategies is optimal under the economic, environmental, and sociocultural conditions of Hawaii. This review discusses the components of an integrated pest management (IPM) program for CBB. We focus on practical approaches that provide guidance to coffee farmers in Hawaii. Experience with IPM of CBB gained in Latin America over the past 25 years may be relevant for establishing control strategies suited to the conditions of Hawaiian coffee farmers.
Long-term abatement potential and current policy trajectories in Latin American countries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarke, Leon; McFarland, James; Octaviano, Claudia
2016-05-01
This paper provides perspectives on the role of Latin America and Latin American countries in meeting global abatement goals, based on the scenarios developed through the CLIMACAP-LAMP modeling study.
Educational Building in Latin America.
ERIC Educational Resources Information Center
Baza, Jadille; Vaz, Rita de Cassia Alves; Millan, Eduardo; Almeida, Rodolfo
2002-01-01
Presents articles describing recent developments in three Latin American countries (Chile, Brazil, and Venezuela) to expand public education facilities, along with a report on UNESCO's recent seminar in Latin America on architecture for an inclusive education. (EV)
Gene therapy coming of age in Latin America.
Podhajcer, Osvaldo; Pitossi, Fernando; Aguilar-Cordova, Estuardo
2002-08-01
"Gene Therapy in Latin America: From the Bench to the Clinic," a meeting sponsored by the Wellcome Trust and the United Nations University through the Biotechnology Program for Latin America and the Caribbean, took place in Buenos Aires, Argentina from May 20 to 22. This symposium, which was hosted by Osvaldo Podhajcer and Fernando Pitossi,had more than 150 basic scientists and physician-scientists from academia, government and industry in Latin America, similar to the first meeting of the Asociacion Iberoamericana de Terapia Génica (Iberoamerican Society of Gene Therapy, AITG) held in Guadalajara, México, two years ago. Participants represented Argentina, Mexico, Brazil, Chile, Uruguay, Costa Rica, Colombia, Venezuela, and Guatemala, with guests from the United States and Europe. All came together to discuss the latest developments in this field in the region. A primary objective of this gathering was to bring together Latin American scientists involved in gene therapy to strengthen continental collaborations and to further disseminate the scientific expertise available in Latin America. The symposium was followed by a 10-day practical course for 25 students from all over Latin America.
Radiology practice in Latin America: a literature review.
Teague, Jordan
2013-01-01
To discover the status and structure of radiology in Latin America with respect to the health care systems of which it is part, the effects of socioeconomics, the equipment and technology used, technologists and their training, accreditation, and professional organizations. Health-related databases and Google Scholar were searched for articles concerning radiology practice in Latin America, and articles were selected based on relevance to the research scope. Many regions in Latin America offer little to no access to radiology. Where there is access, the equipment is often old or not functioning, with limited and costly service and maintenance. Most trained technologists live in urban areas. There are no standardized accreditation practices in Latin America; however, forming professional organizations would help promote the practice of radiology and accreditation standards. International cooperative organizations enhance radiology by providing resources and opportunities for cooperation between countries. Determining the current status of radiology in Latin America will help identify opportunities for cooperation and ways to improve radiology practice. The main need in Latin America is to extend coverage to the underserved population.
1991-12-01
...abstract data type is, what an object-oriented design is, and how to apply "software engineering" principles to the design of both of them. ...Program (ASVP), a research and development effort by two aerospace contractors to redesign and implement subsets of two existing flight simulators in... effort addresses how to implement a simulator designed using the SEI OOD Paradigm on a distributed, parallel, multiple instruction, multiple data (MIMD)...
A Portable Parallel Implementation of the U.S. Navy Layered Ocean Model
1995-01-01
[Fragmentary slide text: A. J. Wallcraft, PhD (Planning Systems Inc.) and D. R. Moore, PhD (IC Dept. of Mathematics); presentation at the 1° Encontro de Métodos Numéricos para Equações de Derivadas Parciais; target platforms listed include DEC Alpha, Cray T3D/E, SUN Sparc, Fujitsu AP1000, Intel 860 Paragon, Kendall Square, and hypercube machines.]
On k-ary n-cubes: Theory and applications
NASA Technical Reports Server (NTRS)
Mao, Weizhen; Nicol, David M.
1994-01-01
Many parallel processing networks can be viewed as graphs called k-ary n-cubes, whose special cases include rings, hypercubes, and tori. In this paper, combinatorial properties of k-ary n-cubes are explored. In particular, the problem of characterizing, for a given number of nodes, the subgraph with the maximum edge count is studied. These theoretical results are then used to compute a lower bounding function in branch-and-bound partitioning algorithms and to establish the optimality of some irregular partitions.
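A small sketch makes the definition concrete: the nodes of a k-ary n-cube are the vectors in {0, ..., k-1}^n, with an edge between nodes that differ by ±1 (mod k) in exactly one coordinate, so rings (n = 1), hypercubes (k = 2), and tori arise as special cases. The Python below is an independent illustration, not the paper's algorithm; it builds the edge set and checks the edge count n·k^n, which holds for k ≥ 3 (each node has degree 2n).

    from itertools import product

    def kary_ncube_edges(k: int, n: int):
        """Edge set of the k-ary n-cube on k**n nodes."""
        edges = set()
        for node in product(range(k), repeat=n):
            for i in range(n):
                for step in (1, -1):
                    nbr = list(node)
                    nbr[i] = (nbr[i] + step) % k      # wrap-around neighbor
                    edges.add(frozenset([node, tuple(nbr)]))
        return edges

    edges = kary_ncube_edges(k=4, n=3)
    print(len(edges), 3 * 4 ** 3)   # both 192: |E| = n * k**n for k >= 3

For k = 2 the +1 and -1 steps reach the same neighbor, so the set deduplicates to the familiar hypercube edge count n·2^(n-1).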
Synchronizability of random rectangular graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estrada, Ernesto, E-mail: ernesto.estrada@strath.ac.uk; Chen, Guanrong
2015-08-15
Random rectangular graphs (RRGs) generalize random geometric graphs by embedding the nodes in hyperrectangles instead of hypercubes. The synchronizability of the RRG model is studied. Both upper and lower bounds on the eigenratio of the network Laplacian matrix are determined analytically. It is proven that as the rectangle becomes more elongated, the network becomes harder to synchronize. The synchronization behavior of an RRG network of chaotic Lorenz system nodes is investigated numerically, showing complete consistency with the theoretical results.
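The eigenratio trend can be probed numerically in a few lines. The sketch below is an independent illustration, not the authors' code: it samples nodes uniformly in a unit-area a x (1/a) rectangle, connects pairs within radius r, and computes the Laplacian eigenratio lambda_N / lambda_2, where larger values indicate networks that are harder to synchronize. All parameter values (n, a, r) are illustrative.

    import numpy as np

    def rrg_eigenratio(n=300, a=2.0, r=0.2, seed=0):
        """Laplacian eigenratio of a random rectangular graph (2-D case)."""
        rng = np.random.default_rng(seed)
        pts = rng.uniform(size=(n, 2)) * np.array([a, 1.0 / a])  # unit-area rectangle
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        A = ((d < r) & (d > 0)).astype(float)                    # adjacency matrix
        L = np.diag(A.sum(axis=1)) - A                           # graph Laplacian
        lam = np.linalg.eigvalsh(L)                              # ascending eigenvalues
        if lam[1] < 1e-10:                                       # disconnected graph
            return np.inf
        return lam[-1] / lam[1]

    # Elongating the rectangle (larger a) should increase the eigenratio,
    # i.e. make synchronization harder, consistent with the abstract.
    for a in (1.0, 2.0, 3.0):
        print(a, rrg_eigenratio(a=a))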